[R] Workshop on Statistics in Functional Genomics 2004

2003-11-27 Thread Christina Kuenzli
Apologies in advance if you receive multiple copies of this email.

This is to announce and invite your participation in a workshop on
Statistics in Functional Genomics, to be held from 27 June - 2 July
2004 at Ascona in the Italian-speaking part of Switzerland.
The purpose of the workshop is to bring together participants
from statistics, computational sciences, bioinformatics and
biology, and to encourage interaction among them.

Confirmed invited speakers include: Philip Brown (Kent),
Sandrine Dudoit (Berkeley), Robert Gentleman (Harvard),
Othmar Pfannes (GeneData), Sylvia Richardson (Imperial College),
Terry Speed (Berkeley), Martin Vingron (Max Planck Institute),
Anja Wille (ETH Zurich).  Other acceptances are pending.
Contributed presentations will also be welcome.

More details and pre-registration instructions are available at
http://www.stat.math.ethz.ch/talks/Ascona_04

Peter Bühlmann
Anthony Davison
Darlene Goldstein



Eidgenoessische Technische Hochschule Zuerich
Swiss Federal Institute of Technology  Zurich

Christina Kuenzli            [EMAIL PROTECTED]
Seminar fuer Statistik
Leonhardstr. 27, LEO D11     phone: +41 1 632 3438
ETH Zentrum,                 fax:   +41 1 632 1228
CH-8092 Zurich, Switzerland  http://stat.ethz.ch/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] would like to know how to simulated a GARCH(1,2)

2003-11-27 Thread M. M. Palhoto N. Rodrigues
Following the example in tseries, we can simulate a GARCH(0,2):
n <- 1100
a <- c(0.1, 0.5, 0.2)  # ARCH(2) coefficients
e <- rnorm(n)
x <- double(n)
x[1:2] <- rnorm(2, sd = sqrt(a[1]/(1.0-a[2]-a[3])))
for(i in 3:n)  # Generate ARCH(2) process
{
  x[i] <- e[i]*sqrt(a[1]+a[2]*x[i-1]^2+a[3]*x[i-2]^2)
}
x <- ts(x[101:1100])
and x is a GARCH(0,2).
But I would like to know how to simulate a GARCH(1,2)?




[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] multiple peaks in data frame

2003-11-27 Thread Rieckermann Joerg
Hello!

why not try the R site search for peaks:
http://finzi.psych.upenn.edu/search.html


which gives you (among other things):
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/7593.html
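
For the specific example in the question below, a minimal sketch (assuming the
values sit in a numeric vector x and each interval is 2 consecutive rows) could be:

x <- c(23, 4, 56, 7, 99, 33)                        # toy data from the question
blocks <- rep(seq_along(x), each = 2, length.out = length(x))   # 1 1 2 2 3 3
tapply(x, blocks, max)                              # peak of each interval: 23 56 99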

Hope this helps,
J.


Joerg Rieckermann
Environmental Engineering
Swiss Federal Institute for Environmental Science and Technology (EAWAG)



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Mittwoch, 26. November 2003 23:09
 To: [EMAIL PROTECTED]
 Subject: [R] multiple peaks in data frame
 
 
 Hello, I wanted to know how I can extract from a data frame the peak
 values according to an interval that I establish.  For example, if the
 data are:
 1 23
 2 4
 3 56
 4 7
 5 99
 6 33
 I wanted to divide the data into intervals of 2 and take only the
 numbers 23, 56 and 99 from those 3 intervals.  Thanks
 Ruben
 
 __
 [EMAIL PROTECTED] mailing list 
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] would like to know how to simulated a GARCH(1,2)

2003-11-27 Thread Patrick Burns
Prelude for those not in the know:

GARCH models the variance of a time series
conditional on past information (often only the
series itself).  It is a reasonably good model of
the variance of the returns of market-priced
assets, which display big jumps upwards in
variance followed by gradual decays.
Rob Engle is just about to receive the Nobel Prize
in Economics for originating the model (without
the G for generalized).
The Answer:

You need to create a vector of conditional variances,
traditionally called h.  So at the start you will have an
extra line:
h <- double(n)

in the for loop you will have:

h[i] <- a[1] + a[2]*x[i-1]^2 + a[3]*x[i-2]^2 + b[1]*h[i-1]
x[i] <- e[i] * sqrt(h[i])
This leaves just one (I think) detail:

What is the initial value of h? This will depend on what you
are doing.  If you are simulating into the future, then you
want to use the (conditional) variance for the present.
Other choices can be the observed unconditional variance
and a random selection from the estimated conditional
variances that are observed.
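Putting the pieces together, a minimal sketch of the full GARCH(1,2) simulation
(untested here; the coefficient values are only illustrative, and the recursion is
started at the unconditional variance, one of the choices just mentioned) might be:

n <- 1100
a <- c(0.1, 0.2, 0.3)   # omega, alpha1, alpha2 (ARCH part)
b <- 0.3                # beta1 (GARCH part); need a[2]+a[3]+b < 1 for stationarity
e <- rnorm(n)
x <- double(n)
h <- double(n)
h[1:2] <- a[1]/(1 - a[2] - a[3] - b)    # start at the unconditional variance
x[1:2] <- rnorm(2, sd = sqrt(h[1:2]))
for(i in 3:n) {
  h[i] <- a[1] + a[2]*x[i-1]^2 + a[3]*x[i-2]^2 + b*h[i-1]
  x[i] <- e[i]*sqrt(h[i])
}
x <- ts(x[101:1100])    # drop the burn-in, as in the ARCH(2) example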
Patrick Burns

Burns Statistics
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and A Guide for the Unwilling S User)
M. M. Palhoto N. Rodrigues wrote:

Following the example in tseries, we can simulate a GARCH(0,2):
n <- 1100
a <- c(0.1, 0.5, 0.2)  # ARCH(2) coefficients
e <- rnorm(n)
x <- double(n)
x[1:2] <- rnorm(2, sd = sqrt(a[1]/(1.0-a[2]-a[3])))
for(i in 3:n)  # Generate ARCH(2) process
{
  x[i] <- e[i]*sqrt(a[1]+a[2]*x[i-1]^2+a[3]*x[i-2]^2)
}
x <- ts(x[101:1100])
and x is a GARCH(0,2).
But I would like to know how to simulate a GARCH(1,2)?



	[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] FDA and ICH Compliance of R

2003-11-27 Thread Spencer Graves
 Have you tried www.r-project.org -> search -> R site search?
This issue has been discussed in the past.  With luck, you may find
something there that might help.

 hope this helps.  spencer graves

Antonia Drugica wrote:

I'm quite new to this medical stuff. But my associates told me that we are
not free in our choice of statistical software because the FDA has high
standards concerning this topic. But if they would prefer a specific
package (like SAS), that could mean that this package's vendor could lie
back and hold its hand open for licence money.
Is there any part of the ICH document referring to software packages? I
really would like to use R for some tasks, but for that I need arguments...


 

Antonia Drugica [EMAIL PROTECTED] writes:

 Does anybody know if R is FDA or ICH (or EMEA...) compliant? AFAIK
 S-Plus is but that means nothing...

As Thomas pointed out, that does mean nothing -- there was a group of
folks discussing what might be done to help, earlier this year, but
then everyone got busy...
best,
-tony
--
[EMAIL PROTECTED]http://www.analytics.washington.edu/ 
Biomedical and Health Informatics   University of Washington
Biostatistics, SCHARP/HVTN  Fred Hutchinson Cancer Research Center
UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable
FHCRC  (M/W): 206-667-7025 FAX=206-667-4812 | use Email

CONFIDENTIALITY NOTICE: This e-mail message and any attachme...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
   

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] OT: apt-get and R in Debian

2003-11-27 Thread Stefano Calza
Hi everybody,

Sorry for the off-topic, but I always have a problem using apt-get to
update my Debian system and R.
Every time, it updates the R packages (right now I have installed a
self-compiled version of 1.8.1), even if they are exactly the same.
Can anybody help me?

TIA,
Stefano

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: RE: [R] Correlation test in time series

2003-11-27 Thread Adrian Trapletti


Thanks for your help,

And how to test covariance = zero in a time series,
cov(r_t, r_{t-1}) = 0,
when the r_t are homoscedastic and dependent?

How about:

?acf
?pacf
in package 'ts'
 

Box.test from package 'ts'
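
A minimal usage sketch (placeholder data, illustrative lag) could be:

library(ts)                      # acf, pacf and Box.test live in package 'ts' in R 1.8.x
r <- rnorm(200)                  # placeholder return series
acf(r)                           # sample autocorrelations with confidence bands
Box.test(r, lag = 10, type = "Ljung-Box")   # portmanteau test of zero autocorrelation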

best
Adrian
--
Dr. Adrian Trapletti
Trapletti Statistical Computing
Wildsbergstrasse 31, 8610 Uster
Switzerland
Phone  Fax : +41 (0) 1 994 5631
Mobile : +41 (0) 76 370 5631
Email : mailto:[EMAIL PROTECTED]
WWW : http://trapletti.homelinux.com
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Blocked Mail Notification

2003-11-27 Thread Peter Dalgaard
[EMAIL PROTECTED] writes:

 * eManager Notification **
 
 Recipient, Content filter has detected a sensitive e-mail.
 
 Source mailbox: [EMAIL PROTECTED]
 Destination mailbox(es): [EMAIL PROTECTED]
 
 *** End of message ***
 
 Received: from 129.73.8.34 by postoffice.scr.siemens.com (InterScan E-Mail VirusWall 
 NT); Thu, 27 Nov 2003 06:12:27 -0500
 Received: from idmz1.scr.siemens.com ([129.73.8.9])
   by scr.siemens.com (8.11.7/8.11.7) with ESMTP id hARBCMg06444
   for [EMAIL PROTECTED]; Thu, 27 Nov 2003 06:12:22 -0500 (EST)
 X-SCR-Return-Path:  [EMAIL PROTECTED]   (as seen by idmz1.scr.siemens.com) 
 Received: from hypatia.math.ethz.ch (hypatia.ethz.ch [129.132.58.23])
   by idmz1.scr.siemens.com (8.12.10/8.12.10) with ESMTP id hARBCTmN022366
   for [EMAIL PROTECTED]; Thu, 27 Nov 2003 06:12:30 -0500 (EST)
 Received: from hypatia.math.ethz.ch (hypatia [129.132.58.23])
   by hypatia.math.ethz.ch (8.12.10/8.12.10) with ESMTP id hARB53Cw016647;
   Thu, 27 Nov 2003 12:09:26 +0100 (MET)
 Date: Thu, 27 Nov 2003 12:09:26 +0100 (MET)
 Message-Id: [EMAIL PROTECTED]
 From: [EMAIL PROTECTED]
 Subject: R-help Digest, Vol 9, Issue 27
 To: [EMAIL PROTECTED]
 Reply-To: [EMAIL PROTECTED]
 MIME-Version: 1.0

[ and proceeds to spew the entire digest message into r-help!]

Will someone please tell these guys how to configure their MTA
correctly (errors should *never* go to the Reply-To field), and/or
unsubscribe mr. [EMAIL PROTECTED]

Grrr

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] lagsarlm - using mixed explanatory variables (spdep package)

2003-11-27 Thread Roy Sanderson
Hello

I'm very new to R (which is excellent), so apologies if this has already
been raised.  In the spdep package, I'm trying to undertake an
autoregressive mixed model using the lagsarlm function.  This is working
fine, but there does not appear to be a method of including an explanatory
variable without it automatically being included as a lagged term.  I'm
after something along the lines of

y = rho.W.y + x1 + x2 + lag(x2)

but am only able to output

y = rho.W.y + x1 + x2 + lag(x1) + lag(x2)

Is there any way around this issue?

Many thanks
Roy


Roy Sanderson
Centre for Life Sciences Modelling
Porter Building
University of Newcastle
Newcastle upon Tyne
NE1 7RU
United Kingdom

Tel: +44 191 222 7789

[EMAIL PROTECTED]
http://www.ncl.ac.uk/clsm

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] would like to know how to simulated a GARCH(1,2)

2003-11-27 Thread Adrian Trapletti


Following the example in tseries, we can simulate a GARCH(0,2):
n <- 1100
a <- c(0.1, 0.5, 0.2)  # ARCH(2) coefficients
e <- rnorm(n)
x <- double(n)
x[1:2] <- rnorm(2, sd = sqrt(a[1]/(1.0-a[2]-a[3])))
for(i in 3:n)  # Generate ARCH(2) process
{
  x[i] <- e[i]*sqrt(a[1]+a[2]*x[i-1]^2+a[3]*x[i-2]^2)
}
x <- ts(x[101:1100])
and x is a GARCH(0,2).
But I would like to know how to simulate a GARCH(1,2)?

 

GARCH(1,1) something like

n <- 1100
a <- c(0.1, 0.2, 0.7)
e <- rnorm(n)
x <- double(n)
v <- double(n)

v[1] <- a[1]/(1.0-a[2]-a[3])
x[1] <- rnorm(1, sd = sqrt(v[1]))
for(i in 2:n) {
   v[i] <- a[1]+a[2]*x[i-1]^2+a[3]*v[i-1]
   x[i] <- e[i]*sqrt(v[i])
}
x <- ts(x[101:1100])
x.garch <- garch(x, order = c(1,1))
summary(x.garch)
and accordingly the GARCH(1,2)

best
Adrian
--
Dr. Adrian Trapletti
Trapletti Statistical Computing
Wildsbergstrasse 31, 8610 Uster
Switzerland
Phone  Fax : +41 (0) 1 994 5631
Mobile : +41 (0) 76 370 5631
Email : mailto:[EMAIL PROTECTED]
WWW : http://trapletti.homelinux.com
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] stl and NA

2003-11-27 Thread Unternährer Thomas, uth

Hi,

I am trying to figure out what the stl function exactly does.
I was reading the paper by Cleveland et al. (1990) and tested some features of stl
(the ability to decompose time series with missing values and the robustness feature).

I tried the following:
> data(co2)
> co2.na <- co2
> is.na(co2.na[c(50, 100)]) <- TRUE
> plot(stl(co2.na, s.window = 12, na.action = na.exclude))

With the error message:
Error in stl(co2.na, s.window = 12, na.action = na.exclude) : 
series is not periodic or has less than two periods

The following works fine:
> plot(stl(co2, s.window = 12))

I then had a short look at the code of stl.  Is it true that the argument na.action
must be a generic function such as na.fail?

The help of stl:
na.action   action on missing values.   (Mmmh, not really helpful)

The other functions na.omit and na.pass do not do what I was expecting?!
> plot(stl(co2.na, s.window = 12, na.action = na.omit))
Error in na.omit.ts(as.ts(x)) : time series contains internal NAs


Is this feature correctly implemented?
I did not find a bug report at http://r-bugs.biostat.ku.dk/cgi-bin/R... so I assume that
I am missing something.


How can I handle NAs in stl() correctly?


Thanks for any hints and comments

Thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] OT: apt-get and R in Debian

2003-11-27 Thread Dirk Eddelbuettel
On Thu, Nov 27, 2003 at 01:11:48PM +0100, Stefano Calza wrote:
 Sorry for the off-topic, but I always have a problem using apt-get to
 update my Debian system and R.
 Every time, it updates the R packages (right now I have installed a
 self-compiled version of 1.8.1), even if they are exactly the same.
 Can anybody help me?

We probably need more info to help you.  

I presume you have CRAN and Debian testing in /etc/apt/sources.list?  Did
you try to use the apt configuration to give preference to one archive over
another, or exclude one, or ...?  Some of the available apt documents,
e.g. from the apt-howto package, should help. Also, 'apt-cache policy
r-base-core' will tell you how apt sees and ranks the archives you have set up.

The easiest will probably be to simply exclude, say, CRAN.  

Dirk

-- 
Those are my principles, and if you don't like them... well, I have others.
-- Groucho Marx

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] FDA and ICH Compliance of R

2003-11-27 Thread Thomas Lumley
On Thu, 27 Nov 2003, Antonia Drugica wrote:

 I'm quite new to this medical stuff. But my associates told me that we are
 not free in our choice of statistical software because the FDA has high
 standards concerning this topic. But if they would prefer a specific
 package (like SAS), that could mean that this package's vendor could lie
 back and hold its hand open for licence money.

 Is there any part of the ICH document referring to software packages? I
 really would like to use R for some tasks, but for that I need arguments...

As far as I can see, ICH has not said anything useful about software (the
most relevant things are about data management).

-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] FDA and ICH Compliance of R

2003-11-27 Thread Frank E Harrell Jr
On Thu, 27 Nov 2003 07:48:02 +0100
Antonia Drugica [EMAIL PROTECTED] wrote:

 I'm quite new to this medical stuff. But my associates told me that we
 are not free in our choice of statistical software because the FDA has high
 standards concerning this topic. But if they would prefer a specific
 package (like SAS), that could mean that this package's vendor could
 lie back and hold its hand open for licence money.

Your associates are completely wrong.  It is only sponsors that choose not
to be free in their choice, due in my humble opinion mainly to the fact
that SAS has been in use since 1966 and that no one has ever been
criticized by the FDA for using SAS.  FDA even receives submissions based
on Excel and we all know about the accuracy of Excel's statistical
calculations.  High standards need to be held by statisticians doing the
analyses.  Related to such standards open source systems such as R have
many advantages, and the reproducible reporting capabilities of R using
its Sweave package have major impacts on accuracy of reporting.

I along with colleagues at another institution are working on an open
source R package for clinical trial analysis and reporting that should be
mature in about a year.  I am currently using the package in two
pharmaceutical industry-sponsored randomized clinical trials to report to
data monitoring committees.  I'm also working on a document addressing
validation of statistical calculations.  Let me know if you'd like a copy
of the current version of that document.

 
 Is there any part of the ICH document referring to software packages? I
 really would like to use R for some tasks, but for that I need arguments...

Don't know of anything in ICH.

In view of the fact that large pharma companies have to pay more than $10M
per year in SAS licenses and have to hire armies of non-intellectually
challenged SAS programmers to do the work of significantly fewer
programmers that use modern statistical computing tools like R and S-Plus,
it is surprising that SAS is still the most commonly used tool in the
clinical side of drug development.  I quit using SAS in 1991 because my
productivity jumped at least 20% within one month of using S-Plus.
---
Frank E Harrell JrProfessor and ChairSchool of Medicine
  Department of BiostatisticsVanderbilt University

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] help me!

2003-11-27 Thread Patricia
Would you help me to answer this question, please?
 

 

/r/ is one of the most difficult English sounds to acquire and imitate. Describe in 
what ways it is different from our Spanish rolls (perro) and taps (pero).  How does 
the pronunciation of this sound vary according to the context? Find at least three 
examples of linking r and intrusive r and transcribe into phonetics. When do we use 
them? Why are they used? 

I'll be waiting for your answer.
Thanks
Patricia
[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] OT: apt-get and R in Debian

2003-11-27 Thread Stefano Calza
 Can you give us more details, please?  Which version of Debian
 (stable, testing or unstable) are you running and what are the
 relevant parts of your /etc/apt/sources.list file?  (Just send us the
 whole file if you are not sure what parts are relevant.)

OK. Find attached my sources.list + the output from apt-cache show
r-base-core + apt-cache policy r-base-core.
I use Debian/testing version + some unstable package.

 
 To others on the cc: list:  May I suggest that we ask Martin to create
 an email list with a name like r-debian for Debian-specific questions
 about R installation?

Yes, it would be a great idea!

Thanks,
Ste

 
 -- 
 Douglas Bates[EMAIL PROTECTED]
 Statistics Department608/262-2598
 University of Wisconsin - Madisonhttp://www.stat.wisc.edu/~bates/

-- 
Stefano Calza,
Sezione di Statistica Medica
Dip. di Scienze Biomediche e Biotecnologie
Università degli Studi di Brescia - Italy
Viale Europa, 11 25123 Brescia
email: [EMAIL PROTECTED]
Telefono/Phone: +390303717532
Fax: +390303701157
deb http://ftp.it.debian.org/debian/ stable main non-free contrib
deb-src http://ftp.it.debian.org/debian/ stable main non-free contrib
deb http://non-us.debian.org/debian-non-US stable/non-US main contrib non-free
deb-src http://non-us.debian.org/debian-non-US stable/non-US main contrib non-free

deb http://ftp.it.debian.org/debian/ testing main non-free contrib
deb http://ftp.it.debian.org/debian-non-US testing/non-US main contrib non-free
deb-src http://ftp.it.debian.org/debian testing main contrib non-free
deb-src http://ftp.it.debian.org/debian-non-US testing/non-US main contrib non-free

deb http://ftp.it.debian.org/debian/ unstable main non-free contrib
deb http://ftp.it.debian.org/debian-non-US unstable/non-US main contrib non-free
deb-src http://ftp.it.debian.org/debian unstable main contrib non-free
deb-src http://ftp.it.debian.org/debian-non-US unstable/non-US main contrib non-free

# R

#deb http://cran.at.r-project.org/bin/linux/debian stable main
#deb http://cran.at.r-project.org/bin/linux/debian testing main
#deb http://cran.at.r-project.org/bin/linux/debian unstable main

deb http://security.debian.org/ stable/updates main contrib non-free


## Java Environment

#deb http://mirrors.publicshout.org/java-linux/debian testing main non-free
#deb http://mirrors.publicshout.org/java-linux/debian unstable main non-free

## Bioconductor

deb http://lab.analytics.washington.edu/debian-local ./
Package: r-base-core
Priority: optional
Section: math
Installed-Size: 23916
Maintainer: Dirk Eddelbuettel [EMAIL PROTECTED]
Architecture: i386
Source: r-base
Version: 1.8.0.cvs.20031114-1
Replaces: r-base (<= 1.4.1-1)
Depends: perl, zlib-bin, libbz2-1.0, libc6 (>= 2.3.2.ds1-4), libg2c0 (>= 1:3.3.2-1), 
libgcc1 (>= 1:3.3.2-1), libjpeg62, libncurses5 (>= 5.3.20030510-1), libpcre3 (>= 4.0), 
libpng10-0 (>= 1.0.15-4), libreadline4 (>= 4.3-1), tcl8.4 (>= 8.4.2), tk8.4 (>= 
8.4.2), xlibs (>> 4.1.0), zlib1g (>= 1:1.1.4)
Recommends: r-recommended, r-base-dev
Suggests: libpaperg, ess, r-doc-info | r-doc-pdf | r-doc-html
Filename: pool/main/r/r-base/r-base-core_1.8.0.cvs.20031114-1_i386.deb
Size: 5939210
MD5sum: 8cc59556c1385f4ec50b84f69380c691
Description: GNU R core of statistical computing language and environment
 R is `GNU S' - A language and environment for statistical computing
 and graphics. R is similar to the award-winning S system, which was
 developed at Bell Laboratories by John Chambers et al. It provides a
 wide variety of statistical and graphical techniques (linear and
 nonlinear modelling, statistical tests, time series analysis,
 classification, clustering, ...).
 .
 R is designed as a true computer language with control-flow
 constructions for iteration and alternation, and it allows users to
 add additional functionality by defining new functions. For
 computationally intensive tasks, C, C++ and Fortran code can be
 linked and called at run time.
 .
 S is the statistician's Matlab and R is to S what Octave is to Matlab.
 .
 This package provides the core GNU R system from which only the optional
 documentation packages r-base-html, r-base-latex, r-doc-html, r-doc-pdf
 and r-doc-info have been split off to somewhat reduce the size of this
 package.

Package: r-base-core
Status: install ok installed
Priority: optional
Section: math
Installed-Size: 23912
Maintainer: Dirk Eddelbuettel [EMAIL PROTECTED]
Source: r-base
Version: 1.8.0.cvs.20031114-1
Replaces: r-base (<= 1.4.1-1)
Depends: perl, zlib-bin, libbz2-1.0, libc6 (>= 2.3.2.ds1-4), libg2c0 (>= 1:3.3.2-1), 
libgcc1 (>= 1:3.3.2-1), libjpeg62, libncurses5 (>= 5.3.20030510-1), libpcre3 (>= 4.0), 
libpng12-0 (>= 1.2.5.0-4), libreadline4 (>= 4.3-1), tcl8.4 (>= 8.4.2), tk8.4 (>= 
8.4.2), xlibs (>> 4.1.0), zlib1g (>= 1:1.1.4)
Recommends: r-recommended, r-base-dev
Suggests: libpaperg, ess, r-doc-info | r-doc-pdf | r-doc-html
Conffiles:
 /etc/R/Makeconf c819eaeb69d4f1514795e3a9eb7e2c9c
 /etc/R/Renviron 

Re: [R] FDA and ICH Compliance of R

2003-11-27 Thread Gabor Grothendieck

From: Frank E Harrell Jr [EMAIL PROTECTED]
 per year in SAS licenses and have to hire armies of non-intellectually
 challenged SAS programmers to do the work of significantly fewer
 programmers that use modern statistical computing tools like R and S-Plus,
 it is surprising that SAS is still the most commonly used tool in the
 clinical side of drug development. I quit using SAS in 1991 because my
 productivity jumped at least 20% within one month of using S-Plus.

I have not used SAS for even longer than you but to
give SAS its due:

- it's pretty easy to produce all the info you need for a
  complete analysis with a few SAS commands.  It would be
  possible to create analogous R commands but as it stands
  you have to keep going back and forth with R rather than
  just get it all out at once like you can with SAS.

- SAS has more functionality in missing values.  You
  can have different types of SAS missing values but in R you
  can have only one type of missing value.

- the BY phrase in SAS is incredibly powerful and handy.  You
  can get the same effect in R (a rough analogue is sketched
  below) but I think that specific functionality is easier with SAS.
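
For instance, a rough R analogue of a SAS BY-group analysis (a sketch with
made-up data) could use by() or tapply():

d <- data.frame(g = rep(c("a", "b"), each = 10), y = rnorm(20))   # toy data
by(d, d$g, function(sub) summary(sub$y))    # summarise y separately for each group
tapply(d$y, d$g, mean)                      # or just the per-group means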

Obviously R is incredibly powerful and functional and I really
am out of touch with the SAS world, but I thought I would make
whatever case I could.  I am willing to be corrected by those
more in the know with SAS if this is wrong.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] problems with R graph.

2003-11-27 Thread aurelie . defferrard





Hello,

I have some problems generating graphs with R...

I am working on two different platforms:

- Compaq Alpha Server (Tru64 UNIX 5.1) + R 1.6
- Sparc Server (Sun Solaris 8) + R 1.6

I use different functions like the bitmap function, the legend function and the
barplot function.
The graphs are made by the same script on both platforms.

I obtain a nice graph on the Compaq server but the graph which is generated on the
Sun server has some problems...
I don't understand why the result is not the same because we use the same
library on each platform.
Please look at these images to see what I mean.

(See attached file: histogramme_bad.bmp)(See attached file:
histogramme_good.bmp)

thank you for your help.

Aurelie.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] problems with R graph.

2003-11-27 Thread Prof Brian Ripley
If indeed you used the bitmap() function, please read its help page.
The differences are very likely due to the installations of ghostscript
and nothing to do with R.

There never was an `R 1.6', but the current version of R is 1.8.1, so it 
looks as if an update is well overdue.

On Thu, 27 Nov 2003 [EMAIL PROTECTED] wrote:

 I have some problems generating graphs with R...
 
 I am working on two different platforms:
 
 - Compaq Alpha Server (Tru64 UNIX 5.1) + R 1.6
 - Sparc Server (Sun Solaris 8) + R 1.6
 
 I use different functions like the bitmap function, the legend function and the
 barplot function.
 The graphs are made by the same script on both platforms.
 
 I obtain a nice graph on the Compaq server but the graph which is generated on the
 Sun server has some problems...
 I don't understand why the result is not the same because we use the same
 library on each platform.
 Please look at these images to see what I mean.
 
 (See attached file: histogramme_bad.bmp)(See attached file:
 histogramme_good.bmp)

You cannot send binary attachments to R-help, so we can't see anything 
here.  In any case, .bmp is a strange choice for use on Unix systems: png 
would be much better for your readers here (but you would have to put them 
on a web site).
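
If a png() device is available on the Sun server, a minimal sketch (placeholder
data) that writes the figure directly, without bitmap()'s ghostscript dependence,
would be:

png("histogramme.png", width = 600, height = 400)   # direct PNG device
barplot(c(3, 5, 2), names.arg = c("a", "b", "c"))    # placeholder data
dev.off()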

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] significance in difference of proportions

2003-11-27 Thread Arne.Muller
Hello,

I'm looking for some guidance with the following problem:

I've 2 samples A (111 items) and B (10 items) drawn from the same unknown
population. Within A I find 9 positives and in B 0 positives. I'd like to
know if the 2 samples A and B are different, i.e. is there a way to find out
whether the number of positives is significantly different in A and B?

I'm currently using prop.test, but unfortunately some of my data contains
less than 5 items in a group (like in the example above), and the test
statistics may not hold:

> prop.test(c(9,0), c(111,10))

2-sample test for equality of proportions with continuity correction

data:  c(9, 0) out of c(111, 10) 
X-squared = 0.0941, df = 1, p-value = 0.759
alternative hypothesis: two.sided 
95 percent confidence interval:
 -0.02420252  0.18636468 
sample estimates:
prop 1 prop 2 
0.08108108 0.00000000 

Warning message: 
Chi-squared approximation may be incorrect in: prop.test(c(9, 0), c(111, 10))


Do you have suggestions for an alternative test?

many thanks for your help,
+kind regards,

Arne

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] R 1.8.1 on SUSE 9.0

2003-11-27 Thread J.R. Lockwood
 
 I think the problem is that by default g77 is not installed.  However you
 should still be able to find the rpm on the CDROM.
 
 HTH,
 Andy

Thanks to all for your replies.  Indeed the package gcc-g77 was on the
install disk, and I was able to install the R rpm with no problems.


J.R. Lockwood
412-683-2300 x4941
[EMAIL PROTECTED]
http://www.rand.org/methodology/stat/members/lockwood/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] lme v. aov?

2003-11-27 Thread John Christie
I am trying to understand better an analysis of mean RT in various 
conditions in a within-subjects design with the overall mean RT per 
subject as one of the factors.  lme seems to be the right way to do 
this, using something like m <- lme(rt ~ a*b*subjectRT, random = 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But what is aov doing 
in this instance? MANOVA?  I also haven't been able to find anything 
really useful on what to properly assign to random in the lme 
formula.  For repeated measures the use above is always in the 
examples.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] help me!

2003-11-27 Thread Spencer Graves
     English may have greater variation between regional dialects than 
does Spanish.  The Scots roll their r's, and I suspect it is more like 
the Spanish rolls (perro) than taps (pero), though I have not had enough 
contact with the Scots to judge.  The English r may be closer to the r 
in Portuguese, French, and German than Spanish, but does not close the 
throat.  Try making an r like you do but without letting your tongue hit 
the roof of your mouth. 

 hope this helps. 
 Que esto pueda ayudarle!
  spencer graves

Patricia wrote:

Would you help me to answer this question, please?



/r/ is one of the most difficult English sounds to acquire and imitate. Describe in what ways it is different from our Spanish rolls (perro) and taps (pero).  How does the pronunciation of this sound vary according to the context? Find at least three examples of linking r and intrusive r and transcribe into phonetics. When do we use them? Why are they used? 

I'll be waiting for your answer.
Thanks
Patricia
[[alternative HTML version deleted]]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] lme v. aov?

2003-11-27 Thread Spencer Graves
 Do you want to make inference about the specific subjects in your 
study?  If yes, the subjects are a fixed effect.  If instead you want to 
make inference about the societal processes that will generate the 
subjects you will get in the future, that is a random effect.  The 
function lme handles both fixed and random effects, as does 
varcomp.  The functions aov and lm are restricted to fixed effects 
only.  You can use dummy coding for lm and aov as well. 
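
As a small illustration of that split, a toy sketch (made-up repeated-measures
data; the variable and object names here are invented) might be:

library(nlme)
# hypothetical data: 10 subjects, 2 within-subject conditions
d <- data.frame(subject = factor(rep(1:10, each = 2)),
                cond    = factor(rep(c("a", "b"), 10)),
                rt      = rnorm(20, mean = 500, sd = 50))
m <- lme(rt ~ cond, random = ~ 1 | subject, data = d)   # subject as a random effect
anova(m)
summary(aov(rt ~ cond + Error(subject), data = d))      # classical aov with an error stratum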

     The distinction between fixed and random effects seems to 
me to be the same as what Deming called the difference between 
enumerative and analytic studies:  With a fixed effect / enumerative 
study, the objective is to determine the disposition of the sampling 
frame.  For example, Deming managed a survey of food distribution in 
Japan in 1946 or so, right after World War II.  The purpose was to 
determine where to deliver food the next day, etc., to keep people from 
dying of starvation.  That was an enumerative study.  If the purpose had 
been to advance economic theories for use not only in Japan or in 
1946-47, that is an analytic study. 

     Do you have the book Pinheiro and Bates (2000) Mixed-Effects 
Models in S and S-Plus (Springer)?  If you have more than one use for 
analyzing data on human subjects, I suggest you get and study this book 
if you haven't already.  Doug Bates and several of his graduate students 
have developed lme.  I am not current in the absolute latest 
literature in that area of statistics, but Bates seems to me to be among 
the leaders in that area and specifically in statistical computing for 
that kind of problem. 

 hope this helps.  spencer graves

John Christie wrote:

I am trying to understand better an analysis of mean RT in various 
conditions in a within-subjects design with the overall mean RT per 
subject as one of the factors.  lme seems to be the right way to do 
this, using something like m <- lme(rt ~ a*b*subjectRT, random = 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But what is aov doing 
in this instance? MANOVA?  I also haven't been able to find anything 
really useful on what to properly assign to random in the lme 
formula.  For repeated measures the use above is always in the examples.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] lagsarlm - using mixed explanatory variables (spdep package)

2003-11-27 Thread Roger Bivand
On Thu, 27 Nov 2003, Roy Sanderson wrote:

 Hello

Usually it is easier to send package questions directly to the package 
maintainer, because they may not interest the whole list.

 
 I'm very new to R (which is excellent), so apologies if this has already
 been raised.  In the spdep package, I'm trying to undertake an
 autoregressive mixed model using the lagsarlm function.  This is working
 fine, but there does not appear to be a method of including an explanatory
 variable without it automatically being included as a lagged term.  I'm
 after something along the lines of
 
 y = rho.W.y + x1 + x2 + lag(x2)
 
 but am only able to output
 
 y = rho.W.y + x1 + x2 + lag(x1) + lag(x2)
 

Using the old Columbus data set in the spdep package:

> data(oldcol)
> lagsarlm(CRIME ~ INC + HOVAL, data = COL.OLD, nb2listw(COL.nb))
> lagsarlm(CRIME ~ INC + HOVAL, data = COL.OLD, nb2listw(COL.nb), 
+ type="mixed")

give the standard lag and mixed model types, but you need to use
type="lag" and include the spatially lagged x variable(s) manually:

> WINC <- lag.listw(nb2listw(COL.nb), COL.OLD$INC)
> lagsarlm(CRIME ~ INC + HOVAL + WINC, data = COL.OLD, nb2listw(COL.nb))

Hope this helps,

Roger


 Is there any way around this issue?
 
 Many thanks
 Roy
 
 
 Roy Sanderson
 Centre for Life Sciences Modelling
 Porter Building
 University of Newcastle
 Newcastle upon Tyne
 NE1 7RU
 United Kingdom
 
 Tel: +44 191 222 7789
 
 [EMAIL PROTECTED]
 http://www.ncl.ac.uk/clsm
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] significance in difference of proportions

2003-11-27 Thread Jonathan Baron
On 11/27/03 17:04, [EMAIL PROTECTED] wrote:
Hello,

I'm looking for some guidance with the following problem:

I've 2 samples A (111 items) and B (10 items) drawn from the same unknown
population. Within A I find 9 positives and in B 0 positives. I'd like to
know if the 2 samples A and B are different, i.e. is there a way to find out
whether the number of positives is significantly different in A and B?

I'm currently using prop.test, but unfortunately some of my data contains
less than 5 items in a group (like in the example above), and the test
statistics may not hold:

fisher.test in the ctest package, which loads automatically.
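
For the counts in the question, a minimal sketch could be:

# 2x2 table: rows = samples A and B, columns = positives and negatives
tab <- matrix(c(9, 102, 0, 10), nrow = 2, byrow = TRUE)
fisher.test(tab)   # exact test of equal proportions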

-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page:http://www.sas.upenn.edu/~baron

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] significance in difference of proportions

2003-11-27 Thread Torsten Hothorn

 Hello,

 I'm looking for some guidance with the following problem:

 I've 2 samples A (111 items) and B (10 items) drawn from the same unknown
 population. Within A I find 9 positives and in B 0 positives. I'd like to
 know if the 2 samples A and B are different, i.e. is there a way to find out
 whether the number of positives is significantly different in A and B?

 I'm currently using prop.test, but unfortunately some of my data contains
 less than 5 items in a group (like in the example above), and the test
 statistics may not hold:

The statistic is fine, the approximation to its null distribution may be
questionable :-)




  prop.test(c(9,0), c(111,10))

 2-sample test for equality of proportions with continuity correction

 data:  c(9, 0) out of c(111, 10)
 X-squared = 0.0941, df = 1, p-value = 0.759
 alternative hypothesis: two.sided
 95 percent confidence interval:
  -0.02420252  0.18636468
 sample estimates:
 prop 1 prop 2
 0.08108108 0.00000000

 Warning message:
 Chi-squared approximation may be incorrect in: prop.test(c(9, 0), c(111, 10))


 Do you have suggestions for an alternative test?


you may consider a permutation test for two independent samples:

R> library(exactRankTests)
R> x = c(rep(1, 9), rep(0, 102))
R> y = rep(0, 10)
R> mean(x)
[1] 0.08108108
R> mean(y)
[1] 0
R> perm.test(y, x, exact = TRUE)

2-sample Permutation Test

data:  y and x
T = 0, p-value = 0.6092
alternative hypothesis: true mu is not equal to 0

Best,

Torsten


   many thanks for your help,
   +kind regards,

   Arne

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] significance in difference of proportions: What problem are you solving?

2003-11-27 Thread Spencer Graves
Hi, Torsten: 

 Thanks for the reference to library(exactRankTests).  That seems 
like a reasonable alternative to prop.test with small samples. 

     However, aren't exact tests and the related bootstrap 
methodology what Deming called enumerative techniques, relating more 
to describing a fixed finite population than analytic techniques 
for describing more general processes that will likely generate similar 
samples in the future?  Don't exact tests and bootstraps answer 
different (enumerative) questions from those posed by standard 
(analytic) parametric procedures?  (I know that the chi-square 
distribution is only an approximation to the distribution of the 
contingency table chi-square; however, that is a different issue from 
the question of enumerative vs. analytic studies.) 

 Thanks again for this and your many other interesting 
contributions to r-help. 
 Spencer Graves

Torsten Hothorn wrote:

Hello,

I'm looking for some guidance with the following problem:

I've 2 samples A (111 items) and B (10 items) drawn from the same unknown
population. Within A I find 9 positives and in B 0 positives. I'd like to
know if the 2 samples A and B are different, i.e. is there a way to find out
whether the number of positives is significantly different in A and B?
I'm currently using prop.test, but unfortunately some of my data contains
less than 5 items in a group (like in the example above), and the test
statistics may not hold:
   

The statistic is fine, the approximation to its null distribution may be
questionable :-)


 

prop.test(c(9,0), c(111,10))
 

   2-sample test for equality of proportions with continuity correction

data:  c(9, 0) out of c(111, 10)
X-squared = 0.0941, df = 1, p-value = 0.759
alternative hypothesis: two.sided
95 percent confidence interval:
-0.02420252  0.18636468
sample estimates:
   prop 1 prop 2
0.08108108 0.00000000
Warning message:
Chi-squared approximation may be incorrect in: prop.test(c(9, 0), c(111, 10))
Do you have suggestions for an alternative test?

   

you may consider a permutation test for two independent samples:

R> library(exactRankTests)
R> x = c(rep(1, 9), rep(0, 102))
R> y = rep(0, 10)
R> mean(x)
[1] 0.08108108
R> mean(y)
[1] 0
R> perm.test(y, x, exact = TRUE)
   2-sample Permutation Test

data:  y and x
T = 0, p-value = 0.6092
alternative hypothesis: true mu is not equal to 0
Best,

Torsten

 

many thanks for your help,
+kind regards,
	Arne

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
   

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] MASS fitdistr()

2003-11-27 Thread Oscar Linares
Dear R experts,

I am trying to use the R MASS library fitdistr() to fit the following
list:

k21stsList <- c(0.76697,0.57642,0.75938,0.82616,0.93706,0.77377,0.58923,0.37157,0.60796,1.00070,0.97529,0.62858,0.63504,0.68697,0.61714,0.75227,1.16390,0.66702,0.83578)

as follows,

library(MASS)
fitdistr(k21stsList, normal)

But, I get

Error in fitdistr(k21stsList, normal) : 'start' must be a named list

What am I doing wrong (probably a lot!)

Thanks,

Oscar

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] dvips function gives Documents not found error

2003-11-27 Thread David Kelly
This is primarily an FYI to the Hmisc author, though any brilliant 
suggestions from him or anyone else are always welcome.  Re this problem 
reported earlier in this thread:

  getting an error from Hmisc when trying
  dvips(latex(describe(mtcars)), file="/kellytest/kelly.ps")
  that says
  Error in system(cmd, intern = intern, wait = wait | intern,
  show.output.on.console = wait,  :
C:/Documents not found
  which appears to be a path-parsing problem of Documents and Settings
The problem is still present. I've tried every suggestion that was sent 
to me. Most recently, I created a .Renviron which said
TMP=c:/kellytest/rtemp
and I then ran R (from within emacs/ess).

I verified that TMP had been set correctly as follows:
> tempdir()
[1] "c:/kellytest/rtemp\\Rtmp5417"
but I still got the C:/Documents not found error.

P.S. I found that a TeX file was created when the command was executed, 
but I could find no evidence of a dvi or ps file.

-- David Kelly

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] MASS fitdistr()

2003-11-27 Thread Prof Brian Ripley
On Thu, 27 Nov 2003, Oscar Linares wrote:

 Dear R experts,
 
 I am trying to use the R MASS library fitdistr() to fit the following
 list:

Well, that is not a list!

 k21stsList <- c(0.76697,0.57642,0.75938,0.82616,0.93706,0.77377,0.58923,0.37157,0.60796,1.00070,0.97529,0.62858,0.63504,0.68697,0.61714,0.75227,1.16390,0.66702,0.83578)
 
 as follows,
 
 library(MASS)
 fitdistr(k21stsList, normal)
 
 But, I get
 
 Error in fitdistr(k21stsList, normal) : 'start' must be a named list

You omitted the `start' argument, and normal is not in the list 
specified in the Details section.

 What am I doing wrong (probably alot!)

It always helps to read through the whole help page rather than guessing.

Given that the MLEs for a normal are known explicitly, why are you doing 
this?
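
For reference, those explicit maximum-likelihood estimates for a normal are simply
(a sketch using the vector from the question):

mean(k21stsList)                                  # MLE of the mean
sqrt(mean((k21stsList - mean(k21stsList))^2))     # MLE of the sd (divides by n, not n-1)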

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] lme v. aov?

2003-11-27 Thread John Christie
It's not so much that I wasn't getting the difference between fixed and 
random effects, although I do like the way you put the comment below. 
For my purposes subject is a random effect.  It was more about correct 
notation in lme with repeated-measures designs (my a and b are repeated 
while the mean subjectRT is between), and about whether the way aov 
treats repeated measures might best be called a MANOVA method.

On Nov 27, 2003, at 12:54 PM, Spencer Graves wrote:

 Do you want to make inference about the specific subjects in your 
study?  If yes, the subjects are a fixed effect.  If instead you want 
to make inference about the societal processes that will generate the 
subjects you will get in the future, that is a random effect.  The 
function lme handles both fixed and random effects, as does 
varcomp.  The functions aov and lm are restricted to fixed 
effects only.  You can use dummy coding for lm and aov as well.
 The distinction between fixed and random effects seems to 
me to be the same as what Deming called the difference between 
enumerative and analytic studies:  With a fixed effect / 
enumerative study, the objective is to determine the disposition of 
the sampling frame.  For example, Deming managed a survey of food 
distribution in Japan in 1946 or so, right after World War II.  The 
purpose was to determine where to deliver food the next day, etc., to 
keep people from dying of starvation.  That was an enumerative study.  
If the purpose had been to advance economic theories for use not only 
in Japan or in 1946-47, that is an analytic study.
 Do you have the book Pinheiro and Bates (2000) Mixed-Effects 
Models in S and S-Plus (Springer)?  If you have more than one use for 
analyzing data on human subjects, I suggest you get and study this 
book if you haven't already.  Doug Bates and several of his graduate 
students have developed lme.  I am not current in the absolute 
latest literature in that area of statistics, but Bates seems to me to 
be among the leaders in that area and specifically in statistical 
computing for that kind of problem.
 hope this helps.  spencer graves

John Christie wrote:

I am trying to understand better an analysis of mean RT in various 
conditions in a within-subjects design with the overall mean RT per 
subject as one of the factors.  lme seems to be the right way to do 
this, using something like m <- lme(rt ~ a*b*subjectRT, random = 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But what is aov 
doing in this instance? MANOVA?  I also haven't been able to find 
anything really useful on what to properly assign to random in the 
lme formula.  For repeated measures the use above is always in the 
examples.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] tcltk - tkcreate question

2003-11-27 Thread Thomas Stabla
Hello,

i'm trying to translate following tcltk source code, which I found in
newsgroup comp.lang.tcl, written by Tom Wilkason, into R Code.

proc scrolled_Canvas {base} {
   frame $base.fm -borderwidth 2 -relief sunken

   canvas $base.fm.cv -yscrollcommand "$base.fm.cv_vertscrollbar set"
   scrollbar $base.fm.cv_vertscrollbar -orient vertical \
       -command "$base.fm.cv yview"
   pack $base.fm.cv -side left -fill both -expand true
   pack $base.fm.cv_vertscrollbar -side right -fill y
   pack $base.fm -side top -fill both -expand true

   set hull [frame $base.fm.cv.hull -borderwidth 2 -relief ridge]

   set wid [winfo width $base.fm]
   $base.fm.cv create window 0 0 -anchor nw -window $hull -width 10 -height 500 -tag window
   bind $base.fm.cv <Configure> "ResizeCanvas %W %w %h"
   return $hull
}


I have successfully translated the code until the line

   $base.fm.cv create window 0 0 -anchor nw -window $hull -width 10 -height 500 -tag 
window

which i don't fully understand because i started with tcltk just this
week.

I tried to translate this line using the R function tkcreate, but i didn't
get very far.

Thanks for your help.


Greetings,
Thomas Stabla

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] tcltk - tkcreate question

2003-11-27 Thread Peter Dalgaard
Thomas Stabla [EMAIL PROTECTED] writes:

 
 I have successfully translated the code until the line
 
$base.fm.cv create window 0 0 -anchor nw -window $hull -width 10 -height 500 -tag 
 window
 
 which i don't fully understand because i started with tcltk just this
 week.
 
 I tried to translate this line using the R function tkcreate, but i didn't
 get very far.

I assume you got the canvas ($base.fm.cv) stored in a variable,
cv, say, and hull similarly. Then my first guess would be  

tkcreate(cv, "window", 0, 0, anchor="nw", window=hull, width=10,
         height=5, tag="window")
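
A fuller sketch of the whole scrolled-canvas translation in R's tcltk (untested;
the widget variable names are ours) could look like:

library(tcltk)
base <- tktoplevel()
fm   <- tkframe(base, borderwidth = 2, relief = "sunken")
cv   <- tkcanvas(fm)
sb   <- tkscrollbar(fm, orient = "vertical",
                    command = function(...) tkyview(cv, ...))
tkconfigure(cv, yscrollcommand = function(...) tkset(sb, ...))
tkpack(cv, side = "left", fill = "both", expand = TRUE)
tkpack(sb, side = "right", fill = "y")
tkpack(fm, side = "top", fill = "both", expand = TRUE)
hull <- tkframe(cv, borderwidth = 2, relief = "ridge")
tkcreate(cv, "window", 0, 0, anchor = "nw", window = hull,
         width = 10, height = 500, tag = "window")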


-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] tcltk - tkcreate question

2003-11-27 Thread Thomas Stabla
On 27 Nov 2003, Peter Dalgaard wrote:

 Thomas Stabla [EMAIL PROTECTED] writes:

 
  I have successfully translated the code until the line
 
 $base.fm.cv create window 0 0 -anchor nw -window $hull -width 10 -height 500 
  -tag window
 
  which i don't fully understand because i started with tcltk just this
  week.
 
  I tried to translate this line using the R function tkcreate, but i didn't
  get very far.

 I assume you got the canvas ($base.fm.cv) stored in a variable,
 cv, say, and hull similarly. Then my first guess would be

 tkcreate(cv, "window", 0, 0, anchor="nw", window=hull, width=10,
  height=5, tag="window")


Works fine, thank you for your fast help.

Best regards,
Thomas Stabla

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] cclust - cindex - binary data

2003-11-27 Thread Bruno Giordano
Hi,
I'm trying to debug a function I wrote to calculate the cindex for a
hierarchical tree.
For this it is useful to compare my calculations with those in the output of
the clustIndex function in the cclust library.
There's no way, however, to obtain the cindex value for a given output of the
cclust function, as an NA value is always returned.
This happens almost surely because the cindex in clustIndex is calculated
only for binary data, but, in turn, I can't find a way to specify, either
with the cclust function or with the clustIndex function, that a given
input data set is binary.

Thanks a lot
Bruno

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] dvips function gives Documents not found error

2003-11-27 Thread Frank E Harrell Jr
On Thu, 27 Nov 2003 10:04:51 -0800
David Kelly [EMAIL PROTECTED] wrote:

 This is primarily an FYI to the Hmisc author, though any brilliant 
 suggestions from him or anyone else are always welcome.  Re this problem
 
 reported earlier in this thread:
 
getting an error from Hmisc when trying
dvips(latex(describe(mtcars)), file="/kellytest/kelly.ps")
that says
Error in system(cmd, intern = intern, wait = wait | intern,
show.output.on.console = wait,  :
   C:/Documents not found
which appears to be a path-parsing problem of Documents and
Settings
 
 The problem is still present. I've tried every suggestion that was sent 
 to me. Most recently, I created a .Renviron which said
 TMP=c:/kellytest/rtemp
 and I then ran R (from within emacs/ess).
 
 I verified that TMP had been set correctly as follows:
   > tempdir()
 [1] "c:/kellytest/rtemp\\Rtmp5417"
 
 but I still got the C:/Documents not found error.
 
 P.S. I found that a TeX file was created when the command was executed, 
 but I could find no evidence of a dvi or ps file.
 
 -- David Kelly
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help

Did you make the changes I suggested (surrounding two items by dQuote( ))?

Have you tried Linux?  :)

---
Frank E Harrell JrProfessor and ChairSchool of Medicine
  Department of BiostatisticsVanderbilt University

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Re: FDA and ICH Compliance of R

2003-11-27 Thread Paul Sorenson
 Antonia Drugica [EMAIL PROTECTED] writes:
 
  Does anybody know if R is FDA or ICH (or EMEA...) 
 compliant? AFAIK S-Plus
  is but that means nothing...
 
 As Thomas pointed out, that does mean nothing -- there was a group of
 folks discussing what might be done to help, earlier this year, but
 then everyone got busy...

FDA has a guidance document for off-the-shelf software:

http://www.fda.gov/cdrh/ode/guidance/585.html

Note that it focuses on OTS software used in medical devices.  However you
should read it.  The document:

http://www.fda.gov/cdrh/comp/guidance/938.html

has a section on the applicability of the software guidance (which
encompasses stuff outside the instrument itself).  Since I am no
lawyer, I can't say whether R falls within this scope.

It is fair to say however that the FDA consider safety and
effectiveness very important.  If the effectiveness that you claim is
based on statistics provided by software, or you rely on software for
determining safe levels (e.g. of a drug), then I would say (as a layman)
it is largely irrelevant whether the vendor claims some sort of FDA
badge, because that does not prevent someone from writing dodgy
scripts.

So what you can do (other than soliciting mailing list opinions)
includes:
o Think.  What are the implications for end users, patients
  etc.?  Would you take a pill based on your own stats?
o Read what the FDA have to say.
o Evaluate the risk and safety implications of the
  statistics you use.
o Manage the risk.  E.g. can you independently confirm
  the key results?
o Your scripts are software - the FDA requires
  evidence of a credible process in the life cycle of software, whether
  they be spreadsheets, real-time control systems or whatever.

OTS software that is validated does not remove responsibility for
reducing risk to acceptable levels.

HTH

paul

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] dvips function gives Documents not found error

2003-11-27 Thread David Kelly
Frank Harrell wrote:

 Did you make the changes I suggested (surrounding two items by
 dQuote())?

 Have you tried Linux?

No, I didn't make those changes because I've just been working with a 
binary distribution. I may go ahead and pull down sources and try; I was 
trying to avoid that.

I wish I were working with Linux, but I'm doing this work for someone 
with many installed PCs that want to use R, and they're all running 
Win2K and that isn't going to change.

Thanks for the reply -
David Kelly
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] dvips function gives Documents not found error

2003-11-27 Thread Dirk Eddelbuettel
On Thu, Nov 27, 2003 at 04:21:04PM -0800, David Kelly wrote:
 Frank Harrell wrote:
 
  Did you make the changes I suggested (surrounding two items by
  dQuote())?
 
  Have you tried Linux?
 
 No, I didn't make those changes because I've just been working with a 
 binary distribution. I may go ahead and pull down sources and try; I was 

But that's the beauty of it -- even in Windoze, the $R_HOME/library/$FOO/R/
directory for a package $FOO contains simple text code which you can edit.

If you know how to read and write S source code, it may take you only about
one minute to locate the file and insert the suggested dQuote().
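
For instance, one quick way to locate that directory from within R (a sketch,
with Hmisc standing in for $FOO):

file.path(system.file(package = "Hmisc"), "R")   # path of the installed package's R code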

 trying to avoid that.
 
 I wish I were working with Linux, but I'm doing this work for someone 
 with many installed PCs that want to use R, and they're all running 
 Win2K and that isn't going to change.

If you organise your work environment well, operating systems matter less and
less.  R, Perl, Python, ... are pretty much completely cross-platform (unless
you insist on using OS-specific features). 

Dirk


-- 
Those are my principles, and if you don't like them... well, I have others.
-- Groucho Marx

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help