Re: [R] Differential Equations

2004-06-08 Thread Wolski
Hi!

   Suspicion Breeds Confidence!

   --Brazil


Take a look at the packages listed on cran.r-project.org,

or, at the R prompt (with the relevant packages installed), type

help.search("differential equations")

Help files with alias or concept or title matching 'differential
equations' using fuzzy matching:

IndomethODE(nlmeODE)    Pharmacokinetic modelling of Indomethacin
                        using differential equations
nlmeODE(nlmeODE)        Non-linear mixed-effects modelling in nlme
                        using differential equations
lsoda(odesolve)         Solve System of ODE (ordinary differential
                        equation)s.
rk4(odesolve)           Solve System of ODE (ordinary differential
                        equation)s by classical Runge-Kutta 4th order
                        integration.



Type 'help(FOO, package = PKG)' to inspect entry 'FOO(PKG) TITLE'.
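As a minimal sketch of the lsoda() interface listed above (hedged: written against the odesolve package of that era; its successor package, deSolve, keeps the same lsoda() call):

```r
# Exponential decay dy/dt = -k*y solved numerically with lsoda()
library(odesolve)  # in later R versions: library(deSolve), same interface

decay <- function(t, y, parms) {
  # lsoda() expects the derivatives wrapped in a list
  list(-parms["k"] * y)
}

times <- seq(0, 10, by = 0.1)
out <- lsoda(y = c(y = 1), times = times, func = decay, parms = c(k = 0.5))

# The numeric solution should track the analytic one, exp(-k*t)
max(abs(out[, "y"] - exp(-0.5 * out[, "time"])))
```

The same func/parms pattern carries over to rk4(), which trades lsoda's adaptive stiff solver for fixed-step Runge-Kutta.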




Sincerely
Eryk

*** REPLY SEPARATOR  ***

On 6/8/2004 at 12:38 AM Márcio de Medeiros Ribeiro wrote:

Hello!

I would like to know if R can solve Differential Equations...
I don't think so because, from my point of view, I see R as a statistical
system, not a math system. Am I wrong?

Thank you very much.

Márcio de Medeiros Ribeiro
Graduando em Ciência da Computação
Departamento de Tecnologia da Informação - TCI
Universidade Federal de Alagoas - UFAL
Maceió - Alagoas - Brasil
Projeto CoCADa

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Dipl. bio-chem. Eryk Witold Wolski@MPI-Moleculare Genetic   
Ihnestrasse 63-73 14195 Berlin   'v'
tel: 0049-30-83875219   /   \
mail: [EMAIL PROTECTED]---W-Whttp://www.molgen.mpg.de/~wolski



Re: [R] AR models

2004-06-08 Thread Prof Brian Ripley
On Mon, 7 Jun 2004, Laura Holt wrote:

 Dear R People:
 
 Is it possible to fit an AR model such as:
 
 y_t = phi_1 y_{t-1} + phi_2 y_{t-9} + a_t,
 please?
 
 I know that we can fit an AR(9) model, but I was wondering if we could do a
 partial as described.

Yes.  See ?arima, and especially the 'fixed' argument.
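For instance (a sketch based on ?arima: NA in 'fixed' means "estimate freely", and fixing AR coefficients requires transform.pars = FALSE):

```r
set.seed(1)
# Simulate from y_t = 0.5 y_{t-1} + 0.3 y_{t-9} + a_t
y <- arima.sim(list(ar = c(0.5, rep(0, 7), 0.3)), n = 500)

# Fit an AR(9) with lags 2..8 constrained to zero;
# the fixed vector covers ar1..ar9 plus the intercept (mean) term
fit <- arima(y, order = c(9, 0, 0),
             fixed = c(NA, rep(0, 7), NA, NA),
             transform.pars = FALSE)
coef(fit)
```

Only ar1, ar9, and the intercept are estimated; the other AR coefficients stay exactly zero.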

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] Load a dll

2004-06-08 Thread Uwe Ligges
Rui wrote:
Hi folks,
 
I have a question about how to load a dll. 
First, I use the command

dyn.load("lassofu.dll")

then, I got the message below
NULL
Warning message:
DLL attempted to change FPU control word from 9001f to 90003
See ?dyn.load.
This message should make you a little bit worried about your dll.

After I tried to use this dll in one of my functions, I got the message
below
Error in .Fortran("lasso", as.double(x), as.double(y), as.double(b),
as.integer(n),  :
Fortran function name not in load table
BTW, my system is Windows98 + R1.9.0.
Could anyone help me to solve this question? Thanks
Did you follow the instructions in the manual "Writing R Extensions" and
the file readme.packages?

Uwe Ligges

Rui Wang
 
Phone: (403)220-4501
Email: [EMAIL PROTECTED]
Department of Mathematics and Statistics
University of Calgary
 



[R] How do I sort data containts character

2004-06-08 Thread Unung Istopo Hartanto
Sorry, it's a simple question:

example:
> data.test
  label    value
1   one 21.35746
2   two 22.07592
3 three 20.74098



I would like the following returned:

  label    value
3 three 20.74098
1   one 21.35746
2   two 22.07592

Can anyone help?

Thanks,

regards

Unung Istopo



Re: [R] How do I sort data containts character

2004-06-08 Thread Dimitris Rizopoulos
try to use:

data.test[order(data.test[,2]),]
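Made concrete with the example data (the data frame is re-created here for illustration):

```r
data.test <- data.frame(label = c("one", "two", "three"),
                        value = c(21.35746, 22.07592, 20.74098))

# order() returns the permutation that sorts the second (numeric) column;
# using it as a row index rearranges the whole data frame
sorted <- data.test[order(data.test[, 2]), ]
sorted
```

order(..., decreasing = TRUE) reverses the sort, and order() accepts several columns as tie-breaking keys.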


I hope this helps.

Best,
Dimitris


Dimitris Rizopoulos
Doctoral Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/396887
Fax: +32/16/337015
Web: http://www.med.kuleuven.ac.be/biostat/
 http://www.student.kuleuven.ac.be/~m0390867/dimitris.htm



- Original Message - 
From: Unung Istopo Hartanto [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, June 08, 2004 9:18 AM
Subject: [R] How do I sort data containts character




Re: [R] GLMM(..., family=binomial(link=cloglog))?

2004-06-08 Thread Peter Dalgaard
Spencer Graves [EMAIL PROTECTED] writes:

 Data: DF
   log-likelihood:  -55.8861
 Random effects:
   Groups NameVariance   Std.Dev.
   smpl   (Intercept) 1.7500e-12 1.3229e-06
 
 Estimated scale (compare to 1)  3.280753
 
 Fixed effects:
 Estimate Std. Error z value Pr(>|z|)
 (Intercept) 0.148271   0.063419  2.3379  0.01939
 
 Number of Observations: 10
 Number of Groups: 10

The only information you have about sigma is in the overdispersion
of y, so you probably cannot both have a scale parameter and a random
effect of sample in there. Doug did say something about fixing the
scale in GLMM, but I forget whether he had implemented it or planned
to...

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



RE: [R] Recoding a multiple response question into a series of 1, 0 variables

2004-06-08 Thread Philippe Grosjean
Hello,
Here is a slightly more sophisticated, fully vectorized answer.

RecodeChoices <- function(mat) {
# Make sure mat is a matrix (in case it is a data.frame)
mat <- as.matrix(mat)

# Get dimensions of the matrix
Dim <- dim(mat)
Nr <- Dim[1]
Nc <- Dim[2]

# Flatten it into a vector, but by row (need to transpose first!)
mat <- t(mat)
dim(mat) <- NULL

# Offset is a vector of offsets to make locations unique in vector mat
# (a solution to avoid loops, see Jonathan Baron's answer)
Offset <- sort(rep(0:(Nr - 1) * Nc, Nc))

# Initialize a vector of results of the same size with 0's
res <- rep(0, Nr * Nc)

# Now replace locations pointed to by (mat + Offset) with 1 in res
# (NA indices are ignored here, since the replacement value has length one)
res[mat + Offset] <- 1

# Transform res into a matrix of the same size as mat, by row
res <- matrix(res, nrow = Nr, byrow = TRUE)

# Return the result
return(res)
}

# Now your example:
A <- matrix(c(4,  2, NA, NA, NA,
              1,  3,  4,  5, NA,
              3,  2, NA, NA, NA), nrow = 3, byrow = TRUE)
A
RecodeChoices(A)

Depending on the use you make of this, it is perhaps preferable to recode it
as a boolean (as.numeric() would easily give you the 1/0 coding above). To
do this, just replace:
res <- rep(0, Nr * Nc)   by   res <- rep(FALSE, Nr * Nc)
and:
res[mat + Offset] <- TRUE

You may also consider making them factors... and to finalize this function,
you should add code to collect the row and column names from mat and apply them
to res, and perhaps transform res into a data.frame if mat was a data.frame
itself.

Best,

Philippe Grosjean

...?}))
 ) ) ) ) )
( ( ( ( (   Prof. Philippe Grosjean
\  ___   )
 \/ECO\ (   Numerical Ecology of Aquatic Systems
 /\___/  )  Mons-Hainaut University, Pentagone
/ ___  /(   8, Av. du Champ de Mars, 7000 Mons, Belgium
 /NUM\/  )
 \___/\ (   phone: + 32.65.37.34.97, fax: + 32.65.37.33.12
   \ )  email: [EMAIL PROTECTED]
 ) ) ) ) )  SciViews project coordinator (http://www.sciviews.org)
( ( ( ( (
...

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Jonathan Baron
Sent: Tuesday, 08 June, 2004 04:45
To: Greg Blevins
Cc: R-Help
Subject: Re: [R] Recoding a multiple response question into a series of
1,0 variables


On 06/07/04 21:28, Greg Blevins wrote:
Hello R folks.

1) The question that generated the data, which I call Qx:
Which of the following 5 items have you performed in the past month?
(multipe
response)

2) How the data is coded in my current dataframe:
The first item that a person selected is coded under a field called
Qxfirst; the
second selected under Qxsecond, etc.  For the first Person, the NAs mean
that that
person only selected two of the five items.

Hypothetical data is shown

        Qxfirst Qxsecond Qxthird Qxfourth Qxfifth
Person1       4        2      NA       NA      NA
Person2       1        3       4        5      NA
Person3       3        2      NA       NA      NA

3) How I want the data to be coded:

I want each field to be one of the five items and I want each field to
contain a 1 or 0 code: 1 if they mentioned the item, 0 otherwise.

Given the above data, the new fields would look as follows:

        Item1 Item2 Item3 Item4 Item5
Person1     0     1     0     1     0
Person2     1     0     1     1     1
Person3     0     1     1     0     0

Here is an idea:
X <- c(4,5,NA,NA,NA) # one row
Y <- rep(NA,5) # an empty row
Y[X] <- 1

Y is now
NA NA NA 1 1
which is what you want.

So you need to do this on each row and then convert the NAs to
0s.  So first create an empty data frame, the same size as your
original one X, like my Y.  Call it Y.  Then a loop?  (I can't
think of a better way just now, like with mapply.)

for (i in [whatever]) Y[i, X[i]] <- 1

(Not tested.)  Jon
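A tested version of that sketch (one assumption: starting from a matrix of 0s, which makes the separate NA-to-0 conversion step unnecessary):

```r
X <- rbind(c(4, 2, NA, NA, NA),
           c(1, 3, 4, 5, NA),
           c(3, 2, NA, NA, NA))   # the hypothetical data above

Y <- matrix(0, nrow = nrow(X), ncol = 5)
for (i in 1:nrow(X)) {
  picked <- X[i, ]
  Y[i, picked[!is.na(picked)]] <- 1   # mark each selected item with a 1
}
Y
```

Row 1 becomes 0 1 0 1 0, row 2 becomes 1 0 1 1 1, and row 3 becomes 0 1 1 0 0, matching the target table.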
--
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page:http://www.sas.upenn.edu/~baron
R search page:http://finzi.psych.upenn.edu/



Re: [R] Censboot Warning and Error Messages

2004-06-08 Thread Prof Brian Ripley
On Mon, 7 Jun 2004, Jingky Lozano wrote:

 Good day R help list!!!
 
 I've been trying to do Bootstrap in R on Censored data.  I encountered
 WARNING/ERROR messages which I could not find explanation.
 I've been searching on the literature for two days now and still can't find
 answers.  I hope there's anyone out there who can help me 
 with these two questions: 
 
 1. If the Loglik converged before variable... message appears (please see
 printout below) while doing an ordinary bootstrap,
 does it mean that I cannot trust the result of the bootstrap statistics?  Is
 there a valid way to resolve it, like increasing the sample size?

This is a message from coxph and nothing at all to do with censboot.

 2. In doing a conditional bootstrap with survival data, how can one handle data
 with two tied largest survival time observations,
 where one is censored while the other is not?  For example, if the censoring time
 is 48 months and a patient died exactly at that time, he will
 have the same survival time as another patient who was also observed to live 48
 months but was classified as censored because he was still
 alive at the set censoring time.  Doing the recommended algorithm in R gives an
 error in sample length... message.

Recommended by whom?  You could break the ties (death first) and avoid the 
problem.




[R] (no subject)

2004-06-08 Thread PJARES
Hi!!
I am a new user of R (just trying to analyze microarrays with some packages from the
Bioconductor project).
I would like to import a tab-delimited text file containing 20 columns and 22200 rows.

I have tried
read.table;
scan(file=)
> matrix(scan(file, n=20*200,20,200, byrow=TRUE));
No matter what I try, I get this message:
Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file "MAS5orig" (this is the name of the file I am trying to import into R)

Any advice? Which command should I use to import a text file (Excel, tab-delimited
text file) into R?

Thank you very much for your help.

Best wishes,

Pedro

Pedro Jares, Ph.D.
Genomics Unit, IDIBAPS
Barcelona University
C/ Villarroel 170, 
08036 .Barcelona, Spain
Telf. 93 2275400,
Ext 2184 o 2129
Fax 93 2275717



Re: [R] Reading in Files (was: no subject)

2004-06-08 Thread Ko-Kang Kevin Wang
Please use a more appropriate subject!

- Original Message - 
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, June 08, 2004 8:25 PM
Subject: [R] (no subject)


 Hi!!
 I am a new user of R (just trying to analyze microarrays with some
packages from the Bioconductor project).
 I would like to import a tab-delimited text file containing 20 columns and
22200 rows.

 I have tried
 read.table;
 scan(file=)
  > matrix(scan(file, n=20*200,20,200, byrow=TRUE));
 No matter what I try, I get this message:
 Error in file(file, "r") : unable to open connection
 In addition: Warning message:
 cannot open file "MAS5orig" (this is the name of the file I am trying to
import in R)

It's a syntax error, I think.  Why do you have a > before the matrix()
command?

Also, which operating system are you running?  Are you sure your working
directory is correct?

Cheers,

Kevin



Re: [R] (no subject)

2004-06-08 Thread Wolski
Hi!

The message means that R does not find the file.

Under Windows (if you are using Windows) you must escape each backslash in the
path with another backslash, i.e. write \\ instead of \,
or show us how you provide the path to the file.


Or use:

file.choose()

or:

library(tkWidgets)
fileBrowser()


both functions will return the proper path to the file.
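For illustration (the file name below is hypothetical), the two equivalent ways to write a Windows path:

```r
f1 <- "C:\\data\\MAS5orig.txt"   # backslashes escaped with backslashes
f2 <- "C:/data/MAS5orig.txt"     # forward slashes also work on Windows

# Both strings spell the same path; read the file with, e.g.:
# dat <- read.table(f1, header = TRUE, sep = "\t")
identical(gsub("\\\\", "/", f1), f2)
```

A single unescaped backslash ("C:\data\...") is either a string-syntax error or silently produces the wrong characters, which is exactly the "unable to open connection" trap.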

Sincerely 

Eryk




*** REPLY SEPARATOR  ***

On 6/8/2004 at 10:25 AM [EMAIL PROTECTED] wrote:







RE: [R] [Q] raw - gpr in aroma package

2004-06-08 Thread Henrik Bengtsson
Hi. 

First, I think you should address questions about aroma directly to me (the
author) instead of the r-help list since it is not a core R package.

To answer you question, you can not make a GenePixData object out of a
RawData object because the latter contains much less information than a
minimum GPR file would require. However, you can given an existing
GenePixData object overwrite/modify or add fields to it. But, before
explaining how you can do it you should be aware that modifying an existing
GPR structure will make some of the data inconsistent and this may (or may
not) confuse the GenePix software. Adding fields may as well confuse
GenePix.

Easiest is if you modify an existing field:

gpr[["F532 Median"]] <- raw2$G
gpr[["F635 Median"]] <- raw2$R

To find existing fields do ll(gpr).

To add new fields you also have to update the internal/private .fieldNames
field, which is not recommended:
gpr$R <- raw2$R
gpr$G <- raw2$G
gpr$.fieldNames <- c(gpr$.fieldNames, c("R", "G"))

You write a GenePixData object to file by

write(gpr, "result.gpr")

Aroma will update the GenePix header so it follows the correct GPR file
format. Some people have reported that GenePix requires the header elements
to come in a pre-defined order. Currently aroma does not take care of this
and it might be that you will experience problems. As a last resort you can
always append fields manually in, say, Excel. If you do so, do not forget to
update the ATF header from, say
ATF  1.0 
24  43
to
ATF  1.0 
24  45
where 43 and 45 are the number of fields/columns.

But again, I'm not convinced that you should edit GPR files yourself.

Cheers

Henrik Bengtsson
http://www.braju.com/R/

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hee-Jeong Jin
Sent: Tuesday, June 08, 2004 4:24 AM
To: [EMAIL PROTECTED]
Cc: 'junmyungjae'
Subject: [R] [Q] raw - gpr in aroma package


Hi.
 
Is it possible to make gpr from raw?
 
library(aroma)
#read gpr file
gpr <- GenePixData$read("gpr123.gpr", path=aroma$dataPath)
# gpr -> raw
raw <- as.RawData(gpr)
# raw -> ma
ma <- getSignal(raw, bgSubtract=FALSE)
ma.norm <- clone(ma)
# normalization
normalizeWithinSlide(ma.norm, "s")
# ma -> raw
raw2 <- as.RawData(ma)
 
I want to make gpr data from raw2 and then I want to write a new gpr file
(write(gpr, "result.gpr")). Is it possible? Can anyone help me with this?
 
Thanks,
 
Hee-Jeong Jin.




Re: [R] error during make of R-patched on Fedora core 2

2004-06-08 Thread Gavin Simpson
Marc Schwartz wrote:
On Mon, 2004-06-07 at 15:51, Gavin Simpson wrote:
snip

Thanks Roger and Marc, for suggesting I use ./tools/rsync-recommended 
from within the R-patched directory.

This seems to have done the trick as make completed without errors this 
time round. The Recommended directory also contained the links to the 
actual tar.gz files after doing the rsync command, so I guess this was 
the problem (or at least related to it.) I'm off home now with the 
laptop to see if I can finish make check-all and make install R.

I have re-read the section describing the installation process for 
R-patched or R-devel in the R Installation and Administration manual 
(from R.1.9.0) just in case I missed something. Section 1.2 of this 
manual indicates that one can proceed *either* by downloading R-patched 
and then the Recommended packages from CRAN and placing the tar.gz files 
in R_HOME/src/library/Recommended, or by using rsync to download 
R-patched, and then to get the Recommended packages. The two are quite 
separately documented in the manual, and do seem to be in disagreement 
with the R-sources page on the CRAN website, which doesn't mention the 
manual download method (for Recommended) at all.

Is there something wrong with the current Recommended files on CRAN, or 
is the section in the R Installation & Admin manual out-of-date or in 
error, or am I missing something vital here? This isn't a complaint: I'm 
just pointing this out in case this is something that needs updating in 
the documentation.

All the best,
Gavin

Perhaps I am being dense, but in reviewing the two documents (R Admin
and the CRAN sources page), I think that the only thing lacking is a
description on the CRAN page of the manual download option for the Rec
packages.
You would need to go here now for 1.9.1 Alpha/Beta which is where the
current r-patched is:
http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended/
The standard links on CRAN are for the current 'released' version, which
is still 1.9.0 for the moment.
Yes, but having downloaded the contents of that directory (as VERSION 
indicated that R-patched was 1.9.1 alpha), the links to the source files 
for the Recommended packages are not present (obviously). And make 
doesn't seem to work without these links. The rsync approach places the 
package sources *and* the links in the correct directory.

So the instructions in the Admin manual are lacking a statement that you 
need to create links to each of the package sources in the following 
form: name-of-package.tgz, which links to name-of-package_version.tar.gz. 
As it stands, the instructions in the Installation & Admin manual are 
not sufficient to get the manual download method to work.

Procedurally, I think that the rsync approach is substantially easier
(one step instead of multiple downloads) and certainly less error prone.
Also the ./tools/rsync-recommended script is set up to pick up the
proper package versions, which also helps to avoid conflicts.
I agree - being a bit of a Linux newbie, I hadn't used rsync before. 
Seeing how easy it was to use this method of getting the required 
sources I will be using this method in future.

HTH,
Marc
Cheers
Gavin
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson [T] +44 (0)20 7679 5522
ENSIS Research Fellow [F] +44 (0)20 7679 7565
ENSIS Ltd.  ECRC [E] [EMAIL PROTECTED]
UCL Department of Geography   [W] http://www.ucl.ac.uk/~ucfagls/cv/
26 Bedford Way[W] http://www.ucl.ac.uk/~ucfagls/
London.  WC1H 0AP.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%


[R] Downloading of packages

2004-06-08 Thread eobudho
Hi,
I have been trying to download BradleyTerry and brlr software but in
vain. Can you help me please. I already have R installed in my computer.
Thank you in advance.

Elias Obudho.




-
University of Nairobi Mail Services
   You can't afford to stay offline
http://mail.uonbi.ac.ke/



[R] Little question

2004-06-08 Thread zze-PELAY Nicolas FTRD/DMR/BEL
Here I have a function:

f <- function(x){
res <- list()
res$bool = (x >= 0)
res$tot = x
return(res)
}

I want a = res$bool and b = res$tot.
I know that a possible solution is:

Result <- f(x)
a <- Result$bool
b <- Result$tot

But I don't want to use the variable Result.
I'd like to write directly something like [a,b] = f(x).
Is it possible?

Thank you

nicolas



Re: [R] msm capabilities

2004-06-08 Thread Chris Jackson
russell wrote:
Hello,
I'm wondering if anyone has used the msm package to compute the steady 
state probabilities for a Markov model? 

There's no built-in function in msm to do this, but this would be a
useful feature.  For discrete time Markov chains this is a matter of
finding the eigenvector of the transition probability matrix.  But msm
is really for fitting continuous-time Markov models.  In the continuous
case, assuming a steady state p exists, you'd need to solve the two
equations
p.Q = 0
p.1 = 1
for example, using something like
 n <- nrow(Q)
 qr.solve(rbind(t(Q), rep(1, n)),  c(rep(0,n), 1))
This is also the limit as t -> Inf of P(t) = Exp(tQ).
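A worked instance of that recipe, using a hypothetical 2-state generator with rates q12 = 1 and q21 = 2:

```r
Q <- rbind(c(-1,  1),
           c( 2, -2))   # rows sum to zero, as for any CTMC generator

n <- nrow(Q)
# Solve p Q = 0 subject to sum(p) = 1, exactly as suggested above
p <- qr.solve(rbind(t(Q), rep(1, n)), c(rep(0, n), 1))
p   # c(2/3, 1/3): the chain spends twice as long in state 1 as in state 2
```

Stacking the normalization row rep(1, n) under t(Q) makes the system uniquely solvable, since the rows of Q alone are linearly dependent.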
Chris (author of msm)
--
Christopher Jackson [EMAIL PROTECTED], Research Associate,
Department of Epidemiology and Public Health, Imperial College
School of Medicine, Norfolk Place, London W2 1PG, tel. 020 759 43371


Re: [R] Little question

2004-06-08 Thread Jason Turner
On Tue, 2004-06-08 at 21:44, zze-PELAY Nicolas FTRD/DMR/BEL wrote:
 Here I got a function :
 
 f <- function(x){
 res <- list()
 res$bool = (x >= 0)
 res$tot = x
 return(res)
 }
 
 I want that a=res$bool and b=res$tot
 I know that a possible solution is :
 
Like

foo <- function(x,...){
list(a = (x >= 0), b = x)
}
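As a usage sketch: R has no native [a, b] <- f(x) destructuring, but the returned list can be unpacked into separate variables by hand:

```r
foo <- function(x) list(a = (x >= 0), b = x)

res <- foo(3)
for (nm in names(res)) assign(nm, res[[nm]])  # creates a and b in this scope
a
b
```

assign() creating variables from list names is a generic workaround, not a documented idiom; keeping the named list and using res$a, res$b is usually clearer.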

Cheers

Jason



Re: [R] Downloading of packages

2004-06-08 Thread Uwe Ligges
[EMAIL PROTECTED] wrote:
Hi,
I have been trying to download BradleyTerry and brlr software but in
These are packages.
vain. Can you help me please. I already have R installed in my computer.
Thank you in advance.
Version of R?
Operating system and its version?
What was the error message after trying
 install.packages(c("BradleyTerry", "brlr"))
?
Uwe Ligges

Elias Obudho.



[R] level assignment

2004-06-08 Thread Christian Schulz
Hi,

i would like to recode some numeric variables in one
step, but I'm hanging on an unexpected level-assignment problem.

for(i in 2:length(msegmente))
{   msegmente[,i] <- as.factor(msegmente[,i])
}
 
Problem is that not every level occurs in every variable, so the
assignment is necessary!?

levels(LT.200301) <- c("1"="AK", "3"="GC", "10"="OC",
"29"="AM", "32"="IA", "38"="ACH", "52"="ZBA", "53"="A9L", "59"="EHK")
Error: syntax error
> levels(LT.200301) <- list(c("1"="AK", "3"="GC", "10"="OC",
"29"="AM", "32"="IA", "38"="ACH", "52"="ZBA", "53"="A9L", "59"="EHK"))
Error: syntax error
> levels(LT.200301) <- c("1"="AK", "3"="GC", "10"="OC",
"29"="AM", "32"="IA", "38"="ACH", "52"="ZBA", "53"="A9L", "59"="EHK")
Error: syntax error

Many thanks for any hint/help
Christian



Re: [R] error during make of R-patched on Fedora core 2

2004-06-08 Thread Prof Brian Ripley
On Tue, 8 Jun 2004, Gavin Simpson wrote:

 Marc Schwartz wrote:
  On Mon, 2004-06-07 at 15:51, Gavin Simpson wrote:
  
  snip
  
  
 Thanks Roger and Marc, for suggesting I use ./tools/rsync-recommended 
 from within the R-patched directory.
 
 This seems to have done the trick as make completed without errors this 
 time round. The Recommended directory also contained the links to the 
 actual tar.gz files after doing the rsync command, so I guess this was 
 the problem (or at least related to it.) I'm off home now with the 
 laptop to see if I can finish make check-all and make install R.
 
 I have re-read the section describing the installation process for 
 R-patched or R-devel in the R Installation and Administration manual 
 (from R.1.9.0) just in case I missed something. Section 1.2 of this 
 manual indicates that one can proceed *either* by downloading R-patched 
 and then the Recommended packages from CRAN and placing the tar.gz files 
 in R_HOME/src/library/Recommended, or by using rsync to download 
 R-patched, and then to get the Recommended packages. The two are quite 
 separately documented in the manual, and do seem to be in disagreement 
 with the R-sources page on the CRAN website, which doesn't mention the 
 manual download method (for Recommended) at all.
 
 Is there something wrong with the current Recommended files on CRAN, or 
  is the section in the R Installation & Admin manual out-of-date or in 
 error, or am I missing something vital here? This isn't a complaint: I'm 
 just pointing this out in case this is something that needs updating in 
 the documentation.
 
 All the best,
 
 Gavin
  
  
  Perhaps I am being dense, but in reviewing the two documents (R Admin
  and the CRAN sources page), I think that the only thing lacking is a
  description on the CRAN page of the manual download option for the Rec
  packages.
  
  You would need to go here now for 1.9.1 Alpha/Beta which is where the
  current r-patched is:
  
  http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended/
  
  The standard links on CRAN are for the current 'released' version, which
  is still 1.9.0 for the moment.
 
 Yes, but having downloaded the contents of that directory (as VERSION 
 indicated that R-patched was 1.9.1 alpha), the links to the source files 
 for the Recommended packages or not present (obviously). And make 
 doesn't seem to work without these links. The rsync approach places the 
 package sources *and* the links in the correct directory.
 
 So the instructions in the Admin manual are lacking a statement that you 
 need to create links to each of the package sources in the following 
 form name-of-package.tgz which links to name-of-package_version.tar.gz. 
  As it stands, the instructions in the Installation & Admin manual are 
 not sufficient to get the manual download method to work.

You need to run tools/link-recommended.  I've added that to R-admin.

  Procedurally, I think that the rsync approach is substantially easier
  (one step instead of multiple downloads) and certainly less error prone.
  Also the ./tools/rsync-recommended script is set up to pick up the
  proper package versions, which also helps to avoid conflicts.
 
 I agree - being a bit of a Linux newbie, I hadn't used rsync before. 
 Seeing how easy it was to use this method of getting the required 
 sources I will be using this method in future.

rsync is great, *provided* you have permission to use the ports it uses.  
Users with http proxies often do not, hence the description of the manual 
method.  During alpha/beta periods, we do make a complete tarball 
available, and I wonder if we should not be doing so with 
R-patched/R-devel at all times.




Re: [R] level assignment

2004-06-08 Thread Achim Zeileis
On Tue, 8 Jun 2004 13:09:25 +0200 Christian Schulz wrote:

 Hi,
 
 I would like to recode some numeric variables in one
 step, but am hanging unexpectedly on a level assignment problem.
 
 for(i in 2:length(msegmente))
 {   msegmente[,i] <- as.factor(msegmente[,i])
 }
  
 The problem is that not every level occurs in every variable, so the
 assignment is necessary!?
 
 levels(LT.200301) <-   c(1="AK",3="GC",10="OC", 
 29="AM",32="IA",38="ACH",52="ZBA",53="A9L",59="EHK")
 Error: syntax error
> levels(LT.200301) <-   list(c(1="AK",3="GC",10="OC", 
 29="AM",32="IA",38="ACH",52="ZBA",53="A9L",59="EHK"))
 Error: syntax error
> levels(LT.200301) <-   c(1="AK",3="GC",10="OC", 
 29="AM",32="IA",38="ACH",52="ZBA",53="A9L",59="EHK")
 Error: syntax error

I'm not sure what exactly you are trying to do, but does replacing

  as.factor(msegmente[,i])

by

  factor(msegmente[,i],
 levels = c(1, 3, 10, 29, 32, 38, 52, 53, 59),
  labels = c("AK","GC","OC","AM","IA","ACH","ZBA","A9L","EHK"))

yield the desired result?
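
A minimal runnable illustration (data invented for this sketch) of why passing explicit levels keeps levels that are absent from a particular variable:

```r
# hypothetical data: only four of the nine codes actually occur
x <- c(1, 3, 10, 59)
f <- factor(x,
            levels = c(1, 3, 10, 29, 32, 38, 52, 53, 59),
            labels = c("AK","GC","OC","AM","IA","ACH","ZBA","A9L","EHK"))
as.character(f)  # "AK" "GC" "OC" "EHK"
levels(f)        # all nine labels, even those absent from x
```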

hth,
Z


 Many thanks for any hint/help
 Christian
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Average R-squared of model1 to model n

2004-06-08 Thread kan Liu
Hi,

Thanks for your message. I tried to prove that the
R-squared of the averaged model is always greater than
or equal to the average R-squared of individual
models (supposing m=2). Please see the attached r2.pdf.
I hope this can be generalized to the general case
(m > 2).

Any comment would be very appreciated!


Kan

Cambridge University, UK

--- Liaw, Andy [EMAIL PROTECTED] wrote:
 The Y1, Y2, etc. that Kan mentioned are predicted
 values of a test set data
 from models that supposedly were fitted to the same
 (or similar) data.  It's
 hard for me to imagine the outcome would be as
 `severe' as Y1 = -Y2.
 
 That said, I do not think that the R-squared (or
 q-squared as some call it)
 of the aggregate model is necessarily larger or
 equal to the average
 R-squared of the component models.  It obviously
 depends on how the
 component models are generated.  As a hypothetical
 example (because I
 haven't actually tried it, just speculating): 
 Suppose the data are
 generated from a step function, the sort that would
 be perfect for
 regression trees.  If one grows several well-pruned
 trees, I'd guess that
 the average R-squared of the individual trees has a
 chance of being larger
 than the R-squared of the averaged model.
 
 Best,
 Andy
 
  From: Gabor Grothendieck
  
  Suppose m=2, Y1=Y and Y2= -Y.  Then (b) is zero so
 (a) must be
  greater or equal to (b).  Thus (b) is not
 necessarily greater 
  than (a).
  
  
  kan Liu kan_liu1 at yahoo.com writes:
  
  : 
  : Hi,
  : 
  : We got a question about interpretating
 R-suqared.
  : 
  : The actual outputs for a test dataset are
 X=(x1,x2, ..., xn).
  : model 1 predicted the outputs as
 Y1=(y11,y12,..., y1n)
  : model 2 predicted the outputs as
 Y2=(y21,y22,..., y2n)
  : 
  : ... 
  : model m predicted the outputs as
 Ym=(ym1,ym2,..., ymn)
  : 
  : Now we have two ways to calculate R squared to
 evaluate the average 
  performance of committee model.
  : 
  : (a) Calculate R squared between (X, Y1), (X,
 Y2), ..., 
  (X,Ym), and then 
  averaging the R squared
  : (b) Calculate average Y=(Y1+Y2+ ... +Ym)/m, and
 then 
  calculate the R 
  squared between (X, Y). 
  : 
  : We found it seemed that R squared calculated in
 (b) is 
  'always' higher than 
  that in (a).
  : 
  : Does this result depends on the test dataset or
 this 
  happened by chance?Can 
  you advise me any reference for
  : this issue? 
  : 
  : Many thanks in advance!
  : 
  : Kan
  : 
  : 
  :   
  : -
  : 
  :   [[alternative HTML version deleted]]
  : 
  : __
  : R-help at stat.math.ethz.ch mailing list
  :

https://www.stat.math.ethz.ch/mailman/listinfo/r-help
  : PLEASE do read the posting guide! 
  http://www.R-project.org/posting-guide.html
  : 
  :
  
  
  __
  [EMAIL PROTECTED] mailing list
 

https://www.stat.math.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 
  http://www.R-project.org/posting-guide.html
  
  
 
 

--
 Notice:  This e-mail message, together with any
 attachments, contains information of Merck & Co.,
 Inc. (One Merck Drive, Whitehouse Station, New
 Jersey, USA 08889), and/or its affiliates (which may
 be known outside the United States as Merck Frosst,
 Merck Sharp & Dohme or MSD and in Japan, as Banyu)
 that may be confidential, proprietary copyrighted
 and/or legally privileged. It is intended solely for
 the use of the individual or entity named on this
 message.  If you are not the intended recipient, and
 have received this message in error, please notify
 us immediately by reply e-mail and then delete it
 from your system.
 --




__




r2.pdf
Description: r2.pdf
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] counting number of objects in a list

2004-06-08 Thread Vumani Dlamini
Dear R-users;
I am interested in getting the number of objects in a list, and have thus 
far been unsuccessful. Can you please help.

Thank you.
Vumani
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] counting number of objects in a list

2004-06-08 Thread Uwe Ligges
Vumani Dlamini wrote:
Dear R-users;
I am interested in getting the number of objects in a list, and have 
thus far been unsuccessful. Can you please help.

Thank you.
Vumani
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html
?length
Uwe Ligges
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] counting number of objects in a list

2004-06-08 Thread Liaw, Andy
Lists in the S language are like vectors, so length(mylist) would tell you
how many components there are in mylist.  Beware, though, that lists can
well be nested...
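
A quick illustration (object names invented) of length() on a list, including the nesting caveat:

```r
mylist <- list(a = 1:3, b = "x", c = list(d = 4, e = 5))
length(mylist)          # 3 -- top-level components only
length(mylist$c)        # 2 -- the nested list counts as one component above
length(unlist(mylist))  # 6 -- atomic elements after flattening
```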

Andy

 From: Vumani Dlamini
 
 Dear R-users;
 
 I am interested in getting the number of objects in a list, 
 and have thus 
 far been unsuccessful. Can you please help.
 
 Thank you.
 
 Vumani

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] level assignment

2004-06-08 Thread Christian Schulz
many thanks!

christian


 I'm not sure what exactly you are trying to do, but does replacing

   as.factor(msegmente[,i])

 by

   factor(msegmente[,i],
  levels = c(1, 3, 10, 29, 32, 38, 52, 53, 59),
  labels = c(AK,GC,OC,AM,IA,ACH,ZBA,A9L,EHK))

 yield the desired result?

 hth,
 Z

  Many thanks for any hint/help
  Christian
 
  __
  [EMAIL PROTECTED] mailing list
  https://www.stat.math.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide!
  http://www.R-project.org/posting-guide.html

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Printing Lattice Graphs from Windows

2004-06-08 Thread Duncan Murdoch
On Sun, 6 Jun 2004 14:13:59 -0400, Charles and Kimberly Maner
[EMAIL PROTECTED] wrote :

Hello.  I have researched this topic and have found no answers.  I am
running R 1.9.0 and am trying to print a lattice graph, (e.g., xyplot(1~1)),
using mouse right click -> print.  It produces a blank page.  Also, I right
click, copy the metafile and paste into a MS Office document, (e.g., .ppt,
.doc) and, same thing, a blank.  I have updated to the latest lattice
package and still no printing.  Any help/advice?

This bug appears to be fixed in the current alpha build (Version 1.9.1
alpha (2004-06-08)).  This will be available on CRAN by tomorrow
morning, and on the mirrors soon afterwards.

You can download the windows binary from
http://cran.us.r-project.org/bin/windows/base/rpatched.html.  Wait
until it gives today or later as the build date, or you'll get an old
one.

Duncan Murdoch

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Computting statistics on a matrix with 2 factor column

2004-06-08 Thread Marc Mamin
Hello,

I suppose this is a basic question but couldn't find a solution:

I have a large matrix with, let's say, 3 columns:

V1  V2  V3
a   x   2
a   x   4
a   y   8
b   z   16

and I want to compute some statistics based on 
the levels resulting from the combination of the first two columns

e.g.:

SUM->

V1  V2  V3
a   x   6
a   y   8
b   z   16


Thanks for your hints .

Marc

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Computting statistics on a matrix with 2 factor column

2004-06-08 Thread Chuck Cleland
?tapply
?aggregate
  You probably have a data frame, not a matrix.
Marc Mamin wrote:
I suppose this is a basic question but couldn't find a solution.:
I have a large matrix with, let's say, 3 columns:
V1  V2  V3
a   x   2
a   x   4
a   y   8
b   z   16
and I want to compute some statistics based on 
the levels resulting from the combination of the first two columns

e.g.:
SUM->
V1  V2  V3
a   x   6
a   y   8
b   z   16
--
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 452-1424 (M, W, F)
fax: (917) 438-0894
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Computting statistics on a matrix with 2 factor column

2004-06-08 Thread Gavin Simpson
Marc Mamin wrote:
Hello,
I suppose this is a basic question but couldn't find a solution.:
I have a large matrix with, let's say, 3 columns:
V1  V2  V3
a   x   2
a   x   4
a   y   8
b   z   16
and I want to compute some statistics based on 
the levels resulting from the combination of the first two columns

e.g.:
SUM->
V1  V2  V3
a   x   6
a   y   8
b   z   16
Thanks for your hints .
Marc
?tapply and ?aggregate are two ways, with aggregate giving you something 
that more closely resembles what you asked for:

> a <- factor(c("a","a","a","b"))
> b <- factor(c("x","x","y","x"))
> c <- c(2,4,8,16)
> abc <- data.frame(a, b, c)
> abc
  a b  c
1 a x  2
2 a x  4
3 a y  8
4 b x 16
> tapply(abc$c, list(abc$a, abc$b), sum)
   x  y
a  6  8
b 16 NA
> aggregate(abc$c, list(abc$a, abc$b), sum)
  Group.1 Group.2  x
1   a   x  6
2   b   x 16
3   a   y  8
HTH
Gavin
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson [T] +44 (0)20 7679 5522
ENSIS Research Fellow [F] +44 (0)20 7679 7565
ENSIS Ltd.  ECRC [E] [EMAIL PROTECTED]
UCL Department of Geography   [W] http://www.ucl.ac.uk/~ucfagls/cv/
26 Bedford Way[W] http://www.ucl.ac.uk/~ucfagls/
London.  WC1H 0AP.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] error during make of R-patched on Fedora core 2

2004-06-08 Thread Marc Schwartz
On Tue, 2004-06-08 at 06:23, Prof Brian Ripley wrote:
 On Tue, 8 Jun 2004, Gavin Simpson wrote:
 
  Marc Schwartz wrote:
   On Mon, 2004-06-07 at 15:51, Gavin Simpson wrote:
   
   snip

snip

   
   
   Perhaps I am being dense, but in reviewing the two documents (R Admin
   and the CRAN sources page), I think that the only thing lacking is a
   description on the CRAN page of the manual download option for the Rec
   packages.

snip

  
  Yes, but having downloaded the contents of that directory (as VERSION 
  indicated that R-patched was 1.9.1 alpha), the links to the source files 
  for the Recommended packages are not present (obviously). And make 
  doesn't seem to work without these links. The rsync approach places the 
  package sources *and* the links in the correct directory.

Yep. I was being dense. Missed the symlink part of the process. My
error.

I also missed the venus transit this morning due to clouds...  :-(

  So the instructions in the Admin manual are lacking a statement that you 
  need to create links to each of the package sources in the following 
  form name-of-package.tgz which links to name-of-package_version.tar.gz. 
  As it stands, the instructions in the Installation & Admin manual are 
  not sufficient to get the manual download method to work.
 
 You need to run tools/link-recommended.  I've added that to R-admin.

Should Fritz also add that to the CRAN 'R Sources' page so that both
locations are in synch procedurally?

   Procedurally, I think that the rsync approach is substantially easier
   (one step instead of multiple downloads) and certainly less error prone.
   Also the ./tools/rsync-recommended script is set up to pick up the
   proper package versions, which also helps to avoid conflicts.
  
  I agree - being a bit of a Linux newbie, I hadn't used rsync before. 
  Seeing how easy it was to use this method of getting the required 
  sources I will be using this method in future.
 
 rsync is great, *provided* you have permission to use the ports it uses.  
 Users with http proxies often do not, hence the description of the manual 
 method.  During alpha/beta periods, we do make a complete tarball 
 available, and I wonder if we should not be doing so with 
 R-patched/R-devel at all times.

Good point on rsync. Perhaps another option to consider/suggest (though
it might complicate things) is to use wget. Since wget supports proxy
servers, etc. and can use http, it might be an alternative for folks.

The wget command syntax (assuming that your working dir is the main R
source dir) would be:

wget -r -l1 --no-parent -A*.gz -nd -P src/library/Recommended
http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended

The above _should_ be on one line, but of course will wrap here. There
should be a space between the two lines. The above will copy the tar
files (-A*.gz) from the server (-r -l1 --no-parent) to the appropriate
'Recommended' directory (-P), without recreating the source server's
tree (-nd).

One could refer the reader to 'man wget' or
http://www.gnu.org/software/wget/wget.html for further information on
how to use wget behind proxies and related issues.

You would then of course run the ./tools/link-recommended script to
create the symlinks, followed by ./configure and make.

HTH,

Marc

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] GLMM(..., family=binomial(link=cloglog))?

2004-06-08 Thread Douglas Bates
Spencer Graves [EMAIL PROTECTED] writes:

 Another GLMM/glmm problem:  I simulate rbinom(N, 100, pz),
 where logit(pz) = rnorm(N).  I'd like to estimate the mean
 and standard deviation of logit(pz).  I've tried GLMM{lme4},
 glmmPQL{MASS}, and glmm{Jim Lindsey's repeated}.  In several
 replicates of this for N = 10, 100, 500, etc., my glmm call
 produced estimates of the standard deviation of the random
 effect in the range of 0.6 to 0.8 (never as high as the 1
 simulated).  Meanwhile, my calls to GLMM produced estimates
 between 1e-12 and 1e-9, while the glmmPQL results tended to
 be closer to 0.001, though it gave one number as high as
 0.7.  (I'm running R 1.9.1 alpha, lme4 0.6-1 under Windows
 2000)
 
 
 Am I doing something wrong, or do these results suggest bugs
 in the software or deficiencies in the theory or ... ?
 
 
 Consider the following:
 
   set.seed(1); N <- 10
   z <- rnorm(N)
   pz <- inv.logit(z)
   DF <- data.frame(z=z, pz=pz, y=rbinom(N, 100, pz)/100, n=100,
   smpl=factor(1:N))
 
   GLMM(y~1, family=binomial, data=DF, random=~1|smpl, weights=n)
 Generalized Linear Mixed Model

Check the observed proportions in the data and see if they apparently
vary enough to be able to expect to estimate a random effect.

It is entirely possible to have the MLE of a variance component be
zero.

Another thing to do is to check the convergence.  Use

GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n, 
 control = list(EMv=TRUE, msV=TRUE))

or 

GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n, 
 control = list(EMv=TRUE, msV=TRUE, opt = 'optim'))

You will see that both optimizers push the precision of the random
effects to very large values (i.e. the variance going to zero) in the
second of the penalized least squares steps.

I think that this is a legitimate optimum for the approximate
problem.  It may be an indication that the approximate problem is not
the best one to use.  As George Box would tell us,

 You have a big approximation and a small approximation.  The big
 approximation is your approximation to the problem you want to
 solve.  The small approximation is involved in getting the solution
 to the approximate problem.

For this case, even if I turn off the PQL iterations and go directly
to the Laplacian approximation I still get a near-zero estimate of the
variance component.  You can see the gory details with

GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n,
 control = list(EMv=TRUE, msV=TRUE, glmmMaxIter = 1), method = 'Laplace')

I am beginning to suspect that for these data the MLE of the variance
component is zero.

 So far, the best tool I've got for this problem is a normal
 probability plot of a transform of the binomial responses
 with Monte Carlo confidence bands, as suggested by Venables
 and Ripley, S Programming and Atkinson (1985).  However, I
 ultimately need to estimate these numbers.

I think the most reliable method of fitting this particular form of a
GLMM is using adaptive Gauss-Hermite quadrature to evaluate the
marginal likelihood and optimizing that directly.  In this model the
marginal likelihood is a function of two parameters.  If you have
access to SAS I would try to fit these data with PROC NLMIXED and see
what that does.  You may also be able to use Goran Brostrom's package
for R on this model.  As a third option you could set up evaluation of
the marginal likelihood using either the Laplacian approximation to
the integral or your own version of adaptive Gauss-Hermite and look at
the contours of the marginal log-likelihood.
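
A rough sketch of that last option, substituting integrate() for adaptive Gauss-Hermite (all names here are invented; plogis() stands in for inv.logit, and sigma is parameterized on the log scale to keep it positive):

```r
# marginal log-likelihood of y_i ~ Binomial(n_i, plogis(mu + sigma*z)),
# z ~ N(0,1), integrating z out numerically for each observation
marg.ll <- function(pars, y, n) {
  mu <- pars[1]; sigma <- exp(pars[2])
  sum(log(mapply(function(yi, ni)
    integrate(function(z)
      dbinom(yi, ni, plogis(mu + sigma * z)) * dnorm(z),
      lower = -Inf, upper = Inf)$value,
    y, n)))
}

# simulate data in the same spirit as the thread, then optimize directly
set.seed(1)
z   <- rnorm(10)
y   <- rbinom(10, 100, plogis(z))
n   <- rep(100, 10)
fit <- optim(c(0, 0), function(p) -marg.ll(p, y, n))
c(mu = fit$par[1], sigma = exp(fit$par[2]))
```

Contour plots of marg.ll over a (mu, log sigma) grid would then show whether the maximum really sits at sigma near zero.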

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problems compiling Rd-Files

2004-06-08 Thread Bernd Weiss
Dear all,

there has been a thread on missing Rd.sty, when converting Rd-files 
to DVI-files, see 
http://maths.newcastle.edu.au/~rking/R/help/03b/8179.html. 

This problem could be solved by editing Rd2dvi.sh manually, but then 
I got the following error message:

---[error]---
 LaTeX Error: Command \middle already defined.
   Or name \end... illegal, see p.192 of the manual.

See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
 ...  
  
l.45 \newlength{\middle}

! Missing $ inserted.
<inserted text>
.
.
.
for the full message see http://www.metaanalyse.de/rd-error.txt
---[error]---

Again, there has been a thread on this topic at 
http://tolstoy.newcastle.edu.au/R/help/04/03/1707.html. 

Has anybody encountered the same problem? 

I am using R1.9 on a Windows 2000 system and a fully updated MikTeX. 

Any help will be appreciated!

Bernd

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] GLMM(..., family=binomial(link=cloglog))?

2004-06-08 Thread Spencer Graves
Hi, Peter: 

 Thanks.  The help page on GLMM in lme4 0.6-1 2004/05/31 mentions 
GLMM(formula, family, data, random, ...) with additional arguments 
subset, method, na.action, control, and model, x logicals.  I may try 
reading the source code. 

 On the other hand, my need is sufficiently specialized that I may 
just program the log(likelihood) for that specific model.  Then I can 
make contour and perspective plots of the log(likelihood) surface to 
examine thereby the adequacy of Wald's approximation for different 
parameterizations, as well as feed it to optim for estimation.  [My 
application requires binomial(link="cloglog"), not "logit";  I used 
"logit" in the simulation below, because it is more commonly known and 
understood.] 

 Best Wishes,
 Spencer Graves
Peter Dalgaard wrote:
Spencer Graves [EMAIL PROTECTED] writes:
 

Data: DF
 log-likelihood:  -55.8861
Random effects:
 Groups NameVariance   Std.Dev.
 smpl   (Intercept) 1.7500e-12 1.3229e-06
Estimated scale (compare to 1)  3.280753
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.148271   0.063419  2.3379  0.01939
Number of Observations: 10
Number of Groups: 10
   

The only information you have about sigma is in the overdispersion
of y, so you probably cannot both have a scale parameter and a random
effect of sample in there. Doug did say something about fixing the
scale in GLMM, but I forget whether he had implemented it or planned
to...
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] GLMM(..., family=binomial(link=cloglog))?

2004-06-08 Thread Spencer Graves
Hi, Doug: 

 Thanks.  I'll try the things you suggest.  The observed 
proportions ranged from roughly 0.2 to 0.8 in 100 binomial random 
samples where sigma is at most 0.05.  Jim Lindsey's glmm does 
Gauss-Hermite quadrature, but I don't know if it bothers with the 
adaptive step.  With it, I've seen estimates of the variance component 
ranging from 0.4 to 0.7 or so.  Since I simulated a normal with mean 0 
and standard deviation 1, the algorithm was clearly underestimating what was 
simulated.  My next step, I think, is to program adaptive Gauss-Hermite 
quadrature for something closer to my real problem (as you just 
suggested), and see what I get. 

 You mentioned the little vs. big approximations:  My real 
application involves something close to a binomial response driven by 
Poisson defects, where the Poisson defect rate is not constant.  I've 
shown that it can make a difference whether the defect rate is lognormal 
or gamma, so that is another complication and another reason to write my 
own log(likelihood).  I've thought about writing my own function to do 
adaptive Gauss-Hermite quadrature, as you suggested, but decided to 
check more carefully the available tools before I jumped into my own 
software development effort. 

 Thanks again.
 Spencer Graves

Douglas Bates wrote:
Spencer Graves [EMAIL PROTECTED] writes:
 

  Another GLMM/glmm problem:  I simulate rbinom(N, 100, pz),
  where logit(pz) = rnorm(N).  I'd like to estimate the mean
  and standard deviation of logit(pz).  I've tried GLMM{lme4},
  glmmPQL{MASS}, and glmm{Jim Lindsey's repeated}.  In several
  replicates of this for N = 10, 100, 500, etc., my glmm call
  produced estimates of the standard deviation of the random
  effect in the range of 0.6 to 0.8 (never as high as the 1
  simulated).  Meanwhile, my calls to GLMM produced estimates
  between 1e-12 and 1e-9, while the glmmPQL results tended to
  be closer to 0.001, though it gave one number as high as
  0.7.  (I'm running R 1.9.1 alpha, lme4 0.6-1 under Windows
  2000)
  Am I doing something wrong, or do these results suggest bugs
  in the software or deficiencies in the theory or ... ?
  Consider the following:
  set.seed(1); N <- 10
  z <- rnorm(N)
  pz <- inv.logit(z)
  DF <- data.frame(z=z, pz=pz, y=rbinom(N, 100, pz)/100, n=100,
  smpl=factor(1:N))
 GLMM(y~1, family=binomial, data=DF, random=~1|smpl, weights=n)
Generalized Linear Mixed Model
   

Check the observed proportions in the data and see if they apparently
vary enough to be able to expect to estimate a random effect.
It is entirely possible to have the MLE of a variance component be
zero.
Another thing to do is to check the convergence.  Use
GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n, 
control = list(EMv=TRUE, msV=TRUE))

or 

GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n, 
control = list(EMv=TRUE, msV=TRUE, opt = 'optim'))

You will see that both optimizers push the precision of the random
effects to very large values (i.e. the variance going to zero) in the
second of the penalized least squares steps.
I think that this is a legitimate optimum for the approximate
problem.  It may be an indication that the approximate problem is not
the best one to use.  As George Box would tell us,
You have a big approximation and a small approximation.  The big
approximation is your approximation to the problem you want to
solve.  The small approximation is involved in getting the solution
to the approximate problem.
For this case, even if I turn off the PQL iterations and go directly
to the Laplacian approximation I still get a near-zero estimate of the
variance component.  You can see the gory details with
GLMM(y ~ 1, family = binomial, random = ~1|smpl, weights = n,
control = list(EMv=TRUE, msV=TRUE, glmmMaxIter = 1), method = 'Laplace')
I am beginning to suspect that for these data the MLE of the variance
component is zero.
 

	  So far, the best tool I've got for this problem is a normal
	  probability plot of a transform of the binomial responses
	  with Monte Carlo confidence bands, as suggested by Venables
	  and Ripley, S Programming and Atkinson (1985).  However, I
	  ultimately need to estimate these numbers.
   

I think the most reliable method of fitting this particular form of a
GLMM is using adaptive Gauss-Hermite quadrature to evaluate the
marginal likelihood and optimizing that directly.  In this model the
marginal likelihood is a function of two parameters.  If you have
access to SAS I would try to fit these data with PROC NLMIXED and see
what that does.  You may also be able to use Goran Brostrom's package
for R on this model.  As a third option you could set up evaluation of
the marginal likelihood using either the Laplacian approximation to
the integral or your own version of adaptive Gauss-Hermite and look at
the contours of the marginal log-likelihood.

[R] George Box quote.

2004-06-08 Thread Rolf Turner

Doug Bates wrote:

 As George Box would tell us,
 
  You have a big approximation and a small approximation.  The big
  approximation is your approximation to the problem you want to
  solve.  The small approximation is involved in getting the solution
  to the approximate problem.

I asked Prof. Bates if he had a source or citation for that quote.
(I love it, and would like to make use of it.)  He said no.  Can
anyone out there give me a reference to it?

Thanks.
cheers,

Rolf Turner
[EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] SJAVA error

2004-06-08 Thread Junko Yano
Hi

I'm trying to use SJava and I have troubles. 
I tried to run the examples from "Calling R from Java",
but I get the error 
"fatal error: unable to open the base package"

I heard of the SJava bug,
so could you send me your compiled SJava package with the modified 
REmbed.c, because 
on Windows I'm not able to recompile!!!

--example
package org.omegahat.R.Java;

public class REvalSample {
	public static void main(String[] args) {
		String[] rargs = { "--slave", "--vanilla" };

		System.out.println("Program calling R from Java");

		ROmegahatInterpreter interp =
			new ROmegahatInterpreter(
				ROmegahatInterpreter.fixArgs(rargs),
				false);
		REvaluator e = new REvaluator();

		Object val = e.eval("x <- sin(seq(0, 2*pi, length=30))");
		val = e.eval("x * 2.0");

		if (val != null) {
			double[] objects = (double[]) val;
			for (int i = 0; i < objects.length; i++) {
				System.err.println("(" + i + ") " + objects[i]);
			}
		}
	}
}
-

Thank you 


Junko Yano
E-mail : [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Is there an R-version of rayplot

2004-06-08 Thread richard . kittler
I need to make plots similar to those produced by the S-PLUS rayplot function but 
can't seem to find it in R.  These 'vector maps' plot a ray or vector at each 
specified location. Is there something similar in R ? 
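
For reference, a rough emulation with base graphics arrows() (locations, directions, and lengths simulated here, not taken from any real dataset):

```r
# ray/vector map: one arrow per location, with per-point direction and length
set.seed(42)
x   <- runif(20); y <- runif(20)     # locations
ang <- runif(20, 0, 2 * pi)          # ray directions (radians)
len <- runif(20, 0.03, 0.10)         # ray lengths
plot(x, y, pch = 16, cex = 0.6, asp = 1)
arrows(x, y, x + len * cos(ang), y + len * sin(ang), length = 0.05)
```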

--Rich

Richard Kittler 
AMD TDG
408-749-4099

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] interaction plot with intervals based on TukeyHSD

2004-06-08 Thread Manuel López-Ibáñez
Hi,

The problem is that I would like to do an interaction plot with 
intervals based on Tukey's honestly significant difference (HSD) 
procedure, but I do not know how to do it in R.

I have 3 factors A, B and C and a response variable response.
I would like to study a model where there are main effects and second 
order interaction effects.

For instance, for factors B and C, I would like to plot a line for each 
level of C connecting the least square means for every level of B. 
Additionally, I would like to include the 95.0% HSD intervals for
the means in such a way that any two intervals which do not overlap 
correspond to a
pair of means which have a statistically significant difference.

For comparison, in STATGRAPHICS Plus, I choose Analysis of Variance -> 
Multifactor ANOVA...
Then I plot the second order interactions and I have the option to plot 
intervals. I choose to plot Tukey HSD intervals at 95% and I obtain an 
interaction plot like using interaction.plot on R, but addittionally an 
interval is plotted on every point.

How can I do this on R?

I have already looked in Google, in the mail archive and in some books, 
but I didn't find the answer.

I tried to calculate the intervals using:

tk <- TukeyHSD(aov(X$response ~ (factor(X$A) + factor(X$B) + 
factor(X$C))^2, data=X), conf.level=0.95)

HSDfactor1 <- 
max(abs(tk$"factor(X$A):factor(X$B)"[,2]-tk$"factor(X$A):factor(X$B)"[,3]))
HSDfactor2 <- 
max(abs(tk$"factor(X$A):factor(X$C)"[,2]-tk$"factor(X$A):factor(X$C)"[,3]))
HSDfactor3 <- 
max(abs(tk$"factor(X$B):factor(X$C)"[,2]-tk$"factor(X$B):factor(X$C)"[,3]))


And I modified the function interaction.plot() adding the following 
lines of code:

...

++ ylim <- c(min(cells)-(HSDfactor*0.5), max(cells)+(HSDfactor*0.5))

  matplot(xvals, cells, ..., type = type,  xlim = xlim, ylim = ylim,
  xlab = xlab, ylab = ylab, axes = axes, xaxt = "n",
  col = col, lty = lty, pch = pch)

++  ly <- cells[,1]+(HSDfactor*0.5)
++  uy <- cells[,1]-(HSDfactor*0.5)

++  errbar(xvals,cells[,1],ly,uy,add=TRUE, lty=3, cap=0, lwd=2)

++  ly <- cells[,2]+(HSDfactor*0.5)
++  uy <- cells[,2]-(HSDfactor*0.5)

++  errbar(xvals,cells[,2],ly,uy,add=TRUE, lty=3, cap=0, lwd=2)

 if(legend) {
yrng <- diff(ylim)
yleg <- ylim[2] - 0.1 * yrng

.

Finally, I call this modified function as:

interaction.plot2(factor(X$A), factor(X$B), response, las=3, type="b", 
HSDfactor = HSDfactor1, lwd=3)


However, the resulting intervals are much bigger than the ones 
calculated by STATGRAPHICS, thus I think I did something wrong.

I am not an expert in statistics or in R, so if anyone has any 
suggestion...
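[One possible source of the discrepancy, sketched here with simulated data rather than Manuel's X: max(abs(lwr - upr)) is the full width of Tukey's interval for a *difference* of two means, whereas bars drawn around the individual means only need half of that interval's half-width to have the non-overlap property, so the bars above come out roughly twice as wide as STATGRAPHICS's.]

```r
## Sketch with simulated data: each component of a TukeyHSD object is a
## matrix with columns "diff", "lwr", "upr", "p adj".  upr - diff is the
## half-width of the interval for a difference of two means; bars of
## half-width (upr - diff)/2 around the means overlap exactly when that
## difference is not significant.
X <- data.frame(A = gl(2, 20), B = gl(2, 10, 40), C = gl(2, 5, 40),
                response = rnorm(40))
tk <- TukeyHSD(aov(response ~ (A + B + C)^2, data = X), conf.level = 0.95)
ab <- tk$`A:B`                              # the A:B interaction table
h  <- max(ab[, "upr"] - ab[, "diff"]) / 2   # half-width for the mean bars
```

Using h in place of HSDfactor*0.5 halves the bar width, which matches the direction of the discrepancy.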

Thank you very much,

Manuel.




[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] clustalw

2004-06-08 Thread [EMAIL PROTECTED]
Hi, 
I'm using the function clustalw in the package dna, but every time I get a 
segmentation fault!
In your opinion, what is the problem?  Memory?
Please help me
Daniela

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] clustalw

2004-06-08 Thread A.J. Rossini

Might be a compilation problem, or change of system libraries.


[EMAIL PROTECTED] [EMAIL PROTECTED] writes:

 Hi, 
 I'm using the function clustalw in packages dna, but every time i have a 
 segmentation fault!
 In your opinion What is the problem?Memory?
 Please help me
 Daniela



-- 
[EMAIL PROTECTED]http://www.analytics.washington.edu/ 
Biomedical and Health Informatics   University of Washington
Biostatistics, SCHARP/HVTN  Fred Hutchinson Cancer Research Center
UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable
FHCRC  (M/W): 206-667-7025 FAX=206-667-4812 | use Email

CONFIDENTIALITY NOTICE: This e-mail message and any attachme...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] clustalw

2004-06-08 Thread Wolski
Hi Daniela!

Where did you get this package?  I haven't even found it on CRAN, and it 
is not a Bioconductor package either; help.search("clustalw") turns up 
nothing.

In any case it is better to contact the package provider directly.

Sincerely Eryk


*** REPLY SEPARATOR  ***

On 6/8/2004 at 6:38 PM [EMAIL PROTECTED] wrote:

Hi, 
I'm using the function clustalw in packages dna, but every time i have a
segmentation fault!
In your opinion What is the problem?Memory?
Please help me
Daniela




Dipl. bio-chem. Eryk Witold Wolski@MPI-Moleculare Genetic   
Ihnestrasse 63-73 14195 Berlin   'v'
tel: 0049-30-83875219   /   \
mail: [EMAIL PROTECTED]---W-Whttp://www.molgen.mpg.de/~wolski

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to Describe R to Finance People

2004-06-08 Thread Tony Plate
At Monday 07:58 PM 6/7/2004, Richard A. O'Keefe wrote:
[snip]
There are three perspectives on programming languages like the S/R family:
(1) The programming language perspective.
I am sorry to tell you that the only excuse for R is S.
R is *weird*.  It combines error-prone C-like syntax with data structures
that are APL-like but not sufficiently APL-like to have behaviour that
is easy to reason about.  The scope rules (certainly the scope rules for
S) were obviously designed by someone who had a fanatical hatred of
compilers and wanted to ensure that the language could never be usefully
compiled.
What in particular about the scope rules for S makes it tough for 
compilers?  The scope for ordinary variables seems pretty straightforward 
-- either local or in one of several global locations.  (Or are you 
referring to the feature of the get() function that it can access variables 
in any frame?)


  Thanks to 'with' the R scope rules are little better.  The
fact that (object)$name returns NULL instead of reporting an error when
the object doesn't _have_ a $name property means that errors can be
delayed to the point where debugging is harder than it needs to be.
Yup, that's why I proposed (and provided an implementation of) an 
alternative $$ operator that did report an error when object$$name didn't 
have a name component (and also didn't allow abbreviation), but there was 
no interest shown in incorporating this into R.

-- Tony Plate
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] bootstrap: stratified resampling

2004-06-08 Thread Ramon Diaz-Uriarte
Dear All,

I was writing a small wrapper to bootstrap a classification algorithm, but if 
we generate the indices in the usual way as:

bootindex <- sample(index, N, replace = TRUE)

there is a non-zero probability that all the samples belong to only 
one class, thus leading to problems in the fitting (or that some classes will 
end up with only one sample, which will be a problem for quadratic 
discriminant analysis).
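[A minimal sketch of the stratified alternative, resampling indices within each class so that every class keeps its original sample size; the function name is my own:]

```r
## Sketch: stratified bootstrap indices.  split() groups the row indices
## by class label, each group is resampled with replacement, and the
## pieces are concatenated, so no class can vanish from a bootstrap sample.
strat.boot <- function(classes) {
  unlist(lapply(split(seq_along(classes), classes),
                function(ix) ix[sample.int(length(ix), length(ix),
                                           replace = TRUE)]),
         use.names = FALSE)
}
## e.g. bootindex <- strat.boot(y)  # y a factor of class labels
```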

I thought this situation should be frequent enough to be mentioned in the 
literature, but I have found almost no mention of it in the references I have 
available, except for Hirst (see below). If I've reread correctly, this issue 
is not mentioned in Efron & Tibshirani (1997; the .632+ paper), or in Efron 
and Gong (the TAS "leisurely look" paper), or the Efron & Tibshirani 1993 
bootstrap book, or Chernick's bootstrap methods book. I've only seen some 
side mentions in Ripley's Pattern Recognition (when talking about stratified 
cross-validation), and in Davison & Hinkley's bootstrap book when, on p. 304, 
they refer to some subsets having singular design matrices, and thus 
requiring stratification on covariates. McLachlan (in his discriminant analysis 
book), on p. 347, differentiates between mixture sampling and separate 
sampling, but I cannot find a mention of what to do when, under mixture 
sampling, we end up with all samples in only one group.

Only Hirst (1996, Technometrics, 38(4): 389-399) says that each bootstrap 
sample should include at least one observation from each group, and at least 
enough different observations from each group to allow estimation of the 
covariance matrix (he is referring to discriminant analysis), and thus he 
uses essentially stratified bootstrap samples.

Interestingly, the boot function (boot library) says "For nonparametric 
multi-sample problems stratified resampling is used."  Likewise, 
predab.resample (Design library) says "group: a grouping variable used to 
stratify the sample upon bootstrapping.  This allows one to handle k-sample 
problems, (...)".

That the authors of boot and Design use stratified resampling suggests 
to me that this might be the obvious, unproblematic way to go, but I 
understood that stratified resampling was OK only when that was the sampling 
scheme that generated the data.

What am I missing?

Thanks,

R.


-- 
Ramón Díaz-Uriarte
Bioinformatics Unit
Centro Nacional de Investigaciones Oncológicas (CNIO)
(Spanish National Cancer Center)
Melchor Fernández Almagro, 3
28029 Madrid (Spain)
Fax: +-34-91-224-6972
Phone: +-34-91-224-6900

http://bioinfo.cnio.es/~rdiaz
PGP KeyID: 0xE89B3462
(http://bioinfo.cnio.es/~rdiaz/0xE89B3462.asc)

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] off topic publication question

2004-06-08 Thread Erin Hodgess
Dear R People:

Please excuse the off topic question, but I
know that I'll get a good answer here.


If a single author is writing a journal article,
should she use "We performed a test"
or "I performed a test",
please?

I had learned to use "we" without regard to the number
of authors.  Is that true, please?

Thanks for the off topic help.

Sincerely,
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] clustalw

2004-06-08 Thread A.J. Rossini

It's on Jim Lindsey's WWW site.  It's a nice package.  We've got a
somewhat incomplete extension that incorporates BLAST results
(from NCBI, though it could be configured for local use) in a package
called BioSeq1 on Bioconductor's dev site -- I don't think that the CVS
viewer is available, but I could provide a tarball for builds (it might
need a bit of hacking in its current state) if there is interest.

Email me privately if so.

best,
-tony


Wolski [EMAIL PROTECTED] writes:

 Hi Daniela!

From where you got this package?
 I even havent found it at cran.
 It is not a bioconductor package either.
 and
 search.help(clustalW)

 In any case it is better to contact the package provider directly.

 Sincerely Eryk


 *** REPLY SEPARATOR  ***

 On 6/8/2004 at 6:38 PM [EMAIL PROTECTED] wrote:

Hi, 
I'm using the function clustalw in packages dna, but every time i have a
segmentation fault!
In your opinion What is the problem?Memory?
Please help me
Daniela




 Dipl. bio-chem. Eryk Witold Wolski@MPI-Moleculare Genetic   
 Ihnestrasse 63-73 14195 Berlin   'v'
 tel: 0049-30-83875219   /   \
 mail: [EMAIL PROTECTED]---W-Whttp://www.molgen.mpg.de/~wolski

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
[EMAIL PROTECTED]http://www.analytics.washington.edu/ 
Biomedical and Health Informatics   University of Washington
Biostatistics, SCHARP/HVTN  Fred Hutchinson Cancer Research Center
UW (Tu/Th/F): 206-616-7630 FAX=206-543-3461 | Voicemail is unreliable
FHCRC  (M/W): 206-667-7025 FAX=206-667-4812 | use Email

CONFIDENTIALITY NOTICE: This e-mail message and any attachme...{{dropped}}

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] scoping rules

2004-06-08 Thread joerg van den hoff
is there a good way to get the following fragment to work when calling 
it as wrapper(1) ?

# cut here==
wrapper <- function (choose=0)
{
  x <- seq(0, 2*pi, len=100)
  y <- sin(1.5*x)
  y <- rnorm(y, y, .1*max(y))
  if (choose==0) {
     rm(fifu, pos=1)
     fifu <- function(w,x) {sin(w*x)}
  }
  else
     assign('fifu', function(w,x) {sin(w*x)}, .GlobalEnv)
  res <- nls(y ~ fifu(w,x), start=list(w=1))
  res
}
# cut here==
I understand, the problem is that  the scoping rules are such that nls 
does not resolve 'fifu' in the parent environment, but rather in the 
GlobalEnv. (this is different for the data, which *are* taken from the 
parent environment of the nls-call).

The solution of assigning 'fifu' directly into the GlobalEnv (which 
happens when calling wrapper(1)) does obviously work, but leads to the 
undesirable effect of accumulating objects in the workspace which are 
not needed there (and might overwrite existing ones).

so: is there a way to enforce that nls takes the model definition from 
the parent environment together with the data?
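[One hedged possibility, not tested against 2004-era R: nls() resolves names through the environment of the formula, and a formula created inside the wrapper carries the wrapper's frame as its environment, so defining fifu locally, without the rm()/assign() detour, may already be enough:]

```r
## Sketch: fifu lives only in wrapper2()'s frame; the formula
## y ~ fifu(w, x) is created in that same frame, so nls() can resolve
## fifu through the formula's environment and .GlobalEnv stays clean.
wrapper2 <- function() {
  x <- seq(0, 2*pi, len = 100)
  y <- sin(1.5*x)
  y <- rnorm(y, y, .1*max(y))
  fifu <- function(w, x) sin(w*x)
  nls(y ~ fifu(w, x), start = list(w = 1))
}
```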

joerg
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] binary data

2004-06-08 Thread Moises Hassan
What's the preferred way in R of handling samples with binary data
(like chemical fingerprints encoded as hexadecimal strings, with 0's and
1's indicating the absence or presence of chemical features) in methods
such as clustering and MDS? Do you always have to expand the fingerprint
data into individual variables (which can be a few hundred), or can they
be used directly as binary data by some of these methods?
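[For what it's worth, a base-R sketch: once the hexadecimal fingerprints have been decoded into a 0/1 matrix (one row per compound), dist() computes a Jaccard-type dissimilarity directly with method = "binary", and both hclust() and cmdscale() accept the resulting "dist" object:]

```r
## Sketch with simulated fingerprints: 50 compounds x 200 binary features
fp <- matrix(rbinom(50 * 200, 1, 0.3), nrow = 50)
d  <- dist(fp, method = "binary")   # Jaccard-type distance on the 0/1 rows
hc <- hclust(d)                     # hierarchical clustering
xy <- cmdscale(d, k = 2)            # classical MDS into 2 dimensions
```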

 

Thanks,  Moises

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Comparing two pairs of non-normal datasets in R?

2004-06-08 Thread Peter Sebastian Masny
Hi all,

I'm using R to analyze some research and I'm not sure which test would be 
appropriate for my data.  I was hoping someone here might be able to help.

Short version:
Evaluate the null hypothesis that the change A1 -> A2 is similar to the 
change C1 -> C2, for continuous, non-normal datasets.


Long version:

I have two populations A and C.  I take a measurement on samples of these 
populations before and after a process.  So basically I have:
A1 - sample of A before process
A2 -  sample of A after process
C1 - sample of C (control) before process
C2 - sample of C (control) after process

The data is continuous and I have about 100 measurements in each dataset.  
Also, the data is not normally distributed (more like a Poisson).

By Wilcoxon rank sum, A1 is significantly different from A2 and C1 is 
different from C2.

Here is the problem:
C1 is only slightly different from C2 (Wilcoxon, p < .02), while A1 is more 
noticeably different from A2 (p < 1E-22).  What I would like to do is assume 
that the changes seen in C are typical, and evaluate the changes in A 
relative to the changes in C (i.e. are the changes greater?).

Any thoughts?



Thanks,
Peter Masny

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] error during make of R-patched on Fedora core 2

2004-06-08 Thread Peter Dalgaard
Marc Schwartz [EMAIL PROTECTED] writes:

 wget -r -l1 --no-parent -A*.gz -nd -P src/library/Recommended
 http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended
 
 The above _should_ be on one line, but of course will wrap here. There
 should be a space " " between the two lines.

Kids these days... Make that

wget -r -l1 --no-parent -A*.gz -nd -P src/library/Recommended \
 http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] error during make of R-patched on Fedora core 2

2004-06-08 Thread Marc Schwartz
On Tue, 2004-06-08 at 12:40, Peter Dalgaard wrote:
 Marc Schwartz [EMAIL PROTECTED] writes:
 
  wget -r -l1 --no-parent -A*.gz -nd -P src/library/Recommended
  http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended
  
  The above _should_ be on one line, but of course will wrap here. There
  should be a space " " between the two lines.
 
 Kids these days... Make that
 
 wget -r -l1 --no-parent -A*.gz -nd -P src/library/Recommended \
  http://www.cran.mirrors.pair.com/src/contrib/1.9.1/Recommended

LOL

Thanks Dad  ;-)

Marc

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] overlaying text() onto image() or filled.contour()

2004-06-08 Thread Laura Quinn
I am trying to add some markers onto a contour map image I have created,
by using the text() function when I have already produced the map using
either image() or filled.contour(). For some reason the points appear to
be shifted considerably to the right of where they should be appearing,
despite me using exactly the same co-ordinate systems for both. This
offset is also dependent on the aspect ratio I use.

The map I am looking at is around 3.5x larger in height than width and I
need to maximise this in an X window. If I simply use asp=1 the map is
pretty unreadable, is there a way I can drastically reduce the margin
sizes perhaps? The par() commands I've tried haven't made an appreciable
difference - and I'm still very puzzled as to why my data markers are
appearing in the wrong place... any suggestions?

Thanks,
Laura

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Comparing two pairs of non-normal datasets in R?

2004-06-08 Thread Spencer Graves
 Have you considered qqplot(A1, A2) and qqplot(C1, C2)?  If A1, 
A2, C1, C2 are more like Poisson, I might try qqplot(sqrt(A1), 
sqrt(A2)), etc.:  without the sqrt, the image might be excessively 
distorted by the largest values, at least in my experience. 
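[Concretely, that amounts to a sketch like the following, with simulated stand-in data:]

```r
## Sketch: Q-Q comparison on the square-root scale (A1, A2 stand in for
## the before/after sample vectors)
A1 <- rgamma(100, 2); A2 <- rgamma(100, 3)   # simulated stand-in data
qqplot(sqrt(A1), sqrt(A2))
abline(0, 1)   # departures from this line show distributional change
```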

 hope this helps.  spencer graves
Peter Sebastian Masny wrote:
Hi all,
I'm using R to analyze some research and I'm not sure which test would be 
appropriate for my data.  I was hoping someone here might be able to help.

Short version:
Evaluate null hypothesis that change A1-A2 is similar to change C1-C2, for 
continuous, non-normal datasets.

Long version:
I have two populations A and C.  I take a measurement on samples of these 
populations before and after a process.  So basically I have:
A1 - sample of A before process
A2 -  sample of A after process
C1 - sample of C (control) before process
C2 - sample of C (control) after process

The data is continuous and I have about 100 measurements in each dataset.  
Also, the data is not normally distributed (more like a Poisson).

By Wilcoxon Rank Sum, A1 is significantly different than A2 and C1 is 
different than C2.

Here is the problem:
C1 is only slightly different than C2 (Wilcoxon, p.02), while A1 is more 
noticeably different than A2 (p1E-22).  What I would like to do is assume 
that the changes seen in C are typical, and evaluate the changes in A 
relative to the changes in C (i.e. are the changes greater?).

Any thoughts?

Thanks,
Peter Masny

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Lazy Evaluation?

2004-06-08 Thread John Chambers
No, not lazy evaluation.  The explanation has to do with environments
and how callGeneric works.

It's an interesting (if obscure) example.

Here's the essence of it, embedded in some comments on debugging and on
style.  (This will be a  fairly long discussion, I'm afraid.)

A useful starting point in debugging is often to look at the objects
involved.  In this case there are two versions of a method that
generates an object containing a function.

In the first version, which works, the object generated in the example
is:

> sinNF
An object of class "NumFunction"
Slot "fun":
function(n) nfun(n)
<environment: 0x2f5c268>

In the second, which gets into an infinite loop, it is:

> sinNF
An object of class "NumFunction"
Slot "fun":
function(n) callGeneric(x@fun(n))
<environment: 0x3ada884>

The first version is totally inscrutable to the user who looks at the
object.

But a little thinking will suggest why the second one fails.  The
`callGeneric' function is meant to be used inside a method definition,
but x@fun isn't a method; it's not obvious what callGeneric should
do, but it probably won't be good.

(It's arguable that it should fail right away because the function being
used is not a generic.  But even legitimate uses of callGeneric can
accidentally get into infinite loops by in effect ending up with the
same method on the same data.)

We can verify the infinite loop by using the trace() function to catch
calls to callGeneric.

> trace(callGeneric, recover)

Then:

> sinNF@fun(sqrt(pi/2))
Tracing callGeneric(x@fun(n)) on entry 

Enter a frame number, or 0 to exit   
1: sinNF@fun(sqrt(pi/2)) 
2: callGeneric(x@fun(n)) 
Selection: 0
Tracing callGeneric(x@fun(n)) on entry 

Enter a frame number, or 0 to exit   
1: sinNF@fun(sqrt(pi/2)) 
2: callGeneric(x@fun(n)) 
3: eval(call, sys.frame(sys.parent())) 
4: eval(expr, envir, enclos) 
5: sinNF@fun(x@fun(n)) 
6: callGeneric(x@fun(n)) 

and so on.

But why does the first version work?  Because that nfun function is
defined inside another function where a local definition of the name of
the generic is stored in its environment.  So the call from nfun to
callGeneric does find the generic (sin in this case).  To figure out
that this happens however, would require a lot of analysis.

This is definitely NOT the sort of arcane knowledge users are expected
to apply.

There was a discussion on this mailing list recently of the recommended
style that could be called functional.  The definition of a function
should make sense on its own and, in particular, should avoid depending
on external objects, particularly external objects that may be changed. 
The methods in the example are very far from this style.

Both versions of sinNF are essentially incomprehensible on their own,
and the one that works more so.  Not meant as a criticism, it was
remarkable that you arrived at a version that did work.

But if possible one would like to get to the same effect with an object
that makes sense.  In this example, it's possible to use some of the
same information to construct an object that says what it does.  I think
you want to compose the function currently in the object with the
function you're calling.  By making the computation a method for the
Math group generic, you get all the math functions to work this way in
one step (clever, though rather unintuitive for the user).

To produce a clearer (and more efficient) version, get the .Generic
evaluated when the object is created; e.g., 

setMethod("Math", 
  "NumFunction",
  function(x){
    f <- function(n){g <- gg; f(g(n))}
    body(f) <- substitute(
      {g <- gg; f(g(n))},
      list(f = as.name(.Generic), gg = x@fun))
    NumFunction(f)
  })

Then the object sinNF makes sense to the user:

> sinNF
An object of class "NumFunction"
Slot "fun":
function (n) 
{
    g <- function (x) 
    x^2
    sin(g(n))
}
<environment: 0x384c8e8>

> sinNF@fun(sqrt(pi/2))
[1] 1

Thomas Stabla wrote:
 
 Hello,
 
 I've stumbled upon following problem, when trying to overload the methods
 for group Math for an S4-class which contains functions as slots.
 
   setClass("NumFunction", representation = list(fun = "function"))
 
   NumFunction <- function(f) new("NumFunction", fun = f)
 
   square <- function(x) x^2
   NF <- NumFunction(square)
 
   setMethod("Math",
             "NumFunction",
             function(x){
                 nfun <- function(n) callGeneric(x@fun(n))
                 tmp <- function(n) nfun(n)
                 NumFunction(tmp)
             })
 
   sinNF <- sin(NF)
   sinNF@fun(sqrt(pi/2))
 
 # works as expected, returns 1
 
 # now a slightly different version of setMethod("Math", "NumFunction",
 # ...), which dispenses with the unnecessary wrapper function tmp()
 
   setMethod("Math",
             "NumFunction",
             function(x){
 

Re: [R] Comparing two pairs of non-normal datasets in R?

2004-06-08 Thread Peter Sebastian Masny
On Tuesday 08 June 2004 10:43 am, you wrote:
 If I understand you correctly, you have two set of ***paired***
 data, one set from the A population, and one from the C population.

 Form the pairwise differences:

   A.diff - A1 - A2
   C.diff - C1 - C2

Alas, they are not paired.  A1 and A2 are samples from the same population, 
but of different members.  Also, the number of measurements is different for 
each dataset.

 Boxplots and histograms of A.diff and C.diff will tell you
 (much more than a test ever would) what's ***really*** going on.

The boxplots I have clearly show the difference, but I need a p value to go 
with it.

Here are the boxplots if that helps:
http://www.ps.masny.dk/guests/misc/A1.png
http://www.ps.masny.dk/guests/misc/A2.png
http://www.ps.masny.dk/guests/misc/C1.png
http://www.ps.masny.dk/guests/misc/C2.png

 P.S. BTW --- you say that your data are continuous, but that their
 distributions are ``more like a Poisson''.  The Poisson distribution
 is DISCRETE!!!

Hence the like.  The data is indeed continuous, but a distribution graph 
increases towards one extreme...

Visually, the results are convincing, but I really need a test of 
significance.



Thank you very much for the help,

Peter

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] vardiag Package and nlregb

2004-06-08 Thread jferrer
Hi everyone,

I'm interested in the analysis of spatial data, and I'm trying out several
R-packages.

Today I was attempting to use the package vardiag (version 0.1):

> library(vardiag)
> rs4.vo <- varobj(rs4[,2:4], trace=2)
[1] 1
Error: couldn't find function "nlregb"

As far as I know, nlregb is an S-PLUS function for optimization, so this
cannot work in R, can it?

How could one create or import an appropriate variogram object to use the
other functions in this package, for example if I estimate the empirical
variogram using geoR?

thanks,

JR Ferrer-Paris

PS: Currently I'm using R 1.9.0 on Linux.


Dipl.-Biol. J.R. Ferrer Paris 
Laboratorio de Biología de Organismos - Centro de Ecología
   Instituto Venezolano de Investigaciones Científicas
 Apartado 21827 - Caracas 1020A
   REPUBLICA BOLIVARIANA DE VENEZUELA
 Tel:00-58-212-5041452 --- Fax: 00-58-212-5041088
~~ [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] fast mkChar

2004-06-08 Thread Vadim Ogranovich
Hi,
 
To speed up reading of large (few million lines) CSV files I am writing
custom read functions (in C). By timing various approaches I figured out
that one of the bottlenecks in reading character fields is the mkChar()
function, which on each call incurs a lot of garbage-collection-related
overhead.
 
I wonder if there is a vectorized version of mkChar, say mkChar2(char
**, int length) that converts an array of C strings to a string vector,
which somehow amortizes the gc overhead over the entire array?
 
If no such function exists, I'd appreciate any hint as to how to write
it.
 
Thanks,
Vadim

[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Comparing two pairs of non-normal datasets in R?

2004-06-08 Thread Spencer Graves
 It looks like ks.test might help, but it seems to me you need to 
make parsimonious models of the change in distributions, so you estimate 
a few parameters for the distributions of each of your 4 data sets with 
standard errors for all the parameters you estimate.  Then you can test 
if the change in the estimated parameters in going from A1 to A2 exceeds 
the change going from C1 to C2, using z scores or t tests of the 
differences in the parameter estimates -- or chi-squares / F's, etc., if 
you want to do multiple dimensions all at once. 

 hope this helps.  spencer graves
Peter Sebastian Masny wrote:
On Tuesday 08 June 2004 10:43 am, you wrote:
 

If I understand you correctly, you have two set of ***paired***
data, one set from the A population, and one from the C population.
Form the pairwise differences:
	A.diff - A1 - A2
	C.diff - C1 - C2
   

Alas, they are not paired.  A1 and A2 are samples from the same population, 
but of different members.  Also, the number of measurements is different for 
each dataset.

 

Boxplots and histograms of A.diff and C.diff will tell you
(much more than a test ever would) what's ***really*** going on.
   

The boxplots I have clearly show the difference, but I need a p value to go 
with it.

Here are the boxplots if that helps:
http://www.ps.masny.dk/guests/misc/A1.png
http://www.ps.masny.dk/guests/misc/A2.png
http://www.ps.masny.dk/guests/misc/C1.png
http://www.ps.masny.dk/guests/misc/C2.png
 

P.S. BTW --- you say that your data are continuous, but that their
distributions are ``more like a Poisson''.  The Poisson distribution
is DISCRETE!!!
   

Hence the like.  The data is indeed continuous, but a distribution graph 
increases towards one extreme...

Visually, the results are convincing, but I really need a test of 
significance.


Thank you very much for the help,
Peter

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] off topic publication question

2004-06-08 Thread Phineas Campbell
I had assumed that the use of "we" in articles was due either to formality,
like the distinction between tu and vous in French (the English monarch
never refers to themselves in the singular), or to "we" meaning both the
author and the reader.  However, a sample of size 4 of the articles to hand
suggests that the use of "we" for single-author papers is not universal.

HTH
Phineas Campbell
http://www.phineas.pwp.blueyonder.co.uk/


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Erin Hodgess
Sent: Tuesday, June 08, 2004 6:11 PM
To: [EMAIL PROTECTED]
Subject: [R] off topic publication question


Dear R People:

Please excuse the off topic question, but I
know that I'll get a good answer here.


If a single author is writing a journal article,
should she use We performed a test
or I performed a test,
please?

I had learned to use we without regard to the number
of authors.  Is that true, please?

Thanks for the off topic help.

Sincerely,
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] a doubt

2004-06-08 Thread Martínez Ovando Juan Carlos
Hello all,

 

I'm working with several models, each implemented as a DOS program, but to 
compare them I need to put some of their results into a unified data format. 
I wonder if any of you know of an R function that lets me invoke DOS 
executables from inside R, so that I can build an R interface to these 
programs; i.e. something like Matlab's 'dos()' function, which executes 
'.exe' files from DOS after the corresponding specification file has been 
created in plain format.

 

In advance, many thanks.

 

Regards,

 

Juan Carlos

 

 


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] overlaying text() onto image() or filled.contour()

2004-06-08 Thread Paul Murrell
Hi
Laura Quinn wrote:
I am trying to add some markers onto a contour map image I have created,
by using the text() function when I have already produced the map using
either image() or filled.contour(). For some reason the points appear to
be shifted considerably to the right of where they should be appearing,
despite me using exactly the same co-ordinate systems for both. This
offset is also dependent on the aspect ratio I use.

See example three in help(filled.contour) - you can use the plot.axes 
argument.
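[That example boils down to a sketch like the following (volcano as stand-in data, marker position hypothetical); everything inside plot.axes is drawn in the contour panel's own coordinate system, so text() is no longer shifted relative to the legend strip:]

```r
## Sketch, after example three in ?filled.contour: put annotations inside
## plot.axes so they share the plot region's coordinates
filled.contour(volcano, plot.axes = {
  axis(1); axis(2)
  text(0.3, 0.6, "marker")   # hypothetical marker position
})
```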


The map I am looking at is around 3.5x larger in height than width and I
need to maximise this in an X window. If I simply use asp=1 the map is
pretty unreadable, is there a way I can drastically reduce the margin
sizes perhaps? The par() commands I've tried haven't made an appreciable
difference - and I'm still very puzzled as to why my data markers are
appearing in the wrong place... any suggestions?

par(mar=whatever) has some effect - although par(mar)[4] does get 
overridden to make a space between the plot and the legend.

Paul
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/


Re: [R] a doubt

2004-06-08 Thread Prof Brian Ripley
?system
?shell (under Windows, which we must guess you are using if `DOS' means 
MS-DOS).
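A hedged sketch of what those help pages describe ("mymodel.exe" and "spec.txt" are made-up names standing in for your DOS program and its specification file):

```r
# Run an external executable from R and capture its output.
# system() exists on all platforms; shell() is Windows-only and
# routes the command through the command interpreter.
out <- system("mymodel.exe spec.txt", intern = TRUE)

# Under Windows, via the shell (needed for built-in commands):
out <- shell("mymodel.exe spec.txt", intern = TRUE)
```

With intern = TRUE the program's standard output comes back as a character vector, which you can then parse into the unified data format you need.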

Please use an informative subject line.

On Tue, 8 Jun 2004, Martínez Ovando Juan Carlos wrote:

 I'm working with different models that where implemented in DOS system
 each one, but for doing comparisons between them I require to put some
 of the results in an unified data form for all the models. I wonder if
 some of you have the knowledge of one function in R that enable me the
 invocation of DOS execution files inside the R application to allow me
 doing an R interface with such programs, i.e. some function like the
 existed 'dos()' function in Matlab, which execute '.exe' files from DOS,
 after have created the corresponding specification file in plain format.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595



Re: [R] fast mkChar

2004-06-08 Thread Peter Dalgaard
Vadim Ogranovich [EMAIL PROTECTED] writes:

 Hi,
  
 To speed up reading of large (few million lines) CSV files I am writing
 custom read functions (in C). By timing various approaches I figured out
 that one of the bottlenecks in reading character fields is the mkChar()
 function which on each call incurs a lot of garbage-collection-related
 overhead.
  
 I wonder if there is a vectorized version of mkChar, say mkChar2(char
 **, int length) that converts an array of C strings to a string vector,
 which somehow amortizes the gc overhead over the entire array?
  
 If no such function exists, I'd appreciate any hint as to how to write
 it.

The real issue here is that character vectors are implemented as
generic vectors of little R objects (CHARSXP type) that each hold one
string. Allocating all those objects is probably what does you in.

The reason behind the implementation is probably that doing it that
way allows the mechanics of the garbage collector to be applied
directly (CHARSXPs are just vectors of bytes), but it is obviously
wasteful in terms of total allocation. If you can think up something
better, please say so (but remember that the memory management issues
are nontrivial).

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



Re: [R] fast mkChar

2004-06-08 Thread Duncan Murdoch
On Tue, 8 Jun 2004 12:23:58 -0700, Vadim Ogranovich
[EMAIL PROTECTED] wrote :

Hi,
 
To speed up reading of large (few million lines) CSV files I am writing
custom read functions (in C). By timing various approaches I figured out
that one of the bottlenecks in reading character fields is the mkChar()
function which on each call incurs a lot of garbage-collection-related
overhead.
 
I wonder if there is a vectorized version of mkChar, say mkChar2(char
**, int length) that converts an array of C strings to a string vector,
which somehow amortizes the gc overhead over the entire array?
 
If no such function exists, I'd appreciate any hint as to how to write
it.

It's not easy.  Internally R strings always have a header at the
front, so you need to allocate memory and move C strings to get R to
understand them.  

Duncan Murdoch



[R] (no subject)

2004-06-08 Thread Martínez Ovando Juan Carlos
Hello again,

 

In a previous message I asked for your help, but I wasn't clear about my problem.
Specifically, I'm trying to create an interface in R for X-12-ARIMA and TRAMO/SEATS,
for the versions that run under MS-DOS. This problem also sparked my interest in
making interfaces to compare some Bayesian classification models that were
implemented in MS-DOS. My question is whether you know of an R function that
allows me to run MS-DOS executable files from the R command window under Windows.

 

The Matlab function 'dos' that I mentioned in the previous message runs
'.exe' programs from a specified directory.

 

Many thanks.  

 

Saludos,

 

Juan Carlos

 

 






Re: [R] (no subject)

2004-06-08 Thread Jeff Gentry
 implemented in MS-DOS to. The question for you is if you know about
 the existence of an R function that allows me to run -in Windows-
 executable files in MS-DOS from the R command window.

Does system() do what you're looking for?



[R] data.frame size limit

2004-06-08 Thread Philip Sobolik
Is there a limit to the number of columns that a data.frame can have?  For
example, can I read.csv() a file that has 1000 columns and 10,000 rows, or
will it break?  Or is it limited only by available memory?
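The practical constraint is memory rather than a column limit; a quick back-of-the-envelope in R (assuming all-numeric columns) gives a feel for the size involved:

```r
# Rough memory needed for a 10,000-row x 1,000-column numeric
# data frame: 8 bytes per double, ignoring per-column overhead.
cells <- 10000 * 1000
cells * 8 / 2^20   # about 76 MB for the data alone
```

read.csv() will also need working memory while parsing, so the peak usage during the read is noticeably higher than the final object size.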

...
Philip Sobolik  781-862-8719 x111
Wrightsoft Corporation  781-861-2058 fax
394 Lowell Street, Suite 12 [EMAIL PROTECTED]
Lexington, MA   02420   www.wrightsoft.com


Re: [R] a doubt

2004-06-08 Thread Spencer Graves
 All of R is available in source form from www.r-project.org.  This
includes software for ARIMA, etc. 

Please do read the posting guide (http://www.R-project.org/posting-guide.html); it describes several things you can do that will get you closer to what you want and will help you formulate a question that is easier for people on this list to understand and respond to.

	  Espero que esto le ayuda.  Buena Suerte.  
	  spencer graves

Martínez Ovando Juan Carlos wrote:
Hello again,

In a previous message I request your help, but I don't have been clear in my problem. Specifically, I'm trying to create an interface in R for the X-12-ARIMA and TRAMO SEATS, for the versions that run in MS-DOS. This problem awake in me the interest for make interfaces to comparing some Bayesian models for classification that where implemented in MS-DOS to. The question for you is if you know about the existence of an R function that allows me to run -in Windows- executable files in MS-DOS from the R command window. 


The function 'dos' in Matlab that I have mention in the previous message allows to run '.exe' programs for a specific directory.  


Many thanks.  


Saludos,

   Juan Carlos

 



RE: [R] fast mkChar

2004-06-08 Thread Vadim Ogranovich
I am no expert in memory management in R so it's hard for me to tell
what is and what is not doable. From reading the code of allocVector()
in memory.c I think that the critical part is to vectorize
CLASS_GET_FREE_NODE and use the vectorized version along the lines of
the code fragment below (taken from memory.c).

if (node_class < NUM_SMALL_NODE_CLASSES) {
CLASS_GET_FREE_NODE(node_class, s); 

If this is possible, then the rest is just a matter of code refactoring.

By vectorizing I mean writing a macro CLASS_GET_FREE_NODE2(node_class,
s, n) which in one go allocates n little objects of class node_class and
inscribes them into the elements of vector s, which is assumed to be
long enough to hold these objects.

If this is doable, then the only missing piece would be a new function
setChar(CHARSXP rstr, const char * cstr) which copies 'cstr' into 'rstr'
and (re)allocates the heap memory if necessary. Here setChar() is
safe since the s[i]'s are all brand new and thus are not shared with any
other object.



 -Original Message-
 From: Peter Dalgaard [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, June 08, 2004 1:23 PM
 To: Vadim Ogranovich
 Cc: R-Help
 Subject: Re: [R] fast mkChar
 
 Vadim Ogranovich [EMAIL PROTECTED] writes:
 
  Hi,
   
  To speed up reading of large (few million lines) CSV files I am 
  writing custom read functions (in C). By timing various 
 approaches I 
  figured out that one of the bottlenecks in reading 
 character fields is 
  the mkChar() function which on each call incurs a lot of 
  garbage-collection-related overhead.
   
  I wonder if there is a vectorized version of mkChar, say 
  mkChar2(char **, int length) that converts an array of C 
 strings to a 
  string vector, which somehow amortizes the gc overhead over 
 the entire array?
   
  If no such function exists, I'd appreciate any hint as to 
 how to write 
  it.
 
 The real issue here is that character vectors are implemented 
 as generic vectors of little R objects (CHARSXP type) that 
 each hold one string. Allocating all those objects is 
 probably what does you in.
 
 The reason behind the implementation is probably that doing 
 it that way allows the mechanics of the garbage collector to 
 be applied directly (CHARSXPs are just vectors of bytes), but 
 it is obviously wasteful in terms of total allocation. If you 
 can think up something better, please say so (but remember 
 that the memory management issues are nontrivial).
 
 -- 
O__   Peter Dalgaard Blegdamsvej 3  
   c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
  (*) \(*) -- University of Copenhagen   Denmark  Ph: 
 (+45) 35327918
 ~~ - ([EMAIL PROTECTED]) FAX: 
 (+45) 35327907
 




[R] fighting with ps.options and xlim/ylim

2004-06-08 Thread ivo welch
sorry to impose again. 

At the default point size, R seems very good in selecting nice xlim/ylim 
parameters, and leaving a little bit of space at the edges of its 
xlim/ylim.  alas, I now need to create ps graphics that can only occupy 
a quarter of a page, so I need to blow up the text for readability.  
Easy, I thought: make ps.options(pointsize=24).  Alas, this turns out to 
be trickier than I thought.

In plot, autogenerated xlim and ylim should now probably be scaled a 
little, though more importantly, there needs to be more space at the 
edge (e.g., if ylim=c(1,2), R seems to really draw axes from about 0.9 
to 2.1).  if I do not increase this space, my axis label names overflow, 
as do some text() annotations that are inside xlim/ylim, but have 
pos=1.  (e.g., text(1,1,something, pos=1) in the example; at standard 
point size, this fits nicely; just no longer.)

how to create smaller-than-full-page figures is probably not an 
infrequent need.  has anyone written a "small postscript figures" 
package, which sets this and perhaps other parameters?

if not, how do I tell R:
* that the usual space it leaves at the xlim/ylim needs to be bigger now?
* that I would like it to be more generous/less generous in its 
autogeneration of good ylim/xlim default coordinates?

help appreciated.
regards,
/iaw


Re: [R] fast mkChar

2004-06-08 Thread Peter Dalgaard
Vadim Ogranovich [EMAIL PROTECTED] writes:

 I am no expert in memory management in R so it's hard for me to tell
 what is and what is not doable. From reading the code of allocVector()
 in memory.c I think that the critical part is to vectorize
 CLASS_GET_FREE_NODE and use the vectorized version along the lines of
 the code fragment below (taken from memory.c).
 
   if (node_class < NUM_SMALL_NODE_CLASSES) {
   CLASS_GET_FREE_NODE(node_class, s); 
 
 If this is possible than the rest is just a matter of code refactoring.
 
 By vectorizing I mean writing a macro CLASS_GET_FREE_NODE2(node_class,
 s, n) which in one go allocates n little objects of class node_class and
 inscribes them into the elements of vector s, which is assumed to be
 long enough to hold these objects.
 
 If this is doable than the only missing piece would be a new function
 setChar(CHARSXP rstr, const char * cstr) which copies 'cstr' into 'rstr'
 and (re)allocates the heap memory if necessary. Here the setChar() macro
 is safe since s[i]-s are all brand new and thus are not shared with any
 other object.

I had a similar idea initially, but I don't think it can fly: First,
allocating n objects at once is not likely to be much faster than
allocating them one-by-one, especially when you consider the
implications of having to deal with near-out-of-memory conditions.
Second, you have to know the string lengths when allocating, since the
structure of a vector object (CHARSXP) is a header immediately
followed by the data.

A more interesting line to pursue is that - depending on what it
really is that you need - you might be able to create a different kind
of object that could walk and quack like a character vector, but is
stored differently internally. E.g. you could set up a representation
that is just a block of pointers, pointing to strings that are being
maintained in malloc-style.

Have a look at External pointers and finalization.


-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



Re: [R] off topic publication question

2004-06-08 Thread Jason Turner
On Wed, 2004-06-09 at 05:11, Erin Hodgess wrote:
 If a single author is writing a journal article,
 should she use We performed a test
 or I performed a test,
 please?


Does the particular journal specify a style manual or style sheet?  The
Chicago Manual of Style \cite{Chicago2003}, p160, says 'We' is
sometimes used by an individual who is speaking for a group... called
the editorial 'we'.  Some writers also use 'we' to make their prose
appear less personal and to draw in the reader or listener.

Although journals may vary in their requirements, you are presumably
speaking for a group (the University of Houston Downtown), and 'we' is
probably safer than 'I'.  Probably.

It's important to note that the Chicago Manual does *not* claim to be
authoritative, merely helpful.
 
Cheers

Jason

@Book{Chicago2003,
  editor =   {University of Chicago Press Staff},
  title ={The Chicago Manual of Style},
  publisher ={University of Chicago Press},
  year = 2003,
  address =  {Chicago},
  edition =  {Fifteenth}
}



Re: [R] off topic publication question

2004-06-08 Thread Patrick Connolly
On Tue, 08-Jun-2004 at 08:56PM +0100, Phineas Campbell wrote:

| I had assumed that the use of we in articles was either due to
| formality, like the distinction between tu and vous in French.  The
| English monarch never refer to themselves in the singular.  

Hence the use of the term "The Royal We" when referring to a
purportedly group decision that was really made by one individual who
is attempting to hide that fact.  In the case of an author, the
attempt could be to appear more modest.

We are not all prima donnas.  :-)


-- 
Patrick Connolly
HortResearch
Mt Albert
Auckland
New Zealand 
Ph: +64-9 815 4200 x 7188
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~
I have the world's largest collection of seashells. I keep it on all
the beaches of the world ... Perhaps you've seen it.  ---Steven Wright 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~



Re: [R] fighting with ps.options and xlim/ylim

2004-06-08 Thread ivo welch
thank you, marc.  I will play around with these parameters tomorrow at 
my real computer.  yes, the idea is to just create an .eps and .pdf 
file, which is then \includegraphics[width=0.25\textwidth]{} in pdflatex.  I 
need to tweak with the parameter ps.options(pointsize) because 
otherwise, I end up with 5pt fonts---which is not readable.  And once I 
do this, I need different R parameter defaults on the axes.  With the 
advice I have gotten, I think I am all set now.  However, I am a little 
bit surprised that no one has written a package around this task---there 
must be many people that have to produce quarter-page (or half-page) 
graphics, and probably everyone is tweaking plot parameters a bit 
differently.  It would be nice to build in some of this intelligence 
into plot parameters, themselves.   of course, R is a free volunteer 
effort, and I am grateful for all the stuff that has been done already.

/iaw


[R] more obvious contribution mechanism?

2004-06-08 Thread ivo welch
can we put a "how to donate money to R" link on the R webpage?  perhaps with 
a paypal button?

even better, because I would like to donate some funds from my research 
budget, could the R-project possibly sell some trinkets for a high price 
for support?  it is difficult to explain to a non-profit org (like yale) 
why i want to donate its money to another non-profit org.  they would be 
happy to pay $1,000 for S, of course, but not to donate $100 for R.  
buying a blank CD for $100 is much easier.

regards,
/iaw


Re: [R] fighting with ps.options and xlim/ylim

2004-06-08 Thread Duncan Murdoch
On Tue, 08 Jun 2004 21:18:34 -0400, ivo welch [EMAIL PROTECTED]
wrote:
 And once I 
do this, I need different R parameter defaults on the axes.  With the 
advice I have gotten, I think I am all set now.  However, I am a little 
bit surprised that noone has written a package around this task---there 
must be many people that have to produce quarter-page (or half-page) 
graphics, and probably everyone is tweaking plot parameters a bit 
differently. 

My general strategy for this is to change the width and height used in
the pdf() or postscript() device call, then just trust the defaults
chosen by R.  For inclusion in a paper, I generally specify sizes
about twice as big as I really want, and get text size similar to the
printed text.  So in your case, assuming a page is around 6 inches
wide, I'd use something like

pdf(width=3, height=3, ...)

and then get LaTeX to shrink it to half the size.
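A sketch of that workflow (the file name "fig.pdf" is arbitrary):

```r
# Make the device about twice the final printed size and keep the
# default pointsize; LaTeX then scales the figure down, which
# effectively enlarges the text relative to the plot.
pdf("fig.pdf", width = 3, height = 3)
plot(1:10, 1:10, xlab = "x", ylab = "y")
dev.off()
# In the LaTeX source:
#   \includegraphics[width=0.25\textwidth]{fig}
```

Because the default axis annotation and margins are computed for the device size, this approach avoids hand-tuning pointsize, xlim/ylim padding, and par(mar) separately.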

Duncan Murdoch



Re: [R] fighting with ps.options and xlim/ylim

2004-06-08 Thread Marc Schwartz
On Tue, 2004-06-08 at 20:18, ivo welch wrote:
 thank you, marc.  I will play around with these parameters tomorrow at 
 my real computer.  yes, the idea is to just create an .eps and .pdf 
 file, which is then \includegraphics[0.25\textwidth]{} in pdflatex.  I 
 need to tweak with the parameter ps.options(pointsize) because 
 otherwise, I end up with 5pt fonts---which is not readable.  And once I 
 do this, I need different R parameter defaults on the axes.  With the 
 advice I have gotten, I think I am all set now.  However, I am a little 
 bit surprised that noone has written a package around this task---there 
 must be many people that have to produce quarter-page (or half-page) 
 graphics, and probably everyone is tweaking plot parameters a bit 
 differently.  It would be nice to build in some of this intelligence 
 into plot parameters, themselves.   of course, R is a free volunteer 
 effort, and I am grateful for all the stuff that has been done already.
 
 /iaw


You might want to try to set the 'height' and 'width' arguments for
postscript() to something larger than the defaults. For example, use 6 x
6 (if square) and then use your code above to scale the plot down to
size. That might help with your font size and spacing problem, rather
than adjusting the point size.

I don't have a 'rule of thumb', but experience suggests that downsizing
a plot that is too big is better than upsizing one that is too small,
especially for a partial page.

I have done some other things using the 'seminar' LaTeX package for
landscape orientation slides and there I generally use the exact size
for the EPS files. But that is generally the only time that I do that.

YMMV,

Marc



Re: [R] fighting with ps.options and xlim/ylim

2004-06-08 Thread Marc Schwartz
On Tue, 2004-06-08 at 21:02, Duncan Murdoch wrote:
 On Tue, 08 Jun 2004 21:18:34 -0400, ivo welch [EMAIL PROTECTED]
 wrote:
  And once I 
 do this, I need different R parameter defaults on the axes.  With the 
 advice I have gotten, I think I am all set now.  However, I am a little 
 bit surprised that noone has written a package around this task---there 
 must be many people that have to produce quarter-page (or half-page) 
 graphics, and probably everyone is tweaking plot parameters a bit 
 differently. 
 
 My general strategy for this is to change the width and height used in
 the pdf() or postscript() device call, then just trust the defaults
 chosen by R.  For inclusion in a paper, I generally specify sizes
 about twice as big as I really want, and get text size similar to the
 printed text.  So in your case, assuming a page is around 6 inches
 wide, I'd use something like
 
 pdf(width=3, height=3, ...)
 
 and then get LaTeX to shrink it to half the size.
 
 Duncan Murdoch


I just got Duncan's msg, so I think that we are thinking along the same
lines here.

I agree with Duncan's suggestion relative to trying a 2x scaling factor
and would see how that goes with your particular plot. Then adjust if
need be as you develop some intuition.

Marc



Re: [R] more obvious contribution mechanism?

2004-06-08 Thread Jason Turner
On Wed, 2004-06-09 at 13:22, ivo welch wrote:
 can we put a how to donate money to R on the R webpage?  perhaps with 
 a paypal button?
 

Does the R Foundation link meet this need?

http://www.r-project.org/foundation/main.html

Cheers

Jason



Re: [R] How to Describe R to Finance People

2004-06-08 Thread Richard A. O'Keefe
I wrote:
 The scope rules (certainly the scope rules for S) were obviously
 designed by someone who had a fanatical hatred of compilers and
 wanted to ensure that the language could never be usefully
 compiled.
Drat!  I forgot the semi-smiley!

Tony Plate [EMAIL PROTECTED] wrote:
What in particular about the scope rules for S makes it tough
for compilers?  The scope for ordinary variables seems pretty
straightforward -- either local or in one of several global
locations.

One of *several* global locations.
attach() is the big one.
I spent a couple of months trying to design a compiler that would
respect all the statements about variable access in The (New) S
Programming Language book.

(Or are you referring to the feature of the get()
function that it can access variables in any frame?)

That's part of it too.  Worse still is that you can create and
delete variables dynamically in any frame.  (Based on my reading of
the blue book, not on experiments with an S system.)  And on one
fairly natural reading of the blue book,

f <- function (x) {
    z <- y      # this is a global reference
    y <- z + 1
    g()
    y/z         # this is a local reference
}

whether a reference inside a function is local or global would be 
a dynamic* property even if the function g() couldn't zap the local
definition of y.

R has its own problems, like the fact that with(thingy, expression)
introduces local variables and you don't know WHICH local variables
until you look at the run-time value of thingy.  Actual transcript:

> z <- 12
> f <- function (d) with(d, z)
> f(list())
[1] 12
> f(list(z=27))
[1] 27

Is that reference to z a reference to the global z or to a local copy
of d$z?  We don't know until we find out whether d *has* a $z property.
How are you supposed to compile a language where you don't know which
variable references are global and which are local?

(Answer: it can be done, but it ain't pretty!)

In this trivial example, it's obvious that the intent is for z to come
from d, but (a) it isn't an error if it doesn't and (b) realistic examples
have more variables, some of which are often visible without 'with'.
