Re: [R] numeric variables converted to character when recoding missing values

2006-06-23 Thread Juan Pablo Lewinger
Thanks Bert, that works of course and is much more straightforward than what
I was trying. However, I'm still puzzled as to why x[x==999]<-NA works (i.e.
it replaces the 999s with NAs and keeps the numeric variables numeric) but
is.na(x[x==999])<-TRUE doesn't (it replaces the 999s with NAs but changes
all variables where a replacement was made to character).
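
My current guess, sketched below (untested on 2.2.1): is.na(df[idx]) <- TRUE
expands to df[idx] <- `is.na<-`(df[idx], TRUE), so the matching elements are
first pulled out of the data frame as a single vector, coerced to character
because of column b, and then written back, which converts column a.
Assigning NA directly with the same logical-matrix index seems to avoid the
intermediate vector:

df <- data.frame(a=c(999,1,999,2), b=LETTERS[1:4])
df[df == 999] <- NA   # element-wise replacement; column 'a' should stay numeric
str(df)               # check that 'a' is still numeric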

PS:  As far as I can tell section 2.5 of "An Introduction to R" -which I had
read- doesn't answer my original question.

Juan Pablo Lewinger
Department of Preventive Medicine 
Keck School of Medicine 
University of Southern California

-Original Message-
From: Berton Gunter [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 23, 2006 3:15 PM
To: 'Juan Pablo Lewinger'; r-help@stat.math.ethz.ch
Subject: RE: [R] numeric variables converted to character when recoding
missingvalues

Please read section 2.5 of "An Introduction to R". Numerical missing values
are assigned as NA:

x[x==999]<-NA

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Juan 
> Pablo Lewinger
> Sent: Friday, June 23, 2006 3:00 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] numeric variables converted to character when 
> recoding missingvalues
> 
> Dear R helpers,
> 
> I have a data frame where missing values for numeric 
> variables are coded as
> 999. I want to recode those as NAs. The following only 
> partially succeeds
> because numeric variables are converted to character in the process:
> 
> df <- data.frame(a=c(999,1,999,2), b=LETTERS[1:4])
> is.na(df[2,1]) <- TRUE
> df
> 
> a b
> 1 999 A
> 2  NA B
> 3 999 C
> 4   2 D
> 
> is.numeric(df$a)
> [1] TRUE
> 
> 
> is.na(df[!is.na(df) & df==999]) <- TRUE
> df
>      a b
> 1 <NA> A
> 2    1 B
> 3 <NA> C
> 4    2 D
> 
> is.character(df$a)
> [1] TRUE
> 
> My question is how to do the recoding while avoiding this 
> undesirable side
> effect. I'm using R 2.2.1 (yes, I know 2.3.1 is available but 
> don't want to
> switch mid project). I'd appreciate any help.
> 
> Further details:
> 
> platform i386-pc-mingw32
> arch i386   
> os   mingw32
> system   i386, mingw32  
> status  
> major2  
> minor2.1
> year 2005   
> month12 
> day  20 
> svn rev  36812  
> language R  
> 
> 
> 
> Juan Pablo Lewinger
> Department of Preventive Medicine 
> Keck School of Medicine 
> University of Southern California
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] lmer and mixed effects logistic regression

2006-06-23 Thread Spencer Graves
  Permit me to try to repeat what I said earlier a little more clearly: 
  When the outcomes are constant for each subject, either all 0's or all 
1's, the maximum likelihood estimate of the between-subject variance is 
Inf.  Any software that returns a different answer is wrong. This is NOT 
a criticism of 'lmer' or SAS NLMIXED:  This is a sufficiently rare, 
extreme case that the software does not test for it and doesn't handle 
it well when it occurs.  Adding other explanatory variables to the model 
only makes this problem worse, because anything that will produce 
complete separation for each subject will produce this kind of 
instability.
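
For intuition, here is a minimal toy sketch (my own made-up data, not part
of the original problem): an ordinary logistic fit for a single subject
whose outcomes are all 0 already pushes its intercept toward -Inf, and the
between-subject variance inherits that behaviour.

y.const <- c(0, 0, 0, 0)              # one subject, constant outcome
fit0 <- glm(y.const ~ 1, family = binomial)
coef(fit0)                            # huge negative intercept, heading for -Inf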

 Consider the following:

library(lme4)
DF <- data.frame(y=c(0,0, 0,1, 1,1),
  Subj=rep(letters[1:3], each=2),
  x=rep(c(-1, 1), 3))
fit1 <- lmer(y~1+(1|Subj), data=DF, family=binomial)

# 'lmer' works fine here, because the outcomes for
# 1 of the 3 subjects are not constant.

 > fit.x <- lmer(y~x+(1|Subj), data=DF, family=binomial)
Warning message:
IRLS iterations for PQL did not converge

  The addition of 'x' to the model now allows complete separation for 
each subject.  We see this in the result:

Generalized linear mixed model fit using PQL

Random effects:
 Groups Name        Variance   Std.Dev.
 Subj   (Intercept) 3.5357e+20 1.8803e+10
number of obs: 6, groups: Subj, 3

Estimated scale (compare to 1)  9.9414e-09

Fixed effects:
               Estimate  Std. Error     z value Pr(>|z|)
(Intercept) -5.4172e-05  1.0856e+10   -4.99e-15        1
x            8.6474e+01  2.7397e+07  3.1563e-06        1

  Note that the subject variance is 3.5e20, and the estimate for x is 86 
with a standard error of 2.7e7.  All three of these numbers are reaching 
for Inf;  lmer quit before it got there.

  Does this make any sense, or are we still misunderstanding one 
another?

  Hope this helps.
  Spencer Graves

Rick Bilonick wrote:
> On Wed, 2006-06-21 at 08:35 -0700, Spencer Graves wrote:
>>You could think of 'lmer(..., family=binomial)' as doing a separate 
>> "glm" fit for each subject, with some shrinkage provided by the assumed 
>> distribution of the random effect parameters for each subject.  Since 
>> your data are constant within subject, the intercept in your model 
>> without the subject's random effect distribution will be estimated at 
>> +/-Inf.  Since this occurs for all subjects, the maximum likelihood 
>> estimate of the subject variance is Inf, which is what I wrote in an 
>> earlier contribution to this thread.
>>
>>What kind of answer do you get from SAS NLMIXED?  If it does NOT tell 
>> you that there is something strange about the estimation problem you've 
>> given it, I would call that a serious infelicity in the code.  If it is 
>> documented behavior, some might argue that it doesn't deserve the "B" 
>> word ("Bug").  The warning messages issued by 'lmer' in this case are 
>> something I think users would want, even if they are cryptic.
>>
>>Hope this helps.
>>Spencer Graves
>>
> I did send in an example with data set that duplicates the problem.
> Changing the control parameters allowed lmer to produce what seem like
> reasonable estimates. Even for the case with essentially duplicate
> pairs, lmer and NLMIXED produce similar estimates (finite intercepts
> also) although lmer's coefficient estimates are as far as I can tell the
> same as glm but the standard errors are larger.
> 
> The problem I really want estimates for is different from this one
> explanatory factor example.  The model I estimate will have several
> explanatory factors, including factors that differ within each subject
> (although the responses within each subject are the same). BTW, as far
> as I know, the responses could be different within a subject but it
> seems to be very rare.
> 
> 
> Possibly the example I thought I sent never made it to the list. The
> example is below.
> 
> Rick B.
> 
> ###
> # Example of lmer error message
> 
> 
> I made an example data set that exhibits the error. There is a dump of
> the data frame at the end.
> 
> First, I updated all my packages:
> 
>> sessionInfo()
> Version 2.3.1 (2006-06-01)
> i686-redhat-linux-gnu
> 
> attached base packages:
> [1] "methods"   "stats" "graphics"  "grDevices" "utils"
> "datasets"
> [7] "base"
> 
> other attached packages:
>      chron       lme4     Matrix    lattice
>    "2.3-3"  "0.995-2" "0.995-11"   "0.13-8"
> 
> But I still get the error.
> 
> For comparison, here is what glm gives:
> 
> 
>> summary(glm(y~x,data=example.df,family=binomial))
> 
> Call:
> glm(formula = y ~ x, family = binomial, data = example.df)
> 
> Deviance Residuals:
> Min   1Q   Median   3Q  Max
> -1.6747  -0.9087  -0.6125   1.1447   2.0017
> 
> Coefficients:
> Estimate Std. Error z value Pr(>|z|)
> (Intercept)  -

Re: [R] GARCH

2006-06-23 Thread Spencer Graves
  I'll outline here how you can solve this kind of problem, using the 
first example in the 'garch' help page:

library(tseries)
  n <- 1100
  a <- c(0.1, 0.5, 0.2)  # ARCH(2) coefficients
  e <- rnorm(n)
  x <- double(n)
  x[1:2] <- rnorm(2, sd = sqrt(a[1]/(1.0-a[2]-a[3])))
  for(i in 3:n)  # Generate ARCH(2) process
  {
x[i] <- e[i]*sqrt(a[1]+a[2]*x[i-1]^2+a[3]*x[i-2]^2)
  }
  x <- ts(x[101:1100])
  x.arch <- garch(x, order = c(0,2))  # Fit ARCH(2)
(sum.arch <- summary(x.arch))

     Estimate  Std. Error  t value Pr(>|t|)
a0    0.09887     0.01013    9.764  < 2e-16 ***
a1    0.43104     0.05276    8.170 2.22e-16 ***
a2    0.31261     0.05844    5.350 8.82e-08 ***


  Then I tried 'str(sum.arch)'.  This told me it is a list with 6 
components, and the one I want is named 'coef'.  This led me to examine 
'sum.arch$coef', which includes the desired numbers.  Moreover, 
'class(sum.arch$coef)' told me this is a 'matrix'.  This information 
suggests that the following might be what you requested:

  sum.arch$coef[, "Pr(>|t|)"]
   a0   a1   a2
0.00e+00 2.220446e-16 8.815239e-08
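
If you want to keep the estimates and their p-values together for later
use, something like the following should work (it assumes the column names
shown in the printout above):

arch.tab <- data.frame(estimate = sum.arch$coef[, "Estimate"],
                       p.value  = sum.arch$coef[, "Pr(>|t|)"])
arch.tab["a0", ]   # look up a single coefficient later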

  Hope this helps.
  Spencer Graves

Jeff Newmiller wrote:
> Prof Brian Ripley wrote:
>> Why do you think
>>
>>> help.search("garch-methods", package="tseries")
>> finds accessor functions?  That is notation for S4 methods, and "garch" 
>> is an S3 class so there will be none.  Here there _is_ an accessor, 
>> coef(), and you can find that there is by
> 
> Probably because I used it, found a mention of various extraction functions
> including coef(), and could not find a way to access "Pr(>|t|)" using
> coef(). Nor have I had luck with
>   help.search("summary.garch", package="tseries")
> 
> Possibly also because I have not yet figured out the difference between
> S4 and S3 methods, but since the result of my  help.search call displayed S3
> functions I don't see how knowing this difference would have helped.
> 
>>> methods(class="garch")
>> [1] coef.garch*  fitted.garch*logLik.garch*plot.garch*
>> [5] predict.garch*   print.garch* residuals.garch* summary.garch*
>>
>>Non-visible functions are asterisked
>>
>> Note though that inherited methods might be relevant too (e.g. default 
>> methods) and indeed it seems that here the default method for coef would 
>> work just as well.
> 
> Given Arun Kumar Saha's question...
> 
>  > > Now I want to store the value of Pr(>|t|) for coefficient a0, a1,
>  > > and b1, and also values of these coefficients, so that I can use
>  > > them in future separately. I know that I can do it for coefficients
>  > > by using the command:
>  > > coef(garch1)["a0"] etc, but not for Pr(>|t|). Can anyone please
>  > > tell me how to do this?
> 
> ... I don't see how coef() helps because I have yet to figure out how
> to use coef() (or any other accessor) to find " Std. Error" of the
> coefficient, much less "Pr(>|t|)". summary.garch seems to have only a
> print method, with no accessors at all. Can you offer a solution?
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Running executable files from R

2006-06-23 Thread Sundar Dorai-Raj


Julian Burgos wrote:
> Hello fellow R's,
> 
> I apologize if this question was answer elsewhere.  I have an executable 
> file that I need to run from R.  Basically I want to use R to create the 
> input files this executable requires, then run it, and finally use R 
> again to analyze the output files.  Because I have to do this many times 
> I'd like to have the R code call the executable file after the input 
> files are ready.  Is there an easy way to have R run an .exe file?
> Thanks for any help,
> 
> Julian
> 

See ?system.
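
A minimal sketch of the usual pattern (the executable and file names below
are made up):

my.input <- data.frame(x = 1:5)                     # build the input in R
write.table(my.input, "input.txt", row.names = FALSE)
system("mymodel.exe input.txt output.txt")          # run the external program
results <- read.table("output.txt", header = TRUE)  # read its output back in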

HTH
--sundar

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Running executable files from R

2006-06-23 Thread Julian Burgos
Hello fellow R's,

I apologize if this question was answered elsewhere.  I have an executable 
file that I need to run from R.  Basically I want to use R to create the 
input files this executable requires, then run it, and finally use R 
again to analyze the output files.  Because I have to do this many times 
I'd like to have the R code call the executable file after the input 
files are ready.  Is there an easy way to have R run an .exe file?
Thanks for any help,

Julian

-- 
Julian M. Burgos

Fisheries Acoustics Research Lab
School of Aquatic and Fishery Science
University of Washington

1122 NE Boat Street
Seattle, WA  98105 

Phone: 206-221-6864

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R connectivity to database

2006-06-23 Thread Juan Antonio Breña Moral

You should download the ODBC driver for your database and use the RODBC package.
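
A minimal sketch (the DSN and table names are placeholders), once the driver
is installed and a data source name is configured:

library(RODBC)
ch  <- odbcConnect("myDSN")                   # DSN defined in the ODBC administrator
dat <- sqlQuery(ch, "SELECT * FROM mytable")  # pull a table into a data frame
odbcClose(ch)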

Best Regards.

Juan Antonio Breña Moral.
Advanced Marketing Ph.D. , URJC, Spain (Now)
Industrial Organisation Engineering, ICAI, Spain.
Technical Computer Programming Engineering, ICAI, Spain
Web: http://www.juanantonio.info
Mobile: +34 655970320 
--
View this message in context: 
http://www.nabble.com/R-connectivity-to-database-t1837933.html#a5020665
Sent from the R help forum at Nabble.com.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] Tetrachoric correlation in R vs. stata

2006-06-23 Thread Peter Dalgaard
Janet Rosenbaum <[EMAIL PROTECTED]> writes:

> Peter --- Thanks for pointing out the omitted information.  The
> hazards of attempting to be brief.
> 
> In R, I am using polychor(vec1, vec2, std.err=T) and have used both
> the ML and 2 step estimates, which give virtually identical answers.
> I am explicitly using only the 632 complete cases in R to make sure
> missing data is handled the same way as in stata.
> 
> Here's my data:
> 
> 522   54
>  34   22
> 
> > polychor(v1, v2, std.err=T, ML=T)
> 
> Polychoric Correlation, ML est. = 0.5172 (0.08048)
> Test of bivariate normality: Chisquare = 8.063e-06, df = 0, p = NaN
> 
> Row Thresholds
> Threshold Std.Err.
>   1 1.349  0.07042
> 
> 
> Column Thresholds
> Threshold Std.Err.
>   1 1.174  0.06458
>   Warning message:
>   NaNs produced in: pchisq(q, df, lower.tail, log.p)
> 
> In stata, I get:
> 
> . tetrachoric t1_v19a ct1_ix17
> 
> Tetrachoric correlations (N=632)
> 
> --
>  Variable |  t1_v19a  ct1_ix17
> -+
>   t1_v19a |1
>  ct1_ix17 |.6169 1
> --

Well, 

> pmvnorm(c(1.349,1.174),c(Inf,Inf),
+sigma=matrix(c(1,.5172,.5172,1),2))*632
[1] 22.00511
attr(,"error")
[1] 1e-15
attr(,"msg")
[1] "Normal Completion"
> pnorm(1.349)*632
[1] 575.9615
> pnorm(1.174)*632
[1] 556.0352

so the estimates from R appear to be consistent with the table. In
contrast, plugging in the .6169 from Stata gives


> pmvnorm(c(1.349,1.174),c(Inf,Inf),
+ sigma=matrix(c(1,.6169,.6169,1),2))*632
[1] 26.34487
...

You might want to follow up on

http://www.ats.ucla.edu/stat/stata/faq/tetrac.htm


> Thanks for your help.
> 
> Janet
> 
> 
> 
> Peter Dalgaard wrote:
> > Janet Rosenbaum <[EMAIL PROTECTED]> writes:
> >
> >> I hope someone here knows the answer to this since it will save me
> >> from delving deep into documentation.
> >>
> >> Based on 22 pairs of vectors, I have noticed that tetrachoric
> >> correlation coefficients in stata are almost uniformly higher than
> >> those in R, sometimes dramatically so (TCC=.61 in stata, .51 in R;
> >> .51 in stata, .39 in R).  Stata's estimate is higher than R's in 20
> >> out of 22 computations, although the estimates always fall within
> >> the 95% CI for the TCC calculated by R.
> >>
> >> Do stata and R calculate TCC in dramatically different ways?  Is
> >> the handling of missing data perhaps different?  Any thoughts?
> >>
> >> Btw, I am sending this question only to the R-help list.
> > A bit more information seems necessary:
> > - tetrachoric correlations depend on 4 numbers, so you should be able
> >   to give a direct example
> > - you're not telling us how you calculate the TCC in R. This is not
> >   obvious (package polycor?).
> >
> 
> 
> 
> 
> This email message is for the sole use of the intended rec...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] numeric variables converted to character when recoding missing values

2006-06-23 Thread Berton Gunter
Please read section 2.5 of "An Introduction to R". Numerical missing values
are assigned as NA:

x[x==999]<-NA

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Juan 
> Pablo Lewinger
> Sent: Friday, June 23, 2006 3:00 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] numeric variables converted to character when 
> recoding missingvalues
> 
> Dear R helpers,
> 
> I have a data frame where missing values for numeric 
> variables are coded as
> 999. I want to recode those as NAs. The following only 
> partially succeeds
> because numeric variables are converted to character in the process:
> 
> df <- data.frame(a=c(999,1,999,2), b=LETTERS[1:4])
> is.na(df[2,1]) <- TRUE
> df
> 
> a b
> 1 999 A
> 2  NA B
> 3 999 C
> 4   2 D
> 
> is.numeric(df$a)
> [1] TRUE
> 
> 
> is.na(df[!is.na(df) & df==999]) <- TRUE
> df
>      a b
> 1 <NA> A
> 2    1 B
> 3 <NA> C
> 4    2 D
> 
> is.character(df$a)
> [1] TRUE
> 
> My question is how to do the recoding while avoiding this 
> undesirable side
> effect. I'm using R 2.2.1 (yes, I know 2.3.1 is available but 
> don't want to
> switch mid project). I'd appreciate any help.
> 
> Further details:
> 
> platform i386-pc-mingw32
> arch i386   
> os   mingw32
> system   i386, mingw32  
> status  
> major2  
> minor2.1
> year 2005   
> month12 
> day  20 
> svn rev  36812  
> language R  
> 
> 
> 
> Juan Pablo Lewinger
> Department of Preventive Medicine 
> Keck School of Medicine 
> University of Southern California
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problems with weekday extraction from zoo objects

2006-06-23 Thread Achim Zeileis
On Fri, 23 Jun 2006 16:12:27 -0500 Kerpel, John wrote:

> > SP500<-read.zoo("SP500.csv", sep = ",")
> Error in read.zoo("SP500.csv", sep = ",") : 
> index contains NAs

Well, there are two problems with this: 1. the CSV is not
comma-separated (despite its suffix) and 2. the date format should be
specified.

> First ten records of SP500.csv:
> 
> Date  Open  High  Low  Close  Volume  Adj. Close*
                                        ^^^^^^^^^^^
and I changed this name to "Adj.Close" which gives a syntactically
valid name in R.

Then I successfully employed:
  SP500 <- read.zoo("SP500.csv", format = "%d-%b-%y", header = TRUE)
  DGS10 <- read.zoo("DGS10.csv", format = "%m/%d/%Y", header = TRUE)
and then
  weekdays(time(SP500))
  weekdays(time(DGS10))

Z

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] numeric variables converted to character when recoding missing values

2006-06-23 Thread Juan Pablo Lewinger
Dear R helpers,

I have a data frame where missing values for numeric variables are coded as
999. I want to recode those as NAs. The following only partially succeeds
because numeric variables are converted to character in the process:

df <- data.frame(a=c(999,1,999,2), b=LETTERS[1:4])
is.na(df[2,1]) <- TRUE
df

a b
1 999 A
2  NA B
3 999 C
4   2 D

is.numeric(df$a)
[1] TRUE


is.na(df[!is.na(df) & df==999]) <- TRUE
df
     a b
1 <NA> A
2    1 B
3 <NA> C
4    2 D

is.character(df$a)
[1] TRUE

My question is how to do the recoding while avoiding this undesirable side
effect. I'm using R 2.2.1 (yes, I know 2.3.1 is available but don't want to
switch mid project). I'd appreciate any help.

Further details:

platform i386-pc-mingw32
arch i386   
os   mingw32
system   i386, mingw32  
status  
major2  
minor2.1
year 2005   
month12 
day  20 
svn rev  36812  
language R  



Juan Pablo Lewinger
Department of Preventive Medicine 
Keck School of Medicine 
University of Southern California

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Tetrachoric correlation in R vs. stata

2006-06-23 Thread Janet Rosenbaum
Peter --- Thanks for pointing out the omitted information.  The hazards 
of attempting to be brief.

In R, I am using polychor(vec1, vec2, std.err=T) and have used both the 
ML and 2 step estimates, which give virtually identical answers.  I am 
explicitly using only the 632 complete cases in R to make sure missing 
data is handled the same way as in stata.

Here's my data:

522 54
34  22

> polychor(v1, v2, std.err=T, ML=T)

Polychoric Correlation, ML est. = 0.5172 (0.08048)
Test of bivariate normality: Chisquare = 8.063e-06, df = 0, p = NaN

Row Thresholds
Threshold Std.Err.
  1 1.349  0.07042


Column Thresholds
Threshold Std.Err.
  1 1.174  0.06458
  Warning message:
  NaNs produced in: pchisq(q, df, lower.tail, log.p)

In stata, I get:

. tetrachoric t1_v19a ct1_ix17

Tetrachoric correlations (N=632)

--
 Variable |  t1_v19a  ct1_ix17
-+
  t1_v19a |1
 ct1_ix17 |.6169 1
--

Thanks for your help.

Janet



Peter Dalgaard wrote:
> Janet Rosenbaum <[EMAIL PROTECTED]> writes:
> 
>> I hope someone here knows the answer to this since it will save me from 
>> delving deep into documentation.
>>
>> Based on 22 pairs of vectors, I have noticed that tetrachoric 
>> correlation coefficients in stata are almost uniformly higher than those 
>> in R, sometimes dramatically so (TCC=.61 in stata, .51 in R;  .51 in 
>> stata, .39 in R).  Stata's estimate is higher than R's in 20 out of 22 
>> computations, although the estimates always fall within the 95% CI for 
>> the TCC calculated by R.
>>
>> Do stata and R calculate TCC in dramatically different ways?  Is the 
>> handling of missing data perhaps different?  Any thoughts?
>>
>> Btw, I am sending this question only to the R-help list.
> 
> 
> A bit more information seems necessary:
> 
> - tetrachoric correlations depend on 4 numbers, so you should be able
>   to give a direct example
> 
> - you're not telling us how you calculate the TCC in R. This is not
>   obvious (package polycor?).
> 




This email message is for the sole use of the intended recip...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problems with weekday extraction from zoo objects

2006-06-23 Thread Kerpel, John
Gabor:

In my attempts to provide a reproducible example I now run into the
following problems:

> SP500<-read.zoo("SP500.csv", sep = ",")
Error in read.zoo("SP500.csv", sep = ",") : 
index contains NAs
> DGS10<-read.zoo("DGS10.csv",sep=",")
Error in read.zoo("DGS10.csv", sep = ",") : 
index contains NAs
> SP500
Error: object "SP500" not found
> DGS10
Error: object "DGS10" not found
> 


First ten records of SP500.csv:

Date       Open   High   Low    Close  Volume  Adj. Close*
3-Jan-50   16.66  16.66  16.66  16.66   126    16.66
4-Jan-50   16.85  16.85  16.85  16.85   189    16.85
5-Jan-50   16.93  16.93  16.93  16.93   255    16.93
6-Jan-50   16.98  16.98  16.98  16.98   201    16.98
9-Jan-50   17.08  17.08  17.08  17.08   252    17.08
10-Jan-50  17.03  17.03  17.03  17.03   216    17.03
11-Jan-50  17.09  17.09  17.09  17.09   263    17.09
12-Jan-50  16.76  16.76  16.76  16.76   297    16.76
13-Jan-50  16.67  16.67  16.67  16.67   333    16.67

First ten records of DGS10.csv

DATE        VALUE
1/2/1962    4.06
1/3/1962    4.03
1/4/1962    3.99
1/5/1962    4.02
1/8/1962    4.03
1/9/1962    4.05
1/10/1962   4.07
1/11/1962   4.08
1/12/1962   4.08

This worked perfectly yesterday; I did get the NA warnings but it read
the entire file(s) correctly.

John


-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 23, 2006 3:24 PM
To: Kerpel, John
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Problems with weekday extraction from zoo objects

Please provide a reproducible example (and read the posting
guide at the bottom of each email).

On 6/23/06, Kerpel, John <[EMAIL PROTECTED]> wrote:
> Hi Folks!
>
>
>
> I'm struggling with dates - but enough about my personal life.
>
>
>
> I have two daily time series files.  In one (x) the date format is
Y/m/d
> and the other (y) is d/m/y.  I used read.zoo on both and they read
into
> R with no problem.
>
>
>
> Then I use: weekdays(as.Date(x$DATE)) and get what I expect - all the
> days of the week in my data set.
>
>
>
> When I use: weekdays(as.Date(y$Date)) I get:
>
>
>
> Error in fromchar(x) : character string is not in a standard
unambiguous
> format
>
>
>
> I've tried to set the format= in read.zoo to format="%d/%m/%Y" but
this
> doesn't seem to solve the problem.
>
>
>
> What's going on here?  (I'm new to these dates functions, so please be
> patient - I'll get the hang out of soon!)
>
>
>
> Best,
>
>
>
> John
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R connectivity to database

2006-06-23 Thread Prof Brian Ripley
You have not told us your platform nor version of OO (nor what
'it' is which 'says that R can connect to MS Access').  R can certainly 
connect to MS Access on Windows via RODBC, which does come with Access 
examples.

I think though that it is unrealistic to claim that Base is 'OOo's version 
of MS Access': it seems to be a lot less than that even in version 2.0.
Base 2.0 comes with an HSQLDB DBMS, but Base is not itself a relational 
DBMS as Access is.  Both R and Base can act as ODBC clients, which may 
help if you use Base to store data in a fully-fledged RDBMS such as MySQL 
or SQLite.


On Fri, 23 Jun 2006, Ray D. wrote:

> Hello, does anyone know how I would go about getting R to connect to 
> OpenOffice's Base program (OOo's version of MS Access) such that I can 
> retrieve data from the database and perform calculations and data 
> analysis?  I'm totally new to R and Base and I've looked at some 
> documentation, but found only examples for R connecting to PostgreSQL 
> and MySQL, but nothing for OOo's Base (there wasn't any examples for MS 
> Access either even though it says that R can connect to MS Access).  Is 
> R even capable of this or am I just out of luck to use R and OOo's Base 
> together?  Thanks in advance.
>
>  -ray
>
>
> -
>
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK              Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] looping through a data frame

2006-06-23 Thread jim holtman
It looks like you want the column means for each unique instance of Line.
If that is so, try this: (Line has unique numbers in the range 1:5)

> str(x)
`data.frame':   25 obs. of  18 variables:
 $ Line: num  5 3 2 1 1 4 3 5 4 1 ...
 $ V2  : num  0.3861 0.0134 0.3824 0.8697 0.3403 ...
 $ V3  : num  0.4776 0.8612 0.4381 0.2448 0.0707 ...
 $ V4  : num  0.892 0.864 0.390 0.777 0.961 ...
 $ V5  : num  0.655 0.353 0.270 0.993 0.633 ...
 $ V6  : num  0.454 0.511 0.208 0.229 0.596 ...
 $ V7  : num  0.615 0.557 0.329 0.453 0.500 ...
 $ V8  : num  0.895 0.644 0.741 0.605 0.903 ...
 $ V9  : num  0.268 0.219 0.517 0.269 0.181 ...
 $ V10 : num  0.5110 0.2576 0.0465 0.4179 0.8540 ...
 $ V11 : num  0.762 0.933 0.471 0.604 0.485 ...
 $ V12 : num  0.192 0.257 0.181 0.477 0.771 ...
 $ V13 : num  0.6737 0.0949 0.4926 0.4616 0.3752 ...
 $ V14 : num  0.954 0.812 0.782 0.268 0.762 ...
 $ V15 : num  0.4861 0.0638 0.7845 0.4183 0.9810 ...
 $ V16 : num  0.420 0.334 0.865 0.177 0.493 ...
 $ V17 : num  0.659 0.185 0.954 0.898 0.944 ...
 $ V18 : num  0.4058 0.0853 0.9326 0.8384 0.8794 ...
> sapply(split(seq(nrow(x)), x$Line), function(z) colMeans(x[z,2:18]))
1 2 3 4 5
V2  0.6127840 0.5086587 0.5788833 0.4644615 0.4309832
V3  0.4767014 0.4742475 0.3711332 0.4924043 0.5234278
V4  0.5809474 0.4480547 0.7011737 0.4015890 0.4967741
V5  0.6308344 0.1973047 0.5139931 0.5954598 0.4103783
V6  0.5686456 0.2733915 0.4741611 0.6590434 0.3368377
V7  0.3733256 0.5335852 0.6860015 0.3432356 0.4859149
V8  0.5975514 0.5355630 0.6758798 0.4619429 0.6510002
V9  0.6814301 0.6151856 0.5076237 0.4173341 0.2176028
V10 0.6799704 0.3197591 0.3102719 0.4209485 0.4051600
V11 0.5540828 0.4474840 0.4946577 0.2194847 0.3836363
V12 0.5000410 0.1509925 0.3744429 0.2316218 0.3495196
V13 0.4898115 0.5852952 0.4697099 0.4346127 0.5736597
V14 0.4135897 0.7071779 0.4640510 0.5645719 0.7029126
V15 0.5346258 0.5340159 0.4429280 0.5265885 0.4918243
V16 0.4354619 0.7265643 0.4439110 0.4037036 0.4708805
V17 0.7393969 0.6011346 0.4725786 0.5430598 0.6076132
V18 0.4639411 0.6589378 0.4020718 0.5948647 0.2981538
>
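
The same split() idea gives the medians and standard deviations that were
asked for; a quick sketch using the same toy data frame 'x' as above:

med <- sapply(split(seq(nrow(x)), x$Line),
              function(z) sapply(x[z, 2:18], median))
sds <- sapply(split(seq(nrow(x)), x$Line),
              function(z) sapply(x[z, 2:18], sd))
med["V2", ]   # medians of V2 for Lines 1 to 5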



On 6/23/06, Ivan Baxter <[EMAIL PROTECTED]> wrote:
>
> Hi- I am having trouble with the syntax of  looping through  the rows
> and columns of a data frame.
>
> I have a table with 17 observations for 84 lines at n=5-10 per line. So
> the table is ~700x17.
>
> I want to pull out the median and stdev for each line and put it in a
> dataframe with rowname = linename.
>
> So I have tried the following
> #read in the table
> input.table <- read.table(file =  "First_run_all.txt", header = T)
> #pull out the line names
> line.run <- unique(input.table$Line)
> #pull out the column names except for Line
> el.names <- names(input.table[2:18])
>
>
> #now I want to calculate the median for each line for each column. The
> code below would work for a matrix
> calc.frame.med <- matrix(ncol = length(el.names), nrow =
> length(line.run), dimnames = list(line.run,el.names))
> for(i in 1:length(el.names)){
>for(j in 1:length(line.run)){
>   calc.frame.med[j,i] <- median(input.table[input.table$Line ==
> line.run[j],el.names[i]])
>}
> }
>
>
> #however, it won't allow me to pull stuff out based on the row names
> will it?
> batch1.med <- calc.frame.med[rownames(calc.frame.med) == batch1,]
> #doesn't work.
> #It seems like I want to create the data as a matrix and then be able to
> treat it like a data.frame.
>
> can anyone set me straight on the right way to do this?
>
> Thanks
>
> Ivan
>
>
>
> --
> **
> Ivan Baxter
> Research Scientist
> Bindley Bioscience Center
> Purdue University
> 765-543-7288
> [EMAIL PROTECTED]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390 (Cell)
+1 513 247 0281 (Home)

What is the problem you are trying to solve?

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint - eps not suitable

2006-06-23 Thread Michael H. Prager
Previous posters have argued for EPS files as a desirable transfer 
format for quality reasons.  This is of course true when the output is 
through a Postscript device.

However, the original poster is making presentations with PowerPoint.  
Those essentially are projected from the screen -- and screens of 
Windows PCs are NOT Postscript devices.  The version of PowerPoint I 
have will display a bitmapped, low-resolution preview when EPS is 
imported, and that is what will be projected.  It is passable, but much 
better can be done!

In this application, I have had best results using cut and paste or the 
Windows metafile format, both of which (as others have said) give 
scalable vector graphics.  When quirks of Windows metafile arise (as 
they can do, especially when fonts differ between PCs), I have had good 
results with PNG for line art and JPG for other art.
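
In R terms that comes down to the choice of device; a minimal sketch (the
file names are just examples, and win.metafile() is Windows-only):

win.metafile("myplot.wmf")        # scalable vector output for PowerPoint
plot(rnorm(100))
dev.off()

png("myplot.png", width = 1200, height = 900)   # high-resolution bitmap alternative
plot(rnorm(100))
dev.off()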

Mike

-- 
Michael Prager, Ph.D.
Southeast Fisheries Science Center
NOAA Center for Coastal Fisheries and Habitat Research
Beaufort, North Carolina  28516
** Opinions expressed are personal, not official.  No
** official endorsement of any product is made or implied.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Interpreting as.factor

2006-06-23 Thread Gabor Grothendieck
Try this:

model.matrix(y ~ x + as.factor(x1))

to see the model matrix that is being used.  Matrix multiplication of
that matrix by the coefficients is the prediction equation.
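
A small made-up example of what that means: each as.factor(x1) coefficient
is just a shift in the intercept for its level, relative to the baseline
level.

set.seed(1)
x  <- rnorm(20)
x1 <- rep(1:4, 5)
y  <- 2 + x + 0.5 * x1 + rnorm(20)
fit <- lm(y ~ x + as.factor(x1))
head(model.matrix(fit))                  # 0/1 dummy columns for levels 2, 3, 4
all.equal(as.vector(model.matrix(fit) %*% coef(fit)),
          as.vector(fitted(fit)))        # the prediction equation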

On 6/23/06, Justin Rapp <[EMAIL PROTECTED]> wrote:
> When I run a linear regression and include a variable in the
> regression with as.factor  i.e.
>
> lm(y ~ x + as.factor(x1))
>
> and i read the output as
> as.factor(x1)1
> as.factor(x1)2...
> etc.
>
> how do i interpret the estimate for each level?  Is this simply to be
> regarded as a shift in the equation predicted by the intercept and
> independent variable x?
>
> jdr
>
>
> --
> Justin Rapp
> 409 S. 22nd St.
> Apt. 1
> Philadelphia, PA 19146
> Cell:(267)252.0297
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problems with weekday extraction from zoo objects

2006-06-23 Thread Gabor Grothendieck
Please provide a reproducible example (and read the posting
guide at the bottom of each email).

On 6/23/06, Kerpel, John <[EMAIL PROTECTED]> wrote:
> Hi Folks!
>
>
>
> I'm struggling with dates - but enough about my personal life.
>
>
>
> I have two daily time series files.  In one (x) the date format is Y/m/d
> and the other (y) is d/m/y.  I used read.zoo on both and they read into
> R with no problem.
>
>
>
> Then I use: weekdays(as.Date(x$DATE)) and get what I expect - all the
> days of the week in my data set.
>
>
>
> When I use: weekdays(as.Date(y$Date)) I get:
>
>
>
> Error in fromchar(x) : character string is not in a standard unambiguous
> format
>
>
>
> I've tried to set the format= in read.zoo to format="%d/%m/%Y" but this
> doesn't seem to solve the problem.
>
>
>
> What's going on here?  (I'm new to these dates functions, so please be
> patient - I'll get the hang out of soon!)
>
>
>
> Best,
>
>
>
> John
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Interpreting as.factor

2006-06-23 Thread Dimitrios Rizopoulos
probably you want to look at the help page of ?contr.treatment
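
For a quick look at the default coding (treatment contrasts):

contr.treatment(4)   # columns are the dummy variables for levels 2, 3 and 4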

I hope it helps.

Best,
Dimitris

 
Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


Quoting Justin Rapp <[EMAIL PROTECTED]>:

> When I run a linear regression and include a variable in the
> regression with as.factor  i.e.
> 
> lm(y ~ x + as.factor(x1))
> 
> and i read the output as
> as.factor(x1)1
> as.factor(x1)2...
> etc.
> 
> how do i interpret the estimate for each level?  Is this simply to
> be
> regarded as a shift in the equation predicted by the intercept and
> independent variable x?
> 
> jdr
> 
> 
> -- 
> Justin Rapp
> 409 S. 22nd St.
> Apt. 1
> Philadelphia, PA 19146
> Cell:(267)252.0297
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
> 
> 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re : Interpreting as.factor

2006-06-23 Thread justin bem
Hi Justin!
 
The interpretation depends on the contrasts. Read about ANOVA in any good book
(try Analysis of Variance, Henry Scheffé, 1959).


----- Original Message -----
From: Justin Rapp <[EMAIL PROTECTED]>
To: r-help@stat.math.ethz.ch
Sent: Friday, 23 June 2006, 10:07:38
Subject: [R] Interpreting as.factor


When I run a linear regression and include a variable in the
regression with as.factor  i.e.

lm(y ~ x + as.factor(x1))

and i read the output as
as.factor(x1)1
as.factor(x1)2...
etc.

how do i interpret the estimate for each level?  Is this simply to be
regarded as a shift in the equation predicted by the intercept and
independent variable x?

jdr


-- 
Justin Rapp
409 S. 22nd St.
Apt. 1
Philadelphia, PA 19146
Cell:(267)252.0297

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Problems creating packages.

2006-06-23 Thread Robert Robinson
I'm creating my own package for personal use and I'm having trouble
getting it to a point where R (v 2.3.1) will recognise it.  I've
followed two different tutorials for how to create the package
structure and the DESCRIPTION file (
http://web.maths.unsw.edu.au/~wand/webcpdg/rpack.html ,
http://www.maths.bris.ac.uk/~maman/computerstuff/Rhelp/Rpackages.html#Lin-Lin
).  I'm still getting errors where when I try to load the library in R
by using library(samp) I get an error:

Error in library(samp) : 'samp' is not a valid package -- installed < 2.0.0?

And when I use the library() call I get this:

samp** No title available (pre-2.0.0 install?) **

I'm not really sure where to go; I've looked through the huge document
on how to create an R package on the site but that didn't help.  Just
for reference I'm going to add a sample of everything because at this
point I really don't know what the problem is.  The basic procedure is
what they lay out in the tutorials I linked above.  First I made the
files, followed by calling R CMD build samp on the samp directory that
holds all my files, followed by calling sudo R CMD INSTALL
samp_0.1-1.tar.gz.  I don't get errors on either of those calls.  When
I call R CMD check
samp I get this output:

* checking for working latex ...sh: latex: command not found
 NO
* using log directory '/home/rrobinson/myrpac/samp.Rcheck'
* using Version 2.3.0 (2006-04-24)
* checking for file 'samp/DESCRIPTION' ... OK
* this is package 'samp' version '0.1-1'
* checking package dependencies ... OK
* checking if this is a source package ... ERROR
Only *source* packages can be checked.

Here is what my file structure looks like:

.:
DESCRIPTION  man  R

./man:
bootstrap.rd  hessian.rd loglikelihood.rd  stdeviation6.rd
freq.rd   ks.rd  score6.rd stdeviation.rd
hessian6.rd   loglikelihood6.rd  score.rd

./R:
bootsample.r  hessian6.r  loglikelihood6.r  score6.rstdeviation.r
firstlib.rhessian.r   loglikelihood.r   score.r
freq.rks.rProb.rstdeviation6.r

here is what my DESCRIPTION file looks like:

Package: samp
Title: R custom simulation package
Version: 0.1-1
Date: 2006-06-23
Author: Cheng Peng <*>
Maintainer: ** <[EMAIL PROTECTED]>
Description: Functions for custom simulation, includes a score and
loglikelihood function.
License: GPL (version 2 or later)
Built: R 2.3.0; i686-pc-linux-gnu; 2006-06-16 11:28:36; unix
Packaged: Fri Jun 16 11:36:02 2006; root

Here is a sample .r file format:

###
#
#Freq function returns the frequencies of numerical vectors
#
###

  freq <- function(x, x1, x2, x3){
     nn1 <- rep(0, length(x))
     nn2 <- rep(0, length(x))
     nn3 <- rep(0, length(x))
     for (i in 1:length(x)){
        nn1[i] <- sum(x[i] == x1)
        nn2[i] <- sum(x[i] == x2)
        nn3[i] <- sum(x[i] == x3)
     }
     mm <- as.matrix(rbind(nn1, nn2, nn3))
     mm
  }

Here is a sample man page format:

\name{freq}
\alias{freq}
\title{Frequencies of numerical vectors}
\description{
   Freq function returns the frequencies of numerical vectors
}
\usage{
freq(x, x1, x2, x3)
}
\arguments{
   \item{x}{numerical vector whose elements are counted}
   \item{x1}{numerical vector}
   \item{x2}{numerical vector}
   \item{x3}{numerical vector}
}
\keyword{custom}


Thanks to anyone who can help me solve this.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Interpreting as.factor

2006-06-23 Thread Justin Rapp
When I run a linear regression and include a variable in the
regression with as.factor  i.e.

lm(y ~ x + as.factor(x1))

and I read the output as
as.factor(x1)1
as.factor(x1)2...
etc.

how do I interpret the estimate for each level?  Is this simply to be
regarded as a shift in the equation predicted by the intercept and
independent variable x?

jdr


-- 
Justin Rapp
409 S. 22nd St.
Apt. 1
Philadelphia, PA 19146
Cell:(267)252.0297

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problems with weekday extraction from zoo objects

2006-06-23 Thread Kerpel, John
Hi Folks!

 

I'm struggling with dates - but enough about my personal life.

 

I have two daily time series files.  In one (x) the date format is Y/m/d
and the other (y) is d/m/y.  I used read.zoo on both and they read into
R with no problem.  

 

Then I use: weekdays(as.Date(x$DATE)) and get what I expect - all the
days of the week in my data set.

 

When I use: weekdays(as.Date(y$Date)) I get:

 

Error in fromchar(x) : character string is not in a standard unambiguous
format

 

I've tried to set the format= in read.zoo to format="%d/%m/%Y" but this
doesn't seem to solve the problem.

 

What's going on here?  (I'm new to these dates functions, so please be
patient - I'll get the hang of it soon!)

 

Best,

 

John


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Tetrachoric correlation in R vs. stata

2006-06-23 Thread Peter Dalgaard
Janet Rosenbaum <[EMAIL PROTECTED]> writes:

> I hope someone here knows the answer to this since it will save me from 
> delving deep into documentation.
> 
> Based on 22 pairs of vectors, I have noticed that tetrachoric 
> correlation coefficients in stata are almost uniformly higher than those 
> in R, sometimes dramatically so (TCC=.61 in stata, .51 in R;  .51 in 
> stata, .39 in R).  Stata's estimate is higher than R's in 20 out of 22 
> computations, although the estimates always fall within the 95% CI for 
> the TCC calculated by R.
> 
> Do stata and R calculate TCC in dramatically different ways?  Is the 
> handling of missing data perhaps different?  Any thoughts?
> 
> Btw, I am sending this question only to the R-help list.


A bit more information seems necessary:

- tetrachoric correlations depend on 4 numbers, so you should be able
  to give a direct example

- you're not telling us how you calculate the TCC in R. This is not
  obvious (package polycor?).

-- 
   O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([EMAIL PROTECTED])        FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Tetrachoric correlation in R vs. stata

2006-06-23 Thread John Fox
Dear Janet,

Are you using the polychor() function in the polycor package to compute
tetrachoric correlations? If so, two methods are provided: A relatively
quick method (the default) and ML. The methods implemented are
described in the references given in ?polycor.

Missing data simply are eliminated from the contingency table from
which a tetrachoric correlation is computed. If, however, you're using
hetcor() to compute a matrix of tetrachoric correlations, then missing
data are handled according to the use argument, which defaults to
"complete.obs" and is described in ?hetcor.

If you want to know whether polychor() or Stata is right, then one
thing that you might do is try them on data for which you know the
answer. If you do this, you should of course make sure that both are
trying to compute the same thing (e.g., the ML estimate).
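
For instance, with the 2x2 table that appears elsewhere in this thread, the
ML estimate can be checked directly; a quick sketch (this assumes the polycor
package, and I have not rerun it myself):

library(polycor)
tab <- matrix(c(522, 34, 54, 22), nrow = 2)   # the posted 2x2 table
polychor(tab, ML = TRUE, std.err = TRUE)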

I hope this helps,
 John

On Fri, 23 Jun 2006 10:42:12 -0700
 Janet Rosenbaum <[EMAIL PROTECTED]> wrote:
> 
> I hope someone here knows the answer to this since it will save me
> from 
> delving deep into documentation.
> 
> Based on 22 pairs of vectors, I have noticed that tetrachoric 
> correlation coefficients in stata are almost uniformly higher than
> those 
> in R, sometimes dramatically so (TCC=.61 in stata, .51 in R;  .51 in 
> stata, .39 in R).  Stata's estimate is higher than R's in 20 out of
> 22 
> computations, although the estimates always fall within the 95% CI
> for 
> the TCC calculated by R.
> 
> Do stata and R calculate TCC in dramatically different ways?  Is the 
> handling of missing data perhaps different?  Any thoughts?
> 
> Btw, I am sending this question only to the R-help list.
> 
> Thanks,
> 
> Janet
> 
> 
> 
> 
> This email message is for the sole use of the intended ...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R connectivity to database

2006-06-23 Thread Wensui Liu
Ray,

R can talk to MS Access very well through RODBC. Here is how:

library(RODBC);

mdbConnect<-odbcConnectAccess("C:\\temp\\demo.mdb");

sqlTables(mdbConnect);

demo<-sqlFetch(mdbConnect, "tblDemo");

odbcClose(mdbConnect);

rm(demo);



On 6/23/06, Ray D. <[EMAIL PROTECTED]> wrote:
>
> Hello, does anyone know how I would go about getting R to connect to
> OpenOffice's Base program (OOo's version of MS Access) such that I can
> retrieve data from the database and perform calculations and data
> analysis?  I'm totally new to R and Base and I've looked at some
> documentation, but found only examples for R connecting to PostgreSQL and
> MySQL, but nothing for OOo's Base (there wasn't any examples for MS Access
> either even though it says that R can connect to MS Access).  Is R even
> capable of this or am I just out of luck to use R and OOo's Base
> together?  Thanks in advance.
>
> -ray
>
>
> -
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>



-- 
WenSui Liu
(http://spaces.msn.com/statcompute/blog)
Senior Decision Support Analyst
Health Policy and Clinical Effectiveness
Cincinnati Children Hospital Medical Center

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Cleber N. Borges

Hello,
IMHO,

For the printer:

1 - The best choice of graphics format is PostScript ( PS ) in
Microsoft ( M$ ) applications, once you install the M$ Convert Pack [1]!
Make a preview for the PS file in EMF format; use epstool for this.

2 - Enhanced MetaFile ( EMF ) in M$ and OpenOffice ( OOo ) is not the
same... This can be a problem! See the pstoedit page [2].

3 - In OOo, I use the EPS file with the following procedure:
  - save my graphic as PS
  - use epstool to produce EPS ( PS with a TIFF preview )
    -> the preview is a TIFF graphic of low quality

  *note: an EMF can also be included as the preview

  - I need a PS printer... in the case of a non-PS printer, the TIFF
    (low quality) will be printed! { :-(  }  Then I make the final PDF
    report with extendedPDF [3].

  - this procedure also works in M$ Word


HTH,
Cleber N. Borges {klebyn}
---

[1] -
http://www.microsoft.com/downloads/details.aspx?FamilyID=cf196df0-70e5-4595-8a98-370278f40c57&DisplayLang=en

[2] - http://www.pstoedit.net/pstoedit

[3] - http://www.3bview.com/epdf-home.html







Marc Bernard wrote:

>Dear All,
>   
>  I am looking for the best way to use graphs from R (like xyplot, curve ...)  
>  for a presentation with powerpoint. I used to save my plot as pdf and after 
> to copy them as image in powerpoint but the quality is not optimal by so 
> doing.
>   
>  Another completely independent question is the following: when I use "main"  
> in the  xyplot, the main title is very close to my plot, i.e. no ligne 
> separate the main and the plot. I would like my title to be well 
> distinguished from the plots.
>   
>  I would be grateful for any improvements...
>   
>  Many thanks,
>   
>  Bernard,
>   
>
>
>   
>-
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@stat.math.ethz.ch mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>
>.
>
>  
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R connectivity to database

2006-06-23 Thread Ray D.
Hello, does anyone know how I would go about getting R to connect to 
OpenOffice's Base program (OOo's version of MS Access) such that I can retrieve 
data from the database and perform calculations and data analysis?  I'm totally 
new to R and Base and I've looked at some documentation, but found only 
examples for R connecting to PostgreSQL and MySQL, but nothing for OOo's Base 
(there weren't any examples for MS Access either, even though it says that R can 
connect to MS Access).  Is R even capable of this or am I just out of luck to 
use R and OOo's Base together?  Thanks in advance.
   
  -ray


-


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint - eps not suitable

2006-06-23 Thread Marc Schwartz (via MN)
On Fri, 2006-06-23 at 14:02 -0400, Michael H. Prager wrote:
> Previous posters have argued for EPS files as a desirable transfer 
> format for quality reasons.  This is of course true when the output is 
> through a Postscript device.
> 
> However, the original poster is making presentations with PowerPoint.  
> Those essentially are projected from the screen -- and screens of 
> Windows PCs are NOT Postscript devices.  The version of PowerPoint I 
> have will display a bitmapped, low-resolution preview when EPS is 
> imported, and that is what will be projected.  It is passable, but much 
> better can be done!
> 
> In this application, I have had best results using cut and paste or the 
> Windows metafile format, both of which (as others have said) give 
> scalable vector graphics.  When quirks of Windows metafile arise (as 
> they can do, especially when fonts differ between PCs), I have had good 
> results with PNG for line art and JPG for other art.
> 
> Mike

Just so that it is covered (though this has been noted in other
threads), even in this situation, one can still use EPS files embedded
in PowerPoint (or Impress) presentations.

The scenario is to print out the PowerPoint presentation to a Postscript
file (using a PS printer driver). If you have Ghostscript installed, you
can then use ps2pdf to convert the PS file to a PDF file.

If you have OO.org, there is a Distiller type of printer driver called
PDF Converter (configured via the printer admin program) available,
which you can use to go directly to a PDF in a single step. This also
uses Ghostscript (-sDEVICE=pdfwrite) as an intermediary (though hidden
from the user) step.

The standard OO.org PDF export mechanism (using the toolbar icon) only
exports the bitmapped preview, not the native EPS image. This is what
you see as the preview image in these "Office" type of apps by default.

Most PDF file viewers (Acrobat, xpdf, Evince, etc.) have a full screen
mode, whereby you can then use the viewer to display the presentation in
a landscape orientation to an audience.

I have done this frequently (under Linux with OO.org) to facilitate
presentations, when for any number of reasons, using LaTeX (ie. Beamer)
was not practical.

Even when using Beamer, the net result is still the same: creating a PDF
file via pdflatex, which is then displayed landscape in a PDF rendering
application full screen. 

This was the typical mode of operation at last week's useR! meeting in
Vienna.

All that being said, the ultimate test is in the eye of the user. So
whatever gives you sufficient quality for your application with minimal
hassle is the way to go.

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] looping through a data frame

2006-06-23 Thread Ivan Baxter
Hi- I am having trouble with the syntax of  looping through  the rows 
and columns of a data frame.

I have a table with 17 observations for 84 lines at n=5-10 per line. So 
the table is ~700x17.

I want to pull out the median and stdev for each line and put it in a 
dataframe with rowname = linename.

So I have tried the following
#read in the table
input.table <- read.table(file =  "First_run_all.txt", header = T)
#pull out the line names
line.run <- unique(input.table$Line)
#pull out the column names except for Line
el.names <- names(input.table[2:18])


#now I want to calculate the median for each line for each column. The 
code below would work for a matrix
calc.frame.med <- matrix(ncol = length(el.names), nrow = 
length(line.run), dimnames = list(line.run,el.names))
for(i in 1:length(el.names)){
for(j in 1:length(line.run)){
   calc.frame.med[j,i] <- median(input.table[input.table$Line == 
line.run[j],el.names[i]])
}
}


#however, it won't allow me to pull stuff out based on the row names 
will it?
batch1.med <- calc.frame.med[rownames(calc.frame.med) == batch1,]
#doesn't work.
#It seems like I want to create the data as a matrix and then be able to 
treat it like a data.frame.

can anyone set me straight on the right way to do this?

Thanks

Ivan



-- 
**
Ivan Baxter
Research Scientist
Bindley Bioscience Center
Purdue University
765-543-7288
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] default help system (was Re: Basic package structure question)

2006-06-23 Thread Paul Roebuck
<<>>

On Fri, 23 Jun 2006, Duncan Murdoch wrote:

> >> On Fri, 23 Jun 2006, Joerg van den Hoff wrote:
> >>
> >>> our Windows machines lack proper development environments (mainly
> >>> missing perl is the problem for pure R-code packages, I believe?) and we
> >>> bypass this (for pure R-code packages only, of course) by
> >>>
> >>> 1.) install the package on the unix machine into the desired R library
> >>> 2.) zip the _installed_ package (not the source tree!) found in the R
> >>>   library directory
> >>> 3.) transfer this archive to the Windows machine
> >>> 4.) unzip directly into the desired library destination
> >>>
> >>> this procedure up to now always worked including properly installed
> >>> manpages (text + html (and I hope this remains the case in the future...)
>
> One obvious limitation of this install method is that it won't produce
> native Windows help files (.chm).  Plans are to make CHM the default
> help system as of the 2.4.0 release, so your packages will not work
> properly unless you give special instructions on how to change the help
> system defaults.

Is the decision to change default help files to CHM set
in stone? What's the payback for this change?

--
SIGSIG -- signature too long (core dumped)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint - eps not suitable

2006-06-23 Thread Gabor Grothendieck
I think I was just comparing the ones that were discussed but
certainly the vector format used on Windows is normally emf or wmf
and that is what I would normally use too.
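
For reference, a minimal sketch of both export routes on a Windows build of R
(the file names are just placeholders):

win.metafile("myplot.emf", width = 6, height = 4)   # vector: scales cleanly in PowerPoint
plot(rnorm(50), type = "l")
dev.off()

png("myplot.png", width = 1200, height = 900, pointsize = 20)   # high-resolution bitmap fallback
plot(rnorm(50), type = "l")
dev.off()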

On 6/23/06, Michael H. Prager <[EMAIL PROTECTED]> wrote:
> Previous posters have argued for EPS files as a desirable transfer
> format for quality reasons.  This is of course true when the output is
> through a Postscript device.
>
> However, the original poster is making presentations with PowerPoint.
> Those essentially are projected from the screen -- and screens of
> Windows PCs are NOT Postscript devices.  The version of PowerPoint I
> have will display a bitmapped, low-resolution preview when EPS is
> imported, and that is what will be projected.  It is passable, but much
> better can be done!
>
> In this application, I have had best results using cut and paste or the
> Windows metafile format, both of which (as others have said) give
> scalable vector graphics.  When quirks of Windows metafile arise (as
> they can do, especially when fonts differ between PCs), I have had good
> results with PNG for line art and JPG for other art.
>
> Mike
>
> --
> Michael Prager, Ph.D.
> Southeast Fisheries Science Center
> NOAA Center for Coastal Fisheries and Habitat Research
> Beaufort, North Carolina  28516
> ** Opinions expressed are personal, not official.  No
> ** official endorsement of any product is made or implied.
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] integrate

2006-06-23 Thread Spencer Graves


Thomas Lumley wrote:
> On Fri, 23 Jun 2006, Rogério Rosa da Silva wrote:
> 
>> Dear All,
>>
>> My doubt about how to integrate a simple kernel density estimation 
>> goes on.
>>
>> I have seen the recent posts on integrate density estimation, which seem
>> similar to my question. However, I haven't found a solution.
>>
>> I have made two simple kernel density estimation by:
>>
>>kde.1 <-density(x, bw=sd(x), kernel="gaussian")$y # x<- 
>> c(2,3,5,12)
>>kde.2 <-density(y, bw=sd(y), kernel="gaussian")$y # y<- 
>> c(4,2,4,11)
>>
>> Now I would like to integrate the difference in the estimated density
>> values, i.e.:
>>
>>diff.kde <- abs (kde.1- kde.2)
>>
>> How can I integrate diff.kde over -Inf to Inf ?
> 
> Well, the answer is zero.
> 
> Computationally this is a bit tricky.  You can turn the density 
> estimates into functions with approxfun()
> x<-rexp(100)
> kde<-density(x)
>  f<-approxfun(kde$x,kde$y,rule=2)
> integrate(f,-1,10)
> 1.000936 with absolute error < 3.3e-05
> 
> But if you want to integrate over -Inf to Inf you need the function to 
> specify the values outside the range of the data.  The only value that 
> will work over the range -Inf to Inf is zero

  This may be true for standard splines but not if the segment at the 
end is log-linear.  For example, consider data with range(x) = c(x[1], 
x[n]) and (n-2) knots at x[i], i = 2:(n-1).  If the segment below x[2] 
is log-linear, exp(a1+b1*x), with b1 > 0, then

  integral{exp(a1+b1*x) from -Inf to x[2]}
  = exp(a1+b1*x[2])/b1.
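
A quick numerical check of that closed form (the constants a1 and b1 and the
knot x[2] below are arbitrary):

a1 <- 0.5; b1 <- 1.2; x2 <- -1
integrate(function(x) exp(a1 + b1 * x), -Inf, x2)$value   # numerical tail integral
exp(a1 + b1 * x2) / b1                                    # closed form exp(a1+b1*x[2])/b1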

  Do you know if this kind of thing has been studied?  I am NOT 
familiar with the literature on splines, but it would seem to me that 
this kind of thing could be quite valuable for building fast 
approximations to various mixture distributions that are otherwise quite 
difficult and expensive to compute.  It seems to me that this kind of 
thing could be used in multilevel modeling, with multidimensional 
smoothing splines fit to data obtained from potentially expensive 
evaluations of the unconditional likelihood, and the marginal likelihood 
could then be computed and optimized from the spline.  Then an estimate 
of the uncertainty could then be converted into another set of points at 
which to evaluate the likelihood and the process repeated until 
convergence.  I have problems like this that I can't solve right now, 
and it seems to me that this might lead to fast, accurate solutions.

  Comments?
  Spencer Graves

>> f<-approxfun(kde$x,kde$y,yleft=0,yright=0)
>> integrate(f,-1,10)
> 1.00072 with absolute error < 1.5e-05
>> integrate(f,-Inf,Inf)
> 1.000811 with absolute error < 2.3e-05
> 
> 
> -thomas
> 
> 
> 
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Tetrachoric correlation in R vs. stata

2006-06-23 Thread Janet Rosenbaum

I hope someone here knows the answer to this since it will save me from 
delving deep into documentation.

Based on 22 pairs of vectors, I have noticed that tetrachoric 
correlation coefficients in stata are almost uniformly higher than those 
in R, sometimes dramatically so (TCC=.61 in stata, .51 in R;  .51 in 
stata, .39 in R).  Stata's estimate is higher than R's in 20 out of 22 
computations, although the estimates always fall within the 95% CI for 
the TCC calculated by R.

Do stata and R calculate TCC in dramatically different ways?  Is the 
handling of missing data perhaps different?  Any thoughts?
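
One thing worth checking -- a sketch, assuming the polycor package was used on
the R side -- is whether the two programs use the same estimator: polychor()
defaults to a quick two-step estimate, while ML = TRUE requests the
maximum-likelihood estimate, and the two can differ noticeably in small samples.

library(polycor)
x <- rbinom(200, 1, 0.4)    # made-up binary data
y <- rbinom(200, 1, 0.5)
tab <- table(x, y)
polychor(tab)               # two-step estimate
polychor(tab, ML = TRUE)    # maximum-likelihood estimate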

Btw, I am sending this question only to the R-help list.

Thanks,

Janet




This email message is for the sole use of the intended recip...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Philipp Pagel
On Fri, Jun 23, 2006 at 11:27:00AM -0500, Marc Schwartz (via MN) wrote:
> On Fri, 2006-06-23 at 18:16 +0200, Philipp Pagel wrote:
> > On Fri, Jun 23, 2006 at 09:21:37AM -0400, Gabor Grothendieck wrote:
> > > Note that jpg, bmp and png are in less desirable bit mapped formats 
> > > whereas
> > > eps is in a more desirable vector format (magnification and shrinking does
> > > not involve loss of info) and so would be preferable from a quality
> > > viewpoint.
> > 
> > In addition to seconding the above statement I'd like to add that in
> > cases where you are forced to use a bitmap format png tends to produce
> > much better results than jpg where line drawings (e.g. most plots) are
> > concerned. JPG format on the other hand is great for anything which can be
> > described as photography-like. jpg images of plots tend to suffer from
> > bad artifacts...

> That is generally because png files are not compressed, whereas jpg
> files are. 

In fact, png uses a combination of pre-filtering and lossless compression 
(in contrast to the lossy compression algorithms used in jpg). 

Of course, lossy compression can achieve much smaller file sizes for
most images. While even a substantial loss of information can go
undetected by the observer in the case of photographic images with no
sharp edges, line drawings suffer badly. 

Line drawings usually contain vast percentages of empty space (i.e.
white) and thus can be compressed quite effectively by the
pre-filtering/lossless compression used by png.

Anyway - I totally agree that eps rules and pixel formats should be
avoided at all cost for illustrations, plots, etc. ...

cu
Philipp

-- 
Dr. Philipp PagelTel.  +49-8161-71 2131
Dept. of Genome Oriented Bioinformatics  Fax.  +49-8161-71 2186
Technical University of Munich
Science Center Weihenstephan
85350 Freising, Germany

 and

Institute for Bioinformatics / MIPS  Tel.  +49-89-3187 3675
GSF - National Research Center   Fax.  +49-89-3187 3585
  for Environment and Health
Ingolstädter Landstrasse 1
85764 Neuherberg, Germany
http://mips.gsf.de/staff/pagel

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Paul Artes

Most useful: the Ungroup command in ppt lets you take apart the graph when
you insert it as wmf. 
I often resort to that when I want to change labels / fonts / colours etc.
Very flexible.
--
View this message in context: 
http://www.nabble.com/PowerPoint-t1835745.html#a5016074
Sent from the R help forum at Nabble.com.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem with hist() for 'times' objects from 'chron' package

2006-06-23 Thread Prof Brian Ripley
As `Writing R Extensions' points out, using traceback() is often very 
informative.  Here it gives

> traceback()
3: axis(2, adj = adj, cex = cex, font = font, las = las, lab = lab,
mgp = mgp, tcl = tcl)
2: hist.times(dts)
1: hist(dts)

and the problem is partial matching of 'lab' to 'labels'.

This is discussed on the help page for axis, and in fact passing 'lab' to 
axis has never worked.  Please report this to the package maintainer (as 
advised in the posting guide), and as a fix edit hist.times to remove 
', lab=lab'.

On Fri, 23 Jun 2006, Bojanowski, M.J.  (Michal) wrote:

> Hello dear useRs and wizaRds,
>
> I encountered the following problem using the hist() method for the
> 'times' classes
> from package 'chron'. You should be able to recreate it using the code:
>
>
>
> library(chron)
>
> # pasted from chron help file (?chron)
> dts <- dates(c("02/27/92", "02/27/92", "01/14/92", "02/28/92",
> "02/01/92"))
> class(dts)
>
> hist(dts) # which yields:
>
> # Error in axis(side, at, labels, tick, line, pos, outer, font, lty,
> lwd,  :
> #'label' is supplied and not 'at'
> # In addition: Warning messages:
> # 1: "histo" is not a graphical parameter in: plot.window(xlim, ylim,
> log, asp, ...)
> # 2: "histo" is not a graphical parameter in: title(main, sub, xlab,
> ylab, line, outer, ...)
>
>
>
> The plot is produced, but there are no axes.
>
> As far as it goes for the warnings I looked in the sources and I think
> they are caused
> by hist.times() in package 'chron' which is calling the barplot() with
> the histo=TRUE,
> whereas neither barplot(), plot.window(), title() nor axis() seem to
> accept this argument.
>
> Am I doing something wrong or there is something not right with the
> hist.times method?
>
>
> Kind regards,
>
> Michal
>
>
>
>
>
> PS: I'm using R 2.3.1 on Windows XP and:
>
>
>
>> sessionInfo()
> Version 2.3.1 (2006-06-01)
> i386-pc-mingw32
>
> attached base packages:
> [1] "methods"   "stats" "graphics"  "grDevices" "utils"
> "datasets"
> [7] "base"
>
> other attached packages:
>  chron
> "2.3-3"
>>
>
>
> library(help="chron")
>
> Package:   chron
> Version:   2.3-3
> Date:  2006-05-09
> Author:S original by David James <[EMAIL PROTECTED]>, R
>   port by Kurt Hornik <[EMAIL PROTECTED]>.
> Maintainer:Kurt Hornik <[EMAIL PROTECTED]>
> Description:   Chronological objects which can handle dates and times
> Title: Chronological objects which can handle dates and times
> Depends:   R (>= 1.6.0)
> License:   GPL
> Packaged:  Fri May 12 09:31:49 2006; hornik
> Built: R 2.3.0; i386-pc-mingw32; 2006-05-13 12:21:51; windows
>
>
>
>
>
>
> ~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~
>
> Michal Bojanowski
> ICS / Utrecht University
> Heidelberglaan 2; 3584 CS Utrecht
> Room 1428
> [EMAIL PROTECTED]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Effect size in mixed models

2006-06-23 Thread Dave Atkins

Spencer & Bruno--

You might to take a look at a paper by Kyle Roberts at:

http://www.hlm-online.com/papers/

Kyle has been working on the issue of effect-sizes in mixed-effects (aka 
multilevel, aka HLM) models for a couple of years.  I haven't had a chance to compare what 
Spencer has suggested with Kyle's approach.  [Though, it would be *lovely* if 
there were an agreed upon method for estimating effect-sizes in mixed-effects 
models; in psychology, reviewers will bludgeon you for an effect-size...]

cheers, Dave
-- 
Dave Atkins, PhD
Assistant Professor in Clinical Psychology
Fuller Graduate School of Psychology
Email: [EMAIL PROTECTED]



Spencer wrote:

   I just learned that my earlier suggestion was wrong.  It's better to
compute the variance of the predicted or fitted values and compare those
with the estimated variance components.

  To see how to do this, consider the following minor modification of
an example in the "lme" documentation:

fm1. <- lme(distance ~ age, data = Orthodont, random=~1)
fm2. <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)

# str(fm1.) suggested the following:
  > var(fm2.$fitted[, "fixed"]-fm1.$fitted[, "fixed"])
[1] 1.312756
  > VarCorr(fm1.)[, 1]
(Intercept)Residual
   "4.472056"  "2.049456"
  > VarCorr(fm2.)[, 1]
(Intercept)Residual
   "3.266784"  "2.049456"

  In this example, the subject variance without considering "Sex" was
4.47 but with "Sex" in the model, it dropped to 3.27, while the Residual
variance remained unchanged at 2.05.  The difference between
fm2.$fitted[, "fixed"] and fm1.$fitted[, "fixed"] is the change in the
predictions generated by the addition of "Sex" to the model.  The
variance of that difference was 1.31.  Note that 3.27 + 1.31 = 4.58,
which is moderately close to 4.47.

  In sum, I think we can get a reasonable estimate of the size of an
effect from the variance of the differences in the "fixed" portion of
the fitted model.

  Comments?
  Hope this helps.
  Spencer Graves

Spencer Graves wrote:
 >   You have asked a great question:  It would indeed be useful to
 > compare the relative magnitude of fixed and random effects, e.g. to
 > prioritize efforts to better understand and possibly manage processes
 > being studied.  I will offer some thoughts on this, and I hope if there
 > are errors in my logic or if someone else has a better idea, we will
 > both benefit from their comments.
 >
 >   The ideal might be an estimate of something like a mean square for
 > a particular effect to compare with an estimated variance component.
 > Such mean squares were a mandatory component of any analysis of variance
 > table prior to the (a) popularization of generalized linear models and
 > (b) availability of software that made it feasible to compute maximum
 > likelihood estimates routinely for unbalanced, mixed-effects models.
 > However, in anova(lme(...)) such mean squares are for most purposes
 > unnecessary clutter in a modern anova table.
 >
 >   To estimate a mean square for a fixed effect, consider the
 > following log(likelihood) for a mixed-effects model:
 >
 >   lglk = (-0.5)*(n*log(2*pi*var.e)-log(det(W)) +
 > t(y-X%*%b)%*%W%*%(y-X%*%b)/var.e),
 >
 > where n = the number of observations,
 >
 >   b = the fixed-effect parameter vector,
 >
 > and the covariance matrix of the residuals, after integrating out the
 > random effects is var.e*solve(W).  In this formulation, the matrix "W"
 > is a function of the variance components.  Since they are not needed to
 > compute the desired mean squares, they are suppressed in the notation here.
 >
 >   Then, the maximum likelihood estimate of
 >
 >   var.e = SSR/n,
 >
 > where SSR = t(y-X%*%b)%*%W%*%(y-X%*%b).
 >
 >   Then
 >
 >   mle.lglk = (-0.5)*(n*(log(2*pi*SSR/n)+1)-log(det(W))).
 >
 >   Now let
 >
 >   SSR0 = this generalized sum of squares of residuals (SSR) without
 > effect "1",
 >
 > and
 >
 >   SSR1 = this generalized SSR with this effect "1".
 >
 >   If I've done my math correctly, then
 >
 >   D = deviance = 2*log(likelihood ratio)
 > = (n*log(SSR0/SSR1)+log(det(W1)/det(W0)))
 >
 >   For roughly half a century, a major part of "the analysis of
 > variance" was the Pythagorean idea that the sum of squares under H0 was
 > the sum of squares under H1 plus the sum of squares for effect "1":
 >
 >   SSR0 = SS1 + SSR1.
 >
 >   Whence,
 >
 >   exp((D-log(det(W1)/det(W0)))/n) = 1+SS1/SSR1.
 >
 > Thus,
 >
 >   SS1 = SSR1*(exp((D-log(det(W1)/det(W0)))/n)-1).
 >
 >   If the difference between det(W1) and det(W0) can be ignored, we get:
 >
 >   SS1 = SSR1*(exp(D/n)-1).
 >
 >   Now compute MS1 = SS1/df1, and compare with the variance components.
 >
 >   If there is a flaw in this logic, I hope someone will disabuse me
 > of it.
 >
 >   If this seems too terse or convoluted to follow, please provide a
 > simple, self-contained example, as suggested in the posting guide.

Re: [R] problem with hist() for 'times' objects from 'chron' package

2006-06-23 Thread Gabor Grothendieck
Try this:

hist(as.Date(dts), "days") # or "weeks" or "months"

On 6/23/06, Bojanowski, M.J.  (Michal) <[EMAIL PROTECTED]> wrote:
> Hello dear useRs and wizaRds,
>
> I encountered the following problem using the hist() method for the
> 'times' classes
> from package 'chron'. You should be able to recreate it using the code:
>
>
>
> library(chron)
>
> # pasted from chron help file (?chron)
> dts <- dates(c("02/27/92", "02/27/92", "01/14/92", "02/28/92",
> "02/01/92"))
> class(dts)
>
> hist(dts) # which yields:
>
> # Error in axis(side, at, labels, tick, line, pos, outer, font, lty,
> lwd,  :
> #'label' is supplied and not 'at'
> # In addition: Warning messages:
> # 1: "histo" is not a graphical parameter in: plot.window(xlim, ylim,
> log, asp, ...)
> # 2: "histo" is not a graphical parameter in: title(main, sub, xlab,
> ylab, line, outer, ...)
>
>
>
> The plot is produced, but there are no axes.
>
> As far as it goes for the warnings I looked in the sources and I think
> they are caused
> by hist.times() in package 'chron' which is calling the barplot() with
> the histo=TRUE,
> whereas neither barplot(), plot.window(), title() nor axis() seem to
> accept this argument.
>
> Am I doing something wrong or there is something not right with the
> hist.times method?
>
>
> Kind regards,
>
> Michal
>
>
>
>
>
> PS: I'm using R 2.3.1 on Windows XP and:
>
>
>
> > sessionInfo()
> Version 2.3.1 (2006-06-01)
> i386-pc-mingw32
>
> attached base packages:
> [1] "methods"   "stats" "graphics"  "grDevices" "utils"
> "datasets"
> [7] "base"
>
> other attached packages:
>  chron
> "2.3-3"
> >
>
>
> library(help="chron")
>
> Package:   chron
> Version:   2.3-3
> Date:  2006-05-09
> Author:S original by David James <[EMAIL PROTECTED]>, R
>   port by Kurt Hornik <[EMAIL PROTECTED]>.
> Maintainer:Kurt Hornik <[EMAIL PROTECTED]>
> Description:   Chronological objects which can handle dates and times
> Title: Chronological objects which can handle dates and times
> Depends:   R (>= 1.6.0)
> License:   GPL
> Packaged:  Fri May 12 09:31:49 2006; hornik
> Built: R 2.3.0; i386-pc-mingw32; 2006-05-13 12:21:51; windows
>
>
>
>
>
>
> ~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~
>
> Michal Bojanowski
> ICS / Utrecht University
> Heidelberglaan 2; 3584 CS Utrecht
> Room 1428
> [EMAIL PROTECTED]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] rearranging data frame rows

2006-06-23 Thread Bojanowski, M.J. \(Michal\)
Hi Fede,

How about using merge()? For example:

n <- letters[1:10]
d1 <- data.frame( n=n, x1=rnorm(10) )
d2 <- data.frame( n=sample(n), x2=rnorm(10))
d1
d2
merge(d1,d2)


Is this what you had in mind?
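
If the first data frame's row order has to be preserved exactly (merge() sorts on
the key column by default), a match()-based sketch does the reordering directly:

d2.reordered <- d2[match(d1$n, d2$n), ]   # rows of d2 in d1's order
d2.reordered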

HTH,

Michal






Hi All,

I have two data frames. The first contains data about a number of
individuals, 
coded in the first column with a name, in an order I find convenient.

The second contains different data about the same individuals, in a different 
order. Both data frames have the individual names in the first column.

I need to reorder the second data frame so the rows are rearranged in the same 
manner as the first. How?

I cannot turn the individual names into a numeric variable with 
as.numeric(data1[,1]), because the two data frames are subsets of different data, 
so the factor levels are way off between the two. I think I need to actually 
use the names as an index.

Cheers,

Fede

~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~
 
Michal Bojanowski
ICS / Utrecht University
Heidelberglaan 2; 3584 CS Utrecht
Room 1428
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] stat question but maybe related to R

2006-06-23 Thread markleeds
i don't have any stats books with me because i am not at home so
i was hoping someone could direct me to a book or
maybe there is a way to do it in R ?

i do an lm with no intercept and there are 6 coefficients on
the right hand side and i get back estimates of the model which is fine
in the sense that i check the t-stats and do normality, independence
of residual checks etc.

then, some more data points come in ( say 20 ) and i use the 
coefficients that i estimated previously to calculate the 
residuals ( the new residuals and the previous residuals ).
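
A toy sketch of just the mechanical part described above (the data here are
simulated; whether the usual t-statistics remain valid when the coefficients are
frozen rather than re-estimated is the statistical question, and is not settled
here):

set.seed(1)
X.old <- matrix(rnorm(60 * 6), ncol = 6)
y.old <- drop(X.old %*% (1:6) + rnorm(60))
old.fit <- lm(y.old ~ X.old - 1)            # no-intercept fit, 6 coefficients

X.new <- matrix(rnorm(20 * 6), ncol = 6)    # 20 new points arrive
y.new <- drop(X.new %*% (1:6) + rnorm(20))
new.resid <- drop(y.new - X.new %*% coef(old.fit))  # residuals under the old fit

all.resid <- c(residuals(old.fit), new.resid)       # old and new residuals together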

what i want are the t-stats on the old regression coefficients
( really just the first one actually ) to see if they are still significant, 
given the new data, but i'm not re-estimating. i still want to use the previous 
coefficients.

is there a formula for this or a way to do this in R ?

   thanks.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Producing png plot in batch mode

2006-06-23 Thread Prof Brian Ripley
?png explains this to you and gives alternatives.

On Fri, 23 Jun 2006, Vittorio wrote:

> I have set up an R procedure that is launched every three hours by
> crontab in a unix server. Crontab runs at regular intervals the
> following line:
> R CMD BATH myprog.R

BATCH?

> myprog.R (which by the way uses
> R2HTML) should create an updated png graph  to be referred to and seen
> in an intranet web-page index.html.
>
> The problem is that both:
>
> png
> ()
> plot(...)
> dev.off()
>
> AND:
>
> plot(...)
> HTMLplot(...)
>
> fail when
> launched in a batch manner complaining that they need an X11() instance
> to be used (I understand that they work only in a graphic context and
> interactively).
>
> How can I obtain that png file?
>
> Ciao
> Vittorio
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Producing png plot in batch mode

2006-06-23 Thread Marc Schwartz (via MN)
On Fri, 2006-06-23 at 17:23 +0100, Vittorio wrote:
> I have set up an R procedure that is launched every three hours by 
> crontab in a unix server. Crontab runs at regular intervals the 
> following line:
> R CMD BATH myprog.R
> 
> myprog.R (which by the way uses 
> R2HTML) should create an updated png graph  to be referred to and seen 
> in an intranet web-page index.html.
> 
> The problem is that both:
> 
> png
> () 
> plot(...) 
> dev.off() 
> 
> AND:
> 
> plot(...)
> HTMLplot(...)
> 
> fail when 
> launched in a batch manner complaining that they need an X11() instance 
> to be used (I understand that they work only in a graphic context and 
> interactively).
> 
> How can I obtain that png file? 

See R FAQ 7.19 How do I produce PNG graphics in batch mode?
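
For reference, one X11-free route (a sketch; see that FAQ entry and ?bitmap for
the details and the other options) is the Ghostscript-backed bitmap() device:

bitmap("myplot.png", type = "png256", width = 6, height = 4, res = 150)
plot(rnorm(100), type = "l")
dev.off()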

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Marc Schwartz (via MN)
On Fri, 2006-06-23 at 18:16 +0200, Philipp Pagel wrote:
> On Fri, Jun 23, 2006 at 09:21:37AM -0400, Gabor Grothendieck wrote:
> > Note that jpg, bmp and png are in less desirable bit mapped formats whereas
> > eps is in a more desirable vector format (magnification and shrinking does
> > not involve loss of info) and so would be preferable from a quality
> > viewpoint.
> 
> In addition to seconding the above statement I'd like to add that in
> cases where you are forced to use a bitmap format png tends to produce
> much better results than jpg where line drawings (e.g. most plots) are
> concerned. JPG format on the other hand is great for anything which can be
> described as photography-like. jpg images of plots tend to suffer from
> bad artifacts...
> 
> cu
>   Philipp


That is generally because png files are not compressed, whereas jpg
files are. 

The compression algorithms that are typically used are lossy, which
means that you give up image quality in order to gain the reduction in
file size. The greater the compression you use, the greater the loss in
image quality.

Yet another reason to use EPS for plots.

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Testing for Significance Between Logistic Regressions

2006-06-23 Thread Justin Rapp
This is more of a statistics question with implementation implications.

I have used R to calculate logistic regressions for various
characteristics.  I would now like to test whether the regression for a
particular subset differs significantly from the
logistic regression of the entire set.

Example.
I have a logistic regression containing every running back drafted
between 1980-2000.  I have created an object, logistic.glm.  I also
have an object, sec.glm, that contains only players from the SEC. I
am curious to determine whether or not the difference between the
two is significant.

Subquestion 1:  Can I do this for each of the coefficients, beta0 and
beta1, individually?  I.e., there may be a statistically significant
difference in the intercept but not in the rate of decay as a function
of my independent variable.

Subquestion 2:  Can the same method of testing differences in
regressions be applied to linear regressions?
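
One standard way to frame this is to compare the SEC backs against the non-SEC
backs with an interaction term, rather than the subset against the full set that
contains it. A sketch with simulated data (the variable names are hypothetical):

set.seed(1)
x   <- rnorm(300)                       # some draft characteristic
sec <- rbinom(300, 1, 0.3)              # 1 = SEC player
y   <- rbinom(300, 1, plogis(-0.5 + x))

fit0 <- glm(y ~ x,       family = binomial)   # one common curve
fit1 <- glm(y ~ x * sec, family = binomial)   # separate intercept and slope for SEC
anova(fit0, fit1, test = "Chisq")             # joint likelihood-ratio test
summary(fit1)                                 # 'sec' and 'x:sec' rows test the two differences

The separate sec and x:sec coefficients address subquestion 1, and the same
interaction device carries over to linear regressions (subquestion 2), with
anova(fit0, fit1, test = "F").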

Thanks in advance for everyone's time and help.

jdr

-- 
Justin Rapp
409 S. 22nd St.
Apt. 1
Philadelphia, PA 19146
Cell:(267)252.0297

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem with hist() for 'times' objects from 'chron' package

2006-06-23 Thread Bojanowski, M.J. \(Michal\)
Hello dear useRs and wizaRds,

I encountered the following problem using the hist() method for the
'times' classes
from package 'chron'. You should be able to recreate it using the code:



library(chron)

# pasted from chron help file (?chron)
dts <- dates(c("02/27/92", "02/27/92", "01/14/92", "02/28/92",
"02/01/92"))
class(dts)

hist(dts) # which yields:

# Error in axis(side, at, labels, tick, line, pos, outer, font, lty,
lwd,  : 
#'label' is supplied and not 'at'
# In addition: Warning messages:
# 1: "histo" is not a graphical parameter in: plot.window(xlim, ylim,
log, asp, ...) 
# 2: "histo" is not a graphical parameter in: title(main, sub, xlab,
ylab, line, outer, ...)



The plot is produced, but there are no axes.

As far as it goes for the warnings I looked in the sources and I think
they are caused
by hist.times() in package 'chron' which is calling the barplot() with
the histo=TRUE,
whereas neither barplot(), plot.window(), title() nor axis() seem to
accept this argument.

Am I doing something wrong or there is something not right with the
hist.times method?


Kind regards,

Michal





PS: I'm using R 2.3.1 on Windows XP and:



> sessionInfo()
Version 2.3.1 (2006-06-01) 
i386-pc-mingw32 

attached base packages:
[1] "methods"   "stats" "graphics"  "grDevices" "utils"
"datasets" 
[7] "base" 

other attached packages:
  chron 
"2.3-3" 
>


library(help="chron")

Package:   chron
Version:   2.3-3
Date:  2006-05-09
Author:S original by David James <[EMAIL PROTECTED]>, R
   port by Kurt Hornik <[EMAIL PROTECTED]>.
Maintainer:Kurt Hornik <[EMAIL PROTECTED]>
Description:   Chronological objects which can handle dates and times
Title: Chronological objects which can handle dates and times
Depends:   R (>= 1.6.0)
License:   GPL
Packaged:  Fri May 12 09:31:49 2006; hornik
Built: R 2.3.0; i386-pc-mingw32; 2006-05-13 12:21:51; windows 






~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~`~,~
 
Michal Bojanowski
ICS / Utrecht University
Heidelberglaan 2; 3584 CS Utrecht
Room 1428
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R: rearranging data frame rows

2006-06-23 Thread Vittorio
Have a look at merge.
Ciao
Vittorio

>Original message
>From: [EMAIL PROTECTED]
>Date: 23-giu-2006 18.10
>To: "r-help"
>Subject: [R] rearranging data frame rows
>
>Hi All,
>
>I have two data frames. The first contains data about a number of individuals, 
>coded in the first column with a name, in an order I find convenient.
>
>The second contains different data about the same individuals, in a different 
>order. Both data frames have the individual names in the first column.
>
>I need to reorder the second data frame so the rows are rearranged in the same 
>manner as the first. How?
>
>I cannot turn the individual names into a numeric variable with 
>as.numeric(data1[,1]), because the two data frames are subsets of different data, 
>so the factor levels are way off between the two. I think I need to actually 
>use the names as an index.
>
>Cheers,
>
>Fede
>
>-- 
>Federico C. F. Calboli
>Department of Epidemiology and Public Health
>Imperial College, St Mary's Campus
>Norfolk Place, London W2 1PG
>
>Tel  +44 (0)20 7594 1602 Fax (+44) 020 7594 3193
>
>f.calboli [.a.t] imperial.ac.uk
>f.calboli [.a.t] gmail.com
>
>__
>[EMAIL PROTECTED] mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Producing png plot in batch mode

2006-06-23 Thread Vittorio
I have set up an R procedure that is launched every three hours by 
crontab in a unix server. Crontab runs at regular intervals the 
following line:
R CMD BATH myprog.R

myprog.R (which by the way uses 
R2HTML) should create an updated png graph  to be referred to and seen 
in an intranet web-page index.html.

The problem is that both:

png
() 
plot(...) 
dev.off() 

AND:

plot(...)
HTMLplot(...)

fail when 
launched in a batch manner complaining that they need an X11() instance 
to be used (I understand that they work only in a graphic context and 
interactively).

How can I obtain that png file? 

Ciao
Vittorio

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Philipp Pagel
On Fri, Jun 23, 2006 at 09:21:37AM -0400, Gabor Grothendieck wrote:
> Note that jpg, bmp and png are in less desirable bit mapped formats whereas
> eps is in a more desirable vector format (magnification and shrinking does
> not involve loss of info) and so would be preferable from a quality
> viewpoint.

In addition to seconding the above statement I'd like to add that in
cases where you are forced to use a bitmap format png tends to produce
much better results than jpg where line drawings (e.g. most plots) are
concerned. JPG format on the other hand is great for anything which can be
described as photography-like. jpg images of plots tend to suffer from
bad artifacts...

cu
Philipp

-- 
Dr. Philipp PagelTel.  +49-8161-71 2131
Dept. of Genome Oriented Bioinformatics  Fax.  +49-8161-71 2186
Technical University of Munich
Science Center Weihenstephan
85350 Freising, Germany

 and

Institute for Bioinformatics / MIPS  Tel.  +49-89-3187 3675
GSF - National Research Center   Fax.  +49-89-3187 3585
  for Environment and Health
Ingolstädter Landstrasse 1
85764 Neuherberg, Germany
http://mips.gsf.de/staff/pagel

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] rearranging data frame rows

2006-06-23 Thread Federico Calboli
Hi All,

I have two data frames. The first contains data about a number of individuals, 
coded in the first column with a name, in an order I find convenient.

The second contains different data about the same individuals, in a different 
order. Both data frames have the individual names in the first column.

I need to reorder the second data frame so the rows are rearranged in the same 
manner as the first. How?

I cannot turn the individual names into a numeric variable with 
as.numeric(data1[,1]), because the two data frames are subsets of different data, 
so the factor levels are way off between the two. I think I need to actually 
use the names as an index.

Cheers,

Fede

-- 
Federico C. F. Calboli
Department of Epidemiology and Public Health
Imperial College, St Mary's Campus
Norfolk Place, London W2 1PG

Tel  +44 (0)20 7594 1602 Fax (+44) 020 7594 3193

f.calboli [.a.t] imperial.ac.uk
f.calboli [.a.t] gmail.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Marc Schwartz (via MN)
Has anyone tried this with OO.org's Impress or Writer on Windows to see
if the same behavior occurs?  

My recollection from prior experience on Windows (it's been a while) is
that a subtle resize takes place when pasting/importing graphics into
the aforementioned apps. You can right click on the graphic in the app
and then select "Original Size" or something worded similarly on the
graphic object formatting dialog window. Not sure if that is enough to
get the lines back or if one has to go slightly larger than the original
size to resolve the issue.

It also seems to me that there was some behavior on the R Windows
graphic device relative to re-sizing the plot region and then doing the
metafile copy and paste, but it has been long enough that my memory may
not be intact (which my wife would suggest anyway  ;-).

HTH,

Marc Schwartz


On Fri, 2006-06-23 at 08:08 -0700, Berton Gunter wrote:
> I've always assumed that this was a rendering problem in the MS application,
> as the reappearance of the missing lines on re-sizing shows that the
> necessary information **is** in the imported .wmf file, right?
> 
> -- Bert 
>  
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] 
> > [mailto:[EMAIL PROTECTED] On Behalf Of Sundar 
> > Dorai-Raj
> > Sent: Friday, June 23, 2006 7:55 AM
> > To: Johannes Ranke
> > Cc: r-help@stat.math.ethz.ch; Marc Bernard
> > Subject: Re: [R] PowerPoint
> > 
> > Hi, all,
> > 
> > (Sorry to highjack the thread, but I think the OP should also 
> > know this)
> > 
> > One of the plots Marc mentions is xyplot. Has anybody else on 
> > this list 
> > had a problem with lattice and win.metafile (or Ctrl-W in the 
> > R graphics 
> > device)? I will sometimes import wmf files (or Ctrl-V) with lattice 
> > graphics into powerpoint and notice some of the border lines are 
> > missing. I can re-size the plot to make the lines reappear 
> > but have to 
> > find just the right size to make it look right. This seems to be a 
> > problem with PPT, XLS, and Word. I never have this problem with 
> > traditional graphics (e.g. plot.default, etc.).
> > 
> > I'm using Windows XP Pro with R-2.3.1 and lattice-0.13.8, though I've 
> > also experienced the problem on earlier versions of R and earlier 
> > versions of lattice.
> > 
> > Thanks,
> > 
> > --sundar
> > 
> > Johannes Ranke wrote:
> > > Dear Bernard,
> > > 
> > > if you use MS Powerpoint, it seems likely to me that you 
> > are using the
> > > Windows version of R. Are you aware of the fact, that you can just
> > > right-click on any graph and copy it to the clipboard (copy 
> > as metafile
> > > or similar).
> > > 
> > > That way you get a vectorized version of the graph, which 
> > you can nicely
> > > paste into Powerpoint and edit.
> > > 
> > > Johannes
> > > 
> > > * Marc Bernard <[EMAIL PROTECTED]> [060623 13:40]:
> > > 
> > >>Dear All,
> > >>   
> > >>  I am looking for the best way to use graphs from R (like 
> > xyplot, curve ...)   for a presentation with powerpoint. I 
> > used to save my plot as pdf and after to copy them as image 
> > in powerpoint but the quality is not optimal by so doing.
> > >>   
> > >>  Another completely independent question is the following: 
> > when I use "main"  in the  xyplot, the main title is very 
> > close to my plot, i.e. no line separates the main and the 
> > plot. I would like my title to be well distinguished from the plots.
> > >>   
> > >>  I would be grateful for any improvements...
> > >>   
> > >>  Many thanks,
> > >>   
> > >>  Bernard,

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] columnwise multiplication?

2006-06-23 Thread Marc Schwartz (via MN)
On Fri, 2006-06-23 at 11:53 -0400, Mu Tian wrote:
> Hi all,
> 
> I'd like to do a multiplication between 2 matrices but only want results of
> column 1 * column 1, column 2 * column 2 and so on.
> 
> Now I do
> 
> C <- diag(t(A) %*% B)
> 
> Is there a built-in way to do this?
> 
> Thank you.


You just want:

  A * B

for the initial multiplication. This technically gives you element by
element multiplication. Since matrices are vectors with a dim attribute
and elements stored by default in column order (top to bottom, then left
to right), the result matrix will yield what you require on a column by
column basis.

For example:

> A <- matrix(1:12, ncol = 3)
> B <- matrix(1:12, ncol = 3)

> A
 [,1] [,2] [,3]
[1,]159
[2,]26   10
[3,]37   11
[4,]48   12

> B
 [,1] [,2] [,3]
[1,]159
[2,]26   10
[3,]37   11
[4,]48   12

> A * B
 [,1] [,2] [,3]
[1,]1   25   81
[2,]4   36  100
[3,]9   49  121
[4,]   16   64  144


To then get cumulative column totals, you can do:

> colSums(A * B)
[1]  30 174 446


HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] columnwise multiplication?

2006-06-23 Thread Prof Brian Ripley
On Fri, 23 Jun 2006, Mu Tian wrote:

> Hi all,
>
> I'd like to do a multiplication between 2 matrices but only want results of
> column 1 * column 1, column 2 * column 2 and so on.
>
> Now I do
>
> C <- diag(t(A) %*% B)
>
> Is there a built-in way to do this?

If * here means vector inner product, colSums(A*B) which is a lot more 
efficient.

>
> Thank you.
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Sundar Dorai-Raj
Hi, Bert,

Yes, this is true. However, it seems only to be a problem with lattice 
graphics and not traditional. So I am confused as to why there is a 
difference.

Thanks,

--sundar

Berton Gunter wrote:
> I've always assumed that this was a rendering problem in the MS application,
> as the reappearance of the missing lines on re-sizing shows that the
> necessary information **is** in the imported .wmf file, right?
> 
> -- Bert 
>  
> 
> 
>>-Original Message-
>>From: [EMAIL PROTECTED] 
>>[mailto:[EMAIL PROTECTED] On Behalf Of Sundar 
>>Dorai-Raj
>>Sent: Friday, June 23, 2006 7:55 AM
>>To: Johannes Ranke
>>Cc: r-help@stat.math.ethz.ch; Marc Bernard
>>Subject: Re: [R] PowerPoint
>>
>>Hi, all,
>>
>>(Sorry to highjack the thread, but I think the OP should also 
>>know this)
>>
>>One of the plots Marc mentions is xyplot. Has anybody else on 
>>this list 
>>had a problem with lattice and win.metafile (or Ctrl-W in the 
>>R graphics 
>>device)? I will sometimes import wmf files (or Ctrl-V) with lattice 
>>graphics into powerpoint and notice some of the border lines are 
>>missing. I can re-size the plot to make the lines reappear 
>>but have to 
>>find just the right size to make it look right. This seems to be a 
>>problem with PPT, XLS, and Word. I never have this problem with 
>>traditional graphics (e.g. plot.default, etc.).
>>
>>I'm using Windows XP Pro with R-2.3.1 and lattice-0.13.8, though I've 
>>also experienced the problem on earlier versions of R and earlier 
>>versions of lattice.
>>
>>Thanks,
>>
>>--sundar
>>
>>Johannes Ranke wrote:
>>
>>>Dear Bernard,
>>>
>>>if you use MS Powerpoint, it seems likely to me that you 
>>
>>are using the
>>
>>>Windows version of R. Are you aware of the fact, that you can just
>>>right-click on any graph and copy it to the clipboard (copy 
>>
>>as metafile
>>
>>>or similar).
>>>
>>>That way you get a vectorized version of the graph, which 
>>
>>you can nicely
>>
>>>paste into Powerpoint and edit.
>>>
>>>Johannes
>>>
>>>* Marc Bernard <[EMAIL PROTECTED]> [060623 13:40]:
>>>
>>>
Dear All,
  
 I am looking for the best way to use graphs from R (like 
>>
>>xyplot, curve ...)   for a presentation with powerpoint. I 
>>used to save my plot as pdf and after to copy them as image 
>>in powerpoint but the quality is not optimal by so doing.
>>
  
 Another completely independent question is the following: 
>>
>>when I use "main"  in the  xyplot, the main title is very 
>>close to my plot, i.e. no line separates the main and the 
>>plot. I would like my title to be well distinguished from the plots.
>>
  
 I would be grateful for any improvements...
  
 Many thanks,
  
 Bernard,
  
   


-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
>>
>>http://www.R-project.org/posting-guide.html
>>
>>>
>>__
>>R-help@stat.math.ethz.ch mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-help
>>PLEASE do read the posting guide! 
>>http://www.R-project.org/posting-guide.html
>>
> 
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] columnwise multiplication?

2006-06-23 Thread Mu Tian
Hi all,

I'd like to do a multiplication between 2 matrices but only want results of
column 1 * column 1, column 2 * column 2 and so on.

Now I do

C <- diag(t(A) %*% B)

Is there a built-in way to do this?

Thank you.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Bayesian logistic regression?

2006-06-23 Thread Spencer Graves
  I don't know of anything.  A brief search using RSiteSearch("Bayesian 
logistic regression") and RSiteSearch("Bayesian regression") led me to 
the BMA package plus several MCMC solutions (coda, MCMCpack, and 
BayesCslogistic {cslogistic}).  If it were my problem, I might spend a 
few minutes with BMA and then probably write my own.

  I would like to see a (possibly singular) multivariate normal (or 
normal + inverse gamma) "prior" as an optional argument for lm and glm 
which, when present, would produce the obvious "posterior" [exact for lm and 
approximate for glm] as an attribute of the output.  A few years ago, I 
wrote something to do this that would do ordinary least squares one step 
at a time and get the standard OLS answer (starting from a 
noninformative normal + inverse gamma prior).  From this, it is a short 
step to Kalman filtering:  Just add an appropriate "decay" function to 
increase the uncertainty to convert the posterior at one step into the 
prior for the next.

  I'm sure this didn't help much other than confirm that your own 
search did not overlook something obvious.

  Best Wishes,
  Spencer Graves

Andrew Gelman wrote:
> Hi all.
> Are there any R functions around that do quick logistic regression with 
> a Gaussian prior distribution on the coefficients?  I just want 
> posterior mode, not MCMC.  (I'm using it as a step within an iterative 
> imputation algorithm.)  This isn't hard to do:  each step of a glm 
> iteration simply linearizes the derivative of the log-likelihood, and, 
> at this point, essentially no effort is required to augment the data to 
> include the prior information.  I think this can be done by going inside 
> the glm.fit() function--but if somebody's already done it, that would be 
> a relief!
> Thanks.
> Andrew
>
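
For what it's worth, a minimal sketch of the linearise-and-augment idea described
above: penalised IRLS for logistic regression with an independent N(0, tau^2)
prior on each coefficient. This is only a sketch under those assumptions, not a
tested implementation; the prior enters each weighted least-squares step as a
ridge term.

bayes.logit <- function(X, y, tau = 2.5, maxit = 25, tol = 1e-8) {
  p <- ncol(X)
  beta <- rep(0, p)
  for (i in seq_len(maxit)) {
    eta <- drop(X %*% beta)
    mu  <- plogis(eta)
    w   <- mu * (1 - mu)
    z   <- eta + (y - mu) / w                         # IRLS working response
    H <- crossprod(X * sqrt(w)) + diag(1 / tau^2, p)  # N(0, tau^2) prior = ridge term
    beta.new <- drop(solve(H, crossprod(X, w * z)))
    if (max(abs(beta.new - beta)) < tol) { beta <- beta.new; break }
    beta <- beta.new
  }
  beta
}

## toy check: the prior pulls the estimates toward zero relative to glm()
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-1 + 2 * x))
bayes.logit(cbind(1, x), y)
coef(glm(y ~ x, family = binomial))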

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] assign / environment side effect on R 2.4.0

2006-06-23 Thread Thomas Petzoldt
Sorry,

the posted example had the side effect on all platforms (correctly: R
2.2.1/Windows, 2.3.1/Linux, 2.4.0/Windows), but in the following
corrected example the behavior of 2.4.0 differs from the older versions.

The only difference between the "wrong" and the "new" example is
L[["test"]] vs. L$test in the assign.

Thomas P.


envfun <- function(L) {
#  L <- as.list(unlist(L))
  p <- parent.frame()
  assign("test", L[["test"]], p) ## [["test"]] instead of $test
  environment(p[["test"]]) <- p
}


solver <- function(L) {
  envfun(L)
  # some other stuff
}

L <- list(test = function() 1 + 2)

e1 <- environment(L$test)
solver(L)
e2 <- environment(L$test)

print(e1)
print(e2)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] integrate

2006-06-23 Thread Thomas Lumley

On Fri, 23 Jun 2006, Rogério Rosa da Silva wrote:


> Dear All,
>
> My doubt about how to integrate a simple kernel density estimation goes on.
>
> I have seen the recent posts on integrate density estimation, which seem
> similar to my question. However, I haven't found a solution.
>
> I have made two simple kernel density estimations by:
>
>    kde.1 <-density(x, bw=sd(x), kernel="gaussian")$y # x<- c(2,3,5,12)
>    kde.2 <-density(y, bw=sd(y), kernel="gaussian")$y # y<- c(4,2,4,11)
>
> Now I would like to integrate the difference in the estimated density
> values, i.e.:
>
>    diff.kde <- abs (kde.1- kde.2)
>
> How can I integrate diff.kde over -Inf to Inf ?


Well, the answer is zero.

Computationally this is a bit tricky.  You can turn the density estimates 
into functions with approxfun()

x<-rexp(100)
kde<-density(x)
 f<-approxfun(kde$x,kde$y,rule=2)
integrate(f,-1,10)
1.000936 with absolute error < 3.3e-05

But if you want to integrate over -Inf to Inf you need the function to 
specify the values outside the range of the data.  The only value that 
will work over the range -Inf to Inf is zero

f<-approxfun(kde$x,kde$y,yleft=0,yright=0)
integrate(f,-1,10)

1.00072 with absolute error < 1.5e-05

integrate(f,-Inf,Inf)

1.000811 with absolute error < 2.3e-05
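
For the quantity the original poster asked about (the integral of the absolute
difference of the two estimates), a sketch along the same lines, treating both
estimates as functions that are zero outside their estimated ranges:

x <- c(2, 3, 5, 12); y <- c(4, 2, 4, 11)
kde.1 <- density(x, bw = sd(x), kernel = "gaussian")
kde.2 <- density(y, bw = sd(y), kernel = "gaussian")
f1 <- approxfun(kde.1$x, kde.1$y, yleft = 0, yright = 0)
f2 <- approxfun(kde.2$x, kde.2$y, yleft = 0, yright = 0)
g  <- function(t) abs(f1(t) - f2(t))
## the integrand is zero outside the estimated ranges, so a finite range suffices
integrate(g, min(kde.1$x, kde.2$x), max(kde.1$x, kde.2$x), subdivisions = 500)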


-thomas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] problem installing gsl package under Ubuntu Breezy Badger

2006-06-23 Thread Johannes Ranke
Hi Giuseppe,

you need the -dev package for it (header files). In Debian, this is
called libgsl0-dev, so chances are good that the name is the same in
Ubuntu. Otherwise you might want to try 

sudo apt-cache search gsl

Best regards,

Johannes

* Giuseppe Paleologo <[EMAIL PROTECTED]> [060623 16:30]:
> I am trying to install the gsl package (a wrapper for GNU Scientific Library
> special functions) under Ubuntu 5.10. I have gsl-bin (the Debian GNU
> Scientific Library binary package). When I try to install the R package, I
> receive the following.
> 
> > install.packages("gsl",dependencies=T)
> Warning in install.packages("gsl", dependencies = T) :
>  argument 'lib' is missing: using /usr/local/lib/R/site-library
> trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/gsl_1.6-6.tar.gz'
> Content type 'application/x-gzip' length 50969 bytes
> opened URL
> ==
> downloaded 49Kb
> 
> * Installing *source* package 'gsl' ...
> checking for gcc... gcc
> checking for C compiler default output... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ANSI C... none needed
> checking for gsl_sf_airy_Ai_e in -lgsl... no
> configure: error: Cannot find Gnu Scientific Library.
> ERROR: configuration failed for package 'gsl'
> The downloaded packages are in
> /tmp/RtmpbxQ4fl/downloaded_packages
> Warning message:
> installation of package 'gsl' had non-zero exit status in:
> install.packages("gsl",
> dependencies = T)
> 
> 
> as if the gsl-bin package were absent. Any insights? Thanks in advance,
> 
> 
> - gappy (International Man of Mistery)
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Dr. Johannes Ranke [EMAIL PROTECTED]
UFT Bremen, Leobenerstr. 1 +49 421 218 8971 
D-28359 Bremen http://www.uft.uni-bremen.de/chemie/ranke

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Berton Gunter

I've always assumed that this was a rendering problem in the MS application,
as the reappearance of the missing lines on re-sizing shows that the
necessary information **is** in the imported .wmf file, right?

-- Bert 
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Sundar 
> Dorai-Raj
> Sent: Friday, June 23, 2006 7:55 AM
> To: Johannes Ranke
> Cc: r-help@stat.math.ethz.ch; Marc Bernard
> Subject: Re: [R] PowerPoint
> 
> Hi, all,
> 
> (Sorry to highjack the thread, but I think the OP should also 
> know this)
> 
> One of the plots Marc mentions is xyplot. Has anybody else on 
> this list 
> had a problem with lattice and win.metafile (or Ctrl-W in the 
> R graphics 
> device)? I will sometimes import wmf files (or Ctrl-V) with lattice 
> graphics into powerpoint and notice some of the border lines are 
> missing. I can re-size the plot to make the lines reappear 
> but have to 
> find just the right size to make it look right. This seems to be a 
> problem with PPT, XLS, and Word. I never have this problem with 
> traditional graphics (e.g. plot.default, etc.).
> 
> I'm using Windows XP Pro with R-2.3.1 and lattice-0.13.8, though I've 
> also experienced the problem on earlier versions of R and earlier 
> versions of lattice.
> 
> Thanks,
> 
> --sundar
> 
> Johannes Ranke wrote:
> > Dear Bernard,
> > 
> > if you use MS Powerpoint, it seems likely to me that you 
> are using the
> > Windows version of R. Are you aware of the fact, that you can just
> > right-click on any graph and copy it to the clipboard (copy 
> as metafile
> > or similar).
> > 
> > That way you get a vectorized version of the graph, which 
> you can nicely
> > paste into Powerpoint and edit.
> > 
> > Johannes
> > 
> > * Marc Bernard <[EMAIL PROTECTED]> [060623 13:40]:
> > 
> >>Dear All,
> >>   
> >>  I am looking for the best way to use graphs from R (like 
> xyplot, curve ...)   for a presentation with powerpoint. I 
> used to save my plot as pdf and after to copy them as image 
> in powerpoint but the quality is not optimal by so doing.
> >>   
> >>  Another completely independent question is the following: 
> when I use "main"  in the  xyplot, the main title is very 
> close to my plot, i.e. no line separates the main and the 
> plot. I would like my title to be well distinguished from the plots.
> >>   
> >>  I would be grateful for any improvements...
> >>   
> >>  Many thanks,
> >>   
> >>  Bernard,
> >>   
> >>
> >>
> >>
> >>-
> >>
> >>[[alternative HTML version deleted]]
> >>
> >>__
> >>R-help@stat.math.ethz.ch mailing list
> >>https://stat.ethz.ch/mailman/listinfo/r-help
> >>PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
> > 
> >
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem installing gsl package under Ubuntu Breezy Badger

2006-06-23 Thread Ian Wilson
You need the libgsl-dev package.  The gsl-bin package just contains the
example programs.

on breezy

# apt-get install libgsl0-dev


Ian

On Fri, June 23, 2006 3:21 pm, Giuseppe Paleologo wrote:
> I am trying to install the gsl package (a wrapper for GNU Scientific Library
> special functions) under Ubuntu 5.10. I have gsl-bin (the Debian GNU
> Scientific Library binary package). When I try to install the R package, I
> receive the following.
>
>> install.packages("gsl",dependencies=T)
> Warning in install.packages("gsl", dependencies = T) :
>  argument 'lib' is missing: using /usr/local/lib/R/site-library
> trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/gsl_1.6-6.tar.gz'
> Content type 'application/x-gzip' length 50969 bytes
> opened URL
> ==
> downloaded 49Kb
>
> * Installing *source* package 'gsl' ...
> checking for gcc... gcc
> checking for C compiler default output... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ANSI C... none needed
> checking for gsl_sf_airy_Ai_e in -lgsl... no
> configure: error: Cannot find Gnu Scientific Library.
> ERROR: configuration failed for package 'gsl'
> The downloaded packages are in
> /tmp/RtmpbxQ4fl/downloaded_packages
> Warning message:
> installation of package 'gsl' had non-zero exit status in:
> install.packages("gsl",
> dependencies = T)
>
>
> as if the gsl-bin package were absent. Any insights? Thanks in advance,
>
>
> - gappy (International Man of Mistery)
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide!
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem installing gsl package under Ubuntu Breezy Badger

2006-06-23 Thread Dirk Eddelbuettel
On Fri, Jun 23, 2006 at 10:21:09AM -0400, Giuseppe Paleologo wrote:
> I am trying to install the gls package (a wrapper for GNU scientific library
> special functions) package under Ubuntu 5.10. I have gls-bin (the debian GNU
> Scientific Library binary package). When I try to install the R package, I

You also need the corresponding -dev package to compile. 

Hth, Dirk

> receive the following.
> 
> > install.packages("gsl",dependencies=T)
> Warning in install.packages("gsl", dependencies = T) :
>  argument 'lib' is missing: using /usr/local/lib/R/site-library
> trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/gsl_1.6-6.tar.gz'
> Content type 'application/x-gzip' length 50969 bytes
> opened URL
> ==
> downloaded 49Kb
> 
> * Installing *source* package 'gsl' ...
> checking for gcc... gcc
> checking for C compiler default output... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ANSI C... none needed
> checking for gsl_sf_airy_Ai_e in -lgsl... no
> configure: error: Cannot find Gnu Scientific Library.
> ERROR: configuration failed for package 'gsl'
> The downloaded packages are in
> /tmp/RtmpbxQ4fl/downloaded_packages
> Warning message:
> installation of package 'gsl' had non-zero exit status in:
> install.packages("gsl",
> dependencies = T)
> 
> 
> as if the gls-bin package were absent. Any insights? Thanks in advance,
> 
> 
> - gappy (International Man of Mistery)
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Hell, there are no rules here - we're trying to accomplish something. 
  -- Thomas A. Edison

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Sundar Dorai-Raj
Hi, all,

(Sorry to highjack the thread, but I think the OP should also know this)

One of the plots Marc mentions is xyplot. Has anybody else on this list 
had a problem with lattice and win.metafile (or Ctrl-W in the R graphics 
device)? I will sometimes import wmf files (or Ctrl-V) with lattice 
graphics into powerpoint and notice some of the border lines are 
missing. I can re-size the plot to make the lines reappear but have to 
find just the right size to make it look right. This seems to be a 
problem with PPT, XLS, and Word. I never have this problem with 
traditional graphics (e.g. plot.default, etc.).

I'm using Windows XP Pro with R-2.3.1 and lattice-0.13.8, though I've 
also experienced the problem on earlier versions of R and earlier 
versions of lattice.

Thanks,

--sundar

Johannes Ranke wrote:
> Dear Bernard,
> 
> if you use MS Powerpoint, it seems likely to me that you are using the
> Windows version of R. Are you aware of the fact, that you can just
> right-click on any graph and copy it to the clipboard (copy as metafile
> or similar).
> 
> That way you get a vectorized version of the graph, which you can nicely
> paste into Powerpoint and edit.
> 
> Johannes
> 
> * Marc Bernard <[EMAIL PROTECTED]> [060623 13:40]:
> 
>>Dear All,
>>   
>>  I am looking for the best way to use graphs from R (like xyplot, curve ...) 
>>   for a presentation with powerpoint. I used to save my plot as pdf and 
>> after to copy them as image in powerpoint but the quality is not optimal by 
>> so doing.
>>   
>>  Another completely independent question is the following: when I use "main" 
>>  in the  xyplot, the main title is very close to my plot, i.e. no ligne 
>> separate the main and the plot. I would like my title to be well 
>> distinguished from the plots.
>>   
>>  I would be grateful for any improvements...
>>   
>>  Many thanks,
>>   
>>  Bernard,
>>   
>>
>>
>>  
>>-
>>
>>  [[alternative HTML version deleted]]
>>
>>__
>>R-help@stat.math.ethz.ch mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-help
>>PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] assign / environment side effect on R 2.4.0

2006-06-23 Thread Thomas Petzoldt
Hello,

I got several off-list answers to my question on R-Help:

"[R] list of interdependent functions" from  2006-06-20

and by evaluating this (and also my own example) I found differences in
the behavior between the older versions of R (2.2.1 and 2.3.1) and the most
recent R 2.4.0 under development (SVN revision 38399, WinXP SP 2).

The example below is a constructed, cut-down one, but this behavior is
also observed in different versions of the full implementation.

While in the older versions the environment of L$test remains
R_GlobalEnv, a changed environment is returned under R 2.4.0.

Is this side effect a "bug" or a "feature"? The as.list(unlist(L))
workaround helps to avoid the side effect.

Thomas




envfun <- function(L) {
  # L <- as.list(unlist(L)) # !!! workaround
  p <- parent.frame()
  assign("test", L$test, p)
  environment(p[["test"]]) <- p
}


solver <- function(L) {
  envfun(L)
  # some other stuff
}

L <- list(test = function() 1 + 2)

e1 <- environment(L$test)
solver(L)
e2 <- environment(L$test)

print(e1)
# <environment: R_GlobalEnv>

print(e2)
# under R 2.4.0 (devel): a different environment (no longer R_GlobalEnv)


-- 
Thomas PetzoldtTel. +49-351-463 3 4954
Technische Universitaet DresdenFax  +49-351-463 3 7108
Institut fuer Hydrobiologie[EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

Upcoming: German Limnology Conference!http://tu-dresden.de/dgl2006

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem installing gsl package under Ubuntu Breezy Badger

2006-06-23 Thread Giuseppe Paleologo
I am trying to install the gsl package (a wrapper for the GNU Scientific
Library special functions) under Ubuntu 5.10. I have gsl-bin (the Debian GNU
Scientific Library binary package). When I try to install the R package, I
receive the following.

> install.packages("gsl",dependencies=T)
Warning in install.packages("gsl", dependencies = T) :
 argument 'lib' is missing: using /usr/local/lib/R/site-library
trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/gsl_1.6-6.tar.gz'
Content type 'application/x-gzip' length 50969 bytes
opened URL
==
downloaded 49Kb

* Installing *source* package 'gsl' ...
checking for gcc... gcc
checking for C compiler default output... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking for gsl_sf_airy_Ai_e in -lgsl... no
configure: error: Cannot find Gnu Scientific Library.
ERROR: configuration failed for package 'gsl'
The downloaded packages are in
/tmp/RtmpbxQ4fl/downloaded_packages
Warning message:
installation of package 'gsl' had non-zero exit status in:
install.packages("gsl",
dependencies = T)


as if the gsl-bin package were absent. Any insights? Thanks in advance,


- gappy (International Man of Mistery)

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Johannes Ranke
Dear Bernard,

if you use MS Powerpoint, it seems likely to me that you are using the
Windows version of R. Are you aware of the fact, that you can just
right-click on any graph and copy it to the clipboard (copy as metafile
or similar).

That way you get a vectorized version of the graph, which you can nicely
paste into Powerpoint and edit.

Johannes

* Marc Bernard <[EMAIL PROTECTED]> [060623 13:40]:
> Dear All,
>
>   I am looking for the best way to use graphs from R (like xyplot, curve ...) 
>   for a presentation with powerpoint. I used to save my plot as pdf and after 
> to copy them as image in powerpoint but the quality is not optimal by so 
> doing.
>
>   Another completely independent question is the following: when I use "main" 
>  in the  xyplot, the main title is very close to my plot, i.e. no ligne 
> separate the main and the plot. I would like my title to be well 
> distinguished from the plots.
>
>   I would be grateful for any improvements...
>
>   Many thanks,
>
>   Bernard,
>
> 
> 
>   
> -
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Dr. Johannes Ranke [EMAIL PROTECTED]
UFT Bremen, Leobenerstr. 1 +49 421 218 8971 
D-28359 Bremen http://www.uft.uni-bremen.de/chemie/ranke

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] command line boa problems...

2006-06-23 Thread Martyn Plummer
You need to give the file name in quotes. If you do not, R will look for
an object of that name in your work space. Note that the error message
is "object practice not found", not "file practice.txt not found."

You might also need to give the file extension, if this is not added by
the boa.importASCII function.
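
For example (the path below is the one from your message; whether the ".txt"
extension has to be included depends on the "ASCIIext" setting referenced in
the error, so both forms are shown):

boa.importASCII("practice", path = "c:/documents and settings/eg7/desktop")
boa.importASCII("practice.txt", path = "c:/documents and settings/eg7/desktop")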

What's wrong with coda anyway? Just curious.

Martyn

On Fri, 2006-06-23 at 09:35 -0400, Evan Cooch wrote:
> Greetings -
> 
> For a number of reasons, I'm moving from CODA to BOA - and I have one or 
> two really basic, boa-newbie questions. While I have the 'menu-driven' 
> version of boa working fine (most recent version, running under R 2.3.1 
> on a Windows machine), for the life of me I can't seem to get some basic 
> boa. command-line functions to work at all. Even things like 
> boa.version() or boa.license() return errors. But, boa.init() and other 
> functions seem to work fine.
> 
> At this stage, I'd be happy getting the first - fairly key - function to 
> work - boa.importASCII. My MCMC sample data are in a flat ASCII file 
> (white-space delimited), called practice.txt. There are two columns in 
> the file (iteration number, parameter value), both labeled in the first 
> row of the file (as per instructions in the boa documentation). I know 
> the file is formatted OK, because I can import and play with it 
> successfully using the boa.menu() approach. But, for some reason, 
> boa.importASCII won't touch it.
> 
> Suppose the file is on my desktop (remember, windows machine)
> 
> I've tried all the variuous front-slash, back-slash, double-slash etc. 
> combinations I can think of...
> 
> boa.importASCII(practice,"c:\documents and settings\eg7\desktop")
> 
> boa.importASCII(practice,"c:\\documents and settings\\eg7\\desktop")
> 
> boa.importASCII(practice,"c:/documents and settings/eg7/desktop")
> 
> boa.importASCII(practice,path="c:/documents and settings/eg7/desktop")
> 
> boa.importASCII(practice,path="c:\\documents and settings\\eg7\\desktop")
> 
> and so on...and so on...in each case, boa.importASCII reports that
> 
> Error in paste(prefix, boa.par("ASCIIext"), sep = "") :
>object "practice" not found
> 
> or something to that effect - basically, practice.txt isn't being found.
> 
> Help! 
> 
> Thanks!



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Sean O'Riordain
or try win.metafile()

if i'm in a hurry, (in windows) i just right click on the graph and
select "Copy as Metafile" and paste directly into powerpoint...

Sean


On 23/06/06, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> Note that jpg, bmp and png are in less desirable bit mapped formats whereas
> eps is in a more desirable vector format (magnification and shrinking does
> not involve loss of info) and so would be preferable from a quality
> viewpoint.  See:
> http://www.stc-saz.org/resources/0203_graphics.pdf
>
> On 6/23/06, Doran, Harold <[EMAIL PROTECTED]> wrote:
> > Use the functions in library(grDevices) for jpeg, bmp, or png formats.
> > Or, you can use postscript() for an eps file. Of course, I personally
> > think tex files make for much better looking presentations if you can be
> > persuaded.
> >
> > Harold
> >
> >
> > > -Original Message-
> > > From: [EMAIL PROTECTED]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of Marc Bernard
> > > Sent: Friday, June 23, 2006 7:28 AM
> > > To: r-help@stat.math.ethz.ch
> > > Subject: [R] PowerPoint
> > >
> > > Dear All,
> > >
> > >   I am looking for the best way to use graphs from R (like
> > > xyplot, curve ...)   for a presentation with powerpoint. I
> > > used to save my plot as pdf and after to copy them as image
> > > in powerpoint but the quality is not optimal by so doing.
> > >
> > >   Another completely independent question is the following:
> > > when I use "main"  in the  xyplot, the main title is very
> > > close to my plot, i.e. no ligne separate the main and the
> > > plot. I would like my title to be well distinguished from the plots.
> > >
> > >   I would be grateful for any improvements...
> > >
> > >   Many thanks,
> > >
> > >   Bernard,
> > >
> > >
> > >
> > >
> > > -
> > >
> > >   [[alternative HTML version deleted]]
> > >
> > > __
> > > R-help@stat.math.ethz.ch mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide!
> > > http://www.R-project.org/posting-guide.html
> > >
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide! 
> > http://www.R-project.org/posting-guide.html
> >
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread rdporto1
Bernard,

there are several things that affect how your graphs look in
MS PowerPoint, such as the format you save them in and how you
include them. You should test the various combinations.
You should also test the final presentation on the
chosen presentation device, because there is some
resolution variation there too.

I usually save in .jpg format, since I can control and
test several resolution levels, and include the file as an image.

For your main title, it would be better to supply the list
with some code so that we can help you more.

HTH,

Rogerio Porto.

-- Original header ---

From: [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Cc: 
Date: Fri, 23 Jun 2006 13:28:07 +0200 (CEST)
Subject: [R] PowerPoint

> Dear All,
>
>   I am looking for the best way to use graphs from R (like xyplot, curve ...) 
>   for a presentation with powerpoint. I used to save my plot as pdf and after 
> to copy them as image in powerpoint but the quality is not optimal by so 
> doing.
>
>   Another completely independent question is the following: when I use "main" 
>  in the  xyplot, the main title is very close to my plot, i.e. no ligne 
> separate the main and the plot. I would like my title to be well 
> distinguished from the plots.
>
>   I would be grateful for any improvements...
>
>   Many thanks,
>
>   Bernard,
>
> 
> 
>   
> -
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] command line boa problems...

2006-06-23 Thread Evan Cooch
Greetings -

For a number of reasons, I'm moving from CODA to BOA - and I have one or 
two really basic, boa-newbie questions. While I have the 'menu-driven' 
version of boa working fine (most recent version, running under R 2.3.1 
on a Windows machine), for the life of me I can't seem to get some basic 
boa. command-line functions to work at all. Even things like 
boa.version() or boa.license() return errors. But, boa.init() and other 
functions seem to work fine.

At this stage, I'd be happy getting the first - fairly key - function to 
work - boa.importASCII. My MCMC sample data are in a flat ASCII file 
(white-space delimited), called practice.txt. There are two columns in 
the file (iteration number, parameter value), both labeled in the first 
row of the file (as per instructions in the boa documentation). I know 
the file is formatted OK, because I can import and play with it 
successfully using the boa.menu() approach. But, for some reason, 
boa.importASCII won't touch it.

Suppose the file is on my desktop (remember, windows machine)

I've tried all the various front-slash, back-slash, double-slash etc. 
combinations I can think of...

boa.importASCII(practice,"c:\documents and settings\eg7\desktop")

boa.importASCII(practice,"c:\\documents and settings\\eg7\\desktop")

boa.importASCII(practice,"c:/documents and settings/eg7/desktop")

boa.importASCII(practice,path="c:/documents and settings/eg7/desktop")

boa.importASCII(practice,path="c:\\documents and settings\\eg7\\desktop")

and so on...and so on...in each case, boa.importASCII reports that

Error in paste(prefix, boa.par("ASCIIext"), sep = "") :
   object "practice" not found

or something to that effect - basically, practice.txt isn't being found.

Help! 

Thanks!

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] How to use mle or similar with integrate?

2006-06-23 Thread Rainer M Krug
Hi

I have the following formula (I hope it is clear - if not, I can try to
do better next time):

h(x, a, b) = Int_{alpha = 0}^{pi/2} [ Int_{x = D/sin(alpha)}^{Inf} f(x, a, b) dx ] dalpha

and I want to do an mle with it.
I know how to use mle() and I also know about integrate(). My problem is
how to pass the parameter values a and b into the integrand used by integrate().

In other words, how can I write

h <- function...

so that I can estimate a and b?
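
One possible structure (a minimal sketch only -- f(), D, the data and the
likelihood below are placeholders, not my real model) is to pass a and b
through integrate()'s '...' argument, or let the inner integrand pick them
up lexically:

library(stats4)                        # for mle()

f <- function(x, a, b) dgamma(x, shape = a, rate = b)   # placeholder for the real f
D <- 1                                                  # placeholder constant

## the double integral as a function of a and b only; the 'x' in the formula
## above is the inner variable of integration
h <- function(a, b) {
  inner <- function(alpha)
    integrate(f, lower = D / sin(alpha), upper = Inf, a = a, b = b)$value
  integrate(function(alphas) sapply(alphas, inner),
            lower = 0, upper = pi / 2)$value
}

## placeholder negative log-likelihood in which h(a, b) enters as a
## normalising constant for observations truncated below at D
obs <- rgamma(50, shape = 2, rate = 1) + D
nll <- function(a = 1, b = 1)
  -sum(log(f(obs, a, b))) + length(obs) * log(h(a, b))

fit <- mle(nll, start = list(a = 1, b = 1),
           method = "L-BFGS-B", lower = c(0.1, 0.1))
summary(fit)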

Thanks,

Rainer


-- 
Rainer M. Krug, Dipl. Phys. (Germany), MSc Conservation
Biology (UCT)

Department of Conservation Ecology and Entomology
University of Stellenbosch
Matieland 7602
South Africa

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PowerPoint

2006-06-23 Thread Gabor Grothendieck
Note that jpg, bmp and png are less desirable bitmapped formats, whereas
eps is a more desirable vector format (magnification and shrinking do
not involve loss of information) and so would be preferable from a quality
viewpoint.  See:
http://www.stc-saz.org/resources/0203_graphics.pdf

On 6/23/06, Doran, Harold <[EMAIL PROTECTED]> wrote:
> Use the functions in library(grDevices) for jpeg, bmp, or png formats.
> Or, you can use postscript() for an eps file. Of course, I personally
> think tex files make for much better looking presentations if you can be
> persuaded.
>
> Harold
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Marc Bernard
> > Sent: Friday, June 23, 2006 7:28 AM
> > To: r-help@stat.math.ethz.ch
> > Subject: [R] PowerPoint
> >
> > Dear All,
> >
> >   I am looking for the best way to use graphs from R (like
> > xyplot, curve ...)   for a presentation with powerpoint. I
> > used to save my plot as pdf and after to copy them as image
> > in powerpoint but the quality is not optimal by so doing.
> >
> >   Another completely independent question is the following:
> > when I use "main"  in the  xyplot, the main title is very
> > close to my plot, i.e. no ligne separate the main and the
> > plot. I would like my title to be well distinguished from the plots.
> >
> >   I would be grateful for any improvements...
> >
> >   Many thanks,
> >
> >   Bernard,
> >
> >
> >
> >
> > -
> >
> >   [[alternative HTML version deleted]]
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide!
> > http://www.R-project.org/posting-guide.html
> >
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] frechet distance

2006-06-23 Thread Liaw, Andy
 
[Sorry for coming to this so late...  I've been trying to play catch-up with
~1000 unread messages in my R-help folder...]

If the curves are sufficiently smooth (i.e., `kinks' are quite small,
relative to the real sigmoidal features of interest), what I would try is
something like smoothing splines or local polynomials, but over-smooth
(i.e., use "large enough" smoothing parameters) so that only the essential
features of the sigmoidal or double sigmoidal shape remain, then look at
the zero crossings of the derivatives.  Martin has functions like D1ss() in
the `sfsmisc' package to do this.
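
For instance, a rough sketch of that idea on simulated curves (not real data;
the smoothing parameter and the 10% peak-height threshold are ad hoc choices):

x  <- seq(-6, 6, length.out = 200)
y1 <- plogis(x) + rnorm(200, sd = 0.02)                      # single sigmoid
y2 <- 0.5 * plogis(4 * (x + 3)) + 0.5 * plogis(4 * (x - 3)) +
      rnorm(200, sd = 0.02)                                  # "stepwise" double sigmoid

count.peaks <- function(x, y, spar = 0.8) {
  fit <- smooth.spline(x, y, spar = spar)       # deliberately over-smooth
  d1  <- predict(fit, x, deriv = 1)$y           # first derivative of the fit
  dd  <- diff(d1)
  ## count local maxima of d1 that reach at least 10% of the largest slope
  sum(dd[-length(dd)] > 0 & dd[-1] <= 0 & d1[2:(length(d1) - 1)] > 0.1 * max(d1))
}

count.peaks(x, y1)   # should give 1 (single sigmoid)
count.peaks(x, y2)   # should give 2 (double sigmoid)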

HTH,
Andy


From: Rajarshi Guha
> 
> On Thu, 2006-06-22 at 19:52 -0700, Spencer Graves wrote:
> >   RSiteSearch("Frechet distance") returned only one hit 
> for me just 
> > now, and that was for a Frechet distribution, as you 
> mentioned.  Google 
> > found "www.cs.concordia.ca/cccg/papers/39.pdf", which suggests that 
> > computing it may not be easy.
> 
> In addition, from what I have read it is supposed to be 
> NP-hard. However
> my problems are usually small. 
> 
> > 1.  If you absolutely need the Frechet distance and you can 
> describe 
> > an algorithm for computing it but get stuck writing a 
> function for it, 
> > please submit another question outlining what you've tried and that 
> > obstacle you've found.
> > 
> >   2.  Alternatively, you might describe the more 
> general problem you 
> > are trying to solve, why you thought the Frechet distance 
> might help and 
> > invite alternative suggestions.
> 
> I have some curves (in the form of points) which are sigmoidal. I also
> have some curves which look like 2 sigmoid curves joined head to tail.
> Something like
> 
> 
>  -
>   /
>  /
>--
>   /
>  /
> /
>  --
> 
> Now these curves can vary: in some cases the initial lower 
> tail might be
> truncated.  For the 'stepwise' sigmoidal curves, the middle step might
> not be horizontal but inclined to some degree and so on.
> 
> My goal is to be given a set of points representing a curve 
> and try and
> identify whether it is of the standard sigmoid form or of the 
> 'stepwise'
> sigmoid form.
> 
> My plan was to generate a 'canonical sigmoid curve' via the logistic
> equation and then perform a curve matching operation. Thus for a
> supplied curve that is sigmoid in nature it will match the 'canonical
> curve' to a better extent than would a curve that is stepwise 
> in nature.
> (The matching is performed after applying a Procrustes transformation)
> 
> Initially I tried using the Hausdorff distance, but this does not take
> into account the ordering of the points in the curve and did 
> not always
> give a conclusive answer. A number of references (including the one
> above) indicate that the Frechet distance is better suited for curve
> matching problems.
> 
> As you noted, evaluating the Frechet distance is non-trivial and the
> only code that I could find was some code that is dependent 
> on the CGAL
> (http://www.cgal.org/) library. As far as I could see, CGAL does not
> have any R bindings.
> 
> An alternative that I had considered was to to evaluate the distance
> matrix of the points making up the curve and then evaluating the root
> mean square error of the matrix elements for the canonical 
> curve and the
> supplied curve. My initial experiments indicated that this generally
> works but I observed some cases where a stepwise curve matched the
> canonical sigmoid better (ie lower RMSE) than an actual sigmoid curve.
> 
> Another alternative is look at a graph of the first derivative of the
> curve. A standard sigmoidal curve will result in a graph with a single
> peak, a stepwise curve like above will result in a graph with 2 peaks.
> Thus this could be reduced to a peak picking problem. The 
> problem is the
> curves I'll get are not smooth and can have small kinks - 
> this leads to
> (usually) quite small peaks in the graph of the first derivative - but
> most of the code that has been described on this list for peak picking
> also picks them up, thus making identification of the curve ambiguous.
> 
> To be honest I do not fully understand the algorithm used to evaluate
> the Frechet distance hence my request for code. However, I'm 
> not fixated
> on the Frechet distance :) If there are simpler approaches I'm open to
> them.
> 
> Thanks,
> 
> ---
> Rajarshi Guha <[EMAIL PROTECTED]> 
> GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
> ---
> Q: What do you get when you cross a mosquito with a mountain climber?
> A: Nothing. You can't cross a vector with a scaler.
> 
> _

Re: [R] PowerPoint

2006-06-23 Thread Doran, Harold
Use the functions in library(grDevices) for jpeg, bmp, or png formats.
Or, you can use postscript() for an eps file. Of course, I personally
think tex files make for much better looking presentations if you can be
persuaded.
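
For instance (file names are arbitrary, and lattice plots have to be
print()ed explicitly when a script writes them to a file device):

library(lattice)

png("myplot.png", width = 800, height = 600)          # bitmapped output
print(xyplot(Sepal.Length ~ Sepal.Width | Species, data = iris))
dev.off()

postscript("myplot.eps", onefile = FALSE, horizontal = FALSE,
           paper = "special", width = 7, height = 5)  # vector EPS output
print(xyplot(Sepal.Length ~ Sepal.Width | Species, data = iris))
dev.off()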

Harold
 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Marc Bernard
> Sent: Friday, June 23, 2006 7:28 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] PowerPoint
> 
> Dear All,
>
>   I am looking for the best way to use graphs from R (like 
> xyplot, curve ...)   for a presentation with powerpoint. I 
> used to save my plot as pdf and after to copy them as image 
> in powerpoint but the quality is not optimal by so doing.
>
>   Another completely independent question is the following: 
> when I use "main"  in the  xyplot, the main title is very 
> close to my plot, i.e. no ligne separate the main and the 
> plot. I would like my title to be well distinguished from the plots.
>
>   I would be grateful for any improvements...
>
>   Many thanks,
>
>   Bernard,
>
> 
> 
>   
> -
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! 
> http://www.R-project.org/posting-guide.html
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem with "code/documentation mismatch"

2006-06-23 Thread Joerg van den Hoff
I have a package with a division method for special objects of class 
"RoidataList", i.e. the function is named `/.RoidataList'.
documentation for this is in the file "Divide.RoidataList".


R CMD CHECK complains with:

=cut==
* checking for code/documentation mismatches ... WARNING
Functions/methods with usage in documentation object 
'Divide.RoidataList' but not in code:
   /

* checking Rd \usage sections ... WARNING
Objects in \usage without \alias in documentation object 
'Divide.RoidataList':
   /
=cut==

the `usage' section in the Rd file reads

=cut==
\usage{
x/y
}
=cut==

which, of course is the desired way to use the function.
what am I doing wrong, i.e. how should I modify the Rd file?
maybe obvious, but not to me.
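
From what I can tell, the usual fix is to document the method with \method
markup plus an alias rather than a literal x/y line -- roughly like this
(the argument names are only a guess and must match the formals of
/.RoidataList):

% in Divide.RoidataList.Rd
\alias{/.RoidataList}
\usage{
\method{/}{RoidataList}(e1, e2)
}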

joerg van den hoff

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] integrate

2006-06-23 Thread Rogério Rosa da Silva
Dear All,

My question about how to integrate a simple kernel density estimate is still open.

I have seen the recent posts on integrating density estimates, which seem
similar to my question. However, I haven't found a solution.

I have made two simple kernel density estimates by:
 
kde.1 <-density(x, bw=sd(x), kernel="gaussian")$y # x<- c(2,3,5,12)
kde.2 <-density(y, bw=sd(y), kernel="gaussian")$y # y<- c(4,2,4,11)
 
Now I would like to integrate the difference in the estimated density
values, i.e.:

diff.kde <- abs (kde.1- kde.2)

How can I integrate diff.kde over -Inf to Inf ?
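
One way I can think of (a minimal sketch: it rewrites the two Gaussian kernel
density estimates as functions of an arbitrary point instead of using the
fixed grid returned in density()$y):

x <- c(2, 3, 5, 12)
y <- c(4, 2, 4, 11)

## Gaussian KDE with bandwidth sd(data), evaluated at arbitrary points t
kde <- function(t, data, bw = sd(data))
  rowMeans(outer(t, data, function(ti, di) dnorm(ti, mean = di, sd = bw)))

diff.kde <- function(t) abs(kde(t, x) - kde(t, y))   # now a function, not a vector

integrate(diff.kde, lower = -Inf, upper = Inf)
## if the infinite range gives trouble, wide finite limits (e.g. -40 to 60) work too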

Best,

Rogério

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] PowerPoint

2006-06-23 Thread Marc Bernard
Dear All,
   
  I am looking for the best way to use graphs from R (like xyplot, curve ...)
for a presentation with PowerPoint. I used to save my plots as pdf and then
copy them as images into PowerPoint, but the quality is not optimal that way.
   
  Another, completely independent question: when I use "main" in xyplot, the
main title is very close to my plot, i.e. no line separates the title from the
plot. I would like my title to be well distinguished from the plot.
   
  I would be grateful for any improvements...
   
  Many thanks,
   
  Bernard,
   



-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Duncan Murdoch
On 6/23/2006 5:45 AM, Joerg van den Hoff wrote:
> Prof Brian Ripley wrote:
>> On Fri, 23 Jun 2006, Joerg van den Hoff wrote:
>>
>>> just to confirm duncan murdochs remark:
>>>
>>> our Windows machines lack proper development environments (mainly
>>> missing perl is the problem for pure R-code packages, I believe?) and we
>>> bypass this (for pure R-code packages only, of course) by
>>>
>>> 1.) install the package on the unix machine into the desired R library
>>> 2.) zip the _installed_ package (not the source tree!) found in the R
>>>   library directory
>>> 3.) transfer this archive to the Windows machine
>>> 4.) unzip directly into the desired library destination
>>>
>>> this procedure up to now always worked including properly installed
>>> manpages (text + html (and I hope this remains the case in the future...)
>>  From README.packages:
>>
>>   If your package has no compiled code it is possible that zipping up the
>>   installed package on Linux will produce an installable package on
>>   Windows.  (It has always worked for us, but failures have been reported.)
>>
>> so this is indeed already documented, and does not work for everyone.
>>
> albeit sparsely...
> 
> AFAIKS it's not in the `R Extensions' manual at all. If(!) this approach 
> could be made an 'official' workaround and explained in the manual, that 
> would be good.
> 
> I'd appreciate if someone could tell me:
> 
> are the mentioned failures confirmed or are they "UFOs"?
> 
> if so, are (the) reasons for (or circumstances of) failure known (I'm 
> always afraid walking on thin ice when using this transfer strategy)?
> 
> what does "produce an installable package on Windows" in the README text 
> mean? I presume it does not mean that R CMD INSTALL (or the Windows 
> equivalent) does work? if it really means "unzip the package on the 
> Windows machine into the library directory", should'nt the text be altered?

I do not want to support more than one method of installing packages. 
The R CMD install method works.  If some other method also works, it's 
unsupported.

One obvious limitation of this install method is that it won't produce 
native Windows help files (.chm).  Plans are to make CHM the default 
help system as of the 2.4.0 release, so your packages will not work 
properly unless you give special instructions on how to change the help 
system defaults.

The other obvious limitation is that it won't work if you have any C or 
Fortran code in your package.

> and I forgot to mention in my first mail: I use the described procedure 
> for transfer from a non-Intel apple machine under MacOS (a FreeBSD 
> descendant) to Windows (and even (unneccessarily, I know) to 
> Sun/Solaris). so the strategy is not restricted to transfer from Linux 
> -> Windows.
> 
> and it is useful (if it is not 'accidental' that it works at all): in 
> this way one can keep very easily in sync several local incarnations of 
> a package across platforms (if network access to a common disk is not 
> the way to go): I simply `rsync' the affected (local, R-code only) 
> library directories.

You might be able to achieve this by doing your builds on Windows, 
rather than on Linux or MacOS, but as far as I know it is not possible 
to build .chm files on those OSs.

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice

2006-06-23 Thread Fred JEAN
Martin Maechler wrote:
> [...]
> Just a note:  
> Why are you making the detour of calling cor.test(.)
> when  
>   cor(nox, noy,  method = "kendall")
> 
> is probably all you need?

I use cor.test() because I want to test the value of Tau.

Thanks to everyone. I'll use the solution proposed by Ronggui.

The Kendall package solution is certainly also valuable but less direct
(in my newbie opinion): in fact x and y are extracted from a
contingency table, and I would have to use replace() to substitute NAs
for the "double zeros".

Thanks again

-- 
Fred

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Joerg van den Hoff
Prof Brian Ripley wrote:
> On Fri, 23 Jun 2006, Joerg van den Hoff wrote:
> 
>> just to confirm duncan murdochs remark:
>>
>> our Windows machines lack proper development environments (mainly
>> missing perl is the problem for pure R-code packages, I believe?) and we
>> bypass this (for pure R-code packages only, of course) by
>>
>> 1.) install the package on the unix machine into the desired R library
>> 2.) zip the _installed_ package (not the source tree!) found in the R
>>   library directory
>> 3.) transfer this archive to the Windows machine
>> 4.) unzip directly into the desired library destination
>>
>> this procedure up to now always worked including properly installed
>> manpages (text + html (and I hope this remains the case in the future...)
> 
>  From README.packages:
> 
>   If your package has no compiled code it is possible that zipping up the
>   installed package on Linux will produce an installable package on
>   Windows.  (It has always worked for us, but failures have been reported.)
> 
> so this is indeed already documented, and does not work for everyone.
> 
albeit sparsely...

AFAIKS it's not in the `R Extensions' manual at all. If(!) this approach 
could be made an 'official' workaround and explained in the manual, that 
would be good.

I'd appreciate if someone could tell me:

are the mentioned failures confirmed or are they "UFOs"?

if so, are the reasons for (or circumstances of) failure known? (I'm 
always afraid of walking on thin ice when using this transfer strategy.)

what does "produce an installable package on Windows" in the README text 
mean? I presume it does not mean that R CMD INSTALL (or the Windows 
equivalent) will work? If it really means "unzip the package on the 
Windows machine into the library directory", shouldn't the text be altered?

and I forgot to mention in my first mail: I use the described procedure 
for transfer from a non-Intel apple machine under MacOS (a FreeBSD 
descendant) to Windows (and even (unneccessarily, I know) to 
Sun/Solaris). so the strategy is not restricted to transfer from Linux 
-> Windows.

and it is useful (if it is not 'accidental' that it works at all): in 
this way one can keep very easily in sync several local incarnations of 
a package across platforms (if network access to a common disk is not 
the way to go): I simply `rsync' the affected (local, R-code only) 
library directories.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice

2006-06-23 Thread Martin Maechler
> "Fred" == Fred JEAN <[EMAIL PROTECTED]>
> on Thu, 22 Jun 2006 17:58:06 +0200 writes:

Fred> Dear R users
Fred> I want to compute Kendall's Tau between two vectors x and y.
Fred> But x and y  may have zeros in the same position(s) and I wrote the
Fred> following function to be sure to drop out those "double zeros"

"cor.kendall" <- function(x,y) {
  nox <- c()
  noy <- c()
  #
  for (i in 1:length(x)) if (x[i]!= 0 | y[i] != 0)
  nox[length(nox)+1]<- x[i]
  for (i in 1:length(y)) if (x[i]!= 0 | y[i] != 0)
  noy[length(noy)+1]<- y[i]
  #
  res.kendall <- cor.test(nox,noy,method = "kendall",exact=F)
  return(list(x=nox,y=noy,res.kendall,length(nox)))
}

Fred> Do you know a more elegant way to achieve the same goal ?
Fred> (I'm sure you do : it's a newbie's program actually)

"Ronggui" already helped you with your main question.

Just a note:  
Why are you making the detour of calling cor.test(.)
when  
  cor(nox, noy,  method = "kendall")

is probably all you need?
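
For completeness, a minimal vectorized sketch of dropping the double zeros
(toy data; this is only an illustration, not Ronggui's original suggestion):

x <- c(0, 1, 3, 0, 2, 0)
y <- c(0, 2, 1, 4, 0, 0)

keep <- x != 0 | y != 0     # keep positions where at least one value is non-zero
cor.test(x[keep], y[keep], method = "kendall", exact = FALSE)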

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Prof Brian Ripley
On Fri, 23 Jun 2006, Joerg van den Hoff wrote:

> just to confirm duncan murdochs remark:
>
> our Windows machines lack proper development environments (mainly
> missing perl is the problem for pure R-code packages, I believe?) and we
> bypass this (for pure R-code packages only, of course) by
>
> 1.) install the package on the unix machine into the desired R library
> 2.) zip the _installed_ package (not the source tree!) found in the R
>   library directory
> 3.) transfer this archive to the Windows machine
> 4.) unzip directly into the desired library destination
>
> this procedure up to now always worked including properly installed
> manpages (text + html (and I hope this remains the case in the future...)

>From README.packages:

   If your package has no compiled code it is possible that zipping up the
   installed package on Linux will produce an installable package on
   Windows.  (It has always worked for us, but failures have been reported.)

so this is indeed already documented, and does not work for everyone.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] High breakdown/efficiency statistics -- was RE: Rosner's test

2006-06-23 Thread Martin Maechler
I'm CC'ing this to the R-SIG-robust mailing list
 [R Special Interest Group on robust statistics]
so it's properly archived there as well.
Follow up ideally should only go there.

{BTW: Did you know that to *search* mailing list archives of
  such R-SIG-foo mailing lists, you can use google very
  efficiently by prepending the mailing list name and 'site:stat.ethz.ch'?
  e.g., use google search on
  "R-SIG-robust site:stat.ethz.ch lmrob"
}
  
> "BertG" == Berton Gunter <[EMAIL PROTECTED]>
> on Thu, 22 Jun 2006 09:44:33 -0700 writes:

BertG> Many thanks for this Martin. There now are several
BertG> packages with what appear to be overlapping functions
BertG> (or at least algorithms). Besides those you
BertG> mentioned, "robust" and "roblm" are at least two others.

actually quite particular ones:

- "roblm" by Matias Salibian-Barreras is really predecessor to
  parts in 'robustbase'. His roblm() function is now  lmrob() in
  robustbase, i.e., robustbase::lmrob(), and lmrob() is a bit
  more efficient and further has a anova() method.

- "robust" : by Kjell Konis -- is planned to become a full port
   of the S-plus library section "robust" (from
   Insightful, also mainly by Kjell Konis, built
   on code of many more, see DESCRIPTION).
   At the moment it comes with a 'Insightful Robust Library License'
   which seems a kind of open source licence, but pretty "peculiar"
   (to me: IANAL (I am not a lawyer)).

   At the moment it only has "robust covariance + location", but
   once it contains everything from its S-plus counterpart,
   it will be a very nice benchmark; in many parts "first rate".

BertG> Any recommendations about how or whether to
BertG> choose among these for us enthusiastic but non-expert
BertG> users?

As I said (in reply to Andy's suggestion) there will be a CRAN
task view "real soon now" 
in order to give some guidance on the diverse packages with
robustness functionality.

BertG> Cheers, Bert
 

>> -Original Message- From: Martin Maechler
>> [mailto:[EMAIL PROTECTED] Sent: Thursday, June
>> 22, 2006 2:04 AM To: Berton Gunter Cc: 'Robert Powell';
>> r-help@stat.math.ethz.ch Subject: Re: [R] Rosner's test
>> 
>> > "BertG" == Berton Gunter <[EMAIL PROTECTED]>
>> > on Tue, 13 Jun 2006 14:34:48 -0700 writes:
>> 
BertG> RSiteSearch('Rosner') ?RSiteSearch or search directly
BertG> from CRAN.
>>
BertG> Incidentally, I'll repeat what I've said
BertG> before. Don't do outlier tests.  They're
BertG> dangerous. Use robust methods instead.
>>  Yes, yes, yes!!!
>> 
>> Note that rlm() or cov.rob() from recommended package
>> MASS will most probably be sufficient for your needs.
>> 
>> For slightly newer methodology, look at package
>> 'robustbase', or also 'rrcov'.
>> 
>> Martin Maechler, ETH Zurich
>> 
BertG> -- Bert Gunter Genentech Non-Clinical Statistics
BertG> South San Francisco, CA
>>
BertG> "The business of the statistician is to catalyze the
BertG> scientific learning process."  - George E. P. Box
>>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Joerg van den Hoff
Gabor Grothendieck wrote:
> On 6/22/06, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
>> Jay Emerson wrote:
>>> At the encouragement of many at UseR, I'm trying to build my first real
>>> package. I have no C/Fortran code, just plain old R code, so it should be
>>> rocket science.  On a Linux box, I used package.skeleton() to create a basic
>>> package containing just one "hello world" type of function.  I edited the
>>> DESCRIPTION file, changin the package name appropriately.  I edited the
>>> hello.Rd file.  Upon running R CMD check hello, the only warning had to do
>>> with the fact that src/ was empty (obviously I had no source in such a
>>> simple package).  I doubt this is a problem.
>>>
>>> I was able to install and use the package successfully on the Linux system
>>> from the .tar.gz file, so far so good!  Next, on to Windows, where the
>>> problem arose:
>>>
>>> I created a zip file from inside the package directory: zip -r ../hello.zip
>>> ./*
>>>
>>>
>> Which package directory, the source or the installed copy?  I think this
>> might work in the installed copy, but would not work on the source.
>> It's not the documented way to build a binary zip, though.
>>> When I moved this to my Windows machine and tried to install the package, I
>>> received the following error:
>>>
>>>
 utils:::menuInstallLocal()

>>> Error in unpackPkg(pkgs[i], pkgnames[i], lib, installWithVers) :
>>> malformed bundle DESCRIPTION file, no Contains field
>>>
>>> I only found one mention of this in my Google search, with no reply to the
>>> thread.  The Contains field appears to be used for bundles, but I'm trying
>>> to create a package, not a bundle.  This leads me to believe that a simple
>>> zipping of the package directory structure is not the correct format for
>>> Windows.
>>>
>>> Needless to say, there appears to be wide agreement that making packages
>>> requires precision, but fundamentally a package should (as described in the
>>> documentation) just be a collection of files and folders organized a certain
>>> way.  If someone could point me to documentation I may have missed that
>>> explains this, I would be grateful.
>>>
>> I think the "organized in a certain way" part is actually important.
>> Using R CMD install --build is the documented way to achieve this.  It's
>> not trivial to do this on Windows, because you need to set up a build
>> environment first, but it's not horribly difficult.
>>
>> Duncan Murdoch
>>> Regards,
>>>
>>> Jay
> 
> One idea that occurred to me in reading this would be to have a server
> that one can send a package to and get back a Windows build to
> eliminate having to set up a development environment.  Not sure if
> this is feasible, particularly security aspects, but if it were it would
> open up package building on Windows to a larger audience.
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

just to confirm duncan murdochs remark:

our Windows machines lack proper development environments (mainly 
missing perl is the problem for pure R-code packages, I believe?) and we 
bypass this (for pure R-code packages only, of course) by

1.) install the package on the unix machine into the desired R library
2.) zip the _installed_ package (not the source tree!) found in the R 
library directory
3.) transfer this archive to the Windows machine
4.) unzip directly into the desired library destination

this procedure has up to now always worked, including properly installed 
man pages (text + html), and I hope this remains the case in the future...

joerg van den hoff

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Time series labeling with Zoo

2006-06-23 Thread Gad Abraham
Gabor Grothendieck wrote:
> When the axis labelling does not work well you will have to do it yourself
> like this.  The plot statement is instructed not to plot the axis and then
> we extract into tt all the dates which are day of the month 1.  Then
> we manually draw the axis using those.
> 
> library(zoo)
> set.seed(1)
> z <- zoo(runif(10), as.Date("2005-06-01") + 0:380)
> 
> plot(z, xaxt = "n")
> tt <- time(z)[as.POSIXlt(time(z))$mday == 1]
> axis(1, tt, format(tt, "%d%b%y"))

Thanks Gabor,

That works nicely.

Cheers,
Gad


-- 
Gad Abraham
Department of Mathematics and Statistics
University of Melbourne
Parkville 3010, Victoria, Australia
email: [EMAIL PROTECTED]
web: http://www.ms.unimelb.edu.au/~gabraham

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html