Re: [R] Using help()

2009-01-25 Thread Patrick Burns

Michael Kubovy wrote:

Dear R-helpers,

[...]

(2) If I remember dnorm() and want to be reminded of the call, I also  
get a list of pages.
  


It sounds to me like here you want:

args(dnorm)

Patrick Burns
patr...@burns-stat.com
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of The R Inferno and A Guide for the Unwilling S User)

Advice?

It seems to me that if the output of help() listed base functions  
first, it would go a long way toward improving the usefulness of this  
function.

_
Professor Michael Kubovy
University of Virginia
Department of Psychology
Postal Address:
P.O.Box 400400, Charlottesville, VA 22904-4400
Express Parcels Address:
Gilmer Hall, Room 102, McCormick Road, Charlottesville, VA 22903
Office:B011;Phone: +1-434-982-4729
Lab:B019;   Phone: +1-434-982-4751
WWW:http://www.people.virginia.edu/~mk9y/
Skype name: polyurinsane





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Delete Dates from a vector.

2009-01-25 Thread pluribus
I need to create a vector of dates, weekdays only for a function I am
working on. Thanks to the chron library, I have managed to accomplish
this, but is there is a better / easier way. This is what I have thus
far.

range.dates <- seq.dates('02/02/2009', '03/13/2009', by = 'days')
range.days <- weekdays(range.dates)
weekends <- which(range.days == "Sat" | range.days == "Sun")
range.dates[weekends] <- NA
range.dates <- sort(range.dates)

I am trying to get better with R and I appreciate any feedback or
suggestions you may have.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Multiple lattice plots on a page: aligning x-axes vertically

2009-01-25 Thread Daniel Myall

Dear R-help,

I am creating two lattice plots (a densityplot() and an xyplot()) that 
have the same x-axes and then 'printing' them onto the same page, one 
above the other (see end of email for an example to generate the graph). 
With different labels on the y-axis for each plot the left spacing is 
different, and the x-axes don't align vertically. Although I can 
manually modify the print region of the plots on the page to align the 
x-axes, this is not very robust; as can be expected changing the size of 
the plot device scales the x-axes differently in both plots. 
Additionally, changing devices also causes issues (i.e., the plots are 
aligned in quartz(width=12,height=6) but then slightly off with 
pdf(width=12,height=6)). As I'm using this inside Sweave to generate 
numerous plots of this form, I am after some robust way to vertically 
align the x-axes of the plots.


With the approach I'm taking is there a way with lattice or grid to 
vertically align the x-axes? By somehow setting the internal plot width 
to be the same in both plots I think this would solve the issue 
(possibly by setting the right parameters in 
str(trellis.par.get(layout.widths)))?  Alternatively, would another 
approach be better (i.e., is it possible to create a new panel function 
that does a panel.xyplot and panel.densityplot on slightly different 
data?).


Thanks.

Daniel


## BEGIN Example
#OS: Mac OS X 10.5.5; R: 2.8.1; lattice 0.17-20

library(reshape)
library(lattice)

plotdensitymeans <-
function(data, measure, factors=c('subject_group','task'),
xlab="", xlim=NULL, ...) {


   # Create means by subject
   x.melted <- melt(data, id.var = append(factors,'subject_ID'),
   measure.var=measure, na.rm=T)
   formula.bysubject.cast <- paste(factors[1], "+", factors[2], "+
subject_ID ~ .")
   x.cast.subject <- data.frame(cast(x.melted, formula.bysubject.cast,
mean))

   # Plot means by subject

   text.formula <- paste(factors[1], ":", factors[2], "~ X.all.")
   text.group <- paste(factors[1], ":", factors[2])
   formula.xyplot <- as.formula(text.formula)

   formula.group <- as.expression(formula.xyplot)

   environment(formula.xyplot) <- environment()
   plot.subject.means <- xyplot(formula.xyplot,
group=eval(formula.group), xlim=xlim, pch=16, alpha=0.6, data =
x.cast.subject, xlab=xlab, ylab="")


   # Plot distributions
   formula.densityplot <- as.formula(paste("~", measure[1]))

   environment(formula.densityplot) <- environment()
   plot.density <-
densityplot(formula.densityplot, group=eval(formula.group), data=data,
   xlim=xlim, n=200, auto.key=list(columns = 4, line=TRUE),
   plot.points=F, xlab="", lty=1,
   scales=list(y=list(draw=FALSE), x=list(draw=FALSE)))

   # Plot both plots on a single page
   print(plot.subject.means, position = c(0,0,1,0.35))
   print(plot.density, position = c(0.0988,0.22,0.98,1), newpage = FALSE)

}

example.data <- data.frame(subject_ID = c('A01','B01','A02','B02'),
subject_group = c('pop1','pop2'),
   task = c(rep('task1',32), rep('task2',32)),
   dependent_measure = rnorm(64))
plotdensitymeans(example.data, measure=c('dependent_measure'),
xlab="dependent measure (units)", xlim=c(-3,3))


##END Example

---
Daniel Myall
PhD Student
Department of Medicine
University of Otago, Christchurch
Van der Veer Institute for Parkinson's and Brain Research
66 Stewart St
Christchurch
New Zealand

daniel.my...@vanderveer.org.nz
http://www.vanderveer.org.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R for Computational Neuroscience?

2009-01-25 Thread Bernardo Rangel Tura
On Fri, 2009-01-23 at 08:53 -0400, Mike Lawrence wrote:
 Hi all,
 
 I've noticed that many computational neuroscience research groups use
 MATLAB. While it's possible that MATLAB may have some features
 unavailable in R, I suspect that this may instead simply be a case of
 costly tradition, where researchers were taught MATLAB as students and
 pay for it as researchers because it's all they know.
 
 I'd like to attempt to break the cycle by offering colleagues
 resources on using R for computational neuroscience, but I haven't
 been able to find anything (searched the task view, r-seek,  google).
 
 Can anyone direct me to resources on using R for computational
 neuroscience? Input on my possibly naive assumption that R is a
 sufficient tool for this field would also be appreciated.
 
 Cheers,
 
 Mike

Mike,

I think neuroscience is a term used for a wide group of researchers.
The common analyses (hypothesis tests, ANOVA, regression models, etc.)
are easily done in R.

But interpreting MRI data needs specific packages:

1- AnalyzeFMRI - Functions for I/O, visualisation and analysis of
functional Magnetic Resonance Imaging (fMRI) datasets stored in the
ANALYZE or NIFTI format.

2- fmri - contains R-functions to perform an fmri analysis as described
in Tabelow, K., Polzehl, J., Voss, H.U., and Spokoiny, V. Analysing fMRI
experiments with structure adaptive smoothing procedures, NeuroImage,
33:55-62 (2006)

3- dti - Diffusion Weighted Imaging is a Magnetic Resonance Imaging
modality, that measures diffusion of water in tissues like the human
brain. The package contains R-functions to process diffusion-weighted
data in the context of the diffusion tensor model (DTI). This includes
the calculation of anisotropy measures and, most important, the
implementation of our structural adaptive smoothing algorithm as
described in K. Tabelow, J. Polzehl, V. Spokoiny, and H.U. Voss,
Diffusion Tensor Imaging: Structural Adaptive Smoothing, Neuroimage
39(4), 1763-1773 (2008).


-- 
Bernardo Rangel Tura, M.D,MPH,Ph.D
National Institute of Cardiology
Brazil

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Calling a jar file with rJava

2009-01-25 Thread cameron.bracken

I want to call the jar file from R.  I want to be able to do this without
using system().  Normally a command line call would look like 

java -jar eps2pgf.jar -m directcopy myfile.eps

My question is: Can I call this program using the rJava package or any other
(command line options and all)?  I really know nothing about Java so any
pointers would be appreciated.  

Thanks

-Cameron Bracken
-- 
View this message in context: 
http://www.nabble.com/Calling-a-jar-file-with-rJava-tp21650356p21650356.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Delete Dates from a vector.

2009-01-25 Thread Gabor Grothendieck
See:
https://stat.ethz.ch/pipermail/r-help/2008-September/173522.html
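For reference, a minimal base-R sketch along the same lines (not taken from
the linked thread; it uses the Date class rather than chron, and note that
weekdays() returns locale-dependent day names):

## keep only Monday-Friday dates in the range
dates <- seq(as.Date("2009-02-02"), as.Date("2009-03-13"), by = "day")
weekday.dates <- dates[!weekdays(dates) %in% c("Saturday", "Sunday")]
weekday.dates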

On Sat, Jan 24, 2009 at 11:01 PM, pluribus pluri...@overdetermined.net wrote:
 I need to create a vector of dates, weekdays only for a function I am
 working on. Thanks to the chron library, I have managed to accomplish
 this, but is there is a better / easier way. This is what I have thus
 far.

range.dates - seq.dates('02/02/2009', '03/13/2009', by =
'days')
range.days - weekdays(range.dates)
weekends - which(range.days == Sat OR range.days == Sun)
range.dates[weekends] - NA
range.dates - sort(range.dates)

 I am trying to get better with R and I appreciate any feedback or
 suggestions you may have.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Correlation matrix one side with significance

2009-01-25 Thread Gabor Grothendieck
If your purpose is simply to represent a correlation matrix in a more
compact way, see ?symnum, the corrgram package and an example in the
book Multivariate Data Visualization (which gives a lattice
implementation).
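
As a small illustration (not from the corrgram package or the book; swiss is
just a convenient built-in data set), symnum() already gives a compact
lower-triangle view of a correlation matrix:

## symbolic, lower-triangle display of the correlations
symnum(cor(swiss[, 1:4]), abbr.colnames = 6)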

On Fri, Mar 7, 2008 at 2:15 PM, Martin Kaffanke
tech...@roomandspace.com wrote:
 Thank you, thats really good and gives me very good information.

 Thanks,
 Martin

 Am Donnerstag, den 06.03.2008, 14:35 -0500 schrieb Chuck Cleland:
 On 3/6/2008 2:07 PM, Martin Kaffanke wrote:
  Am Mittwoch, den 05.03.2008, 14:38 -0300 schrieb Henrique Dallazuanna:
  Try this:
 
  On 05/03/2008, Martin Kaffanke tech...@roomandspace.com wrote:
  Hi there!
 
   In my case,
 
   cor(d[1:20])
 
   makes me a good correlation matrix.
 
   Now I'd like to have it one sided, means only the left bottom side to be
   printed (the others are the same) and I'd like to have * where the
   p-value is lower than 0.05 and ** lower than 0.01.
 
   How can I do this?
  d <- matrix(rexp(16, 2), 4)
  corr <- cor(d)
  sign <- symnum(cor(d), cutpoints=c(0.05, 0.01), corr = T,
  symbols=c("***", "**", "*"), abbr=T, diag=F)
 
  noquote(mapply(function(x, y) paste(x, format(y, dig=3), sep=''),
  as.data.frame(unclass(sign)), as.data.frame(corr)))
 
  Seems that we mark the value itself, but not the p-value.
 
  So lets say, in a way I have to get the lower left half of a
 
  cor(el[1:20])
 
  Then I need to calc all the values with a cor.test() to see for the
  p-value.  And the p-value should be lower than .05 or .01 - this should
  make the * to the value.
 
  Thanks,
  Martin

Do you want something like the following, but with the upper triangle
 removed?

 corstars <- function(x){
 require(Hmisc)
 x <- as.matrix(x)
 R <- rcorr(x)$r
 p <- rcorr(x)$P
 mystars <- ifelse(p < .01, "**|", ifelse(p < .05, "* |", "  |"))
 R <- format(round(cbind(rep(-1.111, ncol(x)), R), 3))[,-1]
 Rnew <- matrix(paste(R, mystars, sep=""), ncol=ncol(x))
 diag(Rnew) <- paste(diag(R), "  |", sep="")
 rownames(Rnew) <- colnames(x)
 colnames(Rnew) <- paste(colnames(x), "|", sep="")
 Rnew <- as.data.frame(Rnew)
 return(Rnew)
 }

 corstars(swiss[,1:4])
  Fertility| Agriculture| Examination| Education|
 Fertility 1.000  | 0.353* |-0.646**|  -0.664**|
 Agriculture   0.353* | 1.000  |-0.687**|  -0.640**|
 Examination  -0.646**|-0.687**| 1.000  |   0.698**|
 Education-0.664**|-0.640**| 0.698**|   1.000  |

I will leave removing the upper triangle to you - there should be
 examples in the archives.

  
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide 
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.

 --
 Ihr Partner für Webdesign, Webapplikationen und Webspace.
 http://www.roomandspace.com/
 Martin Kaffanke +43 650 4514224

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multiple lattice plots on a page: aligning x-axes vertically

2009-01-25 Thread baptiste auguie

Have you tried c() from the latticeExtra package?

It worked for me (see below)



library(grid)
library(lattice)
x <- seq(0, 10, length=100)
y <- sin(x)
y2 <- 10*sin(x)
f <- rep(c(1, 2), each=50)

p1 <- xyplot(y~x, groups=f, ylab="BIG LABEL",
# auto.key=list(space="right"),
par.settings = list(layout.widths = list(panel=1, ylab = 2, axis.left
=1.0, left.padding=1,
  ylab.axis.padding=1, axis.panel=1)))

p2 <- xyplot(y2~x, ylab="a",
par.settings = list(layout.widths = list(panel=1, ylab = 2, axis.left
=1.0, left.padding=1,
  ylab.axis.padding=1, axis.panel=1)))

library(latticeExtra)

update(c(p2, p1, x.same = TRUE),
  layout = c(1, 2),
  ylab = list(c("a", "BIG LABEL"), y = c(1/6, 2/3)),
  par.settings = list(layout.heights = list(panel = c(1, 2))))


Hope this helps,

baptiste

On 25 Jan 2009, at 09:54, Daniel Myall wrote:


Dear R-help,

I am creating two lattice plots (a densityplot() and an xyplot()) that
have the same x-axes and then 'printing' them onto the same page, one
above the other (see end of email for an example to generate the  
graph).

With different labels on the y-axis for each plot the left spacing is
different, and the x-axes don't align vertically. Although I can
manually modify the print region of the plots on the page to align the
x-axes, this is not very robust; as can be expected changing the  
size of

the plot device scales the x-axes differently in both plots.
Additionally, changing devices also causes issues (i.e., the plots are
aligned in quartz(width=12,height=6) but then slightly off with
pdf(width=12,height=6)). As I'm using this inside Sweave to generate
numerous plots of this form, I am after some robust way to vertically
align the x-axes of the plots.

With the approach I'm taking is there a way with lattice or grid to
vertically align the x-axes? By somehow setting the internal plot  
width

to be the same in both plots I think this would solve the issue
(possibly by setting the right parameters in
str(trellis.par.get(layout.widths)))?  Alternatively, would another
approach be better (i.e., is it possible to create a new panel  
function

that does a panel.xyplot and panel.densityplot on slightly different
data?).

Thanks.

Daniel


## BEGIN Example
#OS: Mac OS X 10.5.5; R: 2.8.1; lattice 0.17-20

library(reshape)
library(lattice)

plotdensitymeans <-
function(data, measure, factors=c('subject_group','task'),
xlab="", xlim=NULL, ...) {

   # Create means by subject
   x.melted <- melt(data, id.var = append(factors,'subject_ID'),
   measure.var=measure, na.rm=T)
   formula.bysubject.cast <- paste(factors[1], "+", factors[2], "+
subject_ID ~ .")
   x.cast.subject <- data.frame(cast(x.melted, formula.bysubject.cast,
mean))

   # Plot means by subject
   text.formula <- paste(factors[1], ":", factors[2], "~ X.all.")
   text.group <- paste(factors[1], ":", factors[2])
   formula.xyplot <- as.formula(text.formula)
   formula.group <- as.expression(formula.xyplot)

   environment(formula.xyplot) <- environment()
   plot.subject.means <- xyplot(formula.xyplot,
group=eval(formula.group), xlim=xlim, pch=16, alpha=0.6, data =
x.cast.subject, xlab=xlab, ylab="")

   # Plot distributions
   formula.densityplot <- as.formula(paste("~", measure[1]))

   environment(formula.densityplot) <- environment()
   plot.density <-
densityplot(formula.densityplot, group=eval(formula.group), data=data,
   xlim=xlim, n=200, auto.key=list(columns = 4, line=TRUE),
   plot.points=F, xlab="", lty=1,
   scales=list(y=list(draw=FALSE), x=list(draw=FALSE)))

   # Plot both plots on a single page
   print(plot.subject.means, position = c(0,0,1,0.35))
   print(plot.density, position = c(0.0988,0.22,0.98,1), newpage = FALSE)

}

example.data <- data.frame(subject_ID = c('A01','B01','A02','B02'),
subject_group = c('pop1','pop2'),
   task = c(rep('task1',32), rep('task2',32)),
   dependent_measure = rnorm(64))
plotdensitymeans(example.data, measure=c('dependent_measure'),
xlab="dependent measure (units)", xlim=c(-3,3))

##END Example

---
Daniel Myall
PhD Student
Department of Medicine
University of Otago, Christchurch
Van der Veer Institute for Parkinson's and Brain Research
66 Stewart St
Christchurch
New Zealand

daniel.my...@vanderveer.org.nz
http://www.vanderveer.org.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R for Computational Neuroscience?

2009-01-25 Thread Ken Knoblauch
Mike Lawrence mike at thatmike.com writes:
 I've noticed that many computational neuroscience research groups use
 MATLAB. While it's possible that MATLAB may have some features
 unavailable in R, I suspect that this may instead simply be a case of
 costly tradition, where researchers were taught MATLAB as students and
 pay for it as researchers because it's all they know.
 I'd like to attempt to break the cycle by offering colleagues
 resources on using R for computational neuroscience, but I haven't
 been able to find anything (searched the task view, r-seek,  google).

 Can anyone direct me to resources on using R for computational
 neuroscience? Input on my possibly naive assumption that R is a
 sufficient tool for this field would also be appreciated.
 Mike

Consider also the packages
STAR - Spike Train Analysis with R
and
brainwaver - Basic wavelet analysis of multivariate time series
with a visualisation and parametrisation using graph theory,

which was developed for analyzing fMRI data.
Many of the packages developed for analyzing graphs
of social networks are equally useful for analyzing connectivity
in neural systems.

There are also packages for analysing psychophysical data that
are relevant for behavioral neuroscience:
psyphy, MLDS, sdtalt, etc.

Would there be enough for a CRAN task view?

Ken

-- 
Ken Knoblauch
Inserm U846
Institut Cellule Souche et Cerveau
Département Neurosciences Intégratives
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.sbri.fr

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there any function can be used to compare two probit models made from same data?

2009-01-25 Thread Michael Dewey

At 14:55 23/01/2009, David Freedman wrote:


Hi - wouldn't it be possible to bootstrap the difference between the fit of
the 2 models?  For example, if one had a *linear* regression problem, the
following script could be used (although I'm sure that it could be
improved):


There are a number of methods for comparing non-nested models in the 
lmtest package.
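
As a rough illustration of that route, here is a sketch using base glm()
with a probit link and the Kalythos age data quoted further below (it is not
from lmtest, and the height predictor in the question is hypothetical, so
only the age and null models are fitted):

## Kalythos blindness-by-age data from the quoted question
kalythos <- data.frame(age   = c(20, 35, 45, 55, 70),
                       n     = rep(50, 5),
                       blind = c(6, 17, 26, 37, 44))
fit.age  <- glm(cbind(blind, n - blind) ~ age, data = kalythos,
                family = binomial(link = "probit"))
fit.null <- glm(cbind(blind, n - blind) ~ 1, data = kalythos,
                family = binomial(link = "probit"))
anova(fit.null, fit.age, test = "Chisq")  # nested models: likelihood ratio test
AIC(fit.null, fit.age)  # non-nested candidates (e.g. an age model vs a height
                        # model) can be ranked by AIC, but without a p-value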




library(MASS); library(boot)
#create intercorrelated data
Sigma <- matrix(c(1,.5,.4,  .5,1,.8,  .4,.8,1),3,3)
Sigma
dframe <- as.data.frame(mvrnorm(n <- 200, rep(0, 3), Sigma))
names(dframe) <- c('disease','age','ht') #age and ht are predictors of
'disease'
head(dframe); cor(dframe)

#bootstrap the difference between models containing the 2 predictors
model.fun <- function(data, indices) {
 dsub <- dframe[indices,]
 m1se <- summary(lm(disease~age,data=dsub))$sigma;
 m2se <- summary(lm(disease~ht,da=dsub))$sigma;
 diff <- m1se - m2se;  #diff is the difference in the SEs of the 2 models
 }
eye <- boot(dframe, model.fun, R=200);  class(eye); names(eye);
des(an(eye$t))
boot.ci(eye,conf=c(.95,.99),type=c('norm'))



Ben Bolker wrote:


 jingjiang yan jingjiangyan at gmail.com writes:


 hi, people
 How can we compare two probit models brought out from the same data?
 Let me use the example used in An Introduction to R.
 Consider a small, artificial example, from Silvey (1970).

 On the Aegean island of Kalythos the male inhabitants suffer from a
 congenital eye disease, the effects of which become more marked with
 increasing age. Samples of islander males of various ages were tested for
 blindness and the results recorded. The data is shown below:

 Age: 20 35 45 55 70
 No. tested: 50 50 50 50 50
 No. blind: 6 17 26 37 44
 

 now, we can use the age and the blind percentage to produce a probit
 model
 and get their coefficients by using glm function as was did in An
 Introduction to R

 My question is, let say there is another potential factor instead of age
 affected the blindness percentage.
 for example, the height of these males. Using their height, and their
 relevant blindness we can introduce another probit model.

 If I want to determine which is significantly better, which function can
 I
 use to compare both models? and, in addition, compared with the Null
 hypothesis(i.e. the same blindness for all age/height) to prove this
 model
 is effective?


   You can use a likelihood ratio test (i.e.
 anova(model1, model0)) to compare either model
 to the null model (blindness is independent of
 both age and height).  The age model and height
 model are non-nested, and of equal complexity.
 You can tell which one is *better* by comparing
 log-likelihoods/deviances, but cannot test
 a null hypothesis of significance. Most (but
 not all) statisticians would say you can compare
 non-nested models by using AIC, but you don't
 get a hypothesis-test/p-value in this way.


   Ben Bolker

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



--
View this message in context: 
http://www.nabble.com/Is-there-any-function-can-be-used-to-compare-two-probit-models-made-from-same-data--tp21614487p21625839.html

Sent from the R help mailing list archive at Nabble.com.


Michael Dewey
http://www.aghmed.fsnet.co.uk

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calling a jar file with rJava

2009-01-25 Thread Gabor Grothendieck
Check out the source to helloJavaWorld package or one of the other
packages that uses rJava.  Some of them are:
CADStat Containers JGR RFreak RJDBC RKEA RLadyBug
RWeka gWidgetsrJava helloJavaWorld iplots openNLP rSymPy
rcdk rcdklibs wordnet
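
As a rough, untested sketch (not from those packages' documentation; the
main class name net.sf.eps2pgf.Main is only a guess and would need to be
checked against the jar's manifest), invoking a jar's main() from rJava
might look like:

library(rJava)
.jinit()                        # start the JVM
.jaddClassPath("eps2pgf.jar")   # put the jar on the class path
## call the (assumed) main class's public static void main(String[] args)
.jcall("net/sf/eps2pgf/Main", "V", "main",
       .jarray(c("-m", "directcopy", "myfile.eps")))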

On Sun, Jan 25, 2009 at 5:48 AM, cameron.bracken
cameron.brac...@gmail.com wrote:

 I want to call the jar file from R.  I want to be able to do this without
 using system().  Normally a command line call would look like

 java -jar eps2pgf.jar -m directcopy myfile.eps

 My question is: Can I call this program using the rJava package or any other
 (command line options and all)?  I really know nothing about Java so any
 pointers would be appreciated.

 Thanks

 -Cameron Bracken
 --
 View this message in context: 
 http://www.nabble.com/Calling-a-jar-file-with-rJava-tp21650356p21650356.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] commercially supported version of R for 64 -bit Windows?

2009-01-25 Thread new ruser




Can anyone please refer me to all firms that offer and/or are developing a
commercially supported version of R for 64-bit Windows? - Thanks




  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R for Computational Neuroscience?

2009-01-25 Thread Mike Lawrence
Thanks to Ken and Bernardo for their attempts to answer my question,
but I was apparently unclear as to what I meant by computational
neuroscience.

The tools Ken and Bernardo suggest provide means to analyze data from
neuroscience research, but I'm actually looking for means to simulate
biologically realistic neural systems.

Maybe it would help if I provided some keywords, so here is a list of
chapters/sections from the MATLAB section of the book "How the brain
computes: Network fundamentals of computational neuroscience" by
Thomas Trappenberg:

12 A MATLAB guide to computational neuroscience
12.1 Introduction to the MATLAB programming environment
...
12.2 Spiking neurons and numerical integration in MATLAB
12.2.1 Integrating Hodgkin-Huxley equations with the Euler method
12.2.2 The Wilson model and advanced integration
12.2.3 MATLAB function files
12.2.4 Leaky integrate-and-fire neuron
12.2.5 Poisson spike trains
12.2.6 Netlet formulas by Anninos et al.
12.3 Associators and Hebbian Learning
12.3.1 Hebbian Weight matrix in rate models
12.3.2 Hebbian learning with weight decay
12.4 Recurrent networks and networks dynamics
12.4.1 Example of a complete network simulation
12.4.2 Quasi-continuous attractor network
12.4.3 Networks with random asymmetric weight matrix
12.4.4 The Lorenz attractor
12.5 Continuous attractor neural networks
12.5.1 Path-integration
12.6 Error-backpropagation network

-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
www.thatmike.com

Looking to arrange a meeting? Check my public calendar:
http://www.thatmike.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problem with JGR. Was: Re: Using help()

2009-01-25 Thread Michael Kubovy
Dear Friends,

Thanks to Rolf Turner, Brian Ripley and Patrick Burns for their
answers. They don't quite resolve the problem, which I now realize is
due to non-standard behavior of JGR, at least on my machine (I
verified that the Mac GUI works entirely as expected):

** My installation **
Running the JGR GUI:
  sessionInfo()
R version 2.8.1 (2008-12-22)
i386-apple-darwin8.11.1

locale:
C/C/en_US/C/C/C

attached base packages:
[1] grid  stats graphics  grDevices utils datasets  methods
[8] base

other attached packages:
[1] JGR_1.6-2   iplots_1.1-2JavaGD_0.5-2rJava_0.6-1
[5] MASS_7.2-45 lattice_0.17-20

loaded via a namespace (and not attached):
[1] tools_2.8.1

** What happens with ? and ?? **

If I type ?normal I get the long list, not "No documentation found".
When I type ?plot I get the help page for plot {JM}, and not
plot.default {graphics}; when I type ?dnorm I get a rather long list
of help pages.

If I type ??normal
I get
?normal.htm
.com.symantec.APSock
.com.symantec.aptmp
.DM_1039:1232634821l:DlnIrq
.DM_11869:1232818209l:m4AGyL
.DM_13345:1232655220l:C1js39
.DM_14309:1232822090l:e6wvqw
.DM_15688:1232659145l:ffZvPg
.DM_16640:1232825979l:n5TrAz
.DM_18040:1232662823l:Gb81yX
…

** Another JGR problem **

Help pages for newly installed packages are accessible only after JGR  
is restarted.

Thanks,
MK

On Jan 24, 2009, at 8:54 PM, Rolf Turner wrote:

 On 25/01/2009, at 2:33 PM, Michael Kubovy wrote:

 …
 (1) If I type ?normal because I forgot the name dnorm() I get a long
 list of relevant pages. Getting to right page is laborious.

 (2) If I remember dnorm() and want to be reminded of the call, I also
 get a list of pages.
 …


 …
 If you type ``?normal'' you get a ``No documentation found'' message.

 If you type ``??normal'' you indeed get a long list of pages, some of
 which might be relevant.  (If you want help on ``dnorm'' then the  
 relevant
 page is stats::Normal.  And then typing ``?Normal'' gets you what you
 want.  Which is somewhat on the obscure side of obvious, IMHO.)

 If you type ``?dnorm'' then you get exactly what you want immediately.
 Exactly?  Well, there's also info on pnorm, qnorm, and rnorm, but I
 expect you can live with that.

 …
   Rolf Turner


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] data management

2009-01-25 Thread oscar linares
Dear Rxperts,

I would like to convert the following:

StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-0BFK(03)03A0.128510.0E+001.
0.41670E-01
1NT_1-0BFL(00,03)0003D0.36577
1NT_1-0BFL(00,02)0002D0.9
1NT_1-0BFL(00,04)0004D0.9
1NT_1-0BFP(01)01A0.365770.0E+00100.00
0.36880E-01
1NT_1-0BFP(02)02A28.2690.0E+00100.00
0.58489E-01
1NT_1-0BFP(03)03A68.14410.0001000.0
0.27806E-01
1NT_1-0BFP(05)05D0.9
1NT_1-0BFP(31)31D26.316
1NT_1-0BFP(32)32D29.483
1NT_1-0BFP(22)22D7.7813
StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-1BFK(03)03A0.128520.0E+001.
0.39727E-01
1NT_1-1BFL(00,03)0003D0.36577
1NT_1-1BFL(00,02)0002D0.9
1NT_1-1BFL(00,04)0004D0.9
1NT_1-1BFP(01)01A0.365770.0E+00100.00
0.35166E-01
1NT_1-1BFP(02)02A28.2800.0E+00100.00
0.55760E-01
1NT_1-1BFP(03)03A68.13410.0001000.0
0.26508E-01
1NT_1-1BFP(05)05D0.9
1NT_1-1BFP(22)22D7.7811
StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-2BFK(03)03A0.128510.0E+001.
0.90167E-01
1NT_1-2BFL(00,03)0003D0.36575
1NT_1-2BFL(00,02)0002D0.9
1NT_1-2BFL(00,04)0004D0.9
1NT_1-2BFP(01)01A0.365750.0E+00100.00
0.79794E-01
1NT_1-2BFP(02)02A23.8900.0E+00100.00
0.13385
1NT_1-2BFP(03)03A76.29710.0001000.0
0.68931E-01
1NT_1-2BFP(05)05D0.9
1NT_1-2BFP(22)22D7.7815

To look like the following stata output

 | study   studyn~e K3P1   P2   P3   P5P11
P23  P31  P32  P33 |

|--|
  1. | 1  NT_16   .125   .35   35.903   8.6815   .83195 58
.13793   26.316   4.7181   13.211 |
  2. | 2   NT_1   .125   .35   23.173   9.4882   .75125   66.7
.11994   26.316   4.042711.32 |
  3. | 3   NT_2   .125   .35   48.229   7.1296   .68354   66.7
.11994   26.316   4.9101   13.748 |
  4. | 4   NT_3   .125   .35   8.0027   15.967   1.1438
80.1   26.316   .37137   1.0398 |
  5. | 5   NT_4   .125   .35   24.468   4.4256   .65408
40.2   26.316   2.1901   6.1322 |

|--|

Any suggestions for doing this in R?

Many thanks in advance for your help.

-- 
Oscar
Oscar A. Linares
Molecular Medicine Unit
Bolles Harbor
Monroe, Michigan

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpreting model matrix columns when using contr.sum

2009-01-25 Thread Douglas Bates
On Fri, Jan 23, 2009 at 4:58 PM, Gang Chen gangch...@gmail.com wrote:
 With the following example using contr.sum for both factors,

> dd <- data.frame(a = gl(3,4), b = gl(4,1,12)) # balanced 2-way
> model.matrix(~ a * b, dd, contrasts = list(a=contr.sum, b=contr.sum))

   (Intercept) a1 a2 b1 b2 b3 a1:b1 a2:b1 a1:b2 a2:b2 a1:b3 a2:b3
1            1  1  0  1  0  0     1     0     0     0     0     0
2            1  1  0  0  1  0     0     0     1     0     0     0
3            1  1  0  0  0  1     0     0     0     0     1     0
4            1  1  0 -1 -1 -1    -1     0    -1     0    -1     0
5            1  0  1  1  0  0     0     1     0     0     0     0
6            1  0  1  0  1  0     0     0     0     1     0     0
7            1  0  1  0  0  1     0     0     0     0     0     1
8            1  0  1 -1 -1 -1     0    -1     0    -1     0    -1
9            1 -1 -1  1  0  0    -1    -1     0     0     0     0
10           1 -1 -1  0  1  0     0     0    -1    -1     0     0
11           1 -1 -1  0  0  1     0     0     0     0    -1    -1
12           1 -1 -1 -1 -1 -1     1     1     1     1     1     1
 ...

 I have two questions:

 (1) I assume the 1st column (under intercept) is the overall mean, the
 2nd column (under a1) is the difference between the 1st level of
 factor a and the overall mean, the 4th column (under b1) is the
 difference between the 1st level of factor b and the overall mean.

 Is this interpretation correct?

I don't think so and furthermore I don't see why the contrasts should
have an interpretation.  The contrasts are simply a parameterization
of the space spanned by the indicator columns of the levels of the
factors.  Interpretations as overall means, etc. are mostly a holdover
from antiquated concepts of how analysis of variance tables should be
evaluated.

If you want to determine the interpretation of particular coefficients
for the special case of a balanced design (which doesn't always mean a
resulting balanced data set - I remind my students that expecting a
balanced design to produce balanced data is contrary to Murphy's Law)
the easiest way of doing so is (I think this is right but I can
somehow manage to confuse myself on this with great ease) to calculate

> contr.sum(3)
  [,1] [,2]
1    1    0
2    0    1
3   -1   -1
> solve(cbind(1, contr.sum(3)))
          1      2      3
[1,]  0.333  0.333  0.333
[2,]  0.667 -0.333 -0.333
[3,] -0.333  0.667 -0.333
> solve(cbind(1, contr.sum(4)))
         1     2     3     4
[1,]  0.25  0.25  0.25  0.25
[2,]  0.75 -0.25 -0.25 -0.25
[3,] -0.25  0.75 -0.25 -0.25
[4,] -0.25 -0.25  0.75 -0.25

That is, the first coefficient is the overall mean (but only for a
balanced data set), the second is a contrast of the first level with
the others, the third is a contrast of the second level with the
others and so on.

 (2) I'm not so sure about those interaction columns. For example, what
 is a1:b1? Is it the 1st level of factor a at the 1st level of factor b
 versus the overall mean, or something more complicated?

Well, at the risk of sounding trivial, a1:b1 is the product of the a1
and b1 columns.  You need a basis for a certain subspace and this
provides one.  I don't see why there must be interpretations of the
coefficients.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] data management

2009-01-25 Thread Uwe Ligges
Although the message is rather unreadable, I guess you want to look at 
?reshape.
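
As a generic illustration only (the posted data is too garbled to reproduce,
so the column names and values below are made up for the example), the
long-to-wide direction of reshape() looks roughly like:

long <- data.frame(Study.Name = rep(c("NT_1-0BF", "NT_1-1BF"), each = 3),
                   Parameter  = rep(c("P(01)", "P(02)", "P(03)"), 2),
                   Value      = c(0.37, 28.27, 68.14, 0.37, 28.28, 68.13))
## one row per study, one Value column per parameter
reshape(long, idvar = "Study.Name", timevar = "Parameter",
        direction = "wide")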


Uwe Ligges

oscar linares wrote:

Dear Rxperts,

I would like to convert the following:

StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-0BFK(03)03A0.128510.0E+001.
0.41670E-01
1NT_1-0BFL(00,03)0003D0.36577
1NT_1-0BFL(00,02)0002D0.9
1NT_1-0BFL(00,04)0004D0.9
1NT_1-0BFP(01)01A0.365770.0E+00100.00
0.36880E-01
1NT_1-0BFP(02)02A28.2690.0E+00100.00
0.58489E-01
1NT_1-0BFP(03)03A68.14410.0001000.0
0.27806E-01
1NT_1-0BFP(05)05D0.9
1NT_1-0BFP(31)31D26.316
1NT_1-0BFP(32)32D29.483
1NT_1-0BFP(22)22D7.7813
StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-1BFK(03)03A0.128520.0E+001.
0.39727E-01
1NT_1-1BFL(00,03)0003D0.36577
1NT_1-1BFL(00,02)0002D0.9
1NT_1-1BFL(00,04)0004D0.9
1NT_1-1BFP(01)01A0.365770.0E+00100.00
0.35166E-01
1NT_1-1BFP(02)02A28.2800.0E+00100.00
0.55760E-01
1NT_1-1BFP(03)03A68.13410.0001000.0
0.26508E-01
1NT_1-1BFP(05)05D0.9
1NT_1-1BFP(22)22D7.7811
StudyStudy.NameParameterDestSrcFormValueMin
MaxFSD
1NT_1-2BFK(03)03A0.128510.0E+001.
0.90167E-01
1NT_1-2BFL(00,03)0003D0.36575
1NT_1-2BFL(00,02)0002D0.9
1NT_1-2BFL(00,04)0004D0.9
1NT_1-2BFP(01)01A0.365750.0E+00100.00
0.79794E-01
1NT_1-2BFP(02)02A23.8900.0E+00100.00
0.13385
1NT_1-2BFP(03)03A76.29710.0001000.0
0.68931E-01
1NT_1-2BFP(05)05D0.9
1NT_1-2BFP(22)22D7.7815

To look like the following stata output

 | study   studyn~e K3P1   P2   P3   P5P11
P23  P31  P32  P33 |

|--|
  1. | 1  NT_16   .125   .35   35.903   8.6815   .83195 58
.13793   26.316   4.7181   13.211 |
  2. | 2   NT_1   .125   .35   23.173   9.4882   .75125   66.7
.11994   26.316   4.042711.32 |
  3. | 3   NT_2   .125   .35   48.229   7.1296   .68354   66.7
.11994   26.316   4.9101   13.748 |
  4. | 4   NT_3   .125   .35   8.0027   15.967   1.1438
80.1   26.316   .37137   1.0398 |
  5. | 5   NT_4   .125   .35   24.468   4.4256   .65408
40.2   26.316   2.1901   6.1322 |

|--|

Any suggestions for doing this in R?

Many thanks in advance for your help.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpreting model matrix columns when using contr.sum

2009-01-25 Thread John Fox
Dear Doug and Gang Chen,

With balanced data and sum-to-zero contrasts, the intercept is indeed the
general mean of the response; the coefficient of a1 is the mean of the
response in category a1 minus the general mean; the coefficient of a1:b1 is
the mean of the response in cell a1, b1 minus the general mean and the
coefficients of a1 and b1; etc. For unbalanced data (and balanced data) the
intercept is the mean of the cell means; the coefficient of a1 is the mean
of cell means at level a1 minus the intercept; etc. Whether all this is of
interest is another question, since a simple graph of cell means tells a
more digestible story about the data.
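
A small numerical check of this (not from the original exchange; it reuses
the balanced dd example from the quoted message with an arbitrary simulated
response):

set.seed(1)
dd <- data.frame(a = gl(3, 4), b = gl(4, 1, 12))
dd$y <- rnorm(12)
fit <- lm(y ~ a * b, data = dd,
          contrasts = list(a = contr.sum, b = contr.sum))
cellmeans <- tapply(dd$y, list(dd$a, dd$b), mean)
mean(cellmeans)                          # equals coef(fit)["(Intercept)"]
mean(cellmeans[1, ]) - mean(cellmeans)   # equals coef(fit)["a1"]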

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
 Behalf Of Douglas Bates
 Sent: January-25-09 10:49 AM
 To: Gang Chen
 Cc: R-help
 Subject: Re: [R] Interpreting model matrix columns when using contr.sum
 
 On Fri, Jan 23, 2009 at 4:58 PM, Gang Chen gangch...@gmail.com wrote:
  With the following example using contr.sum for both factors,
 
> dd <- data.frame(a = gl(3,4), b = gl(4,1,12)) # balanced 2-way
> model.matrix(~ a * b, dd, contrasts = list(a=contr.sum, b=contr.sum))

   (Intercept) a1 a2 b1 b2 b3 a1:b1 a2:b1 a1:b2 a2:b2 a1:b3 a2:b3
1            1  1  0  1  0  0     1     0     0     0     0     0
2            1  1  0  0  1  0     0     0     1     0     0     0
3            1  1  0  0  0  1     0     0     0     0     1     0
4            1  1  0 -1 -1 -1    -1     0    -1     0    -1     0
5            1  0  1  1  0  0     0     1     0     0     0     0
6            1  0  1  0  1  0     0     0     0     1     0     0
7            1  0  1  0  0  1     0     0     0     0     0     1
8            1  0  1 -1 -1 -1     0    -1     0    -1     0    -1
9            1 -1 -1  1  0  0    -1    -1     0     0     0     0
10           1 -1 -1  0  1  0     0     0    -1    -1     0     0
11           1 -1 -1  0  0  1     0     0     0     0    -1    -1
12           1 -1 -1 -1 -1 -1     1     1     1     1     1     1
  ...
 
  I have two questions:
 
  (1) I assume the 1st column (under intercept) is the overall mean, the
  2nd column (under a1) is the difference between the 1st level of
  factor a and the overall mean, the 4th column (under b1) is the
  difference between the 1st level of factor b and the overall mean.
 
  Is this interpretation correct?
 
 I don't think so and furthermore I don't see why the contrasts should
 have an interpretation.  The contrasts are simply a parameterization
 of the space spanned by the indicator columns of the levels of the
 factors.  Interpretations as overall means, etc. are mostly a holdover
 from antiquated concepts of how analysis of variance tables should be
 evaluated.
 
 If you want to determine the interpretation of particular coefficients
 for the special case of a balanced design (which doesn't always mean a
 resulting balanced data set - I remind my students that expecting a
 balanced design to produce balanced data is contrary to Murphy's Law)
 the easiest way of doing so is (I think this is right but I can
 somehow manage to confuse myself on this with great ease) to calculate
 
 > contr.sum(3)
   [,1] [,2]
 1    1    0
 2    0    1
 3   -1   -1
 > solve(cbind(1, contr.sum(3)))
           1      2      3
 [1,]  0.333  0.333  0.333
 [2,]  0.667 -0.333 -0.333
 [3,] -0.333  0.667 -0.333
 > solve(cbind(1, contr.sum(4)))
          1     2     3     4
 [1,]  0.25  0.25  0.25  0.25
 [2,]  0.75 -0.25 -0.25 -0.25
 [3,] -0.25  0.75 -0.25 -0.25
 [4,] -0.25 -0.25  0.75 -0.25
 
 That is, the first coefficient is the overall mean (but only for a
 balanced data set), the second is a contrast of the first level with
 the others, the third is a contrast of the second level with the
 others and so on.
 
  (2) I'm not so sure about those interaction columns. For example, what
  is a1:b1? Is it the 1st level of factor a at the 1st level of factor b
  versus the overall mean, or something more complicated?
 
 Well, at the risk of sounding trivial, a1:b1 is the product of the a1
 and b1 columns.  You need a basis for a certain subspace and this
 provides one.  I don't see why there must be interpretations of the
 coefficients.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] commercially supported version of R for 64 -bit Windows?

2009-01-25 Thread stephen sefick
Why?  Revolution Computing may do what you want.

On Sun, Jan 25, 2009 at 7:39 AM, new ruser newru...@yahoo.com wrote:




 Can anyone please refer me to all firms that offer and/or are developing a 
 commercially supported version of R for 64 -bit Windows? - Thanks





[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Build Error on Opensolaris iconvlist

2009-01-25 Thread Karun Gahlawat
Uwe,
Sorry I missed it. I do have gnu iconv..
SUNWgnu-libiconv

ls -lra /usr/lib/*iconv* | more
lrwxrwxrwx   1 root root  14 Jan 23 21:23 /usr/lib/libiconv.so -> libgnuiconv.so
lrwxrwxrwx   1 root root  22 Jan 23 21:23 /usr/lib/libgnuiconv.so -> ../gnu/lib/libiconv.so

And hence the confusion..

On Sat, Jan 24, 2009 at 1:00 PM, Uwe Ligges
lig...@statistik.tu-dortmund.de wrote:


 Karun Gahlawat wrote:

 Hi!

 Trying to build R-2.8.1. while configuring, it throws error

 ./configure

 checking iconv.h usability... yes
 checking iconv.h presence... yes
 checking for iconv.h... yes
 checking for iconv... yes
 checking whether iconv accepts UTF-8, latin1 and UCS-*... no
 checking for iconvlist... no
 configure: error: --with-iconv=yes (default) and a suitable iconv is
 not available

 I am confused.. sorry new to this..
 I can see the iconv binary, headers and libs all in the standard
 directory. Please help or redirect!


 Please read the documentation, the R Installation and Administration manuals
 tells you:

 You will need GNU libiconv: the Solaris version of iconv is not
 sufficiently powerful. 

 Uwe Ligges


 SunOS  5.11 snv_101b i86pc i386 i86pc

 CC: Sun Ceres C++ 5.10 SunOS_i386 2008/10/22



 Karun

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Build Error on Opensolaris iconvlist

2009-01-25 Thread Uwe Ligges



Karun Gahlawat wrote:

Uwe,
Sorry I missed it. I do have gnu iconv..
SUNWgnu-libiconv

ls -lra /usr/lib/*iconv* | more
lrwxrwxrwx   1 root root  14 Jan 23 21:23 /usr/lib/libiconv.so -> libgnuiconv.so
lrwxrwxrwx   1 root root  22 Jan 23 21:23 /usr/lib/libgnuiconv.so -> ../gnu/lib/libiconv.so

And hence the confusion..



Hmmm, then I have no idea. Since I have no Solaris system available
currently, I cannot test ...


Uwe



On Sat, Jan 24, 2009 at 1:00 PM, Uwe Ligges
lig...@statistik.tu-dortmund.de wrote:


Karun Gahlawat wrote:

Hi!

Trying to build R-2.8.1. while configuring, it throws error

./configure

checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... yes
checking whether iconv accepts UTF-8, latin1 and UCS-*... no
checking for iconvlist... no
configure: error: --with-iconv=yes (default) and a suitable iconv is
not available

I am confused.. sorry new to this..
I can see the iconv binary, headers and libs all in the standard
directory. Please help or redirect!


Please read the documentation, the R Installation and Administration manuals
tells you:

You will need GNU libiconv: the Solaris version of iconv is not
sufficiently powerful. 

Uwe Ligges



SunOS  5.11 snv_101b i86pc i386 i86pc

CC: Sun Ceres C++ 5.10 SunOS_i386 2008/10/22



Karun

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpreting model matrix columns when using contr.sum

2009-01-25 Thread Gang Chen
Many thanks to both Drs. Bates and Fox for the help!

I also figured out yesterday what Dr. Fox just said regarding the
interpretations of those coefficients for a balanced design. Thanks
Dr. Bates for the suggestion of using solve(cbind(1, contr.sum(4))) to
sort out the factor level effects. Model validation is very important,
but interpreting those coefficients, at least in the case of balanced
designs, also provides some insights about various effects for the
people working in the field.

Gang


On Sun, Jan 25, 2009 at 11:25 AM, John Fox j...@mcmaster.ca wrote:
 Dear Doug and Gang Chen,

 With balanced data and sum-to-zero contrasts, the intercept is indeed the
 general mean of the response; the coefficient of a1 is the mean of the
 response in category a1 minus the general mean; the coefficient of a1:b1 is
 the mean of the response in cell a1, b1 minus the general mean and the
 coefficients of a1 and b1; etc. For unbalanced data (and balanced data) the
 intercept is the mean of the cell means; the coefficient of a1 is the mean
 of cell means at level a1 minus the intercept; etc. Whether all this is of
 interest is another question, since a simple graph of cell means tells a
 more digestible story about the data.

 Regards,
  John

 --
 John Fox, Professor
 Department of Sociology
 McMaster University
 Hamilton, Ontario, Canada
 web: socserv.mcmaster.ca/jfox


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 On
 Behalf Of Douglas Bates
 Sent: January-25-09 10:49 AM
 To: Gang Chen
 Cc: R-help
 Subject: Re: [R] Interpreting model matrix columns when using contr.sum

 On Fri, Jan 23, 2009 at 4:58 PM, Gang Chen gangch...@gmail.com wrote:
  With the following example using contr.sum for both factors,
 
> dd <- data.frame(a = gl(3,4), b = gl(4,1,12)) # balanced 2-way
> model.matrix(~ a * b, dd, contrasts = list(a=contr.sum, b=contr.sum))

   (Intercept) a1 a2 b1 b2 b3 a1:b1 a2:b1 a1:b2 a2:b2 a1:b3 a2:b3
1            1  1  0  1  0  0     1     0     0     0     0     0
2            1  1  0  0  1  0     0     0     1     0     0     0
3            1  1  0  0  0  1     0     0     0     0     1     0
4            1  1  0 -1 -1 -1    -1     0    -1     0    -1     0
5            1  0  1  1  0  0     0     1     0     0     0     0
6            1  0  1  0  1  0     0     0     0     1     0     0
7            1  0  1  0  0  1     0     0     0     0     0     1
8            1  0  1 -1 -1 -1     0    -1     0    -1     0    -1
9            1 -1 -1  1  0  0    -1    -1     0     0     0     0
10           1 -1 -1  0  1  0     0     0    -1    -1     0     0
11           1 -1 -1  0  0  1     0     0     0     0    -1    -1
12           1 -1 -1 -1 -1 -1     1     1     1     1     1     1
  ...

  I have two questions:

  (1) I assume the 1st column (under intercept) is the overall mean, the
  2nd column (under a1) is the difference between the 1st level of
  factor a and the overall mean, the 4th column (under b1) is the
  difference between the 1st level of factor b and the overall mean.

  Is this interpretation correct?

 I don't think so and furthermore I don't see why the contrasts should
 have an interpretation.  The contrasts are simply a parameterization
 of the space spanned by the indicator columns of the levels of the
 factors.  Interpretations as overall means, etc. are mostly a holdover
 from antiquated concepts of how analysis of variance tables should be
 evaluated.

 If you want to determine the interpretation of particular coefficients
 for the special case of a balanced design (which doesn't always mean a
 resulting balanced data set - I remind my students that expecting a
 balanced design to produce balanced data is contrary to Murphy's Law)
 the easiest way of doing so is (I think this is right but I can
 somehow manage to confuse myself on this with great ease) to calculate

 > contr.sum(3)
   [,1] [,2]
 1    1    0
 2    0    1
 3   -1   -1
 > solve(cbind(1, contr.sum(3)))
           1      2      3
 [1,]  0.333  0.333  0.333
 [2,]  0.667 -0.333 -0.333
 [3,] -0.333  0.667 -0.333
 > solve(cbind(1, contr.sum(4)))
          1     2     3     4
 [1,]  0.25  0.25  0.25  0.25
 [2,]  0.75 -0.25 -0.25 -0.25
 [3,] -0.25  0.75 -0.25 -0.25
 [4,] -0.25 -0.25  0.75 -0.25

 That is, the first coefficient is the overall mean (but only for a
 balanced data set), the second is a contrast of the first level with
 the others, the third is a contrast of the second level with the
 others and so on.

  (2) I'm not so sure about those interaction columns. For example, what
  is a1:b1? Is it the 1st level of factor a at the 1st level of factor b
  versus the overall mean, or something more complicated?

 Well, at the risk of sounding trivial, a1:b1 is the product of the a1
 and b1 columns.  You need a basis for a certain subspace and this
provides one.  I don't see why there must be interpretations of the coefficients.

Re: [R] commercially supported version of R for 64 -bit Windows?

2009-01-25 Thread eugene dalt
I am beta testing XLSolutions' commercially supported version of R for 64-bit
Windows.

www.xlsolutions-corp.com


--- On Sun, 1/25/09, stephen sefick ssef...@gmail.com wrote:

 From: stephen sefick ssef...@gmail.com
 Subject: Re: [R] commercially supported version of R for 64 -bit Windows?
 To: newru...@yahoo.com
 Cc: r-help@r-project.org
 Date: Sunday, January 25, 2009, 8:46 AM
 Why?  Revolution Computing may do what you want.
 
 On Sun, Jan 25, 2009 at 7:39 AM, new ruser
 newru...@yahoo.com wrote:
 
 
 
 
  Can anyone please refer me to all firms that offer
 and/or are developing a commercially supported version of R
 for 64 -bit Windows? - Thanks
 
 
 
 
 
 [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained,
 reproducible code.
 
 
 
 
 -- 
 Stephen Sefick
 
 Let's not spend our time and resources thinking about
 things that are
 so little or so large that all they really do for us is
 puff us up and
 make us feel like gods.  We are mammals, and have not
 exhausted the
 annoying little problems of being mammals.
 
   -K. Mullis
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained,
 reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Outputing residuals

2009-01-25 Thread William Revelle

At 5:42 AM -0800 1/23/09, Josh B wrote:

Hello,

I was wondering if someone could tell me how to output, to file, the 
residuals from a REML model-fit. The type of residuals I am 
interested in are the simple original raw values - model fit type.


?residuals
  To find out how to get residuals from most functions
?str
To find out what values are returned but not necessarily displayed by 
a call to a particular function.
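
For instance, a minimal sketch (not from the reply above; it assumes an
nlme::lme REML fit purely for illustration, so substitute your own model
object) of writing raw residuals to a file:

library(nlme)
fit <- lme(distance ~ age, random = ~ 1 | Subject,
           data = Orthodont, method = "REML")
res <- residuals(fit, type = "response")   # observed minus fitted values
write.csv(data.frame(residual = res), file = "residuals.csv",
          row.names = FALSE)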


Bill




Thanks in advance,
Josh B.


 
	[[alternative HTML version deleted]]


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
William Revelle http://personality-project.org/revelle.html
Professor   http://personality-project.org/personality.html
Department of Psychology http://www.wcas.northwestern.edu/psych/
Northwestern University http://www.northwestern.edu/
Attend  ISSID/ARP:2009   http://issid.org/issid.2009/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Histogram for grouped data in R

2009-01-25 Thread Mendiburu, Felipe (CIP)
Ok,
use the library agricolae; graph.freq() is similar to hist(), with additional parameters
 
size <- c(0,10,20,50,100)
f <- c(15,25,10,5)
library(agricolae)
h <- graph.freq(size,counts=f,axes=F)
axis(1,x)
axis(2,seq(0,30,5))

Other functions:
# a histogram h from hist() or graph.freq() is necessary first
h <- graph.freq(x,counts=f,axes=F)
axis(1,x)
normal.freq(h,col="red")
table.freq(h)
ojiva.freq(h,type="b",col="blue")

Regards,
 
Felipe de Mendiburu
http://tarwi.lamolina.edu.pe/~fmendiburu

 


From: r-help-boun...@r-project.org on behalf of darthgervais
Sent: Fri 1/23/2009 8:55 AM
To: r-help@r-project.org
Subject: [R] Histogram for grouped data in R




I have grouped data in this format

Size  -- Count
0-10 --  15
10-20 -- 25
20-50 -- 10
50-100 -- 5

I've been trying to find a way to set this up with the proper histogram
heights, but can't seem to figure it out. So any help would be much
appreciated!
--
View this message in context: 
http://www.nabble.com/Histogram-for-grouped-data-in-R-tp21624806p21624806.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Histogram for grouped data in R

2009-01-25 Thread Mendiburu, Felipe (CIP)
This script is correct,
use the library agricolae; graph.freq() is similar to hist(), with additional parameters
 
size <- c(0,10,20,50,100)
f <- c(15,25,10,5)
library(agricolae)
h <- graph.freq(size,counts=f,axes=F)
axis(1,size)
axis(2,seq(0,30,5))
#
# Other functions:
# a histogram h from hist() or graph.freq() is necessary first
h <- graph.freq(size,counts=f,axes=F)
axis(1,size)
normal.freq(h,col="red")
table.freq(h)
ojiva.freq(h,type="b",col="blue")

 
Regards,
 
Felipe de Mendiburu
http://tarwi.lamolina.edu.pe/~fmendiburu 
 




From: r-help-boun...@r-project.org on behalf of Vincent Goulet
Sent: Fri 1/23/2009 10:36 AM
To: darthgervais
Cc: r-help@r-project.org
Subject: Re: [R] Histogram for grouped data in R



On Fri, Jan 23, 2009 at 08:55, darthgervais wrote:


 I have grouped data in this format

 Size  -- Count
 0-10 --  15
 10-20 -- 25
 20-50 -- 10
 50-100 -- 5

 I've been trying to find a way to set this up with the proper 
 histogram
 heights, but can't seem to figure it out. So any help would be much
 appreciated!

Define your data as a grouped.data object using the function of the 
same name in package actuar. Then you can simply use hist() as usual 
to get what you want. See:

@Article{Rnews:Goulet+Pigeon:2008,
  author = {Vincent Goulet and Mathieu Pigeon},
  title = {Statistical Modeling of Loss Distributions Using actuar},
  journal = {R News},
  year = 2008,
  volume = 8,
  number = 1,
  pages = {34--40},
  month = {May},
  url = http, pdf = Rnews2008-1 }
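
For darthgervais's data, a minimal sketch of that route (untested here; the
argument names follow the article cited above -- do check ?grouped.data):

library(actuar)
gd <- grouped.data(Group = c(0, 10, 20, 50, 100),
                   Frequency = c(15, 25, 10, 5))
hist(gd)   # histogram with heights adjusted for the unequal class widths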

HTH

---
   Vincent Goulet
   Acting Chair, Associate Professor
   École d'actuariat
   Université Laval, Québec
   vincent.gou...@act.ulaval.ca   http://vgoulet.act.ulaval.ca 
http://vgoulet.act.ulaval.ca/ 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Correlation matrix one side with significance

2009-01-25 Thread Kingsford Jones
On the topic of visualizing correlation, see also

Murdoch, D.J. and Chow, E.D. (1996). A graphical display of large
correlation matrices.
The American Statistician 50, 178-180.

with examples here:

# install.packages('ellipse')
example(plotcorr, package='ellipse')



On Sat, Mar 8, 2008 at 3:01 AM, Liviu Andronic landronim...@gmail.com wrote:
 On 3/5/08, Martin Kaffanke tech...@roomandspace.com wrote:
  Now I'd like to have it one sided, means only the left bottom side to be
  printed (the others are the same) and I'd like to have * where the
  p-value is lower than 0.05 and ** lower than 0.01.

 Look here [1], at Visualizing Correlations. You might find
 interesting the example of a plotted correlation matrix.

 Liviu

 [1] http://www.statmethods.net/stats/correlations.html

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] commercially supported version of R for 64 -bit Windows?

2009-01-25 Thread Dirk Eddelbuettel

On 25 January 2009 at 04:39, new ruser wrote:
| Can anyone please refer me to all firms that offer and/or are developing a
| commercially supported version of R for 64 -bit Windows? - Thanks 

Try contacting Revolution-Computing.com --- to the best of my knowledge they
expect to have such a product forthcoming in 2009.

64bit versions have of course been available on Linux / Unix for over a
decade so you could use that now.  Works great for me on Debian and Ubuntu.

Dirk

-- 
Three out of two people have difficulties with fractions.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Agricolae satisfaction survey

2009-01-25 Thread Mendiburu, Felipe (CIP)
Dear R users,
If you use the agricolae library, I would like your feedback so that I can improve 
the package. 
Please fill in the satisfaction survey and send it back by email. 
http://tarwi.lamolina.edu.pe/~fmendiburu/survey.htm
 
Thanks for your response.
 
Felipe de Mendiburu
http://tarwi.lamolina.edu.pe/~fmendiburu 
http://tarwi.lamolina.edu.pe/~fmendiburu/survey.htm 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Build Error on Opensolaris iconvlist

2009-01-25 Thread Prof Brian Ripley

On Sun, 25 Jan 2009, Karun Gahlawat wrote:


Uwe,
Sorry I missed it. I do have gnu iconv..
SUNWgnu-libiconv

ls -lra /usr/lib/*iconv* | more
lrwxrwxrwx   1 root root  14 Jan 23 21:23 /usr/lib/libiconv.so -> libgnuiconv.so
lrwxrwxrwx   1 root root  22 Jan 23 21:23 /usr/lib/libgnuiconv.so -> ../gnu/lib/libiconv.so

And hence the confusion..


Did you tell R to use that one?  You need the correct header files set 
as well as the library, or you will get the system iconv.  (The header 
file remaps the entry point names.)


Perhaps you need to study the R-admin manual carefully, which 
describes how to get the correct iconv.
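
An editorial sketch, not part of the original reply: on an OpenSolaris system where 
the GNU iconv from SUNWgnu-libiconv lives under /usr/gnu (as the symlinks above 
suggest), configure can be pointed at it explicitly.  The paths here are an 
assumption -- adjust them to your installation:

./configure CPPFLAGS="-I/usr/gnu/include" LDFLAGS="-L/usr/gnu/lib -R/usr/gnu/lib"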




On Sat, Jan 24, 2009 at 1:00 PM, Uwe Ligges
lig...@statistik.tu-dortmund.de wrote:



Karun Gahlawat wrote:


Hi!

Trying to build R-2.8.1. while configuring, it throws error

./configure

checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... yes
checking whether iconv accepts UTF-8, latin1 and UCS-*... no
checking for iconvlist... no
configure: error: --with-iconv=yes (default) and a suitable iconv is
not available

I am confused.. sorry new to this..
I can see the iconv binary, headers and libs all in the standard
directory. Please help or redirect!



Please read the documentation, the R Installation and Administration manuals
tells you:

You will need GNU libiconv: the Solaris version of iconv is not
sufficiently powerful. 

Uwe Ligges



SunOS  5.11 snv_101b i86pc i386 i86pc

CC: Sun Ceres C++ 5.10 SunOS_i386 2008/10/22


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] strip leading 0's

2009-01-25 Thread jim holtman
Does something like this help:

> x <- matrix(runif(25,-2,2), 5)
> x
           [,1]        [,2]       [,3]       [,4]        [,5]
[1,]  0.6188957 -1.14716746  1.9046828 -1.9476897  1.96735448
[2,] -0.5872109 -1.48251061  0.9271700  0.8622643 -0.01762569
[3,] -0.9189594 -0.08752786 -0.5730924 -1.5872631 -0.06260190
[4,]  1.9707362  1.69629788 -0.2741052 -0.2148626 -1.30623066
[5,]  0.5339731  0.39504387 -1.4071538  0.5604042  1.01928378
> x.out <- capture.output(x)[-1]
> # remove row names
> x.new <- sub("^\\S+", "", x.out)
> # replace leading '0's
> x.new <- gsub(" -0", " - ", x.new)
> x.new <- gsub(" 0", "  ", x.new)
> x.new
[1] "   .6188957 -1.14716746  1.9046828 -1.9476897  1.96735448"
[2] " - .5872109 -1.48251061   .9271700   .8622643 - .01762569"
[3] " - .9189594 - .08752786 - .5730924 -1.5872631 - .06260190"
[4] "  1.9707362  1.69629788 - .2741052 - .2148626 -1.30623066"
[5] "   .5339731   .39504387 -1.4071538   .5604042  1.01928378"
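
A related sketch that keeps the result as a character matrix with the original
dimnames (the function name prettyCor is made up for illustration):

prettyCor <- function(m, digits = 3) {
  out <- format(round(m, digits))       # character matrix, dimnames preserved
  out[] <- sub("0\\.", " .", out)       # '0.441' -> ' .441', '-0.104' -> '- .104'
  out
}
# prettyCor(cor(Data))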



On Sun, Jan 25, 2009 at 9:58 AM, Ista Zahn iz...@psych.rochester.edu wrote:
 Thanks for the suggestion replacing the leading 0 with a space instead
 of nothing to preserve the layout, and for explaining why there is no
 option for this.

 Yes, I see why this sounds like a bad idea. The reason I asked is that
 I use Sweave to write statistical reports, and I like to get the
 formatting as close as possible without editing the .tex file
 afterward. In my field (psychology) it is standard practice to omit
 the leading zeros when reporting statistics whose value cannot exceed
 1, mainly correlations and p values.

 Thanks,
 Ista
 On Sun, Jan 25, 2009 at 2:39 AM, Prof Brian Ripley
 rip...@stats.ox.ac.uk wrote:

 The only comprehensive way to do this would be to change R's internal print 
 mechanisms.  (Note that changing 0. to . breaks the layout: changing '0.' to 
 ' .' would be better.)

 But you haven't told us why you would want to do this.  Leaving off leading 
 zeroes makes output harder to read for most people, and indeed leading 
 periods are easy to miss (much easier than failing to see that you were 
 asked not to send HTML mail).

 It would be easy for the cognoscenti to add an option to R, but I suspect 
 they all would need a lot of convincing to do so.

 On Sat, 24 Jan 2009, Ista Zahn wrote:

 Dear all,
 Is there a simple way to strip the leading 0's from R output? For example,
 I want
 Data <- data.frame(x=rnorm(10), y=x*rnorm(10), z = x+y+rnorm(10))
 cor(Data)

 to give me
  x  y  z
 x  1.000 -.1038904 -.3737842
 y -.1038904  1.000  .4414706
 z -.3737842  .4414706  1.000

 Several of you were kind enough to alert me to the existence of gsub a few
 weeks ago, so I can do
 gsub("0\\.", "\\.", cor(Data))

 but I'm hoping someone has a better way (e.g, one that returns an object of
 the same class as the original, preserving dimnames, etc.)

 Thanks,
 Ista

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 --
 Brian D. Ripley,  rip...@stats.ox.ac.uk
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] .Renviron for multiple hardwares...

2009-01-25 Thread Jonathan Greenberg
Our lab has a lot of different unix boxes, with different hardware, and 
I'm assuming (perhaps wrongly) that by setting a per-user package 
installation directory, the packages will only work on one type of 
hardware.  Our systems are all set up to share the same home directory 
(and, thus, the same .Renviron file) -- so, is there a way to set, in 
the .Renviron file, per-computer or per-hardware settings?  The idea is 
to have a different package installation directory for each computer 
(e.g. ~/R/computer1/packages and ~/R/computer2/packages).


Thoughts?  Ideas?  Thanks!

--j

--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] .Renviron for multiple hardwares...

2009-01-25 Thread Henrik Bengtsson
The script .Rprofile evaluates R code on startup.  You could use that
to test for various environment variables.  Alternatively, use Unix
shell scripts to set system environment variables to be used in a
generic .Renviron.  See help(Startup) for more details.
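
A small sketch of the first suggestion (the directory layout below is made up --
adapt it to your machines) that could go in a shared ~/.Rprofile:

local({
  host <- Sys.info()[["nodename"]]              # identifies the current machine
  lib  <- file.path("~/R", host, "packages")    # e.g. ~/R/computer1/packages
  if (!file.exists(lib)) dir.create(lib, recursive = TRUE)
  .libPaths(c(lib, .libPaths()))                # put it first in the search path
})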

/Henrik

On Sun, Jan 25, 2009 at 11:22 AM, Jonathan Greenberg
greenb...@ucdavis.edu wrote:
 Our lab has a lot of different unix boxes, with different hardware, and I'm
 assuming (perhaps wrongly) that by setting a per-user package installation
 directory, the packages will only work on one type of hardware.  Our systems
 are all set up to share the same home directory (and, thus, the same
 .Renviron file) -- so, is there a way to set, in the .Renviron file,
 per-computer or per-hardware settings?  The idea is to have a different
 package installation directory for each computer (e.g.
 ~/R/computer1/packages and ~/R/computer2/packages.

 Thoughts?  Ideas?  Thanks!

 --j

 --

 Jonathan A. Greenberg, PhD
 Postdoctoral Scholar
 Center for Spatial Technologies and Remote Sensing (CSTARS)
 University of California, Davis
 One Shields Avenue
 The Barn, Room 250N
 Davis, CA 95616
 Cell: 415-794-5043
 AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] .Renviron for multiple hardwares...

2009-01-25 Thread Prof Brian Ripley

On Sun, 25 Jan 2009, Henrik Bengtsson wrote:


The script .Rprofile evaluates R code on startup.  You could use that
to test for various environment variables.  Alternatively, use Unix
shell scripts to set system environment variables to be used in a
generic .Renviron.  See help(Startup) for more details.


Well, not just 'Unix shell scripts': just set R_ENVIRON_USER appropriately 
(on any OS).




/Henrik

On Sun, Jan 25, 2009 at 11:22 AM, Jonathan Greenberg
greenb...@ucdavis.edu wrote:

Our lab has a lot of different unix boxes, with different hardware, and I'm
assuming (perhaps wrongly) that by setting a per-user package installation
directory, the packages will only work on one type of hardware.  Our systems
are all set up to share the same home directory (and, thus, the same
.Renviron file) -- so, is there a way to set, in the .Renviron file,
per-computer or per-hardware settings?  The idea is to have a different
package installation directory for each computer (e.g.
~/R/computer1/packages and ~/R/computer2/packages.


Well, we anticipated that and the default personal directory is
set by R_LIBS_USER, and that has a platform-specific default.  See 
?.libPaths.


None of this is uncommon: my dept home file system is shared by x86_64 
Linux, i386 Linux, x86_64 Solaris, Sparc Solaris, Mac OS X and 
Windows.  I just let install.packages() create a personal library for 
me on each one I use it on.
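
As a concrete illustration (an editorial sketch; the directory name is arbitrary),
a single line in a shared ~/.Renviron already gives one personal library per
platform and R version, using the conversion specifiers listed under ?Startup:

R_LIBS_USER=~/R/library/%p/%v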



Thoughts?  Ideas?  Thanks!

--j

--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] package clusterSim

2009-01-25 Thread mauede
I am interested in familiarizing myself with the functions belonging to the package 
clusterSim.

This package's on-line documentation indicates a file pathname which is clear to 
me on Linux systems, where $VAR
identifies VAR as an environment variable. But what if I am using a Windows 
system? 
I do not know how to make Windows or DOS resolve the following address:
 $R_HOME\library\clusterSim\pdf\indexG1_details.pdf 
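
A sketch of how to resolve that location from within R itself, on any OS (the file
name is just the one quoted above):

> file.path(R.home(), "library", "clusterSim", "pdf", "indexG1_details.pdf")
> system.file("pdf", "indexG1_details.pdf", package = "clusterSim")

The second form asks R where the installed package actually is, so it also works
when the package sits in a personal library rather than under R_HOME.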

I wonder whether such a package implements the Pseudo F-statistics method for 
clustering as described in the following
paper:
  IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 1 
(1979), number 3
* Vogel, M.A. and A.K.C. Wong, PFS clustering method, pp. 237-245. 

Any suggestion and/or comment is welcome.
Thank you so much.
Maura 









[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Omitting a desired line from a table [Beginner Question]

2009-01-25 Thread pfc_ivan

I am a beginner using this R software and have a quick question. 

I added a file into the R called fish.txt using this line. 

fish <- read.table("fish.txt", head=T, fill=T) 

The .txt file looks like this. Since it contains like 30 lines of data I
will copy/paste first 5 lines. 

Year  GeoArea  SmpNo  Month
1970  1        13     7
1971  1        13     10
1972  1        13     8
1973  2        13     10
1974  1        13     11

Now what I want to do is to omit all the lines in the file that aren't
happening in GeoArea 1, and that aren't happening in Month 10. So basically
the only lines that I want to keep are the lines that have GeoArea=1 and
Month=10 at the same time. So if GeoArea=2 and Month=10 I don't need it. So I
just need the lines that have both of those values correct. How do I delete
the rest of the lines that I don't need?

Thank you everyone.


-- 
View this message in context: 
http://www.nabble.com/Omitting-a-desired-line-from-a-table--Beginner-Question--tp21657416p21657416.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] comparing the previous and next entry of a vector

2009-01-25 Thread Jörg Groß

Hi,

I have a quite abstract problem, hope someone can help me here.

I have a vector like this:


x <- c(1,2,3,4,5,2,6)
x

[1] 1 2 3 4 5 2 6

now I want to get the number where the previous number is 1 and the  
next number is 3

(that is the 2 at the second place)

I tried something with tail(x, -1) ...
with that, I can check  the next number, but how can I check the  
previous number?


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Omitting a desired line from a table [Beginner Question]

2009-01-25 Thread Kingsford Jones
see

?subset

Or use indexing, which is covered in section 2.7 of An Introduction to
R (but note that a data frame has 2 dimensions).
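
For the data in the original post, for instance (a quick sketch):

fish110 <- subset(fish, GeoArea == 1 & Month == 10)
# or equivalently, with indexing -- note the comma: rows first, then columns
fish110 <- fish[fish$GeoArea == 1 & fish$Month == 10, ]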

hth,

Kingsford Jones


On Sun, Jan 25, 2009 at 3:06 PM, pfc_ivan pfc_i...@hotmail.com wrote:

 I am a beginner using this R software and have a quick question.

 I added a file into the R called fish.txt using this line.

 fish <- read.table("fish.txt", head=T, fill=T)

 The .txt file looks like this. Since it contains like 30 lines of data I
 will copy/paste first 5 lines.

 Year  GeoArea  SmpNo  Month
 1970  1        13     7
 1971  1        13     10
 1972  1        13     8
 1973  2        13     10
 1974  1        13     11

 Now what I want to do is to omit all the lines in the file that arent
 happening in GeoArea 1, and that arent happening in Month 10. So basically
 The only lines that I want to keep are the lines that have GeoArea=1 and
 Month=10 at the same time. So if GeoArea=2 and Month=10 I dont need it. So i
 just need the lines that have both of those values correct. How do I delete
 the rest of the lines that I dont need?

 Thank you everyone.


 --
 View this message in context: 
 http://www.nabble.com/Omitting-a-desired-line-from-a-table--Beginner-Question--tp21657416p21657416.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] .Renviron for multiple hardwares...

2009-01-25 Thread Jonathan Greenberg
Ah, perfect -- so would the ideal R_LIBS_USER setting (to more or less 
guarantee the libraries will work on every possible computer) be 
something along the lines of:


~/myRlibraries/%V%p%o%a

Or is this overkill?

--j

Prof Brian Ripley wrote:

On Sun, 25 Jan 2009, Henrik Bengtsson wrote:


The script .Rprofile evaluates R code on startup.  You could use that
to test for various environment variables.  Alternatively, use Unix
shell scripts to set system environment variables to be used in a
generic .Renviron.  See help(Startup) for more details.


Well, not just 'Unix shell scripts', just R_ENVIRON_USER apppriately 
(on any OS).




/Henrik

On Sun, Jan 25, 2009 at 11:22 AM, Jonathan Greenberg
greenb...@ucdavis.edu wrote:
Our lab has a lot of different unix boxes, with different hardware, 
and I'm
assuming (perhaps wrongly) that by setting a per-user package 
installation
directory, the packages will only work on one type of hardware.  Our 
systems

are all set up to share the same home directory (and, thus, the same
.Renviron file) -- so, is there a way to set, in the .Renviron file,
per-computer or per-hardware settings?  The idea is to have a different
package installation directory for each computer (e.g.
~/R/computer1/packages and ~/R/computer2/packages.


Well, we anticipated that and the default personal directory is
set by R_LIBS_USER, and that has a platform-specific default.  See 
?.libPaths.


None of this is uncommon: my dept home file system is shared by x86_64 
Linux, i386 Linus, x86_64 Solaris, Sparc Solaris, Mac OS X and 
Windows.  I just let install.packages() create a personal library for 
me on each one I use it on.



Thoughts?  Ideas?  Thanks!

--j

--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.





--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Jörg Groß

is there a way to do that without generating a data.frame?

In my real data, I have a big data.frame and I have to compare over  
different columns...



Am 25.01.2009 um 23:42 schrieb Gabor Grothendieck:


Try this:

DF <- data.frame(x, nxt = c(tail(x, -1), NA), prv = c(NA, head(x,  
-1)))

DF

 x nxt prv
1 1   2  NA
2 2   3   1
3 3   4   2
4 4   5   3
5 5   2   4
6 2   6   5
7 6  NA   2

subset(DF, nxt == 3 & prv == 1)$x

[1] 2


On Sun, Jan 25, 2009 at 5:29 PM, Jörg Groß jo...@licht-malerei.de  
wrote:

Hi,

I have a quit abstract problem, hope someone can help me here.

I have a vector like this:


x <- c(1,2,3,4,5,2,6)
x

[1] 1 2 3 4 5 2 6

now I want to get the number where the previous number is 1 and the  
next

number is 3
(that is the 2 at the second place)

I tried something with tail(x, -1) ...
with that, I can check  the next number, but how can I check the  
previous

number?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [Fwd: Re: evaluation question]

2009-01-25 Thread Wacek Kusnierczyk
dear list,

below is an edited version of my response to an r user asking me for
explaining some issues related to r's evaluation rules.  i find the
problem interesting enough to be forwarded to the list, hopefully for
comments from whoever may want to extend or correct my explanations.

(i'd like to add that much as i'm happy to receive and answer offline
mails, questions related to r are best sent directly to the list, where
the real experts are.)


 Original Message 
Subject:Re: evaluation question
Date:   Sun, 25 Jan 2009 20:32:22 +0100








xxx wrote:

snip

 Someone sent in an example a few days ago showing that prac1 ( see
 below ) doesn't work. Then someone else sent two different
 ways of fixing it.
 I'm still slightly confused.

snip




 x<-1:10;
 y<-rnorm(10) + x;

 # THIS DOES NOT WORK

 prac1 <- function( model,wghts){
   lm( model, weights = wghts)
 }

 prac1(model = y~x, wghts = rep(1, 10))

tfm:

 the variables are taken from 'environment(formula)', typically 
  the environment from which 'lm' is called. 

when lm is applied to a model, the variable names used to pass arguments
to lm (here, 'wghts') are looked up in the environment where the model
was defined.  here, you have two environments:

- the global one (say, e_g), where x, y, and prac1 are defined;
- the call-local one (say, e_p1), created when prac1 is applied.

there is a variable name 'wghts' in the latter, but none in the
former.  just before the call, environmentwise the situation is as follows:

e_g = { 'x':v1, 'y':v2, 'prac1':v3 }

where e_g contains three mappings (of those we are interested here), written 
here as name:value, none for
'wghts'.  (the v1, v2, v3 stand for the respective values, as in the
code above.)

when you apply prac1, you create a new, local environment:

e_p1 = { 'model':v4, 'wghts':v5 }

where v4 is a promise with the expression 'y~x' and evaluation
environment e_g (the caller's environment), and v5 is a promise with the
expression 'rep(1, 10)' and evaluation environment e_g.

when you call lm, things are a little bit more complicated.  after some
black magic is performed on the arguments in the lm call, weights are
extracted from the model using model.weights, and the lookup is
performed not in e_p1, but in e_g.

rm(list=ls()) # cleanup
x = 1:10
y = rnorm(10)+x

p1 = function(model, wghts)
lm(model, weights=wghts)

p1(y~x, rep(1,10))
# (somewhat cryptic) error: no variable named 'wghts' found

wghts = rep(1,10)
p1(y~x, wghts)
# now works, e_g has a binding for 'wghts'
# passing wghts as an argument to p1 makes no difference

note, due to lazy evaluation, the following won't do:

rm(wghts) # cleanup

p1(y~x, wghts<-rep(1,10))
# wghts still not found in e_g


if you happen to generalize your p1 over the additional arguments to be
passed to lm, ugly surprizes await, too:

p2 = function(model, ...) {
# some additional code
lm(model, ...) }
p2(y~x, weights=rep(1,10))
# (rather cryptic) error


if you want to fit a model with different sets of weights, the following
won't do:

rm(wghts) # cleanup
lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)), # alternative weight vectors
   function(weights) p1(y~x, weights))
# wghts not found in e_g, as before

but this, incidentally, will work:

rm(wghts) # cleanup
lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)), 
   function(wghts) p1(y~x, wghts))
# wghts found in e_g, not in e_p1

as will this:

rm(wghts) # cleanup
lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)), 
   function(wghts) p1(y~x))
# wghts found in e_g

but obviously not this:

rm(wghts) # cleanup
lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)), 
   function(weights) p1(y~x))
# wghts not found




 # SOLUTION # 1

 prac2 <- function( model,wghts){
  environment(model) <- environment()
  lm(model,weights = wghts)
 }

 prac2(model = y~x, wghts = rep(1, 10))

environment() returns the local call environment (see e_p1 above), where
'wghts' is mapped to a promise to evaluate rep(1,10) in e_g.  you set
the environment of model to e_p1, so that lm looks for wghts there --
and finds it.

this is an 'elegant' workaround, with possible disastrous
consequences if the model happens to include a variable named 'model' or
'wghts':

model = 1:10
prac2(y~model, rep(1,10))
# can't use model in a formula?

wghts = x
prac2(y~wghts, rep(1,10))
# oops, not quite the same prac2(y~x, rep(1,10))

another problem with this 'elegant' 'solution' is that if prac_ happens to have
local variables with names in conflict with names in the model formula, you're
in trouble again:

prac2 = function(model, wghts) {
environment(model) = environment()
x = NULL # for whatever reason one might need an x here
# whatever
lm(model, weights = wghts) }

prac2(y~x, rep(1,10))
# oops, NULL is not good an x in the model

these may be unlikely scenarios, but the issue is serious.  you need to
understand the details of how lm is implemented in order to understand
why your 

Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Ted Harding
On 25-Jan-09 22:29:25, Jörg Groß wrote:
 Hi,
 I have a quit abstract problem, hope someone can help me here.
 I have a vector like this:
 
 x <- c(1,2,3,4,5,2,6)
 x
 [1] 1 2 3 4 5 2 6
 
 now I want to get the number where the previous number is 1 and the  
 next number is 3
 (that is the 2 at the second place)
 
 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the  
 previous number?

  x  <- c(1,2,3,4,5,2,6)
  x0 <- x[1:(length(x)-2)]
  x1 <- x[3:(length(x))]

  x[which((x0==1)&(x1==3))+1]
# [1] 2

Or, changing the data:

  x  <- c(4,5,1,7,3,2,6,9,8)
  x0 <- x[1:(length(x)-2)]
  x1 <- x[3:(length(x))]
  x[which((x0==1)&(x1==3))+1]
# [1] 7

Ted.


E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
Fax-to-email: +44 (0)870 094 0861
Date: 25-Jan-09   Time: 22:47:15
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] .Renviron for multiple hardwares...

2009-01-25 Thread Jeffrey Horner

Jonathan Greenberg wrote:
Our lab has a lot of different unix boxes, with different hardware, and 
I'm assuming (perhaps wrongly) that by setting a per-user package 
installation directory, the packages will only work on one type of 
hardware.  Our systems are all set up to share the same home directory 
(and, thus, the same .Renviron file) -- so, is there a way to set, in 
the .Renviron file, per-computer or per-hardware settings?  The idea is 
to have a different package installation directory for each computer 
(e.g. ~/R/computer1/packages and ~/R/computer2/packages.


Thoughts?  Ideas?  Thanks!


You would certainly want to look at altering the library path on R 
startup using the RProfile.site file (see ?Startup). You R code could 
use bits of info from the R variables .Platform and .Machine, plus some 
environment variables for UNIX platform info.


As an example of altering the library path, this is what I have used in 
the past for my personal .Rprofile file:


### Add development R versions to the library path first
devlib <- paste('~/Rlib',gsub(' ','_',R.version.string),sep='/')
if (!file.exists(devlib))
dir.create(devlib)

x <- .libPaths()
.libPaths(c(devlib,x))
rm(x,devlib)

So when I start up the latest development version of R, this is what is set:

$ /home/hornerj/R-sources/trunk/bin/R --quiet
> .libPaths()
[1] 
/home/hornerj/Rlib/R_version_2.9.0_Under_development_(unstable)_(2008-10-14_r46718)

[2] /home/hornerj/R-sources/trunk/library

But with the latest ubuntu R release:

$ R --quiet
> .libPaths()
[1] /home/hornerj/Rlib/R_version_2.8.1_(2008-12-22)
[2] /usr/local/lib/R/site-library
[3] /usr/lib/R/site-library
[4] /usr/lib/R/library

Jeff
--
http://biostat.mc.vanderbilt.edu/JeffreyHorner

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] comparing the previous and next entry of a vector to a criterium

2009-01-25 Thread Jörg Groß

Hi,

I have a quite abstract problem, hope someone can help me here.

I have a vector like this:


 x <- c(1,2,3,4,5,2,6)
 x

[1] 1 2 3 4 5 2 6

now I want to get the number where the previous number is 1 and the  
next number is 3

(that is the 2 at the second place)

I tried something with tail(x, -1) ...
with that, I can check  the next number, but how can I check the  
previous number?


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Omitting a desired line from a table [Beginner Question]

2009-01-25 Thread Jörg Groß


fish[fish$GeoArea == 1 & fish$Month == 10]

Am 25.01.2009 um 23:06 schrieb pfc_ivan:



I am a beginner using this R software and have a quick question.

I added a file into the R called fish.txt using this line.

fish <- read.table("fish.txt", head=T, fill=T)

The .txt file looks like this. Since it contains like 30 lines  
of data I

will copy/paste first 5 lines.

Year  GeoArea  SmpNo  Month
1970  1        13     7
1971  1        13     10
1972  1        13     8
1973  2        13     10
1974  1        13     11

Now what I want to do is to omit all the lines in the file that arent
happening in GeoArea 1, and that arent happening in Month 10. So  
basically
The only lines that I want to keep are the lines that have GeoArea=1  
and
Month=10 at the same time. So if GeoArea=2 and Month=10 I dont need  
it. So i
just need the lines that have both of those values correct. How do I  
delete

the rest of the lines that I dont need?

Thank you everyone.


--
View this message in context: 
http://www.nabble.com/Omitting-a-desired-line-from-a-table--Beginner-Question--tp21657416p21657416.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Omitting a desired line from a table [Beginner Question]

2009-01-25 Thread Stephan Kolassa

Hi,

fish.new <- fish[fish$GeoArea==1 & fish$Month==10,]

HTH,
Stephan


pfc_ivan wrote:
I am a beginner using this R software and have a quick question. 

I added a file into the R called fish.txt using this line. 

fish <- read.table("fish.txt", head=T, fill=T) 


The .txt file looks like this. Since it contains like 30 lines of data I
will copy/paste first 5 lines. 

Year  GeoArea  SmpNo  Month
1970  1        13     7
1971  1        13     10
1972  1        13     8
1973  2        13     10
1974  1        13     11

Now what I want to do is to omit all the lines in the file that arent
happening in GeoArea 1, and that arent happening in Month 10. So basically
The only lines that I want to keep are the lines that have GeoArea=1 and
Month=10 at the same time. So if GeoArea=2 and Month=10 I dont need it. So i
just need the lines that have both of those values correct. How do I delete
the rest of the lines that I dont need?

Thank you everyone.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Gabor Grothendieck
Try this:

> DF <- data.frame(x, nxt = c(tail(x, -1), NA), prv = c(NA, head(x, -1)))
> DF
  x nxt prv
1 1   2  NA
2 2   3   1
3 3   4   2
4 4   5   3
5 5   2   4
6 2   6   5
7 6  NA   2
> subset(DF, nxt == 3 & prv == 1)$x
[1] 2


On Sun, Jan 25, 2009 at 5:29 PM, Jörg Groß jo...@licht-malerei.de wrote:
 Hi,

 I have a quit abstract problem, hope someone can help me here.

 I have a vector like this:


 x <- c(1,2,3,4,5,2,6)
 x

 [1] 1 2 3 4 5 2 6

 now I want to get the number where the previous number is 1 and the next
 number is 3
 (that is the 2 at the second place)

 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the previous
 number?

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Build Error on Opensolaris iconvlist

2009-01-25 Thread Karun Gahlawat
Prof Ripley,
Thanks. SUN gnu-iconv package should overwrite the sun version and so
it does. Apparently, it does not work. I built this library from gnu
source with gcc and now it configures and builds but fails on make
check for regressions.

This topic is touched in the manual with some 'blas' and 'lapack'
libraries. Not sure if these are related though. I am not sure either
where and what to get for this. Apologies, as this is all new to me.

Here is the extract from make check logs..

running code in 'reg-tests-1.R' ...*** Error code 1
The following command caused the error:
LC_ALL=C SRCDIR=. R_DEFAULT_PACKAGES= ../bin/R --vanilla 
reg-tests-1.R  reg-tests-1.Rout 21 || (mv reg-tests-1.Rout
reg-tests-1.Rout.fail  exit 1)
make: Fatal error: Command failed for target `reg-tests-1.Rout'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
make reg-tests-1.Rout reg-tests-2.Rout reg-IO.Rout reg-IO2.Rout
reg-plot.Rout reg-S4.Rout  RVAL_IF_DIFF=1
make: Fatal error: Command failed for target `test-Reg'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
for name in Examples Specific Reg Internet; do \
  make test-${name} || exit 1; \
done
make: Fatal error: Command failed for target `test-all-basics'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
(cd tests  make check)
make: Fatal error: Command failed for target `check'


On Sun, Jan 25, 2009 at 2:05 PM, Prof Brian Ripley
rip...@stats.ox.ac.uk wrote:
 On Sun, 25 Jan 2009, Karun Gahlawat wrote:

 Uwe,
 Sorry I missed it. I do have gnu iconv..
 SUNWgnu-libiconv

 ls -lra /usr/lib/*iconv* | more
 lrwxrwxrwx   1 root root  14 Jan 23 21:23 /usr/lib/libiconv.so
 - li
 bgnuiconv.so
 lrwxrwxrwx   1 root root  22 Jan 23 21:23
 /usr/lib/libgnuiconv.so -
 ../gnu/lib/libiconv.so

 And hence the confusion..

 Did you tell R to use that one?  You need the correct header files set as
 well as the library, or you will get the system iconv.  (The header file
 remaps the entry point names.)

 Perhaps you need to study the R-admin manual carefully, which describes how
 to get the correct iconv.


 On Sat, Jan 24, 2009 at 1:00 PM, Uwe Ligges
 lig...@statistik.tu-dortmund.de wrote:


 Karun Gahlawat wrote:

 Hi!

 Trying to build R-2.8.1. while configuring, it throws error

 ./configure

 checking iconv.h usability... yes
 checking iconv.h presence... yes
 checking for iconv.h... yes
 checking for iconv... yes
 checking whether iconv accepts UTF-8, latin1 and UCS-*... no
 checking for iconvlist... no
 configure: error: --with-iconv=yes (default) and a suitable iconv is
 not available

 I am confused.. sorry new to this..
 I can see the iconv binary, headers and libs all in the standard
 directory. Please help or redirect!


 Please read the documentation, the R Installation and Administration
 manuals
 tells you:

 You will need GNU libiconv: the Solaris version of iconv is not
 sufficiently powerful. 

 Uwe Ligges


 SunOS  5.11 snv_101b i86pc i386 i86pc

 CC: Sun Ceres C++ 5.10 SunOS_i386 2008/10/22

 --
 Brian D. Ripley,  rip...@stats.ox.ac.uk
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Marc Schwartz
on 01/25/2009 04:29 PM Jörg Groß wrote:
 Hi,
 
 I have a quit abstract problem, hope someone can help me here.
 
 I have a vector like this:
 
 
  x <- c(1,2,3,4,5,2,6)
 x
 
 [1] 1 2 3 4 5 2 6
 
 now I want to get the number where the previous number is 1 and the next
 number is 3
 (that is the 2 at the second place)
 
 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the
 previous number?


How about this:

InBetween <- function(x, val1, val2)
{
  unlist(sapply(2:(length(x) - 1),
 function(i) if ((x[i - 1] == val1) & (x[i + 1] == val2)) x[i]))
}


> InBetween(x, 1, 3)
[1] 2


> InBetween(x, 4, 2)
[1] 5

It will return NULL if not found.

You might want to reinforce it with some error checking as well.

HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Gabor Grothendieck
The data frame is not essential.  I was just trying to keep things tidy.
Try this:

nxt <- c(tail(x, -1), NA)
prv <- c(NA, head(x, -1))
x[nxt == 3 & prv == 1]
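
If the positions rather than the values are wanted, the same logical vector can
be fed to which() (a small sketch):

which(prv == 1 & nxt == 3)   # index of the middle element, here 2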


On Sun, Jan 25, 2009 at 5:53 PM, Jörg Groß jo...@licht-malerei.de wrote:
 is there a way to do that without generating a data.frame?

 In my real data, I have a big data.frame and I have to compare over
 different columns...


 Am 25.01.2009 um 23:42 schrieb Gabor Grothendieck:

 Try this:

 DF <- data.frame(x, nxt = c(tail(x, -1), NA), prv = c(NA, head(x, -1)))
 DF

  x nxt prv
 1 1   2  NA
 2 2   3   1
 3 3   4   2
 4 4   5   3
 5 5   2   4
 6 2   6   5
 7 6  NA   2

 subset(DF, nxt == 3 & prv == 1)$x

 [1] 2


 On Sun, Jan 25, 2009 at 5:29 PM, Jörg Groß jo...@licht-malerei.de wrote:

 Hi,

 I have a quit abstract problem, hope someone can help me here.

 I have a vector like this:


 x <- c(1,2,3,4,5,2,6)
 x

 [1] 1 2 3 4 5 2 6

 now I want to get the number where the previous number is 1 and the next
 number is 3
 (that is the 2 at the second place)

 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the previous
 number?

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector to a criterium

2009-01-25 Thread Kingsford Jones
How about:

a <- c(1,2,3,3,2,1,6,3,2)
b <- c(NA,a[-length(a)])
c <- c(a[-1],NA)
a[b==1 & c==3]
[1] 2 6


hth,

Kingsford Jones


On Sun, Jan 25, 2009 at 3:02 PM, Jörg Groß jo...@licht-malerei.de wrote:
 Hi,

 I have a quit abstract problem, hope someone can help me here.

 I have a vector like this:


  x <- c(1,2,3,4,5,2,6)
  x

 [1] 1 2 3 4 5 2 6

 now I want to get the number where the previous number is 1 and the next
 number is 3
 (that is the 2 at the second place)

 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the previous
 number?

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] comparing the previous and next entry of a vector

2009-01-25 Thread Jorge Ivan Velez
Hi Jörg,
If I understood,

apply(yourdataframe,2,function(x) x[diff(which(x==1 | x==3))])

should do what you want.

HTH,

Jorge


On Sun, Jan 25, 2009 at 5:53 PM, Jörg Groß jo...@licht-malerei.de wrote:

 is there a way to do that without generating a data.frame?

 In my real data, I have a big data.frame and I have to compare over
 different columns...


 Am 25.01.2009 um 23:42 schrieb Gabor Grothendieck:


  Try this:

  DF <- data.frame(x, nxt = c(tail(x, -1), NA), prv = c(NA, head(x, -1)))
 DF

  x nxt prv
 1 1   2  NA
 2 2   3   1
 3 3   4   2
 4 4   5   3
 5 5   2   4
 6 2   6   5
 7 6  NA   2

 subset(DF, nxt == 3 & prv == 1)$x

 [1] 2


 On Sun, Jan 25, 2009 at 5:29 PM, Jörg Groß jo...@licht-malerei.de
 wrote:

 Hi,

 I have a quit abstract problem, hope someone can help me here.

 I have a vector like this:


 x <- c(1,2,3,4,5,2,6)
 x

 [1] 1 2 3 4 5 2 6

 now I want to get the number where the previous number is 1 and the next
 number is 3
 (that is the 2 at the second place)

 I tried something with tail(x, -1) ...
 with that, I can check  the next number, but how can I check the previous
 number?

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Gibbs sampler...did it work?

2009-01-25 Thread ekwaters

I am writing a Gibbs sampler. I think it is outputting some of what I want,
in that I am getting a vector of several thousand values (but not 10,000) in a
txt file at the end.

My question is, is the error message (see below) telling me that it can't
output 10,000 values (draws) because of a limitation in my memory, file
size, shape etc, or that there is an error in the sampler itself?

> s2eg2=1/rgamma(mg2,(12/2),.5*t(residuals(lm(yg[,1]~xg-1))%*%residuals(lm(yg[,1]~xg-1))))
> for(i in 1:mg2){
+ s2yg[i,]=parsy+t(rnorm(1,mean=0,sd=s2ygscale[i])%*%chol(s2eg2[i]*xgtxgi))
+ write(c(s2yg[i,],s2eg2[i]),
+ file="/media/DataTravelerMini/KINGSTON/Honours/R/IPR/s2yg2.txt", append=T,
ncolumns=1)
+ if(i%%50==0){print(c(s2yg[i,],s2eg2[i]))}}

I GET A BUNCH OF NUMBERS PRINTED HERE, THE OUTPUTTED VALUES WHICH ALSO
APPEAR IN A TEXT FILE.  I HIT ABOUT 2000 VALUES, THEN I GET THIS MESSAGE:

Error in s2yg[i, ] = parsy + t(rnorm(1, mean = 0, sd = s2ygscale[i]) %*%  :
subscript out of bounds


Ned


-- 
View this message in context: 
http://www.nabble.com/Gibbs-sampler...did-it-work--tp21658246p21658246.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [Fwd: Re: evaluation question]

2009-01-25 Thread Gabor Grothendieck
It looks in data and if not found there in environment(formula)
so try this:

mylm <- function(model, wghts) {
lm(model, data.frame(wghts), weights = wghts)
}
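
A quick usage sketch with the data from the original example:

x <- 1:10
y <- rnorm(10) + x
mylm(y ~ x, rep(1, 10))   # 'wghts' is now found in the data argument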

On Sun, Jan 25, 2009 at 4:20 PM, Wacek Kusnierczyk
waclaw.marcin.kusnierc...@idi.ntnu.no wrote:
 dear list,

 below is an edited version of my response to an r user asking me for
 explaining some issues related to r's evaluation rules.  i find the
 problem interesting enough to be forwarded to the list, hopefully for
 comments from whoever may want to extend or correct my explanations.

 (i'd like to add that much as i'm happy to receive and answer offline
 mails, questions related to r are best sent directly to the list, where
 the real experts are.)


  Original Message 
 Subject:Re: evaluation question
 Date:   Sun, 25 Jan 2009 20:32:22 +0100








 xxx wrote:

 snip

 Someone sent in an example a few days ago showing that prac1 ( see
 below ) doesn't work. Then someone else sent two different
 ways of fixing it.
 I'm still slightly confused.

 snip




 x<-1:10;
 y<-rnorm(10) + x;

 # THIS DOES NOT WORK

 prac1 <- function( model,wghts){
   lm( model, weights = wghts)
 }

 prac1(model = y~x, wghts = rep(1, 10))

 tfm:

  the variables are taken from 'environment(formula)', typically
  the environment from which 'lm' is called. 

 when lm is applied to a model, the variable names used to pass arguments
 to lm (here, 'wghts') are looked up in the environment where the model
 was defined.  here, you have two environments:

 - the global one (say, e_g), where x, y, and prac1 are defined;
 - the call-local one (say, e_p1), created when prac1 is applied.

 there is a variable name 'wghts' in the latter, but none in the
 former.  just before the call, environmentwise the situation is as follows:

 e_g = { 'x':v1, 'y':v2, 'prac1':v3 }

 where e_g contains three mappings (of those we are interested here), written 
 here as name:value, none for
 'wghts'.  (the v1, v2, v3 stand for the respective values, as in the
 code above.)

 when you apply prac1, you create a new, local environment:

 e_p1 = { 'model':v4, 'wghts':v5 }

 where v4 is a promise with the expression 'y~x' and evaluation
 environment e_g (the caller's environment), and v5 is a promise with the
 expression 'rep(1, 10)' and evaluation environment e_g.

 when you call lm, things are a little bit more complicated.  after some
 black magic is performed on the arguments in the lm call, weights are
 extracted from the model using model.weights, and the lookup is
 performed not in e_p1, but in e_g.

 rm(list=ls()) # cleanup
 x = 1:10
 y = rnorm(10)+x

 p1 = function(model, wghts)
lm(model, weights=wghts)

 p1(y~x, rep(1,10))
 # (somewhat cryptic) error: no variable named 'wghts' found

 wghts = rep(1,10)
 p1(y~x, wghts)
 # now works, e_g has a binding for 'wghts'
 # passing wghts as an argument to p1 makes no difference

 note, due to lazy evaluation, the following won't do:

 rm(wghts) # cleanup

 p1(y~x, wghts<-rep(1,10))
 # wghts still not found in e_g


 if you happen to generalize your p1 over the additional arguments to be
 passed to lm, ugly surprizes await, too:

 p2 = function(model, ...) {
# some additional code
lm(model, ...) }
 p2(y~x, weights=rep(1,10))
 # (rather cryptic) error


 if you want to fit a model with different sets of weights, the following
 won't do:

 rm(wghts) # cleanup
 lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)), # alternative weight vectors
   function(weights) p1(y~x, weights))
 # wghts not found in e_g, as before

 but this, incidentally, will work:

 rm(wghts) # cleanup
 lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)),
   function(wghts) p1(y~x, wghts))
 # wghts found in e_g, not in e_p1

 as will this:

 rm(wghts) # cleanup
 lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)),
   function(wghts) p1(y~x))
 # wghts found in e_g

 but obviously not this:

 rm(wghts) # cleanup
 lapply(
   list(rep(1,10), rep(c(0.5, 1.5), 5)),
   function(weights) p1(y~x))
 # wghts not found




 # SOLUTION # 1

 prac2 <- function( model,wghts){
  environment(model) <- environment()
  lm(model,weights = wghts)
 }

 prac2(model = y~x, wghts = rep(1, 10))

 environment() returns the local call environment (see e_p1 above), where
 'wghts' is mapped to a promise to evaluate rep(1,10) in e_g.  you set
 the environment of model to e_p1, so that lm looks for wghts there --
 and finds it.

 this is an 'elegant' workaround, with possibly disastrous
 consequences if the model happens to include a variable named 'model' or
 'wghts':

 model = 1:10
 prac2(y~model, rep(1,10))
 # can't use model in a formula?

 wghts = x
 prac2(y~wghts, rep(1,10))
 # oops, not quite the same as prac2(y~x, rep(1,10))

 another problem with this 'elegant' 'solution' is that if prac_ happens to 
 have
 local variables with names in conflict with names in the model formula, you're
 in trouble again:

 prac2 = function(model, wghts) {
environment(model) = environment()
x = 
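
 A further sketch along the same lines ('prac3' is only an illustrative name,
 not from the original exchange): rebuild the call and evaluate it in the
 caller's frame, much as lm() itself treats its arguments, so neither the
 formula's environment nor a local 'wghts' is needed:

 prac3 <- function(model, wghts) {
   mc <- match.call()                          # prac3(model = y ~ x, wghts = rep(1, 10))
   lm_call <- call("lm", formula = mc[["model"]], weights = mc[["wghts"]])
   eval(lm_call, parent.frame())               # both pieces are evaluated where prac3 was called
 }

 prac3(model = y ~ x, wghts = rep(1, 10))      # works
 wghts <- x; prac3(y ~ wghts, rep(1, 10))      # the formula's wghts now means the caller's wghts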

Re: [R] Gibbs sampler...did it work?

2009-01-25 Thread jim holtman
It is not that you are out of memory; one of your two 's2yg' objects
is not large enough (improperly dimensioned?) so you get the subscript
error.  You can put the following in your script to catch the error
and then examine the values:

options(error=utils::recover)
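
Once the error is trapped you can browse the offending frame and check the
dimensions directly (a sketch; the object names below are the ones from the
post, and the intended shape of 's2yg' is an assumption):

options(error = utils::recover)    # on the next error, pick a frame to browse

# things worth checking from inside the browser:
#   dim(s2yg)    # does it really have mg2 rows?  'subscript out of bounds'
#   i            # suggests i has run past nrow(s2yg)

# if s2yg was created too small, pre-allocating it first avoids the error, e.g.
# s2yg <- matrix(0, nrow = mg2, ncol = length(parsy))   # assumed intended shape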


On Sun, Jan 25, 2009 at 6:25 PM, ekwaters ekwat...@unimelb.edu.au wrote:

 I am writing a Gibbs sampler. I think it is outputting some of what I want,
 in that I am getting a vector of several thousand values (but not 10,000) in a
 txt file at the end.

 My question is, is the error message (see below) telling me that it can't
 output 10,000 values (draws) because of a limitation in my memory, file
 size, shape etc, or that there is an error in the sampler itself?

 s2eg2=1/rgamma(mg2,(12/2),.5*t(residuals(lm(yg[,1]~xg-1))%*%residuals(lm(yg[,1]~xg-1))))
 for(i in 1:mg2){
 + s2yg[i,]=parsy+t(rnorm(1,mean=0,sd=s2ygscale[i])%*%chol(s2eg2[i]*xgtxgi))
 + write(c(s2yg[i,],s2eg2[i]),
 + file="/media/DataTravelerMini/KINGSTON/Honours/R/IPR/s2yg2.txt", append=T,
 ncolumns=1)
 + if(i%%50==0){print(c(s2yg[i,],s2eg2[i]))}}

 I GET A BUNCH OF NUMBERS PRINTED HERE, THE OUTPUTTED VALUES WHICH ALSO
 APPEAR IN A TEXT FILE.  I HIT ABOUT 2000 VALUES, THEN I GET THIS MESSAGE:

 Error in s2yg[i, ] = parsy + t(rnorm(1, mean = 0, sd = s2ygscale[i]) %*%  :
subscript out of bounds


 Ned


 --
 View this message in context: 
 http://www.nabble.com/Gibbs-sampler...did-it-work--tp21658246p21658246.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Question about rgenoud

2009-01-25 Thread Dimitri Liakhovitski
Hello!

I am VERY new to genetic optimization and have a question about
rgenoud package (http://sekhon.berkeley.edu/rgenoud/):

Is it possible to modify the settings of rgenoud so that one could see
not only the winning solution but also the runners-up? By runners-up
I mean at least several other (previous?) maxima.

Thank you very much for any hint!
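
One way to get at runners-up without relying on any rgenoud-specific option is
to wrap the objective function so that every candidate it evaluates is recorded,
and then rank that log afterwards. A minimal sketch (the toy objective, the
two-variable setup, and all the names here are only illustrative):

library(rgenoud)

log_env <- new.env()
log_env$pars <- list(); log_env$vals <- numeric(0)

f <- function(x) -sum((x - c(1, 2))^2)        # toy objective, maximum at c(1, 2)

f_logged <- function(x) {
  val <- f(x)
  log_env$pars[[length(log_env$pars) + 1L]] <- x
  log_env$vals <- c(log_env$vals, val)
  val
}

res <- genoud(f_logged, nvars = 2, max = TRUE, print.level = 0)

ord <- order(log_env$vals, decreasing = TRUE)[1:5]   # five best evaluations ever seen
cbind(value = log_env$vals[ord], do.call(rbind, log_env$pars[ord]))
# note: nearby near-duplicates are likely; thin them if distinct maxima are wanted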

-- 
Dimitri Liakhovitski
MarketTools, Inc.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Omitting a desired line from a table [Beginner Question]

2009-01-25 Thread Jörg Groß

sorry,
there is a comma missing;


fish[fish$GeoArea == 1 & fish$Month == 10, ]
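
(For what it's worth, the same selection can also be written with subset(),
which saves retyping the data frame name:

fish.sub <- subset(fish, GeoArea == 1 & Month == 10)
)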



On 25.01.2009 at 23:33, Jörg Groß wrote:



fish[fish$GeoArea == 1 & fish$Month == 10]

On 25.01.2009 at 23:06, pfc_ivan wrote:



I am a beginner using this R software and have a quick question.

I added a file into the R called fish.txt using this line.

fish <- read.table("fish.txt", head=T, fill=T)

The .txt file looks like this. Since it contains about 30 lines of data,
I will copy/paste the first 5 lines.

Year  GeoArea  SmpNo  Month
1970        1     13      7
1971        1     13     10
1972        1     13      8
1973        2     13     10
1974        1     13     11

Now what I want to do is to omit all the lines in the file that aren't
in GeoArea 1 and aren't in Month 10. So basically the only lines that I
want to keep are the lines that have GeoArea=1 and Month=10 at the same
time. So if GeoArea=2 and Month=10 I don't need it. So I just need the
lines that have both of those values correct. How do I delete the rest
of the lines that I don't need?

Thank you everyone.


--
View this message in context: 
http://www.nabble.com/Omitting-a-desired-line-from-a-table--Beginner-Question--tp21657416p21657416.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plotting graph for Missing values

2009-01-25 Thread Shreyasee
Hi,

I have imported one dataset in R.
I want to calculate the percentage of missing values for each month (May
2006 to March 2007) for each variable.
Just to begin with I tried the following code :

for(i in 1:length(dos))
for(j in 1:length(patientinformation1))
if(dos[i]=="May-06" && patientinformation1[j]=="")
a <- j+1
a

The above code was written to calculate the number of missing values for May
2006, but I am not getting the correct results.
Can anybody help me?

Thanks,
Shreyasee

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread jim holtman
What does your data look like?  You could use 'split' and then examine
the data in each range to count the number missing.  Would have to
have some actual data to suggest a solution.

On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee shreyasee.prad...@gmail.com wrote:
 Hi,

 I have imported one dataset in R.
 I want to calculate the percentage of missing values for each month (May
 2006 to March 2007) for each variable.
 Just to begin with I tried the following code :

 *for(i in 1:length(dos))
 for(j in 1:length(patientinformation1)
 if(dos[i]==May-06  patientinformation1[j]==)
 a - j+1
 a*

 The above code was written to calculate the number of missing values for May
 2006, but I am not getting the correct results.
 Can anybody help me?

 Thanks,
 Shreyasee

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread Shreyasee
Hi Jim,

The dataset has 4 variables (dos, patientinformation1, patientinformation2,
patientinformation3).
In the dos variable there are months (May 2006 to March 2007) when the surgeries
were performed.
I need to calculate the percentage of missing values for each variable
(patientinformation1, patientinformation2, patientinformation3) for each
month.
I need a common script to calculate that for each variable.

Thanks,
Shreyasee


On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com wrote:

 What does you data look like?  You could use 'split' and then examine
 the data in each range to count the number missing.  Would have to
 have some actual data to suggest a solution.

 On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi,
 
  I have imported one dataset in R.
  I want to calculate the percentage of missing values for each month (May
  2006 to March 2007) for each variable.
  Just to begin with I tried the following code :
 
  *for(i in 1:length(dos))
  for(j in 1:length(patientinformation1)
  if(dos[i]==May-06  patientinformation1[j]==)
  a - j+1
  a*
 
  The above code was written to calculate the number of missing values for
 May
  2006, but I am not getting the correct results.
  Can anybody help me?
 
  Thanks,
  Shreyasee
 
 [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] generic questions about probability and simulation -- not directly related to R

2009-01-25 Thread Jong-Hoon Kim
Dear helpers,

As the title says, my question is not directly related to R.
I find, however, that there are many people who are both knowledgeable and
kind in this email list, and so decided to give it a try.

I do stochastic simulations.  Parameter values used in simulation often come
from the observations of the real word phenomena.
Parameter values are often given as rates (of change), time, or
probabilities.
I am confused about how I go about converting parameters given with
different units.

For example, I have a discrete time Markov model that describes the
following process:

A -> B -> C

Let's suppose that I am given average time that individuals stay at A, dA,
as 3 days.  We assume that dA is exponentially distributed.
Similarly, dB follows an exponential distribution with average 1000 days.


I decide to simulate the model with a time step corresponding to one day.

Would any of the following be correct?
a. A probability an individual makes transitions from A to B is 1/3.
Likewise, transition from B to C occurs with probability 1/1000.
b. If I reduce the size of time step as 0.1 day, then the transition
probability from A to B is 0.1*(1/3).  Likewise,  transition probability
from B to C is 0.1*(1/1000)
c. The size of the time step must not be larger than 3 days, which would make
the transition probability 1.
d. If parameter values are given as rates of change, then I can directly
translate them into probabilities per unit time.  There is no difference
between a rate and a probability per unit time.

How do we know about the reasonable size of time steps?

Any help would be greatly appreciated.  Also, could anybody suggest pointers
or books that can be useful in this regard?
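
For reference, one standard relationship, sketched numerically here in R (the
numbers simply reuse dA = 3 and dB = 1000 from above; nothing else is assumed):
for an exponential waiting time with mean d, the probability of leaving during
a step of length dt is 1 - exp(-dt/d), which is close to the rate-times-step
value dt/d only when dt is much smaller than d. So (a) and (b) are small-step
approximations rather than exact, a rate is not literally a probability, and
the step should be small relative to the shortest mean waiting time (here 3 days):

dA <- 3; dB <- 1000                      # mean waiting times in days
for (dt in c(1, 0.1)) {
  p_exact  <- 1 - exp(-dt / c(dA, dB))   # exact per-step transition probabilities
  p_approx <- dt / c(dA, dB)             # rate * dt approximation
  print(rbind(dt = dt, exact = p_exact, approx = p_approx))
}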

Sincerely,

-- JH

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sem package: start values

2009-01-25 Thread Anthony Dick

Hello-

If I input a variance-covariance matrix and specify NA for start values, 
how does sem determine the start value? Is there a default?


Anthony

--
Anthony Steven Dick, Ph.D.
Post-Doctoral Fellow
Human Neuroscience Laboratory
Department of Neurology
The University of Chicago
5841 S. Maryland Ave. MC-2030
Chicago, IL 60637
Phone: (773)-834-7770
Email: ad...@uchicago.edu
Web: http://home.uchicago.edu/~adick/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread jim holtman
Here is an example of how you might approach it:

 dos <- seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1 day')
 pat1 <- rbinom(length(dos), 1, .5)  # generate some data
 # partition by month and then list out the number of zero values (missing)
 tapply(pat1, format(dos, "%Y%m"), function(x) sum(x==0))
200605 200606 200607 200608 200609 200610 200611 200612 200701 200702 200703
21 22 16 18 16 15 16 17 14 16 13



On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee shreyasee.prad...@gmail.com wrote:
 Hi Jim,

 The dataset has 4 variables (dos, patientinformation1, patientinformation2,
 patientinformation3).
 In dos variable ther are months (May 2006 to March 2007) when the surgeries
 were formed.
 I need to calculate the percentage of missing values for each variable
 (patientinformation1, patientinformation2, patientinformation3) for each
 month.
 I need a common script to calculate that for each variable.

 Thanks,
 Shreyasee


 On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com wrote:

 What does you data look like?  You could use 'split' and then examine
 the data in each range to count the number missing.  Would have to
 have some actual data to suggest a solution.

 On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi,
 
  I have imported one dataset in R.
  I want to calculate the percentage of missing values for each month (May
  2006 to March 2007) for each variable.
  Just to begin with I tried the following code :
 
  *for(i in 1:length(dos))
  for(j in 1:length(patientinformation1)
  if(dos[i]==May-06  patientinformation1[j]==)
  a - j+1
  a*
 
  The above code was written to calculate the number of missing values for
  May
  2006, but I am not getting the correct results.
  Can anybody help me?
 
  Thanks,
  Shreyasee
 
 [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem package: start values

2009-01-25 Thread John Fox
Dear Anthony,

From ?sem:

If given as NA, the program will compute a start value, by a slight
modification of the method described by McDonald and Hartmann (1992). Note:
In some circumstances, some start values are selected randomly; this might
produce small differences in the parameter estimates when the program is
rerun.

To see exactly what's done, print out the startvalues function.

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
 Behalf Of Anthony Dick
 Sent: January-25-09 9:15 PM
 To: r-help@r-project.org
 Subject: [R] sem package: start values
 
 Hello-
 
 If I input a variance-covariance matrix and specify NA for start values,
 how does sem determine the start value? Is there a default?
 
 Anthony
 
 --
 Anthony Steven Dick, Ph.D.
 Post-Doctoral Fellow
 Human Neuroscience Laboratory
 Department of Neurology
 The University of Chicago
 5841 S. Maryland Ave. MC-2030
 Chicago, IL 60637
 Phone: (773)-834-7770
 Email: ad...@uchicago.edu
 Web: http://home.uchicago.edu/~adick/
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread Shreyasee
Hi Jim,

I need to calculate the missing values in variable patientinformation1 for
the period of May 2006 to March 2007 and then plot the graph of the
percentage of the missing values over these months.
This has to be done for each variable.
The code which you have provided, calculates the missing values for the
months variable, am I right?
I need to calculate for all the variables for each month.

Thanks,
Shreyasee


On Mon, Jan 26, 2009 at 10:29 AM, jim holtman jholt...@gmail.com wrote:

 Here is an example of how you might approach it:

  dos - seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1 day')
  pat1 - rbinom(length(dos), 1, .5)  # generate some data
  # partition by month and then list out the number of zero values
 (missing)
  tapply(pat1, format(dos, %Y%m), function(x) sum(x==0))
 200605 200606 200607 200608 200609 200610 200611 200612 200701 200702
 200703
21 22 16 18 16 15 16 17 14 16 13
 


 On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi Jim,
 
  The dataset has 4 variables (dos, patientinformation1,
 patientinformation2,
  patientinformation3).
  In dos variable ther are months (May 2006 to March 2007) when the
 surgeries
  were formed.
  I need to calculate the percentage of missing values for each variable
  (patientinformation1, patientinformation2, patientinformation3) for each
  month.
  I need a common script to calculate that for each variable.
 
  Thanks,
  Shreyasee
 
 
  On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com wrote:
 
  What does you data look like?  You could use 'split' and then examine
  the data in each range to count the number missing.  Would have to
  have some actual data to suggest a solution.
 
  On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee shreyasee.prad...@gmail.com
 
  wrote:
   Hi,
  
   I have imported one dataset in R.
   I want to calculate the percentage of missing values for each month
 (May
   2006 to March 2007) for each variable.
   Just to begin with I tried the following code :
  
   *for(i in 1:length(dos))
   for(j in 1:length(patientinformation1)
   if(dos[i]==May-06  patientinformation1[j]==)
   a - j+1
   a*
  
   The above code was written to calculate the number of missing values
 for
   May
   2006, but I am not getting the correct results.
   Can anybody help me?
  
   Thanks,
   Shreyasee
  
  [[alternative HTML version deleted]]
  
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide
   http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
  
 
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem that you are trying to solve?
 
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem package: start values

2009-01-25 Thread Anthony Dick

Thanks John--just needed to rtfm a little farther down :)

Anthony

John Fox wrote:

Dear Anthony,

From ?sem:

If given as NA, the program will compute a start value, by a slight
modification of the method described by McDonald and Hartmann (1992). Note:
In some circumstances, some start values are selected randomly; this might
produce small differences in the parameter estimates when the program is
rerun.

To see exactly what's done, print out the startvalues function.

Regards,
 John

--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


  

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]


On
  

Behalf Of Anthony Dick
Sent: January-25-09 9:15 PM
To: r-help@r-project.org
Subject: [R] sem package: start values

Hello-

If I input a variance-covariance matrix and specify NA for start values,
how does sem determine the start value? Is there a default?

Anthony

--
Anthony Steven Dick, Ph.D.
Post-Doctoral Fellow
Human Neuroscience Laboratory
Department of Neurology
The University of Chicago
5841 S. Maryland Ave. MC-2030
Chicago, IL 60637
Phone: (773)-834-7770
Email: ad...@uchicago.edu
Web: http://home.uchicago.edu/~adick/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide


http://www.R-project.org/posting-guide.html
  

and provide commented, minimal, self-contained, reproducible code.




  



--
Anthony Steven Dick, Ph.D.
Post-Doctoral Fellow
Human Neuroscience Laboratory
Department of Neurology
The University of Chicago
5841 S. Maryland Ave. MC-2030
Chicago, IL 60637
Phone: (773)-834-7770
Email: ad...@uchicago.edu
Web: http://home.uchicago.edu/~adick/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread jim holtman
You can save the output of the tapply and then replicate it for each
of the variables.  The data can be used to plot the graphs.
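
A sketch of that idea, assuming the data frame is called ds, that ds$dos can be
converted to Date, and that a missing entry is either NA or an empty string
(the variable names are the ones mentioned earlier in the thread):

ds$dos <- as.Date(ds$dos)                          # assumes dos is in a Date-readable form
month  <- format(ds$dos, "%Y%m")

vars <- c("patientinformation1", "patientinformation2", "patientinformation3")
pct_missing <- sapply(ds[vars], function(col) {
  miss <- is.na(col) | col == ""
  tapply(miss, month, mean) * 100                  # percent missing per month
})

matplot(pct_missing, type = "b", pch = 1:3, col = 1:3, lty = 1:3,
        xaxt = "n", xlab = "month", ylab = "% missing")
axis(1, at = seq_len(nrow(pct_missing)), labels = rownames(pct_missing))
legend("topright", legend = vars, col = 1:3, lty = 1:3, pch = 1:3)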

On Sun, Jan 25, 2009 at 9:38 PM, Shreyasee shreyasee.prad...@gmail.com wrote:
 Hi Jim,

 I need to calculate the missing values in variable patientinformation1 for
 the period of May 2006 to March 2007 and then plot the graph of the
 percentage of the missing values over these months.
 This has to be done for each variable.
 The code which you have provided, calculates the missing values for the
 months variable, am I right?
 I need to calculate for all the variables for each month.

 Thanks,
 Shreyasee


 On Mon, Jan 26, 2009 at 10:29 AM, jim holtman jholt...@gmail.com wrote:

 Here is an example of how you might approach it:

  dos - seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1 day')
  pat1 - rbinom(length(dos), 1, .5)  # generate some data
  # partition by month and then list out the number of zero values
  (missing)
  tapply(pat1, format(dos, %Y%m), function(x) sum(x==0))
 200605 200606 200607 200608 200609 200610 200611 200612 200701 200702
 200703
21 22 16 18 16 15 16 17 14 16
 13
 


 On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi Jim,
 
  The dataset has 4 variables (dos, patientinformation1,
  patientinformation2,
  patientinformation3).
  In dos variable ther are months (May 2006 to March 2007) when the
  surgeries
  were formed.
  I need to calculate the percentage of missing values for each variable
  (patientinformation1, patientinformation2, patientinformation3) for each
  month.
  I need a common script to calculate that for each variable.
 
  Thanks,
  Shreyasee
 
 
  On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com wrote:
 
  What does you data look like?  You could use 'split' and then examine
  the data in each range to count the number missing.  Would have to
  have some actual data to suggest a solution.
 
  On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee
  shreyasee.prad...@gmail.com
  wrote:
   Hi,
  
   I have imported one dataset in R.
   I want to calculate the percentage of missing values for each month
   (May
   2006 to March 2007) for each variable.
   Just to begin with I tried the following code :
  
   *for(i in 1:length(dos))
   for(j in 1:length(patientinformation1)
   if(dos[i]==May-06  patientinformation1[j]==)
   a - j+1
   a*
  
   The above code was written to calculate the number of missing values
   for
   May
   2006, but I am not getting the correct results.
   Can anybody help me?
  
   Thanks,
   Shreyasee
  
  [[alternative HTML version deleted]]
  
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide
   http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
  
 
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem that you are trying to solve?
 
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread Shreyasee
Hi Jim,

 I tried the code which you provided.
 In place of dos in the command pat1 <- rbinom(length(dos), 1, .5)  # generate
 some data
 I added the patientinformation1 variable and then I gave the command for
 tapply, but it's giving me the following error:

Error in tapply(pat1, format(dos, "%Y%m"), function(x) sum(x == 0)) :
  arguments must have same length


Thanks,
Shreyasee



On Mon, Jan 26, 2009 at 10:50 AM, jim holtman jholt...@gmail.com wrote:

 YOu can save the output of the tapply and then replicate it for each
 of the variables.  The data can be used to plot the graphs.

 On Sun, Jan 25, 2009 at 9:38 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi Jim,
 
  I need to calculate the missing values in variable patientinformation1
 for
  the period of May 2006 to March 2007 and then plot the graph of the
  percentage of the missing values over these months.
  This has to be done for each variable.
  The code which you have provided, calculates the missing values for the
  months variable, am I right?
  I need to calculate for all the variables for each month.
 
  Thanks,
  Shreyasee
 
 
  On Mon, Jan 26, 2009 at 10:29 AM, jim holtman jholt...@gmail.com
 wrote:
 
  Here is an example of how you might approach it:
 
   dos - seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1 day')
   pat1 - rbinom(length(dos), 1, .5)  # generate some data
   # partition by month and then list out the number of zero values
   (missing)
   tapply(pat1, format(dos, %Y%m), function(x) sum(x==0))
  200605 200606 200607 200608 200609 200610 200611 200612 200701 200702
  200703
 21 22 16 18 16 15 16 17 14 16
  13
  
 
 
  On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee shreyasee.prad...@gmail.com
 
  wrote:
   Hi Jim,
  
   The dataset has 4 variables (dos, patientinformation1,
   patientinformation2,
   patientinformation3).
   In dos variable ther are months (May 2006 to March 2007) when the
   surgeries
   were formed.
   I need to calculate the percentage of missing values for each variable
   (patientinformation1, patientinformation2, patientinformation3) for
 each
   month.
   I need a common script to calculate that for each variable.
  
   Thanks,
   Shreyasee
  
  
   On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com
 wrote:
  
   What does you data look like?  You could use 'split' and then examine
   the data in each range to count the number missing.  Would have to
   have some actual data to suggest a solution.
  
   On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee
   shreyasee.prad...@gmail.com
   wrote:
Hi,
   
I have imported one dataset in R.
I want to calculate the percentage of missing values for each month
(May
2006 to March 2007) for each variable.
Just to begin with I tried the following code :
   
*for(i in 1:length(dos))
for(j in 1:length(patientinformation1)
if(dos[i]==May-06  patientinformation1[j]==)
a - j+1
a*
   
The above code was written to calculate the number of missing
 values
for
May
2006, but I am not getting the correct results.
Can anybody help me?
   
Thanks,
Shreyasee
   
   [[alternative HTML version deleted]]
   
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
   
  
  
  
   --
   Jim Holtman
   Cincinnati, OH
   +1 513 646 9390
  
   What is the problem that you are trying to solve?
  
  
 
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem that you are trying to solve?
 
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to vectorize this?

2009-01-25 Thread Akshaya Jha
Hi,

I have the following datasets:
x=data I am looking through
key=a set of data with the codes I want

I have the following issue:
I want the subset of x which has a code contained in the key dataset.  That is, 
if x[i] is contained in the key dataset, I want to keep it.  Note that x may 
contain multiple of the same codes (or obviously none of that code as well)

I currently use two for-loops thusly in my R-code:

k=1
y=data.frame(1,stringsAsFactors=FALSE)
for(i in 1:length(x)){
for(j in 1:length(key)){

if(x[i]==key[j]){
y[k]=x[i]
k=k+1;
}

}
}
  
However, my dataset (x in this example) is pretty large, so I want to avoid 
using two for-loops.  Does anybody know an easier way to approach this?

Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to vectorize this?

2009-01-25 Thread Jorge Ivan Velez
Hi Akshaya,
Take a look at ?"%in%". Here is an example from the help:

 x <- 1:10
 key <- c(1,3,5,9)
 x %in% key
 [1]  TRUE FALSE  TRUE FALSE  TRUE FALSE FALSE FALSE  TRUE FALSE
 x[ x %in% key ]
[1] 1 3 5 9


HTH,

Jorge



On Sun, Jan 25, 2009 at 10:27 PM, Akshaya Jha aksha...@andrew.cmu.eduwrote:

 Hi,

 I have the following datasets:
 x=data I am looking through
 key=a set of data with the codes I want

 I have the following issue:
 I want the subset of x which has a code contained in the key dataset.  That
 is, if x[i] is contained in the key dataset, I want to keep it.  Note that x
 may contain multiple of the same codes (or obviously none of that code as
 well)

 I currently use two for-loops thusly in my R-code:

 k=1
 y=data.frame(1,stringsAsFactors=FALSE)
 for(i in 1:length(x)){
 for(j in 1:length(key)){

 if(x[i]==key[j]){
 y[k]=x[i]
 k=k+1;
 }

 }
 }

 However, my dataset (x in this example) is pretty large, so I want to avoid
 using two for-loops.  Does anybody know an easier way to approach this?

 Thanks

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to vectorize this?

2009-01-25 Thread markleeds

Hi:  if I understand, I think

newx <- x[ x %in% key]

should give you what you want.



On Sun, Jan 25, 2009 at 10:27 PM, Akshaya Jha wrote:


Hi,

I have the following datasets:
x=data I am looking through
key=a set of data with the codes I want

I have the following issue:
I want the subset of x which has a code contained in the key dataset. 
That is, if x[i] is contained in the key dataset, I want to keep it. 
Note that x may contain multiple of the same codes (or obviously none 
of that code as well)


I currently use two for-loops thusly in my R-code:

k=1
y=data.frame(1,stringsAsFactors=FALSE)
for(i in 1:length(x)){
for(j in 1:length(key)){

if(x[i]==key[j]){
y[k]=x[i]
k=k+1;
}

}
}
  However, my dataset (x in this example) is pretty large, so I 
want to avoid using two for-loops.  Does anybody know an easier way to 
approach this?


Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Stat textbook recommendation

2009-01-25 Thread Jason Rupert
Probably not intentional, but there doesn't appear to be  a link to R or any R 
related material on the site.  Ha.  Found that interesting.  Still a good 
list...

--- On Sat, 1/24/09, David C. Howell david.how...@uvm.edu wrote:
From: David C. Howell david.how...@uvm.edu
Subject: Re: [R] Stat textbook recommendation
To: r-help@r-project.org
Date: Saturday, January 24, 2009, 5:47 PM

Monte,

For a list of online sources that may be useful, go to
http://www.uvm.edu/~dhowell/methods/Websites/Archives.html
and check out some of the material referenced there. Clay Helberg's site is
particularly helpful. Unfortunately it is virtually impossible to keep links
current, so some are likely to be dead--although you can often find them via
Google.
Dave Howell

-- David C. Howell
Prof. Emeritus, Univ. of Vermont

PO Box 770059
627 Meadowbrook Circle
Steamboat Springs, CO
80477

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread jim holtman
do:

str(dos)
str(patientinformation1)

They must be the same length for the command to work: must be a one to
one match of the data.

On Sun, Jan 25, 2009 at 10:23 PM, Shreyasee shreyasee.prad...@gmail.com wrote:
 Hi Jim,

 I tried the code which u provided.
 In place of dos in command pat1 - rbinom(length(dos), 1, .5)  # generate
 some data
 I added patientinformation1 variable and then I gave the command for
 tapply but its giving me the following error:

 Error in tapply(pat1, format(dos, %Y%m), function(x) sum(x == 0)) :
   arguments must have same length


 Thanks,
 Shreyasee



 On Mon, Jan 26, 2009 at 10:50 AM, jim holtman jholt...@gmail.com wrote:

 YOu can save the output of the tapply and then replicate it for each
 of the variables.  The data can be used to plot the graphs.

 On Sun, Jan 25, 2009 at 9:38 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi Jim,
 
  I need to calculate the missing values in variable patientinformation1
  for
  the period of May 2006 to March 2007 and then plot the graph of the
  percentage of the missing values over these months.
  This has to be done for each variable.
  The code which you have provided, calculates the missing values for the
  months variable, am I right?
  I need to calculate for all the variables for each month.
 
  Thanks,
  Shreyasee
 
 
  On Mon, Jan 26, 2009 at 10:29 AM, jim holtman jholt...@gmail.com
  wrote:
 
  Here is an example of how you might approach it:
 
   dos - seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1 day')
   pat1 - rbinom(length(dos), 1, .5)  # generate some data
   # partition by month and then list out the number of zero values
   (missing)
   tapply(pat1, format(dos, %Y%m), function(x) sum(x==0))
  200605 200606 200607 200608 200609 200610 200611 200612 200701 200702
  200703
 21 22 16 18 16 15 16 17 14 16
  13
  
 
 
  On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee
  shreyasee.prad...@gmail.com
  wrote:
   Hi Jim,
  
   The dataset has 4 variables (dos, patientinformation1,
   patientinformation2,
   patientinformation3).
   In dos variable ther are months (May 2006 to March 2007) when the
   surgeries
   were formed.
   I need to calculate the percentage of missing values for each
   variable
   (patientinformation1, patientinformation2, patientinformation3) for
   each
   month.
   I need a common script to calculate that for each variable.
  
   Thanks,
   Shreyasee
  
  
   On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com
   wrote:
  
   What does you data look like?  You could use 'split' and then
   examine
   the data in each range to count the number missing.  Would have to
   have some actual data to suggest a solution.
  
   On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee
   shreyasee.prad...@gmail.com
   wrote:
Hi,
   
I have imported one dataset in R.
I want to calculate the percentage of missing values for each
month
(May
2006 to March 2007) for each variable.
Just to begin with I tried the following code :
   
*for(i in 1:length(dos))
for(j in 1:length(patientinformation1)
if(dos[i]==May-06  patientinformation1[j]==)
a - j+1
a*
   
The above code was written to calculate the number of missing
values
for
May
2006, but I am not getting the correct results.
Can anybody help me?
   
Thanks,
Shreyasee
   
   [[alternative HTML version deleted]]
   
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
   
  
  
  
   --
   Jim Holtman
   Cincinnati, OH
   +1 513 646 9390
  
   What is the problem that you are trying to solve?
  
  
 
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem that you are trying to solve?
 
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plotting graph for Missing values

2009-01-25 Thread Shreyasee
Hi Jim,

I run the following code

ds <- read.csv(file="D:/Shreyasee laptop data/ASC Dataset/Subset of the ASC Dataset.csv",
header=TRUE)
 attach(ds)
 str(dos)

I am getting the following message:

 Factor w/ 12 levels "-00-00","6-Aug",..: 6 6 6 6 6 6 6 6 6 6 ...

Thanks,
Shreyasee



On Mon, Jan 26, 2009 at 12:20 PM, jim holtman jholt...@gmail.com wrote:

 do:

 str(dos)
 str(patientinformation1)

 They must be the same length for the command to work: must be a one to
 one match of the data.

 On Sun, Jan 25, 2009 at 10:23 PM, Shreyasee shreyasee.prad...@gmail.com
 wrote:
  Hi Jim,
 
  I tried the code which u provided.
  In place of dos in command pat1 - rbinom(length(dos), 1, .5)  #
 generate
  some data
  I added patientinformation1 variable and then I gave the command for
  tapply but its giving me the following error:
 
  Error in tapply(pat1, format(dos, %Y%m), function(x) sum(x == 0)) :
arguments must have same length
 
 
  Thanks,
  Shreyasee
 
 
 
  On Mon, Jan 26, 2009 at 10:50 AM, jim holtman jholt...@gmail.com
 wrote:
 
  YOu can save the output of the tapply and then replicate it for each
  of the variables.  The data can be used to plot the graphs.
 
  On Sun, Jan 25, 2009 at 9:38 PM, Shreyasee shreyasee.prad...@gmail.com
 
  wrote:
   Hi Jim,
  
   I need to calculate the missing values in variable
 patientinformation1
   for
   the period of May 2006 to March 2007 and then plot the graph of the
   percentage of the missing values over these months.
   This has to be done for each variable.
   The code which you have provided, calculates the missing values for
 the
   months variable, am I right?
   I need to calculate for all the variables for each month.
  
   Thanks,
   Shreyasee
  
  
   On Mon, Jan 26, 2009 at 10:29 AM, jim holtman jholt...@gmail.com
   wrote:
  
   Here is an example of how you might approach it:
  
dos - seq(as.Date('2006-05-01'), as.Date('2007-03-31'), by='1
 day')
pat1 - rbinom(length(dos), 1, .5)  # generate some data
# partition by month and then list out the number of zero values
(missing)
tapply(pat1, format(dos, %Y%m), function(x) sum(x==0))
   200605 200606 200607 200608 200609 200610 200611 200612 200701 200702
   200703
  21 22 16 18 16 15 16 17 14 16
   13
   
  
  
   On Sun, Jan 25, 2009 at 8:51 PM, Shreyasee
   shreyasee.prad...@gmail.com
   wrote:
Hi Jim,
   
The dataset has 4 variables (dos, patientinformation1,
patientinformation2,
patientinformation3).
In dos variable ther are months (May 2006 to March 2007) when the
surgeries
were formed.
I need to calculate the percentage of missing values for each
variable
(patientinformation1, patientinformation2, patientinformation3) for
each
month.
I need a common script to calculate that for each variable.
   
Thanks,
Shreyasee
   
   
On Mon, Jan 26, 2009 at 9:46 AM, jim holtman jholt...@gmail.com
wrote:
   
What does you data look like?  You could use 'split' and then
examine
the data in each range to count the number missing.  Would have to
have some actual data to suggest a solution.
   
On Sun, Jan 25, 2009 at 8:30 PM, Shreyasee
shreyasee.prad...@gmail.com
wrote:
 Hi,

 I have imported one dataset in R.
 I want to calculate the percentage of missing values for each
 month
 (May
 2006 to March 2007) for each variable.
 Just to begin with I tried the following code :

 *for(i in 1:length(dos))
 for(j in 1:length(patientinformation1)
 if(dos[i]==May-06  patientinformation1[j]==)
 a - j+1
 a*

 The above code was written to calculate the number of missing
 values
 for
 May
 2006, but I am not getting the correct results.
 Can anybody help me?

 Thanks,
 Shreyasee

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible
 code.

   
   
   
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
   
What is the problem that you are trying to solve?
   
   
  
  
  
   --
   Jim Holtman
   Cincinnati, OH
   +1 513 646 9390
  
   What is the problem that you are trying to solve?
  
  
 
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem that you are trying to solve?
 
 



 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem that you are trying to solve?


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and 

Re: [R] Build Error on Opensolaris iconvlist

2009-01-25 Thread Prof Brian Ripley

On Sun, 25 Jan 2009, Karun Gahlawat wrote:


Prof Ripley,
Thanks. SUN gnu-iconv package should overwrite the sun version and so
it does. Apparently, it does not work.


As I said, you also need to pick up the correct header.

I built this library from gnu source with gcc and now it configures 
and builds but fails on make check for regressions.


This topic is touched in the manual with some 'blas' and 'lapack'
libraries. Not sure if these are related though. I am not sure either
where and what to get for this. Apologize as this is all new to me.


You need to look at the end of tests/reg-tests-1.R to see the error.

BTW according to the posting guide this topic was for the R-devel 
list.




Here is the extract from make check logs..

running code in 'reg-tests-1.R' ...*** Error code 1
The following command caused the error:
LC_ALL=C SRCDIR=. R_DEFAULT_PACKAGES= ../bin/R --vanilla < reg-tests-1.R > reg-tests-1.Rout 2>&1 || (mv reg-tests-1.Rout
reg-tests-1.Rout.fail && exit 1)
make: Fatal error: Command failed for target `reg-tests-1.Rout'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
make reg-tests-1.Rout reg-tests-2.Rout reg-IO.Rout reg-IO2.Rout
reg-plot.Rout reg-S4.Rout  RVAL_IF_DIFF=1
make: Fatal error: Command failed for target `test-Reg'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
for name in Examples Specific Reg Internet; do \
 make test-${name} || exit 1; \
done
make: Fatal error: Command failed for target `test-all-basics'
Current working directory /opt/R-2.8.1/tests
*** Error code 1
The following command caused the error:
(cd tests && make check)
make: Fatal error: Command failed for target `check'


On Sun, Jan 25, 2009 at 2:05 PM, Prof Brian Ripley
rip...@stats.ox.ac.uk wrote:

On Sun, 25 Jan 2009, Karun Gahlawat wrote:


Uwe,
Sorry I missed it. I do have gnu iconv..
SUNWgnu-libiconv

ls -lra /usr/lib/*iconv* | more
lrwxrwxrwx   1 root root  14 Jan 23 21:23 /usr/lib/libiconv.so -> libgnuiconv.so
lrwxrwxrwx   1 root root  22 Jan 23 21:23 /usr/lib/libgnuiconv.so -> ../gnu/lib/libiconv.so

And hence the confusion..


Did you tell R to use that one?  You need the correct header files set as
well as the library, or you will get the system iconv.  (The header file
remaps the entry point names.)

Perhaps you need to study the R-admin manual carefully, which describes how
to get the correct iconv.



On Sat, Jan 24, 2009 at 1:00 PM, Uwe Ligges
lig...@statistik.tu-dortmund.de wrote:



Karun Gahlawat wrote:


Hi!

Trying to build R-2.8.1. while configuring, it throws error

./configure

checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... yes
checking whether iconv accepts UTF-8, latin1 and UCS-*... no
checking for iconvlist... no
configure: error: --with-iconv=yes (default) and a suitable iconv is
not available

I am confused.. sorry new to this..
I can see the iconv binary, headers and libs all in the standard
directory. Please help or redirect!



Please read the documentation, the R Installation and Administration
manuals
tells you:

You will need GNU libiconv: the Solaris version of iconv is not
sufficiently powerful. 

Uwe Ligges



SunOS  5.11 snv_101b i86pc i386 i86pc

CC: Sun Ceres C++ 5.10 SunOS_i386 2008/10/22


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595





--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] .Renviron for multiple hardwares...

2009-01-25 Thread Prof Brian Ripley

On Sun, 25 Jan 2009, Jonathan Greenberg wrote:

Ah, perfect -- so would the ideal R_LIBS_USER setting (to more or less 
guarantee the libraries will work on every possible computer) be something 
along the lines of:


~/myRlibraries/%V%p%o%a

Or is this overkill?


%V is overkill. On some OSes %v is needed (Mac OS, Windows) and on 
others you can get away without it. And %p includes %o and %a.  The 
default on a Unix-alike is


~/R/%p-library//%v
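
So a single line in the shared ~/.Renviron along the lines below should be
enough (a sketch; the directory name itself is arbitrary). Each machine then
expands it to its own platform- and version-specific library, which
install.packages() will offer to create on first use:

## shared ~/.Renviron
R_LIBS_USER=~/R/library/%p/%v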



--j

Prof Brian Ripley wrote:

On Sun, 25 Jan 2009, Henrik Bengtsson wrote:


The script .Rprofile evaluates R code on startup.  You could use that
to test for various environment variables.  Alternatively, use Unix
shell scripts to set system environment variables to be used in a
generic .Renviron.  See help(Startup) for more details.


Well, not just 'Unix shell scripts': just set R_ENVIRON_USER appropriately (on 
any OS).




/Henrik

On Sun, Jan 25, 2009 at 11:22 AM, Jonathan Greenberg
greenb...@ucdavis.edu wrote:
Our lab has a lot of different unix boxes, with different hardware, and 
I'm
assuming (perhaps wrongly) that by setting a per-user package 
installation
directory, the packages will only work on one type of hardware.  Our 
systems

are all set up to share the same home directory (and, thus, the same
.Renviron file) -- so, is there a way to set, in the .Renviron file,
per-computer or per-hardware settings?  The idea is to have a different
package installation directory for each computer (e.g.
~/R/computer1/packages and ~/R/computer2/packages.


Well, we anticipated that and the default personal directory is
set by R_LIBS_USER, and that has a platform-specific default.  See 
?.libPaths.


None of this is uncommon: my dept home file system is shared by x86_64 
Linux, i386 Linux, x86_64 Solaris, Sparc Solaris, Mac OS X and Windows.  I 
just let install.packages() create a personal library for me on each one I 
use it on.



Thoughts?  Ideas?  Thanks!

--j

--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.





--

Jonathan A. Greenberg, PhD
Postdoctoral Scholar
Center for Spatial Technologies and Remote Sensing (CSTARS)
University of California, Davis
One Shields Avenue
The Barn, Room 250N
Davis, CA 95616
Cell: 415-794-5043
AIM: jgrn307, MSN: jgrn...@hotmail.com, Gchat: jgrn307

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using help()

2009-01-25 Thread Thomas Lumley

On Sun, 25 Jan 2009, Patrick Burns wrote:


Michael Kubovy wrote:

Dear R-helpers,

[...]

(2) If I remember dnorm() and want to be reminded of the call, I also  get a 
list of pages.
  


It sounds to me like here you want:

args(dnorm)



or, for functions hidden in a namespace, argsAnywhere().
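
For example:

> args(dnorm)
function (x, mean = 0, sd = 1, log = FALSE) 
NULL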

   -thomas

Thomas Lumley   Assoc. Professor, Biostatistics
tlum...@u.washington.eduUniversity of Washington, Seattle

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.