Joseph Retzer wrote:
Dear R-help,
I'm using the R2WinBUGS package and getting an error message:
Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file 'codaIndex.txt', reason 'No such file or
directory'
Looks
The problem is most likely your use of cat() for output. Consider
x <- 8665540.49905558
cat(x, "\n")
8665540
cat(as.character(x), "\n")
8665540.49905558
options(digits = 10)
cat(x, "\n")
8665540.499
So it would be best to do the conversions yourself, and I would
investigate using format() to do so.
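To make the suggestion concrete, here is a minimal sketch (not from the original post) of using format() to control precision before cat() sees the value:

```r
x <- 8665540.49905558
# cat() rounds numerics according to options("digits"); format() lets you
# request the precision explicitly, so the full value survives conversion
cat(format(x, digits = 15), "\n")   # 8665540.49905558
```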
Kåre Edvardsen wrote:
Dear R-gurus!
Is it possible within boxplot to break the y-scale into two (or
anything)? I'd like to have a normal linear y-range from 0-10 and the
next tick mark starting at, say, 60 and continuing to 90. The reason is to
better visualise the outliers.
Hi Kare,
In
Dear R'Helpers and Colleagues,
I have looked in the documentation and asked some colleagues in the
lab, but was unable to solve the following difficulty.
I am running R on an iMac G5 with OS 10.4.
The file below (73 rows x 144 col) shows the values of a climatic
index on the globe with a
Hi list,
I have been trying to pile up data frames with different column names using
rbind. I am getting the following error message. I was wondering if there is
a way around this minor problem?
Error in match.names(clabs, names(xi)) : names don't match previous names:
G
Thanks
Hello,
Apologies if this is the wrong list, I am a first-time poster here. I
have an experiment in which an output is measured in response to 42
different categories.
I am only interested in which of the categories is significantly different
from a reference category.
Here is the summary of the
Hi,
I am having trouble importing data from a csv file in R.
This is my current code:
returns <- read.zoo("empty143.csv", format = "%d/%b/%Y", sep = ",",
                    header = TRUE, skip = 1)
for importing data of this type:
Request4*,
,fx.TWD.USD.SPOT.soho
01/Jan/1988,0.03502627
04/Jan/1988,0.03502627
On 05/04/06, Paul Johnson [EMAIL PROTECTED] wrote:
Hi Paul,
I upgraded from Fedora Core 4 to Fedora Core 5 and I find a lot of
previously installed packages won't run because shared libraries or
other system things have changed out from under the installed R
libraries. I do not know for
Hi
if you are really sure that all columns in all data frames you want
to stack are of the same type and in the same position, just make the names
in all data frames equal.
df1 <- data.frame(rnorm(10), rnorm(10))
df2 <- data.frame(rnorm(10), rnorm(10))
names(df2) <- c("a", "b")
df1
rnorm.10. rnorm.10..1
1
Hi, Steve,
I think you can trick it by adding an empty level with a small cex:
keyArgs <- list(points = list(pch = c(NA, rep(17, 5), NA), lwd = 2,
col = c(NA, c("red", "chartreuse3", "black", "cyan",
"blue")), NA),
text = list(lab = c("S-R
Hi,
I'm using Mark Dettling's supclust package.
I have a typical numeric matrix x of explanatory variables (n cases, p
features, n << p).
I have a numeric vector y of length n, containing a quantitative
response variable.
I am interested in grouping features in a way which is strongly
Hi,
I hope to use R to perform some hypothesis testing, such as t.test() and
var.test().
However, I don't find a function that could do a test on means with known variance
(i.e., the u test or z test in some textbooks), or a function that could do a test on
variances of a normal distribution (i.e.
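Since a z test with known variance is only a few lines, here is a minimal sketch (the helper name `z.test` is hypothetical; it is not a base R function, and the TeachingDemos version mentioned elsewhere in the thread differs):

```r
# One-sample z test with known population sd (a sketch, not a base R function)
z.test <- function(x, mu = 0, sd, alternative = "two.sided") {
  z <- (mean(x) - mu) / (sd / sqrt(length(x)))
  p <- switch(alternative,
              two.sided = 2 * pnorm(-abs(z)),
              less      = pnorm(z),
              greater   = pnorm(z, lower.tail = FALSE))
  list(statistic = z, p.value = p)
}
z.test(rnorm(30, mean = 0.5), mu = 0, sd = 1)
```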
Does lme prediction work correctly with poly() terms?
In the following simulated example, the predictions
are wildly off.
Or am I doing something daft?
Milk yield for five cows is measured weekly for 45 weeks.
Yield is simulated as cubic function of weekno + random
cow effect (on intercept) +
On 4/6/06 7:17 AM, Jinsong Zhao [EMAIL PROTECTED] wrote:
Hi,
I hope to use R to perform some hypothesis testing, such as t.test() and
var.test().
However, I don't find a function that could do test on means with variance
known
(i.e., u test or z test in some textbook), and a function
Hi All,
I have certain combinations for which I have some value, e.g.
0 1 20
0 2 15
1 1 40
1 2 52
Now I need to sort this list to find the combination with the lowest value.
In this case the value is 15, and the combination is 0 2.
--
SUMANTA
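Assuming the combinations sit in a matrix with the value in the third column (a sketch, not from the thread), which.min() or order() does this directly:

```r
m <- matrix(c(0, 1, 20,
              0, 2, 15,
              1, 1, 40,
              1, 2, 52), ncol = 3, byrow = TRUE)
# row with the smallest value in column 3
m[which.min(m[, 3]), ]   # combination 0 2, value 15
# or sort all rows by the value column
m[order(m[, 3]), ]
```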
On 4/6/06 8:04 AM, Sumanta Basak [EMAIL PROTECTED] wrote:
Hi All,
I have certain combinations for which I have some value, e.g.
0 1 20
0 2 15
1 1 40
1 2 52
Now I need to sort this list for which I'll get the combination against the
I don't believe predict.lme implements makepredictcall (which is the magic
used in normal model-fitting), nor does it use model.frame.default.
So the answer would appear to be that there is no reason to expect it to
work with poly().
See
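Given that answer, one workaround sketch (assumed, with simulated data, not from the thread) is to spell out the polynomial with I() so that predict() recomputes the basis from newdata rather than relying on poly()'s data-dependent orthogonal basis:

```r
# Sketch: avoid poly()'s data-dependent basis in an lme formula by using
# explicit raw powers, which predict() can rebuild safely from newdata
library(nlme)
set.seed(1)
d <- data.frame(week = rep(1:10, 2), cow = rep(c("c1", "c2"), each = 10))
d$yield <- 20 + 0.5 * d$week - 0.02 * d$week^2 + rnorm(20, sd = 0.1)
fit <- lme(yield ~ week + I(week^2), random = ~ 1 | cow, data = d)
predict(fit, newdata = data.frame(week = 5, cow = "c1"))
```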
Dear R-users,
I intend to do a spatial analysis on the genetic structuring within a
population. For this I had thought to prepare a kernel density estimate
map showing the spatial distribution of individuals, while incorporating
the genetic distances among individuals. I have a dataset of
From: Sean Davis
On 4/6/06 8:04 AM, Sumanta Basak [EMAIL PROTECTED] wrote:
Hi All,
I have certain combinations for which I have some value, e.g.
0 1 20
0 2 15
1 1 40
1 2 52
Now I need to sort this list for which
It is useful to me. Thank you! May I ask another question?
I have some data with many points on a smaller scale (e.g. 0-10) and few
points on a larger scale (e.g. 10-1000). I want a normal linear
y-scale from 0-10, with the other points (10-1000) displayed over a shorter
distance.
Can I do it
If you don't insist on kernel smoothing, and are willing to use something
similar, locfit() in the locfit package uses local likelihood to estimate
density and can accept weights. E.g.,
library(locfit)
plot(locfit(~Petal.Length + Petal.Width, data=iris))
plot(locfit(~Petal.Length + Petal.Width,
Hi,
I want to do the following:
1) create a trellis plot with 1 x 1 layout
2) add a key in the upper right hand corner of the plotting region (i.e.,
the panel), but after the initial call to trellis
3) resize the resulting device graphics device without changing the relative
position of the
On Thu, 2006-04-06 at 14:29 +0200, May, Roel wrote:
Dear R-users,
I intend to do a spatial analysis on the genetic structuring within a
population. For this I had thought to prepare a kernel density estimate
map showing the spatial distribution of individuals, while incorporating
the
Hi
[MacOsX 10.4.6; R-2.2.1]
I have a bundle that comprises three packages. I want to run R CMD check
on each one individually, as it takes a long time to run on all three.
I am having problems.
The bundle as a whole passes R CMD check, but fails when I cd to the
bundle directory and
Dear Markus,
I indeed have a data set consisting of 1/2*N*(N-1) unique pairs of
individuals with data
x1 y1 x2 y2 w
I am however not interested in, like you said, the value at x1, y1 of w
summarised by a kernel function over all x2, y2 (if I understand you
rightly that is...). This sounds like
I have a question which is very easy to solve, but I can't find a solution.
I want to convert a data frame to a matrix. Here is my toy example:
L3 <- c(1:3)
L10 <- c(1:6)
d <- data.frame(cbind(x = c(10, 20), y = L10), fac = sample(L3, 6, repl = TRUE))
d
   x y fac
1 10 1   1
2 20 2   1
3 10 3   1
4 20 4   3
5
Hi
set the column names to NULL:
a <- data.frame(x = 1:4, y = 4:1)
aa <- as.matrix(a)
colnames(aa) <- NULL
aa
     [,1] [,2]
[1,]    1    4
[2,]    2    3
[3,]    3    2
[4,]    4    1
best wishes
Robin
On 6 Apr 2006, at 15:16, Muhammad Subianto wrote:
I have a question which is very easy to solve,
try the following:
out <- data.matrix(d)
dimnames(out) <- NULL
out
Best,
Dimitris
Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web:
On this day 06/04/2006 16:22, Robin Hankin wrote:
Hi
set the column names to NULL:
a <- data.frame(x = 1:4, y = 4:1)
aa <- as.matrix(a)
colnames(aa) <- NULL
aa
On this day 06/04/2006 16:28, Dimitris Rizopoulos wrote:
try the following:
out <- data.matrix(d)
dimnames(out) <- NULL
out
Hello all!
w2k, R-2.2.1. update.packages done today.
I just started to work with a new dataset, using lme() (library nlme) and
estimable() (library gmodels). I first wanted to establish the fixed
effects for eight fertiliser treatments (variable treat) coded as A to H.
Fitting and reducing a
Apparently you do not understand the point, and seem to (want to) see
patterns all over the place. A good start for the treatment of this
interesting disease is 'Fooled by Randomness' by Nassim Nicholas
Taleb. The main point of the book is that many things may be a lot
more random than one might
Hi,
After posting this I found a collection of functions designed to interact
with a trellis plot after it has been created. One such function is
trellis.focus, which retrieves the viewport for a specific panel.
Calling trellis.focus(name = "panel", row = 1, column = 1, highlight = FALSE) sets the
current
R-gurus...
I've got a 5 column dataframe where I'd like to plot each ID's b
against c with b in ascending order (within the same ID). How do I
sort b so that the other variables are altered equally?
ID   a  b     c     d
101  1  240   26.7  21.85
101  2  335   21.8  21.85
101  3  1387
The help for filled.contour suggests using the plot.axes argument to annotate a
plot, try changing the last part of your 3rd example to:
filled.contour(nlong, nlat, z1, col = rainbow(100),
               xlab = "longitude", ylab = "latitude",
               plot.axes = map(add = TRUE))
Or
filled.contour(nlong, nlat, z1, col = rainbow
Have you thought about using a log scale?
Clint Bowman            INTERNET: [EMAIL PROTECTED]
Air Quality Modeler     INTERNET: [EMAIL PROTECTED]
Department of Ecology   VOICE: (360) 407-6815
PO Box 47600            FAX: (360)
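The log-scale suggestion in code (a minimal sketch with made-up data): a log y axis spreads out the crowded small values and compresses the sparse large ones.

```r
# Sketch: log-scaled y axis for data spanning 0-10 and 10-1000
set.seed(1)
y <- c(runif(50, 1, 10), runif(5, 10, 1000))  # stand-in for the real data
boxplot(y, log = "y")   # the log argument also works for plot()
```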
There is a z.test function in the TeachingDemos Package. Note however
that this function is meant for teaching purposes as a stepping stone
for students to learn the syntax and output for hypothesis test
functions before learning about t tests etc. The z.test function only
does one sample tests.
Please read the Help files before posting! At the bottom of ?sort you will
find a link to order which is the answer to your question.
-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
The business of the statistician is to catalyze the scientific learning
process. -
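Applying that pointer to the original question: order() returns a permutation you index the whole data frame with, so all other columns move together (a sketch with the first rows of the posted data):

```r
df <- data.frame(ID = c(101, 101, 101), a = 1:3,
                 b = c(335, 240, 1387), c = c(21.8, 26.7, NA))
# sort rows by ID, then by b within ID; every column stays aligned
df_sorted <- df[order(df$ID, df$b), ]
df_sorted$b   # 240 335 1387
```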
I think you have to remove the DESCRIPTION.in file when you check the
package stand alone. As I recall, if it exists that is the flag to
indicate a bundle. Also beware that the DESCRIPTION file in the stand
alone package is not exactly the same as the DESCRIPTION.in file when it
is part of
Can anyone comment or point me to a discussion of the
pros and cons of robust regressions, vs. a more
manual approach to trimming outliers and/or
normalizing data used in regression analysis?
__
R-help@stat.math.ethz.ch mailing list
On Thu, 6 Apr 2006, Andrew McDonagh wrote:
My question relates to the meaning of the p-values. Do the p-values
relate to
a) the confidence in the estimate
or
b) the confidence that the non-intercept categories are different to the
intercept
Both (given a loose interpretation of your words and
Greetings all,
I've done some ANOVAs and have found significant effects across my
groups, so now I have to do some post-hoc tests to ascertain which of
the groups is driving the significant effect. Usually one would do
something like a Newman-Keuls or Scheffe test to get at this but based
on
This is a question about how to calculate similarities/distances
among items that are classified by hierarchical attributes
for the purpose of visualizing the relations among items by means
of clustering, MDS, self-organizing maps, and so forth.
I have a set of ~260 items that have been
There is a **Huge** literature on robust regression, including many books
that you can search on at e.g. Amazon. I think it fair to say that we have
known since at least the 1970's that practically any robust downweighting
procedure (see, e.g M-estimation) is preferable (more efficient, better
Thanks very much Uwe! There was a problem reading my data which I would not
have discovered without careful examination of the WinBUGS log file. The
program is working now!
Take care,
Joe
-Original Message-
From: Uwe Ligges [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 06, 2006 1:30
Bottom Line Up Front: How does one reshape genetic data from long to wide?
I currently have a lot of data. About 180 individuals (some
probands/patients, some parents, rare siblings) and SNP data from 6000 loci
on each. The standard formats seem to be something along the lines of Famid,
pid,
On 4/5/06, Steven Lacey [EMAIL PROTECTED] wrote:
Try this...
xyplot(y ~ x, data = data.frame(x = 1:10, y = 1:10))
keyArgs <- list()
keyArgs <- list(points = list(pch = c(NA, rep(17, 5)), lwd = 2, col = c(NA, c("red",
"chartreuse3", "black", "cyan", "blue"))),
text = list(lab = c("S-R Mapping",
On 4/6/06, Steven Lacey [EMAIL PROTECTED] wrote:
Hi,
I want to do the following:
1) create a trellis plot with 1 x 1 layout
2) add a key in the upper right hand corner of the plotting region (i.e.,
the panel), but after the initial call to trellis
3) resize the resulting device graphics
Yeah, I tried this method, but it does not work well for some data.
On 4/6/06, Clint Bowman [EMAIL PROTECTED] wrote:
Have you thought about using a log scale?
Clint Bowman            INTERNET: [EMAIL PROTECTED]
Air Quality Modeler     INTERNET: [EMAIL PROTECTED]
To add to Bert's comments:
- Normalizing data (e.g., subtracting mean and dividing by SD) can help
numerical stability of the computation, but that's mostly unnecessary with
modern hardware. As Bert said, that has nothing to do with robustness.
- Instead of _replacing_ lm() with rlm() or
Thanks, Andy. Well said. Excellent points. The final weights from rlm serve
this diagnostic purpose, of course.
-- Bert
-Original Message-
From: Liaw, Andy [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 06, 2006 9:56 AM
To: 'Berton Gunter'; 'r user'; 'rhelp'
Subject: RE: [R]
On Thu, 6 Apr 2006, Paul Gilbert wrote:
I think you have to remove the DESCRIPTION.in file when you check the
package stand alone. As I recall, if it exists that is the flag to
indicate a bundle. Also beware that the DESCRIPTION file in the stand
alone package is not exactly the same as the
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Berton Gunter
Sent: 06 April 2006 14:22
To: 'r user'; 'rhelp'
Subject: Re: [R] pros and cons of robust regression? (i.e. rlm vs lm)
There is a **Huge** literature on robust regression, including many books
You might want to take a look at the multcomp package, which does it in more
modern fashion. The "tukey" or "dunnett" options there refer to the types
of comparisons ("Tukey" means all pairwise, "Dunnett" means all vs. control)
rather than the named procedures. The package has a vignette that ought to
I'm asking this question purely for my own benefit, not to try to correct
anyone. The procedure you refer to as normalization I have always heard
referred to as standardization. Is the former the proper term? Also, you
say it's not necessary given today's hardware, but isn't it beneficial to get
Deepayan,
I also noticed that the width of the key object is off if the title of the
key is bigger than the columns. For example,
xyplot(y ~ x, data = data.frame(x = 1:10, y = 1:10))
keyArgs <- list()
keyArgs <- list(points = list(pch = c(NA, rep(17, 5)), lwd = 2, col = c(NA, c("red",
"chartreuse3", "black", "cyan", "blue"))),
A great example of the hazards of automatic outlier rejection is the
story of how the hole in the ozone layer in the southern hemisphere was
discovered. Outliers were dutifully entered into the data base but
discounted as probable metrology problems, which also plagued the
Spencer:
Your comment reinforces Andy's point, which is that purported outliers must
not be ignored but need to be clearly identified and examined. For reasons
that you well understand, robust regression methods are better for this in
the linear models context than standard least squares.
Hello,
I have two densities which I plot with:
sm.density(gps)
sm.density(gps2)
Both data sets are 2D and have the same scale.
Is there a way of plotting the difference between the two density plots ?
Thank you very much,
Phil
On 4/6/06, Steven Lacey [EMAIL PROTECTED] wrote:
Deepayan,
I also noticed that the width of the key object is off if the title of the
key is bigger than the columns. For example,
xyplot(y ~ x, data = data.frame(x = 1:10, y = 1:10))
keyArgs <- list()
keyArgs <-
I have recently become familiar with the latex() function thanks to
the help of many people on this list. However, now I need to figure
out how to make the right dataframe for the tables I want to make.
What I need to do is make tables with the std. errors beneath the
estimates in parentheses
On 4/6/2006 1:34 PM, Philipp H. Mohr wrote:
Hello,
I have two densities which I plot with:
sm.density(gps)
sm.density(gps2)
Both data sets are 2D and have the same scale.
Is there a way of plotting the difference between the two density plots ?
?sm.density describes the value
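One concrete approach (assumed, sketched here with MASS::kde2d rather than sm; sm.density's eval.points argument supports the same idea) is to evaluate both densities on a common grid and plot the pointwise difference:

```r
# Sketch: difference of two 2-D kernel density estimates on a shared grid
library(MASS)
set.seed(1)
gps  <- cbind(rnorm(100), rnorm(100))        # stand-ins for the real data
gps2 <- cbind(rnorm(100, 1), rnorm(100, 1))
lims <- c(range(gps[, 1], gps2[, 1]), range(gps[, 2], gps2[, 2]))
d1 <- kde2d(gps[, 1],  gps[, 2],  n = 50, lims = lims)
d2 <- kde2d(gps2[, 1], gps2[, 2], n = 50, lims = lims)
image(d1$x, d1$y, d1$z - d2$z)   # pointwise difference of the two estimates
```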
Some people use the two terms sort of interchangeably.
Beneficial in what sense? If there are no polynomial terms involving a
variable, any linear transformation of a variable (by itself) does not
change the quality of the fit at all. The scaling gets reflected in the
coefficient and its SE,
Your description was a bit hard for me to follow. I think what you're
trying to do is something like
TestRmpi.R
--
library(Rmpi)
if (0 == mpi.comm.rank(comm = 0)) {
  # 'manager', e.g., cat("manager\n")
} else {
  # 'worker', e.g., cat(mpi.comm.rank(comm = 0), "worker\n")
}
mpi.quit()
TestRmpi.sh
Suppose I have an arbitrary R object. Is there a way to find out its
format? There are 118 points, each described by two numbers. Let the
name of the object be "obj" (without the quotes). I can do print(obj),
but all I get is a bunch of numbers. I can do ls.str(obj),
but all I get is a bunch
On 4/6/2006 2:24 PM, Thomas L Jones wrote:
Suppose I have an arbitrary R object. Is there a way to find out its
format? There are 118 points, each described by two numbers. Let the
name of the object be obj without the quotes. I can do a print
(obj), but all I get is a bunch of numbers. I
I have the following:
a <- matrix(1:10, nrow = 2, byrow = TRUE)
b <- array(as.integer(0), c(7, 5))
idx <- list()
length(idx) <- 2
dim(idx) <- c(1, 2)
idx[[1]] <- as.integer(1:2)
idx[[2]] <- as.integer(1:5)
I can do the following, which works if 'b' is a matrix.
b[idx[[1]], idx[[2]]] <- a
b
Hi Tom,
If you assume that's a common question, and so likely to be in base,
help.search(object, package=base) calls up a list of functions relating
to objects. (Leaving off the package argument calls up a much longer one).
Browsing thru the list gets you the intriguing line:
class(base)
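For the original question, the usual inspection tools are class() and str(); a minimal sketch with a made-up stand-in object (118 points, two numbers each, as described):

```r
# Sketch: inspecting an unknown object's structure
obj <- list(points = matrix(rnorm(236), ncol = 2))  # hypothetical stand-in
class(obj)   # what kind of object it is
str(obj)     # compact display of its internal structure
```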
Hello all. I have been trying to develop a representation (in the S4 sense)
for a floating cash object, which would store a cash amount as a function of an
arbitrary number of variables (with unknown values). for example, an interest
rate swap may call for a payment in one year that can be
On Thu, 6 Apr 2006, Duncan Murdoch wrote:
On 4/6/2006 1:34 PM, Philipp H. Mohr wrote:
Hello,
I have two densities which I plot with:
sm.density(gps)
sm.density(gps2)
Both data sets are 2D and have the same scale.
Is there a way of plotting the difference between the two
There are some nice new R contributions in the
latest volume of JSS
http://www.jstatsoft.org
The most recent one (and a good model for similar
comparative work) is
Authors:
Alexandros Karatzoglou and David Meyer and Kurt Hornik
Title:
Support Vector Machines in R
Reference:
Volume 15, 2006,
There are several kinds of standardization, and 'normalization' is
only one of them. For some details you could check
http://support.sas.com/91doc/getDoc/statug.hlp/stdize_index.htm
(see Details for standardization methods).
Standardization is required prior to clustering to control for the
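Column-wise standardization is one line in R; a minimal sketch (not from the original post):

```r
# Sketch: standardize columns (mean 0, sd 1) before computing distances
m <- matrix(c(1, 2, 3, 10, 20, 30), ncol = 2)
z <- scale(m)            # subtract column means, divide by column SDs
round(colMeans(z), 12)   # 0 0
apply(z, 2, sd)          # 1 1
```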
Hello all,
I hope someone can help me with this.
I have a function that calculates two values based on input data. A
simple example follows:
test <- function(x, s, rangit = seq(0, 10, 1))
{
  rangit <- rangit
  y <- vector()
  p <- vector()
  for (i in 0:length(rangit)) {
    y[i] <- x + s[1] + rangit[i]
    p[i] <- x + s[2] + rangit[i]
  }
Frank E Harrell Jr wrote:
Eric Rescorla [EMAIL PROTECTED] wrote:
(2) I'd like to compute goodness-of-fit statistics for my fit
(Hosmer-Lemeshow, Pearson, etc.). I didn't see a package that
did this. Have I missed one?
Hosmer-Lemeshow has low power and relies on arbitrary binning of
Hi All,
Could you please help with the error I get from the following code?
library(cluster)
data(iris)
bc1 <- bclust(iris[, 2:5], 3, base.method = "clara", base.centers = 5)
Then I got:
Committee Member:
1(1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20)
Hi,
I have a data fram like this:
date column1 column2 column3 value1 value2 value3
1-1     A       B       C       10      5      2
2-1     A       B       D        5      2      0
3-1     A       B       E       17     10      7
How can I reshape it to:
date column1 column2 column3      v  x
1-1     A       B       C    value1 10
1-1     A       B       C    value2  5
1-1     A       B       C    value3  2
2-1     A       B       D    value1  5
2-1     A       B       D    value2  2
2-1     A
I assume you want a list containing length(x) data frames, one per
list component. If that's it try this:
test <- function(x = 1:4, s = 1:2, rangit = 0:11)
  lapply(x, function(x) data.frame(rangit, y = x + s[1] + rangit, p = x + s[2] + rangit))
test() # test run
On 4/6/06, Guenther, Cameron [EMAIL
Try this:
do.call("[<-", c(list(b), idx, list(a)))
On 4/6/06, Paul Roebuck [EMAIL PROTECTED] wrote:
I have the following:
a <- matrix(1:10, nrow = 2, byrow = TRUE)
b <- array(as.integer(0), c(7, 5))
idx <- list()
length(idx) <- 2
dim(idx) <- c(1, 2)
idx[[1]] <- as.integer(1:2)
idx[[2]] <-
Just follow this example from example(reshape):
reshape(wide, idvar = "Subject", varying = list(names(wide)[2:12]),
        v.names = "conc", direction = "long")
like this:
long <- reshape(dd, direction = "long", varying =
        list(names(dd)[5:7]), idvar = "v",
        v.names = "x", times = paste("value", 1:3,
Just a note to say if you use msm you may find that
statetable returns an incorrect statetable: you can
then build an incorrect model and not even know that
statetable was the villain.
shfets
I have a question about how to reference variables in a dataframe.
Normally, after I have read in some Stata data using the following command
all <- read.dta('all.dta')
Whenever I want to use the variable sat.vr1 in the all data frame,
I do so using
all$sat.vr1
However, I'd like to be able to
Here are two ways:
with(iris, Sepal.Length + Sepal.Width)
attach(iris)
Species
On 4/6/06, Brian Quinif [EMAIL PROTECTED] wrote:
I have a question about how to reference variables in a dataframe.
Normally, after I have read in some Stata data using the following command
all <-
R has many different functions for financial computations, but I
don't know if you will find them all together. Are you familiar with
RSiteSearch (including the second restrict argument)?
If you can't find what you want, please submit another post (after
first reading the
I don't know what PMT, etc., does, so I'll just give some hints; maybe they
are not so useful. There are several packages on CRAN related to finance.
Have you tried to find some functions in them?
fBasics    Rmetrics - Markets and Basic Statistics
fCalendar  Rmetrics - Chronological Objects
fExtremes
Those functions in Excel are for computing things like interest payment for
fixed interest rate loan, etc. RSiteSearch() doesn't turn up anything
useful, so I'd guess no one has written them. Good opportunity for people
to contribute, I guess.
Andy
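For anyone who wants to roll their own, the annuity formula behind Excel's PMT is short; a minimal sketch (the helper name `pmt` is hypothetical, not from any package mentioned in the thread):

```r
# Sketch of an Excel-style PMT: payment per period for a fixed-rate loan
pmt <- function(rate, nper, pv) {
  if (rate == 0) return(pv / nper)           # no-interest edge case
  pv * rate / (1 - (1 + rate)^-nper)
}
pmt(0.06 / 12, 360, 200000)   # monthly payment on a 30-year, 6% loan
```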
From: Spencer Graves
R has many
See the first two hits for:
RSiteSearch("IRR")
On 4/6/06, Liaw, Andy [EMAIL PROTECTED] wrote:
Those functions in Excel are for computing things like interest payment for
fixed interest rate loan, etc. RSiteSearch() doesn't turn up anything
useful, so I'd guess no one has written them. Good
Dear Community,
I'm interested in developing a package that could ease the
command-line learning curve for new users. It would provide more
detailed syntax checking and commentary as feedback. It would try to
anticipate common new-user errors, and provide feedback to help
correct them.
As a
Hi,
I am using rpart to do leave-one-out cross-validation, but I ran into a problem.
Data is a data frame: the first column is the subject id, the second column is
the group id, and the remaining columns are numerical variables.
Data[1:5,1:10]
sub.id group.id X3262.345 X3277.402 X3369.036 X3439.895
I'm interested in any thoughts that people have about this idea -
what errors do you commonly see, and how can they be dealt with?
NAs introduced by coercion
Thanks,
Ok so I feel silly now ... I think I need to get a more thorough text and
work through some examples. 'replace' does what I was trying to do manually.
Thanks for the tip, this really is MUCH faster.
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent:
I would like to use a for loop to run estimations on 12 different
subsets of a dataset.
I have the basics of my script figured out, but I am having problems
getting the loop to name some files as I would like. Here is a
sketch of my code:
sub.dataset <- c(101, 201)
#Assume I only have two
See ?paste and try:
paste(sub.dataset[i], "tex", sep = ".")
On 4/7/06, Brian Quinif [EMAIL PROTECTED] wrote:
I would like to use a for loop to run estimations on 12 different
subsets of a dataset.
I have the basics of my script figured out, but I am having problems
getting the loop to name some
I tried that, and the problem seems to be that the results of the
'paste'ing are enclosed in quotes...that is the paste yields
'101.tex'. Of course, that is what I said I wanted above, and it
should work for that particular instance, but I also want to be able
to create estimates.101 and
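A sketch of the loop being described (names assumed from the question): paste() builds the quoted file names, and assign() turns a pasted string into an object name like estimates.101.

```r
# Sketch: build file names and object names inside the loop
sub.dataset <- c(101, 201)
results <- list()
for (i in seq_along(sub.dataset)) {
  outfile <- paste(sub.dataset[i], "tex", sep = ".")   # "101.tex", "201.tex"
  assign(paste("estimates", sub.dataset[i], sep = "."), i * 10)
  results[[outfile]] <- outfile
}
# a named list is usually cleaner than many assign()ed objects
```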