Re: [R] Scaling of font sizes in layout()

2004-04-04 Thread Pisut Tempatarachoke
Paul Murrell wrote:
Hi

Pisut Tempatarachoke wrote:

Hi all,

In the following example,

#--EXAMPLE--
test <- function(subfigure)
{
plot(c(1:10), c(1:10), cex = 4)
text(1, 9, subfigure, cex = 10)
}
m <- matrix(c(1,2,5,5,3,4,5,5), 4, 2)
layout(m)
test("a")
test("b")
test("c")
test("d")
test("e")
#---
Is it possible to have the font (a,b,...,e) and pch sizes (including 
the axis-label, tick and tick-label sizes) scaled proportionally with 
the size of each plot when I put multiple plots on the same page?


When you have multiple figures, R tries to think for you and reduces the 
base size of text.  You can explicitly control this base size through 
par().  Does the following slight modification of your example do what 
you want?

test <- function(subfigure)
{
plot(c(1:10), c(1:10), cex = 4)
text(1, 9, subfigure, cex = 10)
}
m <- matrix(c(1,2,5,5,3,4,5,5), 4, 2)
layout(m)
test("a")
test("b")
test("c")
test("d")
par(cex = 1)
test("e")
Paul
Hi Paul,

Sorry for taking so long to reply.  Your suggestion worked right away 
but I have been busily caught up with other things.  Again, thank you 
very much for your help.

Best regards
Pisut
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] memory limit problem

2004-04-04 Thread Prof Brian Ripley
What do you mean `did not work'?  Did it not start (you may need to reboot 
your machine to clear its memory tables) or did your task run out of 
memory?

Please do read the posting guide and its references, and try to give 
useful information about the problem you encounter.  Saying `did not work' 
without ever saying what actually happened is maximally uninformative.


On Sun, 4 Apr 2004, Yi-Xiong Sean Zhou wrote:

 I tried using --max-mem-size=1400M at the command line on 1.8.1 and it did
 not work. However, 1.9.0beta works. The OS is XP Professional on a Dell
 Inspiron 8600. 
 
 Yi-Xiong
 
 -Original Message-
 From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, April 03, 2004 11:20 PM
 To: Roger D. Peng
 Cc: Yi-Xiong Sean Zhou; [EMAIL PROTECTED]
 Subject: Re: [R] memory limit problem
 
 That is true, but I don't see that Yi-Xiong Sean Zhou has actually yet 
 followed the instructions for 1.8.1, which is to set --max-mem-size on the 
 command line (and this is in the rw-FAQ as people have pointed out).
 
 The issue is that on Windows the memory address space can get fragmented, 
 and this is ameliorated by reserving memory in advance -- that is what 
 using --max-mem-size (and not memory.limit) does for you.
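 
 For reference, the recommended approach amounts to putting the flag on the
 shortcut or command line that starts R; for example (the install path here
 is just the 1.8.1 default, adjust as needed):
 
   "C:\Program Files\R\rw1081\bin\Rgui.exe" --max-mem-size=1500M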
 
 When used as recommended, both 1.8.1 and 1.9.0beta can handle workspaces
 of up to about 1.7Gb.  1.9.0 can go higher on suitable OSes: see its
 rw-FAQ.
 
 On Sun, 4 Apr 2004, Roger D. Peng wrote:
 
  In general, this is not an R problem, it is a Windows problem.  I find 
  that these types of memory problems do not appear on Linux, for example.
  
  -roger
  
  Yi-Xiong Sean Zhou wrote:
  R1.9.0beta solves the problem for now. The memory footprint of R1.9.0 is
  way smaller than R1.8.1, at only 400M. It will be interesting to see how
  R1.9.0 handles the memory problem when it needs more than 700M.
   
  Thanks for your help. 
   
   Yi-Xiong
   
   -Original Message-
   From: Roger D. Peng [mailto:[EMAIL PROTECTED] 
   Sent: Saturday, April 03, 2004 2:52 PM
   To: Yi-Xiong Sean Zhou
   Cc: 'Uwe Ligges'; [EMAIL PROTECTED]
   Subject: Re: [R] memory limit problem
   
   You may want to try downloading the development version of R at 
   http://cran.us.r-project.org/bin/windows/base/rdevel.html.  This 
   version deals with Windows' deficiencies in memory management a 
   little better.
   
   -roger
   
   Yi-Xiong Sean Zhou wrote:
   
   
  After memory.limit(1500), the error message still pops up:
  
  Error: cannot allocate vector of size 11529 Kb
  
  While 
  
  
  
  memory.size()
  
  [1] 307446696
  
  
  memory.limit()
  
  [1] 1572864000
  
  And the system is only using 723MB physical memory, while 2G is the
 total.
   
   
  Does anyone have a clue of what is going on? 
  
  
  Yi-Xiong
  
  
  -Original Message-
  From: Uwe Ligges [mailto:[EMAIL PROTECTED] 
  Sent: Saturday, April 03, 2004 12:40 PM
  To: Yi-Xiong Sean Zhou
  Cc: [EMAIL PROTECTED]
  Subject: Re: [R] memory limit problem
  
  
  
  Yi-Xiong Sean Zhou wrote:
  
  
  Could anyone advise me how to allocate 1.5Gbyte memory for R on a Dell
  laptop running XP professional with 2G memory?
  
  
  See ?Memory or the R for Windows FAQ, which tells you:
  
  2.7 There seems to be a limit on the memory it uses!
  
  Indeed there is. It is set by the command-line flag --max-mem-size (see
  How do I install R for Windows?) and defaults to the smaller of the
  amount of physical
  RAM in the machine and 1Gb. [...]
  
  
  
  
  I have tried
  
  C:\Program Files\R\rw1081\bin\Rgui.exe --max-vsize=1400M
  
  but I actually only get 500MB for R.
  
  
  I also tried memory.limit(2^30) in R and got error of:
  
  
  Well, you don't want to allocate 2^30 *Mega*Bytes (see ?memory.limit),
  do you? 
  
  
  Either use the command line flag --max-mem-size=1500M or within R:
   memory.limit(1500)
  
   
  
  
  Error in memory.size(size) : cannot decrease memory limit
  
  
  Since your limit was roughly 10^6-times off the right one, you got an
  integer overflow internally, I think.
  
  Uwe Ligges
  
  
   
  
  
  Yi-Xiong
  

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,

[R] Can't seem to finish a randomForest.... Just goes and goes!

2004-04-04 Thread David L. Van Brunt, Ph.D.
Playing with randomForest, samples run fine. But on real data, no go.

Here's the setup: OS X, same behavior whether I'm using R-Aqua 1.8.1 or the
Fink compile-of-my-own with X-11, R version 1.8.1.

This is on OS X 10.3 (aka Panther), G4 800MHz with 512MB physical RAM.

I have not altered the Startup options of R.

Data set is read in from a text file with read.table, and has 46 variables
and 1,855 cases. Trying the following:

The DV is categorical, 0 or 1. Most of the IVs are either continuous, or
correctly read in as factors. The largest factor has 30 levels. Only the
DV seems to need identifying as a factor to force classification trees over
regression:

Mydata$V46 <- as.factor(Mydata$V46)
Myforest.rf <- randomForest(V46 ~ ., data = Mydata, ntrees = 100, mtry = 7,
                            proximities = FALSE, importance = FALSE)

5 hours later, R.bin was still taking up 75% of my processor.  When I've
tried this with larger data, I get errors referring to the buffer (sorry,
not in front of me right now).

Any ideas on this? The data don't seem horrifically large. Seems like there
are a few options for setting memory size, but I'm  not sure which of them
to try tweaking, or if that's even the issue.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Can't seem to finish a randomForest.... Just goes and goes!

2004-04-04 Thread Liaw, Andy
When you have fairly large data, _do not use the formula interface_, as a
couple of copies of the data would be made.  Try simply:

Myforest.rf <- randomForest(Mydata[, -46], Mydata[, 46], 
                            ntrees=100, mtry=7)

[Note that you don't need to set proximity (not proximities) or importance
to FALSE, as that's the default already.]

You might also want to use do.trace=1 to see if trees are actually being
grown (assuming there's no output buffering as in Rgui on Windows, otherwise
you'll probably want to turn that off).
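
As a side note, in the versions of randomForest I have seen the tree-count
argument is spelled `ntree` (a misspelled `ntrees` is, I believe, silently
absorbed by `...`), so it is worth double-checking.  A small sketch on the
built-in iris data, with progress tracing on:

```r
library(randomForest)   # Breiman and Cutler's random forests
data(iris)
set.seed(1)
## do.trace = 1 prints the OOB error estimate as each tree is grown
fit <- randomForest(iris[, -5], iris[, 5], ntree = 10, mtry = 2,
                    do.trace = 1)
```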

I have run randomForest on data sets much larger than that without problems,
so I don't imagine your data would be `difficult'.  (I have not used the
Mac, though.)

Andy

 From: David L. Van Brunt, Ph.D.
 
 Playing with randomForest, samples run fine. But on real data, no go.
 
 Here's the setup: OS X, same behavior whether I'm using 
 R-Aqua 1.8.1 or the
 Fink compile-of-my-own with X-11, R version 1.8.1.
 
 This is on OS X 10.3 (aka Panther), G4 800Mhz with 512M 
 physical RAM.
 
 I have not altered the Startup options of R.
 
 Data set is read in from a text file with read.table, and 
 has 46 variables
 and 1,855 cases. Trying the following:
 
 The DV is categorical, 0 or 1. Most of the IV's are either 
 continuous, or
 correctly read in as factors. The largest factor has 30 
 levels Only the
 DV seems to need identifying as a factor to force class trees over
 regression:
 
 Mydata$V46 <- as.factor(Mydata$V46)
 Myforest.rf <- randomForest(V46 ~ ., data = Mydata, ntrees = 100, mtry = 7,
                             proximities = FALSE, importance = FALSE)
 
 5 hours later, R.bin was still taking up 75% of my processor. 
  When I've
 tried this with larger data, I get errors referring to the 
 buffer (sorry,
 not in front of me right now).
 
 Any ideas on this? The data don't seem horrifically large. 
 Seems like there
 are a few options for setting memory size, but I'm  not sure 
 which of them
 to try tweaking, or if that's even the issue.
 




Re: [R] Can't seem to finish a randomForest.... Just goes and goes!

2004-04-04 Thread David L. Van Brunt, Ph.D.
Thanks for the pointer!! Can't believe you got back to me so quickly on a
Sunday evening. I'll give that a shot and let you know how it goes.

On 4/4/04 19:07, Liaw, Andy [EMAIL PROTECTED] wrote:

 When you have fairly large data, _do not use the formula interface_, as a
 couple of copies of the data would be made.  Try simply:
 
 Myforest.rf - randomForest(Mydata[, -46], Mydata[,46],
   ntrees=100, mtry=7)
 
 [Note that you don't need to set proximity (not proximities) or importance
 to FALSE, as that's the default already.]
 
 You might also want to use do.trace=1 to see if trees are actually being
 grown (assuming there's no output buffering as in Rgui on Windows, otherwise
 you'll probably want to turn that off).
 
 I had run randomForest on data set much larger than that, without problem,
 so I don't imagine your data would be `difficult'.  (I have not used the
 Mac, though.)
 
 Andy
 
 From: David L. Van Brunt, Ph.D.
 
 Playing with randomForest, samples run fine. But on real data, no go.
 
 Here's the setup: OS X, same behavior whether I'm using
 R-Aqua 1.8.1 or the
 Fink compile-of-my-own with X-11, R version 1.8.1.
 
 This is on OS X 10.3 (aka Panther), G4 800Mhz with 512M
 physical RAM.
 
 I have not altered the Startup options of R.
 
 Data set is read in from a text file with read.table, and
 has 46 variables
 and 1,855 cases. Trying the following:
 
 The DV is categorical, 0 or 1. Most of the IV's are either
 continuous, or
 correctly read in as factors. The largest factor has 30
 levels Only the
 DV seems to need identifying as a factor to force class trees over
 regression:
 
 Mydata$V46 <- as.factor(Mydata$V46)
 Myforest.rf <- randomForest(V46 ~ ., data = Mydata, ntrees = 100, mtry = 7,
                             proximities = FALSE, importance = FALSE)
 
 5 hours later, R.bin was still taking up 75% of my processor.
  When I've
 tried this with larger data, I get errors referring to the
 buffer (sorry,
 not in front of me right now).
 
 Any ideas on this? The data don't seem horrifically large.
 Seems like there
 are a few options for setting memory size, but I'm  not sure
 which of them
 to try tweaking, or if that's even the issue.
 
 
 
 
 

-- 
David L. Van Brunt, Ph.D.
Outlier Consulting  Development
mailto: [EMAIL PROTECTED]



[R] Cochrane-Orcutt

2004-04-04 Thread pnick
hi everybody
i'm looking for a function to estimate a regression model via the Cochrane
Orcutt method
thanks



Re: [R] memory limit problem

2004-04-04 Thread Z P
Do you mean that on Linux there is no need to set a memory limit? If it is 
needed, how does one set it? Thanks.





From: Roger D. Peng [EMAIL PROTECTED]
To: Yi-Xiong Sean Zhou [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: [R] memory limit problem
Date: Sun, 04 Apr 2004 00:13:46 -0500
In general, this is not an R problem, it is a Windows problem.  I find that 
these types of memory problems do not appear on Linux, for example.

-roger

Yi-Xiong Sean Zhou wrote:
R1.9.0beta solves the problem for now. The memory footprint of R1.9.0 is
way smaller than R1.8.1, at only 400M. It will be interesting to see how
R1.9.0 handles the memory problem when it needs more than 700M.
Thanks for your help.

Yi-Xiong

-Original Message-
From: Roger D. Peng [mailto:[EMAIL PROTECTED] Sent: Saturday, April 03, 
2004 2:52 PM
To: Yi-Xiong Sean Zhou
Cc: 'Uwe Ligges'; [EMAIL PROTECTED]
Subject: Re: [R] memory limit problem

You may want to try downloading the development version of R at 
http://cran.us.r-project.org/bin/windows/base/rdevel.html.  This version 
deals with Windows' deficiencies in memory management a little better.

-roger

Yi-Xiong Sean Zhou wrote:


After memory.limit(1500), the error message still pops up:

Error: cannot allocate vector of size 11529 Kb

While



memory.size()
[1] 307446696


memory.limit()
[1] 1572864000

And the system is only using 723MB physical memory, while 2G is the 
total.


Does anyone have a clue of what is going on?

Yi-Xiong

-Original Message-
From: Uwe Ligges [mailto:[EMAIL PROTECTED] Sent: 
Saturday, April 03, 2004 12:40 PM
To: Yi-Xiong Sean Zhou
Cc: [EMAIL PROTECTED]
Subject: Re: [R] memory limit problem



Yi-Xiong Sean Zhou wrote:


Could anyone advise me how to allocate 1.5Gbyte memory for R on a Dell
laptop running XP professional with 2G memory?


See ?Memory or the R for Windows FAQ, which tells you:

2.7 There seems to be a limit on the memory it uses!

Indeed there is. It is set by the command-line flag --max-mem-size (see
How do I install R for Windows?) and defaults to the smaller of the
amount of physical
RAM in the machine and 1Gb. [...]



I have tried

C:\Program Files\R\rw1081\bin\Rgui.exe --max-vsize=1400M

but I actually only get 500MB for R.

I also tried memory.limit(2^30) in R and got error of:


Well, you don't want to allocate 2^30 *Mega*Bytes (see ?memory.limit),
do you?
Either use the command line flag --max-mem-size=1500M or within R:
memory.limit(1500)



Error in memory.size(size) : cannot decrease memory limit


Since your limit was roughly 10^6-times off the right one, you got an
integer overflow internally, I think.
Uwe Ligges





Yi-Xiong







RE: [R] Cochrane-Orcutt

2004-04-04 Thread John Fox
Dear pnick,

If you search the r-help archives, you'll see that some time ago I posted a
Cochrane-Orcutt function. It's not clear to me, however, why you'd want to
use this in preference to the gls function in the nlme package.
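
For what it's worth, a minimal sketch of the gls() route, with simulated data
(the variable names and AR coefficient here are purely illustrative):

```r
library(nlme)   # ships with R as a recommended package
set.seed(1)
x <- 1:50
e <- as.numeric(arima.sim(list(ar = 0.6), n = 50))  # AR(1) disturbances
d <- data.frame(x = x, y = 1 + 2 * x + e)
## estimate the regression coefficients and the AR(1) parameter jointly
fit <- gls(y ~ x, data = d, correlation = corAR1())
coef(fit)   # intercept and slope, with AR(1)-adjusted inference available
```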

I hope this helps,
 John

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 [EMAIL PROTECTED]
 Sent: Sunday, April 04, 2004 7:20 PM
 To: r
 Subject: [R] Cochrane-Orcutt
 
 hi everybody
 i'm looking for a function to estimate a regression model via 
 the Cochrane Orcutt method thanks




Re: [R] residuals with missing values

2004-04-04 Thread Ajay Shah
 hi: sorry to bother you all again.  I am running a simple lm(y~x+z)
 regression, in which some of the observations are missing.
 Unfortunately, the residuals vector from the lm object omits all the
 missing values, which means that I cannot simply do residual
 diagnostics (e.g., plot(y,x)).  Would it not make more sense to have
 the residuals propagate the missing values, so that the residuals
 are guaranteed to have the same length as the variables?
 Alternatively, maybe the residuals() function could do this instead.
 But the documentation is not clear:

I had a similar situation, and Brian Ripley said to me:

  If you have missing data in your data frame and want residuals for
  all observations, you need to use na.action=na.exclude, not the
  default na.omit.
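
A minimal illustration of the difference, with made-up data:

```r
d <- data.frame(x = 1:10, y = c(2, 4, NA, 8, 10, 12, NA, 16, 18, 20))
fit.omit    <- lm(y ~ x, data = d)                         # default: na.omit
fit.exclude <- lm(y ~ x, data = d, na.action = na.exclude)
length(resid(fit.omit))      # 8: the two NA rows are dropped
length(resid(fit.exclude))   # 10: NAs are padded back in, matching d$x
plot(d$x, resid(fit.exclude))   # diagnostics now line up with the data
```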

-- 
Ajay Shah   Consultant
[EMAIL PROTECTED]  Department of Economic Affairs
http://www.mayin.org/ajayshah   Ministry of Finance, New Delhi



Re: [R] boot question

2004-04-04 Thread Ajay Shah
  x <- rnorm(20)
  mean(x)
 [1] -0.2272851
  results <- boot(x, mean, R = 5)
 
 What in the world am I missing??

See http://www.mayin.org/ajayshah/KB/R/statistics.html
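
(For the archives: the usual culprit here is that boot() calls the statistic
with the data and a vector of resampled indices, so a bare mean() ignores the
resampling.  A sketch of the fix:)

```r
library(boot)   # ships with R
set.seed(42)
x <- rnorm(20)
## the statistic must accept the data and the bootstrap indices
results <- boot(x, statistic = function(d, i) mean(d[i]), R = 500)
results$t0   # the statistic on the original sample, i.e. mean(x)
```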

-- 
Ajay Shah   Consultant
[EMAIL PROTECTED]  Department of Economic Affairs
http://www.mayin.org/ajayshah   Ministry of Finance, New Delhi



[R] x-only zoom and pan?

2004-04-04 Thread Randy Zelick
Hello list,

Could the following be done without too much grief...?

Lets say I have two or three time series objects that I want to inspect
visually. Each I would plot with a y-offset so they stack up. They share
the same X scaling. The problem is that each is perhaps 100K values. Due
to the large number of values, features of the data sets cannot be seen
when all values are plotted.

What would be nice is to plot a fraction of the X range (say 10%). This
would be equivalent to zooming in the X direction. Then using a key
(ideally an arrow key), shift the viewing frame right or left to
effectively scroll through the data. So first you view 0-10%, then 10-20%
and so forth.

If necessary I can fabricate a vector with X values in it and plot(x,y)
instead of as time series, if this makes it any easier.

I am using a Windows version of R.

Thanks,

=Randy=

R. Zelick   email: [EMAIL PROTECTED]
Department of Biology   voice: 503-725-3086
Portland State University   fax:   503-725-3888

mailing:
P.O. Box 751
Portland, OR 97207

shipping:
1719 SW 10th Ave, Room 246
Portland, OR 97201



Re: [R] x-only zoom and pan?

2004-04-04 Thread Jason Turner
 Could the following be done without too much grief...?

It sounds possible, but there are already packages that deal with these
issues.  Some suggestions:

1) Use SVG plots, and Adobe's SVG Viewer plug-in for various web browsers.
 See the RSvgDevice package for details.

2) Use the Java graphics device - though under Windows, getting SJava
working can be problematic.  http://www.omegahat.org/RJavaDevice/
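
3) A third, package-free possibility is a rough pan loop in base graphics:
redraw with a sliding window and step on keyboard input.  Untried on
100K-point series, and the function name and defaults below are made up:

```r
## Show `frac` of the series at a time; <Enter> pans right, "q" quits.
panx <- function(y, frac = 0.1) {
  n     <- length(y)
  win   <- max(2, floor(n * frac))
  start <- 1
  repeat {
    idx <- start:min(n, start + win - 1)
    plot(idx, y[idx], type = "l", xlab = "index", ylab = "value")
    if (identical(readline("Enter = pan right, q = quit: "), "q")) break
    if (start + win > n) break   # already at the right edge
    start <- start + win
  }
}
## e.g. panx(as.numeric(myseries))   # myseries: one of your ts objects
```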

Cheers

Jason



Re: [R] How to improve this code?

2004-04-04 Thread Gabor Grothendieck

If I understand correctly, storelist and customerlist are matrices whose
columns hold latitude and longitude, and you want all store-customer pairs
less than a certain distance apart, sorted by store and distance.

dd is the distance matrix of all pairs.  We form this into a data frame of row
numbers (i.e. store numbers), column numbers (i.e. customer numbers) and
distances, subset it, and then sort it.  Finally we tapply() seq() over the
rows from each store to get ranks within stores.

Note that this creates some very large matrices if your data are large.

require(fields)
maxd <- 100
dd <- rdist.earth(storelist, customerlist, miles = F)
out <- data.frame(store = c(row(dd)), cust = c(col(dd)), dist = c(dd))[c(dd) < maxd, ]
out <- out[order(out$store, out$dist), ]
rk <- c(unlist(tapply(out$store, out$store, function(x) seq(along = x))))
out <- cbind(rank = rk, out)


Danny Heuman dsheuman at rogers.com writes:

: 
: Hi all,
: 
: I've got some functioning code that I've literally taken hours to
: write.  My 'R' coding is getting better...it used to take days :)
: 
: I know I've done a poor job of optimizing the code.  In addition, I'm
: missing an important step and don't know where to put it.
: 
: So, three questions:
: 
: 1)  I'd like the resulting output to be sorted on distance (ascending)
: and to have the 'rank' column represent the sort order, so that rank 1
: is the first customer and rank 10 is the 10th.  Where do I do this?
: 
: 2)  Can someone suggest ways of 'optimizing' or improving the code?
: It's the only way I'm going to learn better ways of approaching R.
: 
: 3)  If there are no customers in the store's Trade Area, I'd like nothing
: to be written to the output file.  How can I do that?
: 
: All help is appreciated.
: 
: Thanks,
: 
: Danny
: 
: 
: *
: library(fields)
: 
: # Format of input files:  ID, LONGITUDE, LATITUDE
: 
: # Generate store list
: storelist <- cbind(1:100, matrix(rnorm(100, mean = -60, sd = 3), ncol = 1),
:                    matrix(rnorm(100, mean = 50, sd = 3), ncol = 1))
: 
: # Generate customer list
: customerlist <- cbind(1:1, matrix(rnorm(1, mean = -60, sd = 20), ncol = 1),
:                       matrix(rnorm(1, mean = 50, sd = 10), ncol = 1))
: 
: # Output file
: outfile <- "c:\\output.txt"
: outfilecolnames <- c("rank", "storeid", "custid", "distance")
: write.table(t(outfilecolnames), file = outfile, append = TRUE,
:             sep = ",", row.names = FALSE, col.names = FALSE)
: 
: # Trade Area size
: TAsize <- c(100)
: 
: custlatlon <- customerlist[, 2:3]
: 
: for (i in 1:length(TAsize)) {
:   for (j in 1:nrow(storelist)) {
:     cat("Store: ", storelist[j], " TA Size = ", TAsize[i], "\n")
: 
:     storelatlon <- storelist[j, 2:3]
: 
:     whichval <- which(rdist.earth(t(as.matrix(storelatlon)),
:                       as.matrix(custlatlon), miles = F) <= TAsize[i])
: 
:     dist <- as.data.frame(rdist.earth(t(as.matrix(storelatlon)),
:             as.matrix(custlatlon), miles = F)[whichval])
: 
:     storetag <- as.data.frame(cbind(1:nrow(dist), storelist[j, 1]))
:     fincalc <- as.data.frame(cbind(1:nrow(dist), customerlist[whichval, 1],
:                rdist.earth(t(as.matrix(storelatlon)),
:                as.matrix(custlatlon), miles = F)[whichval]))
: 
:     combinedata <- data.frame(storetag, fincalc)
: 
:     combinefinal <- subset(combinedata, select = c(-1, -3))
: 
:     flush.console()
: 
:     write.table(combinefinal, file = outfile, append = TRUE,
:                 sep = ",", col.names = FALSE)
:   }
:   
: }
: 