[R] a function more appropriate than 'sapply'?

2013-01-26 Thread emorway
I'm wondering if I need to use a function other than sapply, as the following
line of code runs indefinitely (> 30 min so far) and uses up all 16 GB of
memory on my machine for what seems like a very small dataset (data attached
in a txt file, wells.txt:
http://r.789695.n4.nabble.com/file/n4656723/wells.txt ).  The R code is:

wells<-read.table("c:/temp/wells.txt",col.names=c("name","plc_hldr"))
wells2<-wells[sapply(wells[,1],function(x) length(strsplit(as.character(x),
"_")[[1]])==2),]

The 2nd line of R code above gets bogged down and takes all my RAM with it:
http://r.789695.n4.nabble.com/file/n4656723/memory_loss.png 

I'm simply trying to extract all of the lines of data that have a single "_"
in the first column and place them into a dataset called wells2.  If that
were to work, I then want to extract the lines of data that have two "_" and
put them into a separate dataset, say wells3.  Is there a better way to do
this than the one-liner above?
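
One possible alternative, offered here only as an untested sketch (the column
name comes from the read.table call above): count the underscores directly
rather than splitting every value, which avoids building a list of pieces for
each row.

# hedged, untested sketch: count "_" per row instead of splitting
n_us <- nchar(as.character(wells$name)) - nchar(gsub("_", "", as.character(wells$name)))
wells2 <- wells[n_us == 1, ]   # rows whose first column contains exactly one "_"
wells3 <- wells[n_us == 2, ]   # rows whose first column contains exactly two "_"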

-Eric



--
View this message in context: 
http://r.789695.n4.nabble.com/a-function-more-appropriate-than-sapply-tp4656723.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] resizing data

2013-01-25 Thread emorway
Undoubtedly this question has been asked before; I just can't seem to find
the combination of search terms to produce it.  I'm trying to resize a
dataset that is pulled into R using read.table.  However, I think the same
problem can be produced using matrix:

x<-matrix(1:64,8)
x
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,]    1    9   17   25   33   41   49   57
#[2,]    2   10   18   26   34   42   50   58
#[3,]    3   11   19   27   35   43   51   59
#[4,]    4   12   20   28   36   44   52   60
#[5,]    5   13   21   29   37   45   53   61
#[6,]    6   14   22   30   38   46   54   62
#[7,]    7   15   23   31   39   47   55   63
#[8,]    8   16   24   32   40   48   56   64

The true order of data in the larger problem I'm working with is actually
transposed, like so:
x<-t(x)
x
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,]    1    2    3    4    5    6    7    8
#[2,]    9   10   11   12   13   14   15   16
#[3,]   17   18   19   20   21   22   23   24
#[4,]   25   26   27   28   29   30   31   32
#[5,]   33   34   35   36   37   38   39   40
#[6,]   41   42   43   44   45   46   47   48
#[7,]   49   50   51   52   53   54   55   56
#[8,]   57   58   59   60   61   62   63   64

I'm trying to resize the data (in this example, a matrix) to, say, a 16 x 4
matrix while preserving the consecutive order of the individual elements in
a left-to-right, top-to-bottom fashion.  The example below is wrong because
the first row should be 1 2 3 4; how can I make this happen?  It would
also be nice to make a 4 x 16 matrix where the first row contains the values
of x[1,1:8] followed by x[2,1:8].  I'm guessing there is a one-liner of R code
for this type of thing so I don't have to resort to nested for loops?

y<-matrix(x,nrow=16,ncol=4)
y
#      [,1] [,2] [,3] [,4]
# [1,]    1    3    5    7
# [2,]    9   11   13   15
# [3,]   17   19   21   23
# [4,]   25   27   29   31
# [5,]   33   35   37   39
# [6,]   41   43   45   47
# [7,]   49   51   53   55
# [8,]   57   59   61   63
# [9,]    2    4    6    8
#[10,]   10   12   14   16
#[11,]   18   20   22   24
#[12,]   26   28   30   32
#[13,]   34   36   38   40
#[14,]   42   44   46   48
#[15,]   50   52   54   56
#[16,]   58   60   62   64
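
A possible fix, added here as an untested sketch: matrix() fills column-wise by
default, so filling by row from the row-wise (transposed) vector keeps the
reading order 1, 2, 3, ... intact.

y16 <- matrix(as.vector(t(x)), nrow = 16, ncol = 4, byrow = TRUE)
y16[1, ]   # 1 2 3 4
y4  <- matrix(as.vector(t(x)), nrow = 4, ncol = 16, byrow = TRUE)
y4[1, ]    # 1 2 ... 16, i.e. x[1,1:8] followed by x[2,1:8]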




--
View this message in context: 
http://r.789695.n4.nabble.com/resizing-data-tp4656653.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] resizing data

2013-01-25 Thread emorway
I played around with your example on the smaller dataset, and it seemed like
it was doing what I wanted.  However, applying it to the larger problem, I
didn't  get a resized 2D dataset that preserved the order I was hoping for.
Hopefully the following illustrates the larger problem:

x<-matrix(0,nrow=29328,ncol=7)

# Now set the 70,000th element to 1 to find out where it ends up?
# Iterating on columns first, then rows, the 70,000th element is
# at index [1,7]

x[1,7]<-1
y<-t(matrix(x,nrow=546))
dim(y)
# [1] 376 546

# Does 1 appear at index [129,112] as I expect?
# (128 complete rows x 546 cols = 69888 + 112 = 70,000; thus row 129, col 112)
y[129,112]
# [1] 0

# No, so where is it? 
grep(1,y)
# [1] 123293
 
Where is that?
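
Two hedged pointers that may help here (untested against the full dataset):
grep() on a matrix works on the column-major vector, and arrayInd() converts
that linear position back to a row/column pair; a reshape that preserves
reading order (left-to-right, top-to-bottom) fills by row from the transposed
vector, as in the smaller example above.

arrayInd(grep(1, y), dim(y))   # row and column of the stray 1 in the 376 x 546 matrix
# y2 <- matrix(as.vector(t(x)), nrow = 376, ncol = 546, byrow = TRUE)  # order-preserving reshape (sketch)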



--
View this message in context: 
http://r.789695.n4.nabble.com/resizing-data-tp4656653p4656662.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] setting off-diagonals to zero

2013-01-23 Thread emorway
The following 1460 x 1460 matrix can be thought of as 16 distinct 365 x 365
matrices.  I'm trying to set off-diagonal terms in the 16 sub-matrices with
indices more than +/- 5 (days) from each other to zero using some for loops.
This works well for some, but not all, of the for loops.  The R code I'm
using follows.  For some reason the third loop below zeroes out everything
in the sub-quadrant it is targeting, which is readily observable when
viewing the matrix (View(MAT)).

library(Matrix)
MAT<-matrix(rnorm(1460*1460,mean=0,sd=1),nrow = 1460, ncol = 1460)

#works great
for (i in 1:365) {
  SEQ <- (i - 5):(i + 5)
  SEQ <- SEQ[SEQ > 0 & SEQ < 366]
  MAT[(1:365)[-SEQ], i] <- 0
}

#works great
for (i in 1:365) {
  SEQ <- (i - 5):(i + 5)
  SEQ <- SEQ[SEQ > 0 & SEQ < 366]
  MAT[(1:365)[-SEQ], i + 365] <- 0
}

#zeroes out everything, including main-diagonal and near-main-diagonal terms???
for (i in 731:1095) {
  SEQ <- (i - 5):(i + 5)
  SEQ <- SEQ[SEQ > 730 & SEQ < 1096]
  MAT[(731:1095)[-SEQ], i + 365] <- 0
}

View(MAT)

I'm not sure why the third FOR loop above is not leaving the main-diagonal
and near-main-diagonal terms alone?
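
A hedged guess at the cause: in the third loop SEQ holds absolute indices
(731, 732, ...), but (731:1095) has only 365 elements, and negative subscripts
beyond the length of a vector are silently ignored, so nothing is dropped and
the whole block is zeroed.  For example:

(1:5)[-10]   # returns 1 2 3 4 5 -- an out-of-range negative index removes nothing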



--
View this message in context: 
http://r.789695.n4.nabble.com/setting-off-diagonals-to-zero-tp4656407.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] setting off-diagonals to zero

2013-01-23 Thread emorway
I'm not following.  Printing SEQ to the screen at the intermediate steps
using the following modified R code suggests that 'i' is fine and is not
getting reset to 1 as you suggest?  My understanding, or rather my desired
output if someone else is able to weigh in, is that the values in the
second line of output (731 732 733 etc.) should not be appearing in the third
line of output.  The third line of output should be missing 731 through 736.
Any suggestions on how to modify the R code are certainly welcome. 
Suggested revisions will be substituted back into the third FOR loop in my
original post on this thread to prevent the main- and near-main-diagonal
terms from being set equal to zero.

for (i in 731:732) {
  SEQ <- (i - 5):(i + 5)
  print(SEQ)
  SEQ <- SEQ[SEQ > 730 & SEQ < 1096]
  print(SEQ)
  print((731:1095)[-SEQ])
}

# [1] 726 727 728 729 730 731 732 733 734 735 736
# [1] 731 732 733 734 735 736
# [1]  731  732  733  734  735  736  737  738  739  740  741  742  743  744
#  745  746  747  748  749  750  751  752  753  754  755  756...
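
A hedged revision (untested) of the third loop from the original post: convert
SEQ from absolute indices to positions within 731:1095 before negating.

for (i in 731:1095) {
  SEQ <- (i - 5):(i + 5)
  SEQ <- SEQ[SEQ > 730 & SEQ < 1096]
  MAT[(731:1095)[-(SEQ - 730)], i + 365] <- 0   # SEQ - 730 gives positions 1..365
}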
 



--
View this message in context: 
http://r.789695.n4.nabble.com/setting-off-diagonals-to-zero-tp4656407p4656461.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Filling a covariance matrix

2012-10-31 Thread emorway
Hi Joshua,

The code you put together is very helpful.  I have run into a small issue,
however, and I am surprised you weren't getting a similar error message when
you tried running the nested for loops.  As an example, I've pared down the
for loop a bit to highlight the error message I'm getting (note also that I
had to modify xs[[i]]$Q to xs[[i]]$xQ):

for(i in seq_along(xs)) {
  for(j in seq_along(xs)) {
    xcov[paste0("Q",i),paste0("Q",j)] <- xcov[paste0("Q",j),paste0("Q",i)] <-
      cov(xs[[i]]$xQ, xs[[j]]$xQ, use="complete.obs")
  }
}

#Error in cov(xs[[i]]$xQ, xs[[j]]$xQ, use = "complete.obs") :
#  incompatible dimensions

#Some investigation...
i
# 1
j
# 95

length(xs[[i]]$xQ)
# 96
length(xs[[j]]$xQ)
# 92

xs[[j]]$xQ[1:10]
#xQ
#2004-04-04 00:00:00 674
#2004-04-04 00:15:00 669
#2004-04-04 00:30:00 664
#2004-04-04 00:45:00 664
#2004-04-04 01:00:00 669
#2004-04-04 01:15:00 659
#2004-04-04 01:30:00 674
#2004-04-04 01:45:00 669
#2004-04-04 03:00:00 664
#2004-04-04 03:15:00 674

xs[[i]]$xQ[1:10]
# xQ
#2004-01-01 00:00:00 0.43
#2004-01-01 00:15:00 0.43
#2004-01-01 00:30:00 0.43
#2004-01-01 00:45:00 0.43
#2004-01-01 01:00:00 0.57
#2004-01-01 01:15:00 0.57
#2004-01-01 01:30:00 0.57
#2004-01-01 01:45:00 0.43
#2004-01-01 02:00:00 0.43
#2004-01-01 02:15:00 0.57

I suppose the reason use="complete.obs", use="pair", or
use="pairwise.complete.obs" won't work is because the date stamp is
different despite the fact that there are similar time stamps during the
respective days.  Thus, I'm wondering if there is a way to direct the cov
function to make the calculation using only the time stamp (and not the date
stamp) to determine pairs?
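
One hedged idea for pairing on the time of day only (an untested sketch;
xs[[i]]$xQ is the object from the thread, while the helper name cov_by_tod is
made up here): build a time-of-day key for each series, merge on that key, and
compute the covariance on the merged pairs.

library(xts)   # for index() and coredata()
cov_by_tod <- function(a, b) {
  da <- data.frame(tod = format(index(a), "%H:%M:%S"), v1 = as.numeric(coredata(a)))
  db <- data.frame(tod = format(index(b), "%H:%M:%S"), v2 = as.numeric(coredata(b)))
  m  <- merge(da, db, by = "tod")          # keep only time stamps present in both days
  cov(m$v1, m$v2, use = "complete.obs")
}
# e.g. xcov[...] <- cov_by_tod(xs[[i]]$xQ, xs[[j]]$xQ)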

Thanks, Eric



--
View this message in context: 
http://r.789695.n4.nabble.com/Filling-a-covariance-matrix-tp4647170p4648015.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Filling a covariance matrix

2012-10-23 Thread emorway
useRs – 
I’m working with the attached data that contains one year’s worth of
sub-daily observations of flow (“Q”) and specific conductance (“SC”, a
surrogate for concentration) at a point in a stream.  The R code posted
below shows the extent of data processing thus far.  My goal is to create a
covariance matrix that takes on the following form:
  Q1 Q2 … Q365  SC1 SC2 … SC365
Q1
Q2
…
Q365
SC1
SC2
...
SC365

Where the covariance between Q1 (flow on day 1) & Q2 (flow on day 2) is
determined using the sub-daily data contained in the variable ‘x’ of the R
code below.  Similarly, the covariance between Q1 & SC1 (specific
conductance on day 1) would be made using the sub-daily observations of flow
and specific conductance.  Covariance between observations that are more
than 5 days distant from one another is likely meaningless.  Thus, the
covariance matrix should reflect this limitation with zeros.  For example,
the covariance between Q1 & Q6, or between Q1 & SC6, or between SC359
(specific conductance on day 359) & SC365 (specific conductance on day 365)
would be zero as these observations are more than 5 days apart.  Here is the
R code that reads the attached files containing Q and SC and puts the
processed data into ‘x’:

07130500_BelowJM_q_2004.txt
http://r.789695.n4.nabble.com/file/n4647170/07130500_BelowJM_q_2004.txt  
07130500_BelowJM_2004.txt
http://r.789695.n4.nabble.com/file/n4647170/07130500_BelowJM_2004.txt  

library(xts)
Q_subDaily<-read.table("C:/temp/07130500_BelowJM_q_2004.rdb",
    col.names=c('date','time','tz','Q','rating','unknown'),
    colClasses=c("character","character","character","numeric","character","character"))
SC_subDaily<-read.table("C:/temp/07130500_BelowJM_2004.rdb",
    col.names=c('date','time','tz','SC','rating','unknown'),
    colClasses=c("character","character","character","numeric","character","character"))

Q_subDaily$datetime.str <- paste(Q_subDaily$date, Q_subDaily$time)
SC_subDaily$datetime.str <- paste(SC_subDaily$date, SC_subDaily$time)

fmt <- "%Y%m%d %H%M%S"
xQ <- xts(as.matrix(Q_subDaily["Q"]), as.POSIXct(Q_subDaily$datetime.str,
    format=fmt))
xSC <- xts(as.matrix(SC_subDaily["SC"]),
    as.POSIXct(SC_subDaily$datetime.str, format=fmt))

x <- merge(xQ,xSC)

And here’s where I’m stuck: I’m not sure how to create the covariance matrix
I’ve described above.  I would appreciate and greatly benefit from the sort
of help often found in the useR community.
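
A hedged sketch for the ±5-day zeroing step, assuming a 730 x 730 matrix
'xcov' laid out as Q1..Q365 followed by SC1..SC365 (that layout is an
assumption, not something fixed by the post): once the matrix is filled, the
banding can be applied in one vectorized statement.

day_of <- rep(1:365, times = 2)                  # the day each row/column refers to
xcov[abs(outer(day_of, day_of, "-")) > 5] <- 0   # zero entries more than 5 days apart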




--
View this message in context: 
http://r.789695.n4.nabble.com/Filling-a-covariance-matrix-tp4647170.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] calculate within-day correlations

2012-09-15 Thread emorway
Hi Joshua, 

I was attempting to work with your code on the larger datasets, which I have
to read in with 'read.table', but I've apparently missed something.   When I
attempt to massage the data.frame a bit in the code below, as.POSIXct drops
the time component which then precludes use of xts.  I think you'll see what
I mean (the rdb file is a text file containing the data, which is attached
to the post):

library(xts)
Q_hourly<-read.table("C:/temp/07130500_BelowJM_q.rdb",skip=59,
    col.names=c('date','time','tz','Q','rating','unknown'),
    colClasses=c("character","character","character","numeric","character","character"))

notice that

Q_hourly[1,]
#  date   time  tz   Q rating unknown
#1 19981001 00 MDT 326  3   A
Q_hourly[2,]
 # date   time  tz   Q rating unknown
#2 19981001 001500 MDT 326  3   A

and

paste(strptime(Q_hourly[1,1],"%Y%m%d"), " ",
      format(strptime(Q_hourly[1,2],"%H%M%S"),format="%H:%M:%S"),sep='')

#[1] "1998-10-01 00:00:00"

but for some reason, the time stamp is dropped in the following, which
breaks the call to xts (I think)

as.POSIXct(paste(strptime(Q_hourly[1,1],"%Y%m%d"), " ",
    format(strptime(Q_hourly[1,2],"%H%M%S"),format="%H:%M:%S"),sep=''),
    format="%Y-%m-%d %H:%M:%S",tz="")

#[1] "1998-10-01 PDT"

The Q_use data.frame I'm building here should come out exactly the same as in
my original post (above; the attached rdb file is
http://r.789695.n4.nabble.com/file/n4643206/07130500_BelowJM_q.rdb
07130500_BelowJM_q.rdb), but I can't seem to force the preservation of the
time stamp even though I've explicitly stated the format I want stored
(...format="%Y-%m-%d %H:%M:%S").  Any ideas?

xQ <- xts(Q_use["Q"], Q_use$date)
#Error in `[.data.frame`(x, indx) : undefined columns selected
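
A hedged check, reusing the value from the output above: the clock time is not
actually dropped by as.POSIXct(); printing a POSIXct that falls exactly at
midnight simply omits the "00:00:00".

dt <- as.POSIXct("1998-10-01 00:00:00", format = "%Y-%m-%d %H:%M:%S")
dt                                 # prints "1998-10-01 <tz>" because the time is midnight
format(dt, "%Y-%m-%d %H:%M:%S")    # "1998-10-01 00:00:00" -- the component is still stored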




--
View this message in context: 
http://r.789695.n4.nabble.com/calculate-within-day-correlations-tp4643091p4643206.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] calculate within-day correlations

2012-09-13 Thread emorway
useRs, 

Here is some R-ready data for my question to follow.  Of course, this data is
a small snippet from a much larger dataset that is about a decade long.

Q<-read.table(textConnection("2002   3   28   15   77.38815
2002   3   28   30   77.09505
2002   3   28   45   76.80196
2002   3   28   60   76.50887
2002   3   28   75   76.50887
2002   3   28   90   76.50887
2002   3   28   105   76.50887
2002   3   28   120   76.50887
2002   3   28   135   76.80196
2002   3   28   150   77.09506
2002   3   28   165   77.38815
2002   3   28   180   77.68125
2002   3   28   195   77.68125
2002   3   28   210   77.68125
2002   3   28   225   77.68125
2002   3   28   240   77.68125
2002   3   28   255   77.38815
2002   3   28   270   77.09505
2002   3   28   285   76.80196
2002   3   28   300   76.50887
2002   3   28   315   76.80196
2002   3   28   330   77.09506
2002   3   28   345   77.38815
2002   3   28   360   77.68125
2002   3   28   375   77.68125
2002   3   28   390   77.68125
2002   3   28   405   77.68125
2002   3   28   420   77.68125
2002   3   28   435   77.68125
2002   3   28   450   77.68125
2002   3   28   465   77.68125
2002   3   28   480   77.68125
2002   3   28   495   77.97691
2002   3   28   510   78.27258
2002   3   28   525   78.56824
2002   3   28   540   78.86389
2002   3   28   555   78.86389
2002   3   28   570   78.86389
2002   3   28   585   78.86389
2002   3   28   600   78.86389
2002   3   28   615   78.86389
2002   3   28   630   78.86389
2002   3   28   645   78.86389
2002   3   28   660   78.86389
2002   3   28   675   78.86389
2002   3   28   690   78.86389
2002   3   28   705   78.86389
2002   3   28   720   78.86389
2002   3   28   735   79.16212
2002   3   28   750   79.46034
2002   3   28   765   79.75856
2002   3   28   780   80.05679
2002   3   28   795   79.75856
2002   3   28   810   79.46033
2002   3   28   825   79.16211
2002   3   28   840   78.86389
2002   3   28   855   78.56823
2002   3   28   870   78.27257
2002   3   28   885   77.97691
2002   3   28   900   77.68125
2002   3   28   915   77.68125
2002   3   28   930   77.68125
2002   3   28   945   77.68125
2002   3   28   960   77.68125
2002   3   28   975   77.68125
2002   3   28   990   77.29046
2002   3   28   1005   76.89966
2002   3   28   1020   76.50887
2002   3   28   1035   75.93033
2002   3   28   1050   75.35179
2002   3   28   1065   74.77325
2002   3   28   1080   74.19473
2002   3   28   1095   73.90929
2002   3   28   1110   73.62385
2002   3   28   1125   73.33841
2002   3   28   1140   73.05298
2002   3   28   1155   73.33842
2002   3   28   1170   73.62386
2002   3   28   1185   73.90929
2002   3   28   1200   74.19473
2002   3   28   1215   74.19473
2002   3   28   1230   74.19473
2002   3   28   1245   74.19473
2002   3   28   1260   74.19473
2002   3   28   1275   74.48272
2002   3   28   1290   74.77071
2002   3   28   1305   75.05871
2002   3   28   1320   75.3467
2002   3   28   1335   75.63725
2002   3   28   1350   75.92779
2002   3   28   1365   76.21832
2002   3   28   1380   76.50887
2002   3   28   1395   76.80196
2002   3   28   1410   77.09506
2002   3   28   1425   77.38815
2002   3   28   1440   77.68125
2002   3   29   15   77.38815
2002   3   29   30   77.09505
2002   3   29   45   76.80196
2002   3   29   60   76.50887
2002   3   29   75   76.21832
2002   3   29   90   75.92778
2002   3   29   105   75.63724
2002   3   29   120   75.3467
2002   3   29   135   75.63725
2002   3   29   150   75.92779
2002   3   29   165   76.21832
2002   3   29   180   76.50887
2002   3   29   195   76.21832
2002   3   29   210   75.92778
2002   3   29   225   75.63724
2002   3   29   240   75.3467
2002   3   29   255   75.3467
2002   3   29   270   75.3467
2002   3   29   285   75.3467
2002   3   29   300   75.3467
2002   3   29   315   75.3467
2002   3   29   330   75.3467
2002   3   29   345   75.3467
2002   3   29   360   75.3467
2002   3   29   375   75.63725
2002   3   29   390   75.92779
2002   3   29   405   76.21832
2002   3   29   420   76.50887
2002   3   29   435   76.50887
2002   3   29   450   76.50887
2002   3   29   465   76.50887
2002   3   29   480   76.50887
2002   3   29   495   76.80196
2002   3   29   510   77.09506
2002   3   29   525   77.38815
2002   3   29   540   77.68125
2002   3   29   555   77.97691
2002   3   29   570   78.27258
2002   3   29   585   78.56824
2002   3   29   600   78.86389
2002   3   29   615   79.16212
2002   3   29   630   79.46034
2002   3   29   645   79.75856
2002   3   29   660   80.05679
2002   3   29   675   80.05679
2002   3   29   690   80.05679
2002   3   29   705   80.05679
2002   3   29   720   80.05679
2002   3   29   735   80.35759
2002   3   29   750   80.65839
2002   3   29   765   80.9592
2002   3   29   780   81.26001
2002   3   29   795   81.26001
2002   3   29   810   81.26001
2002   3   29   825   81.26001
2002   3   29   840   81.26001
2002   3   29   855   81.26001
2002   3   29   870   81.26001
2002   3   29   885   81.26001

Re: [R] Problem with converting character vector to time

2012-06-23 Thread emorway
It's not entirely clear where you're trying to go with your problem, but one
'solution', depending on your destination, could be:

time_obj<-format(as.POSIXct(test,format="%H:%M:%S"),format="%H:%M:%S")
> time_obj
[1] "00:49:19"
> class(time_obj)
[1] "character"

 But this is a fairly unsatisfactory solution since it simply undoes the
application of as.POSIXct
 
Perhaps you can share more of your problem so folks responding know where
you're trying to get to.
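
A hedged alternative, if a genuine time-of-day class is wanted rather than a
character string (this assumes the chron package is acceptable; it is not part
of the thread):

library(chron)
time_obj <- times("00:49:19")
time_obj           # 00:49:19
class(time_obj)    # "times"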

--
View this message in context: 
http://r.789695.n4.nabble.com/Problem-with-converting-character-vector-to-time-tp4634283p4634293.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting values from txt file that follow user-supplied quote

2012-06-08 Thread emorway
I'll summarize the results in terms of total run time for the suggestions
that have been made as well as post the code for those that come across this
post in the future.  First the results (the code for which is provided
second):

What I tried to do using suggestions from Bert and Dan:
t1
#   user  system elapsed 
# 208.211.68  210.34

Gabor's suggested code:
t2
#   user  system elapsed 
#  51.120.63   51.75 

Rui's suggested code:
t3a  #(Get the number of lines)
#   user  system elapsed 
#  45.13   11.08   56.23
t3b  #(now perform the function)
#   user  system elapsed 
#  50.590.55   51.16

So in summary it appears that Gabor's and Rui's code are quite similar (in
terms of runtime) if the number of lines in the file is known a priori
(e.g. t2 is roughly equal to t3b).  It would seem Gabor's code is a
little more robust since it doesn't require the total number of lines in the
file to be supplied.  And here is the code used to get these times (note that
the file I used was the 1 GB text file, not the reduced version attached to
the top post of this thread):

#
#modified attempt
#

library(gsubfn)
library(tcltk2)
p<-"[-0-9]\\S+"
pd<-numeric()
txt_con<-file(description="D:/MCR_BeoPEST - Copy/MCR.out",open="r")
t1<-system.time(
while (length(txt_line<-readLines(txt_con,n=1))){
  if (length(grep("DISCREPANCY = ",txt_line))) {
    pd<-c(pd,as.numeric(strapplyc(txt_line, p)[[1]]))
  }
})
close(txt_con)
t1
#   user  system elapsed 
# 208.211.68  210.34


#
#Suggested by G. Grothendieck
#

g<-function(txt_con, string, from, to, ...) {
    L <- readLines(txt_con)
    matched <- grep(string, L, value = TRUE, ...)
    as.numeric(substring(matched, from, to))
}
txt_con<-file(description="D:/MCR_BeoPEST - Copy/MCR.out",open="r")
t2<-system.time(
  edm<-g(txt_con, "PERCENT DISCREPANCY = ", 70, 78, fixed = TRUE)
)
close(txt_con)
t2
#   user  system elapsed 
#  51.120.63   51.75 

#-
#Suggested by Rui Barradas
#-

library(R.utils)
t3a<-system.time(num_lines<-countLines("D:/MCR_BeoPEST - Copy/MCR.out"))
t3a
#   user  system elapsed 
#  45.13   11.08   56.23


fun <- function(con, pattern, nlines, n=5000L){
    if(is.character(con)){
        con <- file(con, open="rt")
        on.exit(close(con))
    }
    passes <- nlines %/% n
    remaining <- nlines %% n
    res <- NULL
    for(i in seq_len(passes)){
        txt <- readLines(con, n=n)
        res <- c(res, as.numeric(substr(txt[grepl(pattern, txt)], 70, 78)))
    }
    if(remaining){
        txt <- readLines(con, n=remaining)
        res <- c(res, as.numeric(substr(txt[grepl(pattern, txt)], 70, 78)))
    }
    res
}

txt_con<-file(description="D:/MCR_BeoPEST - Copy/MCR.out",open="r")
pat<-"PERCENT DISCREPANCY ="
num_lines <- 14405247L
t3b <- system.time(pd2 <- fun(txt_con, pat, num_lines, 10L))
close(txt_con)
t3b
#   user  system elapsed 
#  50.590.55   51.16



--
View this message in context: 
http://r.789695.n4.nabble.com/extracting-values-from-txt-file-that-follow-user-supplied-quote-tp4632558p4632810.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] extracting values from txt with regular expression

2012-06-07 Thread emorway
Thanks for your suggestions.  Bert, in your response you raised my awareness
to regular expressions.  Are regular expressions the same across various
languages?  Consider the following line of text:

txt_line<-" PERCENT DISCREPANCY =   0.01 PERCENT DISCREPANCY =                 -0.05"

It seems Python uses the following line of code to extract the two values in
txt_line and store them in a variable called 'v':

v = re.findall(r"[+-]? *(?:\d+(?:\.\d*)|\.\d+)(?:[eE][+-]?\d+)?", line)
#v[0]  0.01
#v[1]  -0.05

I tried something similar in R (but it didn't work) by using the same
regular expression, but got an error:

edm<-grep("[+-]? *(?:\d+(?:\.\d*)|\.\d+)(?:[eE][+-]?\d+)?",txt_line)
#Error: '\d' is an unrecognized escape in character string starting "[+-]? *(?:\d"

I'm not even sure which function in R most efficiently extracts the values
from txt_line.  Basically, I want to peel out the values and think I can
use the decimal point to construct the regular expression, but don't know
where to go from here?
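
A hedged sketch of one way forward: in R the backslashes must be doubled inside
a quoted string, and gregexpr() plus regmatches() extract (rather than merely
locate) the matches.

m <- gregexpr("[+-]? *(?:\\d+(?:\\.\\d*)|\\.\\d+)(?:[eE][+-]?\\d+)?", txt_line, perl = TRUE)
as.numeric(regmatches(txt_line, m)[[1]])
# [1]  0.01 -0.05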


--
View this message in context: 
http://r.789695.n4.nabble.com/extracting-values-from-txt-file-that-follow-user-supplied-quote-tp4632558p4632724.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting values from txt with regular expression

2012-06-07 Thread emorway
Hi Dan and Rui,  Thank you for the suggestions, both were very helpful. 
Rui's code was quite fast...there is one more thing I want to explore for my
own edification, but first I need some help fixing the code below, which is
a slight modification to Dan's suggestion.  It'll no doubt be tough to beat
the time Rui's code finished the task in, but I'm willing to try.  First, I
need to fix the following, which 'peels' the wrong bit of text from
txt_line.  Instead of extracting as it now does (shown below), can the
code be modified to extract the values 0.01 and -0.05, and store them in the
variable 'extracted'?

txt_line<-" PERCENT DISCREPANCY =   0.01 PERCENT DISCREPANCY =                 -0.05"
extracted <-
strsplit(gsub("[+-]?(?:\\d+(?:\\.\\d*)|\\.\\d+)(?:[eE][+-]?\\d+)?","\\1%",txt_line),"%")
 
extracted
#[1]  PERCENT DISCREPANCY =PERCENT DISCREPANCY = 
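
A hedged alternative that inverts the logic (pull the numbers out instead of
deleting them), using gsubfn::strapplyc() as in the 2012-06-08 summary post
above; treat it as an untested sketch:

library(gsubfn)
as.numeric(strapplyc(txt_line, "[+-]?\\d+\\.\\d+")[[1]])
# [1]  0.01 -0.05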




--
View this message in context: 
http://r.789695.n4.nabble.com/extracting-values-from-txt-file-that-follow-user-supplied-quote-tp4632558p4632753.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] extracting values from txt file that follow user-supplied quote

2012-06-06 Thread emorway
useRs- 

I'm attempting to scan a more than 1 GB text file and read and store the
values that follow a specific key phrase that is repeated multiple times
throughout the file.  A snippet of the text file I'm trying to read is
attached.  The text file is a dumping ground for various aspects of the
performance of the model that generates it.  Thus, the location of the
information I'm wanting to extract from the file is not in a fixed position
(i.e. it does not always appear in a predictable location, like line 1000,
or 2000, etc.).  Rather, the desired values always follow a specific phrase:
   PERCENT DISCREPANCY =

One approach I took was the following:

library(R.utils)

txt_con<-file(description="D:/MCR_BeoPEST - Copy/MCR.out",open="r")
# The above will need to be altered if one desires to test the code on the
# attached txt file, which will run much quicker
system.time(num_lines<-countLines("D:/MCR_BeoPEST - Copy/MCR.out"))
# elapsed time on the full 1 GB file was about 55 seconds on a 3.6 GHz Xeon
num_lines
#14405247

pd<-numeric()   # pd must exist before the loop can grow it
system.time(
for(i in 1:num_lines){
  txt_line<-readLines(txt_con,n=1)
  if (length(grep("PERCENT DISCREPANCY =",txt_line))) {
    pd<-c(pd,as.numeric(substr(txt_line,70,78)))
  }
}
)
#Time took about 5 minutes

The inefficiencies in this approach arise due to reading the file twice
(first to get num_lines, then to step through each line looking for the
desired text).  

Is there a way to speed this process up through the use of ?scan?  I
wasn't able to get anything working, but what I had in mind was to scan
through the more than 1 GB file and, when the key phrase (e.g. "   PERCENT
DISCREPANCY =") is encountered, read and store the next 13 characters
(which will include some white space) as a numeric value, then resume the
scan until the key phrase is encountered again, repeating until the
end-of-file marker is reached.  Is such an approach even possible or
is line-by-line the best bet?

http://r.789695.n4.nabble.com/file/n4632558/MCR.out MCR.out 



--
View this message in context: 
http://r.789695.n4.nabble.com/extracting-values-from-txt-file-that-follow-user-supplied-quote-tp4632558.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] obtaining a true/false vector with combination of strsplit, length, unlist,

2012-02-11 Thread emorway
Hi, 

A pared down version of the dataset I'm working with:

edm<-read.table(textConnection("WELLID X_GRID Y_GRID LAYER ROW COLUMN
SPECIES CALCULATED OBSERVED
w301_3  4428.   1389 2   6 18   1   3558   
6490.
w304_12 4836.   6627 2  27 20   1   3509   
3228.
02_10_120803.6125E+04  13875 1  56145   1   2774  
-999.0
02_10_120803.6125E+04  13875 1  56145   1   2774  
-999.0
02_10_120813.6375E+04  13875 1  56146   1   3493  
-999.0
02_10_120923.9125E+04  13875 1  56157   1   4736  
-999.0
w305_12 2962.   7326 2  30 12   1   4575   
5899."),header=T)
closeAllConnections() 

I'm having a hard time coming up with the R code that would produce a
TRUE/FALSE vector based on whether the first column of the data.frame
edm, when split on "_", has a length of 2 or 3.  To show what I mean going
row-by-row, I could do the following:

> length(strsplit(as.character(edm$WELLID),"_")[[1]])==3
[1] FALSE
> length(strsplit(as.character(edm$WELLID),"_")[[2]])==3
[1] FALSE
> length(strsplit(as.character(edm$WELLID),"_")[[3]])==3
[1] TRUE
> length(strsplit(as.character(edm$WELLID),"_")[[4]])==3
[1] TRUE
> length(strsplit(as.character(edm$WELLID),"_")[[5]])==3
[1] TRUE
> length(strsplit(as.character(edm$WELLID),"_")[[6]])==3
[1] TRUE
> length(strsplit(as.character(edm$WELLID),"_")[[7]])==3
[1] FALSE

I've fumbled around trying to come up with a line of R code that would
create a vector that looks like:  FALSE FALSE TRUE TRUE TRUE TRUE FALSE

The final goal is to use this vector to create two new data.frames, where,
for example, the first contains all the rows of edm in which the first
column has a length of 2 when split using a _ character.  The second
data.frame would contain all the rows in which the first column has a length
of 3 when split using a _ character.
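
A hedged one-liner along the lines asked for (untested against the pasted
snippet): sapply() over the strsplit() list gives the number of pieces per
row, and comparing that to 2 or 3 yields the logical vectors.

n_pieces <- sapply(strsplit(as.character(edm$WELLID), "_"), length)
n_pieces == 3                  # expected: FALSE FALSE TRUE TRUE TRUE TRUE FALSE (per the post)
edm2 <- edm[n_pieces == 2, ]   # WELLID splits into 2 pieces
edm3 <- edm[n_pieces == 3, ]   # WELLID splits into 3 pieces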

Thanks,
Eric

--
View this message in context: 
http://r.789695.n4.nabble.com/obtaining-a-true-false-vector-with-combination-of-strsplit-length-unlist-tp4380050p4380050.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] legend, lheight, and alignment

2011-12-01 Thread emorway
Hello,

A bit of fairly simple code, yet I don't seem to be able to manipulate it
quite as much as I would like:

1)  It would be nice if the objects appearing in the legend were aligned,
and by aligned I mean the boxes are centered over the lines.  Do I need to
adjust the use of NA in the code below to accomplish this?  Here's how it
appears on my machine:

http://r.789695.n4.nabble.com/file/n4129402/example.png 

2)  Because I'm using a call to 'expression' in the text contained in the
legend, it would be nice to space the lines of text a bit more.  My feeble
attempt was to increase the lheight parameter in the hope this would affect
the legend...to no avail.  Further, lheight cannot be added to the argument
list of legend.  I've been unable to track down another parameter applicable
to legend; suggestions are very much appreciated.

par(lheight=2)
plot(1,1,col="white")

legend("center", legend=c(expression(paste("Observed ",
       italic(bar(EC)[e]))),expression(paste("Simulated ",
       italic(bar(EC)[e]))),"test1","test2"),
       lty = c(NA,NA,1,1),
       col = c("black","black","red","blue"),
       density = c(NA,25,NA,NA),
       border=c("black","black",NA,NA),
       fill = c("grey","black",NA,NA),
       angle = c(NA,45,NA,NA),
       cex = 1.25, bty = "n", xpd = TRUE)
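
A hedged pointer (an assumption about what is wanted, not a tested fix for
this exact plot): legend() has its own 'y.intersp' argument for the vertical
spacing of entries, which may do what par(lheight) does not, e.g.

legend("center", legend=c(expression(paste("Observed ",italic(bar(EC)[e]))),
       expression(paste("Simulated ",italic(bar(EC)[e]))),"test1","test2"),
       lty = c(NA,NA,1,1), col = c("black","black","red","blue"),
       y.intersp = 1.5, cex = 1.25, bty = "n", xpd = TRUE)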


 sessionInfo()
R version 2.13.2 (2011-09-30)
Platform: x86_64-pc-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=English_United States.1252 
[2] LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C  
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base 

loaded via a namespace (and not attached):
[1] tools_2.13.2
 


--
View this message in context: 
http://r.789695.n4.nabble.com/legend-lheight-and-alignment-tp4129402p4129402.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] subplot strange behavoir

2011-10-21 Thread emorway
Hello Dr. Snow,

With regard to your response from earlier this month:


When I copy and paste your code I get what is expected, the 2 subplots line
up on the same y-value.  What version of R are you using, which version of
subplot? What platform?

I'm still troubled by the fact that layout and subplot (from TeachingDemos)
are not playing nicely together on my machine.  sessionInfo():

sessionInfo()
#R version 2.13.2 (2011-09-30)
#Platform: x86_64-pc-mingw32/x64 (64-bit)
#other attached packages:
#[1] TeachingDemos_2.7 

I'd really like to get this working on my machine as it seems to be working
on yours.  While I previously tried a simple example for the initial forum
post, I'm curious if the real plot I'm trying to make works on your machine.
Should you happen to have a spare moment and I'm not pushing my luck, I've
attached 4 small data files, 1 text file containing the R commands I'm
trying to run (including 'layout' and 'subplot' called
R_Commands_Plot_MT3D_Analytical_Comparison_For_Paper.txt), and the
incorrect tiff output I'm getting on my machine.  I've directed all paths in
the R code to c:/temp/  so everything should quickly work if files are
dropped there.  Should it work on your machine as we would expect, does
anything come to mind for how to fix it on my machine?

Very Respectfully,
Eric

http://r.789695.n4.nabble.com/file/n3926941/AnalyticalDissolvedSorbedConcAt20PoreVols.txt
AnalyticalDissolvedSorbedConcAt20PoreVols.txt 
http://r.789695.n4.nabble.com/file/n3926941/AnalyticalEffluentConcentration.txt
AnalyticalEffluentConcentration.txt 
http://r.789695.n4.nabble.com/file/n3926941/Conc_Breakthru_at_100cm.txt
Conc_Breakthru_at_100cm.txt 
http://r.789695.n4.nabble.com/file/n3926941/Conc_Profile_20T.txt
Conc_Profile_20T.txt 
http://r.789695.n4.nabble.com/file/n3926941/R_Commands_Plot_MT3D_Analytical_Comparison_For_Paper.txt
R_Commands_Plot_MT3D_Analytical_Comparison_For_Paper.txt 
http://r.789695.n4.nabble.com/file/n3926941/NonEquilibrium_ForPaper.tif
NonEquilibrium_ForPaper.tif 

--
View this message in context: 
http://r.789695.n4.nabble.com/subplot-strange-behavoir-tp3875917p3926941.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] contouring x y scatter data

2011-10-17 Thread emorway
Hello,

I'm almost positive R can do the following, I just haven't hit upon the
right package or search terms, however.  Here's what I'm after:  I've got
concentration output from two different models that I want to qualitatively
compare in a contour plot (or some variant of a contour plot).  The problem
as I see it is that the data is not gridded in the usual regular fashion,
and even if it were, the models are working with two different irregular
computational/numerical grids at which concentrations are calculated. 
Running the code below will produce a plot that will hopefully illuminate
what I'm attempting to describe.  At each location where 'O' appears, there
is a calculated concentration that I would like to contour.  I'd like to do
the same thing at each 'X' location and then compare, visually, how close
the contours are (hopefully close).  Both data.frames contain columns/fields
titled ts181 and ts1825 which contain the concentrations at time step
181 days and time step 1825 days.  Is it possible to contour x-y-value data
if it is not laid out on a regular grid?


X-c(0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,5!
 
00,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,0,100,200,300,400,500,600,700,800,900,1000!
 ,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400
,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000)

Z-c(50,49.95,49.85,49.75,49.65,49.55,49.45,49.35,49.25,49.15,49.05,48.95,48.85,48.75,48.65,48.55,48.45,48.35,48.25,48.15,48.05,47.95,47.85,47.75,47.65,47.55,47.45,47.35,47.25,47.15,47.05,46.95,46.85,46.75,46.65,46.55,46.45,46.35,46.25,46.15,46.1,49.25,49.2,49.1,49,48.9,48.8,48.7,48.6,48.5,48.4,48.3,48.2,48.1,48,47.9,47.8,47.7,47.6,47.5,47.4,47.3,47.2,47.1,47,46.9,46.8,46.7,46.6,46.5,46.4,46.3,46.2,46.1,46,45.9,45.8,45.7,45.6,45.5,45.4,45.35,48.2,48.15,48.05,47.95,47.85,47.75,47.65,47.55,47.45,47.35,47.25,47.15,47.05,46.95,46.85,46.75,46.65,46.55,46.45,46.35,46.25,46.15,46.05,45.95,45.85,45.75,45.65,45.55,45.45,45.35,45.25,45.15,45.05,44.95,44.85,44.75,44.65,44.55,44.45,44.35,44.3,47,46.95,46.85,46.75,46.65,46.55,46.45,46.35,46.25,46.15,46.05,45.95,45.85,45.75,45.65,45.55,45.45,45.35,45.25,45.15,45.05,44.95,44.85,44.75,44.65,44.55,44.45,44.35,44.25,44.15,44.05,43.95,43.85,43.75,43.65,43.55,43.45,43.35,43.25,43.15,43.1,45.5,45.45,45.35,45.25,45.15,45.05,44.95,44.85,44.75,44.6!
 

Re: [R] contouring x y scatter data

2011-10-17 Thread emorway
Hi Carlos, 

Thanks for the response.  The plot you suggested was not in line with what
I'm trying to produce in R.  The last plot found at:  

http://www.advsofteng.com/gallery_contour.html  

is more along the lines of what I'm looking for.  Notice the scatter points
(shown by x) are not laid out in a regular/gridded fashion, yet the
plotting software was still able to use some kind of interpolation and plot
contours.  All the contouring options I'm finding in R require the spatial
locations of data points to be in a 'regular' grid.  That is, data in each
row must have the same y coordinate and data in each column must have the
same x coordinate.  I'm seeking flexibility in the location of the points on
which the contours are based.
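
A hedged sketch of one route (not from the thread): interpolate the scattered
points onto a regular grid with akima::interp() and contour the result.  The
names X, Y and ts181 stand in for the coordinate and concentration vectors and
are assumptions here.

library(akima)
gridded <- interp(x = X, y = Y, z = ts181)   # bilinear interpolation onto a regular grid
contour(gridded$x, gridded$y, gridded$z, col = "blue")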

--
View this message in context: 
http://r.789695.n4.nabble.com/contouring-x-y-scatter-data-tp3912828p3913544.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] contouring x y scatter data

2011-10-17 Thread emorway
Thank you David, I'm moving forward again.  I was not aware of that website,
I was recently at 

http://www.r-project.org/

and clicked on the 'mailing list' link and didn't see it there...with 8k+
posts, maybe it's worth adding?

--
View this message in context: 
http://r.789695.n4.nabble.com/contouring-x-y-scatter-data-tp3912828p3913617.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Irregular 3d objects with rgl

2011-10-14 Thread emorway
Hello, 

While exploring if rgl is along the lines of what I need, I checked out
demo(rgl) and didn't find quite what I'm looking for, and am therefore
seeking additional help/suggestions.

The application is geared towards displaying a 3D rendering of a contaminant
plume in the subsurface with the following highlights:  Once the plume was
rendered as a 3D object, a pie-like wedge could be removed (or cut away)
exposing the higher concentrations within the plume as 'hotter' colors. 
About the closest example I could find is here: 

http://mclaneenv.com/graphicdownloads/plume.jpg 

Whereas this particular rendering shows a bullet-like object where 3/4 of
the object is removed, I would like to try and show something where 3/4 of
the object remains, and where the object has been cut away the colors would
show concentrations within the plume, just as in the example referenced
above.  It would seem most software capable of this type of thing is
proprietary (and perhaps for good reason if it is a difficult problem to
solve). 

I've put together a very simple 6x6x6 cube with non-zero values internal to
it representing the plume.  I'm wondering if an isosurface where conc = 0.01
can be rendered in 3D and then if a bite or wedge can be removed from the
3D object, exposing the higher concentrations inside as discussed above?
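
A hedged sketch using the misc3d package (not mentioned in the thread, and
untested here): contour3d() can draw the conc = 0.01 isosurface from a 6x6x6
array built from the vectors below; a wedge could then be approximated by
zeroing the cells to be cut away before contouring.

library(misc3d)
a <- array(conc, dim = c(6, 6, 6))        # x varies fastest, matching the vectors below
contour3d(a, level = 0.01, color = "steelblue", alpha = 0.6)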

x-c(1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6)

y-c(1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6)

z-c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6)

conc-c(0,0,0,0,0,0,0,0,0.1,0.1,0,0,0,0.1,1,1,0.1,0,0,0.1,0.5,1,0.1,0,0,0,0.2,0.2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.05,0.1,0,0,0,0.05,0.8,0.8,0.05,0,0,0.05,0.4,0.8,0.05,0,0,0,0.1,0.1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.05,0,0,0,0,0.6,0.6,0.02,0,0,0,0.2,0.5,0.02,0,0,0,0.05,0.05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.05,0.2,0,0,0,0,0,0.05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.02,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)

plume<-data.frame(cbind(x=x,y=y,z=z,conc=conc))

if it helps to view the concentrations in layer by layer tabular form:
Layer 1
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.10 0.10 0.00 0.00
0.00 0.10 1.00 0.50 0.20 0.00
0.00 0.10 1.00 1.00 0.20 0.00
0.00 0.00 0.10 0.10 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
Layer 2 
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.05 0.05 0.00 0.00
0.00 0.05 0.80 0.40 0.10 0.00
0.00 0.10 0.80 0.80 0.10 0.00
0.00 0.00 0.05 0.05 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
Layer 3 
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.60 0.20 0.05 0.00
0.00 0.05 0.60 0.50 0.05 0.00
0.00 0.00 0.02 0.02 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
Layer 4 
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.05 0.00 0.00 0.00
0.00 0.00 0.20 0.05 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
Layer 5 
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.02 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
Layer 6
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00




--
View this message in context: 
http://r.789695.n4.nabble.com/Irregular-3d-objects-with-rgl-tp3906573p3906573.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] subplot strange behavoir

2011-10-05 Thread emorway
Hello, 

Below is some example code that should reproduce an error I'm encountering
while trying to create a tiff plot with two subplots.  If I run just the
following bit of code through the R GUI the result is what I'd like to have
appear in the saved tiff image:

x<-seq(0:20)
y<-c(1,1,2,2,3,4,5,4,3,6,7,1,1,2,2,3,4,5,4,3,6)
plot(x,y,type="l",las=1,ylim=c(0,12))
subplot(edm.sub(x[seq(1:5)],y[seq(1:5)]),x=4,y=9,size=c(1,1.5))
subplot(edm.sub(x[seq(15,20,by=1)],y[seq(15,20,by=1)]),x=17,y=9,size=c(1,1.5))

However, if expanding on this code with:

edm.sub<-function(x,y){plot(x,y,col="red",frame.plot=F,
       las=1,xaxs="i",yaxs="i",type="b",
       ylim=c(0,6),xlab="",ylab="")}

png("c:/temp/lookat.tif",res=120,height=600,width=1200)
layout(matrix(c(1,2),2,2,byrow=TRUE),c(1.5,2.5),respect=TRUE)
plot(seq(1:10),seq(1:10),type="l",las=1,col="blue")
plot(x,y,type="l",las=1,ylim=c(0,12))
subplot(edm.sub(x[seq(1:5)],y[seq(1:5)]),x=4,y=9,size=c(1,1.5))
subplot(edm.sub(x[seq(15,20,by=1)],y[seq(15,20,by=1)]),x=17,y=9,size=c(1,1.5))
dev.off()

One will notice the second subplot is out of position (notice the
y-coordinate is the same for both subplots...y=9):
http://r.789695.n4.nabble.com/file/n3875917/lookat.png 

If I try to 'guess' a new y-coordinate for the second subplot, say y=10:

png("c:/temp/lookat.tif",res=120,height=600,width=1200)
layout(matrix(c(1,2),2,2,byrow=TRUE),c(1.5,2.5),respect=TRUE)
plot(seq(1:10),seq(1:10),type="l",las=1,col="blue")
plot(x,y,type="l",las=1,ylim=c(0,12))
subplot(edm.sub(x[seq(1:5)],y[seq(1:5)]),x=4,y=9,size=c(1,1.5))
subplot(edm.sub(x[seq(15,20,by=1)],y[seq(15,20,by=1)]),x=17,y=10,size=c(1,1.5))
dev.off()

R kicks back the following message
Error in plot.new() : plot region too large

Am I mis-using subplot?

Thanks, Eric

--
View this message in context: 
http://r.789695.n4.nabble.com/subplot-strange-behavoir-tp3875917p3875917.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] subplot strange behavoir

2011-10-05 Thread emorway
Hello Greg,

Session info is below.  Running Win7 64-bit.  I just upgraded my version of
R and tried rerunning the code and got the same odd result.  I, too, get the
expected result when I create the plot in the R GUI.  The problem crops up
only when I try to create the plot in png() or tiff().  Perhaps I need to
try par(new=T), or some other trick of the trade?  If you're unable to
recreate the problem, I suppose this is nearly a dead end?

sessionInfo()
R version 2.13.2 (2011-09-30)
Platform: x86_64-pc-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C 
 
[5] LC_TIME=English_United States.1252

attached base packages:
[1] splines   stats graphics  grDevices utils datasets  methods  
base 

other attached packages:
[1] TeachingDemos_2.7 Hmisc_3.8-3   survival_2.36-9  

loaded via a namespace (and not attached):
[1] cluster_1.14.0  grid_2.13.2 lattice_0.19-33 tools_2.13.2   


--
View this message in context: 
http://r.789695.n4.nabble.com/subplot-strange-behavoir-tp3875917p3876178.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] subplot strange behavoir

2011-10-05 Thread emorway
I tried this trick, and clearly things are not going in the right direction.
It seems 'layout' is at the root of my frustration, so I can make two plots
and merge them in Adobe Illustrator (or something similar).

png("c:/temp/lookat.png",res=120,height=600,width=1200)
layout(matrix(c(1,2),2,2,byrow=TRUE),c(1.5,2.5),respect=TRUE)
plot(seq(1:10),seq(1:10),type="l",las=1,col="blue")
plot(x,y,type="l",las=1,ylim=c(0,12))
subplot(edm.sub(x[seq(1:5)],y[seq(1:5)]),x=4,y=9,size=c(1,1.5))
par(new=T)
plot(x,y,las=1,ylim=c(0,12),type="n",ann=F,las=1,col="blue")
subplot(edm.sub(x[seq(15,20,by=1)],y[seq(15,20,by=1)]),x=17,y=9,size=c(1,1.5))
dev.off()

http://r.789695.n4.nabble.com/file/n3876285/lookat.png 


--
View this message in context: 
http://r.789695.n4.nabble.com/subplot-strange-behavoir-tp3875917p3876285.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] animation error

2011-06-14 Thread emorway
Hello, 

While running the following code I encountered a bit of a funny error. 
First the code:

library(animation)
ani.options(convert = shQuote('C:/Program Files/ImageMagick-6.7.0-Q16/convert.exe'))

plot.ani<-function() {
  for (ts in 1:730){
    dat<-read.table(paste("c:/temp/ConcProfile_",ts,".txt",sep=""),
                    header=T,sep='\t')

    #tiff(paste("c:/temp/Plot_ConcProfile_sp",ts,".tif",sep=""),height=800,width=600,res=130)

    plot(dat$Conc[1:200],rev(dat$Depth[1:200]),type="l",xlab="Conc.",ylab="Depth",
         yaxt="n",col="blue",lwd=2,xlim=c(0,120))
    axis(side=2,at=seq(0,200,by=25),labels=rev(seq(0,200,by=25)),las=1)
    axis(side=2,at=seq(0,200,by=5),labels=F,tcl=-0.25)
    text(x=110,y=200,paste("SP ",ts,sep=""))
    ani.pause()
    #dev.off()
  }
}

saveMovie(plot.ani(),movie.name="concSlug.gif",img.name="ConcProfile",interval=0.15,outdir=getwd())

#Executing: 
#C:/Program Files/ImageMagick-6.7.0-Q16/convert.exe -loop 0
#-delay 15
#C:/Users/emorway/AppData/Local/Temp/2/RtmpwH5TwA/ConcProfile1.png
#C:/Users/emorway/AppData/Local/Temp/2/RtmpwH5TwA/ConcProfile2.png
...
#C:/Users/emorway/AppData/Local/Temp/2/RtmpwH5TwA/ConcProfile113.png
#C:/Users/emorway/AppData/Local/Temp/2/RtmpwH5TwA/ConcProfile114.png
#There seems to be an error in the conversion...

The code runs without issue when I dump individual tiff images to the temp
directory (that is, when I run only the for loop).  Is there a limit to the
number of frames that can be added?  If I shorten the for loop to 100, the
gif is created; if I try to run it all the way out to 730 days (as coded
above), it terminates with the rather non-descriptive error above.  Is there
an option I've overlooked?

Thanks...
Eric

--
View this message in context: 
http://r.789695.n4.nabble.com/animation-error-tp3598481p3598481.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help with filled.contour()

2011-04-02 Thread emorway
Michael, 

Although this is a rather old post I'm responding to, I recently came across
it and have a suggestion for getting rid of the legend.  Simply modify the
code associated with the function and stuff it into a new function,

edm<-function (x = seq(0, 1, length.out = nrow(z)), y = seq(0, 1,
    length.out = ncol(z)), z, xlim = range(x, finite = TRUE),
    ylim = range(y, finite = TRUE), zlim = range(z, finite = TRUE),
    levels = pretty(zlim, nlevels), nlevels = 20, color.palette = cm.colors,
    col = color.palette(length(levels) - 1), plot.title, plot.axes,
    key.title, key.axes, asp = NA, xaxs = "i", yaxs = "i", las = 1,
    axes = TRUE, frame.plot = axes, ...)
{
    if (missing(z)) {
        if (!missing(x)) {
            if (is.list(x)) {
                z <- x$z
                y <- x$y
                x <- x$x
            }
            else {
                z <- x
                x <- seq.int(0, 1, length.out = nrow(z))
            }
        }
        else stop("no 'z' matrix specified")
    }
    else if (is.list(x)) {
        y <- x$y
        x <- x$x
    }
    if (any(diff(x) <= 0) || any(diff(y) <= 0))
        stop("increasing 'x' and 'y' values expected")
    mar.orig <- (par.orig <- par(c("mar", "las", "mfrow")))$mar
    on.exit(par(par.orig))
    w <- (3 + mar.orig[2L]) * par("csi") * 2.54
    #layout(matrix(c(2, 1), ncol = 2L), widths = c(1, lcm(w)))
    par(las = las)
    mar <- mar.orig
    mar[4L] <- mar[2L]
    mar[2L] <- 1
    par(mar = mar)
    #plot.new()
    #plot.window(xlim = c(0, 1), ylim = range(levels), xaxs = "i",
    #    yaxs = "i")
    #rect(0, levels[-length(levels)], 1, levels[-1L], col = col)
    #if (missing(key.axes)) {
    #    if (axes)
    #        axis(4)
    #}
    #else key.axes
    #box()
    #if (!missing(key.title))
    #    key.title
    #mar <- mar.orig
    #mar[4L] <- 1
    #par(mar = mar)
    plot.new()
    plot.window(xlim, ylim, "", xaxs = xaxs, yaxs = yaxs, asp = asp)
    if (!is.matrix(z) || nrow(z) <= 1L || ncol(z) <= 1L)
        stop("no proper 'z' matrix specified")
    if (!is.double(z))
        storage.mode(z) <- "double"
    .Internal(filledcontour(as.double(x), as.double(y), z, as.double(levels),
        col = col))
    if (missing(plot.axes)) {
        if (axes) {
            title(main = "", xlab = "", ylab = "")
            Axis(x, side = 1)
            Axis(y, side = 2)
        }
    }
    else plot.axes
    if (frame.plot)
        box()
    if (missing(plot.title))
        title(...)
    else plot.title
    invisible()
}

Then call this new function,

edm(x, y, z, axes = F, frame.plot = F, asp = 1,
 col = palette(gray(seq(0, 0.9, len = 25))), nlevels = 25) 
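
For anyone reading this later on a newer R: a rough, untested equivalent that
avoids editing filled.contour() at all, assuming a version of R recent enough
to export graphics::.filled.contour() (R >= 2.15.0, if I recall correctly),
with x, y and z as in the call above:

plot.new()
plot.window(range(x), range(y), "", xaxs = "i", yaxs = "i", asp = 1)
levs <- pretty(range(z, finite = TRUE), 25)
.filled.contour(x, y, z, levels = levs,
                col = gray(seq(0, 0.9, len = length(levs) - 1)))
box()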

Eric

--
View this message in context: 
http://r.789695.n4.nabble.com/R-Help-with-filled-contour-tp815296p3422837.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plotting Finite Element and Finite Difference Results together

2011-03-18 Thread emorway
As a simplification of a much larger problem, I'm using the following simple
example to explore ways of getting the finite difference results to plot
over the same extents as the finite element solution (see figure).  I
haven't discovered (if it exists) a way of getting the finite difference data
to extrapolate; perhaps I'm using the wrong plotting tool?

FD.dat <- matrix(c(1,3.5,3.5,1,
                   1,2.5,2.5,1,
                   1,1.5,1.5,1,
                   1,1,1,1),
                 nrow=4,ncol=4,byrow=TRUE)

FE.dat <- matrix(c(1,1,4,1,1,
                   1,1,3.5,1,1,
                   1,1,2.5,1,1,
                   1,1,1.5,1,1,
                   1,1,1,1,1),
                 nrow=5,ncol=5,byrow=TRUE)

FD.x.coords <- c(0.5,1.5,2.5,3.5)
FE.x.coords <- c(0,1,2,3,4)
FD.y.coords <- FD.x.coords
FE.y.coords <- FE.x.coords

contour(FD.x.coords,FD.y.coords,FD.dat,xlim=c(0,4),ylim=c(0,4))
contour(FE.x.coords,FE.y.coords,FE.dat,xlim=c(0,4),ylim=c(0,4),add=T,col="red")
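
One workaround sketch I have considered (not a true extrapolation): pad the
finite-difference grid with copies of its edge values so its contours reach
the same 0-4 extent as the finite element results.

FD.x.pad <- c(0, FD.x.coords, 4)
FD.y.pad <- c(0, FD.y.coords, 4)
FD.pad   <- FD.dat[c(1, 1:4, 4), c(1, 1:4, 4)]   # repeat the first/last rows and columns
contour(FD.x.pad, FD.y.pad, FD.pad, xlim=c(0,4), ylim=c(0,4))
contour(FE.x.coords, FE.y.coords, FE.dat, xlim=c(0,4), ylim=c(0,4), add=T, col="red")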


http://r.789695.n4.nabble.com/file/n3388485/Slide1.png 

--
View this message in context: 
http://r.789695.n4.nabble.com/Plotting-Finite-Element-and-Finite-Difference-Results-together-tp3388485p3388485.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using 'contour' to compare concentration profiles on the same plot

2011-01-29 Thread emorway

Hello,

I've been using R with the data and code below to compare output from two
different solute transport codes; the red contours are from verified code,
while the blue contours come from some modified code I've been working on.
The goal is for the contours to match, and although there is currently a
slight discrepancy this would be expected.  The plot shows a pulse of
infiltrating water with an arbitrary concentration moving downward.  I'm
curious if there is a clever way to plot the labels so that they don't
interfere with one another?  Getting even more fancy, I'm also wondering if
spaces can be created and left empty in one set of contours (blue) to
account for values from the other contours (say red)?  In other words,
labels from the blue (or red) contours won't be drawn over the red's (or
blue's) contour line.

Thanks, Eric
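
The closest I have come so far is a sketch along these lines: let only one
code's contours carry labels and thin those with 'method'.  Here vs2d and mt3d
are the 20 x 40 matrices defined below, and xc and yc stand in for whatever
coordinate vectors match the matrix dimensions.

contour(xc, yc, vs2d, col = "red", method = "edge", labcex = 0.8)
contour(xc, yc, mt3d, col = "blue", add = TRUE, drawlabels = FALSE)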


vs2d <- matrix(c(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.14,0.15,0.15,0.16,0.15,0.15,0.14,0.14,0.14,0.14,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.13,0.029600,0.029800,0.03,0.030100,0.03,0.029800,0.029600,0.029500,0.029400,0.029400,0.04,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.71,0.156000,0.156000,0.156000,0.156000,0.156000,0.156000,0.156000,0.155000,0.155000,0.155000,0.19,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000108,0.289000,0.289000,0.289000,0.288000,0.289000,0.289000,0.289000,0.289000,0.289000,0.289000,0.29,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000109,0.436000,0.436000,0.436000,0.436000,0.436000,0.436000,0.436000,0.436000,0.436000,0.436000,0.29,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.90,0.593000,0.593000,0.593000,0.592000,0.593000,0.593000,0.593000,0.593000,0.593000,0.593000,0.24,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.61,0.742000,0.742000,0.741000,0.741000,0.741000,0.742000,0.742000,0.742000,0.742000,0.742000,0.17,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.36,0.862000,0.862000,0.862000,0.861000,0.862000,0.862000,0.862000,0.862000,0.862000,0.862000,0.10,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.16,0.942000,0.942000,0.942000,0.942000,0.942000,0.942000,0.942000,0.942000,0.942000,0.942000,0.04,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.05,0.983000,0.983000,0.983000,0.983000,0.983000,0.983000,0.983000,0.983000,0.983000,0.983000,0.01,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.01,0.997000,0.997000,0.997000,0.997000,0.997000,0.997000,0.997000,0.997000,0.997000,0.997000,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
  ,nrow=20,ncol=40,byrow=T)

mt3d <- matrix(c(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,

Re: [R] Fitting an Inverse Gamma Distribution

2011-01-14 Thread emorway

Hello David, 

Thank you for pointing me to the GeneralizedHyperbolic package.  I've been
playing around with hyperbFit with my data, but get:

Warning Messages:
In besselK(zeta, nu = 1) : value out of range in 'bessel_k' 10



Next, just to see if I could get something to plot I tried the following to
no avail:

hist(iniSal_US_forHist,breaks=seq(1.1,21,by=0.625),col="grey",freq=F,xlim=c(0,21))
curve(dgig(x,param=c(11,0,-4)),col="blue",from=0,to=20,add=T)


Could I trouble you for a bit more guidance?  Also, does the fitting
function deal with shifts?  In other words, if 500 values were sampled
from an inv. gamma and then had 4 added to them, would the fitting
function account for this?

Thanks, Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Fitting-an-Inverse-Gamma-Distribution-tp3216865p3218474.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Grid drawing over the filled.contour's legend

2011-01-13 Thread emorway

Hello, 

For a reason I can't seem to figure out (have searched posts on this forum
for filled.contour grid), the grid (in the code below) is plotting over
the legend that accompanies the filled.contour.  The dataset has 40 columns
and 20 rows.  Where have I gone wrong?  How can I draw a grid with 40
columns and 20 rows that is constrained to the plotted region and doesn't
draw over the legend?

vs2d_conc20 <- read.table(textConnection("0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 

[R] Fitting an Inverse Gamma Distribution

2011-01-13 Thread emorway

http://r.789695.n4.nabble.com/file/n3216865/Inverse_Gamma.png 

Hello,

I am seeking help in estimating the parameters of an inverse gamma
distribution (from the 'actuar' package) using a function like 'fitdistr'. 
Unfortunately I haven't found such a package using findFn('fit Inverse
Gamma') from the 'sos' package and was therefore hoping someone might be
aware of such a function?  

Secondly, is there a way to shift the pdf (code below) to the right (rather
than the data to the left)?  I tried:

par(new=T)
shift <- 1
hist(iniSal_US_forHist,breaks=seq(1.1,21,by=0.625),col="grey",freq=F,xlim=c(0-shift,21-shift),plot=F)
curve(dinvgamma(x,scale=11.835,shape=4.4242),from=0,to=20,add=T,col="purple",lwd=2)

but this failed in shifting the curve.  More broadly, the data plotted in
the histogram represents a calibration target for output generated by a
finite difference solute transport model.  The values that will be generated
by the finite difference model would ideally fit the shifted (if that can be
figured out) inverse gamma pdf.  To the extent that fit is deemed poor, the
parameter estimation software associated with the finite difference model
will adjust parameters until a better fit is found.  I will try to use a
goodness of fit test to determine if one set of parameter values in the
finite difference model produce output that fits the inv. gamma curve herein
better than another.  But first I need to establish the curve the finite
difference model output should target and would greatly appreciate any leads
on how it might be shifted and/or fit more precisely by a fitdistr-like
function.
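
For the shift question specifically, one option (a sketch only) would be to
leave the data alone and evaluate the fitted density at x - shift, which
simply translates the curve to the right; the same idea in reverse (estimating
the parameters on the data minus the shift) could handle shifted data.

shift <- 1
curve(dinvgamma(x - shift, scale = 11.835, shape = 4.4242),
      from = shift, to = 20 + shift, add = TRUE, col = "purple", lwd = 2)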

library(MASS)
library(actuar)

iniSal_US_forHist <- c(2.368000,3.532614,3.064330,3.347069,3.066333,4.233636,3.465650,2.858553,
2.946731,2.945417,2.415000,2.873019,5.521000,5.788148,5.314630,5.509672,6.032840,6.009310,
4.110833,6.073182,5.652833,4.425733,6.481852,4.076857,3.289310,4.524000,3.985811,5.399714,
4.490606,6.956729,5.270933,8.099107,5.058250,6.394500,5.644000,5.202459,5.67,3.152680,
3.220952,2.777381,3.115467,3.642759,3.488333,3.022439,2.610290,2.618571,3.218000,3.417634,
10.327317,7.344270,6.886154,4.015800,3.063103,6.832292,4.600238,2.939000,5.999027,7.894878,
4.411538,2.384762,6.816154,2.782500,2.475333,2.799138,2.739063,2.619917,2.892545,2.468167,
2.577079,2.821875,2.502500,2.969032,2.046023,3.073077,4.408000,3.411774,3.50,4.283607,
4.284000,4.276714,3.228103,2.639875,3.453194,2.821200,3.838723,1.714253,2.273750,2.611882,
2.321781,2.567500,2.557045,1.288875,2.175211,1.736000,2.250781,7.433366,7.033553,5.47,
7.132727,8.505937,9.174545,6.554487,7.060286,6.617160,8.210986,4.404045,6.062381,5.149625,
2.972105,5.358889,3.910968,3.715873,1.728966,2.843667,4.413906,3.016346,7.168636,3.839394,
3.930141,7.019882,3.459429,5.050250,3.492714,3.226667,3.987667,2.770227,3.661167,1.553000,
2.867391,2.897193,2.611707,2.577167,2.904697,2.733077,2.507241,11.044865,6.425484,8.567222,
8.552344,7.493396,4.807381,9.697869,9.471333,6.783175,4.563571,8.059649,9.448679,5.803778,
4.769423,4.424634,7.586042,4.451556,3.622373,6.390152,4.424375,4.135806,5.025400,5.410635,
7.012292,2.961071,3.192188,2.989643,3.471429,2.867966,1.980541,3.172344,2.574783,2.958983,
1.708140,3.604853,3.479000,2.845000,2.742603,2.923968,3.620308,2.452500,2.721375,3.166333,
2.742162,2.793000,3.337000,5.192025,5.365875,3.079000,8.415970,6.612277,6.734706,4.856857,
5.164783,7.743667,6.894151,4.666538,9.227167,8.077581,6.109833,6.621724,18.098182,12.705600,
15.490784,17.394750,12.422364,14.832727,8.326000,11.352400,3.431429,2.658261,3.219773,3.605185,
4.030299,3.262241,3.503250,3.522763,2.847312,2.996618,3.075769,3.387731,3.066923,3.078200,
2.466957,3.214167,2.707778,3.384839,2.283556,2.912258,3.378000,2.726750,2.95,2.195000,
4.819063,3.604578,3.694906,5.068000,4.676582,3.028831,4.261042,3.593235,4.501224,2.880317,
5.750333,3.257833,3.967458,2.522292,2.725738,2.549231,2.591389,2.990488,2.681222,2.685854,
2.284750,2.585938,2.432824,3.108875,2.611340,3.916667,2.418095,2.476406,2.801235,3.278000,
2.434921,2.617826,3.133939,2.774321,4.196173,3.764286,3.555833,5.317361,3.970800,4.136400,
4.487013,3.746393,4.754000,3.854316,3.742353,3.044079,2.817821,3.995179,3.643134,3.642593,
3.604533,2.935902,4.088310,5.344407,3.076883,3.287105,3.720870,2.032258,2.872593,5.787313,
6.017838,5.425205,4.880600,3.582295,4.90,3.489016,4.603030,5.344407,6.184286,4.047083,
4.788304,4.661325,4.815938,4.056790,3.765595,5.348772,5.200222,4.906311,3.900147,3.782897,
3.767313,3.417732,3.725455,2.888750,2.552333,2.521613,2.531522,2.510833,2.710208,2.445273,
2.619750,2.094737,2.399355,2.758000,2.317077,2.247755,3.594333,4.607805,2.69,3.084706,
3.529000,2.326200,3.309851,2.647805,2.802250,2.778667,3.231235,2.418065,3.134545,3.807843,
2.988372,2.709792,3.084035,3.633279,2.958750,2.170081,2.67,2.640706,2.841600,3.399219,
2.561373,2.574824,3.046447,2.647500,3.554875,1.865000,2.858333,2.355000,2.508082,2.376833,

Re: [R] Strange behavior when trying to piggyback off of fitdistr

2011-01-11 Thread emorway

Avi, 

It's been nearly a year since you made this post, but I'm curious if you were
able to find a solution to your problem?  Your inquiry is closely related to
a problem I'm having.

Thanks, 
Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Strange-behavior-when-trying-to-piggyback-off-of-fitdistr-tp1012536p3209887.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fitting an Inverse Gamma Distribution to Survey Data

2011-01-07 Thread emorway

Hello, 

I've been attempting to fit the data below with an inverse gamma
distribution.  The reason for this is that outside proprietary software (@Risk)
kicked back a Pearson5 (inverse gamma) as the best fitting distribution with
a Chi-Sqr goodness-of-fit roughly 40% better than with a log-normal fit. 
Looking up Inverse gamma on this forum led me the following post:

http://r.789695.n4.nabble.com/Inverse-gamma-td825481.html#a825482

But I think I'm misunderstanding the suggestion made in that post.  Is there a
way to estimate the shape and rate parameters for an inverse-gamma and then
plot the PDF as I have done below using other more readily available pdf's
in R?

Thanks, Eric
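
One possible route (a sketch only): fitdistr() accepts an arbitrary density
function, so actuar's dinvgamma() can be handed to it directly with rough
starting values.  The data vector is the one defined below, and optim may warn
while it explores invalid parameter values.

library(MASS)
library(actuar)
ig.fit <- fitdistr(iniSal_US_forHist, dinvgamma, start = list(shape = 4, scale = 12))
ig.fit
curve(dinvgamma(x, shape = ig.fit$estimate["shape"], scale = ig.fit$estimate["scale"]),
      from = 0, to = 20, col = "purple", lwd = 2)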

library(MASS)

iniSal_US_forHist <- c(2.368000,3.532614,3.064330,3.347069,3.066333,4.233636,3.465650,2.858553,
2.946731,2.945417,2.415000,2.873019,5.521000,5.788148,5.314630,5.509672,6.032840,6.009310,
4.110833,6.073182,5.652833,4.425733,6.481852,4.076857,3.289310,4.524000,3.985811,5.399714,
4.490606,6.956729,5.270933,8.099107,5.058250,6.394500,5.644000,5.202459,5.67,3.152680,
3.220952,2.777381,3.115467,3.642759,3.488333,3.022439,2.610290,2.618571,3.218000,3.417634,
10.327317,7.344270,6.886154,4.015800,3.063103,6.832292,4.600238,2.939000,5.999027,7.894878,
4.411538,2.384762,6.816154,2.782500,2.475333,2.799138,2.739063,2.619917,2.892545,2.468167,
2.577079,2.821875,2.502500,2.969032,2.046023,3.073077,4.408000,3.411774,3.50,4.283607,
4.284000,4.276714,3.228103,2.639875,3.453194,2.821200,3.838723,1.714253,2.273750,2.611882,
2.321781,2.567500,2.557045,1.288875,2.175211,1.736000,2.250781,7.433366,7.033553,5.47,
7.132727,8.505937,9.174545,6.554487,7.060286,6.617160,8.210986,4.404045,6.062381,5.149625,
2.972105,5.358889,3.910968,3.715873,1.728966,2.843667,4.413906,3.016346,7.168636,3.839394,
3.930141,7.019882,3.459429,5.050250,3.492714,3.226667,3.987667,2.770227,3.661167,1.553000,
2.867391,2.897193,2.611707,2.577167,2.904697,2.733077,2.507241,11.044865,6.425484,8.567222,
8.552344,7.493396,4.807381,9.697869,9.471333,6.783175,4.563571,8.059649,9.448679,5.803778,
4.769423,4.424634,7.586042,4.451556,3.622373,6.390152,4.424375,4.135806,5.025400,5.410635,
7.012292,2.961071,3.192188,2.989643,3.471429,2.867966,1.980541,3.172344,2.574783,2.958983,
1.708140,3.604853,3.479000,2.845000,2.742603,2.923968,3.620308,2.452500,2.721375,3.166333,
2.742162,2.793000,3.337000,5.192025,5.365875,3.079000,8.415970,6.612277,6.734706,4.856857,
5.164783,7.743667,6.894151,4.666538,9.227167,8.077581,6.109833,6.621724,18.098182,12.705600,
15.490784,17.394750,12.422364,14.832727,8.326000,11.352400,3.431429,2.658261,3.219773,3.605185,
4.030299,3.262241,3.503250,3.522763,2.847312,2.996618,3.075769,3.387731,3.066923,3.078200,
2.466957,3.214167,2.707778,3.384839,2.283556,2.912258,3.378000,2.726750,2.95,2.195000,
4.819063,3.604578,3.694906,5.068000,4.676582,3.028831,4.261042,3.593235,4.501224,2.880317,
5.750333,3.257833,3.967458,2.522292,2.725738,2.549231,2.591389,2.990488,2.681222,2.685854,
2.284750,2.585938,2.432824,3.108875,2.611340,3.916667,2.418095,2.476406,2.801235,3.278000,
2.434921,2.617826,3.133939,2.774321,4.196173,3.764286,3.555833,5.317361,3.970800,4.136400,
4.487013,3.746393,4.754000,3.854316,3.742353,3.044079,2.817821,3.995179,3.643134,3.642593,
3.604533,2.935902,4.088310,5.344407,3.076883,3.287105,3.720870,2.032258,2.872593,5.787313,
6.017838,5.425205,4.880600,3.582295,4.90,3.489016,4.603030,5.344407,6.184286,4.047083,
4.788304,4.661325,4.815938,4.056790,3.765595,5.348772,5.200222,4.906311,3.900147,3.782897,
3.767313,3.417732,3.725455,2.888750,2.552333,2.521613,2.531522,2.510833,2.710208,2.445273,
2.619750,2.094737,2.399355,2.758000,2.317077,2.247755,3.594333,4.607805,2.69,3.084706,
3.529000,2.326200,3.309851,2.647805,2.802250,2.778667,3.231235,2.418065,3.134545,3.807843,
2.988372,2.709792,3.084035,3.633279,2.958750,2.170081,2.67,2.640706,2.841600,3.399219,
2.561373,2.574824,3.046447,2.647500,3.554875,1.865000,2.858333,2.355000,2.508082,2.376833,
2.728710,1.752833,1.571967,1.800333,2.265455,2.353226,2.568028,2.359500,2.472639,1.675385,
2.667386,2.49,2.482632,2.593452,2.695510,2.466941,2.624211,3.835645,3.519667,2.661940,
3.516167,3.146585,3.420462,4.809833,3.028500,3.192297,4.256333,2.516897,3.033846,2.793359,
6.700714,5.441250,6.872500,4.528333,7.490328,4.673788,6.376885,3.023167,4.429167,4.317625,
16.729231,8.372500,6.279828,10.805098,8.359452,8.277576,8.008846,8.742308,12.155942,5.975063,
3.317333,2.883021,3.310822,3.119219,3.921190,3.552986,3.647162,4.017667,3.895342,3.381096,
2.769412,3.225294,4.169333,3.733919,2.859492,3.674875,3.401017,3.160267,4.109545,4.347867,
3.867000,3.672763,4.364844,3.804250,2.613000,4.289909,4.212059,4.785797,4.149687,6.299444,
5.640135,5.713448,4.766438,7.032027,5.610533,3.126111,6.322029,4.417692,6.392532,2.753175,
2.549655,3.456533,3.199000,3.852338,3.387549,3.098033,3.271733,3.679180,2.655484,6.858167,
5.808033,7.55,8.388387,5.108732,7.895152,5.223580,5.741714,8.026250,4.928421,2.797292,

Re: [R] optim and singularity

2010-12-31 Thread emorway

Thanks Berend, setting lower=c(0,0.01) seems to have solved the problem.  I
have a dataset with 200+ cropped fields with associated data and wanted to
generate each plot in a way that minimized the visual residuals between the
green squares and blue circles.  I have inserted a resulting plot so you can
see how your suggestion has helped me accomplish this end.

Eric

http://r.789695.n4.nabble.com/file/n3169658/Field__18_.png 
-- 
View this message in context: 
http://r.789695.n4.nabble.com/optim-and-singularity-tp3168722p3169658.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] optim and singularity

2010-12-30 Thread emorway

Hello,

I was unable to find clues to my problem in ?optim.  Using the data and code
below, I get an error ("system is exactly singular") when a particular line
of code is left in, but have found that 'optim' works when I comment it out. 
The line of code in question is after the closeAllConnections() line of code
and contains a call to na.approx from the zoo package.  What's confusing
to me is why optim works without it versus with it?  Any clues would be very
helpful in moving forward.  The overall goal of the code below is to
minimize the residuals between the blue circles and green squares by
adjusting the limits of the secondary y-axis.

Thanks, Eric

library(zoo)
temp.dat <- read.table(textConnection("Well Meas_Date WTD ECgw GW_Project_Area
Region Avg.EM.Survey.Value Avg.Soil.Paste.EC SD_EM_Survey
1 4/12/1999 1.75 2.27 LOWER-UPSTREAM US NA NA NA
1 5/11/1999 1.24 5.04 LOWER-UPSTREAM US NA NA NA
1 5/27/1999 1.27 4.45 LOWER-UPSTREAM US NA NA NA
1 6/3/1999 1.27 4.09 LOWER-UPSTREAM US 3.347069 3.126667 0.6347013
1 6/11/1999 1.52 2.84 LOWER-UPSTREAM US NA NA NA
1 6/18/1999 1.19 2.34 LOWER-UPSTREAM US NA NA NA
1 6/24/1999 1.21 2.48 LOWER-UPSTREAM US NA NA NA
1 6/30/1999 1.19 2.45 LOWER-UPSTREAM US NA NA NA
1 7/7/1999 1.59 2.47 LOWER-UPSTREAM US NA NA NA
1 7/14/1999 1.75 2.73 LOWER-UPSTREAM US NA NA NA
1 7/21/1999 1.75 2.33 LOWER-UPSTREAM US NA NA NA
1 7/28/1999 1.78 2.63 LOWER-UPSTREAM US NA NA NA
1 8/4/1999 1.37 2.25 LOWER-UPSTREAM US NA NA NA
1 8/12/1999 0.08 2.37 LOWER-UPSTREAM US 2.858553 NA 0.3560363
1 8/18/1999 2.29 2.48 LOWER-UPSTREAM US NA NA NA
1 9/4/1999 0.02 2.48 LOWER-UPSTREAM US NA NA NA
1 9/16/1999 1.93 2.45 LOWER-UPSTREAM US NA NA NA
1 10/1/1999 2.01 2.49 LOWER-UPSTREAM US NA NA NA
1 11/13/1999 1.91 2.32 LOWER-UPSTREAM US NA NA NA
1 1/4/2000 1.83 2.59 LOWER-UPSTREAM US NA NA NA
1 2/24/2000 1.93 2.83 LOWER-UPSTREAM US NA NA NA
1 3/25/2000 1.73 2.71 LOWER-UPSTREAM US NA NA NA
1 4/15/2000 1.47 3.12 LOWER-UPSTREAM US NA NA NA
1 4/28/2000 1.98 2.76 LOWER-UPSTREAM US NA NA NA
1 5/17/2000 1.80 2.92 LOWER-UPSTREAM US NA NA NA
1 5/30/2000 NA NA NA US 3.064330 NA 0.3970829
1 6/1/2000 1.58 2.83 LOWER-UPSTREAM US NA NA NA
1 6/6/2000 1.65 2.97 LOWER-UPSTREAM US NA NA NA
1 6/15/2000 1.85 2.65 LOWER-UPSTREAM US NA NA NA
1 6/23/2000 1.90 2.78 LOWER-UPSTREAM US NA NA NA
1 6/29/2000 1.80 2.67 LOWER-UPSTREAM US NA NA NA
1 7/6/2000 2.00 2.69 LOWER-UPSTREAM US NA NA NA
1 7/14/2000 1.97 3.46 LOWER-UPSTREAM US NA NA NA
1 7/20/2000 1.69 2.57 LOWER-UPSTREAM US NA NA NA
1 7/27/2000 1.95 3.16 LOWER-UPSTREAM US NA NA NA
1 8/2/2000 2.03 3.12 LOWER-UPSTREAM US NA NA NA
1 8/8/2000 NA NA NA US 3.185567 2.522467 0.3765025
1 8/11/2000 2.00 2.65 LOWER-UPSTREAM US NA NA NA
1 8/19/2000 1.95 2.50 LOWER-UPSTREAM US NA NA NA
1 9/29/2000 2.11 2.11 LOWER-UPSTREAM US NA NA NA
1 10/20/2000 2.16 2.75 LOWER-UPSTREAM US NA NA NA
1 12/1/2000 1.83 2.81 LOWER-UPSTREAM US NA NA NA
1 1/19/2001 2.01 3.01 LOWER-UPSTREAM US NA NA NA
1 3/5/2001 2.01 2.35 LOWER-UPSTREAM US NA NA NA
1 3/30/2001 2.13 2.10 LOWER-UPSTREAM US NA NA NA
1 4/21/2001 1.87 2.36 LOWER-UPSTREAM US NA NA NA
1 5/15/2001 2.00 2.37 LOWER-UPSTREAM US NA NA NA
1 5/29/2001 1.60 2.59 LOWER-UPSTREAM US NA NA NA
1 6/4/2001 NA NA NA US 4.233636 NA 1.3738540
1 6/7/2001 1.71 2.55 LOWER-UPSTREAM US NA NA NA
1 6/14/2001 1.83 2.75 LOWER-UPSTREAM US NA NA NA
1 6/20/2001 1.93 3.02 LOWER-UPSTREAM US NA NA NA
1 6/29/2001 1.80 2.74 LOWER-UPSTREAM US NA NA NA
1 7/6/2001 1.91 2.70 LOWER-UPSTREAM US NA NA NA
1 7/12/2001 2.04 2.46 LOWER-UPSTREAM US NA NA NA
1 7/18/2001 1.87 2.55 LOWER-UPSTREAM US NA NA NA
1 7/25/2001 1.96 2.67 LOWER-UPSTREAM US NA NA NA
1 8/2/2001 2.02 2.42 LOWER-UPSTREAM US NA NA NA
1 8/8/2001 NA NA NA US 2.946731 NA 0.3748240
1 8/10/2001 1.94 2.36 LOWER-UPSTREAM US NA NA NA
1 8/17/2001 1.94 0.75 LOWER-UPSTREAM US NA NA NA
1 9/1/2001 2.12 2.22 LOWER-UPSTREAM US NA NA NA
1 9/15/2001 2.15 2.35 LOWER-UPSTREAM US NA NA NA
1 9/29/2001 1.95 2.20 LOWER-UPSTREAM US NA NA NA
1 10/27/2001 1.08 1.72 LOWER-UPSTREAM US NA NA NA
1 11/17/2001 1.99 2.21 LOWER-UPSTREAM US NA NA NA
1 1/10/2002 2.01 2.31 LOWER-UPSTREAM US NA NA NA
1 2/16/2002 1.94 1.74 LOWER-UPSTREAM US NA NA NA
1 3/23/2002 2.03 2.46 LOWER-UPSTREAM US NA NA NA
1 4/13/2002 2.11 2.50 LOWER-UPSTREAM US NA NA NA
1 5/11/2002 2.11 2.54 LOWER-UPSTREAM US NA NA NA
1 5/21/2002 NA NA NA US 2.368000 NA 0.3235852
1 5/30/2002 2.07 2.56 LOWER-UPSTREAM US NA NA NA
1 6/6/2002 2.11 2.55 LOWER-UPSTREAM US NA NA NA
1 6/13/2002 2.03 2.66 LOWER-UPSTREAM US NA NA NA
1 6/27/2002 2.19 2.55 LOWER-UPSTREAM US NA NA NA
1 7/3/2002 2.22 2.68 LOWER-UPSTREAM US NA NA NA
1 7/11/2002 2.06 2.61 LOWER-UPSTREAM US NA NA NA
1 7/18/2002 2.23 2.58 LOWER-UPSTREAM US NA NA NA
1 7/25/2002 2.23 2.62 LOWER-UPSTREAM US NA NA NA
1 8/1/2002 2.21 2.60 LOWER-UPSTREAM US NA NA NA
1 8/8/2002 2.29 2.60 LOWER-UPSTREAM US NA NA NA
1 8/14/2002 NA NA NA US 1.863548 NA 0.2400621
1 8/15/2002 2.26 2.61 LOWER-UPSTREAM US NA NA NA
1 8/21/2002 2.34 2.51 LOWER-UPSTREAM US NA NA NA
1 9/6/2002 2.31 2.51 

Re: [R] optim and singularity

2010-12-30 Thread emorway

Hi Tom, 

I followed you suggestion and found the following:

class(temp.dat$WTD)
#[1] "numeric"
temp.dat$WTD <- na.approx(temp.dat$WTD,as.numeric(as.Date(temp.dat$Meas_Date,"%m/%d/%Y")))
class(temp.dat$WTD)
#[1] "numeric"

Have I misunderstood you?  If not, then my original confusion still
stands...

Respectfully,
Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/optim-and-singularity-tp3168722p3168955.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] adjust secondary y-axis bounds to minimize visual residuals

2010-12-22 Thread emorway

Hello,

I'm plotting two sets of data referenced to either the left or right y-axes. 
The first, water table depth (blue circles), is plotted on the left y-axis
in reverse order (0 at the top) as this is more intuitive when thinking in
terms of depth.  The second is electrical conductance (a surrogate for
salinity), and is referenced to the right y-axis.  The data and plot
commands follow below.

I've been trying to figure out a way to minimize the variable called
min_this by changing the values in ylim_2 only, with one constraint:
ylim_2[1] > 0.  I'm unsure of how to stuff the code that follows the plot
commands into a loop that adjusts ylim_2 so as to minimize min_this?  I'm
guessing there is a function to do this?
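
Something along these lines is what I am picturing (a sketch only; the two
rescaling lines are hypothetical stand-ins for whatever actually computes
min_this after the plot commands below):

obj <- function(p) {
  ylim_2 <- p
  if (ylim_2[1] <= 0 || ylim_2[2] <= ylim_2[1]) return(1e9)        # enforce ylim_2[1] > 0
  scaled.EC  <- (temp.dat$ECgw - ylim_2[1]) / diff(ylim_2)         # hypothetical rescaling
  scaled.WTD <- 1 - temp.dat$WTD / max(temp.dat$WTD, na.rm = TRUE) # hypothetical rescaling
  sum((scaled.EC - scaled.WTD)^2, na.rm = TRUE)                    # the 'min_this' analogue
}
best.ylim_2 <- optim(c(0.5, 10), obj)$par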

Thanks,
Eric

library(zoo)
#example data and plotting commands follow
temp.dat <- read.table(textConnection("Well Meas_Date WTD ECgw GW_Project_Area
Region Avg.EM.Survey.Value Avg.Soil.Paste.EC SD_EM_Survey
1 4/12/1999 1.75 2.27 LOWER-UPSTREAM US NA NA NA
1 5/11/1999 1.24 5.04 LOWER-UPSTREAM US NA NA NA
1 5/27/1999 1.27 4.45 LOWER-UPSTREAM US NA NA NA
1 6/3/1999 1.27 4.09 LOWER-UPSTREAM US 3.347069 3.126667 0.6347013
1 6/11/1999 1.52 2.84 LOWER-UPSTREAM US NA NA NA
1 6/18/1999 1.19 2.34 LOWER-UPSTREAM US NA NA NA
1 6/24/1999 1.21 2.48 LOWER-UPSTREAM US NA NA NA
1 6/30/1999 1.19 2.45 LOWER-UPSTREAM US NA NA NA
1 7/7/1999 1.59 2.47 LOWER-UPSTREAM US NA NA NA
1 7/14/1999 1.75 2.73 LOWER-UPSTREAM US NA NA NA
1 7/21/1999 1.75 2.33 LOWER-UPSTREAM US NA NA NA
1 7/28/1999 1.78 2.63 LOWER-UPSTREAM US NA NA NA
1 8/4/1999 1.37 2.25 LOWER-UPSTREAM US NA NA NA
1 8/12/1999 0.08 2.37 LOWER-UPSTREAM US 2.858553 NA 0.3560363
1 8/18/1999 2.29 2.48 LOWER-UPSTREAM US NA NA NA
1 9/4/1999 0.02 2.48 LOWER-UPSTREAM US NA NA NA
1 9/16/1999 1.93 2.45 LOWER-UPSTREAM US NA NA NA
1 10/1/1999 2.01 2.49 LOWER-UPSTREAM US NA NA NA
1 11/13/1999 1.91 2.32 LOWER-UPSTREAM US NA NA NA
1 1/4/2000 1.83 2.59 LOWER-UPSTREAM US NA NA NA
1 2/24/2000 1.93 2.83 LOWER-UPSTREAM US NA NA NA
1 3/25/2000 1.73 2.71 LOWER-UPSTREAM US NA NA NA
1 4/15/2000 1.47 3.12 LOWER-UPSTREAM US NA NA NA
1 4/28/2000 1.98 2.76 LOWER-UPSTREAM US NA NA NA
1 5/17/2000 1.80 2.92 LOWER-UPSTREAM US NA NA NA
1 5/30/2000 NA NA NA US 3.064330 NA 0.3970829
1 6/1/2000 1.58 2.83 LOWER-UPSTREAM US NA NA NA
1 6/6/2000 1.65 2.97 LOWER-UPSTREAM US NA NA NA
1 6/15/2000 1.85 2.65 LOWER-UPSTREAM US NA NA NA
1 6/23/2000 1.90 2.78 LOWER-UPSTREAM US NA NA NA
1 6/29/2000 1.80 2.67 LOWER-UPSTREAM US NA NA NA
1 7/6/2000 2.00 2.69 LOWER-UPSTREAM US NA NA NA
1 7/14/2000 1.97 3.46 LOWER-UPSTREAM US NA NA NA
1 7/20/2000 1.69 2.57 LOWER-UPSTREAM US NA NA NA
1 7/27/2000 1.95 3.16 LOWER-UPSTREAM US NA NA NA
1 8/2/2000 2.03 3.12 LOWER-UPSTREAM US NA NA NA
1 8/8/2000 NA NA NA US 3.185567 2.522467 0.3765025
1 8/11/2000 2.00 2.65 LOWER-UPSTREAM US NA NA NA
1 8/19/2000 1.95 2.50 LOWER-UPSTREAM US NA NA NA
1 9/29/2000 2.11 2.11 LOWER-UPSTREAM US NA NA NA
1 10/20/2000 2.16 2.75 LOWER-UPSTREAM US NA NA NA
1 12/1/2000 1.83 2.81 LOWER-UPSTREAM US NA NA NA
1 1/19/2001 2.01 3.01 LOWER-UPSTREAM US NA NA NA
1 3/5/2001 2.01 2.35 LOWER-UPSTREAM US NA NA NA
1 3/30/2001 2.13 2.10 LOWER-UPSTREAM US NA NA NA
1 4/21/2001 1.87 2.36 LOWER-UPSTREAM US NA NA NA
1 5/15/2001 2.00 2.37 LOWER-UPSTREAM US NA NA NA
1 5/29/2001 1.60 2.59 LOWER-UPSTREAM US NA NA NA
1 6/4/2001 NA NA NA US 4.233636 NA 1.3738540
1 6/7/2001 1.71 2.55 LOWER-UPSTREAM US NA NA NA
1 6/14/2001 1.83 2.75 LOWER-UPSTREAM US NA NA NA
1 6/20/2001 1.93 3.02 LOWER-UPSTREAM US NA NA NA
1 6/29/2001 1.80 2.74 LOWER-UPSTREAM US NA NA NA
1 7/6/2001 1.91 2.70 LOWER-UPSTREAM US NA NA NA
1 7/12/2001 2.04 2.46 LOWER-UPSTREAM US NA NA NA
1 7/18/2001 1.87 2.55 LOWER-UPSTREAM US NA NA NA
1 7/25/2001 1.96 2.67 LOWER-UPSTREAM US NA NA NA
1 8/2/2001 2.02 2.42 LOWER-UPSTREAM US NA NA NA
1 8/8/2001 NA NA NA US 2.946731 NA 0.3748240
1 8/10/2001 1.94 2.36 LOWER-UPSTREAM US NA NA NA
1 8/17/2001 1.94 0.75 LOWER-UPSTREAM US NA NA NA
1 9/1/2001 2.12 2.22 LOWER-UPSTREAM US NA NA NA
1 9/15/2001 2.15 2.35 LOWER-UPSTREAM US NA NA NA
1 9/29/2001 1.95 2.20 LOWER-UPSTREAM US NA NA NA
1 10/27/2001 1.08 1.72 LOWER-UPSTREAM US NA NA NA
1 11/17/2001 1.99 2.21 LOWER-UPSTREAM US NA NA NA
1 1/10/2002 2.01 2.31 LOWER-UPSTREAM US NA NA NA
1 2/16/2002 1.94 1.74 LOWER-UPSTREAM US NA NA NA
1 3/23/2002 2.03 2.46 LOWER-UPSTREAM US NA NA NA
1 4/13/2002 2.11 2.50 LOWER-UPSTREAM US NA NA NA
1 5/11/2002 2.11 2.54 LOWER-UPSTREAM US NA NA NA
1 5/21/2002 NA NA NA US 2.368000 NA 0.3235852
1 5/30/2002 2.07 2.56 LOWER-UPSTREAM US NA NA NA
1 6/6/2002 2.11 2.55 LOWER-UPSTREAM US NA NA NA
1 6/13/2002 2.03 2.66 LOWER-UPSTREAM US NA NA NA
1 6/27/2002 2.19 2.55 LOWER-UPSTREAM US NA NA NA
1 7/3/2002 2.22 2.68 LOWER-UPSTREAM US NA NA NA
1 7/11/2002 2.06 2.61 LOWER-UPSTREAM US NA NA NA
1 7/18/2002 2.23 2.58 LOWER-UPSTREAM US NA NA NA
1 7/25/2002 2.23 2.62 LOWER-UPSTREAM US NA NA NA
1 8/1/2002 2.21 2.60 LOWER-UPSTREAM US NA NA NA
1 8/8/2002 2.29 2.60 LOWER-UPSTREAM US NA NA NA
1 8/14/2002 NA NA NA US 1.863548 NA 0.2400621

[R] Formatting 'names.arg' in barplot

2010-12-08 Thread emorway

Hello,

I've been looking through ?phantom and ?expression and this forum for
examples of how I might be able to manipulate some of the names that appear
on the y-axis of the barplot below.  For example, the "gw" in "ECgw" would
appear as a subscript... or "qr" would be the theta symbol followed by
subscript "r".  My attempts haven't even come close to what I'm after.  I
could switch to ?text or ?mtext, but thought for positioning purposes it
would be easier to do it with names.arg in the barplot function call.
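
A sketch of the expression route, using a few of the rows from the table
below: names.arg can, I believe, be a vector of plotmath expressions, since
barplot() passes it straight through to the axis() labels.

labs <- c(expression(Baseline), expression(WTD), expression(EC[gw]),
          expression(theta[r]), expression(K[s]))
barplot(c(391, 315, 362, 360, 370), names.arg = labs,
        horiz = TRUE, las = 1, col = "grey", xlim = c(0, 450))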

Thanks,
Eric


base.dat.sel2 <- read.table(textConnection("base.dat.Covariate
base.dat.US.Number.Observations
base.dat.US.Num.Obs.to.Achieve.Starting.Residual
base.dat.US.Percent.Reduction.in.EM38.Surveys
base.dat.DS.Number.Observations
base.dat.DS.Num.Obs.to.Achieve.Starting.Residual
base.dat.DS.Percent.Reduction.in.EM38.Surveys
Baseline 391 391 NA 281 281 NA 
WTD 391 315 0.194 281 210 0.253 
ECgw 391 362 0.074 281 280 0.004 
SM 391 349 0.107 281 280 0.004 
WTD_ECgw 391 300 0.233 281 209 0.256 
WTD_SM 391 286 0.269 281 210 0.253 
ECgw_SM 391 318 0.187 281 280 0.004 
WTD_ECgw_SM 391 269 0.312 281 210 0.253 
Sand 391 359 0.082 281 279 0.007 
Silt 391 369 0.056 281 280 0.001 
Clay 391 370 0.054 281 269 0.043 
WTD_ECgw_SM_Sand 391 263 0.327 281 181 0.356 
qr 391 360 0.079 281 270 0.038 
Ks 391 370 0.054 281 276 0.017 
qr_Ks 391 358 0.084 281 271 0.035 
WTD_ECgw_SM_qr 391 261 0.332 281 188 0.331 
WTD_ECgw_SM_Ks 391 260 0.335 281 205 0.270"),header=T)
closeAllConnections()

par(mar=c(3,15,2,1))
barplot(base.dat.sel2$base.dat.US.Num.Obs.to.Achieve.Starting.Residual[order(base.dat.sel2$base.dat.US.Num.Obs.to.Achieve.Starting.Residual,decreasing=F)],
        names.arg=base.dat.sel2$base.dat.Covariate[order(base.dat.sel2$base.dat.US.Num.Obs.to.Achieve.Starting.Residual,decreasing=F)],
        col="grey",horiz=T,las=1,xlim=c(0,450),yaxs="i",cex.axis=1.2,cex.names=1.1)
  
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Formatting-names-arg-in-barplot-tp3078861p3078861.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] boxplot: reverse y-axis order

2010-11-21 Thread emorway

Hello, 

Searching this forum has enabled me to get pretty far in what I'm trying to
do.  However, there is one more manipulation I would like to make and I
haven't found a solution.  Using the data and code below, I generate the
plot produced by the last command.  If possible I would like to reverse the
order of the y-axis (bearing in mind horizontal=T) so that 0 is plotted at
the upper most part of the y-axis and 3 at the axis intercepts.  I've tried
the experiment approach to no avail, meaning I've placed rev(...) around
various arguments but with wrong results.  

Thanks,
Eric
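
A generic sketch of the idea (my full plotting command is not shown here, so
this assumes the final plot is built from 'value' grouped by 'depthM' in the
data below): with horizontal=T the groups sit on the y-axis, and handing
boxplot() reversed positions through 'at' should put depth 0 at the top.

ng <- length(unique(df$depthM))
boxplot(value ~ depthM, data = df, horizontal = TRUE, at = ng:1, las = 1)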


df <- read.table(textConnection("Field Date AvgWTD Region variable value hole
depth depthM
204 17-Aug-00 2.897428989 US R1 NA R 1 0
205 17-Aug-00 4.017264366 US R1 1.01 R 1 0
206 10-Jun-03 NA US R1 1.19 R 1 0
207 26-May-04 NA US R1 1.6 R 1 0
207 29-May-03 NA US R1 1.23 R 1 0
207 17-Aug-00 NA US R1 1.82 R 1 0
21 03-Jun-99 1.896872044 US R1 NA R 1 0
22 10-Aug-00 1.546097994 US R1 1.3 R 1 0
22 17-Aug-99 1.639824152 US R1 1.11 R 1 0
22 02-Jun-99 1.202943921 US R1 1.44 R 1 0
23 22-May-02 1.52449 US R1 1.58 R 1 0
23 09-Aug-00 1.938527942 US R1 2.2 R 1 0
23 17-Aug-99 0.93040204 US R1 1.36 R 1 0
23 02-Jun-99 1.479804039 US R1 2.65 R 1 0
24 09-Jun-03 NA US R1 1.08 R 1 0
24 04-Jun-01 1.484376073 US R1 NA R 1 0
24 09-Aug-00 1.439671993 US R1 2.41 R 1 0
24 03-Jun-99 1.481328011 US R1 2.36 R 1 0
26A 16-Aug-00 3.128010035 US R1 2.44 R 1 0
26A 06-Apr-99 NA US R1 1.81 R 1 0
26B 10-Aug-00 2.090928078 US R1 1.62 R 1 0
27 26-May-04 NA US R1 1.6 R 1 0
27 29-May-03 NA US R1 NA R 1 0
27 04-Jun-01 NA US R1 2.24 R 1 0
27 09-Aug-00 5.324855804 US R1 9.79 R 1 0
27 10-Jun-99 NA US R1 1.25 R 1 0
28 10-Aug-04 3.565550327 US R1 1.01 R 1 0
28 10-Aug-00 2.169414043 US R1 1.79 R 1 0
28 10-Jun-99 1.633728027 US R1 2.1 R 1 0
29 09-Jun-03 1.847087979 US R1 3.01 R 1 0
29 22-May-02 1.950719953 US R1 0.981 R 1 0
29 08-Jun-01 1.647139192 US R1 1.8 R 1 0
29 15-Aug-00 1.946148038 US R1 3.01 R 1 0
29 04-Jun-99 1.824735999 US R1 0.845 R 1 0
30 28-May-03 NA US R1 1.09 R 1 0
30 10-Aug-00 3.727704048 US R1 0.76 R 1 0
30 17-Aug-99 NA US R1 NA R 1 0
30 03-Jun-99 NA US R1 1.87 R 1 0
31 15-Aug-00 1.594104052 US R1 2.32 R 1 0
31 17-Aug-99 0.961643994 US R1 0.99 R 1 0
31 03-Jun-99 0.907288015 US R1 1.48 R 1 0
33 04-Jun-01 5.030724049 US R1 1.23 R 1 0
33 15-Aug-00 4.110228062 US R1 2.24 R 1 0
33 12-Aug-99 1.383792043 US R1 NA R 1 0
33 04-Jun-99 1.542287946 US R1 1.42 R 1 0
35 14-Aug-00 1.918715954 US R1 2.1 R 1 0
35 17-Aug-99 1.495044112 US R1 1.52 R 1 0
35 06-Apr-99 NA US R1 2.02 R 1 0
37 22-May-02 4.803647995 US R1 2.12 R 1 0
37 06-Aug-01 3.499104261 US R1 1.91 R 1 0
37 16-Aug-00 3.473958015 US R1 2.41 R 1 0
37 08-Jun-99 NA US R1 0.648 R 1 0
38 26-May-04 NA US R1 1.94 R 1 0
38 04-Aug-03 NA US R1 2.02 R 1 0
38 16-Aug-01 2.645663977 US R1 4.97 R 1 0
38 08-Jun-99 1.240535975 US R1 1.06 R 1 0
39 16-Aug-00 1.674571276 US R1 3.25 R 1 0
39 08-Jun-99 0.774192035 US R1 7.41 R 1 0
40 09-Aug-04 NA US R1 18 R 1 0
40 26-May-04 NA US R1 7.03 R 1 0
40 29-May-03 NA US R1 3.61 R 1 0
40 04-Jun-01 0.669035971 US R1 3.92 R 1 0
40 16-Aug-00 0.518160045 US R1 2.81 R 1 0
40 11-Aug-99 0.232257605 US R1 NA R 1 0
41 09-Aug-05 0.704087973 US R1 1.65 R 1 0
41 26-May-04 1.865375996 US R1 1.62 R 1 0
41 04-Aug-03 1.544319987 US R1 0.968 R 1 0
41 02-Jun-03 2.767584085 US R1 1.02 R 1 0
41 11-Aug-00 1.138427973 US R1 0.98 R 1 0
41 12-Aug-99 0.924763203 US R1 1.64 R 1 0
41 09-Jun-99 0.964184046 US R1 1.24 R 1 0
42 11-Aug-00 1.082802057 US R1 1.09 R 1 0
43 09-Jun-03 1.775460005 US R1 1.14 R 1 0
43 30-May-01 1.38074398 US R1 2.38 R 1 0
43 16-Aug-00 0.61214 US R1 2.23 R 1 0
43 12-Aug-99 0.583996832 US R1 4.97 R 1 0
43 09-Jun-99 0.838199973 US R1 3.41 R 1 0
45 06-Jun-01 5.977128029 US R1 1.72 R 1 0
45 10-Aug-00 5.331714153 US R1 1.57 R 1 0
45 10-Jun-99 NA US R1 1.13 R 1 0
48 09-Aug-04 3.940454483 US R1 0.886 R 1 0
48 28-May-03 NA US R1 1.15 R 1 0
48 30-May-01 2.782824039 US R1 2.79 R 1 0
48 15-Aug-00 2.336292028 US R1 7.22 R 1 0
48 12-Aug-99 1.379220009 US R1 NA R 1 0
49 16-Aug-00 2.613964796 US R1 2.1 R 1 0
49 31-May-99 NA US R1 NA R 1 0
5 08-Aug-00 NA US R1 1.19 R 1 0
50 16-Aug-00 2.816961765 US R1 1.02 R 1 0
50 11-Aug-99 2.66755 US R1 1.45 R 1 0
50 31-May-99 NA US R1 1.54 R 1 0
51 02-Jun-03 NA US R1 4.15 R 1 0
51 16-Aug-00 NA US R1 4.68 R 1 0
51 11-Aug-99 NA US R1 NA R 1 0
51 09-Jun-99 5.42076 US R1 2.37 R 1 0
52 04-Aug-03 2.141219854 US R1 1.88 R 1 0
52 09-Jun-03 1.549908042 US R1 1.13 R 1 0
52 04-Jun-01 2.020823956 US R1 3.13 R 1 0
52 10-Aug-99 2.029206038 US R1 NA R 1 0
52 09-Jun-99 1.862328053 US R1 1.56 R 1 0
53 12-Aug-99 NA US R1 1.37 R 1 0
53 09-Jun-99 NA US R1 3.77 R 1 0
57 11-Aug-99 1.907286048 US R1 4.58 R 1 0
57 09-Jun-99 0.930655956 US R1 11.1 R 1 0
6 28-May-03 3.97459197 US R1 1.35 R 1 0
6 31-May-01 4.736591816 US R1 2.69 R 1 0
6 08-Aug-00 5.587999821 US R1 1.43 R 1 0
60A 11-Jun-99 NA US R1 1.56 R 1 0
61 22-May-02 2.862071991 US R1 1.57 R 1 0
61 06-Aug-01 2.178558111 US R1 1.69 R 1 0
61 

[R] barplot3d cutting off labels

2010-11-13 Thread emorway

Hello, 

Using the function that is available at:

http://addictedtor.free.fr/graphiques/sources/source_161.R

The following command leads to a plot with labels that are cut off:

barplot3d(c(4.01,4.71,2.03,1.28,1.42,1.76,0,
6.58,3.25,3.11,2.10,2.05,1.19,1.28,
6.44,5.50,3.69,4.53,3.33,1.70,2.57,
6.01,5.36,5.90,6.61,4.67,2.62,3.83,
10.69,5.90,5.62,6.64,5.71,5.35,3.18,
8.98,7.30,7.73,7.23,14.10,8.65,4.00,
13.39,10.91,5.57,7.34,6.03,5.44,3.24),
 rows=7, theta = 55, phi = 22, expand=0.9,
 col.lab=c("GWTD < 1","1 <= GWTD < 2","2 <= GWTD < 3","3 <= GWTD < 4",
           "4 <= GWTD < 5","5 <= GWTD < 6","GWTD > 6"),
 row.lab=c("GW ECe < 1","1 <= GW ECe < 2","2 <= GW ECe < 3","3 <= GW ECe < 4",
           "4 <= GW ECe < 5","5 <= GW ECe < 6","GW ECe > 6"),
 col.bar=c("#FF6633","#33","#99CCFF","#9933FF","#44ff58","#CC","#FF"),
 z.lab="Soil ECe")

I've also tried to use par(mar=c(5,5,5,5)) or par(oma=c(5,5,5,5)), but to no
avail.  Any ideas?

Thanks, 
E

-- 
View this message in context: 
http://r.789695.n4.nabble.com/barplot3d-cutting-off-labels-tp3041309p3041309.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] histograms resulting from call to soil.texture

2010-10-20 Thread emorway

Hello,

Using the dataset below I produce a soil.texture plot (R code for this
included at the end).  One will notice I have plotted the points based on
whether or not they are in a region called 'upstream' or 'downstream'.  I'm
curious if there is a way to somehow extract counts of the number of points
(red points and blue points) falling within each classification (e.g. silty
loam, sandy clay, etc.)?

Eric
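
A sketch of one possibility, if I remember the separate 'soiltexture' package
correctly (this is not the plotrix soil.texture() call used below):
TT.points.in.classes() flags which USDA class each sample falls in, after
renaming the columns to the CLAY/SILT/SAND names it expects; the "DS" code for
the downstream rows is an assumption on my part.

library(soiltexture)
tri <- data.frame(CLAY = soil.dat$clay, SILT = soil.dat$silt, SAND = soil.dat$sand)
pic <- TT.points.in.classes(tri.data = tri, class.sys = "USDA.TT")
colSums(pic[soil.dat$Region == "US", , drop = FALSE])   # counts per class, upstream points
colSums(pic[soil.dat$Region == "DS", , drop = FALSE])   # counts per class, downstream points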

soil.dat <- read.table(textConnection("Field_Num Region Region_Num Region_Sym
Region_Col Date Location Location_Num sand silt clay
1 US 1 1 4 6/3/1999 Surface 1 26.25 52.5 21.25
1 US 1 1 4 8/8/2000 Surface 1 27.5 52.5 20
1 US 1 1 4 8/8/2000 One_Foot 2 27.5 50 22.5
1 US 1 1 4 8/8/2000 Two_Foot 3 80 20 0
1 US 1 1 4 8/8/2000 Three_Foot 4 67.5 22.5 10
1 US 1 1 4 8/8/2000 Four_Foot 5 95 0 5
2 US 1 1 4 8/8/2000 Surface 1 55 12.5 32.5
2 US 1 1 4 6/4/2001 Surface 1 48.75 43.75 7.5
2 US 1 1 4 6/1/1999 Surface 1 28 34 38
2 US 1 1 4 8/8/2000 One_Foot 2 62.5 7.5 30
2 US 1 1 4 6/4/2001 One_Foot 2 65 16.25 18.75
2 US 1 1 4 8/8/2000 Two_Foot 3 55 0 45
2 US 1 1 4 6/4/2001 Two_Foot 3 50 21.25 28.75
2 US 1 1 4 8/8/2000 Three_Foot 4 47.5 5 47.5
2 US 1 1 4 6/4/2001 Three_Foot 4 77.5 15 7.5
2 US 1 1 4 8/8/2000 Four_Foot 5 62.5 0 37.5
2 US 1 1 4 6/4/2001 Four_Foot 5 75 15 10
5 US 1 1 4 8/1/2000 Surface 1 25 36.25 38.75
5 US 1 1 4 6/9/1999 Surface 1 48.75 31.25 20
5 US 1 1 4 8/1/2000 One_Foot 2 25 40 35
5 US 1 1 4 8/1/2000 Two_Foot 3 21.13 30.99 47.89
5 US 1 1 4 8/1/2000 Three_Foot 4 37.5 25 37.5
5 US 1 1 4 8/1/2000 Four_Foot 5 37.5 25 37.5
6 US 1 1 4 5/28/2003 Surface 1 37.5 37.5 25
6 US 1 1 4 8/8/2000 Surface 1 37.5 22.5 40
6 US 1 1 4 5/31/2001 Surface 1 21.25 35 43.75
6 US 1 1 4 5/28/2003 One_Foot 2 77.5 10 12.5
6 US 1 1 4 8/8/2000 One_Foot 2 82.5 5 12.5
6 US 1 1 4 5/31/2001 One_Foot 2 61.25 21.25 17.5
6 US 1 1 4 5/28/2003 Two_Foot 3 82.5 6.25 11.25
6 US 1 1 4 8/8/2000 Two_Foot 3 85 5 10
6 US 1 1 4 5/31/2001 Two_Foot 3 76.25 15 8.75
6 US 1 1 4 5/28/2003 Three_Foot 4 85 5 10
6 US 1 1 4 8/8/2000 Three_Foot 4 72.5 7.5 20
6 US 1 1 4 5/31/2001 Three_Foot 4 68.75 16.25 15
6 US 1 1 4 5/28/2003 Four_Foot 5 67.5 18.75 13.75
6 US 1 1 4 8/8/2000 Four_Foot 5 67.5 7.5 25
6 US 1 1 4 5/31/2001 Four_Foot 5 76.25 13.75 10
6 US 1 1 4 5/28/2003 Six_Foot 6 61.25 23.75 15
7 US 1 1 4 8/4/2003 Surface 1 40 42.5 17.5
7 US 1 1 4 6/2/2003 Surface 1 41.25 46.25 12.5
7 US 1 1 4 8/8/2000 Surface 1 52.5 12.5 35
7 US 1 1 4 6/1/2001 Surface 1 40 27.5 32.5
7 US 1 1 4 8/4/2003 One_Foot 2 42.5 40 17.5
7 US 1 1 4 6/2/2003 One_Foot 2 40 48.75 11.25
7 US 1 1 4 8/8/2000 One_Foot 2 47.5 10 42.5
7 US 1 1 4 6/1/2001 One_Foot 2 47.5 26.25 26.25
7 US 1 1 4 8/4/2003 Two_Foot 3 65 20 15
7 US 1 1 4 6/2/2003 Two_Foot 3 40 55 5
7 US 1 1 4 8/8/2000 Two_Foot 3 45 5 50
7 US 1 1 4 6/1/2001 Two_Foot 3 57.5 17.5 25
7 US 1 1 4 8/4/2003 Three_Foot 4 77.5 11.25 11.25
7 US 1 1 4 6/2/2003 Three_Foot 4 85 12.5 2.5
7 US 1 1 4 8/8/2000 Three_Foot 4 60 7.5 32.5
7 US 1 1 4 6/1/2001 Three_Foot 4 65 18.75 16.25
7 US 1 1 4 8/4/2003 Four_Foot 5 22.5 60 17.5
7 US 1 1 4 6/2/2003 Four_Foot 5 86.25 6.25 7.5
7 US 1 1 4 8/8/2000 Four_Foot 5 25 20 55
7 US 1 1 4 6/1/2001 Four_Foot 5 37.5 35 27.5
7 US 1 1 4 8/4/2003 Six_Foot 6 52.5 33.75 13.75
7 US 1 1 4 6/2/2003 Six_Foot 6 56.25 36.25 7.5
8 US 1 1 4 8/8/2000 Surface 1 40 15 45
8 US 1 1 4 6/3/1999 Surface 1 32.5 45 22.5
8 US 1 1 4 8/8/2000 One_Foot 2 45 21.25 33.75
8 US 1 1 4 8/8/2000 Two_Foot 3 33.75 20 46.25
8 US 1 1 4 8/8/2000 Three_Foot 4 38.75 18.75 42.5
8 US 1 1 4 8/8/2000 Four_Foot 5 32.5 23.75 43.75
11 US 1 1 4 8/4/2003 Surface 1 28.75 45 26.25
11 US 1 1 4 5/29/2003 Surface 1 27.5 47.5 25
11 US 1 1 4 8/9/2000 Surface 1 32.5 20 47.5
11 US 1 1 4 5/31/2001 Surface 1 25 30 45
11 US 1 1 4 6/2/1999 Surface 1 26 34 40
11 US 1 1 4 8/4/2003 One_Foot 2 20 50 30
11 US 1 1 4 5/29/2003 One_Foot 2 25 47.5 27.5
11 US 1 1 4 8/9/2000 One_Foot 2 25 25 50
11 US 1 1 4 5/31/2001 One_Foot 2 12.5 37.5 50
11 US 1 1 4 8/4/2003 Two_Foot 3 20 20 60
11 US 1 1 4 5/29/2003 Two_Foot 3 20 57.5 22.5
11 US 1 1 4 8/9/2000 Two_Foot 3 27.5 30 42.5
11 US 1 1 4 5/31/2001 Two_Foot 3 10 37.5 52.5
11 US 1 1 4 8/4/2003 Three_Foot 4 22.5 60 17.5
11 US 1 1 4 5/29/2003 Three_Foot 4 26.25 51.25 22.5
11 US 1 1 4 8/9/2000 Three_Foot 4 30 22.5 47.5
11 US 1 1 4 5/31/2001 Three_Foot 4 15 22.5 62.5
11 US 1 1 4 8/4/2003 Four_Foot 5 30 47.5 22.5
11 US 1 1 4 5/29/2003 Four_Foot 5 35 43.75 21.25
11 US 1 1 4 8/9/2000 Four_Foot 5 30 22.5 47.5
11 US 1 1 4 5/31/2001 Four_Foot 5 20 25 55
11 US 1 1 4 8/4/2003 Six_Foot 6 51.25 18.75 30
11 US 1 1 4 5/29/2003 Six_Foot 6 56.25 17.5 26.25
12 US 1 1 4 8/4/2003 Surface 1 28.75 55 16.25
12 US 1 1 4 6/9/2003 Surface 1 32.5 36.25 31.25
12 US 1 1 4 8/9/2000 Surface 1 30 27.5 42.5
12 US 1 1 4 6/9/1999 Surface 1 37.5 38.75 23.75
12 US 1 1 4 8/4/2003 One_Foot 2 18.75 45 36.25
12 US 1 1 4 6/9/2003 One_Foot 2 21.25 43.75 35
12 US 1 1 4 8/9/2000 One_Foot 2 15 38.75 46.25
12 US 1 1 4 8/4/2003 Two_Foot 3 15 45 40
12 US 1 1 4 6/9/2003 Two_Foot 3 12.5 38.75 48.75
12 US 1 1 

[R] hdiffplot

2010-10-18 Thread emorway

Hello, 

I'm trying to track down more information on hdiffplot than what is supplied
in ?hdiffplot.  More specifically, the example code found at the bottom of
?hdiffplot ("##Compare *three* of them:") is something I'm very interested
in using for my own analysis.  However, I don't see how to add a legend or
even how to dig into the bnlst object that is created for determining what
the displayed colors mean.  The following example helps illustrate what I
mean:

library(hexbin)

x1 <- rnorm(1)
y1 <- rnorm(1)
x2 <- rnorm(1,mean=1)
y2 <- rnorm(1,mean=1)
x3 <- rnorm(1,mean=1)
y3 <- rnorm(1,mean=0)

xbnds <- range(x1,x2,x3)
ybnds <- range(y1,y2,y3)

bin1 <- hexbin(x1,y1,xbnds=xbnds,ybnds=ybnds)
bin2 <- hexbin(x2,y2,xbnds=xbnds,ybnds=ybnds)
bin3 <- hexbin(x3,y3,xbnds=xbnds,ybnds=ybnds)

erodebin1 <- erode.hexbin(smooth.hexbin(bin1))
erodebin2 <- erode.hexbin(smooth.hexbin(bin2))
erodebin3 <- erode.hexbin(smooth.hexbin(bin3))

bnlst <- list(b1=erodebin1, b2=erodebin2, b3=erodebin3)
hdiffplot(bnlst)

Can 'grid.hexlegend' be used to add a legend somehow?

-Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/hdiffplot-tp3000644p3000644.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] spplot cuts

2010-09-27 Thread emorway

Hello-

After looking through ?spplot, I would expect that I could specify the
values of the cuts:

...‘cuts’ number of cuts or the actual cuts to use...

So in the following command, 

spplot(lzm.krige.dir["var1.pred"], scales=list(draw=TRUE),
       xlab="Easting", ylab="Northing",
       cuts=seq(0.0,0.4,by=0.01), key.space="right", cex=1.1,
       col.regions=terrain.colors(41),
       main="Specific Yield Layer 1", sp.layout=list(pts))

I get the following warning messages and the plot is a solid color:

Warning messages:
1: In if (length.out == 0L) integer(0L) else if (One) seq_len(length.out)
else if (missing(by)) { :
  the condition has length > 1 and only the first element will be used
2: In if (length.out > 2L) if (from == to) rep.int(from, length.out) else
as.vector(c(from,  :
  the condition has length > 1 and only the first element will be used

Looking at Edzer's response to a post:
http://r.789695.n4.nabble.com/using-spplot-sp-package-with-5-quantiles-td836245.html#a836246

it reinforces to me the idea that I should be able to specify where the cuts
should occur.  I'm not sure what to do with the warning message and my
google searches have been unfruitful.

Some supplemental information that might be helpful:

range(data.frame(lzm.krige.dir["var1.pred"])[,1])
[1] 0.1277067 0.2933876

I give the range of the plotted values because it varies between datasets
and I would like the color scale to remain fixed between images rather than
changing image to image, making visual comparisons more difficult.
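
One workaround sketch (untested): since spplot() hands extra arguments on to
levelplot(), the fixed break points can be supplied through 'at' (with one
fewer colour than breaks) instead of 'cuts':

spplot(lzm.krige.dir["var1.pred"], scales = list(draw = TRUE),
       xlab = "Easting", ylab = "Northing",
       at = seq(0, 0.4, by = 0.01), col.regions = terrain.colors(40),
       key.space = "right", main = "Specific Yield Layer 1",
       sp.layout = list(pts))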

Any suggestions would be greatly appreciated,
Eric


-- 
View this message in context: 
http://r.789695.n4.nabble.com/spplot-cuts-tp2716237p2716237.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R's Data Dredging Philosophy for Distribution Fitting

2010-07-14 Thread emorway

Forum, 

I'm a grad student in Civil Eng, took some Stats classes that required
students learn R, and I have since taken to R and use it for as much as I
can.  Back in my lab/office, many of my fellow grad students still use
proprietary software at the behest of advisers who are familiar with the
recommended software (Statistica, @Risk (Excel Add-on), etc).  I have spent
a lot of time learning R and am confident it can generally out-process,
out-graph, or more simply stated, out-perform most of these other software
packages.  However, one area where my view has been humbled is distribution
fitting.

I started by reading through
http://cran.r-project.org/doc/contrib/Ricci-distributions-en.pdf  After that
I started digging around on this forum and found posts like this one
http://r.789695.n4.nabble.com/Fitting-usual-distributions-td80.html#a80
that are close to what I'm after.  That is, given an observation dataset, I
would like to call a function that cycles through numerous distributions
(common or not) and then ranks them for me based on Chi-Square,
Kolmogorov-Smirnov and/or Anderson-Darling, for example.  

This question was asked back in 2004:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/37053.html but the response
was that this kind of thing wasn't in R nor in proprietary software to the
best of the responding author's memory.  In 2010, however, this is no longer
true as @Risk's
(http://www.palisade.com/risk/?gclid=CKvblPSM7KICFZQz5wodDRI2fg)
Distribution Fitting function does this very thing.  And it is here that
my R pride has taken a hit.  Based on the first response to the question
posed here
http://r.789695.n4.nabble.com/Which-distribution-best-fits-the-data-td859448.html#a859448
is it fair to say that the R community (I realize this is only 1 view) would
take exception to this kind of data mining?  

Unless I've missed a discussion of a package that does this very thing, it
seems as though I would need to code something up using fitdistr() and do
all the ranking myself.  Undoubtedly that would be a good exercise for me,
but it's hard for me to believe R would be a runner-up to something like
distribution fitting in @Risk.
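
For what it is worth, a minimal sketch of the cycle-and-rank idea with base
tools (the candidate list and the data vector x are placeholders, and the
usual caveat applies that the K-S statistic is only approximate when the
parameters are estimated from the same data):

library(MASS)
cand <- c("normal", "lognormal", "gamma", "weibull")
fits <- lapply(cand, function(d) fitdistr(x, d))
ks <- sapply(seq_along(cand), function(i) {
  p <- fits[[i]]$estimate
  switch(cand[i],
         normal    = ks.test(x, "pnorm",    p["mean"],    p["sd"])$statistic,
         lognormal = ks.test(x, "plnorm",   p["meanlog"], p["sdlog"])$statistic,
         gamma     = ks.test(x, "pgamma",   p["shape"],   p["rate"])$statistic,
         weibull   = ks.test(x, "pweibull", p["shape"],   p["scale"])$statistic)
})
cand[order(ks)]   # candidates ranked by K-S statistic, best first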

Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/R-s-Data-Dredging-Philosophy-for-Distribution-Fitting-tp2289508p2289508.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Average 2 Columns when possible, or return available value

2010-06-25 Thread emorway

Forum, 

Using the following data:

DF <- read.table(textConnection("A B
22.60 NA
 NA NA
 NA NA
 NA NA
 NA NA
 NA NA
 NA NA
 NA NA
102.00 NA
 19.20 NA
 19.20 NA
 NA NA
 NA NA
 NA NA
 11.80 NA
 7.62 NA
 NA NA
 NA NA
 NA NA
 NA NA
 NA NA
 75.00 NA
 NA NA
 18.30 18.2
 NA NA
 NA NA
 8.44 NA
 18.00 NA
 NA NA
 12.90 NA),header=T)
closeAllConnections()

The second column is a duplicate reading of the first column, and when two
values are available, I would like to average columns 1 and 2 (example code
below).  But if there is only one reading, I would like to retain it; I
haven't found a good way to exclude NAs using the following code:

t(as.matrix(aggregate(t(as.matrix(DF)),list(rep(1:1,each=2)),mean)[,-1]))

Currently, row 24 is the only row with a returned value.  I'd like the 
result to return column A if it is the only available value, and average
where possible.  Of course, if both columns are NA, NA is the only possible
result.

The result I'm after would look like this (row 24 is an avg):

 22.60 
NA
NA
NA
NA
NA
NA
NA
102.00
 19.20
 19.20
NA
NA
NA
 11.80
  7.62
NA
NA
NA
NA
NA
 75.00
NA
 18.25
NA
NA
  8.44
 18.00
NA
 12.90

This is a small example from a much larger data frame, so if you're
wondering what the deal is with list(), that will come into play for the
larger problem I'm trying to solve.
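
For completeness, a sketch of the simple two-column case that produces the
column shown above (the larger data frame would apply the same idea to
successive column pairs):

avg <- rowMeans(DF[, c("A", "B")], na.rm = TRUE)
avg[is.nan(avg)] <- NA   # rows where both readings are NA come back as NaN
avg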

Respectfully,
Eric
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Average-2-Columns-when-possible-or-return-available-value-tp2269049p2269049.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] barplot: invalid 'xlim' value

2010-05-15 Thread emorway

Forum,

I am attempting to plot weekly horizontal histograms, something very similar
to: 
http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=109

The part of the code used to create this graphic that interests me:
# Draw the histograms
for (i in 1:length(ages)){
par(new = TRUE)
xmin <- -ages[i] + xlim[1]
xmax <- xlim[2] - ages[i]
ser <- freqs[, i+1]
ser <- ser/max(ser) * barscale
barplot(ser, horiz = TRUE, axes = FALSE, xlim = c(xmin, xmax),
ylim = ylim, col = barcol, space = 0)
}

When I try something similar with the code below, I get the error:
Error in plot.window(xlim, ylim, log = log, ...) : invalid 'xlim' value

However, I can successfully use the xlim and ylim data to plot points,
no R complaints (see code below that creates purple dots).  I can't figure
out why barplot doesn't like xlim.  Assuming that it is something small and
obvious, I'm also curious if someone could explain what is going on with
xmin and xmax in the code snippet above?  When I investigated the ages
array, its value increases at each i index, but this leads to xmin steadily
decreasing.  xmax has similar behavior.  Although I know what is going on, I
don't know why.  It seems very counterintuitive.  Also, I did look at the
post:

http://r.789695.n4.nabble.com/multiple-graphs-td830743.html#a830744
But subplot disconnects the data from the existing coordinate system.
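
A tiny self-contained demo of what I think the gallery code is doing (a
sketch, not their data): each barplot() call always draws its bars starting at
x = 0, so widening the xlim window to the left by a growing offset is what
slides successive histograms to the right across the shared axis; that is why
xmin keeps decreasing as ages[i] grows.

plot(0, type = "n", xlim = c(0, 10), ylim = c(0, 5), xlab = "", ylab = "")
for (off in c(2, 5, 8)) {
  par(new = TRUE)
  barplot(runif(5), horiz = TRUE, axes = FALSE, space = 0,
          xlim = c(0 - off, 10 - off), ylim = c(0, 5), col = "grey")
}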

I've prepared my data to be easily uploaded, but the paste command into R
took about 10-20 sec. on my machine to complete (I did pare down the dataset
by 80% or so):

library(zoo)
#create a time series
ttime.80.2 <- seq(as.POSIXlt("2000-06-12 02:00", format="%Y-%m-%d %H:%M", tz=""),
                  as.POSIXlt("2000-07-17 01:00", format="%Y-%m-%d %H:%M", tz=""),
                  by="hour")
#Load in the data for the base plot
well.80.2 <- read.table(textConnection("row~ID~date~Reading~WellID~week
19578~Well_80-2~6/12/2000 2:00~-202.034~80-2~0
19579~Well_80-2~6/12/2000 3:00~-201.018~80-2~0
19580~Well_80-2~6/12/2000 4:00~-199.494~80-2~0
19581~Well_80-2~6/12/2000 5:00~-197.716~80-2~0
19582~Well_80-2~6/12/2000 6:00~-190.858~80-2~0
19583~Well_80-2~6/12/2000 7:00~-181.460~80-2~0
19584~Well_80-2~6/12/2000 8:00~-170.030~80-2~0
19585~Well_80-2~6/12/2000 9:00~-155.552~80-2~0
19586~Well_80-2~6/12/2000 10:00~-118.468~80-2~0
19587~Well_80-2~6/12/2000 11:00~-96.116~80-2~0
19588~Well_80-2~6/12/2000 12:00~-75.542~80-2~0
19589~Well_80-2~6/12/2000 13:00~-65.128~80-2~0
19590~Well_80-2~6/12/2000 14:00~-59.032~80-2~0
19591~Well_80-2~6/12/2000 15:00~-68.684~80-2~0
19592~Well_80-2~6/12/2000 16:00~-78.844~80-2~0
19593~Well_80-2~6/12/2000 17:00~-87.480~80-2~0
19594~Well_80-2~6/12/2000 18:00~-95.354~80-2~0
19595~Well_80-2~6/12/2000 19:00~-102.974~80-2~0
19596~Well_80-2~6/12/2000 20:00~-109.070~80-2~0
19597~Well_80-2~6/12/2000 21:00~-113.388~80-2~0
19598~Well_80-2~6/12/2000 22:00~-117.706~80-2~0
19599~Well_80-2~6/12/2000 23:00~-121.008~80-2~0
19600~Well_80-2~6/13/2000 0:00~-123.802~80-2~0
19601~Well_80-2~6/13/2000 1:00~-125.326~80-2~0
19602~Well_80-2~6/13/2000 2:00~-126.342~80-2~0
19603~Well_80-2~6/13/2000 3:00~-127.104~80-2~0
19604~Well_80-2~6/13/2000 4:00~-127.866~80-2~0
19605~Well_80-2~6/13/2000 5:00~-128.882~80-2~0
19606~Well_80-2~6/13/2000 6:00~-128.882~80-2~0
19607~Well_80-2~6/13/2000 7:00~-129.644~80-2~0
19608~Well_80-2~6/13/2000 8:00~-130.660~80-2~0
19609~Well_80-2~6/13/2000 9:00~-132.184~80-2~0
19610~Well_80-2~6/13/2000 10:00~-132.184~80-2~0
19611~Well_80-2~6/13/2000 11:00~-133.962~80-2~0
19612~Well_80-2~6/13/2000 12:00~-135.740~80-2~0
19613~Well_80-2~6/13/2000 13:00~-138.280~80-2~0
19614~Well_80-2~6/13/2000 14:00~-139.296~80-2~0
19615~Well_80-2~6/13/2000 15:00~-140.820~80-2~0
19616~Well_80-2~6/13/2000 16:00~-141.836~80-2~0
19617~Well_80-2~6/13/2000 17:00~-143.360~80-2~0
19618~Well_80-2~6/13/2000 18:00~-145.138~80-2~0
19619~Well_80-2~6/13/2000 19:00~-145.138~80-2~0
19620~Well_80-2~6/13/2000 20:00~-146.154~80-2~0
19621~Well_80-2~6/13/2000 21:00~-146.916~80-2~0
19622~Well_80-2~6/13/2000 22:00~-147.678~80-2~0
19623~Well_80-2~6/13/2000 23:00~-147.678~80-2~0
19624~Well_80-2~6/14/2000 0:00~-148.694~80-2~0
19625~Well_80-2~6/14/2000 1:00~-148.694~80-2~0
19626~Well_80-2~6/14/2000 2:00~-148.694~80-2~0
19627~Well_80-2~6/14/2000 3:00~-148.694~80-2~0
19628~Well_80-2~6/14/2000 4:00~-148.694~80-2~0
19629~Well_80-2~6/14/2000 5:00~-149.456~80-2~0
19630~Well_80-2~6/14/2000 6:00~-149.456~80-2~0
19631~Well_80-2~6/14/2000 7:00~-149.456~80-2~0
19632~Well_80-2~6/14/2000 8:00~-150.472~80-2~0
19633~Well_80-2~6/14/2000 9:00~-150.472~80-2~0
19634~Well_80-2~6/14/2000 10:00~-150.472~80-2~0
19635~Well_80-2~6/14/2000 11:00~-150.472~80-2~0
19636~Well_80-2~6/14/2000 12:00~-149.456~80-2~0
19637~Well_80-2~6/14/2000 13:00~-148.694~80-2~0
19638~Well_80-2~6/14/2000 14:00~-148.694~80-2~0
19639~Well_80-2~6/14/2000 15:00~-148.694~80-2~0
19640~Well_80-2~6/14/2000 16:00~-148.694~80-2~0
19641~Well_80-2~6/14/2000 17:00~-148.694~80-2~0
19642~Well_80-2~6/14/2000 

Re: [R] barplot: invalid 'xlim' value

2010-05-15 Thread emorway

Josh, 

I wonder if pulling the data directly from the post will work better for you:
http://r.789695.n4.nabble.com/barplot-invalid-xlim-value-td2217919.html#a2217919

As for your response to the other code I was asking about: that code works
perfectly well as written, so there is no typo.  I just can't figure out why
it needs to be written that way for it to work.  As I mentioned, it is
counterintuitive.

Eric


Re: [R] barplot: invalid 'xlim' value

2010-05-15 Thread emorway

Josh, 

Thanks for looking into this further.  Picking up on your hint I tried to
coerce the xlim values as well:

barplot(coldat, horiz=TRUE, axes=FALSE,
        xlim=c(as.numeric(strptime(well.80.2$date[1], "%m/%d/%Y %H:%M")),
               as.numeric(strptime(well.80.2$date[length(well.80.2$date)], "%m/%d/%Y %H:%M"))),
        ylim=ylim, col=8, space=0)

It doesn't complain at this point, but it doesn't appear to do anything
either.  Hopefully someone else will spot the problem.

Eric


Re: [R] Add 1 week to date with hours included for subset

2010-05-14 Thread emorway

For those whose search lands on this post, here is the solution I found that
worked

well.80.2 <- data.frame(well.80.2,
    week=floor(as.numeric(difftime(strptime(well.80.2$date, "%m/%d/%Y %H:%M", tz=""),
                                   strptime(well.80.2$date[1], "%m/%d/%Y %H:%M", tz=""),
                                   tz="", units="hours"))/168))
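
An equivalent, slightly shorter form (a sketch assuming the same well.80.2
data frame): difftime() can return weeks directly, so the division by 168
hours is not needed.

d <- strptime(well.80.2$date, "%m/%d/%Y %H:%M", tz = "")
well.80.2$week <- floor(as.numeric(difftime(d, d[1], units = "weeks")))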



[R] insert values based on common ID

2010-05-14 Thread emorway

Forum,

With the datasets a and b below, I'm trying to establish a relationship
based on the common column week and insert the values from the weekAvg column
in b into the weekAvg column in a.  The dataset a is several thousand lines
long.  I've tried 'match', writing my own functions, 'rbind.fill', and
various search terms in this forum, all to no avail.  Thanks...

a <- read.table(textConnection("row ID date Reading WellID week weekAvg
1 Well_80-2 6/12/2000 2:00 -202.034 80-2 0 NA
2 Well_80-2 6/12/2000 3:00 -201.018 80-2 0 NA
3 Well_80-2 6/12/2000 4:00 -199.494 80-2 0 NA
4 Well_80-2 6/12/2000 5:00 -197.716 80-2 0 NA
5 Well_80-2 6/12/2000 6:00 -190.858 80-2 0 NA
6 Well_80-2 6/12/2000 7:00 -181.460 80-2 0 NA
7 Well_80-2 6/19/2000 10:00 -166.728 80-2 1 NA
8 Well_80-2 6/19/2000 11:00 -167.490 80-2 1 NA
9 Well_80-2 6/19/2000 12:00 -167.490 80-2 1 NA
10 Well_80-2 6/19/2000 13:00 -167.490 80-2 1 NA
11 Well_80-2 6/19/2000 14:00 -168.506 80-2 1 NA
12 Well_80-2 6/19/2000 15:00 -168.506 80-2 1 NA"), header=TRUE)
closeAllConnections() 

b <- read.table(textConnection("week weekAvg
0 -147.3726
1 -181.3429
2 -151.7208
3 -188.8653
4 -163.7465
5 -161.6873
6 -158.5168
7 -146.6136
8 -175.4351
9 -100.9450
10 -151.3655
11 -125.8975
12 -162.5993"), header=TRUE)
closeAllConnections() 
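
Two standard ways to do this kind of lookup (a sketch using the a and b
frames above): match() keeps a's row order, while merge() is the more general
join.

a$weekAvg <- b$weekAvg[match(a$week, b$week)]   # look up each week's average in b
# or drop the placeholder column and join instead:
# merge(a[, names(a) != "weekAvg"], b, by = "week")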




[R] Add 1 week to date with hours included for subset

2010-05-13 Thread emorway

Forum, 

I've got hourly data for roughly 5 months of time that I want to subset into
successive weeks.  The first two elements of the dataset are

well.80.2$date[1]
[1] 6/12/2000 2:00
9256 Levels: 10/1/2001...

well.80.2$date[2]
[1] 6/12/2000 3:00
9256 Levels: 10/1/2001

and so on until mid-october.  I've been able to add 1 week the first element
with the following:

as.POSIXlt(paste(as.character(as.Date(well.80.2$date[1], "%m/%d/%Y %H:%M") + 7),
                 as.character(format(strptime(well.80.2$date[1], "%m/%d/%Y %H:%M"), "%H:%M")),
                 sep=" "))
[1] 2000-06-19 02:00:00
  
What I've been unable to do is then use this for comparative purposes in the
subset command, something to the effect of:

contin.80.2 <- subset(well.80.2, well.80.2$date >= well.80.2$date[1] &
    well.80.2$date <= as.POSIXlt(paste(as.character(as.Date(well.80.2$date[1], "%m/%d/%Y %H:%M") + 7),
                                       as.character(format(strptime(well.80.2$date[1], "%m/%d/%Y %H:%M"), "%H:%M")),
                                       sep=" ")))

If there is an easier way to parse the dataset into weekly bins taking into
consideration the hours, I'm open to suggestions.
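
One possible simplification (a sketch assuming the same "%m/%d/%Y %H:%M"
format): converting the dates to POSIXct once makes both the plus-7-days
arithmetic and the subset comparison straightforward.

d <- as.POSIXct(strptime(well.80.2$date, "%m/%d/%Y %H:%M", tz = ""))
week1 <- well.80.2[d >= d[1] & d < d[1] + 7 * 24 * 3600, ]   # first week of data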

Thank you.



[R] Polygon Shading Based on Relative Line Position

2010-05-06 Thread emorway

Forum, 

I have two time series that I plot using the data/code below.  The x-axis
shows weekly time steps starting on 6/3/1999 (SP=10) and ending on
10/25/2007 (SP=448).  I'm trying to shade between the two lines in a similar
fashion to the graph found here:

http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=122

Can the polygon function be used to do this?  There are lots of posts
pertaining to shading between two lines on this forum, but none with
alternating shading.  I did look over the ?polygon discussion, specifically
the Brownian motion plot, but there the shading is constant regardless of
which line is above the other.  Due to the 20 or so times the lines
cross each other I wasn't sure how to modify the last example in ?plot
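
A minimal sketch with toy data (not the series below) of one way to get
alternating shading: build one polygon per series against the element-wise
minimum of the two, so each colour only appears where its line is on top.

x  <- 1:100
y1 <- sin(x / 8)
y2 <- cos(x / 10)
plot(x, y1, type = "n", ylim = range(y1, y2))
polygon(c(x, rev(x)), c(y1, rev(pmin(y1, y2))), col = "lightblue", border = NA)  # shaded where y1 is above
polygon(c(x, rev(x)), c(y2, rev(pmin(y1, y2))), col = "salmon", border = NA)     # shaded where y2 is above
lines(x, y1); lines(x, y2)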

DS.Load.filtered <- read.table(textConnection("SP   JMLoad CoolidgeLoad
10 6287371153.3   5383351151
11 5016438684.4   5149840395
12 4020238617.8   4097757349
18 1681570096.3   2330476184
19 2011084698.2   1568941725
21 1145032462.8959165782
22 1063307043.8645435740
24  925120077.1667723509
25  587189249.9707324096
26  541535137.0639181098
27  421481931.6692405402
28  268198706.4599814619
30  490052206.1509190477
32  429672101.9518539938
33  479360154.8518084664
34  332705173.5627963150
407437921.6520362947
417272780.2511155928
44  189312657.1461023436
46  159935695.6834850739
48  580467416.1909478429
49  711678748.9825542943
54  820999693.4602141144
55 1281245122.3539526986
56 1234525377.3607948868
57 1107369351.4761832777
59  930102499.8513626069
60  934106161.7525646818
61 1074012858.8436276835
62 1856920332.5660276684
64 2564136121.9   1610652560
65 2404760972.5   1465864246
66 2035806517.6   1591773461
68 1886124760.8   1611344996
71 2287805156.8   1382811156
73 1376909678.5791595277
74 1265209973.9638691715
75  777429216.0404560430
76  595882924.8315035344
77  521183273.0253026361
78  533726296.6245560652
82  267965540.5295694926
943645170.3334233598
953644636.1338526100
963616835.0357388387
973478475.0360407980
983395990.1383170168
993424680.2375567185
1037706454.2338718083
1047484685.1341367271
105   12100851.7350978464
106   43374657.9326563936
109 1172149909.6263931016
110 1281917018.7345653054
113  835813366.3682986351
114  524765450.1717050861
115  220652819.6948602823
116 1147516084.1349630240
117 2205837663.4750724733
118 2506424170.0   1178190964
119 2265639020.8   1075464253
120 2068263612.8   1413273356
121 2156640167.4   1066482651
123 1910796858.1   1037178735
124 1611203132.5   1073785193
125  778788726.3542689397
126  833341620.3306049833
127  761683183.2237702159
128  719063683.5195396505
129 1146047977.4285866547
130  801138377.3316385292
131  657429028.1213995657
133  118871474.0211083899
134  204384729.1192625209
135  260090209.9199517710
136   24201037.7194351081
1378417831.9198857821
138   15899802.8191076815
1399772533.4199805343
1403616601.1203149621
1413893325.1207562895
1462789497.9256905051
1472624087.4239137572
1482320129.9233871812
1492375061.3244789361
1502623139.9230084008
1512679009.1235954773
1522623771.6221495933
1532679186.6222735406
1552651789.1172518801
1562679171.0161670825
157   13314442.4144901548
158   56080331.1125125488
159  786640402.5184918943
160 1069335727.1260721047
161 1077616331.7124086059
163  183185160.3118822561
164  143488200.9 69403023
165  200176394.9 76669321
166  229339546.2 57984637
167  270128506.1 58218605
168  794854222.2 63732496
169 1043519643.8429644515
170  987053037.5480691194
173  196554934.9 52094336
174  208620907.2 45293829
175   79198798.3 33293628
176   91447590.0 29101572
181  511657497.8143139470
187   90245277.8101068632
188   15889549.8116258690
1893182084.1112911409
1903042679.0113186554
1912903819.8113216345
1922903116.0114596143
1932710229.9116519095
1942710557.5116534186
1952599733.3110767298
1962516739.6111859396
1972543807.5116030474
1982322890.6116630304
2012681845.9114896789
2022488111.8115222450
2032626004.0117142088
2042238467.7113530229
2052293570.7119880375
2062266600.2113747430
2072570052.8104537669
2082515095.9100642204
2102432006.7 70423676
211  158252693.6 67914034
212  339920353.6 59404637
213  337973820.1 74727730
214  403331802.8 75101010
219 1278541396.5244833157
220 1134609769.0204191503
221 1137461413.1255674658
222 1101354706.1117538411
223  509565793.1 40451505
224  147178862.1 21781015
225   

Re: [R] Polygon Shading Based on Relative Line Position

2010-05-06 Thread emorway

The last sentence should read:

Due to the 20 or so times the lines cross each other I wasn't sure how to
modify the last example in ?polygon


[R] melt, remove rows with common id if NA encountered in value

2010-05-05 Thread emorway

forum,

After melting a data.frame I get some 18,000+ rows (a few of them are shown
below).  I want to filter out all rows that share an SP and variable
combination with a row containing NA in the value column.  So in the example
rows below, all rows with SP=425 and variable=CoolidgeLoad would be removed,
because NA was encountered in at least one of those rows (in this case two of
them).  Continuing the example, rows with SP=426 and variable=CoolidgeLoad
would be retained, since NA doesn't appear in the value column of any row
containing both of those values.  Rows with SP=361 and variable=JMLoad would
also survive.  After performing the filter I plan to cast the data.

-Eric

SP   variable value
425  CoolidgeLoad 6.044483e+07
425  CoolidgeLoad 5.314102e+07
425  CoolidgeLoad 5.287761e+07
425  CoolidgeLoad 4.592003e+07
425  CoolidgeLoad 4.314002e+07
425  CoolidgeLoad   NA
425  CoolidgeLoad   NA
426  CoolidgeLoad 5.412465e+07
426  CoolidgeLoad 6.122925e+07
426  CoolidgeLoad 6.514926e+07
426  CoolidgeLoad 6.605055e+07
426  CoolidgeLoad 6.525006e+07
426  CoolidgeLoad 6.166645e+07
426  CoolidgeLoad 6.167510e+07
427  CoolidgeLoad 5.372381e+07
427  CoolidgeLoad 4.551596e+07
427  CoolidgeLoad 4.246508e+07
427  CoolidgeLoad   NA
427  CoolidgeLoad   NA
427  CoolidgeLoad   NA
427  CoolidgeLoad   NA
359        JMLoad 3.036887e+05
359        JMLoad           NA
359        JMLoad           NA
359        JMLoad           NA
359        JMLoad           NA
359        JMLoad 3.313669e+05
359        JMLoad 3.037101e+05
360        JMLoad 3.036887e+05
360        JMLoad 2.761196e+05
360        JMLoad           NA
360        JMLoad 3.037101e+05
360        JMLoad 3.036887e+05
360        JMLoad 3.036672e+05
360        JMLoad 3.036458e+05
361        JMLoad 3.036029e+05
361        JMLoad 3.035814e+05
361        JMLoad 3.312733e+05
361        JMLoad 3.037101e+05
361        JMLoad 3.313435e+05
361        JMLoad 3.037315e+05
361        JMLoad 3.313669e+05
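
A minimal sketch of one way to do the filtering (it assumes the molten data
sits in a data frame named molten, a placeholder name, with the columns shown
above): ave() with FUN = any flags every (SP, variable) group containing an
NA, and the complement of that flag keeps the clean groups before casting.

keep    <- !ave(is.na(molten$value), molten$SP, molten$variable, FUN = any)
molten2 <- molten[keep, ]    # groups with no NA in 'value' survive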


[R] different forms of nls recommendations

2010-03-20 Thread emorway

Hello, 

Using this data:
http://n4.nabble.com/file/n1676330/US_Final_Values.txt US_Final_Values.txt 

and the following code i got the image at the end of this message:

US.final.values <- read.table("c:/tmp/US_Final_Values.txt", header=TRUE, sep=" ")
US.nls.1 <- nls(US.final.values$ECe~a*US.final.values$WTD^b+c, data=US.final.values,
                start=list(a=2.75, b=-0.95, c=0.731), trace=TRUE)
f.US1 <- function(x){coef(US.nls.1)["a"]*x^coef(US.nls.1)["b"]+coef(US.nls.1)["c"]}
xvals.US1 <- seq(min(US.final.values$WTD), max(US.final.values$WTD), length.out=75)
yvals.US1 <- f.US1(xvals.US1)
Rsq.nls.1 <- sum((predict(US.nls.1)-mean(US.final.values$ECe))^2)/sum((US.final.values$ECe-mean(US.final.values$ECe))^2)
plot(US.final.values$WTD, US.final.values$ECe, col="red", pch=19, cex=.75)
lines(xvals.US1, yvals.US1, col="blue")

but the r^2 wasn't so hot.  
Rsq.nls.1
[1] 0.2377306

So I wanted to try a different equation of the general form a/(b+c*x^d)

US.nls.2 <- nls(US.final.values$ECe~(a/(b+c*US.final.values$WTD^d)), data=US.final.values,
                start=list(a=100.81, b=73.7299, c=0.0565, d=-6.043), trace=TRUE, algorithm="port")

but that ended with "Convergence failure: false convergence (8)".  I tried
relaxing the convergence criteria, to no avail.  Assuming the form of the
equation I'm trying to fit is the problem, I've been unable to track down a
source that shows the shapes of various non-linear equations I might try as
alternatives.  Any suggestions?
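
One hedged avenue (not from the thread, and it may or may not converge on
this particular data set): R's self-starting models avoid hand-picked
starting values; SSasymp(), for example, fits
y ~ Asym + (R0 - Asym) * exp(-exp(lrc) * x), another monotone form worth
comparing.

US.nls.3 <- nls(ECe ~ SSasymp(WTD, Asym, R0, lrc), data = US.final.values)
summary(US.nls.3)
# compare fits, e.g. AIC(US.nls.1, US.nls.3)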

http://n4.nabble.com/file/n1676330/nls_image.jpg 



Re: [R] adding infrequent date labels to x-axis

2010-02-23 Thread emorway

Hi Hrishi, 

With regard to the post you helped me out with, I've got my graph almost
dialed in the way I would like it.  There is one more small thing my
searches have been unable to turn up; I've tried searching with "stagger
labels", "offset labels", and "alternate labels", to no avail.

the pertinent lines of code I'm basing my question on are:
m2 <- ticks.lab=="Feb" | ticks.lab=="May" | ticks.lab=="Aug" | ticks.lab=="Nov"
Axis(x,at=ticks.at[m2],side=1,labels=ticks.lab[m2],las=1,cex.axis=0.75,tick=F,padj=-2)

I would prefer to use cex.axis=1, but because the labels crowd each other not
all of them get placed.  Is there a command I can use that would essentially
stagger the labels, like so?
   |
   |
   |   (Graph Area)
   +__
May  Nov  May Nov 
  Aug  Feb  Aug Feb

http://n4.nabble.com/file/n1566433/TimeSeries_Example1.jpg 
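
A minimal sketch with toy labels (not the ticks.at/ticks.lab objects from the
thread): staggering is usually done by drawing the labels yourself with
mtext() and an alternating line offset, rather than relying on Axis() to thin
the crowded ones.

plot(1:24, rnorm(24), xaxt = "n", xlab = "")
at  <- 1:24
lab <- rep(c("Feb", "May", "Aug", "Nov"), 6)
axis(1, at = at, labels = FALSE)
mtext(lab, side = 1, at = at, line = rep(c(1, 2), length.out = length(at)), cex = 0.75)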

Respectfully,
Eric


[R] adding infrequent date labels to x-axis

2010-02-22 Thread emorway

I'm sure there is a clever way to do the following, but I've been unable to
find it on this forum or by writing my own functions.  I have 8 years' worth
of weekly data but would like to restrict the labels on the x-axis to months
only.  I've included the first year's worth of data below. 

My line of thought has been along these lines

x <- seq(as.Date("1999-04-01"), as.Date("2007-10-25"), by="1 month")
y <- as.POSIXlt(x)$mon + 1
months <- month.name
month.names <- months[as.numeric(y)]
month.names <- substr(month.names, 1, 3)

plot(cropped.cast1$date,cropped.cast1$Frac_ET_Satsfd_mean,xaxt='n')
mtext(text=month.names,side=1,at=???

I'm not sure how to tie month.names back to their true location on the
x-axis?
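
A minimal sketch with a toy weekly series (standing in for cropped.cast1):
because the x values are date-times, the monthly label positions are simply
the first-of-month dates converted to the same numeric scale and passed to
axis() as 'at'.

d   <- seq(as.POSIXct("1999-04-08", tz = ""), by = "1 week", length.out = 90)
v   <- runif(length(d), 0.7, 0.9)
mo  <- seq(as.Date("1999-04-01"), as.Date(max(d)), by = "1 month")
lab <- substr(month.name[as.POSIXlt(mo)$mon + 1], 1, 3)
plot(d, v, xaxt = "n", xlab = "", ylab = "Frac_ET_Satsfd_mean")
axis(1, at = as.numeric(as.POSIXct(mo)), labels = lab, cex.axis = 0.7)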

I appreciate any insights, Eric
Also,

class(cropped.cast1$date)
[1] "POSIXt"  "POSIXct"

cropped.cast1
        date  Frac_ET_Satsfd_mean  Frac_ET_Satsfd_sd
  1999-04-08            0.8344885         0.13545515
  1999-04-15            0.8355466         0.12810387
  1999-04-22            0.8595579         0.11259251
  1999-04-29            0.8997225         0.09611060
  1999-05-06            0.8714364         0.09527164
  1999-05-13            0.8530203         0.11088544
  1999-05-20            0.8437866         0.12689882
  1999-05-27            0.8310003         0.13985307
  1999-06-03            0.8031203         0.15851422
  1999-06-10            0.8288505         0.12827027
  1999-06-17            0.8251130         0.13051783
  1999-06-24            0.8227639         0.14227501
  1999-07-01            0.7914689         0.15892716
  1999-07-08            0.8050929         0.14465413
  1999-07-15            0.8370141         0.11843615
  1999-07-22            0.8448697         0.10823010
  1999-07-29            0.8561925         0.10694348
  1999-08-05            0.8520790         0.09953065
  1999-08-12            0.8429925         0.10545427
  1999-08-19            0.8397966         0.11629002
  1999-08-26            0.8367953         0.12363411
  1999-09-02            0.8219479         0.13870596
  1999-09-09            0.8218193         0.13617427
  1999-09-16            0.8217584         0.13346997
  1999-09-23            0.8216834         0.13304117
  1999-09-30            0.8111005         0.14367143
  1999-10-07            0.8090813         0.14967750
  1999-10-14            0.8265188         0.13484263
  1999-10-21            0.8391333         0.11873929
  1999-10-28            0.8426102         0.11215439
  1999-11-04            0.8431813         0.11007485
  1999-11-11            0.8394140         0.11206864
  1999-11-18            0.8350650         0.11042384
  1999-11-25            0.8360082         0.11011926
  1999-12-02            0.8362129         0.10834491
  1999-12-09            0.8377512         0.10519698
  1999-12-16            0.8367339         0.10176535
  1999-12-23            0.8338621         0.10273662
  1999-12-30            0.8317094         0.10470654


Re: [R] adding infrequent date labels to x-axis

2010-02-22 Thread emorway

Hello Hrishi, 

The command you suggested plotted the years on the x-axis only.  It was the
same exact plot as the one I included in the original post.

Respectfully,
Eric


Re: [R] adding infrequent date labels to x-axis

2010-02-22 Thread emorway

Hello Hrishi,

That worked great, and in the process I learned some new ways of going about
writing R code.  Thank you very much for helping me out!

Eric


[R] formatting do.call output

2010-02-11 Thread emorway

Hello, 

With a dataset that is about 1.4e6 rows long
my.dat[seq(1:20),]
        date Cell_ID Classification WT_Depth Frac_ET_Satsfd
  1999-04-08     974              3 3.585083    0.244561400
  1999-04-08    1188              3 3.526001    0.123484700
  1999-04-08    1189              3 3.187012    0.215916700
  1999-04-08    1403              3 3.395020    0.163972900
  1999-04-08    1409              3 2.368042    0.415165500
  1999-04-08    1617              3 4.039917    0.003224544
  1999-04-08    1618              3 3.457886    0.148585900
  1999-04-08    1619              3 2.148926    0.475924300
  1999-04-08    1620              2 1.523926    0.633190900
  1999-04-08    1621              3 2.197998    0.469294300
  1999-04-15    1622              7 2.759033    0.325698400
  1999-04-15    1623              3 2.802002    0.313719600
  1999-04-15    1624              3 3.062988    0.243275900
  1999-04-15    1833              2 3.483032    0.141840700
  1999-04-15    1834              2 3.128052    0.232235400
  1999-04-15    1835              7 3.354004    0.176209000
  1999-04-15    1836              3 2.691040    0.341234700
  1999-04-15    1837              3 3.140991    0.228083800
  1999-04-15    1838              3 2.392944    0.413379300
  1999-04-15    2048              2 3.712036    0.084534560
.
.
.
I use
edm.func <- function(x){c(mu=mean(x), var=var(x))}
output <- do.call(rbind, tapply(my.dat$Frac_ET_Satsfd, list(my.dat$date, my.dat$Classification), edm.func))
data.frame(output)
             mu        var
1  0.7980007 0.03446669
2  0.7947966 0.03429280
3  0.8240736 0.02482441
.
.
3573 0.4509044 0.03821251
3574 0.4484108 0.03856110
3575 0.4519150 0.03889944

There are 447 dates and 8 classifications (1-8).  What is the best way to
include the corresponding date and classification that goes with each row?
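
A minimal sketch of one alternative (it assumes my.dat as shown above):
aggregate() carries the grouping columns along, so the date and
Classification labels come with each row automatically; the matrix column it
returns can be flattened afterwards.

out <- aggregate(Frac_ET_Satsfd ~ date + Classification, data = my.dat,
                 FUN = function(x) c(mu = mean(x), var = var(x)))
out <- data.frame(out[1:2], out$Frac_ET_Satsfd)   # split mu/var into plain columns
head(out)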

Thanks Eric


[R] boxplot label cleanup

2010-02-10 Thread emorway

The boxplot below is close to what I would like, but needs some cleanup.
I've been unable to address the following:
1) How can I use the full extent of the plot space (there is white space on
either side of the data)?
2) How can I reduce the # of month labels (e.g. every 3rd month Jan Mar
Jun...) but still keep them staggered?
3) The year labels at the top of the plot have been positioned using the
following line

mtext(year.observed,3,at=seq(3,99,by=12),padj=-1)

I would prefer to position the labels using the actual values contained in
the date column of the data.frame (which is a date type).  Hard-coding the
positions as I have done doesn't seem like the right approach.

4) The grid is meant to partition the data from year to year and was
generated using

grid(nx=9, ny=0, col="gray60", lty=1)

but it is clearly off.  Is there a way to draw the grid lines exactly
between December and January?  I wasn't sure how much code I should include,
so here it all is (images follow):

my.dat <- read.table("C:/Eric/Programs/UtilityProgram_Summarize_UNSAT_Zone_Results_8Class/Process_UZF_Output/bin/R_Data_fracETSat_US.txt", header=TRUE, sep="\t")
attach(my.dat)
dates <- strptime(as.character(my.dat$date), "%m/%e/%Y")
my.dat = my.dat[,2:5]
my.dat = data.frame(date=dates, my.dat)
detach(my.dat)
daterange = c(as.POSIXlt(min(my.dat$date)), as.POSIXlt(max(my.dat$date)))
rm(dates)
attach(my.dat)

cropped = subset(my.dat, Classification==5)
natveg = subset(my.dat, Classification==7)

cropped[c(1,2,3),]  # example data
          date Cell_ID Classification WT_Depth Frac_ET_Satsfd
94  1999-04-08    4395              5 1.572998      0.7596927
95  1999-04-08    4396              5 1.923096      0.7588058
101 1999-04-08    4403              5 2.290039      0.7911426

month.year <- function (x) {12*(as.POSIXlt(x)$year-99)+as.POSIXlt(x)$mon+1}
month <- month.year(my.dat$date)
months <- month.name
cropped.month <- month.year(cropped$date)
mean.cropped.frac <- tapply(cropped$Frac_ET_Satsfd, cropped.month, mean)
month.observed <- numeric(length(mean.cropped.frac))
for(i in 1:length(mean.cropped.frac)){month.observed[i]=as.numeric(names(mean.cropped.frac[i]))%%12}
month.observed[month.observed==0] <- 12  # make the Decembers 12 instead of 0
month.names <- months[as.numeric(month.observed)]
month.names <- substr(month.names, 1, 3)

year.vector <- function (x) {as.POSIXlt(x)$year+1900}
cropped.year <- year.vector(cropped$date)
mean.cropped.frac.yr <- tapply(cropped$Frac_ET_Satsfd, cropped.year, mean)
year.observed <- numeric(length(mean.cropped.frac.yr))
for(i in 1:length(mean.cropped.frac.yr)){year.observed[i]=as.numeric(names(mean.cropped.frac.yr[i]))}

boxplot(Frac_ET_Satsfd~month.year(cropped$date), data=cropped, col="lightblue", outline=F,
        range=0.0001, ylim=c(0,1), xaxt="n")
axis(1, at=1:103, labels=F)
mtext(month.names, 1, at=1:103, line=rep(c(1,2),2))
mtext(year.observed, 3, at=seq(3,99,by=12), padj=-1)
grid(nx=9, ny=0, col="gray60", lty=1)
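
A hedged sketch for items 1 and 4 (it assumes the objects built above, with
the 103 boxes sitting at integer positions): par(xaxs = "i") before the
boxplot removes the extra padding on either side, and the year separators can
be drawn exactly half-way between each December and January box with abline()
instead of grid().

par(xaxs = "i")                               # item 1: use the full x extent
# ... boxplot() call as above ...
dec.idx <- which(diff(month.observed) < 0)    # item 4: index of each December box
abline(v = dec.idx + 0.5, col = "gray60", lty = 1)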

http://n4.nabble.com/file/n1476343/timeSeriesMessy.jpg 

Thanks, Eric


[R] object 'xxx' not found

2010-02-08 Thread emorway

The following line of code seems fairly straightforward, yet it is kicking
back an error message:
for (i in 1:length(mean.natveg.frac)){month.observed[i]=as.numeric(names(mean.natveg.frac[i]))%%12}

Error message:

Error in month.observed[i] = as.numeric(names(mean.natveg.frac[i]))%%12 :
  object 'month.observed' not found

Seems like it's not found because I'm trying to define it.  Also
length(mean.natveg.frac) = 103
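
That is indeed the cause: element i of month.observed cannot be assigned
until the vector exists.  A minimal fix (a sketch) is to pre-allocate it
first; the loop can also be dropped entirely, because %% is vectorised.

month.observed <- numeric(length(mean.natveg.frac))   # pre-allocate before indexing
for (i in seq_along(mean.natveg.frac)) {
  month.observed[i] <- as.numeric(names(mean.natveg.frac)[i]) %% 12
}
# vectorised equivalent:
# month.observed <- as.numeric(names(mean.natveg.frac)) %% 12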


Eric


[R] log-normal overlay

2010-01-04 Thread emorway

Hello,

Using the following lines of code, I created the following graph:

d.resid-c(-5.63,4.25,4.38,3.05,-1.85,1.40,1.80,-0.80,4.20,3.20,-2.70,-7.20,-0.10,-2.50,0.60,1.20,1.30,1.20,1.30,1.27,6.91,8.55,7.18,8.73,1.82,-1.45,5.91,5.45,-0.82,-4.55,0.82,82.91,1.73,3.09,5.64,-1.73,-9.55,4.27,17.45,9.64,-18.33,21.77,-31.56,-0.65,-13.11,-14.75,-18.75,-15.56,-3.11,-1.75,-1.84,-2.02,-8.93,-7.47,0.89,10.77,-15.93,-16.93,-13.33,-9.05,-2.25,-3.53,4.95,5.13,69.38,61.76,71.18,20.27,60.51,-5.38,-14.52,-5.04,3.02,-13.19,-14.70,-14.53,-8.81,4.30,-7.89,-6.37,60.72,84.11,25.01,91.55,95.65,96.13,92.23,93.22,91.74,92.12,90.79,85.30,73.77,72.44,65.51,49.31,45.50,-10.31,-22.50,-20.21,-3.20,-11.32,-9.29,-1.75,-0.91,-1.93,-0.23,-6.32,-0.84,-3.89,-5.04,1.71,-18.29,-36.63,-36.75,-24.63,-24.95,-27.85,-26.60,-21.20,-14.80,-29.80,-22.80,-21.70,-26.20,-25.10,-37.50,-34.40,-34.80,-33.70,-34.80,-33.70,-19.73,-18.09,-23.45,-23.82,-19.27,-20.18,-24.45,-25.09,-23.55,-25.82,-21.55,-24.18,-34.09,-27.27,-25.91,-26.36,-28.73,-23.55,-32.73,-29.55,-28.36,-30.41,-125.31,-15.64,-24.73,-19.1!
 
9,-19.83,-17.83,-19.64,-22.19,-21.83,-22.92,-51.10,-27.01,-27.55,-26.19,-17.01,-5.63,-0.75,-5.63,-2.95,9.15,4.40,6.80,-0.80,-0.80,9.20,10.30,12.80,5.90,6.50,-0.40,-6.80,-4.70,-6.80,-4.70,7.27,7.91,3.55,13.18,1.73,3.82,6.55,6.91,16.45,9.18,5.45,11.82,-5.09,13.73,15.09,20.64,9.27,15.45,10.27,5.45,2.64,-3.41,7.69,13.36,7.27,12.81,7.17,17.17,19.36,14.81,12.17,24.08,19.90,21.99,19.45,15.81,-67.31,24.99,24.99,14.59,11.64,15.38,14.10,16.49,27.00,-58.88,-89.97,-59.18,-65.28,-62.50,-69.03,-75.57,-89.66,-58.22,-49.96,-61.24,-68.06,-41.82,-80.30,-60.49,-70.98,-79.45,-77.85,-127.19,-62.80,-103.81,-119.91,-102.65,-113.82,-105.53,19.62,17.05,15.38,12.25,35.38,36.05,33.15,30.40,38.80,35.20,17.20,32.20,59.30,61.80,15.90,35.50,28.60,10.20,14.30,10.20,14.30,33.27,22.91,10.55,4.18,10.73,31.82,32.55,13.91,15.45,21.18,26.45,25.82,25.91,22.73,16.09,17.64,18.27,35.45,31.27,22.45,26.64,26.59,40.69,39.36,47.27,44.81,33.17,32.17,38.36,39.81,35.17,33.08,40.90,44.99,47.45,23.81,33.69,41.99,38.99,44.59!
 ,46.82,39.90,43.19,33.89,13.57,
27.35,-51.71,34.38,33.25,19.38,33.05,46.65,45.40,51.80,39.20,38.20,47.20,33.30,32.80,58.90,57.50,57.60,54.20,57.30,54.20,57.30,38.27,38.91,48.55,46.18,30.73,25.82,25.55,39.91,50.45,35.18,30.45,30.82,21.91,44.73,50.09,45.64,37.27,39.45,60.27,62.45,64.64,65.59,90.69,80.36,54.27,59.81,64.17,66.17,53.36,38.81,32.17,53.08,37.90,33.99,37.45,40.81,56.69,35.99,34.99,43.59,53.92,60.72,64.01,50.71,54.15,56.38,3.05,23.52,43.00,40.03,29.59,46.00,46.30,51.52,69.93,53.73,43.37,46.90,60.85,77.61,68.17,66.64,77.04,81.18,31.68,78.85,73.41,89.58,64.58,50.15,53.52,40.95,59.24,47.83,35.71,33.65,41.31,46.26,61.64,34.87,47.56,34.87,23.34,18.96,34.29,24.79,30.10,16.40,27.16,11.92,-11.71,31.22,23.09,28.17,38.76,39.60,44.68,40.28,44.86,47.29,51.87,56.81,60.51,42.97,48.51,33.85,24.10,4.79,4.72,-16.47,-14.64,-14.41,-3.89,1.14,-2.22,-8.31,-14.71,-0.63,-1.75,-14.13,-6.95,9.65,3.40,12.80,-1.80,-1.80,9.20,-5.70,-5.20,16.90,14.50,17.60,21.20,22.30,21.20,22.30,-0.73,-1.09,12.55,7.18,-5.27,-8.18,-9.45,3.91,1!
 
0.45,-4.82,-3.55,-8.18,-32.09,6.73,11.09,1.64,3.27,1.45,22.27,24.45,30.64,-32.41,-34.31,-19.64,-50.73,-45.19,-36.83,-34.83,-53.64,-62.19,-66.83,-50.92,-62.10,-67.01,-63.55,-59.19,-44.31,-59.01,-54.01,-55.41,-47.84,-41.96,-40.19,35.35,12.57,18.54,23.38,29.91,-10.36,31.24,29.30,103.89,1.66,-34.99,-17.01,-10.43,-14.63,-7.00,30.05,-57.81,-57.81,35.92,25.51,14.33,13.31,6.46,5.44,24.05,3.36,25.82,17.36,-4.63,-0.75,-12.63,-4.45,3.15,-1.60,3.80,-4.80,-2.80,1.20,-10.70,-12.20,5.90,5.50,6.60,12.20,11.30,12.20,11.30,-2.73,-2.09,4.55,13.18,-4.27,-8.18,-11.45,-0.09,6.45,-8.82,-8.55,-9.18,-12.09,0.73,3.09,0.64,3.27,-3.55,11.27,14.45,22.64,18.59,28.69,30.36,9.27,13.81,19.17,19.17,7.36,-1.19,-2.83,10.08,0.90,-6.01,-2.55,2.81,13.69,1.99,4.99,-2.41,6.25,13.04,13.29,26.85,-1.53,6.48,7.38,-27.86,-8.91,7.69,-5.34,-1.46,8.61,10.75,-23.28,-87.47,-24.73,-30.51,-26.38,-21.26,-3.59,-19.13,-17.61,-7.21,-0.02,81.98,-5.88,-13.45,-2.29,17.14,12.77,11.87,27.04,32.83,14.71,7.77,4.80,8.80,10.70,27.00,63.63!
 ,16.58,-0.69,-7.64,

[R] sppolot: fill below minimum legend value

2009-09-10 Thread emorway

In the plot below, there are some grid cells with values below 10, which is
the lowest cut value I have specified.  Is there a way, without adjusting the
number of cuts, to tell R to fill in those cells with the lowest possible
color (in this case green)?  There is a white hole in the image about a
quarter of the way in from the left side, which is what I would like to
correct.  Thanks...Eric

The code:
pts <- list("sp.points", K.dat, pch=3, col="black")
cuts <- c(10,20,30,40,50,60,70,80,90,100,200,300,400,500,600,700,800,900,1000)
spplot(lzm.krige.dir["var1.pred"], at=cuts,
       colorkey=list(at=log10(cuts), labels=as.character(cuts)), scales=list(draw=TRUE),
       xlab="Easting", ylab="Northing", key.space="right", cex=1.1,
       col.regions=terrain.colors(30),
       main="Hydraulic Conductivity of Layer 2", sp.layout=list(pts))
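
A hedged workaround (assuming the kriged object above): clamping predictions
below the lowest cut up to that cut fills the hole with the lowest colour
class without changing the cuts or the colour key.

# nudge the sub-10 cells just above the first break so boundary handling cannot drop them
lzm.krige.dir$var1.pred <- pmax(lzm.krige.dir$var1.pred, min(cuts) + 1e-6)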

The image:
http://www.nabble.com/file/p25392472/Image3.jpeg 


[R] spplot modifications

2009-09-07 Thread emorway

http://www.nabble.com/file/p25336596/Conductivity1.jpeg 

I need a little help making modifications to the image included with this
post.  First, rather than using a linear color legend to display the output
I would like to use a log-scale legend.  Thus, the legend on the right would
go from 1 to 1000, for example, with a classic log-scale gradation.  What I
hope to avoid is taking the log of the values and displaying the logged
values using a linear scale.  Second, at the top of the legend there is a
random green bar that shouldn't be there.  If you know what causes this or
more importantly, how to get rid of it, please help.  I've included the code
and text files necessary to create this figure in the zip file.

Thank you,
Eric Morway
http://www.nabble.com/file/p25336596/Morway_R_Qs.zip Morway_R_Qs.zip 


[R] assign point values as labels in spplot

2009-01-29 Thread emorway

In the code that follows, I'm trying to label points with their corresponding
values, but I have been unsuccessful after many variations of the code.  The
code below creates the plot I want; I simply cannot get the black points (+)
to display their actual values.  I'm guessing the problem is somewhere in the
second-to-last line of code (the one starting with pts <- ...).  I have
attached the two text files needed to run the code.
http://www.nabble.com/file/p21734824/R_Hk_Krig_File_Log.txt
R_Hk_Krig_File_Log.txt 
http://www.nabble.com/file/p21734824/Predict_Location_XY.txt
Predict_Location_XY.txt 
Eric

library(gstat)

K.dat <- read.table("C:/temp/R_Hk_Krig_File_Log.txt", header = TRUE)
my.dat <- data.frame(K.dat)
attach(my.dat)
coordinates(my.dat) = ~X+Y

pred.dat <- read.table("C:/temp/Predict_Location_XY.txt", header = FALSE)
names(pred.dat) <- c("x", "y", "p")
my.pred.dat <- data.frame(pred.dat)
coordinates(my.pred.dat) <- ~x+y
gridded(my.pred.dat) = TRUE

#Set up 2 exponential variograms
lzm.vgm.exp.noNug <- vgm(1.1, "Exp", 2000, 0)
lzm.vgm.exp.Nug <- vgm(0.7, "Exp", 3000, 0.4)

#Krige the exponential variograms
lzm.krig.exp.noNug <- krige(Kh_Log10~1, my.dat, my.pred.dat, model=lzm.vgm.exp.noNug)
lzm.krig.exp.Nug <- krige(Kh_Log10~1, my.dat, my.pred.dat, model=lzm.vgm.exp.Nug)

lzm.krig.exp.noNug$var1.pred <- 10^lzm.krig.exp.noNug$var1.pred
lzm.krig.exp.Nug$var1.pred <- 10^lzm.krig.exp.Nug$var1.pred

library(sp)
library(lattice)
pts <- list("sp.points", K.dat, pch=3, col="black", labels=as.character(K.dat$Kh))
spplot(lzm.krig.exp.Nug["var1.pred"], scales=list(draw=TRUE),
       xlab="Easting", ylab="Northing", cuts=25, key.space="right", cex=1.1,
       col.regions=terrain.colors(25), main="Hydraulic Conductivity of Layer 2",
       sp.layout=list(pts))
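
A hedged alternative (it assumes the objects above): the "sp.points" layout
item does not use a labels argument, so the values are usually added as a
separate "sp.text" layout item placed at the point coordinates.

pts  <- list("sp.points", my.dat, pch = 3, col = "black")
lbls <- list("sp.text", coordinates(my.dat), as.character(K.dat$Kh), pos = 3, cex = 0.7)
spplot(lzm.krig.exp.Nug["var1.pred"], scales = list(draw = TRUE),
       xlab = "Easting", ylab = "Northing", cuts = 25, key.space = "right",
       col.regions = terrain.colors(25),
       main = "Hydraulic Conductivity of Layer 2",
       sp.layout = list(pts, lbls))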
