Re: [R] Help with vector gymnastics

2007-08-22 Thread Erik Iverson
Philip -

I don't know if this is the best way, but it gives you the output you 
want.

Using your tf,

vals <- rle(ifelse(tf, 5 * which(tf), 0))
vals$values[vals$values == 0] <- vals$values[which(vals$values == 0) - 1]
inverse.rle(vals)
[1]  5  5  5  5 25 30 30
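
For what it's worth, a loop-free variant of the same idea (my own sketch, not from the thread) uses cummax() to carry the last TRUE index forward:

```r
tf <- c(TRUE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE)
# seq_along(tf) * tf zeroes out the FALSE positions;
# cummax() then carries the most recent TRUE index forward
idx <- cummax(seq_along(tf) * tf)
5 * idx
# [1]  5  5  5  5 25 30 30
```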

Gladwin, Philip wrote:
 Hello,
 
 What is the best way of solving this problem?
 
 answer <- ifelse(tf == TRUE, i * 5, previous answer)
 where as an initial condition 
 tf[1] <- TRUE
 
 
 For example if,
 tf <- c(T,F,F,F,T,T,F)
 over i = 1 to 7
 then the output of the function will be
 answer = 5 5 5 5 25 30 30 
 
 Thank you.
 
 Phil,
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Imputing missing values in time series

2007-06-22 Thread Erik Iverson
I think my example should work for you, but I couldn't think of a way to 
do this without an iterative while loop.

test <- c(1,2,3,NA,4,NA,NA,5,NA,6,7,NA)

while(any(is.na(test)))
test[is.na(test)] <- test[which(is.na(test)) - 1]

  test
  [1] 1 2 3 3 4 4 4 5 5 6 7 7
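
The same cummax() indexing trick gives a single-pass alternative (a sketch of my own; it assumes the first element is not NA):

```r
test <- c(1, 2, 3, NA, 4, NA, NA, 5, NA, 6, 7, NA)
# index of the most recent non-NA entry at each position
last.obs <- cummax(seq_along(test) * !is.na(test))
test[last.obs]
# [1] 1 2 3 3 4 4 4 5 5 6 7 7
```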

Horace Tso wrote:
 Folks,
 
 This must be a rather common problem with real life time series data
 but I don't see anything in the archive about how to deal with it. I
 have a time series of natural gas prices by flow date. Since gas is not
 traded on weekends and holidays, I have a lot of missing values,
 
 FDate      Price
 11/1/2006  6.28
 11/2/2006  6.58
 11/3/2006  6.586
 11/4/2006  6.716
 11/5/2006  NA
 11/6/2006  NA
 11/7/2006  6.262
 11/8/2006  6.27
 11/9/2006  6.696
 11/10/2006 6.729
 11/11/2006 6.487
 11/12/2006 NA
 11/13/2006 NA
 11/14/2006 6.725
 11/15/2006 6.844
 11/16/2006 6.907
  
 What I would like to do is to fill the NAs with the price from the
 previous date * gas used during holidays is purchased from the week
 before. Though real simple, I wonder if there is a function to perform
 this task. Some of the imputation functions I'm aware of (eg. impute,
 transcan in Hmisc) seem to deal with completely different problems. 
 
 2.5.0/Windows XP
 
 Thanks in advance.
 
 HT
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How do I avoid a loop?

2007-06-19 Thread Erik Iverson
One more variation on the solution, no idea how it compares in speed.

Using your x ...

  ifelse(x, unlist(mapply(seq, to = rle(x)$lengths, from = 1)), 0)
[1] 1 2 3 0 0 1 2 0 1
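
Another loop-free sketch (my own, not from the thread) subtracts the position of the most recent FALSE:

```r
x <- c(TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, TRUE)
# cummax(...) holds the index of the most recent FALSE (0 if none yet);
# subtracting it from the current index restarts the count after each FALSE
y <- seq_along(x) - cummax(seq_along(x) * !x)
y
# [1] 1 2 3 0 0 1 2 0 1
```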

Feng, Ken wrote:
 Hi,
 
 I start with an array of booleans:
 
   x <- c( TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, TRUE );
 
 I want to define y <- f(x) such that:
 
   y <- c( 1, 2, 3, 0, 0, 1, 2, 0, 1 );
 
 In other words, do a cumsum when I see a TRUE, but reset to 0 if I see a 
 FALSE.
 
 I know I can do this with a very slow and ugly loop or maybe use apply,
 but I was hoping there are some R experts out there who can show me
 a cleaner/more elegant solution?
 
 Thanks in advance.
 
 - Ken
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Computing stats on common parts of multiple dataframes

2007-02-13 Thread Erik Iverson
Murali -

I've come up with something that might work, with gratuitous use of 
the *apply functions.  See ?apply, ?lapply, and ?mapply for how this 
would work.  Basically, just set my.list equal to a list of the 
data.frames you would like included.  I made this to work with matrices 
first, so it does use as.matrix() in my function.  Also, this could be 
turned into a general function so that you could specify a function 
other than median.

#Make my.list equal to a list of dataframes you want
my.list <- list(df1,df2)

#What's the shortest?
minrow <- min(sapply(my.list, nrow))
#Chop all to the shortest
tmp <- lapply(my.list, function(x) x[(nrow(x) - (minrow - 1)):nrow(x), ])

#Do the computation, could change median to mean, or a user defined
#function
matrix(apply(mapply("[", lapply(tmp, as.matrix),
                    MoreArgs = list(1:(minrow * 2))), 1, median),
       ncol = 2)

HTH.  Whether or not this is any better than your for-loop solution is 
left up to you.
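
A perhaps tidier sketch of the same chop-then-median idea, using tail() and simplify2array() (my own variant; it assumes the frames share the same columns, and the tiny a and b below are made up for illustration):

```r
# made-up miniature versions of the poster's data frames
a <- data.frame(eur = c(1.23, 1.21, 1.27), nok = c(1.33, 1.43, 1.42))
b <- data.frame(eur = c(1.25, 1.22), nok = c(1.36, 1.30))
my.list <- list(a, b)

minrow  <- min(sapply(my.list, nrow))
trimmed <- lapply(my.list, function(d) tail(d, minrow))  # keep the last rows
arr <- simplify2array(lapply(trimmed, as.matrix))        # rows x cols x frames
apply(arr, c(1, 2), median)                              # elementwise medians
```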

Erik


Murali Menon wrote:
 Folks,
 
 I have three dataframes storing some information about
 two currency pairs, as follows:
 
 R a
 
 EUR-USD NOK-SEK
 1.23    1.33
 1.22    1.43
 1.26    1.42
 1.24    1.50
 1.21    1.36
 1.26    1.60
 1.29    1.44
 1.25    1.36
 1.27    1.39
 1.23    1.48
 1.22    1.26
 1.24    1.29
 1.27    1.57
 1.21    1.55
 1.23    1.35
 1.25    1.41
 1.25    1.30
 1.23    1.11
 1.28    1.37
 1.27    1.23
 
 
 
 R b
 EUR-USD NOK-SEK
 1.23    1.22
 1.21    1.36
 1.28    1.61
 1.23    1.34
 1.21    1.22
 
 
 
 R d
 
 EUR-USD NOK-SEK
 1.27    1.39
 1.23    1.48
 1.22    1.26
 1.24    1.29
 1.27    1.57
 1.21    1.55
 1.23    1.35
 1.25    1.41
 1.25    1.33
 1.23    1.11
 1.28    1.37
 1.27    1.23
 
 The twist is that these entries correspond to dates where the
 *last* rows in each frame are today's entries, and so on
 backwards in time.
 
 I would like to create a matrix of medians (a median for each row
 and for each currency pair), but only for those rows where all
 dataframes have entries.
 
 My answer in this case should look like:
 
 EUR-USD NOK-SEK
 
 1.25    1.41
 1.25    1.33
 1.23    1.11
 1.28    1.37
 1.27    1.23
 
 where the last EUR-USD entry = median(1.27, 1.21, 1.27), etc.
 
 Notice that the output is of the same dimensions as the smallest dataframe
 (in this case 'b').
 
 I can do it in a clumsy fashion by first obtaining the number
 of rows in the smallest matrix, chopping off the top rows
 of the other matrices to reduce them this size, then doing a
 for-loop across each currency pair, row-wise, to create a
 3-vector which I then apply median() on.
 
 Surely there's a better way to do this?
 
 Please advise.
 
 Thanks,
 
 Murali Menon
 
 
 
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] remove component from list or data frame

2007-02-08 Thread Erik Iverson


Jason Horn wrote:
 Sorry to ask such a simple question, but I can't find the answer after 
 extensively searching the docs and the web.
 
 How do you remove a component from a list?  For example say you have:
 
 lst <- c(5,6,7,8,9)
 
 How do you remove, for example, the third component in the list?

Is the object lst really a list?  Try is.list(lst) to check.
To remove an element from a vector, use, for example, lst[-3].

 
 lst[[3]]] <- NULL generates an error:  Error: more elements supplied 
 than there are to replace
 
 

If lst were actually a list, that command would work with the obvious 
syntax fix.  So would lst[-3] though.
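
To make the distinction concrete (a quick sketch):

```r
lst <- list(5, 6, 7, 8, 9)   # a genuine list this time
lst[[3]] <- NULL             # drops the third component
length(lst)                  # now 4

vec <- c(5, 6, 7, 8, 9)
vec[-3]                      # negative indexing works for vectors too
# [1] 5 6 8 9
```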

 
 Also, how do you remove a row from a data frame?  For example, say you 
 have:
 
 lst1 <- c(1,2,3,4,5)
 lst2 <- c(6,7,8,9,10)
 frame <- data.frame(lst1,lst2)
 
 How do you remove, for example, the second row of frame?

You use

frame[-2, ] #remove second row, keep all columns.

 
 Thanks,
 
 - Jason
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R in Industry

2007-02-08 Thread Erik Iverson
Ben -

Ben Fairbank wrote:
 To those following this thRead:
 
 There was a thread on this topic a year or so ago on this list, in which
 contributors mentioned reasons that corporate powers-that-be were
 reluctant to commit to R as a corporate statistical platform.  (My
 favorite was "There is no one to sue if something goes wrong.")
 
 One reason that I do not think was discussed then, nor have I seen
 discussed since, is the issue of the continuity of support.  If one
 person has contributed disproportionately heavily to the development and
 maintenance of a package, and then retires or follows other interests,
 and the package needs maintenance (perhaps as a consequence of new
 operating systems or a new version of R), is there any assurance that it
 will be available?  With a commercial package such as, say, SPSS, the
 corporate memory and continuance makes such continued maintenance
 likely, but is there such a commitment with R packages?  If my company
 came to depend heavily on a fairly obscure R package (as we are
 contemplating doing), what guarantee is there that it will be available
 next month/year/decade?  I know of none, nor would I expect one.

But you would have the source code, so as long as someone knew R, you 
could maintain it, expand it, customize it, and patch it yourselves, even 
if the original maintainer left the project.  You likely can't say the 
same about a commercial package.


 
 As R says when it starts up, R is free software and comes with
 ABSOLUTELY NO WARRANTY.
 
 Ben Fairbank
 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Patrick Burns
 Sent: Thursday, February 08, 2007 10:24 AM
 To: Albrecht,Dr. Stefan (AZ Private Equity Partner)
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] R in Industry
 
From what I know, Matlab is much more popular in
 fixed income than R, but R is vastly more popular in
 equities.  R seems to be making quite a lot of headway
 in finance, even in fixed income to some degree.
 
 At least to some extent, this is probably logical behavior --
 fixed income is more mathematical, and equities is more
 statistical.
 
 Matlab is easier to learn mainly because it has much simpler
 data structures.  However, once you are doing something
 where a complex data structure is natural, then R is going to
 be easier to use and you are likely to have a more complete
 implementation of what you want.
 
 If speed becomes a limiting factor, then moving the heavy
 computing to C is a natural thing to do, and very easy with R.
 
 Patrick Burns
 [EMAIL PROTECTED]
 +44 (0)20 8525 0696
 http://www.burns-stat.com
 (home of S Poetry and A Guide for the Unwilling S User)
 
 Albrecht, Dr. Stefan (AZ Private Equity Partner) wrote:
 
 
Dear all,

I was reading with great interest your comments about the use of R in
the industry. Personally, I use R as scripting language in the
 
 financial
 
industry, not so much for its statistical capabilities (which are
great), but more for programming. I once switched from S-Plus to R,
because I liked R more, it had a better and easier to use documentation
and it is faster (especially with loops).

Now some colleagues of mine are (finally) eager to join me in my
quantitative efforts, but they feel that they are more at ease with
Matlab. I can understand this. Matlab has a real IDE with symbolic
debugger, integrated editor and profiling, etc. The help files are
great, very comprehensive and coherent. It also could be easier to
learn.

And, I was very astonished to realise, Matlab is very, very much faster
with simple for loops, which would speed up simulations considerably.
So I have trouble to argue for a use of R (which I like) instead of
Matlab. The price of Matlab is high, but certainly not prohibitive. R
 
 is
 
great and free, but maybe less comfortable to use than Matlab.

Finally, after all, I have the impression that in many job offerings in
the financial industry R is much less often mentioned than Matlab.

I would very much appreciate any comments on my above remarks. I know
there has been some discussions of R vs. Matlab on R-help, but these
could be somewhat out-dated, since both languages are evolving quite
quickly.

With many thanks and best regards,
Stefan Albrecht



  [[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
 
 http://www.R-project.org/posting-guide.html
 
and provide commented, minimal, self-contained, reproducible code.


 

 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 __
 R-help@stat.math.ethz.ch mailing list
 

Re: [R] setting a number of values to NA over a data.frame.

2007-02-07 Thread Erik Iverson
John -

Your initial problem uses 0, but the example uses 1 for the value that 
gets an NA.  My solution uses 1 to fit with your example.  There may be 
a better way, but try something like

data1[3:5] <- data.frame(lapply(data1[3:5], function(x) ifelse(x == 1, NA, 
x)))

The data1[3:5] is just a test subset of columns I chose from your data1 
example.  Notice it appears twice, once on each side of the assignment 
operator.

In English: apply to each column of the data frame (which is a list) a 
function that will return NA if the element is 1, and the value 
otherwise; then turn the modified list into a data.frame and save 
it as data1.
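
Direct logical sub-assignment on the block of columns is another option (a sketch with made-up columns; it skips the lapply/ifelse round trip):

```r
data1 <- data.frame(a = c(1, 2, 1), b = c(3, 1, 5))  # made-up example data
# data1[1:2] == 1 is a logical matrix; assigning NA through it
# replaces every matching cell in those columns at once
data1[1:2][data1[1:2] == 1] <- NA
data1
```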



See the help files for lapply and ifelse if you haven't seen those before.

Maybe someone has a better way?

Erik

John Kane wrote:
 This is probably a simple problem but I don't see a
 solution.
 
 I have a data.frame with a number of columns where I
 would like 0 - NA
 
 thus I have df1[, 144:157] <- NA if df1[, 144:157] == 0
 and df1[, 190:198] <- NA if df1[, 190:198] == 0
 
 but I cannot figure out a way do this.  
 
 cata <- c( 1,1,6,1,1,NA)
 catb <- c( 1,2,3,4,5,6)
 doga <- c(3,5,3,6,4, 0)
 dogb <- c(2,4,6,8,10, 12)
 rata <- c(NA, 9, 9, 8, 9, 8)
 ratb <- c( 1,2,3,4,5,6)
 bata <- c( 12, 42,NA, 45, 32, 54)
 batb <- c( 13, 15, 17,19,21,23)
 id <- c('a', 'b', 'b', 'c', 'a', 'b')
 site <- c(1,1,4,4,1,4)
 mat1 <- cbind(cata, catb, doga, dogb, rata, ratb, bata, batb)
 
 data1 <- data.frame(site, id, mat1)
 data1
 
  # Obviously this works fine for one column
 
 data1$site[data1$site == 1] <- NA  ; data1
 
 but I cannot see how to do this with indices that
 would allow me to do more than one column in the
 data.frame.
 
 At one point I even tried something like this
 a <- c(site)
 data1$a[data1$a == 1] <- NA
 
 which seems to produce a corrupt data.frame.
 
 I am sure it is simple but I don't see it.  
 
 Any help would be much appreciated.
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Use a text variable's value to specify another varaible?

2007-01-26 Thread Erik Iverson
Can I ask why you aren't just passing in the object to your function, 
but instead a text name for that object?
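
For the record, the usual answer to this kind of indirect reference is get() (a minimal sketch; 'datavector' here is just an illustrative name):

```r
datavector <- c(1, 1, 2, 2, 2)
varname <- "datavector"
table(get(varname))   # get() looks the object up by its name
```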

Ben Fairbank wrote:
 Greetings guRus --
 
  
 
 If a variable, e.g., 'varname', is a character string, e.g. varname <-
 "datavector", and I want to apply a function, such as table(), to
 datavector, what syntax or method will do so using only the variable
 varname?  This seems similar to indirect addressing, but I have not seen
 a method for it in the R manuals.  Is there a general name for such
 indirect reference that one might search for?
 
  
 
 (This came up while writing a function that takes the value of 'varname'
 from the keyboard and then applies functions to it.)
 
  
 
 With thanks for any solution,
 
  
 
 Ben Fairbank
 
  
 
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] relative frequency plot

2006-04-27 Thread Erik Iverson
See ?truehist in the MASS package.



Philipp Pagel wrote:
 On Thu, Apr 27, 2006 at 10:48:39AM -0700, [EMAIL PROTECTED] wrote:
 
Hi All,

I want to use hist to get the relative frequency plot. But the range of
ylab is greater than 1,which I think it should be less than 1 since it
stands for the probability.

I'm confused. Could you please help me with it?
 
 
 I was pretty confused by that too, at first. The solution is that
 freq=FALSE causes hist to plot the DENSITY rather than frequency. And
 density is not necessarily the same as relative frequency. Excerpt from
 ?hist:
 
  density: values f^(x[i]), as estimated density values. If
   'all(diff(breaks) == 1)', they are the relative frequencies
   'counts/n' and in general satisfy sum[i; f^(x[i])
   (b[i+1]-b[i])] = 1, where b[i] = 'breaks[i]'.
 
 If you want relative frequencies, try something like this:
 
 myhist = hist(x, breaks=52, plot=F)
 myhist$counts = myhist$counts / sum(myhist$counts)
 plot(myhist, main=NULL, border=TRUE, xlab="days", xlim=c(0,6), lty=2)
 
 Not exactly clean, though -- we are messing with the myhist object...
 
 
 cu
   Philipp
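
To make that counts-normalisation suggestion self-contained, here is a sketch (plot = FALSE keeps it device-free; the sample is arbitrary):

```r
set.seed(1)
x <- rexp(200)                               # any sample will do
myhist <- hist(x, breaks = 52, plot = FALSE)
myhist$counts <- myhist$counts / sum(myhist$counts)
sum(myhist$counts)                           # relative frequencies sum to 1
```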


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] relative frequency plot

2006-04-27 Thread Erik Iverson
Martin -

Of course you are right.  The documentation for truehist (and hist) 
explains that fact nicely, which is why I thought to send him there. 
Sorry for any confusion.

Thanks,
Erik

Martin Maechler wrote:
Erik == Erik Iverson [EMAIL PROTECTED]
on Thu, 27 Apr 2006 13:44:16 -0500 writes:
 
 
 Erik See ?truehist in the MASS package.
 
 Not in this case!
 truehist() also computes a density,
 and its values on the y axis are not probabilities, either!
   hist(*, freq = FALSE)
 is fully sufficient here -- the problem of the original poster
 was to understand that a density can have values larger than 1.
 It may be interesting and is somewhat disappointing for us
 teachers of statistics to see how many people have posted in
 the past on this exact topic, sometimes even more or less
 assuming that R was doing some things wrongly because it showed
 densities (or density estimates as here) with values larger than
 one...  oh dear
  "Against stupidity the gods themselves contend in vain." 
   - Friedrich Schiller, Die Jungfrau von Orleans
 
 Martin
 
Philipp Pagel wrote:
  On Thu, Apr 27, 2006 at 10:48:39AM -0700, [EMAIL PROTECTED]
  wrote:
  
  Hi All,
  
  I want to use hist to get the relative frequency
  plot. But the range of ylab is greater than 1,which I
  think it should be less than 1 since it stands for the
  probability.
  
  I'm confused. Could you please help me with it?
  
  
  I was pretty confused by that, too at first. The solution
  is that freq=False cause hist to plot the DENSITY rather
  than frequency. And density is not necesssarily the same
  as relative frequency. Excerpt from ?hist:
  
  density: values f^(x[i]), as estimated density values. If
  'all(diff(breaks) == 1)', they are the relative
  frequencies 'counts/n' and in general satisfy sum[i;
  f^(x[i]) (b[i+1]-b[i])] = 1, where b[i] = 'breaks[i]'.
  
  If you want relative distance try something like this:
  
  myhist = hist(x,breaks=52, plot=F) myhist$counts =
  myhist$counts / sum(myhist$counts)
  plot(myhist,main=NULL,border=TRUE,xlab=days,xlim=c(0,6),lty=2)
  
  Not exactly clean, though -- we are messing with the
  myhist object...
  
  
  cu Philipp
  
 
 Erik __
 Erik R-help@stat.math.ethz.ch mailing list
 Erik https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do
 Erik read the posting guide!
 Erik http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Were to find appropriate functions for a given task in R

2006-04-26 Thread Erik Iverson
  Then, people tend to
  define their own functions (I'm doing this too), and a lack of
  standardization makes it difficult to keep everything under control.

If you think of R as more of a language rather than a pre-packaged 
statistical program, I feel that helps.  In the C++ world, there are 
people all over writing classes and functions, and many of these have 
close or duplicate functionality.  As a programmer, you can decide which 
one to use for your needs, or program your own, or extend someone 
else's; the choice is yours.  The same holds in the R world: you have choices, 
and there does not necessarily have to be only one method for each task.

I don't feel there needs to be 'control', everyone can implement what 
they want.

If you think of R only as a prepackaged statistical program, however, I can 
see getting frustrated that there are usually many ways to do the same 
thing.  R is much more than this, though.  That's how I see it, anyway.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Indexing a vector by a list of vectors

2006-04-04 Thread Erik Iverson
Hello R-help -

I have

vec <- c("string1", "string2", "string3")
ind <- list(c(1,2), c(1,2,3))

I want vec indexed by each vector in the list ind.
The first element of the list I want would be vec[c(1,2)],
the second element would be vec[c(1,2,3)], like the following.

[[1]]
[1] "string1" "string2"

[[2]]
[1] "string1" "string2" "string3"

Using for loops, this is simple.  For fun, I tried to implement it 
without a for loop using some combination of *apply() functions and [.

I succeeded with

myfunc <- function(x) {
   do.call("[", list(vec, x))
}
lapply(ind, myfunc)

I was not, however, able to get my desired result without defining my 
own dummy function.  Can anyone think of a way?  As I said, I already 
have a way that works, I'm just curious if there is a more 'elegant' 
solution that does not rely on my having to define another function. 
Seems like it should be possible.
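
For the record, the shortest version I know still uses a function, just an anonymous one (a sketch, not a truly function-free answer):

```r
vec <- c("string1", "string2", "string3")
ind <- list(c(1, 2), c(1, 2, 3))
lapply(ind, function(i) vec[i])   # anonymous function instead of myfunc
```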

Thanks, Erik Iverson

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Function dependency function

2006-03-31 Thread Erik Iverson
I had a similar need and found package mvbutils, function foodweb().

 From the help file:

  'foodweb' is applied to a group of functions (e.g. all those in a
  workspace); it produces a graphical display showing the hierarchy
  of which functions call which other ones. This is handy, for
  instance, when you have a great morass of functions in a
  workspace, and want to figure out which ones are meant to be
  called directly. 'callers.of(funs)' and 'callees.of(funs)' show
  which functions directly call, or are called directly by, 'funs'.

Hope that helps,
Erik Iverson

Matthew Dowle wrote:
 Hi,
 
 Is there a function taking a function as an argument, which returns all the
 functions it calls, and all the functions those functions call, and so
 on?  I could use Rprof, but that would involve executing the function,
 which may miss some branches of code.   I'd really like a function which
 looks at the source code to work out all the functions that could possibly
 be called.   When I develop a function and release to production environment
 (or to some library) then I may need to release other functions I've
 developed which that function calls.  As soon as the function call stack
 goes outside .GlobalEnv (for example into base) then the search can stop as
 I'm only interested in functions in .GlobalEnv  (my own functions).  Also
 useful would be the reverse function  i.e.  find all functions which could
 possibly call the function.  This could be used to find functions which are
 never called and could be considered for deletion.
 
 Thanks,
 Matthew
 
 
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] tcltk loading in R-2.2.1 from src

2006-03-09 Thread Erik Iverson
Patrice -

I had a very similar problem using TCL/TK 8.3.
Below is the email I sent to my computing group at work about how I 
fixed it.  Note that since my TCL/TK header (.h) files were in an odd 
location, the first step probably isn't relevant for you.  But I bet the 
second step is.

--
I believe I have found the solution to this problem.  There were 2 steps 
I took to get a build of R in my home directory that properly uses 
Tcl/Tk 8.3 under Linux.

The first was setting an environment variable to let R know where the 
tcl.h and tk.h files reside.  This environment variable is 
TCLTK_CPPFLAGS and is set to "-I/s/include".  This can be set in the 
config.site in R's build directory also.

(NOTE to R-help: The above is site specific to my location, /s is for 
software on the network.)

Second, there is some problem with the way R interacts with the 
tkConfig.sh file in /s/lib (or wherever your .sh file is located).  It 
comes from the following line in tkConfig.sh.


TK_XINCLUDES='# no special path needed'

The "#" character, indicating a comment, is somehow misinterpreted by 
the configure script, which breaks the Tcl/Tk functionality.  I was able 
to get around this by simply removing the comment and leaving it as
TK_XINCLUDES=''

That got me a working version of the newest R using Tcl/Tk.  I'm not 
sure if Tcl/Tk version 8.4 would still put that comment in there, I 
believe it's changed though, so you'd only have to follow the first step 
to get R compiled with Tcl/Tk support.

I found reference to this problem on the R mailing list, it appears to 
only affect certain installations of Tcl/Tk.

-
HTH,
Erik Iverson

Patrice Seyed wrote:
 Hi,
 
 Having trouble loading tcltk in R 2.2.1 built from source.
 
 ./configure, make, make check, and make install run ok.
 
 
   library(tcltk)
 Error in firstlib(which.lib.loc, package) :
 Tcl/Tk support is not available on this system
 Error in library(tcltk) : .First.lib failed for 'tcltk'
 
 even though it is listed in library() output.
 
 I have the same problem even if i compile with options:
 ./configure --with-tcltk --with-tcl-config=/usr/lib/tclConfig.sh 
 --with-tk-config=/usr/lib/tkConfig.sh
 
 Is there a dep for R 2.2.1 on a specific version of tcl? Any hints on 
 this issue appreciated.
 
 running on linux (2.4.21-4) version:
 rpm -qa | grep tcl
 tcl-devel-8.3.5-92
 tcl-8.3.5-92
 
 Specifically, the package pbatR loads this library during installation.
 
 Thanks,
 Patrice


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html