Re: [R] Allocating shelf space

2007-05-09 Thread Gad Abraham

> A: Make efficient use of space
> B: Minimise the spatial dislocation of related books
>(it is acceptable to separate large books from small books
>on the same subject, for the sake of efficient packing).

Some comments, hope they make sense:

Let f(x) be a function that maps from a specific book arrangement to a 
certain amount of space wastage.

You're also trying to minimise some function g() of the books' locations. 
You can't minimise two functions at once, unless you minimise some 
function of both: h(f(x), g(x)). It's up to you to determine what h() is.

For example, you could use a linear function, deciding that saving space 
is 10 times more important than keeping books close together. Then your 
objective function could be:
minimise:   h = 10 f(x) + g(x)
subject to: f(x) >= 0, g(x) >= 0
(plus some nontrivial constraints on x)

(You should also set lower bounds on the solution values, otherwise f 
will always be minimised at the expense of g, since f is "worth" more.)

Although I've stated the problem in terms of Linear Programming, it's 
really cheating. The much bigger issue is the combinatorial optimisation 
problem underneath --- different arrangements of x result in different 
values of h. This is much harder than LP, for anything but a small 
number of objects to arrange. I'd be tempted to set up a toy version, 
with a small number of possible x values and simple constraints, and run 
some heuristic-driven optimisation method such as simulated annealing, 
Ant Colony Optimisation, Genetic Algorithms, etc.
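
For instance, a minimal toy sketch of that last idea, using optim()'s 
built-in simulated annealing (method = "SANN") with a custom "move one 
book" proposal; all the numbers below are invented for illustration:

set.seed(1)
widths   <- sample(2:8, 20, replace = TRUE)   # made-up book widths
subjects <- sample(1:5, 20, replace = TRUE)   # made-up subject codes
nshelf <- 4; shelfcap <- 30                   # 4 shelves, each 30 units wide

h <- function(x) {                            # x[i] = shelf assigned to book i
  used <- tapply(widths, factor(x, levels = 1:nshelf), sum)
  used[is.na(used)] <- 0
  f <- sum(pmax(used - shelfcap, 0))          # space violation (wastage proxy)
  g <- sum(tapply(x, subjects, function(s) length(unique(s)) - 1))  # scatter
  10 * f + g                                  # "space is 10 times more important"
}
neighbour <- function(x, ...) { x[sample(20, 1)] <- sample(nshelf, 1); x }

fit <- optim(sample(nshelf, 20, replace = TRUE), h, gr = neighbour,
             method = "SANN", control = list(maxit = 5000))
fit$par                                       # shelf assignment found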

Cheers,
Gad

-- 
Gad Abraham
Department of Mathematics and Statistics
The University of Melbourne
Parkville 3010, Victoria, Australia
email: [EMAIL PROTECTED]
web: http://www.ms.unimelb.edu.au/~gabraham

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread Hans-Peter
> - My code gives error and warning messages in some situations. I want to
> test that the errors and warnings work, but these flags are the correct
> response to the test. In fact, it is an error if I don't get the flag.
> How easy is it to set up automatic tests to check warning and error
> messages work?

Maybe like this:

### for errors:
res1fkt <- function() xls.info( exc )
res1 <- try( res1fkt(), silent = TRUE )
if (!inherits( res1, "try-error" )) stop( "xls.info: expected an error" )
cat( "REQUIRED (EXPECTED) error message: ", res1 )

### for warnings:
tryCatch( res1 <- encodeDateTime( yd, md, dd, hd, mind, secd, msd ),
  warning = function(x) cat( "REQUIRED (EXPECTED) warning message:\n",
                             x$message, "\n" ) )
  # have to resubmit the command as I didn't find a way to execute the command
  # (assignment) and catch the warning message (but suppress the warning)
suppressWarnings( res1 <- encodeDateTime( yd, md, dd, hd, mind, secd, msd ) )
if (!all( res1 == ddate )) stop( "encode/decode, data not equal" )
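
An alternative (just a sketch, reusing the same hypothetical
encodeDateTime call) is to catch the warning with withCallingHandlers()
and keep the result in one evaluation, so the command does not have to
be resubmitted:

catch_warning <- function(expr) {
  w <- NULL
  val <- withCallingHandlers(expr,
    warning = function(cond) {
      w <<- conditionMessage(cond)          # remember the message ...
      invokeRestart("muffleWarning")        # ... but suppress the warning
    })
  list(value = val, warning = w)
}
# res <- catch_warning( encodeDateTime( yd, md, dd, hd, mind, secd, msd ) )
# if (is.null( res$warning )) stop( "expected a warning" )
# if (!all( res$value == ddate )) stop( "encode/decode, data not equal" )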


-- 
Regards,
Hans-Peter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Follow-up about ordinal logit with mixtures: how about 'continuation ratio' strategy?

2007-05-09 Thread Paul Johnson
This is a follow up to the message I posted 3 days ago about how to
estimate mixed ordinal logit models.  I hope you don't mind that I am
just pasting in the code and comments from an R file for your
feedback.  Actual estimates are at the end of the post.

### Subject: mixed ordinal logit via "augmented" data setup.

### I've been interested in estimating an ordinal logit model with
### a random parameter.  I asked in r-help about it. It appears to be
### a difficult problem because even well established commercial
### programs like SAS are prone to provide unreliable estimates.

### So far, I've found 3 avenues for research.  1) Go Bayesian and use
### MCMC to estimate the model.  2) Specify a likelihood function and
### then use R's optim function (as described in Laura A. Thompson,
### 2007, S-PLUS (and R) Manual to Accompany Agresti's Categorical
### Data Analysis (2002) 2nd edition).  My guess is that either of
### those approaches would be worth the while, but I might have
### trouble persuading a target audience that they have good
### properties.  3) Adapt a "continuation ratio" approach.

### This latter approach was suggested by a post in r-help by Daniel
### Farewell 
### http://tolstoy.newcastle.edu.au/R/help/06/08/32398.html#start
### It pointed me in the direction of "continuation ratio" logit models
### and one way to estimate an ordinal logit model with random
### parameters.

### Farewell's post gives working example code that shows a way to
### convert a K category ordinal variable into K-1 dichotomous
### indicators (a "continuation ratio" model). Those K-1 indicators
### can be "stacked" into one column and then a logistic regression
### program that is written for a two-valued output can be used.
### Farewell reasoned that one might then use a program for two-valued
### outputs including mixed effects.  In his proposal, one would use
### the program lmer (package: lme4) ( a binomial family with a logit
### link) to estimate parameters for a dichotomous logit model with
### random parameters.

### This is the sort of magic trick I had suspected might be possible.
### Still, it is hard to believe it would work.  But in the r-help
### response to the post by Farewell, there is no general objection
### against his modeling strategy.

### I had not studied "continuation ratio" logit models before, so I
### looked up a few articles on estimation of ordinal models by
### re-coding the output as a sequence of binary comparisons (stop
### ratios, continuation ratios, etc).  The article that is most clear
### on how this can be done to estimate a proportional odds logistic
### model is

### Stephen R. Cole, Paul D. Allison, and Cande V. Ananth,
### Estimation of Cumulative Odds Ratios
### Ann Epidemiol 2004;14:172–178.

### They claim that one can recode an n-chotomy into n-1 dichotomous
### indicators.  Each observation in the original dataset begets n-1
### lines in the augmented version.  After creating the dichotomous
### indicator, one uses an ordinary dichotomous logit model to
### estimate parameters and cutpoints for an ordinal logit
### model. Cole, et al., are very clear.
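
### (As a minimal illustration of that recoding -- my reading of it, with
### made-up data -- a K = 4 level ordinal response can be expanded into
### K - 1 = 3 stacked dichotomies and fitted with an ordinary binomial glm:)

## sketch only: cumulative-odds recoding with invented data
set.seed(42)
n <- 200
x <- rnorm(n)
y <- cut(x + rlogis(n), breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)
K <- 4
aug <- do.call(rbind, lapply(1:(K - 1), function(k)
    data.frame(cutpt = factor(k), x = x, ybin = as.integer(y <= k))))
fit <- glm(ybin ~ cutpt + x, family = binomial, data = aug)
coef(fit)   # the 'cutpt' terms play the role of the ordinal cutpoints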

### There is an additional benefit to the augmented data approach.
### One can explicitly test the proportional odds assumption by checking
### for interactions between the included independent variables and the
### "level" of the dependent variable being considered.  The Cole
### article shows some examples where the proportional assumption appears
### to be violated.

### To test it, I created the following example.  This shows the
### results of maximum likelihood estimation with the programs "polr"
### (package:MASS) and "lrm" (package: Design).  The estimates from
### the augmented data approach are not exactly the same as polr or
### lrm, but they are close.  It appears to me the claims about the
### augmented data approach are mostly correct.  The parameter
### estimates are pretty close to the true values, while the estimates
### of the ordinal cutpoints are a bit difficult to interpret.

### I don't know what to make of the model diagnostics for the augmented
### data model. Should I have confidence in the standard errors?
### How to interpret the degrees of freedom when 3 lines
### of data are manufactured from 1 observation?  Are likelihood-ratio
### (anova) tests valid in this context?  Are these estimates from the
### augmented data "equivalent to maximum likelihood"?  What does it
### mean that the t-ratios are so different?  That seems to be prima-facie
### evidence that the estimates based on the augmented data set are not
### trustworthy.

### Suppose I convince myself that the estimates of the ordinal model
### are "as good as" maximum likelihood.  Is it reasonable to take the
### next step, and follow Farewell's idea of using this kind of model
### to estimate a mixture model?  There are K-1 lines per case
### in the augmented data set. Suppose the observations were grouped
### into M sets and one

Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread Paul Murrell
Hi


Paul Gilbert wrote:
> Tony
> 
> Thanks for the summary.
> 
> My ad hoc system is pretty good for catching flagged errors, and 
> numerical errors when I have a check.  Could you (or someone else) 
> comment on how easy it would be with one of these more formal frameworks 
> to do three things I have not been able to accomplish easily:
> 
> - My code gives error and warning messages in some situations. I want to 
> test that the errors and warnings work, but these flags are the correct 
> response to the test. In fact, it is an error if I don't get the flag. 
> How easy is it to set up automatic tests to check warning and error 
> messages work?
> 
> - For some things it is the printed format that matters. How easy is it 
> to set up a test of the printed output? (Something like the Rout files 
> used in R CMD check.) I think this is what Tony Plate is calling 
> transcript file tests, and I guess it is not automatically available. I 
> am not really interested in something I would have to change with each 
> new release of R, and I need it to work cross-platform. I want to know 
> when something has changed, in R or my own code, without having to 
> examine the output carefully.
> 
> - (And now the hard one.) For some things it is the plotted output that 
> matters. Is it possible to set up automatic tests of plotting? I can 
> already test that plots run. I want to know if they "look very 
> different". And no, I don't have a clue where to start on this one.


For text-based graphics formats, you can just use diff;  for raster
formats, you can do per pixel comparisons.  These days there is
ImageMagick to do a compare and it will even produce an image of the
difference.  I have an old package called graphicsQC (not on CRAN) that
implemented some of these ideas (there was a talk at DSC 2003, see
http://www.stat.auckland.ac.nz/~paul/index.html).  A student worked on a
much better approach more recently, but I haven't put that up on the web
yet.  Let me know if you'd like to take a look at the newer package (it
would help to have somebody nagging me to get it finished off).
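
As a very rough text-based sketch of the diff idea (nothing
graphicsQC-specific; header lines such as the creation date will
typically differ even when the drawing is identical):

plot_to_file <- function(file, expr) {
  postscript(file, width = 6, height = 6, paper = "special",
             horizontal = FALSE, onefile = FALSE)
  on.exit(dev.off())
  eval(expr)
}
plot_to_file("ref.ps",  quote(plot(1:10)))
plot_to_file("test.ps", quote(plot(1:10)))

ref <- readLines("ref.ps"); test <- readLines("test.ps")
if (length(ref) != length(test)) {
  cat("outputs differ in length\n")
} else {
  cat(sum(ref != test), "differing lines\n")
}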

Paul


> Paul Gilbert
> 
> [EMAIL PROTECTED] wrote:
>> Greetings -
>>
>> I'm finally finished review, here's what I heard:
>>
>>  from Tobias Verbeke:
>>
>> [EMAIL PROTECTED] wrote:
>>> Greetings!
>>>
>>> After a quick look at current programming tools, especially with regards 
>>> to unit-testing frameworks, I've started looking at both "butler" and 
>>> "RUnit".   I would be grateful to receive real-world development 
>>> experience and opinions with either/both.  Please send to me directly 
>>> (yes, this IS my work email), I will summarize (named or anonymous, as 
>>> contributors desire) to the list.
>>>
>> I'm founding member of an R Competence Center at an international 
>> consulting company delivering R services
>> mainly to the financial and pharmaceutical industries. Unit testing is 
>> central to our development methodology
>> and we've been systematically using RUnit with great satisfaction, 
>> mainly because of its simplicity. The
>> presentation of test reports is basic, though. Experiences concerning 
>> interaction with the RUnit developers
>> are very positive: gentle and responsive people.
>>
>> We've never used butler. I think it is not actively developed (even if 
>> the developer is very active).
>>
>> It should be said that many of our developers (including myself) have 
>> backgrounds in statistics (more than in cs
>> or software engineering) and are not always acquainted with the 
>> functionality in other unit testing frameworks
>> and the way they integrate in IDEs as is common in these other languages.
>>
>> I'll soon be personally working with a JUnit guru and will take the 
>> opportunity to benchmark RUnit/ESS/emacs against
>> his toolkit (Eclipse with JUnit- and other plugins, working `in perfect 
>> harmony' (his words)). Even if in my opinion the
>> philosophy of test-driven development is much more important than the 
>> tools used, it is useful to question them from
>> time to time and your message reminded me of this... I'll keep you 
>> posted if it interests you. Why not work out an
>> evaluation grid / check list for unit testing frameworks ?
>>
>> Totally unrelated to the former, it might be interesting to ask oneself 
>> how ESS could be extended to ease unit testing:
>> after refactoring a function some M-x ess-unit-test-function 
>> automagically launches the unit-test for this particular
>> function (based on the test function naming scheme), opens a *test 
>> report* buffer etc.
>>
>> Kind regards,
>> Tobias
>>
>>  from Tony Plate:
>>
>> Hi, I've been looking at testing frameworks for R too, so I'm interested 
>> to hear of your experiences & perspective.
>>
>> Here's my own experiences & perspective:
>> The requirements are:
>>
>> (1) it should be very easy to construct and maintain tests
>> (2) it should be easy to run tests, both automatically and manually
>

Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread Paul Gilbert
Tony

Thanks for the summary.

My ad hoc system is pretty good for catching flagged errors, and 
numerical errors when I have a check.  Could you (or someone else) 
comment on how easy it would be with one of these more formal frameworks 
to do three things I have not been able to accomplish easily:

- My code gives error and warning messages in some situations. I want to 
test that the errors and warnings work, but these flags are the correct 
response to the test. In fact, it is an error if I don't get the flag. 
How easy is it to set up automatic tests to check warning and error 
messages work?

- For some things it is the printed format that matters. How easy is it 
to set up a test of the printed output? (Something like the Rout files 
used in R CMD check.) I think this is what Tony Plate is calling 
transcript file tests, and I guess it is not automatically available. I 
am not really interested in something I would have to change with each 
new release of R, and I need it to work cross-platform. I want to know 
when something has changed, in R or my own code, without having to 
examine the output carefully.

- (And now the hard one.) For some things it is the plotted output that 
matters. Is it possible to set up automatic tests of plotting? I can 
already test that plots run. I want to know if they "look very 
different". And no, I don't have a clue where to start on this one.

Paul Gilbert

[EMAIL PROTECTED] wrote:
> Greetings -
> 
> I'm finally finished review, here's what I heard:
> 
>  from Tobias Verbeke:
> 
> [EMAIL PROTECTED] wrote:
>> Greetings!
>>
>> After a quick look at current programming tools, especially with regards 
> 
>> to unit-testing frameworks, I've started looking at both "butler" and 
>> "RUnit".   I would be grateful to receive real-world development 
>> experience and opinions with either/both.  Please send to me directly 
>> (yes, this IS my work email), I will summarize (named or anonymous, as 
>> contributors desire) to the list.
>>
> I'm founding member of an R Competence Center at an international 
> consulting company delivering R services
> mainly to the financial and pharmaceutical industries. Unit testing is 
> central to our development methodology
> and we've been systematically using RUnit with great satisfaction, 
> mainly because of its simplicity. The
> presentation of test reports is basic, though. Experiences concerning 
> interaction with the RUnit developers
> are very positive: gentle and responsive people.
> 
> We've never used butler. I think it is not actively developed (even if 
> the developer is very active).
> 
> It should be said that many of our developers (including myself) have 
> backgrounds in statistics (more than in cs
> or software engineering) and are not always acquainted with the 
> functionality in other unit testing frameworks
> and the way they integrate in IDEs as is common in these other languages.
> 
> I'll soon be personally working with a JUnit guru and will take the 
> opportunity to benchmark RUnit/ESS/emacs against
> his toolkit (Eclipse with JUnit- and other plugins, working `in perfect 
> harmony' (his words)). Even if in my opinion the
> philosophy of test-driven development is much more important than the 
> tools used, it is useful to question them from
> time to time and your message reminded me of this... I'll keep you 
> posted if it interests you. Why not work out an
> evaluation grid / check list for unit testing frameworks ?
> 
> Totally unrelated to the former, it might be interesting to ask oneself 
> how ESS could be extended to ease unit testing:
> after refactoring a function some M-x ess-unit-test-function 
> automagically launches the unit-test for this particular
> function (based on the test function naming scheme), opens a *test 
> report* buffer etc.
> 
> Kind regards,
> Tobias
> 
>  from Tony Plate:
> 
> Hi, I've been looking at testing frameworks for R too, so I'm interested 
> to hear of your experiences & perspective.
> 
> Here's my own experiences & perspective:
> The requirements are:
> 
> (1) it should be very easy to construct and maintain tests
> (2) it should be easy to run tests, both automatically and manually
> (3) it should be simple to look at test results and know what went wrong 
> where
> 
> I've been using a homegrown testing framework for S-PLUS that is loosely 
> based on the R transcript style tests (run *.R and compare output with 
> *.Rout.save in 'tests' dir).  There are two differences between this 
> test framework and the standard R one:
> (1) the output to match and the input commands are generated from an 
> annotated transcript (annotations can switch some tests in or out 
> depending on the version used)
> (2) annotations can include text substitutions (regular expression 
> style) to be made on the output before attempting to match (this helps 
> make it easier to construct tests that will match across different 
> versions that might have mino

Re: [R] Allocating shelf space

2007-05-09 Thread Liaw, Andy
I don't know if there's an R solution, but this sounds to me like some
variation of the knapsack problem...

 http://en.wikipedia.org/wiki/Knapsack_problem

Andy

From: [EMAIL PROTECTED]
> 
> Hi Folks,
> 
> This is not an R question as such, though it may well have
> an R answer. (And, in any case, this community probably
> knows more about most things than most others ... indeed,
> has probably pondered this very question).
> 
> I: Given a "catalogue" of hundreds of books, where each
> "entry" has author and title (or equivalent ID), and also
> 
> Ia) The dimensions (thickness, height, depth) of the book
> Ib) A sort of classification of its subject/type/genre
> 
> II: Given also a specification of available and possibly
> potential bookshelf space (numbers of book-cases, the width,
> height and shelf-spacing of each, and the dimensions of any
> free wall-space where further book-cases may be placed),
> where some book-cases have fixed shelves and some have shelves
> with (discretely) adjustable position, and additional book-cases
> can be designed to measure (probably with adjustable shelves).
> 
> Question: Is there a resource to approach the solution of the
> problem of optimising the placement of adjustable shelves,
> the design of additional bookcases, and the placement of the
> books in the resulting shelf-space so as to
> 
> A: Make efficient use of space
> B: Minimise the spatial dislocation of related books
>(it is acceptable to separate large books from small books
>on the same subject, for the sake of efficient packing).
> 
> Awaiting comments and suggestions with interest!
> With thanks,
> Ted.
> 
> 
> E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
> Fax-to-email: +44 (0)870 094 0861
> Date: 09-May-07   Time: 18:23:53
> -- XFMail --
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 
> 


--
Notice:  This e-mail message, together with any attachments,...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to control the sampling to make each sample unique

2007-05-09 Thread HelponR
I have a dataset of 1 records which I want to use to compare two
prediction models.

I split the records into test dataset (size = ntest) and training dataset
(size = ntrain). Then I run the two models.

Now I want to shuffle the data and rerun the models. I want many shuffles.

I know that the following command

sample ((1:1), ntrain)

can pick ntrain numbers from 1 to 1. Then I just use these rows as the
training dataset.

But how can I make sure each run of sample() produces different results? I
want the data output to be unique each time.
I tested sample() and found it usually produces different combinations. But
can I control it somehow? Is there a better way to write this?

Thank you,

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Representing a statistic as a colour on a 2d plot

2007-05-09 Thread mister_bluesman

I've been getting the color.scale function to work. However, what I really
need to know is this: if I have the values 0.1, 0.2, 0.3, 0.4, 0.5, for example,
how can I plot these using colours that would be different if the contents of
the file were 0.6, 0.7, 0.8, 0.9 and 1.0? Using color.scale scales them so that
they differ, but only relative to each other, rather than taking the actual
values and converting them to some unique colour/colour intensity.
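
To make concrete what I'm after, something along these lines (assuming the
values always live on a fixed 0-1 scale, so identical values always map to
identical colours) would do:

vals <- c(0.1, 0.2, 0.3, 0.4, 0.5)
pal  <- heat.colors(100)                                  # any fixed palette
cols <- pal[cut(vals, breaks = seq(0, 1, length = 101),
                include.lowest = TRUE)]                   # absolute, not relative
plot(seq_along(vals), vals, pch = 4, col = cols, cex = 2)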

Many thanks



Jim Lemon-2 wrote:
> 
> mister_bluesman wrote:
>> Hello.
>> 
>> I have a 2d plot which looks like this:
>> 
>> http://www.nabble.com/file/8242/1.JPG 
>> 
>> This plot is derived from a file that holds statistics about each point
>> on
>> the plot and looks like this:
>> 
>>        a      b      c      d      e
>>   a    0      0.498  0.473  0.524  0.528
>>   b    0.498  0      0      0      0
>>   c    0.473  0      0      0      0
>>   d    0.524  0      0      0      0
>>   e    0.528  0      0      0      0
>> 
>> However, I have another file called 2.txt, with the following contents:
>> 
>> a     b     c     d     e
>> 0.5   0.7   0.32  0.34  0.01
>> 
>> What I would like to know is how do I convert these values in 2.txt to
>> colours or colour intensities so that the x's in the diagram above can be
>> colour coded as such.
> 
> Yo bluesman,
> 
> check color.scale in the plotrix package, cat
> it'll color your points to the values they're at
> 
> Jim
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Representing-a-statistic-as-a-colour-on-a-2d-plot-tf3703885.html#a10404970
Sent from the R help mailing list archive at Nabble.com.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Allocating shelf space

2007-05-09 Thread Ted Harding
Hi Folks,

This is not an R question as such, though it may well have
an R answer. (And, in any case, this community probably
knows more about most things than most others ... indeed,
has probably pondered this very question).

I: Given a "catalogue" of hundreds of books, where each
"entry" has author and title (or equivalent ID), and also

Ia) The dimensions (thickness, height, depth) of the book
Ib) A sort of classification of its subject/type/genre

II: Given also a specification of available and possibly
potential bookshelf space (numbers of book-cases, the width,
height and shelf-spacing of each, and the dimensions of any
free wall-space where further book-cases may be placed),
where some book-cases have fixed shelves and some have shelves
with (discretely) adjustable position, and additional book-cases
can be designed to measure (probably with adjustable shelves).

Question: Is there a resource to approach the solution of the
problem of optimising the placement of adjustable shelves,
the design of additional bookcases, and the placement of the
books in the resulting shelf-space so as to

A: Make efficient use of space
B: Minimise the spatial dislocation of related books
   (it is acceptable to separate large books from small books
   on the same subject, for the sake of efficient packing).

Awaiting comments and suggestions with interest!
With thanks,
Ted.


E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 09-May-07   Time: 18:23:53
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing a list of Objects

2007-05-09 Thread Patnaik, Tirthankar
Many thanks for this Gaurav. 
 
best,
-Tir

  _  

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 3:01 PM
To: Patnaik, Tirthankar [GWM-CIR]
Cc: r-help@stat.math.ethz.ch; [EMAIL PROTECTED]
Subject: Re: [R] Removing a list of Objects



try this 

rm(list=ls(pat="C243.Daily")) 


> ls(pat=".") 
 [1] ".chutes"              ".densityplot"         ".densityplot.default" ".densityplot.formula"
 [5] ".eda"                 ".eda.ts"              ".fancy.stripchart"    ".freqpoly"
 [9] ".hist.and.boxplot"    ".lag"                 ".lm"                  ".median.test"
[13] ".plot.hist.and.box"   ".scatterplot"         ".sim"                 ".violinplot"
[17] ".violinplot.default"  ".violinplot.formula"  ".z.test"

> ls(pat=".l") 
[1] ".lag" ".lm" 
> rm(list = ls(pat=".l")) 
> ls(pat=".l") 
character(0) 
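
(The rm(list=a) attempt below failed because a was built with list();
rm(list=) wants a character vector of names, so for example this would
also have worked:)

a <- paste("C243.Daily", 1:5, sep = "")   # character vector, no list() wrapper
rm(list = a)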


-  Regards,

 \\\|///
  \\   --   //
   (  o   o  )
oOOo-(_)-oOOo
|
| Gaurav Yadav
| Assistant Manager, CCIL, Mumbai (India)
| Mob: +919821286118 Email: [EMAIL PROTECTED]
| Man is made by his belief, as He believes, so He is.
|   --- Bhagavad Gita   
|___Oooo
oooO(  )
(  )   )   /
 \   ((_/
   \_ )




"Patnaik, Tirthankar " <[EMAIL PROTECTED]> 
Sent by: [EMAIL PROTECTED] 

05/09/2007 02:33 PM 

To
"Gabor Csardi" <[EMAIL PROTECTED]> 
cc
r-help@stat.math.ethz.ch 
Subject
Re: [R] Removing a list of Objects






Hi Gabor,
Tried this, and didn't quite work.

> a <- list(paste("C243.Daily",sep="",1:5))
> a
[[1]]
[1] "C243.Daily1" "C243.Daily2" "C243.Daily3" "C243.Daily4"
"C243.Daily5"

> rm(list=a)
Error in remove(list, envir, inherits) : invalid first argument
>  

-Tir

-Original Message-
From: Gabor Csardi [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 12:37 PM
To: Patnaik, Tirthankar [GWM-CIR]
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Removing a list of Objects

Hmmm,

rm(list=a)

is what you want.

Gabor

On Wed, May 09, 2007 at 10:29:05AM +0530, Patnaik, Tirthankar  wrote:
> Hi,
>  I have a simple beginner's question on removing a
list of
objects. 
> Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my 
> workspace. I'd like to remove these without using rm five times.
> 
> So I write. 
> 
> > a <- list(paste("C243.Daily",sep="",1:5))
> 
> > rm(a)
> 
> Obviously this wouldn't work, as it would only remove the object a.
> 
> But is there any way I could do this, like on the lines of a UNIX `
> (grave-accent)
> 
> Something like
> 
> Prompt> rm `find . -type f -name "foo"`
> 
> TIA and best,
> -Tir
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




DISCLAIMER AND CONFIDENTIALITY CAUTION:\ \ This message and ...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Errors with systemfit package and systemfitClassic()

2007-05-09 Thread iamisha1
I get the following error message after using the systemfit package's function 
'systemfitClassic':

Error in data[[eqnVar]] : subscript out of bounds

When I do this:

MSYS1 <- cbind(Y, Num, F, PO, PD, GO, GD)
MigOLS1 <- systemfitClassic("OLS", F ~ PO + PD + GO + GD, eqnVar = "Num", 
timeVar = "Y", data = MSYS1)
and I get this error message: 

Error in inherits(x, "factor") : attempt to select more than one element

when I do this (removing quotes from columns set as 'eqnVar' and 'timeVar'):

MSYS1 <- cbind(Y, Num, F, PO, PD, GO, GD)
MigOLS1 <- systemfitClassic("OLS", F ~ PO + PD + GO + GD, eqnVar = Num, timeVar 
= Y, data = MSYS1)

When I query 'typeof()' I get the following:

Y: Integer
Num: Integer
F: Integer
PO: Integer
PD: Integer
GO: Double
GD: Double

I have set my data up in a manner analogous to that in the examples in the 
systemfit documentation.  Also, the panel is balanced.  If it matters, here are 
some descriptions of the data:

Y: Year
Num: ID of Flow
F: Flow
PO: Origin Population
PD: Destination Population
GO: Origin GDP
GD: Destination GDP
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] power 2x3 exact test

2007-05-09 Thread Ted Harding
On 09-May-07 22:00:27, Duncan Murdoch wrote:
> On 09/05/2007 5:11 PM, Bingshan Li wrote:
>  > Hi, all,
>  >
>  > I am wondering if there is an algorithm for calculating power of 2x3
>  > table using Fisher exact test. There is one (algorithm 280) for 2x2
>  > Fisher exact test but I couldn't find one for 2x3 table. If we are
>  > not lucky enough to have one, is there any other way to calculate
>  > exact power of 2x3 table? The reason why I want exact power is
>  > because some cells are assumed to be very small and chi square
>  > approximation is not valid.
> 
> I think there are lots of possible alternatives to the null in a 2x3 
> table, so you may have trouble finding a single answer to this
> question. 
>   But assuming you have one in mind, I'd suggest doing a Monte Carlo 
> power calculation:  simulate a few thousand tables from the alternative
> distribution, and see what the distribution of p-values looks like.
> 
> Duncan Murdoch

I'd back Duncan on that point!

More specifically, for the 2x2 table, the table, conditional on the
marginals, is a function of one element (say top left-hand corner),
and the probability of any table depends on the single parameter
which is the odds-ratio of the 4 cell probabilities.

So this case is relatively easy and straightforward and, indeed,
for the 2x2 table R's fisher.test() allows you to specify the
odds-ratio as a "null" parameter.

This is not the case with fisher.test() for a larger (say 2x3)
table, so to investigate that case you cannot use fisher.test().

In all cases, however (according to the FORTRAN code on which
it is based -- see the reference in "?fisher.test"), the rejection
region for the exact fisher.test() consists of those tables with
the smallest probabilities.

For the 2x3 table, say (cell counts with margins, and probabilities):


   a1  b1  c1 | d1        p1  q1  r1
   a2  b2  c2 | d2        p2  q2  r2
   -----------+---
    a   b   c |  n

so that

   a1+b1+c1 = d1, a2+b2+c2 = d2,
   a1+a2 = a, b1+b2 = b, c1+c2 = c

the table is a function of any two functionally independent cells
(say a1 and b1), and its probability is a function of some two
odds-ratios, say

   (p1*r2)/(r1*p2)

   (q1*r2)/(r1*q2)

which, for the standard null hypothesis, are both equal to 1.
However, as Duncan says, alternatives are 2-dimensional and
so there is not a unique natural form for an alternative (as
opposed  to the 2x2 case, where it boils down to (p1*q2)/(p2*q1)
being not equal to 1, therefore greater than 1, or less than 1,
or 2-sidedly either >1 or <1).

The probability of the 2x3 table is proportional to

  ((p1*r2)/(r1*p2))^a1 * ((q1*r2)/(r1*q2))^b1

(or equivalent), divided by the product of the factorials of
a1, b1, c1, a2, b2, c2, subject to summing to 1 over all
combinations of (a1,b1) giving rise to a table compatible
with the marginal constraints.

Given that you expect some cells to be small, it should not
be a severe task to draw up a list of (a1,b1) values which
correspond to rejection of the null hypothesis (that both
ORs equal 1), and then the simulation using different values
of the two odds-ratios will give you the power for each such
pair of odds-ratios.

The main technical difficulty will be simulation of random
tables, conditional on the marginals, with the probabilities
as given above.

I don't know of a good suggestion for this. The fisher.test()
function will not help (see above). In the case of the 2x2
table, it is a straightforward hypergeometric distribution,
but 2x3 is not straightforward. Maybe someone has written
a function for this kind of application, and can point it
out to us???

A quick R-site search did not help!
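
As a stop-gap, one could at least estimate the power unconditionally,
fixing only the row totals and drawing each row from the multinomial
probabilities that encode the alternative (a rough sketch, not the
conditional-on-both-margins scheme discussed above):

power2x3 <- function(p1, p2, n1, n2, nsim = 2000, alpha = 0.05) {
  reject <- replicate(nsim, {
    tab <- rbind(rmultinom(1, n1, p1)[, 1],    # row 1 cell counts
                 rmultinom(1, n2, p2)[, 1])    # row 2 cell counts
    fisher.test(tab)$p.value < alpha
  })
  mean(reject)                                 # estimated power
}
## e.g. power2x3(c(0.2, 0.3, 0.5), c(0.5, 0.3, 0.2), n1 = 15, n2 = 15)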

Best wishes,
ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 10-May-07   Time: 00:12:29
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Looking for a cleaner way to implement a setting certainindices of a matrix to 1 function

2007-05-09 Thread Prasenjit Kapat
On Tuesday 08 May 2007 05:45:53 pm Leeds, Mark (IED) wrote:
> That's a good idea: I didn't realize that my matrices would look so bad
> in the final email. All I want
> to do is output 1's in the diagonal elements and zeros everywhere else,
> but the matrix is not square, so by diagonals I
> really mean if
>
> Lagnum = 1 then the elements are (1,1), (2,2), (3,3),(4,4),(5,5),(6,6)
>
> Lagnum = 2 then the elements (1,1), (2,2),
> (3,3),(4,4),(5,5),(6,6),(7,1),(8,2),(9,3),(10,4),(11,5),(12,6)
>
> Lagnum = 3 then the elements (1,1), (2,2),
> (3,3),(4,4),(5,5),(6,6),(7,1),(8,2),(9,3),(10,4),(11,5),(12,6),(13,1),(1
> 4,2),(15,3),(16,4),(17,5),
> (18,6)
>
> And lagnum always has to be greater than or equal to 1 and less than or
> equal to (number of cols / number of rows). Thanks
> for your advice.

I think the kronecker product method (by Gabor) is a cleaner solution. 
Something like:

kronecker(matrix(1, Lagnum, 1), diag(K))

which stacks Lagnum copies of diag(K) on top of each other.

My experience with such constructions, in really large dimensions, is that 
kronecker(...) is much faster than {r,c}binds and rep(...).
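
For instance, with K = 6 indicator columns and Lagnum = 2 (the second case
in Mark's list), a quick check:

K <- 6; Lagnum <- 2
M <- kronecker(matrix(1, Lagnum, 1), diag(K))
dim(M)                          # 12  6
which(M == 1, arr.ind = TRUE)   # the positions (1,1) ... (12,6) listed above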

Regards
PK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] draw two plots on a single panel

2007-05-09 Thread Nguyen Dinh Nguyen
Hi Pat,
Certainly you can,
but you need to provide more details; then I can advise more closely.
In general, first you set up the "frame" without plotting anything, like this:


plot(range(data1$x1,data2$x2), range(data1$y1,data2$y2), type='n') # just
draws a frame with the x axis spanning x1 and x2 and the y axis spanning
y1 and y2, but plots nothing

Then, depending on your data type (i.e. line or scatter or blah blah), you
have a specific command for each.
Here is an example for reference:

# Two scatter plots in the same graph:

data1 <- data.frame(x1=rnorm(100,70,7), y1=rnorm(100,35,5))
data2 <- data.frame(x2=rnorm(100,78,8),  y2=rnorm(100,40,5))
plot(range(data1$x1,data2$x2), range(data1$y1,data2$y2), type='n') 
points(data1$x1,data1$y1,pch=17, col='blue')
points(data2$x2,data2$y2,pch=16, col='red')


Cheers
Nguyen

Message: 82
Date: Tue, 8 May 2007 16:49:47 -0700 (PDT)
From: "Patrick Wang" <[EMAIL PROTECTED]>
Subject: [R] draw two plots on a single panel
To: r-help@stat.math.ethz.ch
Message-ID:
<[EMAIL PROTECTED]>
Content-Type: text/plain;charset=iso-8859-1

Hi,

I have 2 dataset,

plot(data1)
plot(data2),

but it comes as two graphs, can I draw both on a single panel so I can
compare them?

Thanks
Pat

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] power 2x3 exact test

2007-05-09 Thread Duncan Murdoch
On 09/05/2007 5:11 PM, Bingshan Li wrote:
 > Hi, all,
 >
 > I am wondering if there is an algorithm for calculating power of 2x3
 > table using Fisher exact test. There is one (algorithm 280) for 2x2
 > Fisher exact test but I couldn't find one for 2x3 table. If we are
 > not lucky enough to have one, is there any other way to calculate
 > exact power of 2x3 table? The reason why I want exact power is
 > because some cells are assumed to be very small and chi square
 > approximation is not valid.

I think there are lots of possible alternatives to the null in a 2x3 
table, so you may have trouble finding a single answer to this question. 
  But assuming you have one in mind, I'd suggest doing a Monte Carlo 
power calculation:  simulate a few thousand tables from the alternative 
distribution, and see what the distribution of p-values looks like.

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to avoid infinite loops when sourcing R files

2007-05-09 Thread Adrian Dragulescu

Hello,

I have a bunch of R files in a directory and I want to source all of them
with something like lapply(files, source).

I have a main.R file
source("C:/Temp/set.parms.R")
parms <- set.parms()
do.calcs(parms)
cat("I'm done with main.R\n")

Then I have set.parms.R function
set.parms <- function(){
  cat("I'm in set.parms.\n"); flush.console()
  directory <- "C:/Temp/"
  files <- dir(directory, "\\.[rR]$", full.name=T)
  files <- files[-grep("set.parms.R", files)] # remove infinite loop
  lapply(files, source)  # source them all

  cat("Exiting set.parms.\n"); flush.console()
}

And other functions f1, f2, f3, etc. in the same directory that also
source set.parms.R.  For example:
f1 <- function(){
  source("H:/user/R/RMG/Energy/VaR/Overnight/Test/set.parms.R")
  cat("I add two numbers.\n"); flush.console()
}

Because of the source command in f1, I get into an infinite loop.  This
must be a common situation but I don't know how to avoid it.
I need the source(set.parms) in f1, f2, f3, etc. because I want to use a
different combination of them in other projects.
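
For illustration, a guard along these lines would break the cycle, though
there may well be a cleaner idiom:

if (!exists("set.parms", mode = "function")) {
  source("C:/Temp/set.parms.R")      # only source it the first time
}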


Thanks,
Adrian

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] power 2x3 exact test

2007-05-09 Thread Bingshan Li
Hi, all,

I am wondering if there is an algorithm for calculating power of 2x3  
table using Fisher exact test. There is one (algorithm 280) for 2x2  
Fisher exact test but I couldn't find one for 2x3 table. If we are  
not lucky enough to have one, is there any other way to calculate  
exact power of 2x3 table? The reason why I want exact power is  
because some cells are assumed to be very small and chi square  
approximation is not valid.

Thanks!

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] generalized least squares with empirical error covariance matrix

2007-05-09 Thread Roy Mendelssohn
Look at "DLM".  It can do Bayesian dynamic linear models, i.e. the  
Bayes equivalent of Kalman filtering.

-Roy M.
On May 9, 2007, at 1:09 PM, Andrew Schuh wrote:

> I have a bayesian hierarchical normal regression model, in which the
> regression coefficients are nested, which I've wrapped into one
> regression framework, y = X %*% beta + e .  I would like to run data
> through the model in a filter style (kalman filterish), updating
> regression coefficients at each step new data can be gathered.  After
> the first filter step, I will need to be able to feed the a non- 
> diagonal
> posterior covariance in for the prior of the next step.  "gls" and  
> "glm"
> seem to be set up to handle structured error covariances, where  
> mine is
> more empirical, driven completely by the data.  Explicitly solving w/
> "solve" is really sensitive to small values in the covariance  
> matrix and
> I've only been able to get reliable results at the first step by using
> weighted regression w/ lm().  Am I missing an obvious function for
> linear regression w/ a correlated  prior on the errors for the  
> updating
> steps?  Thanks in advance for any advice.
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting- 
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

**
"The contents of this message do not reflect any position of the U.S.  
Government or NOAA."
**
Roy Mendelssohn
Supervisory Operations Research Analyst
NOAA/NMFS
Environmental Research Division 
Southwest Fisheries Science Center
1352 Lighthouse Avenue
Pacific Grove, CA 93950-2097

e-mail: [EMAIL PROTECTED] (Note new e-mail address)
voice: (831)-648-9029
fax: (831)-648-8440
www: http://www.pfeg.noaa.gov/

"Old age and treachery will overcome youth and skill."

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to remove outer box from Wireframe plots?

2007-05-09 Thread Deepayan Sarkar
On 5/9/07, Seth W Bigelow <[EMAIL PROTECTED]> wrote:
>
> I would like to remove the outermost box from my wireframe plots -- this is
> the box that is automatically generated, and is not the inner cube that
> frames the data. There was a thread on this 4 yrs ago but none of the fixes
> work (e.g., grid.newpage(), grid.lines(gp = gpar(col = NA)) or
> par.box=list(col=1),col=NA. These just make the data or the cube disappear.
> Has anyone solved this issue?
> Here's some sample code. In case you are wondering, I have indeed purchased
> Paul Murrell's book.

But have you looked at example(wireframe)? The last example is what
you want. You might also want to add

scales = list(col = "black")

to the call.

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] generalized least squares with empirical error covariance matrix

2007-05-09 Thread Andrew Schuh
I have a bayesian hierarchical normal regression model, in which the 
regression coefficients are nested, which I've wrapped into one 
regression framework, y = X %*% beta + e .  I would like to run data 
through the model in a filter style (kalman filterish), updating 
regression coefficients at each step as new data are gathered.  After 
the first filter step, I will need to be able to feed the non-diagonal 
posterior covariance in as the prior for the next step.  "gls" and "glm" 
seem to be set up to handle structured error covariances, where mine is 
more empirical, driven completely by the data.  Explicitly solving w/ 
"solve" is really sensitive to small values in the covariance matrix and 
I've only been able to get reliable results at the first step by using 
weighted regression w/ lm().  Am I missing an obvious function for 
linear regression w/ a correlated  prior on the errors for the updating 
steps?  Thanks in advance for any advice.
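
(For concreteness, the kind of single GLS step I have in mind, with a
given -- possibly non-diagonal -- error covariance Sigma, is the usual
whitening; a minimal sketch, with y, X and Sigma assumed given:)

gls_known_cov <- function(y, X, Sigma) {
  R  <- chol(Sigma)                            # Sigma = t(R) %*% R
  yw <- backsolve(R, y, transpose = TRUE)      # whitened response
  Xw <- backsolve(R, X, transpose = TRUE)      # whitened design
  lm.fit(Xw, yw)$coefficients                  # GLS estimate of beta
}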

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Deepayan Sarkar
On 5/9/07, Gav Wood <[EMAIL PROTECTED]> wrote:
> Gabor Grothendieck wrote:
> > Add the argument
> >
> >type = c("p", "g")
> >
> > to your xyplot call.
>
> So what's the easiest way to place a line at x=3 (ala "abline(v=3)") to
> the graph?

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
   type = c("p", "g"),
   panel = function(...) {
   panel.xyplot(...)
   panel.abline(v = 3)
   },
   groups=z,auto.key=list(columns=3))

or

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
   panel = function(...) {
   panel.grid(h = -1, v = -1)
   panel.abline(v = 3)
   panel.xyplot(...)
   },
   groups=z,auto.key=list(columns=3))

depending on whether you are going through the intermediate example or not.

> After calling the xyplot call, the panel.* functions seem to
> work only in device coordinates.

No, they work in native coordinates, you just happen to be in a
"viewport" where they are the same as the device coordinates. Note
that your expectations seem to be based on the traditional graphics
model with only one panel, which is not meaningful in multipanel
plots, like, say,

xyplot(x~y|z,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))))

If you wish to modify a lattice plot after it has been plotted (which
is justifiable only in circumstances where you want some sort of
interaction), see

?trellis.focus

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Gabor Grothendieck
You can do it via panel= or after the fact with trellis.focus...trellis.unfocus.
The following illustrates both.  The panel= function adds a vertical line
at 3 and after the fact we add a vertical line at 6.

pnl <- function(...) {
   panel.abline(v = 3)
   panel.xyplot(...)
}

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
groups=z,auto.key=list(columns=3), panel = pnl)

trellis.focus("panel", 1, 1)
panel.abline(v = 6)
trellis.unfocus()



On 5/9/07, Gav Wood <[EMAIL PROTECTED]> wrote:
> Gabor Grothendieck wrote:
> > Add the argument
> >
> >type = c("p", "g")
> >
> > to your xyplot call.
>
> So what's the easiest way to place a line at x=3 (ala "abline(v=3)") to
> the graph? After calling the xyplot call, the panel.* functions seem to
> work only in device coordinates.
>
> Thanks for the help,
>
> Gav
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Gav Wood
Gabor Grothendieck wrote:
> Add the argument
> 
>type = c("p", "g")
> 
> to your xyplot call.

So what's the easiest way to place a line at x=3 (ala "abline(v=3)") to 
the graph? After calling the xyplot call, the panel.* functions seem to 
work only in device coordinates.

Thanks for the help,

Gav

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Gabor Grothendieck
Add the argument

   type = c("p", "g")

to your xyplot call.



On 5/9/07, Gav Wood <[EMAIL PROTECTED]> wrote:
> > Giving a reproducible example would be a good start.
>
> Ok, what's the easiest way to get a grid (ala grid()) on this graph?
>
> xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
> groups=z,auto.key=list(columns=3))
>
> Bish bosh,
>
> Gav
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Ic TukeyHSD

2007-05-09 Thread Bruno Churata
Hi,

What is the expression for the IC (confidence interval) in TukeyHSD? Is it

contrast +/- qtukey(.95, nmeans, df) * sqrt(MSe/n) ?

Thanks,

Bruno

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading a web page in pdf format

2007-05-09 Thread Gabor Grothendieck
Here is one additional solution.  This one produces a data frame.  The
regular expression removes:

- everything from beginning to first (
- everything from last ( to end
- everything between ) and ( in the middle

The | characters separate the three parts.  Then read.table reads it in.


URL <- 
"http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf";
Lines.raw <- readLines(URL)
Lines <- grep("Industriale|Termoelettrico", Lines.raw, value = TRUE)

rx <- "^[^(]*[(]|[)][^(]*$|[)][^(]*[(]"
read.table(textConnection(gsub(rx, "", Lines)), dec = ",")


On 5/9/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> Modify this to suit.  After grepping out the correct lines we use strapply
> to find and emit character sequences that come after a "(" but do not contain
> a ")" .  back = -1 says to only emit the backreferences and not the entire
> matched expression (which would have included the leading "(" ):
>
> URL <- 
> "http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf";
> Lines.raw <- readLines(URL)
> Lines <- grep("Industriale|Termoelettrico", Lines.raw, value = TRUE)
> library(gsubfn)
> strapply(Lines, "[(]([^)]*)", back = -1, simplify = rbind)
>
> which gives a character matrix whose first column is the label
> and second column is the number in character form.  You can
> then manipulate it as desired.
>
> On 5/9/07, Vittorio <[EMAIL PROTECTED]> wrote:
> > Each day the daily balance in the following link
> >
> > http://www.
> > snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
> >
> > is
> > updated.
> >
> > I would like to set up an R procedure to be run daily in a
> > server able to read the figures in a couple of lines only
> > ("Industriale" and "Termoelettrico", towards the end of the balance)
> > and put the data in a table.
> >
> > Is that possible? If yes, what R-packages
> > should I use?
> >
> > Ciao
> > Vittorio
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Error in plot.new() : figure margins too large

2007-05-09 Thread gatemaze
On 09/05/07, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
>
> On Wed, 9 May 2007, [EMAIL PROTECTED] wrote:
>
> > The code is:
> >
> > postscript(filename, horizontal=FALSE, onefile=FALSE, paper="special",
>
> You have not set a width or height, so please do your homework.


Thanks a lot for that, and to Phil for replying. Just a minor "correction" to
your post: "You have not set a width AND height". Both seem to be required.
I had tried with only a width, thinking the height would be worked out
relative to it, but I was still getting the same error.
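
i.e. with paper="special" something like this (dimensions in inches, the
values here just for illustration) does work:

postscript(filename, width = 6, height = 4,
           horizontal = FALSE, onefile = FALSE, paper = "special",
           bg = "white", family = "ComputerModern", pointsize = 10)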

> bg="white", family="ComputerModern", pointsize=10);
> > par(mar=c(5, 4, 0, 0) + 0.1);
> > plot(x.nor, y.nor, xlim=c(3,6), ylim=c(20,90), pch=normal.mark);
> >
> > gives error
> > Error in plot.new() : figure margins too large
> >
> > plotting on the screen without calling postscript works just fine .
> >
> > Any clues? Thanks.
> >
> >   [[alternative HTML version deleted]]
> >
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
> --
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
>

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Deepayan Sarkar
On 5/9/07, Gav Wood <[EMAIL PROTECTED]> wrote:
> > Giving a reproducible example would be a good start.
>
> Ok, what's the easiest way to get a grid (ala grid()) on this graph?
>
> xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
>  groups=z,auto.key=list(columns=3))

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
   type = c("p", "g"),
   groups=z,auto.key=list(columns=3))

-Deepayan

>
> Bish bosh,
>
> Gav
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Gavin Simpson
On Wed, 2007-05-09 at 19:13 +0100, Gav Wood wrote:
> > Giving a reproducible example would be a good start.
> 
> Ok, what's the easiest way to get a grid (ala grid()) on this graph?
> 
> xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
>  groups=z,auto.key=list(columns=3))
> 
> Bish bosh,

Er, write your own panel function:

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
   groups=z,auto.key=list(columns=3), h = -1, v = -1,
   panel = function(x, y, ...) {
   panel.grid(...)
   panel.xyplot(x, y, ...)
 })

Not sure if that is the easiest way, or the best, but that's how I've
learnt to use lattice recently. The v and h arguments are passed to
panel.grid as part of "..." and just tell it to plot the grids at the
tick marks.

> 
> Gav

HTH Gav,

Gav

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
 Gavin Simpson [t] +44 (0)20 7679 0522
 ECRC, UCL Geography,  [f] +44 (0)20 7679 0565
 Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
 Gower Street, London  [w] http://www.ucl.ac.uk/~ucfagls/
 UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to remove outer box from Wireframe plots?

2007-05-09 Thread Seth W Bigelow

I would like to remove the outermost box from my wireframe plots -- this is
the box that is automatically generated, and is not the inner cube that
frames the data. There was a thread on this 4 yrs ago but none of the fixes
work (e.g., grid.newpage(), grid.lines(gp = gpar(col = NA)) or
par.box=list(col=1),col=NA. These just make the data or the cube disappear.
Has anyone solved this issue?
Here's some sample code. In case you are wondering, I have indeed purchased
Paul Murrell's book.

library(lattice)
library(grid)

w <- expand.grid(X1 = seq(-29.5,-25,0.1), X3 = seq(1,90,1))
w$z <- model(99.6,3.59,8.65,w$X1,0.5,w$X3)

wireframe(z~X3*X1,w,
 scales=list(arrows=FALSE))
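
(One further possibility, offered only as an untested sketch: if the outer box
is the lattice panel border drawn by axis.line, making that colour transparent
should leave the data cube alone.)

wireframe(z~X3*X1,w,
  scales=list(arrows=FALSE),
  par.settings=list(axis.line=list(col="transparent")))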

Appreciatively,
Seth
Dr. Seth  W. Bigelow
Biologist, Sierra Nevada Research Center

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread Martin Morgan
Oops, taking a look at the unit tests in RUnit, I see that specifying
'where=.GlobalEnv' is what I had been missing.

testCreateClass <- function() {
setClass("A", contains="numeric", where=.GlobalEnv)
a=new("A")
checkTrue(validObject(a))
removeClass("A", where=.GlobalEnv)
checkException(new("A"))
}

Executing test function testCreateClass  ...  done successfully.

RUNIT TEST PROTOCOL -- Wed May  9 11:11:27 2007 
*** 
Number of test functions: 1 
Number of errors: 0 
Number of failures: 0 

Sorry for the noise. Martin

Martin Morgan <[EMAIL PROTECTED]> writes:

> [EMAIL PROTECTED] writes:
>
>> [EMAIL PROTECTED] wrote:
>> [...]
>>> = From Seth Falcon:
>>>   1. At last check, you cannot create classes in unit test code and
>>>  this makes it difficult to test some types of functionality.  I'm
>>>  really not sure to what extent this is RUnit's fault as opposed
>>>  to a limitation of the S4 implementation in R.
>>
>> I'd be very interested to hear what problems you experienced. If you 
>> have any example ready I'd be happy to take a look at it.
>> So far we have not observed (severe) problems to create S4 classes and 
>> test them in unit test code. We actually use RUnit mainly on S4 classes 
>> and methods. There are even some very simple checks in RUnits own test 
>> cases which create and use S4 classes. For example in tests/runitRunit.r
>> in the source package.
>
> RUnit has been great for me, helping to develop a more rigorous
> programming approach and gaining confidence that my refactoring
> doesn't (unintentionally) break the established contract.
>
> One of the strengths of unit tests -- reproducible and expressible in
> the way that language sometimes is not:
>
> testCreateClass <- function() {
> setClass("A", contains="numeric")
> checkTrue(TRUE)
> }
>
>
> RUNIT TEST PROTOCOL -- Wed May  9 10:36:53 2007 
> *** 
> Number of test functions: 1 
> Number of errors: 1 
> Number of failures: 0 
>
>  
> 1 Test Suite : 
> CreateClass_test - 1 test function, 1 error, 0 failures
> ERROR in testCreateClass: Error in assign(mname, def, where) : cannot add 
> bindings to a locked environment
>
>> sessionInfo()
> R version 2.6.0 Under development (unstable) (2007-05-07 r41468) 
> x86_64-unknown-linux-gnu 
>
> locale:
> LC_CTYPE=en_US;LC_NUMERIC=C;LC_TIME=en_US;LC_COLLATE=en_US;LC_MONETARY=en_US;LC_MESSAGES=en_US;LC_PAPER=en_US;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US;LC_IDENTIFICATION=C
>
> attached base packages:
> [1] "tools" "stats" "graphics"  "grDevices" "utils" "datasets" 
> [7] "methods"   "base" 
>
> other attached packages:
>RUnit
> "0.4.15"
>
> -- 
> Martin Morgan
> Bioconductor / Computational Biology
> http://bioconductor.org
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Martin Morgan
Bioconductor / Computational Biology
http://bioconductor.org

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Gav Wood
> Giving a reproducible example would be a good start.

Ok, what's the easiest way to get a grid (ala grid()) on this graph?

xyplot(x~y,data.frame(x=1:9,y=1:9,z=sort(rep(c('A','B','C'),3))),
 groups=z,auto.key=list(columns=3))

Bish bosh,

Gav

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] predict.tree

2007-05-09 Thread Prof Brian Ripley

The idea is that you use

treemod<-tree(y~x1+x2, data = old)
predict(treemod, new, type = "class")

where new is a data frame containing the same column names as old (except 
perhaps 'y').


This applies to all model fitting functions, not just tree and rpart.
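
For the matrix 'newx' in question, a minimal sketch along those lines (untested;
it assumes the tree is refitted from a data frame, so that the predictors are the
named columns rather than a single matrix term):

old <- data.frame(y = y, x)          # x is the 1163 x 75 predictor matrix
treemod <- tree(y ~ ., data = old)
new <- data.frame(newx)              # same treatment gives the same column names
predict(treemod, new[1:10, ], type = "class")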

On Wed, 9 May 2007, [EMAIL PROTECTED] wrote:


I have a classification tree model similar to the following (slightly
simplified here):


treemod<-tree(y~x)


where y is a factor and x is a matrix of numeric predictors. They have
dimensions:


length(y)

[1] 1163

dim(x)

[1] 1163   75

I’ve evaluated the tree model and am happy with the fit. I also have a
matrix of cases that I want to use the tree model to classify. Call it
newx:


dim(newx)

[1] 68842    75

The column names of newx match the column names of x. It seems that
prediction should be straightforward. To classify the first 10 values of
newx, for example, I think I should use:


predict(treemod, newx[1:10,], type = "class")


However, this returns a vector of the predicted classes of the training
data x, rather than the predicted classes of the new data. The returned
vector has length 1163, not length 10. This occurs regardless of the number
of rows in newx. It gives this warning message:

'newdata' had 10 rows but variable(s) found have 1163 rows

I must be misunderstanding the way I should format the newdata I pass to
predict. I’ve tried the rpart package as well, but have a similar problem.
What am I missing?

Thanks in advance,

Ryan Anderson
Graduate Student
Dept. of Forest Resources
University of Minnesota
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with grid?

2007-05-09 Thread Deepayan Sarkar
On 5/9/07, Gav Wood <[EMAIL PROTECTED]> wrote:
> Hello folks,
>
> So I'd like to use the lattice xyplot function, but here's the thing;
> I'd like a grid on it and a bit of annotation (line and text).
>
> So I tried just using the panel.grid, panel.text and panel.line but they
> didn't work after the plot had been called seemingly using the device
> coordinate system. So I made a new proxy panel function (calling
> panel.xyplot and passing the x/y/groups/subscripts args) put them in it,
> and they started working well.
>
> Only problem is that xyplot no longer plots the key; I still pass
> auto.key to into the high-level function, but no key gets plotted.
> What's the best way to get what I want?

Giving a reproducible example would be a good start.

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading a web page in pdf format

2007-05-09 Thread Gabor Grothendieck
Modify this to suit.  After grepping out the correct lines we use strapply
to find and emit character sequences that come after a "(" but do not contain
a ")" .  back = -1 says to only emit the backreferences and not the entire
matched expression (which would have included the leading "(" ):

URL <- 
"http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf"
Lines.raw <- readLines(URL)
Lines <- grep("Industriale|Termoelettrico", Lines.raw, value = TRUE)
library(gsubfn)
strapply(Lines, "[(]([^)]*)", back = -1, simplify = rbind)

which gives a character matrix whose first column is the label
and second column is the number in character form.  You can
then manipulate it as desired.
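
For example, a small sketch of that manipulation (assuming, as above, that the
second column holds the comma-decimal figures):

m <- strapply(Lines, "[(]([^)]*)", back = -1, simplify = rbind)
lab <- sub("[[:space:]]+$", "", m[, 1])      # trim trailing blanks from the labels
val <- as.numeric(sub(",", ".", m[, 2]))     # "   46,6" -> 46.6
data.frame(label = lab, value = val)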

On 5/9/07, Vittorio <[EMAIL PROTECTED]> wrote:
> Each day the daily balance in the following link
>
> http://www.
> snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
>
> is
> updated.
>
> I would like to set up an R procedure to be run daily in a
> server able to read the figures in a couple of lines only
> ("Industriale" and "Termoelettrico", towards the end of the balance)
> and put the data in a table.
>
> Is that possible? If yes, what R-packages
> should I use?
>
> Ciao
> Vittorio
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread Martin Morgan
[EMAIL PROTECTED] writes:

> [EMAIL PROTECTED] wrote:
> [...]
>> = From Seth Falcon:
>>   1. At last check, you cannot create classes in unit test code and
>>  this makes it difficult to test some types of functionality.  I'm
>>  really not sure to what extent this is RUnit's fault as opposed
> >  to a limitation of the S4 implementation in R.
>
> I'd be very interested to hear what problems you experienced. If you 
> have any example ready I'd be happy to take a look at it.
> So far we have not observed (severe) problems to create S4 classes and 
> test them in unit test code. We actually use RUnit mainly on S4 classes 
> and methods. There are even some very simple checks in RUnits own test 
> cases which create and use S4 classes. For example in tests/runitRunit.r
> in the source package.

RUnit has been great for me, helping to develop a more rigorous
programming approach and gaining confidence that my refactoring
doesn't (unintentionally) break the established contract.

One of the strengths of unit tests -- reproducible and expressible in
the way that language sometimes is not:

testCreateClass <- function() {
setClass("A", contains="numeric")
checkTrue(TRUE)
}


RUNIT TEST PROTOCOL -- Wed May  9 10:36:53 2007 
*** 
Number of test functions: 1 
Number of errors: 1 
Number of failures: 0 

 
1 Test Suite : 
CreateClass_test - 1 test function, 1 error, 0 failures
ERROR in testCreateClass: Error in assign(mname, def, where) : cannot add 
bindings to a locked environment

> sessionInfo()
R version 2.6.0 Under development (unstable) (2007-05-07 r41468) 
x86_64-unknown-linux-gnu 

locale:
LC_CTYPE=en_US;LC_NUMERIC=C;LC_TIME=en_US;LC_COLLATE=en_US;LC_MONETARY=en_US;LC_MESSAGES=en_US;LC_PAPER=en_US;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US;LC_IDENTIFICATION=C

attached base packages:
[1] "tools" "stats" "graphics"  "grDevices" "utils" "datasets" 
[7] "methods"   "base" 

other attached packages:
   RUnit
"0.4.15"

-- 
Martin Morgan
Bioconductor / Computational Biology
http://bioconductor.org

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] xyplot with grid?

2007-05-09 Thread Gav Wood
Hello folks,

So I'd like to use the lattice xyplot function, but here's the thing; 
I'd like a grid on it and a bit of annotation (line and text).

So I tried just using the panel.grid, panel.text and panel.line but they 
didn't work after the plot had been called seemingly using the device 
coordinate system. So I made a new proxy panel function (calling 
panel.xyplot and passing the x/y/groups/subscripts args) put them in it, 
and they started working well.

Only problem is that xyplot no longer plots the key; I still pass 
auto.key to into the high-level function, but no key gets plotted. 
What's the best way to get what I want?

Cheers,

Gav

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] predict.tree

2007-05-09 Thread ande8047
I have a classification tree model similar to the following (slightly 
simplified here):

> treemod<-tree(y~x)

where y is a factor and x is a matrix of numeric predictors. They have 
dimensions:

> length(y)
[1] 1163
> dim(x)
[1] 1163   75

I’ve evaluated the tree model and am happy with the fit. I also have a 
matrix of cases that I want to use the tree model to classify. Call it 
newx:

> dim(newx)
[1] 68842    75

The column names of newx match the column names of x. It seems that 
prediction should be straightforward. To classify the first 10 values of 
newx, for example, I think I should use:

> predict(treemod, newx[1:10,], type = "class")

However, this returns a vector of the predicted classes of the training 
data x, rather than the predicted classes of the new data. The returned 
vector has length 1163, not length 10. This occurs regardless of the number 
of rows in newx. It gives this warning message:

'newdata' had 10 rows but variable(s) found have 1163 rows

I must be misunderstanding the way I should format the newdata I pass to 
predict. I’ve tried the rpart package as well, but have a similar problem. 
What am I missing?

Thanks in advance,

Ryan Anderson
Graduate Student
Dept. of Forest Resources
University of Minnesota
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading a web page in pdf format

2007-05-09 Thread Marc Schwartz
On Wed, 2007-05-09 at 10:55 -0500, Marc Schwartz wrote:
> On Wed, 2007-05-09 at 15:47 +0100, Vittorio wrote:
> > Each day the daily balance in the following link
> > 
> > http://www.
> > snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
> > 
> > is 
> > updated.
> > 
> > I would like to set up an R procedure to be run daily in a 
> > server able to read the figures in a couple of lines only 
> > ("Industriale" and "Termoelettrico", towards the end of the balance) 
> > and put the data in a table.
> > 
> > Is that possible? If yes, what R-packages 
> > should I use?
> > 
> > Ciao
> > Vittorio
> 
> Vittorio,
> 
> Keep in mind that PDF files are typically text files. Thus you can read
> it in using readLines():
> 
> PDFFile <- 
> readLines("http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf")
> 
> # Clean up
> unlink("http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf")
> 
> 
> > str(PDFFile)
>  chr [1:989] "%PDF-1.2" "6 0 obj" "<<" "/Length 7 0 R" ...
> 
> 
> # Now find the lines containing the values you wish
> # Use grep() with a regex for either term
> Lines <- grep("(Industriale|Termoelettrico)", PDFFile)
> 
> > Lines
> [1] 33 34
> 
> > PDFFile[Lines]
> [1] "/F3 1 Tf 9 0 0 9 204 304 Tm (Industriale )Tj 9 0 0 9 420 304 Tm (   
> 46,6)Tj"
> [2] "9 0 0 9 204 283 Tm (Termoelettrico )Tj 9 0 0 9 420 283 Tm (   
> 99,3)Tj"  
> 
> 
> # Now parse the values out of the lines"
> Vals <- sub(".*\\((.*)\\).*", "\\1", PDFFile[Lines])
> 
> > Vals
> [1] "   46,6" "   99,3"
> 
> 
> # Now convert them to numeric
> # need to change the ',' to a '.' at least in my locale
>   
> > as.numeric(gsub(",", "\\.", Vals))
> [1] 46.6 99.3

Vittorio,

Just a quick tweak here, given the possibility that the order of the
values may be subject to change.

After reading the file and getting the lines, use:

# Use sub() with 2 back references, 1 for each value in the line
Vals <- sub(".*\\((.*)\\).*\\((.*)\\).*", "\\1 \\2", PDFFile[Lines])

> Vals
[1] "Industriale 46,6"    "Termoelettrico 99,3"


This gives us the labels and the values. Now convert to a data frame and
then coerce the values to numeric:

DF <- read.table(textConnection(Vals))

> DF
              V1   V2
1    Industriale 46,6
2 Termoelettrico 99,3


DF$V2 <- as.numeric(sub(",", "\\.", DF$V2))

> DF
              V1   V2
1    Industriale 46.6
2 Termoelettrico 99.3


> str(DF)
'data.frame':   2 obs. of  2 variables:
 $ V1: Factor w/ 2 levels "Industriale",..: 1 2
 $ V2: num  46.6 99.3


HTH,

Marc

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sample function and memory usage

2007-05-09 Thread Prof Brian Ripley
On Tue, 8 May 2007, Victor Gravenholt wrote:

> As a part of a simulation, I need to sample from a large vector repeatedly.
> For some reason sample() builds up the memory usage (> 500 MB for this
> example) when used inside a for loop as illustrated here:
>
> X <- 1:10
> P <- runif(10)
> for(i in 1:500) Xsamp <- sample(X,3,replace=TRUE,prob=P)
>
> Even worse, I am not able to free up memory without quitting R.
> I quickly run out of memory when trying to perform the simulation. Is
> there any way to avoid this to happen?
>
> The problem seem to appear only when specifying both replace=TRUE and
> probability weights for the vector being sampled, and this happens both
> on Windows XP and Linux (Ubuntu).

And for 1 < size <= 10.  There was a typo causing memory not to be 
freed in that range.  It is now fixed in 2.5.0 patched.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fehlermeldung

2007-05-09 Thread Stefan Grosse
I do not get this error message with that example you have given.

Hm. Dunno. Have you installed R-2.5.0 over an old version? Maybe that
could be your problem (although I doubt it). If so, try
update.packages(checkBuilt=TRUE); this installs "new" packages built for
2.5.0 over "old" ones built for 2.4.1 if the version number is the same. (Or
make a clean install of R-2.5.0: remove everything and do a new install.)

Stefan
 Original Message  
Subject: Re:[R] Fehlermeldung
From: [EMAIL PROTECTED]
To: Stefan Grosse <[EMAIL PROTECTED]>
Date: Wed May 09 2007 15:02:10 GMT+0200
> Dear Stefan,
>
> My operating system is Windows XP, my version of R is the latest (R-2.5.0). 
> Recently I have downloaded the package "mvtnorm" and a problem with the 
> command "pmvnorm" occurred. Trying to enter the lines ...
>
> A <- diag(3)
> A[1,2] <-0.5
> A[1,3] <- 0.25
> A[2,3] <- 0.5
> pvmnorm(lower=c(-Inf,-Inf,-Inf), upper=c(2,2,2),mean = c(0,0,0), corr=A) 
>
> I got the following error message:
>
> .Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower = 
> as.double(lower),  : 
> Fortran symbol name "mvtdst" not in the DLL for package "mvtnorm"
>
> As I can make no sense of that whatsoever, I would like to ask you for some 
> qualified help. Thank you very much indeed.
>
> Best Regards,
> Andreas Faller
> ___
> SMS schreiben mit WEB.DE FreeMail - einfach, schnell und
> kostenguenstig. Jetzt gleich testen! http://f.web.de/?mc=021192
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread ml-r-help

[EMAIL PROTECTED] wrote:
[...]
> = From Seth Falcon:
> 
> Hi Tony,
> 
> [EMAIL PROTECTED] writes:
>> After a quick look at current programming tools, especially with regards 
> 
>> to unit-testing frameworks, I've started looking at both "butler" and 
>> "RUnit".   I would be grateful to receive real world development 
>> experience and opinions with either/both.  Please send to me directly 
>> (yes, this IS my work email), I will summarize (named or anonymous, as 
>> contributors desire) to the list.
> 
> I've been using RUnit and have been quite happy with it.  I had not
> heard of butler until I read your mail (!).
> 
> RUnit behaves reasonably similarly to other *Unit frameworks and this
> made it easy to get started with as I have used both JUnit and PyUnit
> (unittest module).
> 
> Two things to be wary of:
> 
>   1. At last check, you cannot create classes in unit test code and
>  this makes it difficult to test some types of functionality.  I'm
>  really not sure to what extent this is RUnit's fault as opposed
>  to a limitation of the S4 implementation in R.

I'd be very interested to hear what problems you experienced. If you 
have any example ready I'd be happy to take a look at it.
So far we have not observed (severe) problems creating S4 classes and 
testing them in unit test code. We actually use RUnit mainly on S4 classes 
and methods. There are even some very simple checks in RUnit's own test 
cases which create and use S4 classes; see, for example, tests/runitRunit.r
in the source package.


>   2. They have chosen a non-default RNG, but recent versions provide a
>  way to override this.  This provided for some difficult bug
>  hunting when unit tests behaved differently than hand-run code
>  even with set.seed().
> 
> The maintainer has been receptive to feedback and patches.  You can
> look at the not-so-beautiful scripts and such we are using if you look
> at inst/UnitTest in: Category, GOstats, Biobase, graph
> 
> Best Wishes,
> 
> + seth
> 
[...]

Best,

   Matthias


-- 
Matthias Burger Project Manager/ Biostatistician
Epigenomics AGKleine Praesidentenstr. 110178 Berlin, Germany
phone:+49-30-24345-371  fax:+49-30-24345-555
http://www.epigenomics.com   [EMAIL PROTECTED]
--
Epigenomics AG Berlin   Amtsgericht Charlottenburg HRB 75861
Vorstand:   Geert Nygaard (CEO/Vorsitzender),  Dr. Kurt Berlin (CSO)
   Oliver Schacht PhD (CFO),  Christian Piepenbrock (COO)
Aufsichtsrat:   Prof. Dr. Dr. hc. Rolf Krebs (Chairman/Vorsitzender)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread ml-r-help
[EMAIL PROTECTED] wrote:
> Greetings -
> 
> I'm finally finished review, here's what I heard:

Hi Anthony,

sorry for replying late. I'd like to chip in a brief experience report 
for our company.

We have used RUnit since 2003 starting with R 1.6.2 for our R software 
development. Since then it has been used for development for over a 
dozen packages with ~ 3k unit tests. Making use of S4 classes and 
methods with clear design contracts made its application even more 
fruitful. To automate the test process we utilized
Linux tools to tailor a build and test system to check our R packages 
with previous, current and development versions of R, CRAN and BioC 
packages to guarantee backward compatibility as far as possible whilst 
adapting to changes.

Over time the main benefits have been
  - fearless refactoring of major building blocks of our class hierarchy
  - early detection of and adaptation to changes in new R versions
  - data workflow integration testing
starting with some data warehouse query initiated from R throughout
to generated analysis reports using sweave or similar report
generators.
With this, changes in the warehouse, R, some CRAN R package, our code
or the report templates could be spotted and fixed
well before any time critical analysis was due

rewarding the additional effort to write and maintain the tests.


Best regards,

   Matthias


>  from Tobias Verbeke:
> 
> [EMAIL PROTECTED] wrote:
>> Greetings!
>>
>> After a quick look at current programming tools, especially with regards 
> 
>> to unit-testing frameworks, I've started looking at both "butler" and 
>> "RUnit".   I would be grateful to receive real world development 
>> experience and opinions with either/both.  Please send to me directly 
>> (yes, this IS my work email), I will summarize (named or anonymous, as 
>> contributors desire) to the list.
>>
> I'm a founding member of an R Competence Center at an international 
> consulting company delivering R services
> mainly to the financial and pharmaceutical industries. Unit testing is 
> central to our development methodology
> and we've been systematically using RUnit with great satisfaction, 
> mainly because of its simplicity. The
> presentation of test reports is basic, though. Experiences concerning 
> interaction with the RUnit developers
> are very positive: gentle and responsive people.
> 
> We've never used butler. I think it is not actively developed (even if 
> the developer is very active).
> 
> It should be said that many of our developers (including myself) have 
> backgrounds in statistics (more than in cs
> or software engineering) and are not always acquainted with the 
> functionality in other unit testing frameworks
> and the way they integrate in IDEs as is common in these other languages.
> 
> I'll soon be personally working with a JUnit guru and will take the 
> opportunity to benchmark RUnit/ESS/emacs against
> his toolkit (Eclipse with JUnit- and other plugins, working `in perfect 
> harmony' (his words)). Even if in my opinion the
> philosophy of test-driven development is much more important than the 
> tools used, it is useful to question them from
> time to time and your message reminded me of this... I'll keep you 
> posted if it interests you. Why not work out an
> evaluation grid / check list for unit testing frameworks ?
> 
> Totally unrelated to the former, it might be interesting to ask oneself 
> how ESS could be extended to ease unit testing:
> after refactoring a function some M-x ess-unit-test-function 
> automagically launches the unit-test for this particular
> function (based on the test function naming scheme), opens a *test 
> report* buffer etc.
> 
> Kind regards,
> Tobias
> 
>  from Tony Plate:
> 
> Hi, I've been looking at testing frameworks for R too, so I'm interested 
> to hear of your experiences & perspective.
> 
> Here's my own experiences & perspective:
> The requirements are:
> 
> (1) it should be very easy to construct and maintain tests
> (2) it should be easy to run tests, both automatically and manually
> (3) it should be simple to look at test results and know what went wrong 
> where
> 
> I've been using a homegrown testing framework for S-PLUS that is loosely 
> based on the R transcript style tests (run *.R and compare output with 
> *.Rout.save in 'tests' dir).  There are two differences between this 
> test framework and the standard R one:
> (1) the output to match and the input commands are generated from an 
> annotated transcript (annotations can switch some tests in or out 
> depending on the version used)
> (2) annotations can include text substitutions (regular expression 
> style) to be made on the output before attempting to match (this helps 
> make it easier to construct tests that will match across different 
> versions that might have minor cosmetic differences in how output is 
> formatted).
> 
> We use this test framework for both unit-style tests and

Re: [R] Reading a web page in pdf format

2007-05-09 Thread Marc Schwartz
On Wed, 2007-05-09 at 15:47 +0100, Vittorio wrote:
> Each day the daily balance in the following link
> 
> http://www.
> snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
> 
> is 
> updated.
> 
> I would like to set up an R procedure to be run daily in a 
> server able to read the figures in a couple of lines only 
> ("Industriale" and "Termoelettrico", towards the end of the balance) 
> and put the data in a table.
> 
> Is that possible? If yes, what R-packages 
> should I use?
> 
> Ciao
> Vittorio

Vittorio,

Keep in mind that PDF files are typically text files. Thus you can read
it in using readLines():

PDFFile <- 
readLines("http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf")

# Clean up
unlink("http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf")


> str(PDFFile)
 chr [1:989] "%PDF-1.2" "6 0 obj" "<<" "/Length 7 0 R" ...


# Now find the lines containing the values you wish
# Use grep() with a regex for either term
Lines <- grep("(Industriale|Termoelettrico)", PDFFile)

> Lines
[1] 33 34

> PDFFile[Lines]
[1] "/F3 1 Tf 9 0 0 9 204 304 Tm (Industriale )Tj 9 0 0 9 420 304 Tm (   
46,6)Tj"
[2] "9 0 0 9 204 283 Tm (Termoelettrico )Tj 9 0 0 9 420 283 Tm (   99,3)Tj" 
 


# Now parse the values out of the lines"
Vals <- sub(".*\\((.*)\\).*", "\\1", PDFFile[Lines])

> Vals
[1] "   46,6" "   99,3"


# Now convert them to numeric
# need to change the ',' to a '.' at least in my locale

> as.numeric(gsub(",", "\\.", Vals))
[1] 46.6 99.3


HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading a web page in pdf format

2007-05-09 Thread jim holtman
You can do it with the base toolkit.  Just read the PDF file in as
text and then extract the data:


> # read in PDF file as text
> x.in <- 
> readLines("http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf")
> # find Industriale
> Ind <- grep("Industriale", x.in, value=TRUE)
> # find Termoelettrico
> Ter <- grep("Termoelettrico", x.in, value=TRUE)
> # extract the data
> Ind.data <- sub(".*\\(([\\s0-9,]*)\\).*", "\\1", Ind, perl=TRUE)
> Ter.data <- sub(".*\\(([\\s0-9,]*)\\).*", "\\1", Ter, perl=TRUE)
> Ind.data
[1] "   46,6"
> Ter.data
[1] "   99,3"
>
>

>


On 5/9/07, Vittorio <[EMAIL PROTECTED]> wrote:
> Each day the daily balance in the following link
>
> http://www.
> snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
>
> is
> updated.
>
> I would like to set up an R procedure to be run daily in a
> server able to read the figures in a couple of lines only
> ("Industriale" and "Termoelettrico", towards the end of the balance)
> and put the data in a table.
>
> Is that possible? If yes, what R-packages
> should I use?
>
> Ciao
> Vittorio
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading a web page in pdf format

2007-05-09 Thread Gabor Csardi
Vittorio,

this isn't really an R problem; you need a tool to extract text from a 
PDF document. I've tried pdftotext from the xpdf bundle, and it worked 
fine for the file you linked. On my Ubuntu Linux it is in the
xpdf-utils package; search for xpdf to find out whether it is available 
on Windows, if that is what you use.

If you want to call it from R you can use the 'system' function. 

There may be other, better methods I'm unaware of, of course.
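
A minimal sketch of that route (assuming pdftotext is on the PATH; the file
names are just illustrative):

url <- "http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf"
download.file(url, "bilancio.pdf", mode = "wb")         # fetch the PDF
system("pdftotext -layout bilancio.pdf bilancio.txt")   # convert to plain text
txt <- readLines("bilancio.txt")
grep("Industriale|Termoelettrico", txt, value = TRUE)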

Best,
Gabor

On Wed, May 09, 2007 at 03:47:59PM +0100, Vittorio wrote:
> Each day the daily balance in the following link
> 
> http://www.
> snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf
> 
> is 
> updated.
> 
> I would like to set up an R procedure to be run daily in a 
> server able to read the figures in a couple of lines only 
> ("Industriale" and "Termoelettrico", towards the end of the balance) 
> and put the data in a table.
> 
> Is that possible? If yes, what R-packages 
> should I use?
> 
> Ciao
> Vittorio
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Reading a web page in pdf format

2007-05-09 Thread Vittorio
Each day the daily balance in the following link

http://www.snamretegas.it/italiano/business/gas/bilancio/pdf/bilancio.pdf

is updated.

I would like to set up an R procedure to be run daily in a 
server able to read the figures in a couple of lines only 
("Industriale" and "Termoelettrico", towards the end of the balance) 
and put the data in a table.

Is that possible? If yes, what R-packages 
should I use?

Ciao
Vittorio

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A function for raising a matrix to a power?

2007-05-09 Thread Ravi Varadhan
Atte,

Your matrix A is not symmetric, that is why exponentiation using spectral
decomposition, expM.sd, does not give you the correct answer.  

Convert A to a symmetric matrix: 
A <- (A + t(A))/2
then the results will all match.
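
Alternatively, if you want to keep the original non-symmetric A, a sketch of the
spectral approach that uses the inverse of the eigenvector matrix rather than its
transpose (valid whenever A is diagonalizable):

expM.eig <- function(X, e) {
  ed <- eigen(X)
  # solve(vectors) replaces t(vectors) for a non-symmetric X
  Re(ed$vectors %*% diag(ed$values^e) %*% solve(ed$vectors))
}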

Ravi.


---

Ravi Varadhan, Ph.D.

Assistant Professor, The Center on Aging and Health

Division of Geriatric Medicine and Gerontology 

Johns Hopkins University

Ph: (410) 502-2619

Fax: (410) 614-9625

Email: [EMAIL PROTECTED]

Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

 




-Original Message-
From: Atte Tenkanen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 10:07 AM
To: Paul Gilbert
Cc: Ravi Varadhan; r-help@stat.math.ethz.ch
Subject: Re: [R] A function for raising a matrix to a power?


Hello,

Thanks for the many replies. I tested all the functions presented.
I'm a beginner in linear algebra, but now I have a serious question ;-)
Here is a matrix A whose determinant is 3, so it is nonsingular.
Then there are similar computer runs done with each function proposed.
I have calculated powers of A between A^274 and A^277.
In the first version (see the outputs below), when n=277,
a zero appears in the upper right corner.
Can you say whether the first function is more reliable than the others?
What can you say about the accuracy of the calculations?

-Atte

#-#

# Matrix A:
A=rbind(c(-3,-4,-2),c(3,5,1),c(2,-1,4))

A
det(A)

# 1st version:
"%^%"<-function(A,n){
  if(n==1) A else {B<-A; for(i in (2:n)){A<-A%*%B}}; A
  }

for(i in 274:277){print(i);print(A%^%i)}


# 2nd version:
"%^%" <- function(A, n) if(n == 1) A else A %*% (A %^% (n-1))

for(i in 274:277){print(i);print(A%^%i)}


# 3rd version:

mp <- function(mat,pow){
ans <- mat
for ( i in 1:(pow-1)){
ans <- mat%*%ans
}
return(ans)
}

for(i in 274:277){print(i);print(mp(A,i))}


# 4th version:

library(Malmig)

mtx.exp
for(i in 274:277){print(i);print(mtx.exp(A,i))}

# 5th version:

matrix.power <- function(mat, n)
{
  # test if mat is a square matrix
  # treat n < 0 and n = 0 -- this is left as an exercise
  # trap non-integer n and return an error
  if (n == 1) return(mat)
  result <- diag(1, ncol(mat))
  while (n > 0) {
if (n %% 2 != 0) {
  result <- result %*% mat
  n <- n - 1
}
mat <- mat %*% mat
n <- n / 2
  }
  return(result)
}

for(i in 274:277){print(i);print(matrix.power(A,i))}


# 6th version:

expM.sd <- function(X,e){Xsd <- eigen(X); Xsd$vec %*% diag(Xsd$val^e) %*% t(Xsd$vec)}

for(i in 274:277){print(i);print(expM.sd(A,i))}


#-OUTPUTS---#


> A=rbind(c(-3,-4,-2),c(3,5,1),c(2,-1,4))
> 
> A
     [,1] [,2] [,3]
[1,]   -3   -4   -2
[2,]    3    5    1
[3,]    2   -1    4
> det(A)
[1] 3
> 
> # 1st version:
> "%^%"<-function(A,n){
+   if(n==1) A else {B<-A; for(i in (2:n)){A<-A%*%B}}; A
+   }
> 
> for(i in 274:277){print(i);print(A%^%i)}
[1] 274
   [,1]   [,2]   [,3]
[1,]  1.615642e+131  3.231283e+131 -3.940201e+115
[2,] -5.385472e+130 -1.077094e+131  4.925251e+114
[3,] -3.769831e+131 -7.539661e+131  7.880401e+115
[1] 275
   [,1]   [,2]   [,3]
[1,]  4.846925e+131  9.693850e+131 -1.182060e+116
[2,] -1.615642e+131 -3.231283e+131   0.00e+00
[3,] -1.130949e+132 -2.261898e+132  4.728241e+116
[1] 276
   [,1]   [,2]   [,3]
[1,]  1.454078e+132  2.908155e+132 -1.576080e+116
[2,] -4.846925e+131 -9.693850e+131   0.00e+00
[3,] -3.392848e+132 -6.785695e+132  1.576080e+117
[1] 277
   [,1]   [,2]   [,3]
[1,]  4.362233e+132  8.724465e+132   0.00e+00
[2,] -1.454078e+132 -2.908155e+132 -1.576080e+116
[3,] -1.017854e+133 -2.035709e+133  3.782593e+117
> 
> 
> # 2nd version:
> "%^%" <- function(A, n) if(n == 1) A else A %*% (A %^% (n-1))
> 
> for(i in 274:277){print(i);print(A%^%i)}
[1] 274
   [,1]   [,2]   [,3]
[1,]  1.615642e+131  3.231283e+131 -3.101125e+114
[2,] -5.385472e+130 -1.077094e+131  1.533306e+114
[3,] -3.769831e+131 -7.539661e+131  5.664274e+114
[1] 275
   [,1]   [,2]   [,3]
[1,]  4.846925e+131  9.693850e+131 -8.158398e+114
[2,] -1.615642e+131 -3.231283e+131  4.027430e+114
[3,] -1.130949e+132 -2.261898e+132  1.492154e+115
[1] 276
   [,1]   [,2]   [,3]
[1,]  1.454078e+132  2.908155e+132 -2.147761e+115
[2,] -4.846925e+131 -9.693850e+131  1.058350e+115
[3,] -3.392848e+132 -6.785695e+132  3.934193e+115
[1] 277
   [,1]   [,2]   [,3]
[1,]  4.362233e+132  8.724465e+132 -5.658504e+115
[2,] -1.454078e+132 -2.908155e+132  2.782660e+115
[3,] -1.017854e+133 -2.035709e+133  1.038290e+116
> 
> 
> # 3rd version:
> 
> mp <- function(mat,pow){
+ ans <- mat
+ for ( i in 1:(pow-1)){
+ ans 

[R] Fitting model with response and day bias

2007-05-09 Thread Bart Joosen
Hi,

I'm trying to fit a model which has a response bias, but also a day to day 
bias.
If I try to simulate the data, I don't get the right values with optim, and 
also I can't
use the function to give a prediction interval.

My simulated data are:

DF <- as.data.frame(cbind(x=rep(1:10,2),dag=rep(1:2,each=10)))
bias <- c(-0.2,0.5)
DF$y <- ((DF$x-0.1) * 5)+2 + bias[DF$dag]+rnorm(20,0,sd=0.5)


Which I try to fit with:
fn <- function(x){
a <- x[1]
b <- x[2]
c <- x[2]
sum((DF$y - (((DF$x-c)*a)+b + x[DF$dag+2]))^2)
}
optim(c(1,1,1,1,1),fn)

But with poor success.

Also, in the real model, I have a response which is y/time (as in 
lm(y/time ~ x1 + x2, ...)), but if I put the time variable on the right-hand side 
(lm(y ~ I(x1 + x2)*time)), it gets a coefficient.
Is there a way to avoid this?
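
(Aside: in fn above, x[5] is never used and c is set to x[2] rather than to a
third parameter, which may be part of the trouble. For the simulated data, the
additive day bias can also be fitted directly with lm() and a day factor, e.g.

coef(lm(y ~ x + factor(dag), data = DF))   # slope ~ 5, day-2 offset ~ bias[2] - bias[1]

which recovers the slope and the between-day offset.)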

Thanks


Bart

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing custom functions for rpart

2007-05-09 Thread hadley wickham
On 5/9/07, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
> On Wed, 9 May 2007, hadley wickham wrote:
>
> > Hi everyone,
> >
> > Does anyone has experience with (or documentation for) writing custom
> > methods with rpart? The documentation hints: "Alternatively, 'method'
> > can be a list of functions 'init', 'split' and 'eval'", but doesn't
> > provide any details as to what those methods should do or what
> > arguments they should take etc.
> >
> > I've tried looking at the package source (and the source for the S
> > code it came from) but I can't follow what's going on in C vs R, and
> > as the default methods are coded in a different way, there are no
> > examples to follow.
>
> But there are, in the tests directory.

Thanks, I had missed those.  Perhaps a pointer from the documentation
would be appropriate?

Hadley

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Unit Testing Frameworks: summary and brief discussion

2007-05-09 Thread anthony . rossini
Greetings -

I'm finally finished review, here's what I heard:

 from Tobias Verbeke:

[EMAIL PROTECTED] wrote:
> Greetings!
>
> After a quick look at current programming tools, especially with regards 

> to unit-testing frameworks, I've started looking at both "butler" and 
> "RUnit".   I would be grateful to receive real world development 
> experience and opinions with either/both.  Please send to me directly 
> (yes, this IS my work email), I will summarize (named or anonymous, as 
> contributors desire) to the list.
> 
I'm a founding member of an R Competence Center at an international 
consulting company delivering R services
mainly to the financial and pharmaceutical industries. Unit testing is 
central to our development methodology
and we've been systematically using RUnit with great satisfaction, 
mainly because of its simplicity. The
presentation of test reports is basic, though. Experiences concerning 
interaction with the RUnit developers
are very positive: gentle and responsive people.

We've never used butler. I think it is not actively developed (even if 
the developer is very active).

It should be said that many of our developers (including myself) have 
backgrounds in statistics (more than in cs
or software engineering) and are not always acquainted with the 
functionality in other unit testing frameworks
and the way they integrate in IDEs as is common in these other languages.

I'll soon be personally working with a JUnit guru and will take the 
opportunity to benchmark RUnit/ESS/emacs against
his toolkit (Eclipse with JUnit- and other plugins, working `in perfect 
harmony' (his words)). Even if in my opinion the
philosophy of test-driven development is much more important than the 
tools used, it is useful to question them from
time to time and your message reminded me of this... I'll keep you 
posted if it interests you. Why not work out an
evaluation grid / check list for unit testing frameworks ?

Totally unrelated to the former, it might be interesting to ask oneself 
how ESS could be extended to ease unit testing:
after refactoring a function some M-x ess-unit-test-function 
automagically launches the unit-test for this particular
function (based on the test function naming scheme), opens a *test 
report* buffer etc.

Kind regards,
Tobias

 from Tony Plate:

Hi, I've been looking at testing frameworks for R too, so I'm interested 
to hear of your experiences & perspective.

Here's my own experiences & perspective:
The requirements are:

(1) it should be very easy to construct and maintain tests
(2) it should be easy to run tests, both automatically and manually
(3) it should be simple to look at test results and know what went wrong 
where

I've been using a homegrown testing framework for S-PLUS that is loosely 
based on the R transcript style tests (run *.R and compare output with 
*.Rout.save in 'tests' dir).  There are two differences between this 
test framework and the standard R one:
(1) the output to match and the input commands are generated from an 
annotated transcript (annotations can switch some tests in or out 
depending on the version used)
(2) annotations can include text substitutions (regular expression 
style) to be made on the output before attempting to match (this helps 
make it easier to construct tests that will match across different 
versions that might have minor cosmetic differences in how output is 
formatted).

We use this test framework for both unit-style tests and system testing 
(where multiple libraries interact and also call the database).
One very nice aspect of this framework is that it is easy to construct 
tests -- just cut and paste from a command window.  Many tests can be 
generated very quickly this way (my impression is that it is much, much 
faster to build tests by cutting and pasting transcripts from a command 
window than it is to build tests that use functions like all.equal() to 
compare data structures.) It is also easy to maintain tests in the face 
of change (e.g., with a new version of S-PLUS or with bug fixes to 
functions or with changed database contents) -- I use ediff in emacs to 
compare test output with the stored annotated transcript and can usually 
just use ediff commands to update the transcript.

This has worked well for us and now we are looking at porting some code 
to R.  I've not seen anything that offers these conveniences in R.

It wouldn't be too difficult to add these features to the built-in R 
testing framework, but I've not had success in getting anyone in R core 
to listen to even consider changes, so I've not pursued that route after 
an initial offer of some simple patches to tests.mk and wintests.mk.

RUnit doesn't have transcript-style tests, but it wasn't very difficult 
to add support for transcript-style tests to it.  I'll probably go ahead 
and use some version of that for our porting project.  (And offer it to 
the community if the RUnit maintainers want to incorporate 

Re: [R] Vignettes menu

2007-05-09 Thread Robert Gentleman
Hi,
   I don't think that advice given was quite correct. The addition of 
the menu is not a "misfeature" of Bioconductor packages, but rather an 
intentional act, I appreciate points of view differ, but there are times 
when you might choose slightly less pejorative language.

As for the question, it seems to me to be entirely an R question, if 
someone wants to know how to remove a menu item from the windows 
version, that seems to be a pretty general question about manipulating 
R, and has nothing to do with any specific package.

Robert

Prof Brian Ripley wrote:
> Please ask questions about Bioconductor packages of the maintainer or on 
> the Bioconductor list.  This is not a (mis-)feature of R, and does not 
> happen with most packages.
> 
> On Wed, 9 May 2007, Alejandro wrote:
> 
>> Hello,
>> when i try to load a library with library command (for example
>> library(Biobase)) a new menu appears on R command console (Vignettes)
>> with links to the help documents.
>> Is it possible to eliminate this menu or avoid the appearance of this menu?
>>
>> Thanks in advance,
>> Alejandro
> 

-- 
Robert Gentleman, PhD
Program in Computational Biology
Division of Public Health Sciences
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N, M2-B876
PO Box 19024
Seattle, Washington 98109-1024
206-667-7700
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Error in plot.new() : figure margins too large

2007-05-09 Thread Prof Brian Ripley
On Wed, 9 May 2007, [EMAIL PROTECTED] wrote:

> Yes, I already had a look on previous posts but nothing is really helpful to
> me.

I have never seen anyone do this before 

> The code is:
>
> postscript(filename, horizontal=FALSE, onefile=FALSE, paper="special",

You have not set a width or height, so please do your homework.

> bg="white", family="ComputerModern", pointsize=10);
> par(mar=c(5, 4, 0, 0) + 0.1);
> plot(x.nor, y.nor, xlim=c(3,6), ylim=c(20,90), pch=normal.mark);
>
> gives error
> Error in plot.new() : figure margins too large
>
> plotting on the screen without calling postscript works just fine .
>
> Any clues? Thanks.
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pvmnorm, error message

2007-05-09 Thread Prof Brian Ripley
But it should not give that error (once pmvnorm is spelled correctly I 
get no error).


I believe the message indicates that the installed package is corrupt.

On Wed, 9 May 2007, Dimitris Rizopoulos wrote:


A is not a correlation matrix; try this instead:

A <- diag(rep(0.5, 3))
A[1, 2] <- 0.5
A[1, 3] <- 0.25
A[2, 3] <- 0.5
A <- A + t(A)
pmvnorm(lower = rep(-Inf, 3), upper = rep(2, 3), corr = A)


Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message -
From: "Andreas Faller" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, May 09, 2007 3:16 PM
Subject: [R] pvmnorm, error message


Hello there!

My operating system is Windows XP, my version of R is the latest
(R-2.5.0). Recently I have downloaded the package "mvtnorm" and a
problem with the command "pmvnorm" occurred. Trying to enter the lines
...

A <- diag(3)
A[1,2] <-0.5
A[1,3] <- 0.25
A[2,3] <- 0.5
pvmnorm(lower=c(-Inf,-Inf,-Inf), upper=c(2,2,2),mean = c(0,0,0),
corr=A)

I got the following error message:

.Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower =
as.double(lower), :
Fortran symbol name "mvtdst" not in the DLL for package "mvtnorm"

Can anyone advise what to do now to get rid of this problem? Thank you
very much indeed.

Regards, Andreas Faller

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A function for raising a matrix to a power?

2007-05-09 Thread Atte Tenkanen

Hello,

Thanks for the many replies. I tested all the functions presented.
I'm a beginner in linear algebra, but now I have a serious question ;-)
Here is a matrix A whose determinant is 3, so it is nonsingular.
Then there are similar computer runs done with each function proposed.
I have calculated powers of A between A^274 and A^277.
In the first version (see the outputs below), when n=277,
a zero appears in the upper right corner.
Can you say whether the first function is more reliable than the others?
What can you say about the accuracy of the calculations?

-Atte

#-#

# Matrix A:
A=rbind(c(-3,-4,-2),c(3,5,1),c(2,-1,4))

A
det(A)

# 1st version:
"%^%"<-function(A,n){
  if(n==1) A else {B<-A; for(i in (2:n)){A<-A%*%B}}; A
  }

for(i in 274:277){print(i);print(A%^%i)}


# 2nd version:
"%^%" <- function(A, n) if(n == 1) A else A %*% (A %^% (n-1))

for(i in 274:277){print(i);print(A%^%i)}


# 3rd version:

mp <- function(mat,pow){
ans <- mat
for ( i in 1:(pow-1)){
ans <- mat%*%ans
}
return(ans)
}

for(i in 274:277){print(i);print(mp(A,i))}


# 4th version:

library(Malmig)

mtx.exp
for(i in 274:277){print(i);print(mtx.exp(A,i))}

# 5th version:

matrix.power <- function(mat, n)
{
  # test if mat is a square matrix
  # treat n < 0 and n = 0 -- this is left as an exercise
  # trap non-integer n and return an error
  if (n == 1) return(mat)
  result <- diag(1, ncol(mat))
  while (n > 0) {
if (n %% 2 != 0) {
  result <- result %*% mat
  n <- n - 1
}
mat <- mat %*% mat
n <- n / 2
  }
  return(result)
}

for(i in 274:277){print(i);print(matrix.power(A,i))}


# 6th version:

expM.sd <- function(X,e){Xsd <- eigen(X); Xsd$vec %*% diag(Xsd$val^e) %*% t(Xsd$vec)}

for(i in 274:277){print(i);print(expM.sd(A,i))}


#-OUTPUTS---#


> A=rbind(c(-3,-4,-2),c(3,5,1),c(2,-1,4))
> 
> A
     [,1] [,2] [,3]
[1,]   -3   -4   -2
[2,]    3    5    1
[3,]    2   -1    4
> det(A)
[1] 3
> 
> # 1st version:
> "%^%"<-function(A,n){
+   if(n==1) A else {B<-A; for(i in (2:n)){A<-A%*%B}}; A
+   }
> 
> for(i in 274:277){print(i);print(A%^%i)}
[1] 274
   [,1]   [,2]   [,3]
[1,]  1.615642e+131  3.231283e+131 -3.940201e+115
[2,] -5.385472e+130 -1.077094e+131  4.925251e+114
[3,] -3.769831e+131 -7.539661e+131  7.880401e+115
[1] 275
   [,1]   [,2]   [,3]
[1,]  4.846925e+131  9.693850e+131 -1.182060e+116
[2,] -1.615642e+131 -3.231283e+131   0.00e+00
[3,] -1.130949e+132 -2.261898e+132  4.728241e+116
[1] 276
   [,1]   [,2]   [,3]
[1,]  1.454078e+132  2.908155e+132 -1.576080e+116
[2,] -4.846925e+131 -9.693850e+131   0.00e+00
[3,] -3.392848e+132 -6.785695e+132  1.576080e+117
[1] 277
   [,1]   [,2]   [,3]
[1,]  4.362233e+132  8.724465e+132   0.00e+00
[2,] -1.454078e+132 -2.908155e+132 -1.576080e+116
[3,] -1.017854e+133 -2.035709e+133  3.782593e+117
> 
> 
> # 2nd version:
> "%^%" <- function(A, n) if(n == 1) A else A %*% (A %^% (n-1))
> 
> for(i in 274:277){print(i);print(A%^%i)}
[1] 274
   [,1]   [,2]   [,3]
[1,]  1.615642e+131  3.231283e+131 -3.101125e+114
[2,] -5.385472e+130 -1.077094e+131  1.533306e+114
[3,] -3.769831e+131 -7.539661e+131  5.664274e+114
[1] 275
   [,1]   [,2]   [,3]
[1,]  4.846925e+131  9.693850e+131 -8.158398e+114
[2,] -1.615642e+131 -3.231283e+131  4.027430e+114
[3,] -1.130949e+132 -2.261898e+132  1.492154e+115
[1] 276
   [,1]   [,2]   [,3]
[1,]  1.454078e+132  2.908155e+132 -2.147761e+115
[2,] -4.846925e+131 -9.693850e+131  1.058350e+115
[3,] -3.392848e+132 -6.785695e+132  3.934193e+115
[1] 277
   [,1]   [,2]   [,3]
[1,]  4.362233e+132  8.724465e+132 -5.658504e+115
[2,] -1.454078e+132 -2.908155e+132  2.782660e+115
[3,] -1.017854e+133 -2.035709e+133  1.038290e+116
> 
> 
> # 3rd version:
> 
> mp <- function(mat,pow){
+ ans <- mat
+ for ( i in 1:(pow-1)){
+ ans <- mat%*%ans
+ }
+ return(ans)
+ }
> 
> for(i in 274:277){print(i);print(mp(A,i))}
[1] 274
   [,1]   [,2]   [,3]
[1,]  1.615642e+131  3.231283e+131 -3.101125e+114
[2,] -5.385472e+130 -1.077094e+131  1.533306e+114
[3,] -3.769831e+131 -7.539661e+131  5.664274e+114
[1] 275
   [,1]   [,2]   [,3]
[1,]  4.846925e+131  9.693850e+131 -8.158398e+114
[2,] -1.615642e+131 -3.231283e+131  4.027430e+114
[3,] -1.130949e+132 -2.261898e+132  1.492154e+115
[1] 276
   [,1]   [,2]   [,3]
[1,]  1.454078e+132  2.908155e+132 -2.147761e+115
[2,] -4.846925e+131 -9.693850e+131  1.058350e+115
[3,] -3.392848e+132 -6.785695e+132  3.934193e+115
[1] 277
   [,1]   [,2]   [,3]
[1,]  4.362233e+132  8.724465e+132 -5.658504e+115
[2,] -1.454078e+132 -2.908155e+132  2.782660e+115
[3,] -1.017854e+133 -2.035709e+133  1.038290e+116
> 
> 
> # 4th version
> 
> l

Re: [R] Writing custom functions for rpart

2007-05-09 Thread Prof Brian Ripley
On Wed, 9 May 2007, hadley wickham wrote:

> Hi everyone,
>
> Does anyone have experience with (or documentation for) writing custom
> methods with rpart? The documentation hints: "Alternatively, 'method'
> can be a list of functions 'init', 'split' and 'eval'", but doesn't
> provide any details as to what those methods should do or what
> arguments they should take etc.
>
> I've tried looking at the package source (and the source for the S
> code it came from) but I can't follow what's going on in C vs R, and
> as the default methods are coded in a different way, there are no
> examples to follow.

But there are, in the tests directory.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Error in plot.new() : figure margins too large

2007-05-09 Thread gatemaze
Yes, I already had a look at previous posts but nothing there is really helpful
to me.
The code is:

postscript(filename, horizontal=FALSE, onefile=FALSE, paper="special",
bg="white", family="ComputerModern", pointsize=10);
par(mar=c(5, 4, 0, 0) + 0.1);
plot(x.nor, y.nor, xlim=c(3,6), ylim=c(20,90), pch=normal.mark);

gives error
Error in plot.new() : figure margins too large

Plotting on the screen without calling postscript works just fine.

Any clues? Thanks.
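
[Editor's note, not part of the original post: with paper="special" the
device size is taken from the width and height arguments of postscript();
supplying them explicitly, e.g. a 6 x 4 inch region, is a common fix for
this error. A minimal sketch, assuming the device size is indeed the
culprit:]

postscript(filename, horizontal=FALSE, onefile=FALSE, paper="special",
           width=6, height=4, bg="white", family="ComputerModern",
           pointsize=10)
par(mar=c(5, 4, 0, 0) + 0.1)
plot(x.nor, y.nor, xlim=c(3,6), ylim=c(20,90), pch=normal.mark)
dev.off()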

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generalizability Theory

2007-05-09 Thread Iasonas Lamprianou
Hi friends out there, any chance any of you knows how to run Generalizability 
Theory stats using R packages? Any help will be greatly appreciated (especially if 
anyone has any examples).

jason
 
Dr. Iasonas Lamprianou
Department of Education
The University of Manchester
Oxford Road, Manchester M13 9PL, UK
Tel. 0044 161 275 3485
[EMAIL PROTECTED]



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pvmnorm, error message

2007-05-09 Thread Dimitris Rizopoulos
A is not a correlation matrix; try this instead:

A <- diag(rep(0.5, 3))
A[1, 2] <- 0.5
A[1, 3] <- 0.25
A[2, 3] <- 0.5
A <- A + t(A)
pmvnorm(lower = rep(-Inf, 3), upper = rep(2, 3), corr = A)
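
[Editor's note: the construction above yields a unit diagonal (0.5 + 0.5)
and symmetric off-diagonals, so A is now a valid correlation matrix; a
quick sanity check with base R only:]

all(diag(A) == 1) && isSymmetric(A) && all(eigen(A)$values > 0)   # TRUE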


Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: "Andreas Faller" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, May 09, 2007 3:16 PM
Subject: [R] pvmnorm, error message


Hello there!

My operating system is Windows XP, my version of R is the latest 
(R-2.5.0). Recently I have downloaded the package "mvtnorm" and a 
problem with the command "pmvnorm" occurred. Trying to enter the lines 
...

A <- diag(3)
A[1,2] <-0.5
A[1,3] <- 0.25
A[2,3] <- 0.5
pvmnorm(lower=c(-Inf,-Inf,-Inf), upper=c(2,2,2),mean = c(0,0,0), 
corr=A)

I got the following error message:

.Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower = 
as.double(lower), :
Fortran Symbolname "mvtdst" nicht in der DLL für Paket "mvtnorm"

Can anyone advise what to do now to get rid of this problem? Thank you 
very much indeed.

Regards, Andreas Faller

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Weighted least squares

2007-05-09 Thread S Ellison

>>> Adaikalavan Ramasamy <[EMAIL PROTECTED]> 09/05/2007 01:37:31 >>>
>..the variance of means of each row in table above is ZERO because 
>the individual elements that comprise each row are identical. 
>... Then is it valid then to use lm( y ~ x, weights=freq ) ?

ermmm... probably not, because if that happened I'd strongly suspect we'd 
substantially violated some assumptions. 

We are given a number of groups of identical observations. But we are seeking a 
solution to a problem that posits an underlying variance. If it's not visible 
within the groups,  where is it? Has it disappeared in numerical precision, or 
is something else going on?

If we did this regression, we would see identical residuals for all members of 
a group. That would imply that the variance arises entirely from between-group 
effects and not at all from within-group effects. To me, that would in turn 
imply that the number of observations in the group is irrelevant; we should be 
using unweighted regression on the group 'means' in this situation if we're 
using least squares at all. 

If we genuinely have independent observations and by some coincidence they have 
the same value within available precision, we might be justified in saying "we 
can't see the variance within groups, but we can estimate it from the residual 
variance". That would be equivalent to assuming constant variance, and my 
n/(s^2) reduces to n except for a scaling factor. Using n alone would then be 
consistent with one's assumptions, I think. On the kind of data I get, though 
(mostly chemical measurement with continuous scales), I'd have considerable 
difficulty justifying that assumption. And if I didn't have that kind of data 
(or a reasonable approximation thereto) I'd be wondering whether I should be 
using linear regression at all. 

S

***
This email and any attachments are confidential. Any use, co...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] pvmnorm, error message

2007-05-09 Thread Andreas Faller
Hello there!

My operating system is Windows XP, my version of R is the latest (R-2.5.0). 
Recently I have downloaded the package "mvtnorm" and a problem with the command 
"pmvnorm" occured. Trying to enter the lines ...

A <- diag(3)
A[1,2] <-0.5
A[1,3] <- 0.25
A[2,3] <- 0.5
pvmnorm(lower=c(-Inf,-Inf,-Inf), upper=c(2,2,2),mean = c(0,0,0), corr=A)

I got the following error message:

.Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower = 
as.double(lower), :
Fortran Symbolname "mvtdst" nicht in der DLL für Paket "mvtnorm"

Can anyone advise what to do now to get rid of this problem? Thank you very 
much indeed.

Regards, Andreas Faller

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Writing custom functions for rpart

2007-05-09 Thread hadley wickham
Hi everyone,

Does anyone have experience with (or documentation for) writing custom
methods with rpart? The documentation hints: "Alternatively, 'method'
can be a list of functions 'init', 'split' and 'eval'", but doesn't
provide any details as to what those methods should do or what
arguments they should take etc.

I've tried looking at the package source (and the source for the S
code it came from) but I can't follow what's going on in C vs R, and
as the default methods are coded in a different way, there are no
examples to follow.

Thanks,

Hadley

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fehlermeldung

2007-05-09 Thread Stefan Grosse
First: the language of this mailing list is English (not to bother you,
but you potentially reach more people who could possibly help).

Second: for such errors, specifying your R version and your system
(Windows/Linux) would help.

Third: a minimal example is also helpful: how you have used pmvnorm...

I suppose you use Windows... I cannot replicate your error with R 2.5.0
using example(pmvnorm).

Stefan

[EMAIL PROTECTED] wrote:
> Dear Sir or Madam,
>
> To keep it short: I recently downloaded the package "mvtnorm".
> Unfortunately, calling the command "pmvnorm" produces the following 
> error message:
>
> .Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower = 
> as.double(lower),  : 
> Fortran Symbolname "mvtdst" nicht in der DLL für Paket "mvtnorm"
>
> I have no idea at all what that means. Could you help me with this? 
>
> Many thanks for your help.
>
> Best regards,
> Andreas Faller
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nlme fixed effects specification

2007-05-09 Thread roger koenker
Just to provide some closure on this thread, let me add two comments:

1.  Doug's version of my sweep function:

diffid1 <-
function(h, id) {
 id <- as.factor(id)[ , drop = TRUE]
 apply(as.matrix(h), 2, function(x) x - tapply(x, id, mean)[id])
}

is far more elegant than my original, and works perfectly, but

2.  I should have mentioned that the proposed strategy gets the
coefficient estimates right; however, their standard errors need a
degrees-of-freedom correction, which in the present instance
is non-negligible -- sqrt(98/89) -- since the lm() step doesn't
know that we have already estimated the fixed effects with the
sweep operation.
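
[Editor's sketch of the correction described above, not part of the original
message; y.swept and x.swept are made-up names for the within-group demeaned
variables returned by diffid1, with 100 observations and 10 groups as in the
thread's example:]

fit <- lm(y.swept ~ x.swept)         # lm() reports 98 residual df
se.naive <- summary(fit)$coefficients["x.swept", "Std. Error"]
se.fixed <- se.naive * sqrt(98 / 89) # 100 obs - 10 group means - 1 slope = 89 df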

url:   www.econ.uiuc.edu/~roger        Roger Koenker
email: [EMAIL PROTECTED]               Department of Economics
vox:   217-333-4558                    University of Illinois
fax:   217-244-6678                    Champaign, IL 61820


On May 5, 2007, at 7:16 PM, Douglas Bates wrote:

> On 5/5/07, roger koenker <[EMAIL PROTECTED]> wrote:
>>
>> On May 5, 2007, at 3:14 PM, Douglas Bates wrote:
>> >
>> > As Roger indicated in another reply you should be able to obtain  
>> the
>> > results you want by sweeping out the means of the groups from  
>> both x
>> > and y.  However, I tried Roger's function and a modified version  
>> that
>> > I wrote and could not show this.  I'm not sure what I am doing  
>> wrong.
>>
>> Doug,  Isn't it just that you are generating a  balanced factor and
>> Ivo is
>> generating an unbalanced one -- he wrote:
>
>> > fe = as.factor( as.integer( runif(100)*10 ) );
>
>> the coefficient on x is the same
>
>> or, aarrgh,  is it that you don't like the s.e. being wrong.   I
>> didn't notice
>> this at first.  But it shouldn't happen.  I'll have to take another
>> look at  this.
>
> No, my mistake was much dumber than that.  I was comparing the wrong
> coefficient.  For some reason I was comparing the coefficient for x in
> the second fit to the Intercept from the first fit.
>
> I'm glad that it really is working and, yes, you are right, the
> degrees of freedom are wrong in the second fit because the effect of
> those 10 degrees of freedom are removed from the data before the model
> is fit.
>
>
>> > I enclose a transcript that shows that I can reproduce the  
>> result from
>> > Roger's function but it doesn't do what either of us think it  
>> should.
>> > BTW, I realize that the estimate for the Intercept should be  
>> zero in
>> > this case.
>> >
>> >
>> >
>> >> now, with a few IQ points more, I would have looked at the lme
>> >> function instead of the nlme function in library(nlme).[then
>> >> again, I could understand stats a lot better with a few more IQ
>> >> points.]  I am reading the lme description now, but I still don't
>> >> understand how to specify that I want to have dummies in my
>> >> specification, plus the x variable, and that's it.  I think I  
>> am not
>> >> understanding the integration of fixed and random effects in  
>> the same
>> >> R functions.
>> >>
>> >> thanks for pointing me at your lme4 library.  on linux, version
>> >> 2.5.0, I did
>> >>   R CMD INSTALL matrix*.tar.gz
>> >>   R CMD INSTALL lme4*.tar.gz
>> >> and it installed painlessly.  (I guess R install packages don't  
>> have
>> >> knowledge of what they rely on;  lme4 requires matrix, which  
>> the docs
>> >> state, but having gotten this wrong, I didn't get an error.  no  
>> big
>> >> deal.  I guess I am too used to automatic resolution of  
>> dependencies
>> >> from linux installers these days that I did not expect this.)
>> >>
>> >> I now tried your specification:
>> >>
>> >> > library(lme4)
>> >> Loading required package: Matrix
>> >> Loading required package: lattice
>> >> > lmer(y~x+(1|fe))
>> >> Linear mixed-effects model fit by REML
>> >> Formula: y ~ x + (1 | fe)
>> >>   AIC  BIC logLik MLdeviance REMLdeviance
>> >>   282  290   -138        270          276
>> >> Random effects:
>> >>  Groups   NameVariance   Std.Dev.
>> >>  fe   (Intercept) 0.0445 0.211
>> >>  Residual 0.889548532468 0.9431588
>> >> number of obs: 100, groups: fe, 10
>> >>
>> >> Fixed effects:
>> >> Estimate Std. Error t value
>> >> (Intercept)  -0.0188 0.0943  -0.199
>> >> x 0.0528 0.0904   0.585
>> >>
>> >> Correlation of Fixed Effects:
>> >>   (Intr)
>> >> x -0.022
>> >> Warning messages:
>> >> 1: Estimated variance for factor 'fe' is effectively zero
>> >>  in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L,
>> >> tolerance =
>> >> 0.000149011611938477,
>> >> 2: $ operator not defined for this S4 class, returning NULL in: x
>> >> $symbolic.cor
>> >>
>> >> Without being a statistician, I can still determine that this  
>> is not
>> >> the model I would like to work with.  The coefficient is  
>> 0.0528, not
>> >> 0.0232.  (I am also not sure why I am getting these warning  
>> messages
>> >> on my system, either, but I don't think it matters.)
>> >>
>> >> is there a simple way to get 

[R] Fehlermeldung

2007-05-09 Thread afaller_
Dear Sir or Madam,

To keep it short: I recently downloaded the package "mvtnorm". 
Unfortunately, calling the command "pmvnorm" produces the following 
error message:

.Fortran("mvtdst", N = as.integer(n), NU=as.integer(df), lower = 
as.double(lower),  : 
Fortran Symbolname "mvtdst" nicht in der DLL für Paket "mvtnorm"
[i.e. Fortran symbol name "mvtdst" not found in the DLL for package "mvtnorm"]

I have no idea at all what that means. Could you help me with this? 

Many thanks for your help.

Best regards,
Andreas Faller

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Weighted least squares

2007-05-09 Thread John Fox
Dear Hadley,

> -Original Message-
> From: hadley wickham [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 09, 2007 2:21 AM
> To: John Fox
> Cc: R-help@stat.math.ethz.ch
> Subject: Re: [R] Weighted least squares
> 
> Thanks John,
> 
> That's just the explanation I was looking for. I had hoped 
> that there would be a built in way of dealing with them with 
> R, but obviously not.
> 
> Given that explanation, it stills seems to me that the way R 
> calculates n is suboptimal, as demonstrated by my second example:
> 
> summary(lm(y ~ x, data=df, weights=rep(c(0,2), each=50))) 
> summary(lm(y ~ x, data=df, weights=rep(c(0.01,2), each=50)))
> 
> the weights are only very slightly different but the 
> estimates of residual standard error are quite different (20 
> vs 14 in my run)
> 

Observations with 0 weight are literally excluded, while those with very
small weight (relative to others) don't contribute much to the fit.
Consequently you get very similar coefficients but different numbers of
observations.
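
[Editor's illustration of this point, reusing the simulated data from
earlier in the thread; not part of the original reply:]

df <- data.frame(x = runif(100, 0, 100))
df$y <- df$x + 1 + rnorm(100, sd = 15)
f1 <- lm(y ~ x, data = df, weights = rep(2, 100))  # variance weights of 2
f2 <- lm(y ~ x, data = rbind(df, df))              # each case literally duplicated
rbind(coef(f1), coef(f2))                          # (essentially) identical coefficients
c(df.residual(f1), df.residual(f2))                # 98 vs 198, hence different std. errors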

I hope this helps,
 John

> Hadley
> 
> On 5/8/07, John Fox <[EMAIL PROTECTED]> wrote:
> > Dear Hadley,
> >
> > I think that the problem is that the term "weights" has different 
> > meanings, which, although they are related, are not quite the same.
> >
> > The weights used by lm() are (inverse-)"variance weights," 
> reflecting 
> > the variances of the errors, with observations that have 
> low-variance 
> > errors therefore being accorded greater weight in the 
> resulting WLS regression.
> > What you have are sometimes called "case weights," and I'm 
> unaware of 
> > a general way of handling them in R, although you could 
> regenerate the 
> > unaggregated data. As you discovered, you get the same coefficients 
> > with case weights as with variance weights, but different 
> standard errors.
> > Finally, there are "sampling weights," which are inversely 
> > proportional to the probability of selection; these are 
> accommodated by the survey package.
> >
> > To complicate matters, this terminology isn't entirely standard.
> >
> > I hope this helps,
> >  John
> >
> > 
> > John Fox, Professor
> > Department of Sociology
> > McMaster University
> > Hamilton, Ontario
> > Canada L8S 4M4
> > 905-525-9140x23604
> > http://socserv.mcmaster.ca/jfox
> > 
> >
> > > -Original Message-
> > > From: [EMAIL PROTECTED] 
> > > [mailto:[EMAIL PROTECTED] On Behalf Of hadley 
> > > wickham
> > > Sent: Tuesday, May 08, 2007 5:09 AM
> > > To: R Help
> > > Subject: [R] Weighted least squares
> > >
> > > Dear all,
> > >
> > > I'm struggling with weighted least squares, where 
> something that I 
> > > had assumed to be true appears not to be the case.
> > > Take the following data set as an example:
> > >
> > > df <- data.frame(x = runif(100, 0, 100)) df$y <- df$x + 1 + 
> > > rnorm(100, sd=15)
> > >
> > > I had expected that:
> > >
> > > summary(lm(y ~ x, data=df, weights=rep(2, 100))) 
> summary(lm(y ~ x, 
> > > data=rbind(df,df)))
> > >
> > > would be equivalent, but they are not.  I suspect the 
> difference is 
> > > how the degrees of freedom is calculated - I had expected 
> it to be 
> > > sum(weights), but seems to be sum(weights > 0).  This seems 
> > > unintuitive to me:
> > >
> > > summary(lm(y ~ x, data=df, weights=rep(c(0,2), each=50))) 
> > > summary(lm(y ~ x, data=df, weights=rep(c(0.01,2), each=50)))
> > >
> > > What am I missing?  And what is the usual way to do a linear 
> > > regression when you have aggregated data?
> > >
> > > Thanks,
> > >
> > > Hadley
> > >
> > > __
> > > R-help@stat.math.ethz.ch mailing list 
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide
> > > http://www.R-project.org/posting-guide.html
> > > and provide commented, minimal, self-contained, reproducible code.
> > >
> >
> >
> >
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Weighted least squares

2007-05-09 Thread John Fox
Dear Adai,

> -Original Message-
> From: Adaikalavan Ramasamy [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, May 08, 2007 8:38 PM
> To: S Ellison
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; R-help@stat.math.ethz.ch
> Subject: Re: [R] Weighted least squares
> 
> http://en.wikipedia.org/wiki/Weighted_least_squares gives a 
> formulaic description of what you have said.
> 
> I believe the original poster has converted something like this
> 
>   y x
>   0   1.1
>   0   2.2
>   0   2.2
>   0   2.2
>   1   3.3
>   1   3.3
>   2   4.4
>  ...
> 
> into something like the following
> 
>   y x freq
>   0   1.11
>   0   2.23
>   1   3.32
>   2   4.41
>  ...
> 
> Now, the variance of means of each row in table above is ZERO 
> because the individual elements that comprise each row are 
> identical. Therefore your method of using inverse-variance 
> will not work here.
> 
> Then is it valid then to use lm( y ~ x, weights=freq ) ?

No, because the weights argument gives inverse-variance weights not case
weights.
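
[Editor's sketch of the "regenerate the unaggregated data" route mentioned
earlier in the thread, using the small y/x/freq table quoted below:]

agg <- data.frame(y    = c(0, 0, 1, 2),
                  x    = c(1.1, 2.2, 3.3, 4.4),
                  freq = c(1, 3, 2, 1))
expanded <- agg[rep(seq_len(nrow(agg)), agg$freq), c("y", "x")]
lm(y ~ x, data = expanded)   # ordinary lm() on the case-expanded data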

Regards,
 John

> 
> Regards, Adai
> 
> 
> 
> S Ellison wrote:
> > Hadley,
> > 
> > You asked
> >> .. what is the usual way to do a linear regression when you have 
> >> aggregated data?
> > 
> > Least squares generally uses inverse variance weighting. 
> For aggregated data fitted as mean values, you just need the 
> variances for the _means_. 
> > 
> > So if you have individual means x_i and sd's s_i that arise 
> from aggregated data with n_i observations in group i, the 
> natural weighting is by inverse squared standard error of the 
> mean. The appropriate weight for x_i would then be 
> n_i/(s_i^2). In R, that's n/(s^2), as n and s would be 
> vectors with the same length as x. If all the groups had the 
> same variance, or nearly so, s is a scalar; if they have the 
> same number of observations, n is a scalar. 
> > 
> > Of course, if they have the same variance and same number 
> of observations, they all have the same weight and you 
> needn't weight them at all: see previous posting!
> > 
> > Steve E
> > 
> > 
> > 
> > ***
> > This email and any attachments are confidential. Any use, 
> > co...{{dropped}}
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide 
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> > 
> > 
> > 
> 
>

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Vignettes menu

2007-05-09 Thread Prof Brian Ripley
Please ask questions about Bioconductor packages of the maintainer or on 
the Bioconductor list.  This is not a (mis-)feature of R, and does not 
happen with most packages.

On Wed, 9 May 2007, Alejandro wrote:

> Hello,
> when i try to load a library with library command (for example
> library(Biobase)) a new menu appears on R command console (Vignettes)
> with links to the help documents.
> Is it possible to eliminate this menu or avoid the appearance of this menu?
>
> Thanks in advance,
> Alejandro

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Including data when building an R package in windows

2007-05-09 Thread Duncan Murdoch
On 09/05/2007 6:12 AM, michael watson (IAH-C) wrote:
> Turns out calling the file DetectiV.rda (rather than .Rdata) fixed it.

The documentation says to use ".RData", not ".Rdata".  Case-sensitivity 
is important here, because you're building something that is supposed to 
be portable across many platforms.
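
[Editor's sketch of the one-file / one-data() approach discussed in this
thread; the data frame names d1..d6 are placeholders:]

# save all six data frames into a single file in the package's data directory
save(d1, d2, d3, d4, d5, d6,
     file = file.path("DetectiV", "data", "DetectiV.rda"))
# after building and installing the package,
#   library(DetectiV); data(DetectiV)
# loads all six objects in one call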

Duncan Murdoch

> Odd.
> 
> 
> -Original Message-
> From: michael watson (IAH-C)
> Sent: Wed 09/05/2007 11:09 AM
> To: michael watson (IAH-C); r-help@stat.math.ethz.ch
> Subject: RE: [R] Including data when building an R package in windows
>  
> I forgot to mention.  After using package.skeleton(), I replaced the six .rda 
> files with a single .Rdata file that contained all six data frames.
> 
> -Original Message-
> From: [EMAIL PROTECTED] on behalf of michael watson (IAH-C)
> Sent: Wed 09/05/2007 10:58 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Including data when building an R package in windows
>  
> I've done this before, but when I tried the same thing this time, it didn't 
> work.
> 
> I'm using R 2.4.1 on windows.
> 
> I have 6 data frames that I want to include in a package I am building.  
> Instead of making users issue six different "data(...)" commands, I want to 
> wrap them all up in one file so that users issue one "data(...)" command and 
> have access to all six data sets.
> 
> I had the functions and data loaded in R, nothing else, used 
> package.skeleton() to create the structure.
> 
> Edited everything I needed to (help etc)
> 
> Ran "R CMD INSTALL --build DetectiV" in MS-DOS, the package built.
> 
> Installed the zip file.  Everything fine.
> 
> In R:
> 
>> library(DetectiV)
>> data(DetectiV)
> Warning message:
> data set 'DetectiV' not found in: data(DetectiV) 
> 
> C:\Program Files\R\R-2.4.1\library\DetectiV\data contains filelist and 
> Rdata.zip.
> 
> filelist is:
> 
> DetectiV.Rdata
> filelist
> 
> Rdata.zip contains a file called DetectiV.Rdata.
> 
> This is the exact same structure I have in place for another of my packages - 
> and that one works when I issue data(...) commands, whereas this one doesn't.
> 
> So, any ideas what I am doing wrong?
> 
> Thanks
> Mick
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Including data when building an R package in windows

2007-05-09 Thread michael watson \(IAH-C\)
Turns out calling the file DetectiV.rda (rather than .Rdata) fixed it.
Odd.


-Original Message-
From: michael watson (IAH-C)
Sent: Wed 09/05/2007 11:09 AM
To: michael watson (IAH-C); r-help@stat.math.ethz.ch
Subject: RE: [R] Including data when building an R package in windows
 
I forgot to mention.  After using package.skeleton(), I replaced the six .rda 
files with a single .Rdata file that contained all six data frames.

-Original Message-
From: [EMAIL PROTECTED] on behalf of michael watson (IAH-C)
Sent: Wed 09/05/2007 10:58 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Including data when building an R package in windows
 
I've done this before, but when I tried the same thing this time, it didn't 
work.

I'm using R 2.4.1 on windows.

I have 6 data frames that I want to include in a package I am building.  
Instead of making users issue six different "data(...)" commands, I want to 
wrap them all up in one file so that users issue one "data(...)" command and 
have access to all six data sets.

I had the functions and data loaded in R, nothing else, used package.skeleton() 
to create the structure.

Edited everything I needed to (help etc)

Ran "R CMD INSTALL --build DetectiV" in MS-DOS, the package built.

Installed the zip file.  Everything fine.

In R:

>library(DetectiV)
>data(DetectiV)
Warning message:
data set 'DetectiV' not found in: data(DetectiV) 

C:\Program Files\R\R-2.4.1\library\DetectiV\data contains filelist and 
Rdata.zip.

filelist is:

DetectiV.Rdata
filelist

Rdata.zip contains a file called DetectiV.Rdata.

This is the exact same structure I have in place for another of my packages - and 
that one works when I issue data(...) commands, whereas this one doesn't.

So, any ideas what I am doing wrong?

Thanks
Mick

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Including data when building an R package in windows

2007-05-09 Thread michael watson \(IAH-C\)
I forgot to mention.  After using package.skeleton(), I replaced the six .rda 
files with a single .Rdata file that contained all six data frames.

-Original Message-
From: [EMAIL PROTECTED] on behalf of michael watson (IAH-C)
Sent: Wed 09/05/2007 10:58 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Including data when building an R package in windows
 
I've done this before, but when I tried the same thing this time, it didn't 
work.

I'm using R 2.4.1 on windows.

I have 6 data frames that I want to include in a package I am building.  
Instead of making users issue six different "data(...)" commands, I want to 
wrap them all up in one file so that users issue one "data(...)" command and 
have access to all six data sets.

I had the functions and data loaded in R, nothing else, used package.skeleton() 
to create the structure.

Edited everything I needed to (help etc)

Ran "R CMD INSTALL --build DetectiV" in MS-DOS, the package built.

Installed the zip file.  Everything fine.

In R:

>library(DetectiV)
>data(DetectiV)
Warning message:
data set 'DetectiV' not found in: data(DetectiV) 

C:\Program Files\R\R-2.4.1\library\DetectiV\data contains filelist and 
Rdata.zip.

filelist is:

DetectiV.Rdata
filelist

Rdata.zip contains a file called DetectiV.Rdata.

This is the exact same structure I have in place for another of my packages - and 
that one works when I issue data(...) commands, whereas this one doesn't.

So, any ideas what I am doing wrong?

Thanks
Mick

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Increasing precision of rgenoud solutions

2007-05-09 Thread Paul Smith
Dear All

I am using rgenoud to solve the following maximization problem:

myfunc <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  if (x1^2+x2^2 > 1)
return(-999)
  else x1+x2
}

genoud(myfunc, nvars=2,
Domains=rbind(c(0,1),c(0,1)),max=TRUE,boundary.enforcement=2,solution.tolerance=0.01)

How can one increase the precision of the solution

$par
[1] 0.7072442 0.7069694

?

I have tried solution.tolerance but without a significant improvement.

Any ideas?
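
[Editor's note, not part of the original post: the exact optimum here is
x1 = x2 = 1/sqrt(2), about 0.7071068. In genoud() the usual levers for more
precision are a larger population, more patience before stopping, and a
tighter tolerance, for example:]

genoud(myfunc, nvars=2,
       Domains=rbind(c(0,1),c(0,1)), max=TRUE, boundary.enforcement=2,
       solution.tolerance=1e-8,
       pop.size=5000,          # larger population
       wait.generations=50,    # keep evolving after the best value stalls
       max.generations=500)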

Thanks in advance,

Paul

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Including data when building an R package in windows

2007-05-09 Thread michael watson \(IAH-C\)
I've done this before, but when I tried the same thing this time, it didn't 
work.

I'm using R 2.4.1 on windows.

I have 6 data frames that I want to include in a package I am building.  
Instead of making users issue six different "data(...)" commands, I want to 
wrap them all up in one file so that users issue one "data(...)" command and 
have access to all six data sets.

I had the functions and data loaded in R, nothing else, used package.skeleton() 
to create the structure.

Edited everything I needed to (help etc)

Ran "R CMD INSTALL --build DetectiV" in MS-DOS, the package built.

Installed the zip file.  Everything fine.

In R:

>library(DetectiV)
>data(DetectiV)
Warning message:
data set 'DetectiV' not found in: data(DetectiV) 

C:\Program Files\R\R-2.4.1\library\DetectiV\data contains filelist and 
Rdata.zip.

filelist is:

DetectiV.Rdata
filelist

Rdata.zip contains a file called DetectiV.Rdata.

This is the exact same structure I have in place for another of my packages - and 
that one works when I issue data(...) commands, whereas this one doesn't.

So, any ideas what I am doing wrong?

Thanks
Mick

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Survival problem with two populations?

2007-05-09 Thread nelson -
Hi all,
   I'm modeling a survival problem about the duration of a treatment.
Patients leave the treatment following a mixed Weibull distribution (a
mixture of 2 Weibulls). The first group are patients who return to taking
drugs, and the latter are patients who end the treatment because they don't
want to take drugs anymore. The two curves are obviously mixed, as in this
figure


[ASCII sketch of the two overlapping (mixed) dropout curves omitted]

What survival model can I apply? I think Kaplan-Meier and Cox are
tied to models described by only one distribution, and I don't know of
other R packages that let me do this type of analysis...

Any suggestions?
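
[Editor's sketch, not part of the original post: one option is to fit the
two-component Weibull mixture directly by maximum likelihood with optim();
'time' and 'status' (1 = event, 0 = censored) are assumed column names in a
data frame 'mydata':]

negll <- function(par, time, status) {
  p  <- plogis(par[1])                   # mixing proportion
  a1 <- exp(par[2]); b1 <- exp(par[3])   # shape and scale of component 1
  a2 <- exp(par[4]); b2 <- exp(par[5])   # shape and scale of component 2
  dens <- p * dweibull(time, a1, b1) + (1 - p) * dweibull(time, a2, b2)
  surv <- p * pweibull(time, a1, b1, lower.tail = FALSE) +
          (1 - p) * pweibull(time, a2, b2, lower.tail = FALSE)
  -sum(status * log(dens) + (1 - status) * log(surv))
}
fit <- optim(c(0, 0, log(30), 0, log(300)), negll,
             time = mydata$time, status = mydata$status, method = "BFGS")
plogis(fit$par[1])   # estimated mixing weight; exp(fit$par[-1]) gives the Weibull parameters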

Thanks, and sorry for my English,
  nelson



-- 
--
Consulenze Linux e Windows

http://nelsonenterprise.net

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing a list of Objects

2007-05-09 Thread Gabor Csardi
I assumed that 'a' is a character vector. In your case it seems that 'a'
is a list with a single element, a character vector. I've no idea 
why you would want to use such a data structure for this, but if you
do want to, then the command is 

rm(list = a[[1]])

See also ?rm and maybe also 'Introduction to R' could be useful.

Gabor

On Wed, May 09, 2007 at 02:33:16PM +0530, Patnaik, Tirthankar  wrote:
> Hi Gabor,
>   Tried this, and didn't quite work.
> 
> > a <- list(paste("C243.Daily",sep="",1:5))
> > a
> [[1]]
> [1] "C243.Daily1" "C243.Daily2" "C243.Daily3" "C243.Daily4"
> "C243.Daily5"
> 
> > rm(list=a)
> Error in remove(list, envir, inherits) : invalid first argument
> >  
> 
> -Tir
> 
> -Original Message-
> From: Gabor Csardi [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 09, 2007 12:37 PM
> To: Patnaik, Tirthankar [GWM-CIR]
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] Removing a list of Objects
> 
> Hmmm,
> 
> rm(list=a)
> 
> is what you want.
> 
> Gabor
> 
> On Wed, May 09, 2007 at 10:29:05AM +0530, Patnaik, Tirthankar  wrote:
> > Hi,
> > I have a simple beginner's question on removing a list of
> objects. 
> > Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my 
> > workspace. I'd like to remove these without using rm five times.
> > 
> > So I write. 
> > 
> > > a <- list(paste("C243.Daily",sep="",1:5))
> > 
> > > rm(a)
> > 
> > Obviously this wouldn't work, as it would only remove the object a.
> > 
> > But is there any way I could do this, like on the lines of a UNIX `
> > (grave-accent)
> > 
> > Something like
> > 
> > Prompt> rm `find . -type f -name "foo"`
> > 
> > TIA and best,
> > -Tir
> > 
> > __
> > R-help@stat.math.ethz.ch mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide 
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> -- 
> Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing a list of Objects

2007-05-09 Thread gyadav

try this 

rm(list=ls(pat="C243.Daily"))


> ls(pat=".")
 [1] ".chutes"  ".densityplot" 
".densityplot.default" ".densityplot.formula"
 [5] ".eda" ".eda.ts" ".fancy.stripchart" 
".freqpoly" 
 [9] ".hist.and.boxplot"".lag" ".lm"   
".median.test" 
[13] ".plot.hist.and.box"   ".scatterplot" ".sim"
".violinplot" 
[17] ".violinplot.default"  ".violinplot.formula"  ".z.test"   
 
> ls(pat=".l")
[1] ".lag" ".lm" 
> rm(list = ls(pat=".l"))
> ls(pat=".l")
character(0)


-  Regards,

  \\\|///
   \\   --   //
(  o   o  )
oOOo-(_)-oOOo
|
| Gaurav Yadav
| Assistant Manager, CCIL, Mumbai (India)
| Mob: +919821286118 Email: [EMAIL PROTECTED]
| Man is made by his belief, as He believes, so He is.
|   --- Bhagavad Gita 
|___Oooo
 oooO(  )
 (  )   )   /
  \   ((_/
\_ )




"Patnaik, Tirthankar " <[EMAIL PROTECTED]> 
Sent by: [EMAIL PROTECTED]
05/09/2007 02:33 PM

To
"Gabor Csardi" <[EMAIL PROTECTED]>
cc
r-help@stat.math.ethz.ch
Subject
Re: [R] Removing a list of Objects






Hi Gabor,
 Tried this, and didn't quite work.

> a <- list(paste("C243.Daily",sep="",1:5))
> a
[[1]]
[1] "C243.Daily1" "C243.Daily2" "C243.Daily3" "C243.Daily4"
"C243.Daily5"

> rm(list=a)
Error in remove(list, envir, inherits) : invalid first argument
> 

-Tir

-Original Message-
From: Gabor Csardi [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 12:37 PM
To: Patnaik, Tirthankar [GWM-CIR]
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Removing a list of Objects

Hmmm,

rm(list=a)

is what you want.

Gabor

On Wed, May 09, 2007 at 10:29:05AM +0530, Patnaik, Tirthankar  wrote:
> Hi,
>I have a simple beginner's question on removing a list of
objects. 
> Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my 
> workspace. I'd like to remove these without using rm five times.
> 
> So I write. 
> 
> > a <- list(paste("C243.Daily",sep="",1:5))
> 
> > rm(a)
> 
> Obviously this wouldn't work, as it would only remove the object a.
> 
> But is there any way I could do this, like on the lines of a UNIX `
> (grave-accent)
> 
> Something like
> 
> Prompt> rm `find . -type f -name "foo"`
> 
> TIA and best,
> -Tir
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




DISCLAIMER AND CONFIDENTIALITY CAUTION:\ \ This message and ...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to read several text files at once!

2007-05-09 Thread Petr Klasterecky
A for-loop looks like the best solution here. An outline:

csum <- 0 #or a matrix of 0
for (i in (1:253)){
tmpa <- read.table(file=paste("A",i,sep=""), header=TRUE)
tmpb <- read.table(file=paste("B",i,sep=""), header=TRUE)
tmpc <- tmpa + tmpb #or whatever operation you like
write.table(tmpc, file=paste("C",i,sep=""))
csum <- csum + ...
}#end for
write.table(csum, file="csum.dat")

See ?write.table for more details on writing data to files.
Petr


Faramarzi Monireh napsal(a):
> Dear R users,
> I am a beginner in R. I have 506 text files (data frame) in one folder namely 
> DATA. The files are called A1 to A253 (253 files) and B1 to B253 (another 253 
> files). Each file has two columns; V1 (row number)
> and V2 (the value for each row name). Now I would like to add the values of
> V2 in each A-file with its relative value in B-file and save it as a
> new data frame named as C (e.g. C1 with V1 (row number) and V2
> (A1$V2+B1$V2) ). Therefore, at the end I will have 253 C files 
> (C1 to C253). I also would like to sum a number of the C files with each 
> other (e.g. C1+ C2+ …+C50) and save as a new file like C_sum.
> 
> I already tried to write a short script to do all together but it did not
> work. I only was able to do for each C file separately. The main problem
> is that I do not know how to read several text files and how to use those
> files to make C files and afterwards C-sum files. I would be grateful if 
> somebody can help me to write a short script to do all together.
> Thank you very much in advance for your cooperation,
> Monireh
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
Petr Klasterecky
Dept. of Probability and Statistics
Charles University in Prague
Czech Republic

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing a list of Objects

2007-05-09 Thread Patnaik, Tirthankar
Hi Gabor,
Tried this, and didn't quite work.

> a <- list(paste("C243.Daily",sep="",1:5))
> a
[[1]]
[1] "C243.Daily1" "C243.Daily2" "C243.Daily3" "C243.Daily4"
"C243.Daily5"

> rm(list=a)
Error in remove(list, envir, inherits) : invalid first argument
>  

-Tir

-Original Message-
From: Gabor Csardi [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 12:37 PM
To: Patnaik, Tirthankar [GWM-CIR]
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Removing a list of Objects

Hmmm,

rm(list=a)

is what you want.

Gabor

On Wed, May 09, 2007 at 10:29:05AM +0530, Patnaik, Tirthankar  wrote:
> Hi,
>   I have a simple beginner's question on removing a list of
objects. 
> Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my 
> workspace. I'd like to remove these without using rm five times.
> 
> So I write. 
> 
> > a <- list(paste("C243.Daily",sep="",1:5))
> 
> > rm(a)
> 
> Obviously this wouldn't work, as it would only remove the object a.
> 
> But is there any way I could do this, like on the lines of a UNIX `
> (grave-accent)
> 
> Something like
> 
> Prompt> rm `find . -type f -name "foo"`
> 
> TIA and best,
> -Tir
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Vignettes menu

2007-05-09 Thread Alejandro
Hello,
when I try to load a library with the library command (for example
library(Biobase)) a new menu (Vignettes) appears on the R command console
with links to the help documents.
Is it possible to eliminate this menu or avoid its appearance?

Thanks in advance,
Alejandro

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] statistics/correlation question NOT R question

2007-05-09 Thread gyadav

Hi Horace and Mark

@@@ I know myself that this may be of little help, but I am going ahead 
with it anyway. Secondly, in the solution by Horace, if corr(x, y) is beta 
then it implies that var(x) = var(y). Is that what you want, Mark? Well, 
what I did I am writing down here; it may be wrong. Please comment.

var(y_t) = var(beta * x_t + e_t)
=> var(y_t) = beta^2 * var(x_t) + var(e_t) + 2 * cov(beta * x_t , e_t)
as cov(beta * x_t , e_t) = 0, hence
var(y_t) = beta^2 * var(x_t) + var(e_t)                            (i)

then the required condition is

corr(x_t, y_t) = cov(x_t, y_t) / (sigma_x * sigma_y) = beta        (ii)

further E[y_t] = beta * E[x_t] + E[e_t]. As E[e_t] = 0, hence

beta = E[y_t] / E[x_t]   (only if E[x_t] != 0)                     (iii)

Now what to do ???
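
[Editor's aside: since cov(x_t, y_t) = beta * var(x_t), we have
corr(x_t, y_t) = beta * sigma_x / sigma_y; combined with (i), the correlation
equals beta exactly when var(e_t) = (1 - beta^2) * var(x_t). A quick
simulation check:]

set.seed(1)
beta <- 0.6; var.x <- 4
x <- rnorm(1e6, sd = sqrt(var.x))
e <- rnorm(1e6, sd = sqrt((1 - beta^2) * var.x))   # the condition derived above
y <- beta * x + e
cor(x, y)                                          # approximately 0.6 = beta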





Mark, I suppose you make the usual assumptions, ie. E[x]=0, 
E[x*epsilon]=0, the correlation is just simply,

corr(x,y) = beta * ( sd(x) / sd(y) )

And you could get var(y) from var(x) and var(epsilon).

HTH.

Horace




This is not an R question but if anyone can help me, it's much
appreciated.

Suppose I have a series ( stationary ) y_t and a series x_t ( stationary
)and x_t has variance sigma^2_x and epsilon is normal 
(0, sigma^2_epsilon )

and the two series have the relation

  y_t = Beta*x_t + epsilon

My question is if there are particular values that sigma^2_x and
sigma^2_epsilon have to take in order for corr(x_t,y_t) to equal Beta ?

I attempted to figure this out using two different methods and in one
case I end up involving sigma^2_epsilon and in the other I don't
and I'm not sure if either method is correct. I think I need to use
results form the conditional bivariate normal but i'm really not sure.
Also, it's not a homework problem because I am too old to have homework.
Thanks for any insights/solutions.


This is not an offer (or solicitation of an offer) to buy/se...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




DISCLAIMER AND CONFIDENTIALITY CAUTION:\ \ This message and ...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Need an working examples

2007-05-09 Thread Dieter Menne
Aurel Razvan Duica  amdocs.com> writes:

> Can you please provide me with an working example of clustering in R
> (not very complicated) and with an example of survey in R.
> 
> We would appreciate this very much. We need them as soon as possible.

Sooner is not possible: there are really simple examples both in stats (e.g.
hclust) and in package cluster (starting from agnes).
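
[Editor's sketch of such a minimal example, using only a built-in data set:]

hc <- hclust(dist(USArrests))   # hierarchical clustering from the stats package
plot(hc)
library(cluster)
ag <- agnes(USArrests)          # the analogous starting point in the cluster package
plot(ag)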


Dieter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Need an working examples

2007-05-09 Thread Aurel Razvan Duica

Hi,

Can you please provide me with a working example of clustering in R
(not very complicated) and with an example of a survey in R?

We would appreciate this very much. We need them as soon as possible.



Thanks and Regards,
Aurel Razvan Duica
Programming SME

Infra Integration 
DeVelopment Center of Choice, Cyprus
+357-25-886298 (Desk)
+357-25-886220 (Fax)
AMDOCS > INTEGRATED CUSTOMER MANAGEMENT





This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: creating a new column

2007-05-09 Thread Petr PIKAL
Hi

Without knowing your code, R version and error message it is hard to say 
what is wrong. I think I already answered this or a similar question, but 
nevertheless:

If your data are in data frame

ifelse(mm$censoringTime>mm$survivalTime,mm$survivalTime, mm$censoringTime)

gives you a vector of required values

if you have matrix

ifelse(m[,3]>m[,4],m[,4], m[,3])

gives you the same.

So you need to add it to your existing structure by cbind() or 
data.frame().
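
[Editor's sketch using the column names from the quoted code:]

m <- cbind(m, actualSurvTime = ifelse(m[, "censoringTime"] > m[, "survivalTime"],
                                      m[, "survivalTime"], m[, "censoringTime"]))
# equivalently: pmin(m[, "censoringTime"], m[, "survivalTime"])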

Regards
Petr
[EMAIL PROTECTED]

[EMAIL PROTECTED] wrote on 07.05.2007 16:27:37:

> hi, I would like to create a 6th column "actual surv time" from the 
following data 
> 
>   the condition being
>   if  censoringTime>survivaltime then actual survtime =survival time
>   else actual survtime =censoring time
> 
> the code I used to create the data is
> 
>s=2
>while(s!=0){ n=20
>  m<-matrix(nrow=n,ncol=4)
> colnames(m)=c("treatmentgrp","strata","censoringTime","survivalTime")
> for(i in 1:20) 
m[i,]<-c(sample(c(1,2),1,replace=TRUE),sample(c(1,2),
> 1,replace=TRUE),rexp(1,.007),rexp(1,.002))
> m<-cbind(m,0)
>  m[m[,3]>m[,4],5]<-1
>  colnames(m)[5]<-"censoring"
>   print(m)
>s=s-1
>   treatmentgrp strata censoringTime survivalTime censoring
>[1,] 1  1   1.012159  1137.80922 0
>[2,] 2  2  32.971439 247.21786 0
>[3,] 2  1  85.758253 797.04949 0
>[4,] 1  1  16.999171  78.92309 0
>[5,] 2  1 272.909896 298.21483 0
>[6,] 1  2 138.230629 935.96765 0
>[7,] 2  2  91.529859 141.08405 0
> 
> 
> I keep getting an error message when I try to create the 6th column
> 
> 
> 
> 
> 
> -
> 
> 
>[[alternative HTML version deleted]]
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to read several text files at once!

2007-05-09 Thread Faramarzi Monireh

Dear R users,
I am a beginner in R. I have 506 text files (data frame) in one folder namely 
DATA. The files are called A1 to A253 (253 files) and B1 to B253 (another 253 
files). Each file has two columns; V1 (row number)
and V2 (the value for each row name). Now I would like to add the values of
V2 in each A-file with its relative value in B-file and save it as a
new data frame named as C (e.g. C1 with V1 (row number) and V2
(A1$V2+B1$V2) ). Therefore, at the end I will have 253 C files 
(C1 to C253). I also would like to sum a number of the C files with each other 
(e.g. C1+ C2+ …+C50) and save as a new file like C_sum.

I already tried to write a short script to do all together but it did not
work. I only was able to do for each C file separately. The main problem
is that I do not know how to read several text files and how to use those
files to make C files and afterwards C-sum files. I would be grateful if 
somebody can help me to write a short script to do all together.
Thank you very much in advance for your cooperation,
Monireh

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] regarding SsfPack in Ox

2007-05-09 Thread gyadav

Hi All R Users,

I have code in the Ox language using its SsfPack. Is there a similar 
package in R which provides all the same functionality?

Any pointers would be helpful to me.

Thanks in advance

-  Regards,

  \\\|///
   \\   --   //
(  o   o  )
oOOo-(_)-oOOo
|
| Gaurav Yadav
| Assistant Manager, CCIL, Mumbai (India)
| Mob: +919821286118 Email: [EMAIL PROTECTED]
| Man is made by his belief, as He believes, so He is.
|   --- Bhagavad Gita 
|___Oooo
 oooO(  )
 (  )   )   /
  \   ((_/
\_ )



DISCLAIMER AND CONFIDENTIALITY CAUTION:\ \ This message and ...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] S-plus coding

2007-05-09 Thread Martin Maechler
> "Rolf" == Rolf Turner <[EMAIL PROTECTED]>
> on Fri, 4 May 2007 08:02:21 -0300 (ADT) writes:

Rolf> T. Kounouni wrote:
>> Hi, how can i use data to forecast next time period value, if data
>> has been influenced by a change in legislation?  thank you.

Rolf> Well, you could use chicken entrails.

Rolf> cheers,

Rolf> Rolf Turner
Rolf> [EMAIL PROTECTED]

Rolf> P. S.  What has this question got to do with ``S-plus coding''?

Yes, indeed.
And even then, what would "S-plus coding" have to do with R-help?

Please Mr. Kounouni, do read the posting guide *before* 
*any* further posting to R-help!

Rolf> PLEASE do read the posting guide
Rolf> http://www.R-project.org/posting-guide.html
  ^^^

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing a list of Objects

2007-05-09 Thread Gabor Csardi
Hmmm,

rm(list=a)

is what you want.

Gabor

On Wed, May 09, 2007 at 10:29:05AM +0530, Patnaik, Tirthankar  wrote:
> Hi,
>   I have a simple beginner's question on removing a list of
> objects. Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my
> workspace. I'd like to remove these without using rm five times.
> 
> So I write. 
> 
> > a <- list(paste("C243.Daily",sep="",1:5))
> 
> > rm(a)
> 
> Obviously this wouldn't work, as it would only remove the object a.
> 
> But is there any way I could do this, like on the lines of a UNIX `
> (grave-accent)
> 
> Something like
> 
> Prompt> rm `find . -type f -name "foo"`
> 
> TIA and best,
> -Tir
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Csardi Gabor <[EMAIL PROTECTED]>MTA RMKI, ELTE TTK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Removing a list of Objects

2007-05-09 Thread Patnaik, Tirthankar
Hi,
I have a simple beginner's question on removing a list of
objects. Say I have objects C243.Daily1, C243.Daily2...C243.Daily5 in my
workspace. I'd like to remove these without using rm five times.

So I write. 

> a <- list(paste("C243.Daily",sep="",1:5))

> rm(a)

Obviously this wouldn't work, as it would only remove the object a.

But is there any way I could do this, like on the lines of a UNIX `
(grave-accent)

Something like

Prompt> rm `find . -type f -name "foo"`

TIA and best,
-Tir

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.