Re: [R] Rounding problem R vs Excel

2003-06-04 Thread Duncan Murdoch
On Tue, 3 Jun 2003 09:49:44 +0100, you wrote in message
[EMAIL PROTECTED]:

Duncan
If the numbers are not represented exactly, how does R resolve problems like
the one below? Is there something that needs to be set up in the R
environment like the number of significant figures?

> x <- 4.145*100+0.5
> x
[1] 415
> floor(x)
[1] 414

R doesn't do anything to resolve this problem; it's just the way the
IEEE standard floating point formats work.  In Excel 97, 4.145*100+0.5
is exactly equal to 415; I would guess this is either because they use
a binary coded decimal format instead of the IEEE floating point
types, or they round results internally in some way.  R doesn't
support BCD formats, and doesn't do tricky rounding behind your back.
You get what you ask for.

If you want the calculation above to give you exactly 415, the
standard workaround in languages without BCD formats is to work in
some decimal multiple of the actual numbers you're interested in, e.g.
10000.  Then you would store 4.145 as 41450, multiply by 100 (i.e.
100*10000) and divide by 10000 to give 4145000, and add 5000, to give
4150000.  All of these numbers are exactly representable in double
precision floating point types, because they are all integers with
fewer than 53 bits in their binary representations.  
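
A minimal sketch of the scaled-integer workaround just described, in base
R (editorial illustration; the scale of 10000 matches the numbers above):

x <- 4.145 * 100 + 0.5
print(x, digits = 17)                  # 414.99999999999994, not exactly 415
floor(x)                               # 414

scale <- 10000                         # work in units of 1/10000
x.scaled <- (41450 * (100 * scale)) / scale + 5000  # every step an exact integer
x.scaled                               # 4150000
floor(x.scaled / scale)                # 415, as wanted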

Doing this means you need to change the definitions of *, /, ^, and
lots of other low level functions, but + and - work in the usual way.
It might be an interesting project to write a package that does all of
this.

Duncan Murdoch

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] (no subject)

2003-06-04 Thread Roger Koenker
Although not immediately relevant to the present inquiry this might still be
an opportune moment to mention again an item that is at the top of
my R-wish-list:

The SparseM function slm (for sparse lm) is well suited for problems like this in which
the design matrix is quite sparse, so it would be great to have a version of
model.matrix that would return a matrix in one of the formats of SparseM.  I've
hesitated to dig into this having looked a bit at the C, but if some brave soul
were looking for a nice well-defined summer amusement...

One can handle this by hand in specific instances, but it would be great to have
a more automated way to do this via the formula approach.
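
For instance, a rough sketch of the by-hand route (the SparseM function
names as.matrix.csr() and slm.fit() are assumptions taken from its
documentation, not a tested recipe):

library(SparseM)
set.seed(1)
g <- factor(sample(1:500, 5000, replace = TRUE))   # a factor with many levels
y <- rnorm(5000)
X  <- model.matrix(~ g)        # dense design matrix -- the memory bottleneck
Xs <- as.matrix.csr(X)         # coerce to SparseM's compressed sparse row form
fit <- slm.fit(Xs, y)          # sparse least-squares fit
# The wish above is a formula interface that returns Xs directly,
# without ever forming the dense X.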

I think it is not uncommon in large regression problems that this would significantly
expand the range of applications that could be handled by R, especially on smaller
machines.  Regression/anova problems need only store the non-zero elements of X
and computational effort on such problems should grow roughly proportionally
to the number of these non-zero elements.

Roger

url:www.econ.uiuc.edu   Roger Koenker   Dept. of Economics UCL,
email   [EMAIL PROTECTED]   Department of Economics Drayton House,
vox:217-333-4558University of Illinois  30 Gordon St,
fax:217-244-6678Champaign, IL 61820 London,WC1H 0AX, UK
vox:020-7679-5838

On Tue, 3 Jun 2003, Ida Scheel wrote:

 It is over 3000 levels. I have enough RAM to do it, and I have run
 smaller examples (smaller datasets) with the same code which works.

 Peter Dalgaard BSA wrote:

 Ida Scheel [EMAIL PROTECTED] writes:
 
 
 
 Hei,
 
 I am trying to fit an ANOVA-model by lm. I get the error-message
 
 Error in lm.fit(x, y, offset = offset, ...) :
 negative length vectors are not allowed
 
 which I don't understand. My data looks fine, but one factor has
 extremely many levels. Does anyone have a tip?
 
 
 
 Not really. It sounds like it might be a bug. How many levels? Can you
 generate a simple example (possibly with simulated data) showing the
 same behaviour?
 
 
 


   [[alternate HTML version deleted]]

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: Rtips (was Re: [R] ? building a database with a the great examples

2003-06-04 Thread Paul E. Johnson
I think there is a lot of merit in this idea.  I think there is a big 
question about authentication and protection of Wikis from vandalism.

I've set up Wikis for other projects that I started after Rtips.  I have 
not seen your Wiki software before,  but it looks pretty nice.  I see it 
does have diff support, so old pages can be restored, yes?  But it 
doesn't authenticate users, which causes me some concern.  (I understand 
the Wiki philosophy that we should not be concerned about 
authentication, but I've never bought into it all the way).

I have a TWiki site here:

http://www.ku.edu/cgiwrap/pauljohn/twiki/view

This one I hacked up special to use authentication on some pages so that 
people have to log in before they can edit.

I had not realized before I looked at your page that Wiki 
implementations are customized for document format.  For page sections, 
your Wiki uses

= aHeading =  

but Twiki uses

---+ aHeading

That's kind of a bummer.

Detlef Steuer wrote:

On 02-Jun-2003 Paul E. Johnson wrote:
 

Perhaps you want to start maintaining Rtips itself!  
   

Perhaps it is time to start a wiki for R?
For those not familiar with the idea of wikis look here:
http://www.wikipedia.org/ (incredible wiki encyclopedia)
http://www.wikipedia.org/wiki/WikiWiki (for a description of the mechanisms of
wikiwikis)
I just did a quick hack to set one up:
http://fawn.unibw-hamburg.de/cgi-bin/Rwiki.pl?RwikiHome
Any comments welcome!

If the community uses the site I promise to do what I can to keep it 
running.

Detlef
 



--
Paul E. Johnson   email: [EMAIL PROTECTED]
Dept. of Political Sciencehttp://lark.cc.ukans.edu/~pauljohn
University of Kansas  Office: (785) 864-9086
Lawrence, Kansas 66045FAX: (785) 864-5700
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] S+ style implementation of GAM for R?

2003-06-04 Thread Douglas Beare
Hi,
I've got the R library mgcv for GAM written by Simon Wood which works well
in many instances.  However, over the years I
got attached to the S+ implementation of GAM which allows loess smoothing in
more than 1 dimension as well as spline smoothing.
Has anyone ported the S+ GAM library to R?
Regards,
Doug Beare.
Fisheries Research Services,
Marine Laboratory,
Victoria Road,
Torry,
Aberdeen, UK.
Tel. 44 (0) 1224 295314

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] tseries adf.test

2003-06-04 Thread Pfaff, Bernhard
I have a question regarding the adf.test command in the tseries library.

I have a vector of time series observations (2265 daily log prices
for the
OEX to be exact).  I also have this same data in first-differenced form.  I
want to test both vectors individually for stationarity with an Augmented
Dickey-Fuller test.  I noticed when I use the adf.test command from the
tseries library, the general regression command used incorporates a constant
and a linear trend -- (trend order of 1, I presume).  My specific
questions are as follows: (1) is it possible to alter the function to use a
regression that does not include a linear trend, because (2) it seems to
me that I do not need to detrend if I've already taken first differences.

Thanks in advance for your assistance.
Rick


Hello Rick,

you might find the following link useful:

http://www.econ.uiuc.edu/~econ472/tutorial9.html

Pls note, that one typically follows a testing strategy in order to infer
the characteristics of the time series in question (pure random walk, random
with drift or random walk with drift and deterministic trend). The F-type
test statistics (denoted by phi1, phi2, phi3 in the literature) can be
calculated by making use of anova() and checking against the relevant
critical values of these test statistics.
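
As a rough, editorial sketch (simulated data): the ADF regression can also
be fitted directly with lm(), so the constant and trend terms can be kept
or dropped at will and the nested fits compared with anova():

library(tseries)                             # only for the adf.test() comparison

adf.by.hand <- function(y, k = 1, trend = TRUE) {
  dy     <- diff(y)
  Z      <- embed(dy, k + 1)                 # columns: dy_t, dy_{t-1}, ..., dy_{t-k}
  dy.t   <- Z[, 1]
  dy.lag <- Z[, -1, drop = FALSE]
  y.lag1 <- y[(k + 1):(length(y) - 1)]       # y_{t-1}, aligned with dy_t
  tt     <- seq_along(dy.t)                  # deterministic trend
  if (trend) lm(dy.t ~ y.lag1 + tt + dy.lag) else lm(dy.t ~ y.lag1 + dy.lag)
}

set.seed(1)
logprice <- cumsum(rnorm(2265, sd = 0.01))   # stand-in for the OEX log prices
fit.trend   <- adf.by.hand(logprice, k = 4, trend = TRUE)
fit.notrend <- adf.by.hand(logprice, k = 4, trend = FALSE)
anova(fit.notrend, fit.trend)                # F-type comparison of the two fits
adf.test(diff(logprice))                     # differenced series, for reference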

HTH,
Bernhard


--
If you have received this e-mail in error or wish to read our e-mail 
disclaimer statement and monitoring policy, please refer to 
http://www.drkw.com/disc/email/ or contact the sender.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] S+ style implementation of GAM for R?

2003-06-04 Thread Simon Wood
 I've got the R library mgcv for GAM written by Simon Wood which works well
 in many instances.  However, over the years I
 got attached to the S+ implementation of GAM which allows loess smoothing in
 more than 1 dimension as well as spline smoothing.
 Has anyone ported the S+ GAM library to R?

- I've not come across a port (but note that mgcv does allow
multidimensional smooths, albeit spline based and not loess). 
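
A tiny sketch of such a multidimensional smooth in mgcv (simulated data,
editorial illustration only):

library(mgcv)
set.seed(1)
dat <- data.frame(x1 = runif(200), x2 = runif(200))
dat$y <- sin(3 * dat$x1) + cos(3 * dat$x2) + rnorm(200, sd = 0.2)
fit <- gam(y ~ s(x1, x2), data = dat)   # one 2-dimensional spline smooth
summary(fit)
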
best, Simon
_
 Simon Wood [EMAIL PROTECTED]www.stats.gla.ac.uk/~simon/
  Department of Statistics, University of Glasgow, Glasgow, G12 8QQ
   Direct telephone: (0)141 330 4530  Fax: (0)141 330 4814

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] libraries in .First

2003-06-04 Thread Laurie Sindlinger
Dear all,

   I have a question regarding the .First function. I have included 
help.start() and several libraries in my .First as:

.First <- function() {
help.start(browser = "netscape7")
library(lattice)
library(modreg)
library(splines)
library(MASS)
library(maps)}
The libraries maps and splines do not seem to be available when I start 
R, but I have found if I change the order of the libraries in .First, 
these libraries may be available and others may not. Has anyone had a 
similar problem, or could anyone offer some suggestions? Thank you!

Laurie Sindlinger

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] X11 not available

2003-06-04 Thread f . grignola
Hi,

I just installed R in a Linux box. Unfortunately, I don't have root access and
had to install it in my home directory.
The software runs fine, but I cannot make it print graphics to the screen. If I
type X11 I get: X11 is not available (though it is working for other software).
I noticed that it prints graphs to a default ps file.
Any suggestions about how to get around this?

Many thanks,

FG

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Building an R package under Windows NT

2003-06-04 Thread Benjamin . STABLER
The version of Perl I have is:

  This is perl, version 5.005_02 built for MSWin32-x86-object

  Copyright 1987-1998, Larry Wall

  Binary build 506 provided by ActiveState Tool Corp.
http://www.ActiveState.com
  Built 15:40:37 Oct 27 1998

It was installed by my IS department some time ago.  I will try installing
Perl 5.8 and see if that works.

Since Rcmd build -h does not print a description of the acceptable values
for --docs=, I thought I would look for something similar.  The
--docs=TYPE option for build and the --type=TYPE for Rdconv looked
related so I thought I would give it a try.  Without knowing what the
correct values for --docs=TYPE are, how am I supposed to know that
--docs=TYPE is not the same as --type=TYPE for specifying what types of
help documents to create?  I don't think it is a stretch to see how these
two could be confused.


-Original Message-
From: Prof Brian Ripley [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 03, 2003 12:48 AM
To: STABLER Benjamin
Subject: RE: [R] Building an R package under Windows NT


On Mon, 2 Jun 2003 [EMAIL PROTECTED] wrote:

 Professor Ripley,
 
 I just downloaded and installed the most current tools from your site
 (http://www.stats.ox.ac.uk/pub/Rtools/), I have version 
5.005_02 of Perl,
 and I still get the same error.  Do you think Perl 5.8 would 
*fix* this
 problem?

I have no idea, but I do know that correcting all of *your* errors would
solve the problem.  As that version of Perl was AFAIK never released for
Windows (and has not been available for several years if it was), I think
you need to try to follow the instructions *exactly* (which you have not
done re Perl, which says

  The Windows port of perl5, available via
  http://www.activestate.com/Products/ActivePerl/.
  BEWARE: you do need the *Windows* port and not the Cygwin one.

).
 
 I didn't think the docs=html option would work as I got the html option from
 the Rdconv type=TYPE help since the build help does not list the options.  I
 didn't think to look at the install help.

Why do you think --type (sic) takes the same values as --docs ?


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] libraries in .First

2003-06-04 Thread Prof Brian Ripley
If this is R 1.7.0, the preferred mechanism is options(defaultPackages), 
as in the following (from the examples in .First)

 # Example of Rprofile.site
 local({
   old <- getOption("defaultPackages")
   options(defaultPackages = c(old, "MASS"))
 })

.First is run too early to be loading packages.

Oh, and the preferred way to set the browser is environmental variable
R_BROWSER, usually in R_HOME/etc/Renviron{.site}.

On Tue, 3 Jun 2003, Laurie Sindlinger wrote:

 Dear all,
 
 I have a question regarding the .First function. I have included 
 help.start() and several libraries in my .First as:
 
  .First <- function() {
  help.start(browser = "netscape7")
 library(lattice)
 library(modreg)
 library(splines)
 library(MASS)
 library(maps)}
 
 The libraries maps and splines do not seem to be available when I start 
 R, but I have found if I change the order of the libraries in .First, 
 these libraries may be available and others may not. Has anyone had a 
 similar problem, or could anyone offer some suggestions? Thank you!
 
 Laurie Sindlinger
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Building an R package under Windows NT

2003-06-04 Thread Benjamin . STABLER
The emme2 package is not in the $RHOME/library directory.  My sh.exe is the
most current one from the tools available at
http://www.stats.ox.ac.uk/pub/Rtools/.  I searched for sh.exe and that is the
only one I've got on my system.


-Original Message-
From: Duncan Murdoch [mailto:[EMAIL PROTECTED]
Sent: Monday, June 02, 2003 5:09 PM
To: STABLER Benjamin
Cc: [EMAIL PROTECTED]
Subject: Re: [R] Building an R package under Windows NT


On Mon, 2 Jun 2003 14:08:55 -0700 , you wrote:

Thanks for the suggestions.  

1) I fixed the zip.exe PATH issue.
2) I removed unnecessary quotes around C:\Program Files\R
3) I ran Rcmd install emme2 with the following result:

Where is emme2?  You shouldn't keep the source in the
$RHOME/library/emme2 directory, which is where it will be installed.

-- Making package emme2 
  adding build stamp to DESCRIPTION
  installing R files
 175373 [main] sh 352 proc_subproc: Couldn't duplicate my handle 0xA4 for
pid 1736008448, Win32 error 6

This is an error in sh.exe.  Are you getting the one from the R
toolset, or possibly another one?

Duncan Murdoch


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] kmeans

2003-06-04 Thread Luis Miguel Almeida da Silva
Dear helpers
 
I was working with kmeans from package mva and found some strange situations. When I 
run several times the kmeans algorithm with the same dataset I get the same partition. 
I simulated a little example with 6 observations and run kmeans giving the centers and 
making just one iteration. I expected that the algorithm just allocated the 
observations to the nearest center, but I think this is not the result that I get...
 
Here are the simulated data
 
> dados <- matrix(c(-1,0,2,2.5,7,9,0,3,0,6,1,4),6,2)
> dados
     [,1] [,2]
[1,] -1.0    0
[2,]  0.0    3
[3,]  2.0    0
[4,]  2.5    6
[5,]  7.0    1
[6,]  9.0    4
> plot(dados)
> dados <- matrix(c(-1,0,2,2.5,7,9,0,5,0,6,1,4),6,2)
> plot(dados)
> A <- kmeans(dados, dados[c(3,4),], 1)
> A
$cluster
[1] 1 1 1 1 2 2
$centers
   [,1] [,2]
1 0.875 2.75
2 8.000 2.50
$withinss
[1] 38.9375  6.5000
$size
[1] 4 2
 
 
Any hints?
 
Thanks a lot 
 
Luis Silva

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Update VR_7.1-6

2003-06-04 Thread Dirk Enzmann
The update of VR by downloading VR_7.1-6.zip and using install.packages
(from local zip files) fails with the following error message:

Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file `VR/DESCRIPTION'

Other packages can be installed without problems, except of dse_2003.4-1
with a similar error message.

Why?

Operating System: Windows NT (4.0)
R Version: R 1.7.0

*
Dr. Dirk Enzmann
Criminological Research Institute of Lower Saxony
Luetzerodestr. 9
D-30161 Hannover
Germany

phone: +49-511-348.36.32
fax:   +49-511-348.36.10
email: [EMAIL PROTECTED]

http://www.kfn.de

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Update VR_7.1-6

2003-06-04 Thread Thomas Lumley
On Tue, 3 Jun 2003, Dirk Enzmann wrote:

 The update of VR by downloading VR_7.1-6.zip and using install.packages
 (from local zip files) fails with the following error message:

 Error in file(file, "r") : unable to open connection
 In addition: Warning message:
 cannot open file `VR/DESCRIPTION'

 Other packages can be installed without problems, except of dse_2003.4-1
 with a similar error message.


Because there is a bug in the handling of package bundles in R 1.7.0.  VR
and dse are bundles. R1.7.1 will be coming out soon and fixes this. Or you
can manually unzip the bundle in the appropriate place in the library
directory.

-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Update VR_7.1-6

2003-06-04 Thread Gavin Simpson
Hi Dirk,

dse and VR are bundles of packages.  There is a bug in rw1070 in the way 
it installs bundles.

Download the zip file directly and unzip it to the library directory in 
wherever R is installed on your system.

This has been discussed recently (this morning) on the list.  Search the 
archives to read more.

G

Dirk Enzmann wrote:
The update of VR by downloading VR_7.1-6.zip and using install.packages
(from local zip files) fails with the following error message:
Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file `VR/DESCRIPTION'
Other packages can be installed without problems, except of dse_2003.4-1
with a similar error message.
Why?

Operating System: Windows NT (4.0)
R Version: R 1.7.0
*
Dr. Dirk Enzmann
Criminological Research Institute of Lower Saxony
Luetzerodestr. 9
D-30161 Hannover
Germany
phone: +49-511-348.36.32
fax:   +49-511-348.36.10
email: [EMAIL PROTECTED]
http://www.kfn.de

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help

--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson [T] +44 (0)20 7679 5522
ENSIS Research Fellow [F] +44 (0)20 7679 7565
ENSIS Ltd.  ECRC [E] [EMAIL PROTECTED]
UCL Department of Geography   [W] http://www.ucl.ac.uk/~ucfagls/cv/
26 Bedford Way[W] http://www.ucl.ac.uk/~ucfagls/
London.  WC1H 0AP.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Update VR_7.1-6

2003-06-04 Thread Prof Brian Ripley
I have explained that on R-help earlier today (and when VR_7.1-4 was 
announced).  Please look back a few hours in the R-help postings.

On Tue, 3 Jun 2003, Dirk Enzmann wrote:

 The update of VR by downloading VR_7.1-6.zip and using install.packages
 (from local zip files) fails with the following error message:
 
 Error in file(file, "r") : unable to open connection
 In addition: Warning message:
 cannot open file `VR/DESCRIPTION'
 
 Other packages can be installed without problems, except of dse_2003.4-1
 with a similar error message.
 
 Why?

Because VR and dse are bundles and someone broke the installation of
bundles in R 1.7.0 on Windows.

You can download and unzip the zip files in rw1070/library.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] kmeans

2003-06-04 Thread Prof Brian Ripley
On Tue, 3 Jun 2003, Luis Miguel Almeida da Silva wrote:

  I was working with kmeans from package mva and found some strange
 situations. When I run several times the kmeans algorithm with the same
 dataset I get the same partition. 

Why does that surprise you?

 I simulated a little example with 6
 observations and run kmeans giving the centers and making just one
 iteration. I expected that the algorithm just allocated the observations
 to the nearest center but think this is not the result that I get...

That's not what the documentation says it does:

 The data given by `x' is clustered by the k-means algorithm. When
 this terminates, all cluster centres are at the mean of their
 Voronoi sets (the set of data points which are nearest to the
 cluster centre).

which is true in your example.  It has run one iteration of re-allocation; 
as you can see by reading the source code or the reference.

[...]

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] coefficient of logistic regression

2003-06-04 Thread Thomas W Blackwell
Ahmet  -

In a logistic regression model, fitted probabilities make
sense for individual cases (rows in the data set), as well
as for future cases (predictions) for which no outcome
(success or failure) has been observed yet.  Fitted
probabilities are calculated from the matrix formula:

  Pr[success]  =  exp(X %*% beta) / (1 + exp(X %*% beta))

where  X  is an [n x (p+1)] matrix, containing all p predictor
variables as columns, preceded by a column of 1s for the
intercept, and  beta  is the [(p+1) x 1] vector of logistic
regression coefficients.

One can interpret the sign and the magnitude of an individual
regression coefficient by saying that an increase of 1 unit in
predictor variable [i] will increase or decrease the odds of
success by a multiplier of  exp(beta[i]).  When  beta[i] > 0
the odds increase, because  exp(beta[i]) > 1,  and when
beta[i] < 0  the odds decrease, because  exp(beta[i]) < 1.
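
For concreteness, a small simulated example of the two quantities described
above (editorial sketch, not from the original exchange):

set.seed(42)
x1  <- rnorm(200); x2 <- rnorm(200)
y   <- rbinom(200, 1, plogis(-0.5 + 1.2 * x1 - 0.8 * x2))
fit <- glm(y ~ x1 + x2, family = binomial)

fitted(fit)[1:5]    # fitted Pr[success] for individual cases
exp(coef(fit))      # multiplicative change in the odds per 1-unit increase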

I hope this explanation helps.

-  tom blackwell  -  u michigan medical school  -  ann arbor  -

On Tue, 3 Jun 2003, orkun wrote:

 Hello

 in logistic regression,
 I want to know whether it is possible to get probability values for each
 predictor by
 using the following formula for each predictor one by one (keeping constant
 the others)
   exp(coef)/(1+exp(coef)) 

 thanks in advance
 Ahmet Temiz

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] rw1070 and package bundles; was: Update VR_7.1-6

2003-06-04 Thread Uwe Ligges
Prof Brian Ripley wrote:
I have explained that on R-help earlier today (and when VR_7.1-4 was 
announced).  Please look back a few hours in the R-help postings.

On Tue, 3 Jun 2003, Dirk Enzmann wrote:


The update of VR by downloading VR_7.1-6.zip and using install.packages
(from local zip files) fails with the following error message:
Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file `VR/DESCRIPTION'
Other packages can be installed without problems, except of dse_2003.4-1
with a similar error message.
Why?


Because VR and dse are bundles and someone broke the installation of
bundles in R 1.7.0 on Windows.
You can download and unzip the zip files in rw1070/library.


The CRAN maintainers have also got the first messages from users who didn't
know about this bug. In order to avoid a huge amount of further questions and
bug reports related to these problems, I decided to move the two package
bundles from CRAN/bin/windows/contrib/1.7 to a subfolder until R-1.7.1
is released.
Please read CRAN/bin/windows/contrib/1.7/ReadMe for details.

The change presumably will show up on CRAN master within 24 hours.

Uwe Ligges

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Building an R package under Windows NT

2003-06-04 Thread Benjamin . STABLER
I finally figured out the problem.  I went ahead and installed Perl 5.8 but
that didn't do it.  The problem was that an existing program's directory was
earlier in the PATH and so one (or more) of the components of the Rcmd
INSTALL was an older version.  It wasn't zip since I renamed the old zip.
Anyway, thanks for your help.

Regards,
Ben Stabler

-Original Message-
From: Prof Brian Ripley [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 03, 2003 12:48 AM
To: STABLER Benjamin
Subject: RE: [R] Building an R package under Windows NT


On Mon, 2 Jun 2003 [EMAIL PROTECTED] wrote:

 Professor Ripley,
 
 I just downloaded and installed the most current tools from your site
 (http://www.stats.ox.ac.uk/pub/Rtools/), I have version 
5.005_02 of Perl,
 and I still get the same error.  Do you think Perl 5.8 would 
*fix* this
 problem?

I have no idea, but I do know that correcting all of *your* errors would
solve the problem.  As that version of Perl was AFAIK never released for
Windows (and has not been available for several years if it was), I think
you need to try to follow the instructions *exactly* (which you have not
done re Perl, which says

  The Windows port of perl5, available via
  http://www.activestate.com/Products/ActivePerl/.
  BEWARE: you do need the *Windows* port and not the Cygwin one.

).
 
 I didn't think the docs=html option would work as I got the html option from
 the Rdconv type=TYPE help since the build help does not list the options.  I
 didn't think to look at the install help.

Why do you think --type (sic) takes the same values as --docs ?


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] (no subject)

2003-06-04 Thread Gilda Garibotti
Hi,
I would like to know if it is possible to get printed output while a loop is taking 
place.
Example:
for(i in 1:10){
 print(i)
 some long process
}

This will print the values of i only after the loop is finished; what I would like is 
to 
see them when the process enters the i-th iteration to keep track of how the 
program is running.

Thank you,
Gilda

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] (no subject)

2003-06-04 Thread Peter Dalgaard BSA
Gilda Garibotti [EMAIL PROTECTED] writes:

 Hi,
 I would like to know if it is possible to get printed output while a loop is taking 
 place.
 Example:
 for(i in 1:10){
  print(i)
  some long process
 }
 
 This will print the values of i only after the loop is finished, what I would like 
 is to 
 see them when the process enters the i-th iteration to keep track of how the 
 program is running.

Windows, right? (This is system dependent) There's a menu item
entitled Buffer output or something to that effect. Turn it off and
print() calls display immediately. Lengthy output becomes slower,
though. 

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] coefficient of logistic regression

2003-06-04 Thread John Fox
Dear Ahmet,

Sorry for the slow response, but I've been busy all today, coincidentally 
teaching a workshop on logistic regression.

Tom Blackwell sent you a useful suggestion for interpreting coefficients on 
the odds scale. If you want to trace out the partial relationship of the 
fitted probability of response to a particular predictor holding others 
constant, you can set the other predictors to typical values and let the 
predictor in question vary over its range, transforming the fitted log-odds 
to the probability scale.
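
A rough sketch of that kind of display using plain predict() (simulated
data and hypothetical variable names; the effects package mentioned below
automates this):

set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- rbinom(200, 1, plogis(-0.5 + 1.2 * d$x1 - 0.8 * d$x2))
fit <- glm(y ~ x1 + x2, family = binomial, data = d)

nd <- data.frame(x1 = seq(min(d$x1), max(d$x1), length = 50),
                 x2 = mean(d$x2))                  # hold x2 at a typical value
p  <- predict(fit, newdata = nd, type = "response")
plot(nd$x1, p, type = "l", xlab = "x1", ylab = "fitted probability")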

You may be interested in my effects package (on CRAN or at 
http://socserv.socsci.mcmaster.ca/jfox/Misc/effects/index.html), which 
makes these kinds of displays for linear and generalized-linear models, 
including those with interactions.

Regards,
 John
At 03:06 PM 6/3/2003 +0300, orkun wrote:
John Fox wrote:

At 11:54 AM 6/3/2003 +0300, orkun wrote:

in logistic regression,
I want to know that it is possible to get probability values of each 
predictors by
using following formula for each predictor one by one (keeping constant 
the others)
 exp(coef)/(1+exp(coef)) 


Dear Ahmet,

This will almost surely give you nonsense, since it produces a fitted 
probability ignoring the constant in the model (assuming that there is 
one), setting other predictors to 0 and the predictor in question to 1. 
What is it that you want to do?

I hope that this helps,
 John

thank you

Say, I just want to find each predictor's particular effect on dependent 
variables.
Actual model is to prepare landslide susceptibility map on GIS. So  I want 
to know
what the effect as probability value comes from each predictor. For 
instane what is the effect
of  slope on landslide susceptibility. Should I keep others constant ?

kind regards




-
John Fox
Department of Sociology
McMaster University
Hamilton, Ontario, Canada L8S 4M4
email: [EMAIL PROTECTED]
phone: 905-525-9140x23604
web: www.socsci.mcmaster.ca/jfox
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Question about looking up names

2003-06-04 Thread Ross Boylan
I think I now understand how R looks up names.  Could anyone tell me if
I have this right?

First it looks up the nested environments created by lexical scoping.
Then, if it gets to the top (.GlobalEnv) it also looks through the list
of things that have been attached.

It never looks in the call stack unless you explicitly ask it to, or
mess with the environment frames.
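
A small sketch of the distinction:

y <- 10
f <- function() y                      # free variable: lexical lookup
g <- function() { y <- 99; f() }
g()                                    # 10 -- the call stack is not searched

f2 <- function() get("y", envir = parent.frame())
g2 <- function() { y <- 99; f2() }
g2()                                   # 99 -- only because we asked explicitly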

The reason I ask is that it's not entirely clear to me from the R
Language Definition how these 3 search spaces (environments/lexical
scoping; call stack/dynamic scoping; attach/search list) are related. 
For example the discussion of 3.5.3 (the call stack) observes that
dynamic scoping contradicts the default scoping rules in R.  I spent
some time trying to figure out how it could do both, before deciding it
doesn't.  I suppose the implicit corollary of the contradiction referred
to in 3.5.3--so we don't do that and you must intervene to achieve
dynamic scoping--was obvious to the authors.  It just wasn't obvious to
me.  Since I'm still not sure, I thought I'd check.

Thanks.
-- 
Ross Boylan  wk: (415) 502-4031
530 Parnassus Avenue (Library) rm 115-4  [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   hm: (415) 550-1062
University of California, San Francisco
San Francisco, CA 94143-0840

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Question about looking up names

2003-06-04 Thread Thomas Lumley
On 3 Jun 2003, Ross Boylan wrote:

 I think I now understand how R looks up names.  Could anyone tell me if
 I have this right?

 First it looks up the nested environments created by lexical scoping.
 Then, if it gets to the top (.GlobalEnv) it also looks through the list
 of things that have been attached.

 It never looks in the call stack unless you explicitly ask it to, or
 mess with the environment frames.

 The reason I ask is that it's not entirely clear to me from the R
 Language Definition how these 3 search spaces (environments/lexical
 scoping; call stack/dynamic scoping; attach/search list) are related.
 For example the discussion of 3.5.3 (the call stack) observes that
 dynamic scoping contradicts the default scoping rules in R.  I spent
 some time trying to figure out how it could do both, before deciding it
 doesn't.  I suppose the implicit corollary of the contradiction referred
 to in 3.5.3--so we don't do that and you must intervene to achieve
 dynamic scoping--was obvious to the authors.  It just wasn't obvious to
 me.  Since I'm still not sure, I thought I'd check.


Yes.


-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Logistic regression problem: propensity score matching

2003-06-04 Thread Paul
Hello all.

I am doing one part of an evaluation of a mandatory welfare-to-work 
programme in the UK.
As with all evaluations, the problem is to determine what would have 
happened if the initiative had not taken place.
In our case, we have a number of pilot areas and no possibility of 
random assignment.
Therefore we have been given control areas.
My problem is to select for survey individuals in the control areas who 
match as closely as possible the randomly selected sample of action area 
participants.
As I understand the methodology, the procedure is to run a logistic 
regression to determine the odds of a case being in the sample, across 
both action and control areas, and then choose for control sample the 
control area individual whose odds of being in the sample are closest to 
an actual sample member.

So far, I have followed the multinomial logistic regression example in 
Fox's Companion to Applied Regression.
Firstly, I would like to know if the predict() is producing odds ratios 
(or probabilities) for being in the sample, which is what I am aiming 
for. Secondly, how do I get rownames (my unique identifier) into the 
output from predict() - my input may be faulty somehow and the wrong 
rownames being picked up - as I need to export back to database to sort 
and match in names, addresses and phone numbers for my selected samples.

My code is as follows:
londonpsm <- sqlFetch(channel, "London_NW_london_pilots_elig",
rownames="ORCID")
attach(londonpsm)
mod.multinom <- multinom(sample ~ AGE + DISABLED + GENDER + ETHCODE +
NDYPTOT + NDLTUTOT + LOPTYPE)
lonoutput <- predict(mod.multinom, sample, type='probs')
london2 <- data.frame(lonoutput)

The Logistic regression seems to work, although summary() says it is 
not a matrix.
The output looks like odds ratios, but I would like to know whether this 
is so.

Thank you
Paul Bivand
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Question about looking up names

2003-06-04 Thread Ross Boylan
On Tue, 2003-06-03 at 16:34, Robert Gentleman wrote:

 Also, note that you can get the effect of lexical scope by doing
 things like
 
Do you mean you can get the effects of dynamic scope?

  f <- function(x) x+y
  e1 <- new.env()
  assign("y", 10, env=e1)
  environment(f) <- e1
 
 #now like lexical scope; you can futz with f's environment, assigning,
 # modifying as you like
 


P.S. Thanks to everyone who responded.  So fast!

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Rounding problem R vs Excel

2003-06-04 Thread Marc Schwartz
On Tue, 2003-06-03 at 08:32, Duncan Murdoch wrote: 
 On Tue, 3 Jun 2003 09:49:44 +0100, you wrote in message
 [EMAIL PROTECTED]:
 
 Duncan
  If the numbers are not represented exactly, how does R resolve problems like
 the one below? Is there something that needs to be set up in the R
 environment like the number of significant figures?
 
  > x <- 4.145*100+0.5
  > x
  [1] 415
  > floor(x)
  [1] 414
 
 R doesn't do anything to resolve this problem; it's just the way the
 IEEE standard floating point formats work.  In Excel 97, 4.145*100+0.5
 is exactly equal to 415; I would guess this is either because they use
 a binary coded decimal format instead of the IEEE floating point
 types, or they round results internally in some way.  R doesn't
 support BCD formats, and doesn't do tricky rounding behind your back.
 You get what you ask for.
 
 If you want the calculation above to give you exactly 415, the
 standard workaround in languages without BCD formats is to work in
 some decimal multiple of the actual numbers you're interested in, e.g.
  10000.  Then you would store 4.145 as 41450, multiply by 100 (i.e.
  100*10000) and divide by 10000 to give 4145000, and add 5000, to give
  4150000.  All of these numbers are exactly representable in double
 precision floating point types, because they are all integers with
 fewer than 53 bits in their binary representations.  
 
 Doing this means you need to change the definitions of *, /, ^, and
 lots of other low level functions, but + and - work in the usual way.
 It might be an interesting project to write a package that does all of
 this.
 
 Duncan Murdoch


In Excel, the IEEE standard (754) is used to internally represent
floats. A MS-KB article on this is here:

http://support.microsoft.com/default.aspx?scid=kb;[LN];214118

Another, more detailed, is here:

http://support.microsoft.com/default.aspx?scid=kb;EN-US;78113


What is curious about this situation, and apropos to Prof. Ripley's
comments about the difference between internal representation, rounding
and displayed values, is the following information. Note how the results
of cell calculations differ between Excel, OpenOffice.org Calc and
Gnumeric. In each case, I use a format setting of 20 digits after the
decimal place with scientific notation. This is best read with a fixed
width font.


OOo Calc 1.0.2 and 1.1 Beta2:

Cell Formula              Value
= 4.145 * 100 + 0.5       4.15000000000000000000E+02
= 0.5 - 0.4 - 0.1         0.00000000000000000000E+00
=(0.5 - 0.4 - 0.1)        0.00000000000000000000E+00


Excel 2002 (XP):

Cell Formula              Value
= 4.145 * 100 + 0.5       4.15000000000000000000E+02
= 0.5 - 0.4 - 0.1         0.00000000000000000000E+00
=(0.5 - 0.4 - 0.1)        -2.77555756156289000000E-17


Gnumeric 1.0.12:

Cell Formula              Value
= 4.145 * 100 + 0.5       +4.14999999999999943157E+02
= 0.5 - 0.4 - 0.1         -2.77555756156289135106E-17
*Gnumeric does not appear to allow the surrounding parens.


For comparison, R 1.7.1 Beta under RH 9 and WinXP:

> print(4.145 * 100 + 0.5, digits = 20)
[1] 414.99999999999994
> formatC(4.145 * 100 + 0.5, format = "E", digits = 20)
[1] "4.14999999999999943157E+02"

> print(0.5 - 0.4 - 0.1, digits = 20)
[1] -2.775557561562891e-17
> formatC(0.5 - 0.4 - 0.1, format = "E", digits = 20)
[1] "-2.77555756156289135106E-17"


What is interesting is the change in the displayed value in Excel when
the second formula is surrounded by parens (which I found purely by
accident). This would suggest that there may be something going on in
the parsing of the cell formula that affects the calculation and
displayed value. Also note the precision of the resultant number.

Presuming that each of the spreadsheet programs are using IEEE standard
internal representation, there are clearly differences in the way in
which each visually displays the values, both by default and when
explicitly formatted.


Using the following cell formula:

= 1.333 + 1.225 - 1.333 - 1.225

there is an indication in the second MS-KB article above, that Excel 97
introduced an "optimization" dealing with results near zero. The
example above when performed in Excel 97 and later "correctly displays" 0
or 0.000E+00 in scientific notation.

whereas 

Rather than displaying 0, Excel 95 displays -2.22044604925031E-16.

The terms "optimization" and "correctly displays" are an interesting
choice of words.


I have a post to one of the OOo forums regarding my inability to
replicate the IEEE precision issues in Calc under any circumstances
using the three formulas and any numeric formatting options. It may be
that the OOo folks copied the MS Excel optimization with no override.


FYI...the IEEE has a reference site for the standard here:

http://grouper.ieee.org/groups/754/


HTH,

Marc Schwartz

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] X11 not available

2003-06-04 Thread Ott Toomet
Hello,

 | From: [EMAIL PROTECTED]
 | Date: Tue, 03 Jun 2003 16:23:20 +
 | 
 | Hi,
 | 
 | I just installed R in a Linux box. Unfortunately, I don't have root access and
 | had to install it in my home directory.
 | The software runs fine, but I cannot make it print graphics to the screen. If I
 | type X11 I get: X11 is not available (though it is working for other software).
 | I noticed that it prints graphs to a default ps file.
 | Any suggestions about how to get around this?
 | 
 | Many thanks,
 | 
 | FG

perhaps you have compiled it without x11 support.  The reason could be
missing XFree86-devel.rpm or something similar.  Have you checked what the
configuration script said?
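
One quick check from within R (sketch):

capabilities("X11")   # FALSE if this build was compiled without X11 support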

Just a suggestion

Ott

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Your Message Contained a Potential Virus

2003-06-04 Thread Peter Dalgaard BSA
[EMAIL PROTECTED] writes:

 Result: Virus Detected
 Virus Name: [EMAIL PROTECTED]
 File Attachment: screensaver.scr
 Attachment Status: deleted
 
 --- Original message information follows ---
 
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Date: Tue, 3 Jun 2003 21:38:00 --0400
 Subject: Re: Application
 Received: (from HZHANG2-LAP [24.74.137.211])
  by mskavs1.mskcc.org (SAVSMTP 3.1.1.32) with SMTP id M2003060321303704373
  for [EMAIL PROTECTED]; Tue, 03 Jun 2003 21:30:38 -0400

Grrr Well they should go talk to mr./ms. Zhang about the virus on
his/her laptop shouldn't they? In any case, as far as I understand the
mail standards, delivery errors should go to the contents of the
Sender: field, not From:, exactly to avoid bothering entire mailing
lists.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] plot rpart tree's from list object

2003-06-04 Thread Christian Schulz
Hello,

I want the post() plots from a list object holding
18 rpart trees; I get no error, but no files are produced either.
Perhaps I should be using assign!?

for (i in 1:length(treeList)) {
post(treeList[[i]], filename=paste("Tree", i, sep=".ps"), title="Arbeitszufriedenheit",
     digits=getOption("digits") - 0, use.n=TRUE)
}

many thanks for help,
christian

[[alternate HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] (no subject)

2003-06-04 Thread Paul Lemmens
Hi Peter,

--On Wednesday 4 June 2003 0:16 +0200 Peter Dalgaard BSA 
[EMAIL PROTECTED] wrote:

Gilda Garibotti [EMAIL PROTECTED] writes:

Hi,
I would like to know if it is possible to get printed output while a
loop is taking place. Example:
for(i in 1:10){
 print(i)
 some long process
}
This will print the values of i only after the loop is finished, what I
would like is to  see them when the process enters the i-th iteration to
keep track of how the  program is running.
Windows, right? (This is system dependent) There's a menu item
entitled Buffer output or something to that effect. Turn it off and
print() calls display immediately. Lengthy output becomes slower,
though.
If you don't want to depend on you (or other people) turning off the
buffering, use something like

cat("this or that"); flush.console()
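
e.g. (sketch):

for (i in 1:10) {
    cat("iteration", i, "\n"); flush.console()
    Sys.sleep(1)                # stand-in for the long-running step
}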

regards,
Paul
--
Paul Lemmens
NICI, University of Nijmegen  ASCII Ribbon Campaign /\
Montessorilaan 3 (B.01.03)Against HTML Mail \ /
NL-6525 HR Nijmegen  X
The Netherlands / \
Phonenumber+31-24-3612648
Fax+31-24-3616066
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Logistic regression problem: propensity score matching

2003-06-04 Thread Prof Brian Ripley
1) Why are you using multinom when this is not a multinomial logistic 
regression?  You could just use a binomial glm.

2) The second argument to predict() is `newdata'.  `sample' is an R 
function, so what did you mean to have there?  I think the predictions 
should be a named vector if `sample' is a data frame.

3) There are many more examples of such things (and more explanation) in 
Venables & Ripley's MASS (the book).
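
A hedged sketch of the binomial-glm route in point 1, reusing the variable
names from the code quoted below (illustrative, untested):

ps.fit <- glm(sample ~ AGE + DISABLED + GENDER + ETHCODE +
                       NDYPTOT + NDLTUTOT + LOPTYPE,
              family = binomial, data = londonpsm)
pscore <- predict(ps.fit, type = "response")   # fitted Pr(in the action-area sample)
head(sort(pscore))                             # named by the ORCID rownames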

On Wed, 4 Jun 2003, Paul Bivand wrote:

 I am doing one part of an evaluation of a mandatory welfare-to-work 
 programme in the UK.
 As with all evaluations, the problem is to determine what would have 
 happened if the initiative had not taken place.
 In our case, we have a number of pilot areas and no possibility of 
 random assignment.
 Therefore we have been given control areas.
 My problem is to select for survey individuals in the control areas who 
 match as closely as possible the randomly selected sample of action area 
 participants.
 As I understand the methodology, the procedure is to run a logistic 
 regression to determine the odds of a case being in the sample, across 
 both action and control areas, and then choose for control sample the 
 control area individual whose odds of being in the sample are closest to 
 an actual sample member.
 
 So far, I have following the multinomial logistic regression example in 
 Fox's Companion to Applied Regression.
 Firstly, I would like to know if the predict() is producing odds ratios 
 (or probabilities) for being in the sample, which is what I am aiming 
 for. 

You asked for `probs', so you got probabilities.

 Secondly, how do I get rownames (my unique identifier) into the 
 output from predict() - my input may be faulty somehow and the wrong 
 rownames being picked up - as I need to export back to database to sort 
 and match in names, addresses and phone numbers for my selected samples.
 
 My code is as follows:
 londonpsm - sqlFetch(channel, London_NW_london_pilots_elig, 
 rownames=ORCID)
 attach(londonpsm)
 mod.multinom - multinom(sample ~ AGE + DISABLED + GENDER + ETHCODE + 
 NDYPTOT + NDLTUTOT + LOPTYPE)
 lonoutput - predict(mod.multinom, sample, type='probs')
 london2 - data.frame(lonoutput)
 
  The Logistic regression seems to work, although summary() says it is 
 not a matrix.

what is `it'?

 The output looks like odds ratios, but I would like to know whether this 
 is so.

No.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: Rtips (was Re: [R] ? building a database with a the great examples

2003-06-04 Thread Detlef Steuer

On 03-Jun-2003 Paul E. Johnson wrote:
 I think there is a lot of merit in this idea.  I think there is a big 
 question about authentication and protection of Wikis from vandalism.

Yes. But I'll do my daily backups, so major vandalism wouldn't be such a
problem. (I hope.)
Personally I don't like to have a list of people who are allowed to edit the
pages. (And I don't like to keep such a list current ...)
Under preferences you can choose to give yourself an identity, but that's not
for authentication.

 
 I've set up Wikis for other projects that I started after Rtips.  I have 
 not seen your Wiki software before,  but it looks pretty nice.  I see it 
 does have diff support, so old pages can be restored, yes?  

Yes! It keeps the diffs for quite some time. So if anyone realizes minor, i.e.
pagewise, vandalism, the content can be rebuilt.

 But it 
 doesn't authenticate users, which causes me some concern.  (I understand 
 the Wiki philosophy that we should not be concerned about 
 authentication, but I've never bought into it all the way).

I think the R community is growing very fast. The work to give
passwords to interested people frightens me. And I would never ask
for such a password personally. I'm willing to write some short note _now_, but
not if I have to wait a day or so to get access. Authentication fits if you
have a well defined group you expect to add contents. 

My position therefore: give it a try. If vandalism turns out to be a problem,
I'll have to think about it.

 
 I have a TWiki site here:
 
 http://www.ku.edu/cgiwrap/pauljohn/twiki/view
 
 This one I hacked up special to use authentication on some pages so that 
 people have to log in before they can edit.
 
 I had not realized before I looked at your page that Wiki 
 implementations are customized for document format.  For page sections, 
 your Wiki uses
 
 = aHeading =  
 
 but Twiki uses
 
 ---+ aHeading
 
 That's kindof a bummer.

Yes. I don't understand the authors of wikis either. I just chose a recommended
one. If there is anything serious against UseModWiki, _now_ would be the
time to switch. UseMod is very easy to set up, and needs few resources.
These two points are very important for me. 


detlef

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Your Message Contained a Potential Virus

2003-06-04 Thread Symantec_AntiVirus_mskavs1
Text: This is an automated message. Please read it carefully.

You should know that your recent email message detailed below, to Memorial 
Sloan-Kettering Cancer Center, was identified as potentially containing a virus.

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]


If we have been able to repair your message, it has been delivered.  If we have not, 
it has been blocked and the recipient(s) of your email have been informed that the 
email has been blocked.

Please note that as a matter of policy certain attachment types are blocked by default 
based on the file extension.  If you have a legitimate file to send which is being 
blocked, please consider renaming the extension or enclosing it within a ZIP file.

Thank You


Viruses found:

--- Scan information follows ---

Result: Virus Detected
Virus Name: [EMAIL PROTECTED]
File Attachment: screensaver.scr
Attachment Status: deleted

--- Original message information follows ---

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: Tue, 3 Jun 2003 21:38:00 --0400
Subject: Re: Application
Received: (from HZHANG2-LAP [24.74.137.211])
 by mskavs1.mskcc.org (SAVSMTP 3.1.1.32) with SMTP id M2003060321303704373
 for [EMAIL PROTECTED]; Tue, 03 Jun 2003 21:30:38 -0400

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Mode of MCMC chain

2003-06-04 Thread Patrik Waldmann
Hello,

are there any functions in R for estimation of the mode of a MCMC-chain?

Best,

Patrik Waldmann
[[alternate HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Strip location and grid colour in Lattice

2003-06-04 Thread Mulholland, Tom
I am probably missing something quite obvious, but any help would be
appreciated. I am continually getting people misreading the lattice plots
because they are expecting the strip (with the factor names in them) to be
below the graph. Is there any way of achieving this?
 
Secondly, from a more personal note I find the grid formed by the axes to be
a bit overpowering and would like to make it a little less bold by changing
it to a grey of some kind. I can't see that the scales options have anything
in them that I could use. I can change the label colours and tick marks,
but then I draw a blank.
 
While I'm on a roll, I find that quite often I have to resort to the at and
label sections of the scales function to get my tickmarks looking OK. This
seems to be when I am producing line graphs with one of the scales being a
date (POSIXct). What is not clear to me is if all POSIXct variables are the
same. The xyplot doco indicates that the at co-ordinates should be native
co-ordinates. Can anyone point me to where in the voluminous documentation
one looks to understand what this means. I have found that on some occasions
the co-ordinates are in seconds (as the documentation on POSIXct states), but
this afternoon I found that the values seemed to be in years, which wasn't a
problem other than I wish I could understand what was actually happening.
 
For the years example, when the data is originally imported the years came
in as integers.
 
 
 
str(rbd)
  `data.frame':   541 obs. of  6 variables:
  $ Year: int  1993 1994 1995 1996 1997 1998 1999 2000 2001 1993 ...
  $ Hosp: Factor w/ 75 levels ALBANY HOSP..,..: 23 23 23 23 23 23
23 23 23 28 ...
  $ Beddays : int  2431 2507 2201 2985 2702 2461 2535 2970 3271 1246 ...
  $ HD  : Factor w/ 21 levels Avon HD.,Bunb..,..: 10 10 10 10 10
10 10 10 10 10 ...
  $ HR  : Factor w/ 6 levels Goldfields-..,..: 3 3 3 3 3 3 3 3 3 3
...
  $ HospCode: int  127 127 127 127 127 127 127 127 127 128 ...
 
Thinking that I needed a date I promptly put
 
 rbd$Year <- as.POSIXct(ISOdate(rbd$Year,6,30))

then onwards and forwards
 
for (h in levels(rbd$HR)){
 HRData <- subset(rbd, HR==h)
 HRData$CommnDesc <- HRData$CommnDesc[,drop=T]
 temp <- c((FormatLabels(levels(HRData$CommnDesc)[1],20)))
 for (j in 2:length(levels(HRData$CommnDesc))){
   temp <- c(temp,FormatLabels(levels(HRData$CommnDesc)[j],20))
  }
 levels(HRData$CommnDesc) <- temp

p1 <- bwplot(Beddays~Year |CommnDesc,HRData,
   panel = panel.linejoin,
   horizontal=F,bty="n",
   as.table=T,
   par.strip.text=list(lines=3.5,cex=0.8,style=1),
   main=paste(h,"Inpatient Beddays"),
   scales=list(x=list(cex=0.8,rot=90,
   at=c(2,6,9),
   labels=c("94","98","01"),col="navy"))
)
print(p1)
savePlot(file=paste(OutputPath,"Inpatient beddays -(lattice) by region",h,
" ",j,sep=""),type="wmf")
}

Of course there are a few things in here that are probably not the right way
to do things, but I tend to be more interested in the output, rather than
whether or not my programming is up to speed. But it has been a little bug
bear of mine about dropping factors when subsetting the data. I've noticed
subset options as I've been going through assorted bits and pieces, but
there never seems to be enough time to follow up.
 
This is in striking contrast to a previous attempt (most of the code however
is at home not here), but the functions that I worked out for the at and
label functions were
 
ProcLab <- function(DateData,breakNum){
 maxplot <- round(as.numeric(max(DateData)),digits=0)
 minplot <- round(as.numeric(min(DateData)),digits=0)
 maxplotnum <- round(((maxplot-minplot)/86400)+1,digits=0)
 jumpnum <- (maxplotnum/((breakNum)-1))*.98
 lablist <- seq(min(DateData),max(DateData),jumpnum*86400)
}

ProcAt <- function(DateData,breakNum){
 maxplot <- round(as.numeric(max(DateData)),digits=0)
 minplot <- round(as.numeric(min(DateData)),digits=0)
 maxplotnum <- round(((maxplot-minplot)/86400)+1,digits=0)
 jumpnum <- (maxplotnum/((breakNum)-1))*.98
 atlist <- seq(0,maxplotnum,jumpnum)
}
 
The kludges were in because without them the whole thing fell over,
presumably because I would have needed to set the limits as well.
 

 
_
 
Tom Mulholland
Senior Policy Officer
WA Country Health Service
189 Royal St, East Perth, WA, 6004
 
Tel: (08) 9222 4062
e-mail: [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] 
 
The contents of this e-mail transmission are confidential and may be
protected by professional privilege. The contents are intended only for the
named recipients of this e-mail. If you are not the intended recipient, you
are hereby notified that any use, reproduction, disclosure or distribution
of the information contained in this e-mail is prohibited. Please notify the
sender immediately.
 

[[alternate HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] (no subject)

2003-06-04 Thread Christian Hoffmann
Hi everybody,

I finally hope to reach the person who started the thread [R] (no 
subject).  The innermost level of text was my original request.

Kind regards
Christian
At 10:50 2003-06-04 +0200, you wrote:
Hi Christian,

--On Wednesday 4 June 2003 10:36 +0200 Christian Hoffmann 
[EMAIL PROTECTED] wrote:

Hi Paul

At 08:44 2003-06-04 +0200, you wrote:
Hi Christian,

--On Wednesday 4 June 2003 8:39 +0200 Christian Hoffmann
[EMAIL PROTECTED] wrote:
Please avoid no subject. There might be people (like me, when I am in
a hurry or in a bad mood) who just delete such messages. It would be a
pity to miss interesting information, yet.



Please avoid prematurely emailing somebody who did not originate a
thread  that is under your current scrutiny!
No offence meant, but...
How could I find out this person if I already deleted so many messages of
that kind?
No offence either, but why mail me, and not the person who sent you the 
first mail (in this current thread i.e.) without the subject? IMHO either 
be consistent and mail everybody, or mail nobody!?

regards,
Paul


--
Paul Lemmens
NICI, University of Nijmegen  ASCII Ribbon Campaign /\
Montessorilaan 3 (B.01.03)Against HTML Mail \ /
NL-6525 HR Nijmegen  X
The Netherlands / \
Phonenumber+31-24-3612648
Fax+31-24-3616066

Dr.sc.math.Christian W. Hoffmann
Mathematics and Statistical Computing
Landscape Dynamics and Spatial Development
Swiss Federal Research Institute WSL
Zuercherstrasse 111
CH-8903 Birmensdorf, Switzerland
phone: ++41-1-739 22 77fax: ++41-1-739 22 15
e-mail: [EMAIL PROTECTED]
www: http://www.wsl.ch/staff/christian.hoffmann/
- Coordinator of cooperation WSL - UFU Ekaterinburg/Russia -
Please avoid sending me Word or PowerPoint attachments.
See http://www.fsf.org/philosophy/no-word-attachments.html
Dangers of .doc, .rtf: http://www.heise.de/newsticker/data/jk-27.01.02-001/
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] predict.glm(glm.ob,type=terms)

2003-06-04 Thread orkun
hello

pgeo <- predict.glm(glm.ob, type="resp") works fine.

But I need to get the predicted values in terms of each factor variable.
pgeo <- predict.glm(glm.ob, type="terms")
gives Error in rep(1/n, n) %*% model.matrix(object) : non-conformable
arguments
Could anyone tell me why ?

Ahmet Temiz
Turkey
__



__
The views and opinions expressed in this e-mail message are the ... {{dropped}}
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] convert factor to numeric

2003-06-04 Thread Philipp Pagel

Hi R-experts!

Every once in a while I need to convert a factor to a vector of numeric
values. as.numeric(myfactor) of course returns a nice numeric vector of
the indexes of the levels which is usually not what I had in mind:

> v <- c(25, 3.78, 16.5, 37, 109)
> f <- factor(v)
> f
[1] 25   3.78 16.5 37   109
Levels: 3.78 16.5 25 37 109
> as.numeric(f)
[1] 3 1 2 4 5


What I really want is a function unfactor that returns v:
> unfactor(f)
[1]  25.00   3.78  16.50  37.00 109.00

Of course I could use something like

> as.numeric(levels(f)[as.integer(f)])

But I just can't believe there is no R function to do this in a more
readable way. Actually, the behaviour of as.numeric() doesn't strike me
as very intuitive. I'm sure it has been implemented that way for a
reason - but what is it?

cu
Philipp

-- 
Dr. Philipp PagelTel.  +49-89-3187-3675
Institute for Bioinformatics / MIPS  Fax.  +49-89-3187-3585
GSF - National Research Center for Environment and Health
Ingolstaedter Landstrasse 1
85764 Neuherberg
Germany

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Rounding problem R vs Excel

2003-06-04 Thread Duncan Murdoch
On 04 Jun 2003 00:24:08 -0500, you wrote:

Excel 2002 (XP):

Cell Formula              Value
= 0.5 - 0.4 - 0.1         0.00000000000000000000E+00
=(0.5 - 0.4 - 0.1)        -2.77555756156289000000E-17
 ...
What is interesting is the change in the displayed value in Excel when
the second formula is surrounded by parens (which I found purely by
accident). This would suggest that there may be something going on in
the parsing of the cell formula that affects the calculation and
displayed value. 

Interesting?  I'd say horrifying.  When "(expr)" does not evaluate
the same as "expr", what can you trust?

Duncan Murdoch

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help