Re: [R] Installing 2.4.0

2006-11-22 Thread Prof Brian Ripley
These are old compilers (2.95.3 is really old, much older than your OS).
Are they really built for Solaris 10?  The underlying problem in both 
cases seems to be a mismatch between your OS and your compiler, something 
discussed in the R-admin manual as a problem on Solaris.

I would suggest getting a current version of gcc for Solaris 10 and using 
that.  (www.sunfreeware.com has 3.4.6, and that works for us on Solaris
8.)


On Tue, 21 Nov 2006, Denise Mauldin wrote:


 Hello,

 I'm trying to install R 2.4.0 on a SunOS 5.10 Generic_118833-24 sun4v
 sparc SUNW,Sun-Fire-T200 machine.  The machine has gcc version 3.3.2 and
 gcc version 2.95.3 20010315 (release).  When I try to compile using 3.3.2
 I get an error with signal.h included from dcigettext.c.  When I compile
 using gcc 2.95.3 I get an error with va_copy.

 Undefined symbol va_copy first referenced in file connections.o
 ld: fatal: Symbol referencing errors.
 No output written to R.bin
 collect2: ld returned 1 exit status

 A friend suggested this was an error with stdarg.h and so I searched
 through the gcc2.95.3 directories and found a version of it and tried to
 include it using the -I and -L arguments to gcc, but this results in the
 same error.  I've tried using both the R-2.4.0 (R-latest) and R-patched
 gzip source files.

 Is there any additional information that you need in order to
 help me?  What steps should I take to solve this problem?

 Thanks,
 Denise

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] saving graphics in jpeg format

2006-11-22 Thread Prof Brian Ripley
On Tue, 21 Nov 2006, Paulo Barata wrote:

 Dear R users,

 I need to save a graph in jpeg format. After plotting the graph,
 when the graphics window is active, in the File menu, the
 Save as / Jpeg / 100% quality correctly saves the graph in jpeg format.
 But I would like to know, how could I control the resolution (in dpi)
 of the saved jpeg file? I need to produce a jpeg at 1200 dpi.

jpeg files do not have a dpi: they are dimensioned in pixels.  I think you 
mean you want a file at 1200 ppi (pixels per inch), as 'dpi' (dots per 
inch) is a printer parameter (sometimes also used of monochrome screens).

 I have tried also the jpeg function in the package grDevices.
 A simple command like this works correctly:

 jpeg(filename = "test01.jpg", width = 800, height = 600, quality = 100,
      pointsize = 250)
 barplot(1:5,col=1:5)
 dev.off()

 But I can't figure out the relation between pointsize, pixels, points
 and dpi. For example, to be specific, to save a graph measuring
 width = 6 inches, height = 4 inches, at 1200 dpi, which parameters
 should I use in the jpeg function?

width=6*1200, height=4*1200, res=1200

Note that 1200 ppi is a very high resolution, and 200 dpi is more usual.

Please take local advice on this issue, as I believe the problem is a 
misunderstanding of resolution.
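
Putting the two together, a minimal sketch (the res argument is
described in ?jpeg; sizes as above, other values as in the original post):

jpeg(filename = "test01.jpg", width = 6*1200, height = 4*1200,
     quality = 100, res = 1200)
barplot(1:5, col = 1:5)
dev.off()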

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to merge these dataframes

2006-11-22 Thread antonio rodriguez
Hi,

Having 3 dataframes with different row numbers, but equal column names 
(see below) I want to merge them by Var1 so I've tried:

merge(j1, j2, j3, by = "Var1")
merge(j, j1, j2, by = names(Var1))

But always got the same message:

Error en fix.by(by.x, x) : 'by' must specify column(s) as numbers, names
or logical

What am I doing wrong?

Thanks,

Antonio


> j1
         Var1 Freq
1  1988-01-13    1
2  1988-01-16    1
3  1988-01-20    3
4  1988-01-25    2
5  1988-01-30    1
6  1988-02-01    5
7  1988-02-08    4
8  1988-02-14    1
9  1988-02-16    1
10 1988-02-18    4

> j2
         Var1 Freq
1  1988-01-13    1
2  1988-01-16    1
3  1988-01-20    3
4  1988-01-25    2
5  1988-01-30    1
6  1988-02-01    4
7  1988-02-08    4
8  1988-02-16    1
9  1988-02-18    4
10 1988-02-24    2
11 1988-03-04    1
12 1988-03-07    1

> j3
        Var1 Freq
1 1988-01-13    1
2 1988-01-16    1
3 1988-01-20    3
4 1988-01-25    2
5 1988-01-30    1
6 1988-02-01    4
7 1988-02-08    4
8 1988-02-16    1
9 1988-02-18    4

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to merge these dataframes

2006-11-22 Thread Prof Brian Ripley
merge() merges *two* data frames, as is very clearly stated on its help 
page.  In your first version, j3 is matching by.x (and the second is 
wrong, as j does not exist and names() applies to an object).

You can do

j4 <- merge(j1, j2, by = "Var1", all = TRUE)
j5 <- merge(j4, j3, all = TRUE)
j5
         Var1 Freq.x Freq.y Freq
1  1988-01-13      1      1    1
2  1988-01-16      1      1    1
3  1988-01-20      3      3    3
4  1988-01-25      2      2    2
5  1988-01-30      1      1    1
6  1988-02-01      5      4    4
7  1988-02-08      4      4    4
8  1988-02-14      1     NA   NA
9  1988-02-16      1      1    1
10 1988-02-18      4      4    4
11 1988-02-24     NA      2   NA
12 1988-03-04     NA      1   NA
13 1988-03-07     NA      1   NA

(or omit all=TRUE if you only want rows which are in all 3 data
frames).
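
The same idea extends to any number of data frames by iterating the
two-frame call; a sketch (Reduce() is from later versions of R than the
one discussed here):

jall <- Reduce(function(x, y) merge(x, y, by = "Var1", all = TRUE),
               list(j1, j2, j3))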


On Wed, 22 Nov 2006, antonio rodriguez wrote:

 Hi,

 Having 3 dataframes with different row numbers, but equal column names
 (see below) I want to merge them by Var1 so I've tried:

 merge(j1, j2, j3, by = "Var1")
 merge(j, j1, j2, by = names(Var1))

 But always got the same message:

 Error en fix.by(by.x, x) : 'by' must specify column(s) as numbers, names
 or logical

 What am I doing wrong?

 Thanks,

 Antonio


 > j1
          Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    5
 7  1988-02-08    4
 8  1988-02-14    1
 9  1988-02-16    1
 10 1988-02-18    4

 > j2
          Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    4
 7  1988-02-08    4
 8  1988-02-16    1
 9  1988-02-18    4
 10 1988-02-24    2
 11 1988-03-04    1
 12 1988-03-07    1

 > j3
         Var1 Freq
 1 1988-01-13    1
 2 1988-01-16    1
 3 1988-01-20    3
 4 1988-01-25    2
 5 1988-01-30    1
 6 1988-02-01    4
 7 1988-02-08    4
 8 1988-02-16    1
 9 1988-02-18    4

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to merge these dataframes [Solved]

2006-11-22 Thread antonio rodriguez
Thanks to everybody. I also found this in a previous post:

merge(merge(j1, j2), j3)

BR

Antonio



Prof Brian Ripley wrote:
 merge() merges *two* data frames, as is very clearly stated on its 
 help page.  In your first version, j3 is matching by.x (and the second 
 is wrong, as j does not exist and names() applies to an object).

 You can do

 j4 <- merge(j1, j2, by = "Var1", all = TRUE)
 j5 <- merge(j4, j3, all = TRUE)
 j5
          Var1 Freq.x Freq.y Freq
 1  1988-01-13      1      1    1
 2  1988-01-16      1      1    1
 3  1988-01-20      3      3    3
 4  1988-01-25      2      2    2
 5  1988-01-30      1      1    1
 6  1988-02-01      5      4    4
 7  1988-02-08      4      4    4
 8  1988-02-14      1     NA   NA
 9  1988-02-16      1      1    1
 10 1988-02-18      4      4    4
 11 1988-02-24     NA      2   NA
 12 1988-03-04     NA      1   NA
 13 1988-03-07     NA      1   NA

 (or omit all=TRUE if you only want rows which are in all 3 data
 frames).


 On Wed, 22 Nov 2006, antonio rodriguez wrote:

 Hi,

 Having 3 dataframes with different row numbers, but equal column names
 (see below) I want to merge them by Var1 so I've tried:

 merge(j1, j2, j3, by = "Var1")
 merge(j, j1, j2, by = names(Var1))

 But always got the same message:

 Error en fix.by(by.x, x) : 'by' must specify column(s) as numbers, names
 or logical

 What am I doing wrong?

 Thanks,

 Antonio


 > j1
          Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    5
 7  1988-02-08    4
 8  1988-02-14    1
 9  1988-02-16    1
 10 1988-02-18    4

 > j2
          Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    4
 7  1988-02-08    4
 8  1988-02-16    1
 9  1988-02-18    4
 10 1988-02-24    2
 11 1988-03-04    1
 12 1988-03-07    1

 > j3
         Var1 Freq
 1 1988-01-13    1
 2 1988-01-16    1
 3 1988-01-20    3
 4 1988-01-25    2
 5 1988-01-30    1
 6 1988-02-01    4
 7 1988-02-08    4
 8 1988-02-16    1
 9 1988-02-18    4

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] advanced plotting

2006-11-22 Thread Jim Lemon
downunder wrote:
 Hi all. I need some help. I have to plot so many observations in a coordinate
 system that you can't see really much. Is there any possibility in R to
 reduce the size of a plotted point? In the plot command I couldn't find a
 solution.
  plot(..., type = "p", ...)
 
 thanks in advance
 
 lars

Hi Lars,

You might be interested in the count.overplot function in the plotrix 
package that displays a single count for each cluster of points. You can 
adjust the size of the cluster using the tol argument.
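
For the original question of making each plotted point smaller, a
minimal sketch with base graphics (hypothetical data):

set.seed(1)
x <- rnorm(10000); y <- rnorm(10000)
plot(x, y, pch = ".")   # single-pixel plotting symbol
plot(x, y, cex = 0.3)   # or shrink the default symbol with cex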

Jim

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] differences between aov and lme

2006-11-22 Thread domerg
Hi,

we have a split-plot experiment in which we measured the yield of crop 
fields. The factors we studied were:

B : 3 blocks
I : 2 main plots for presence of Irrigation
V : 2 plots for Varieties
N : 3 levels of Nitrogen

Each block contains two plots (irrigated or not). Each plot is divided
into two secondary parcels for the two varieties.
Each of these parcels is divided into three subplots corresponding to 
three ordered levels of nitrogen.

We found in Venables & Ripley (Modern Applied Statistics with S-PLUS,
3rd edition) the multistratum model for the same type of dataset but for
three levels (without the Irrigation partition):
aov(Y ~ N*V + Error(B/V), qr = T)

which we adapted to our model:
aov(Y ~ N*V*I + Error(B/V/I))

In Pinheiro & Bates (Mixed-Effects Models in S and S-PLUS), and as we saw
in the message "Re: lme and lmer syntax" from Ronaldo Reis-Jr, Wed 26 Oct
2005, we also fitted the mixed model:
lme(Y ~ N*V*I, random = ~ 1 | B/V/I)

On a random simulated response Y, we didn't obtain similar results -
only one factor with the same F-value - from the aov function
and the lme one (unlike the examples used by Venables & Ripley
and Pinheiro & Bates).

Is there a mistake in one of our two models or an explanation of this 
difference?

Thanks a lot in advance.


Caroline Domerg and Frederic Chiroleu
UMR 53 PVBMT (Peuplements Vegetaux et Bio-agresseurs en Milieu Tropical)
CIRAD
Pôle de Protection des Plantes (3P) - Saint-Pierre
Ile de la Réunion

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] differences between aov and lme

2006-11-22 Thread Prof Brian Ripley

The way you describe this, it would appear to be B/I/V, not B/V/I.

From the footer:

  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.

Had you given us access to the data, it would have been much easier to 
answer the question: if you need further help please do so.
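
With that nesting, the calls sketched from the models quoted below would
become (untested, as the data were not supplied):

aov(Y ~ N*V*I + Error(B/I/V))
lme(Y ~ N*V*I, random = ~ 1 | B/I/V)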



On Wed, 22 Nov 2006, domerg wrote:


Hi,

we have a split-plot experiment in which we measured the yield of crop
fields. The factors we studied were:

B : 3 blocks
I : 2 main plots for presence of Irrigation
V : 2 plots for Varieties
N : 3 levels of Nitrogen

Each block contains two plots (irrigated or not). Each plot is divided
into two secondary parcels for the two varieties.
Each of these parcels is divided into three subplots corresponding to
three ordered levels of nitrogen.

We found in Venables & Ripley (Modern Applied Statistics with S-PLUS,
3rd edition) the multistratum model for the same type of dataset but for
three levels (without the Irrigation partition):
aov(Y ~ N*V + Error(B/V), qr = T)

which we adapted to our model:
aov(Y ~ N*V*I + Error(B/V/I))

In Pinheiro & Bates (Mixed-Effects Models in S and S-PLUS), and as we saw
in the message "Re: lme and lmer syntax" from Ronaldo Reis-Jr, Wed 26 Oct
2005, we also fitted the mixed model:
lme(Y ~ N*V*I, random = ~ 1 | B/V/I)

On a random simulated response Y, we didn't obtain similar results -
only one factor with the same F-value - from the aov function
and the lme one (unlike the examples used by Venables & Ripley
and Pinheiro & Bates).

Is there a mistake in one of our two models or an explanation of this
difference?

Thanks a lot in advance.


Caroline Domerg and Frederic Chiroleu
UMR 53 PVBMT (Peuplements Vegetaux et Bio-agresseurs en Milieu Tropical)
CIRAD
Pôle de Protection des Plantes (3P) - Saint-Pierre
Ile de la Réunion

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem with the plm package

2006-11-22 Thread Giangiacomo Bravo
Hi all,

I have a problem in installing and using the plm 
package using R 2.2.0 on Windows XP.

I installed it from a .zip file downloaded from 
CRAN. Apparently everything is OK:
> utils:::menuInstallLocal()
package 'plm' successfully unpacked and MD5 sums checked
updating HTML package descriptions

However, when I try to load it:
> library(plm)
Errore in lazyLoadDBfetch(key, datafile, compressed, envhook) :
 ReadItem: tipo 241 sconosciuto
Inoltre: Warning message:
il pacchetto 'plm' è stato creato con R versione 2.4.0
Errore: caricamento pacchetto/namespace fallito per 'plm'

Sorry, I installed R in Italian, but that should
be easy to understand: 'Errore' means, of course,
'error', and 'sconosciuto' means 'unknown'. Basically
the package cannot be loaded because there is an
unknown item (at least, that's what I understand).

Does anybody have an idea of the problem?

Thank you,

Giangiacomo

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] odd behaviour of %%?

2006-11-22 Thread Jonathan Williams
Dear R Helpers,

I am trying to extract the modulus from divisions by a sequence of
fractions.
I noticed that %% seems to behave inconsistently (to my untutored eye),
thus:

> 0.1%%0.1
[1] 0
> 0.2%%0.1
[1] 0
> 0.3%%0.1
[1] 0.1
> 0.4%%0.1
[1] 0
> 0.5%%0.1
[1] 0.1
> 0.6%%0.1
[1] 0.1
> 0.7%%0.1
[1] 0.1
> 0.8%%0.1
[1] 0
> 0.9%%0.1

The modulus for 0.1, 0.2, 0.4 and 0.8 is zero, as I'd expect. But, the
modulus for 0.3, 0.6, 0.7 and 0.9 is 0.1 - which I did not expect. I can
see no obvious rule that predicts whether x%%0.1 will give an answer of
0 or 0.1.  I could find no explanation of the way that %% works in the R
manuals. So, I have 3 questions:-

1) Why is the modulus of 0.3%%0.1 (and 0.5%%0.1 and 0.6%%0.1...) not zero?
2) Are there any algorithms in R that use the %% operator with fractional
divisors in this way, and do they know about its apparently inconsistent
behaviour?
3) If %% is not intended for use with fractional divisors, then would it
be a good idea to trap attempts to use them?

Thanks, in advance, for your help

Jonathan Williams

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problem with the plm package

2006-11-22 Thread Prof Brian Ripley
You downloaded a Windows binary package for R 2.4.0 and tried to load it 
into R 2.2.0.  That does not work, as you found out.


You need to install from the source package, or (better) update your 
version of R (as the posting guide asked you to do before posting).
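
A sketch of the two remedies (the plm version number below is
hypothetical, and building source packages on Windows needs the Rtools
toolchain):

## within R, given a source package:
install.packages("plm", type = "source")
## or from a shell:
R CMD INSTALL plm_0.2-2.tar.gz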



On Wed, 22 Nov 2006, Giangiacomo Bravo wrote:


Hi all,

I have a problem in installing and using the plm
package using R 2.2.0 on windows xp.

I installed it from a .zip file downloaded from
CRAN. Apparently everything is OK:
> utils:::menuInstallLocal()
package 'plm' successfully unpacked and MD5 sums checked
updating HTML package descriptions

However, when I try to load it:
> library(plm)
Errore in lazyLoadDBfetch(key, datafile, compressed, envhook) :
ReadItem: tipo 241 sconosciuto
Inoltre: Warning message:
il pacchetto 'plm' è stato creato con R versione 2.4.0
Errore: caricamento pacchetto/namespace fallito per 'plm'

Sorry, I installed R in Italian, but that should
be easy to understand: 'Errore' means, of course,
'error', and 'sconosciuto' means 'unknown'. Basically
the package cannot be loaded because there is an
unknown item (at least, that's what I understand).

Does anybody have an idea of the problem?

Thank you,

Giangiacomo

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Mango Solutions Announces R Public Training Course in Paris

2006-11-22 Thread Romain Francois
Mango Solutions are pleased to announce the above course in Paris as 
part of our schedule for Q1 2007.

---
Introduction to R and R Programming - 12th-14th February 2007
---

(Please find a French version of this announcement at
http://www.mango-solutions.com/services/rtraining/r_paris_training_07_french.html)

* Who Should Attend?

This is a course suitable for beginners and improvers in the R language
and is ideal for people wanting an all-round introduction to R.

* Course Goals

- To allow attendees to understand the technology behind the R package
- To improve attendees' programming style and confidence
- To enable users to access a wide range of available functionality
- To enable attendees to program in R within their own environment
- To understand how to embed R routines within other applications

* Course Outline

1. Introduction to the R language and the R community
2. The R Environment
3. R data objects
4. Using R functions
5. The apply family of functions
6. Writing R functions
7. Standard Graphics
8. Advanced Graphics
9. R Statistics
10. R Applications


The cost of these courses is €1800 for commercial attendees and €850 for 
academic attendees. A €100 discount will be offered to members of the R 
Foundation (http://www.r-project.org/foundation/main.html).

Should your organization have more than 3 possible attendees, why not
talk to us about hosting a customized and focused course delivered at 
your premises? Details of further courses in alternative locations are 
available at http://www.mango-solutions.com/services/training.html

More details about this course:
- in French:
http://www.mango-solutions.com/services/rtraining/r_paris_training_07_french.html
- in English:
http://www.mango-solutions.com/services/rtraining/r_paris_training_07.html

Should you want to book a place on this course or have any questions 
please contact [EMAIL PROTECTED]


Cordialement,

Romain Francois

--
Mango Solutions
Tel +44 (0)1249 467 467
Mob +44 (0)7813 526 123
Fax +44 (0)1249 467 468

data analysis that delivers

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] odd behaviour of %%?

2006-11-22 Thread Peter Dalgaard
Jonathan Williams [EMAIL PROTECTED] writes:

 Dear R Helpers,
 
 I am trying to extract the modulus from divisions by a sequence of
 fractions.
 I noticed that %% seems to behave inconsistently (to my untutored eye),
 thus:
 
 > 0.1%%0.1
 [1] 0
 > 0.2%%0.1
 [1] 0
 > 0.3%%0.1
 [1] 0.1
 > 0.4%%0.1
 [1] 0
 > 0.5%%0.1
 [1] 0.1
 > 0.6%%0.1
 [1] 0.1
 > 0.7%%0.1
 [1] 0.1
 > 0.8%%0.1
 [1] 0
 > 0.9%%0.1
 
 The modulus for 0.1, 0.2, 0.4 and 0.8 is zero, as I'd expect. But, the
 modulus for 0.3, 0.6, 0.7 and 0.9 is 0.1 - which I did not expect. I can
 see no obvious rule that predicts whether x%%0.1 will give an answer of
 0 or 0.1.  I could find no explanation of the way that %% works in the R
 manuals. So, I have 3 questions:-

 1) Why is the modulus of 0.3%%0.1 (and 0.5%%0.1 and 0.6%%0.1...) not zero?
 2) Are there any algorithms in R that use the %% operator with fractional
 divisors in this way, and do they know about its apparently inconsistent
 behaviour?
 3) If %% is not intended for use with fractional divisors, then would it
 be a good idea to trap attempts to use them?

These are not fractions but floating point numbers. See FAQ 7.31

http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f

and the references therein for reasons why 0.3 is not an integer
multiple of 0.1 in binary, etc.
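
A small illustration at the prompt (printing rounds; comparison does not):

> 0.3/0.1
[1] 3
> 0.3/0.1 == 3
[1] FALSE
> isTRUE(all.equal(0.3/0.1, 3))   # comparison with a tolerance
[1] TRUE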

-- 
   O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([EMAIL PROTECTED])                     FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] odd behaviour of %%?

2006-11-22 Thread Brandt, T. (Tobias)
To answer part 1) of your question, see point 7.31 in the R FAQ
(http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f).

> 0.3 - 3*0.1
[1] -5.551115e-17

It always amazes me in how many different guises the problem of floating
point representation crops up.

HTH
 

-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of 
Jonathan Williams
Sent: 22 November 2006 12:15 PM
To: Ethz. Ch
Subject: [R] odd behaviour of %%?

Dear R Helpers,

I am trying to extract the modulus from divisions by a 
sequence of fractions.
I noticed that %% seems to behave inconsistently (to my untutored eye),
thus:

> 0.1%%0.1
[1] 0
> 0.2%%0.1
[1] 0
> 0.3%%0.1
[1] 0.1
> 0.4%%0.1
[1] 0
> 0.5%%0.1
[1] 0.1
> 0.6%%0.1
[1] 0.1
> 0.7%%0.1
[1] 0.1
> 0.8%%0.1
[1] 0
> 0.9%%0.1

The modulus for 0.1, 0.2, 0.4 and 0.8 is zero, as I'd expect. 
But, the modulus for 0.3, 0.6, 0.7 and 0.9 is 0.1 - which I 
did not expect. I can see no obvious rule that predicts 
whether x%%0.1 will give an answer of 0 or 0.1.  I could find 
no explanation of the way that %% works in the R manuals. So, I have 3
questions:-

1) Why is the modulus of 0.3%%0.1 (and 0.5%%0.1 and 
0.6%%0.1...) not zero?
2) Are there any algorithms in R that use the %% operator with 
fractional divisors in this way, and do they know about its 
apparently inconsistent behaviour?
3) If %% is not intended for use with fractional divisors, 
then would it be a good idea to trap attempts to use them?

Thanks, in advance, for your help

Jonathan Williams

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



Nedbank Limited Reg No 1951/09/06. The following link displays the names of 
the Nedbank Board of Directors and Company Secretary. [ 
http://www.nedbank.co.za/terms/DirectorsNedbank.htm ]
This email is confidential and is intended for the addressee only. The 
following link will take you to Nedbank's legal notice. [ 
http://www.nedbank.co.za/terms/EmailDisclaimer.htm ]


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Dealing with NAs

2006-11-22 Thread Amir Safari
 Dear R Users,

 Once the function svm() in library e1071, like any function in R, is run
 with some variables including missing values (NA), one needs to include
 the na.pass argument in the call in order to deal with the NAs. The
 argument na.omit is not desired, since it reduces the length of the
 variables.

 A problem when running svm() including na.pass is that the result of
 fitting consists of one part of seemingly right fitted values, from the
 start of the series to somewhere within it, and another part of fitted
 values which are NAs, running to the end of the series.

 But we expect fitted values without any NA.

 How can it be corrected?

 Thank you so much for help.
 Amir
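
(A common workaround, sketched here with hypothetical x and y rather
than the poster's data: fit and predict on complete cases only, then pad
the predictions back to the full length.)

library(e1071)
ok   <- complete.cases(x, y)
fit  <- svm(x[ok, , drop = FALSE], y[ok])
pred <- rep(NA, length(y))
pred[ok] <- predict(fit, x[ok, , drop = FALSE])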

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Statistical Software Comparison

2006-11-22 Thread John C Frain
Some comments that I hope may be of use -

Yes the STATA manuals are very good.  At $495 per set they should be.
Buying individual volumes is not an option as there are many
cross-references between different volumes and the price of the set is
discounted relative to buying individual volumes.  The online manuals are
just a basic summary of the syntax of the commands and are terrible.  Other
commercial packages that I know (e.g. RATS, EViews, MATLAB, GAUSS) have
much better help files and distribute their manuals in pdf format along with
the product.  I have switched to R for most of my econometrics partly for
reasons of $$ but also for the simple reason that I can get my analysis done
in R.  For simple econometrics I do return to simpler products or a product
that has a specific purpose (e.g. GRETL for basic econometrics, RATS for
time series analysis, etc.).

R and STATA are both superb products.  Both are so large that most people
will be familiar with only a small part of each.  Both are
programmable.  Modern or experimental methods can be implemented in both.
Which programming language you prefer is probably a matter of taste.  If you
can find a person who is working in your field and is familiar with one of
the languages and is willing to help you, then my advice would be to use his
expertise regardless of the package that he is using.  One area where R is
much better than STATA would be in finance, where R has the advantage of the
superb package Rmetrics.  Any person choosing between R and STATA should
look for similar packages in their area of expertise.  The Task Views on
CRAN are very good summaries of R facilities in the areas of econometrics
and finance.

I have added empirical content to a statistics course for mathematics
masters students using R.  I teach empirical econometrics using STATA and
MATLAB (for institutional reasons) but would prefer to use R.






On 22/11/06, Robert Duval [EMAIL PROTECTED] wrote:

 I'll answer the ones I know of:

  4. Manuals (theory included in the manuals).
 Stata manuals are superb. The online help manuals are really minimal,
 but the complete set of manuals for sale is really good. Not only do they
 discuss the Stata implementation, but they give a concise
 theoretical discussion of what the statistical methods are actually
 doing.

 While they don't get to talk much about the inner workings of Stata
 (as some of the R manuals do), I like them much better than the R ones.

 Many of the statistical procedures are illustrated with examples using
 the datasets included with the software

  5. Support (in this aspect there is no comparison with R,
  the R list is the best known support).

 R list has a better support than Statalist, but still Statalist is
 quite active and helpful.
 Plus they are more polite... no RTFM or stuff like that.
 If you own a Stata license, you can get direct support from somebody
 at StataCorp (in addition to Statalist). This is especially relevant if
 you have questions on how Stata is estimating something, bugs, etc.


  6. Numerical stability.
 Quite stable. The only glitch I've observed is that just after new releases
 their routines are not very reliable... meaning they sometimes change
 the way something is being computed and might mess up something
 that previously was running fine.

 Right now Stata 9 is pretty stable, but if Stata 10 were to come on
 the market now, I would probably wait for a couple of months and make
 sure everything is well tested.


 One last thing: while I abandoned the Stata world to move to R (due to
 $$), I have to say that the only thing I really miss about it is its
 ability to handle large datasets. Stata comes with great data
 management routines, and it can hold a large amount of data in its
 memory.
 Here R is light years behind. This is particularly relevant if you
 have to clean up large datasets before you actually start doing
 statistics.

 hope this helps
 robert

 On 11/21/06, Kenneth Cabrera [EMAIL PROTECTED] wrote:
  Hi R users:
 
  I want to know if any of you have used
  Stata or Statgraphics.
 
  What are the advantages and disadvantages with
  respect to R on the following aspects?
 
  1. Statistical functions or options for advanced
  experimental design (fractional, mixed models,
  greco-latin squares, split-plot, etc).
  2. Bayesian approach to experimental design.
  3. Experimental design planing options.
  4. Manuals (theory included in the manuals).
  5. Support (in this aspect there is no comparison with R,
  the R list is the best known support).
  6. Numerical stability.
  7. Implementation of modern statistical approaches.
 
  Thank you for your help.
 
  Kenneth
 
  --
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 

 

[R] Problem with RWeka-rJava packages

2006-11-22 Thread Alejandra Malberti
Hello:
I'm trying to execute the Apriori("file.arff") command of the RWeka package.
I'm working with:
Operating System: Windows XP Home
R-2.4.0
RWeka_0.2-11
rJava_0.4-11
classpath= .;C:\Archivos de programa\Java\jdk1.5.0\lib;C:\Archivos de
programa\R\R-2.4.0\library\RWeka\jar

An error occurs when the .jnew command is executed on class
weka/core/Instances:

.jnew("weka/core/Instances")

Failed to create object of class 'weka/core/Instances', and a warning
message: createObject.GetMethodID(weka/core/Instances,()V) failed

What can I do?
Thanks everybody.

Alejandra Malberti

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] odd behaviour of %%?

2006-11-22 Thread Ted Harding
On 22-Nov-06 Jonathan Williams wrote:
 Dear R Helpers,
 
 I am trying to extract the modulus from divisions by a
 sequence of fractions.
 I noticed that %% seems to behave inconsistently (to my
 untutored eye), thus:
 
 > 0.1%%0.1
 [1] 0
 > 0.2%%0.1
 [1] 0
 > 0.3%%0.1
 [1] 0.1
 > 0.4%%0.1
 [1] 0
 > 0.5%%0.1
 [1] 0.1
 > 0.6%%0.1
 [1] 0.1
 > 0.7%%0.1
 [1] 0.1
 > 0.8%%0.1
 [1] 0
 > 0.9%%0.1

This is yet another manifestation of the fact that R (like
most computer languages) stores numbers as binary representations
of fixed length. This is OK for integers up to a certain
maximum value, but relatively few fractions (those which are
multiples of powers of 1/2) will be stored exactly.

When you do 0.2 %% 0.1, you will be getting the remainder of
twice the binary representation of 0.1, modulo 0.1, which will
be zero; and similarly for 0.4 and 0.8, since these are going
up by powers of 2 and the relationships between their binary
representations will match what you would expect.

However, the binary representation of 0.3 does not correspond
exactly to 3 times the binary representation of 0.1:

  > 0.3 - 3*0.1
  [1] -5.551115e-17

so 3*0.1 is represented by a binary fraction slightly greater
than the binary representation of 0.3, so 0.1 only goes into
0.3 twice, with a remainder slightly less than  0.1 which rounds
to 0.1 when displayed as a result:

  > 0.3 %% 0.1
  [1] 0.1
  > 0.1 - (0.3 %% 0.1)
  [1] 2.775558e-17

Similarly for 0.5, 0.6, 0.7 and 0.9.

 The modulus for 0.1, 0.2, 0.4 and 0.8 is zero, as I'd expect.
 But, the modulus for 0.3, 0.6, 0.7 and 0.9 is 0.1 - which I
 did not expect. I can see no obvious rule that predicts whether
 x%%0.1 will give an answer of 0 or 0.1. I could find no
 explanation of the way that %% works in the R manuals. So,
 I have 3 questions:-
 
 1) Why is the modulus of 0.3%%0.1 (and 0.5%%0.1 and 0.6%%0.1...)
 not zero?

See above.

 2) Are there any algorithms in R that use the %% operator with
 fractional divisors in this way,

I don't know -- though others will!

 and do they know about its apparently inconsistent behaviour?

People who write algorithms should be aware of such effects of
binary representation, so a well-written algorithm will take
account of this danger.

 3) If %% is not intended for use with fractional divisors,
 then would it be a good idea to trap attempts to use them?

The best protection against this kind of thing is circumspection
on the part of the programmer -- who, therefore should do his
own trapping.

An alternative aproach to the calculations you made could be

  > 0.3 - 0.1*(0.3/0.1)
  [1] 0

which is algebraically equivalent, and gives the right answer:

  > identical(0.3 - 0.1*(0.3/0.1), 0)
  [1] TRUE

However, this is not always going to work either:

  > 101.1 - (101.1/0.1)*0.1
  [1] -1.164153e-10

so it's prudent to wrap it in a suitable round():

  > round(101.1 - (101.1/0.1)*0.1, digits=9)
  [1] 0

but you still have to be aware of what value to set the number
of digits to round to:

  > round(101.1 - (101.1/0.1)*0.1, digits=10)
  [1] -1e-10
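
One way to do that trapping, a minimal sketch: snap remainders lying
within a tolerance of 0 or of the divisor back to 0:

fmod <- function(x, y, tol = sqrt(.Machine$double.eps)) {
  r <- x %% y
  ifelse(abs(r) < tol | abs(r - y) < tol, 0, r)
}
fmod(0.3, 0.1)   # 0, as hoped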

Hoping this helps,
Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 22-Nov-06   Time: 11:33:05
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] XEmacs: substitute \n by real linefeeds

2006-11-22 Thread Christian Hoffmann
Hi,

This may be a useful tip for R users on emacs:

Problem: substitute \n by a real linefeed, so that

aa \n bb \n ccc

becomes

aa
  bb
  ccc

?

Solution:  M-% RET \n RET C-q 12 RET

It works because C-q (quoted-insert) reads an octal character code:
octal 12 is decimal 10, a real linefeed.
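
The same substitution can also be done from R itself; a minimal sketch:

txt <- "aa \\n bb \\n ccc"   # a string containing literal backslash-n
cat(gsub("\\n", "\n", txt, fixed = TRUE))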

Enjoy!
Christian
-- 
Dr. Christian W. Hoffmann,
Swiss Federal Research Institute WSL
Zuercherstrasse 111, CH-8903 Birmensdorf, Switzerland
Tel +41-44-7392-277 (office), -111(exchange), -215  (fax)
[EMAIL PROTECTED],  www.wsl.ch/staff/christian.hoffmann

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme - plot - labels

2006-11-22 Thread Doktor, Daniel
Hello there,

I am using the 'nlme' package to analyse among group and in between
group variances. I do struggle a little using the respective
plot-functions. I created a grouped object:

group.lme <- groupedData(obsday ~ oro | id,
    data = read.table("data-lme.txt", header = T),
    labels = list(x = "Day of Year", y = "ID number"))

When I plot, however

plot(group.lme)

the y-labels appear on the x-axis and the x-labels are not drawn
at all.

Any advice? I am using R 2.2.1 on Windows XP

Thanks,

Daniel

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Latent models

2006-11-22 Thread Bi-Info (http://members.home.nl/bi-info)
Hai,

Can anyone help me with some literature (R related) about latent models?

Thanx,

Wilfred

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RBloomberg Multi-ticker problem

2006-11-22 Thread davidr
Bloomberg recommends getting only one ticker at a time for intraday
data.
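
Following that recommendation, a sketch of a per-ticker loop (untested
against Bloomberg; arguments copied from the post below):

tickers <- c("NOK1V FH Equity", "AUA AV Equity")
res <- lapply(tickers, function(tk)
    blpGetData(conn, tk, "LAST_PRICE",
               start = as.chron(as.Date("2006-9-13", "%Y-%m-%d")),
               barfields = "VOLUME", barsize = 10,
               retval = "data.frame"))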

David L. Reiner
Rho Trading Securities, LLC
Chicago  IL  60605
312-362-4963

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shubha Vishwanath
Karanth
Sent: Wednesday, November 22, 2006 1:09 AM
To: r-help@stat.math.ethz.ch; [EMAIL PROTECTED]
Subject: [R] RBloomberg Multi-ticker problem

Hi,

 

I am trying to download data from Bloomberg through R. If I try to
download intraday data for multiple tickers and only one field, I get
the error, written below in red. How do I get rid of this error?

 

 

> dat <- blpGetData(conn, c("NOK1V FH Equity", "AUA AV Equity"), "LAST_PRICE",
    start = as.chron(as.Date("2006-9-13", "%Y-%m-%d")), barfields = "VOLUME",
    barsize = 10, retval = "data.frame")

Error in if (typ[n] == "character") { : missing value where TRUE/FALSE
needed

> blpDisconnect(conn)

 

Thank you,

Shubha.


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fitting mixed-effects models with lme with fixed error term variances

2006-11-22 Thread Douglas Bates
On 11/21/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
 Douglas Bates bates at stat.wisc.edu writes:
 ...
  Can you be more specific about which parameters you want to fix and
  which you want to vary in the optimization?

 It would be nice to have the ability to fix all variances i.e. variances of
 random effects.

That gets tricky in terms of the parameterization of the
variance-covariance matrices for vector-valued random effects.  These
matrices are not expressed in the conventional parameterization of
variances and covariances, or even variances and correlations, because
the conditions for the resulting matrix to be positive definite are
not simple bounds or easily expressed transformations when the matrix
is larger than 2 by 2.  I suppose what I could do is to allow these
matrices to be specified in the parameterization that is used in the
optimization and provide a utility function to map from the
conventional parameters to these.  That would mean that you couldn't
fix, say, the variance of the intercept term for vector-valued random
effects but allow the variance of a slope for the same grouping factor
to be estimated.  Well, you could, but only in the fortune("Yoda")
sense.

By the way, if you fix all the variances then what are you optimizing
over?  The fixed effects?  In that case the solution can be calculated
explicitly for a linear mixed model.  The conditional estimates of the
fixed effects given the variance components are the solution to a
penalized linear least squares problem.  (Yes, the solution can also
be expressed as a generalized linear least squares problem but there
are advantages to using the penalized least squares representation.
See

@Article{bates04:_linear,
  author =   {Douglas M. Bates and Saikat DebRoy},
  title ={Linear mixed models and penalized least squares},
  journal =  {Journal of Multivariate Analysis},
  year = 2004,
  volume =   91,
  number =   1,
  pages ={1--17}
}
)
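
In the all-variances-fixed case the conditional fixed-effects estimate
is just generalized least squares; a sketch (hypothetical names: X the
fixed-effects design matrix, V the assumed-known covariance of y):

Vinv     <- solve(V)
beta.hat <- solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)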

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Latent models

2006-11-22 Thread ronggui

Is ltm what you want?

> packageDescription("ltm")

Package: ltm
Title: Latent Trait Models under IRT
Version: 0.6-1
Date: 2006-10-02
Author: Dimitris Rizopoulos [EMAIL PROTECTED]
Maintainer: Dimitris Rizopoulos [EMAIL PROTECTED]
Description: Analysis of multivariate dichotomous and polytomous data
   using latent trait models under the Item Response Theory
   approach. It includes the Rasch, the Two-Parameter Logistic,
   the Birnbaum's Three-Parameter, and the Graded Response Models.
Depends: R(>= 2.3.0), MASS, gtools, msm
LazyLoad: yes
LazyData: yes
License: GPL version 2 or later
URL: http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:ltm
Packaged: Tue Oct 3 09:07:38 2006; dimitris
Built: R 2.4.0; ; 2006-10-03 17:16:04; windows


If you are looking for a function for Latent Class Analysis (LCA), then
lca in package e1071 is what you want.


On 11/22/06, Bi-Info (http://members.home.nl/bi-info) [EMAIL PROTECTED] wrote:

Hai,

Can anyone help me with some literature (R related) about latent models?

Thanx,

Wilfred

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




--
Ronggui Huang
Department of Sociology
Fudan University, Shanghai, China
黄荣贵
复旦大学社会学系

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] What training algorithm does nnet package use?

2006-11-22 Thread Wee-Jin Goh
Greetings list,

I've just swapped from the neural package to the nnet package and  
I've noticed that the training is orders of magnitude faster, and the  
results are way more accurate.

This leads me to wonder: what training algorithm is nnet using? Is
it a modification of the standard backpropagation? Or a completely
different algorithm? I'm trying to account for the speed differences  
between neural and nnet, and the documentation on the nnet package is  
rather sparse on what training algorithm is used (either that, or I'm  
getting blind and missed it totally).

Any help would be much appreciated.

Regards,
Wee-Jin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem with the plm package (2)

2006-11-22 Thread Giangiacomo Bravo
Thanks a lot for the quick answers I received, which helped me to
solve my first (simple) problem with the plm package.

Unfortunately, here's another one:
I do not know why, but I'm unable to estimate a simple model of the
form y ~ 1 + x.

When I try
> zz <- plm(y ~ 1 + x, data=mydata)
I obtain
Errore in X.m[, coef.within, drop = F] : numero di dimensioni errato
which means that the number of dimensions is wrong in X.m[, 
coef.within, drop = F].

However, I have no problem estimating a more complex model, e.g.
> zz <- plm(y ~ 1 + x + I(x^2), data=mydata)
in this case everything is OK.

What's wrong?

Thank you,
Giangiacomo

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Latent models

2006-11-22 Thread Prof Brian Ripley
Those are only a few of the possible latent models.  Factor analysis and
SEM are also latent variable models, as indeed are the mixed models of
nlme, lme4, mmlcr, polr models ...  Then there are the graphical models
of (at least) packages ggm and latentnet, polytomous LCA in package
poLCA ...

help.search("latent", agrep=FALSE) on a comprehensive installation helped
me find some of these.

On Wed, 22 Nov 2006, ronggui wrote:

 Is ltm what you want?
 packageDescription(ltm)
 Package: ltm
 Title: Latent Trait Models under IRT
 Version: 0.6-1
 Date: 2006-10-02
 Author: Dimitris Rizopoulos [EMAIL PROTECTED]
 Maintainer: Dimitris Rizopoulos [EMAIL PROTECTED]
 Description: Analysis of multivariate dichotomous and polytomous data
   using latent trait models under the Item Response Theory
   approach. It includes the Rasch, the Two-Parameter Logistic,
   the Birnbaum's Three-Parameter, and the Graded Response Models.
 Depends: R(>= 2.3.0), MASS, gtools, msm
 LazyLoad: yes
 LazyData: yes
 License: GPL version 2 or later
 URL: http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:ltm
 Packaged: Tue Oct 3 09:07:38 2006; dimitris
 Built: R 2.4.0; ; 2006-10-03 17:16:04; windows


 If you are looking function for Latent Class Analysis (LCA), then
 lca in package e1071 is what you want.


 On 11/22/06, Bi-Info (http://members.home.nl/bi-info) [EMAIL PROTECTED] 
 wrote:
 Hai,
 
 Can anyone help me with some literature (R related) about latent models?
 
 Thanx,
 
 Wilfred
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 




-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problems with garchFit

2006-11-22 Thread T Mu
Hi all,

I post it on both r-help and r-finance since I don't know where is most
appropriate for this topic. Sorry if it bothers you.

I did GARCH fitting on SP500 monthly returns with garchFit from fSeries. I
got the same coefficients from all cond.dist choices except normal. I
thought that is probably usual for the data. But when I played with it, I
got another question.

I plotted a skew normal with skew = 1 and a standard normal; they overlap
each other, so I think they are the same. Skew = 1 means no skewness (I
cannot find the paper defining the distribution).
library(fSeries)
curve(dsnorm(x, 0, 1, 1), -2, 2, add = F, col = 'red') #skew normal with
skew 1
curve(dnorm(x, 0, 1), -2, 2, add = T, col = 'blue') #normal

Then I try them as innovations,

#normal innovation
garch_norm <- garchFit(series = logr, include.mean = F)

#skew normal innovation

#this line does not include skew, so it gets the same result as normal
garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean =
F, include.skew = F)

#this line includes skew, but uses the default skew = 1, and it gets results
#different from normal, which I don't understand

garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean =
F, include.skew = T)

Have I done something wrong? I am attaching the code, thank you.

Tian

#GARCH analysis of monthly return
rm(list=ls(all=TRUE))
sp500 <- read.csv('sp_m90.csv', header=TRUE)
sp500 <- sp500[,2] #only adjusted close
n <- length(sp500)

logr <- log(sp500[1:n-1] / sp500[2:n])
acf(logr)
ar5 <- arima(logr, order = c(5, 0, 0), include.mean = T)
logr <- ar5$res
acf(logr)

#fit GARCH distribution
hist(logr, freq = F, ylim = c(0, 12), breaks = 'FD')

norm_fit <- normFit(logr)
curve(dnorm(x, norm_fit$est[1], norm_fit$est[2]), -.15, .15, add = TRUE,
col=2)

t_fit <- stdFit(logr)
curve(dstd(x, t_fit$est[1], t_fit$est[2], t_fit$est[3]), -.15, .15, add =
TRUE, col=6)

snorm_fit <- snormFit(logr)
curve(dsnorm(x, snorm_fit$est[1], snorm_fit$est[2], snorm_fit$est[3]), -.25,
.15, add = TRUE, col=4)

st_fit <- sstdFit(logr)
curve(dsstd(x, st_fit$est[1], st_fit$est[2], st_fit$est[3], st_fit$est[4]),
-.25, .15, add = TRUE, col=3)

library(fSeries)

#normal innovation
garch_norm <- garchFit(series = logr, include.mean = F)

#t innovation
garch_t <- garchFit(series = logr, cond.dist = 'dstd', include.mean = F,
include.shape = T)
garch_t1 <- garchFit(series = logr, cond.dist = 'dstd', include.mean = F,
shape = t_fit$est[3], include.shape = T)

#skew normal innovation
garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean =
F, include.skew = T)
garch_snorm1 <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean =
F, skew = snorm_fit$est[3], include.skew = T)

#skew t innovation
garch_st <- garchFit(series = logr, cond.dist = 'dsstd', include.mean = F,
include.skew = T, include.shape = T)
garch_st1 <- garchFit(series = logr, cond.dist = 'dsstd', include.mean = F,
skew = st_fit$est[4], shape = st_fit$est[3], include.skew = T,
include.shape = T)

vix <- read.csv('D:/Documents and Settings/Mu Tian/Desktop/8780/8780
project/vix_m.csv', header=TRUE)
vix <- (vix[,2]/100) / (12^.5)

plot_sd <- function(x, ylim = NULL, col = NULL, ...) {
xcsd = [EMAIL PROTECTED]
plot(xcsd, type = "l", col = col, ylab = "x",
main = "Conditional SD", ylim = ylim)
abline(h = 0, col = "grey", lty = 3)
grid()
}

plot_sd(garch_norm, ylim = c(0.02, 0.13), col = 2)
xcsd = [EMAIL PROTECTED]
lines(xcsd, col = 3)
lines(1:n, vix)

#predict
predict(garch_norm)
predict(garch_t)

#demonstration of skew distributions
#skew normal
curve(dsnorm(x, 0, 1, .1), -2, 2, add = F, col = 'green')
curve(dsnorm(x, 0, 1, snorm_fit$est[3]), type = 'l', col = 'blue', add = T)
curve(dsnorm(x, 0, 1, 1), -2, 2, add = T, col = 'red') #normal

#skew t
curve(dsstd(x, 0, 1, 4, 1), -2, 2, add = F, col = 'red')
curve(dsstd(x, 0, 1, st_fit$est[3], st_fit$est[4]), type = 'l', col =
'blue', add = T)
curve(dsstd(x, 0, 1, 100, .5), -2, 2, add = T, col = 'green')

#t
curve(dstd(x, 0, 1, 4), -2, 2, add = T, col = 'red')
curve(dstd(x, 0, 1, t_fit$est[3]), type = 'l', col = 'blue', add = T)
curve(dstd(x, 0, 1, 100), -2, 2, add = T, col = 'green')

curve(dsnorm(x, 0, 1, 1), -2, 2, add = F, col = 'red') #normal
curve(dnorm(x, 0, 1), -2, 2, add = T, col = 'blue') #normal
curve(dsnorm(x, 0, 1, .1), -2, 2, add = T, col = 'green') #normal

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fitting mixed-effects models with lme with fixed error term variances

2006-11-22 Thread Gregor Gorjanc
Douglas Bates wrote:
 On 11/21/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
 Douglas Bates bates at stat.wisc.edu writes:
 ...
  Can you be more specific about which parameters you want to fix and
  which you want to vary in the optimization?

 It would be nice to have the ability to fix all variances i.e.
 variances of
 random effects.
 
  That gets tricky in terms of the parameterization of the
  variance-covariance matrices for vector-valued random effects.  These
  matrices are not expressed in the conventional parameterization of
  variances and covariances, or even variances and correlations, because
  the conditions for the resulting matrix to be positive definite are
  not simple bounds or easily expressed transformations when the matrix
  is larger than 2 by 2.  I suppose what I could do is to allow these
  matrices to be specified in the parameterization that is used in the
  optimization and provide a utility function to map from the
  conventional parameters to these.  That would mean that you couldn't
  fix, say, the variance of the intercept term for vector-valued random
  effects but allow the variance of a slope for the same grouping factor
  to be estimated.  Well, you could, but only in the fortune("Yoda")
  sense.
 

Yes, I agree here. Thank you for the detailed answer.

 By the way, if you fix all the variances then what are you optimizing
 over?  The fixed effects?  In that case the solution can be calculated
 explicitly for a linear mixed model.  The conditional estimates of the
 fixed effects given the variance components are the solution to a
 penalized linear least squares problem.  (Yes, the solution can also
 be expressed as a generalized linear least squares problem but there
 are advantages to using the penalized least squares representation.

Yup. It would really be great to be able to do that nicely in R: say, use
lmer() once and, since this might take some time, use the estimates of
variance components next time to get the fixed and random effects. Is this
possible with lmer or any related function - not in the fortune("Yoda")
sense ;)

-- 
Lep pozdrav / With regards,
Gregor Gorjanc
--
University of Ljubljana PhD student
Biotechnical Faculty
Zootechnical Department URI: http://www.bfro.uni-lj.si/MR/ggorjan
Groblje 3   mail: gregor.gorjanc at bfro.uni-lj.si

SI-1230 Domzale tel: +386 (0)1 72 17 861
Slovenia, Europefax: +386 (0)1 72 17 888

--
One must learn by doing the thing; for though you think you know it,
 you have no certainty until you try. Sophocles ~ 450 B.C.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Probit analysis

2006-11-22 Thread Dipjyoti Chakraborty
Respected Sir/Madam,



I have a question regarding calculation of LD50 (Lethal Dose) and IC50 (50%
inhibitory concentration) of an antimicrobial experiment.



I have used a compound isolated from a plant and observed its effect on the
fungus *Fusarium oxysporum* by the food poisoning method. Solutions of the
compound at concentrations of 0, 50, 100, 150, 200 and 250 µg/ml were added
to sterilized medium in Petri dishes (9 cm dia.) and after transferring the
mycelia of fungal strains, the dishes were incubated in the dark. When the
mycelium of fungi reached the edges of the control (0 µg/ml) dishes, the
antifungal indices were calculated. Each test was repeated three times and
the average was calculated. The formula to calculate antifungal index (AI)
was



AI% = (1 - Da/Db) * 100

Where Da = diameter of growth zone in experiment (cm); Db = diameter of
growth zone in control (cm).
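
As a minimal numeric illustration in R (Da and Db below are made-up values,
not measurements from the experiment):

Da <- 4.5                  # growth-zone diameter, treated dish (cm)
Db <- 9.0                  # growth-zone diameter, control dish (cm)
AI <- (1 - Da/Db) * 100    # antifungal index from the formula above
AI                         # 50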



The results are as follows:

Concentration (µg/ml):           0    50    100      150      200      250
AI% (average of 3 replicates):   0    10    21.59    33.89    47.96    59.93

I have version 2.3.1 (2006-06-01) of R loaded on my computer and want to
calculate LD50 and, if possible, IC50 values with the data. I have checked
the Help menu---Manuals in PDF---An Introduction to R. There is an example
of calculating LD50 in Chapter 11, page 63.

But I cannot fit my data into the model; I get error messages.



Please suggest some R functions, or the method by which I can do the
calculations. I am not at all used to programming languages, so a detailed
solution would be very helpful.



Thanking you in anticipation.

Sincerely yours.


-- 
Dipjyoti Chakraborty
C/o Dr Adinpunya Mitra
Natural Products Biotechnology Lab
Agriculture  Food Technology Depart.
IIT-KGP
Kharagpur-721 302-India

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Summary, was Re: Confidence interval for relative risk

2006-11-22 Thread Michael Dewey
At 14:43 10/11/2006, Michael Dewey wrote:

After considerable help from list members and some digging of my own
I have prepared a summary of the findings which I have posted (see
link below). Broadly there were four suggestions:
1 - Wald-type intervals,
2 - transforming the odds ratio confidence interval,
3 - method based on score test,
4 - method based on likelihood.

Method 3 was communicated to me off-list
  ===
I haven't followed all of the thread either but has someone already 
pointed out to you confidence intervals that use the score method? 
For example Agresti (Categorical Data Analysis 2nd edition, p. 77-78) 
note that 'although computationally more complex, these methods often 
perform better'. However, they also note that 'currently they are not 
available in standard software'.

But then again, R is not standard software: the code (riskscoreci) 
can be found from here: 
http://www.stat.ufl.edu/~aa/cda/R/two_sample/R2/index.html

best regards,
Jukka Jokinen


and so I reproduce it here. Almost a candidate for the fortunes package there.
The other three can be found from the archive
under the same subject although not all in the same thread.

Methods 3 and 4 seem to have more going for them as far
as I can judge.

Thanks to David Duffy, Spencer Graves,
Jukka Jokinen, Terry Therneau, and Wolfgang Viechtbauer.
The views and calculations presented here and in the summary
are my own and
any errors are my responsibility not theirs.

The summary document is available from here

http://www.zen103156.zen.co.uk/rr.pdf


Original post follows.


The concrete problem is that I am refereeing
a paper where a confidence interval is
presented for the risk ratio and I do not find
it credible. I show below my attempts to
do this in R. The example is slightly changed
from the authors'.

I can obtain a confidence interval for
the odds ratio from fisher.test of
course

=== fisher.test example ===

> outcome <- matrix(c(500, 0, 500, 8), ncol = 2, byrow = TRUE)
> fisher.test(outcome)

 Fisher's Exact Test for Count Data

data:  outcome
p-value = 0.00761
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  1.694792  Inf
sample estimates:
odds ratio
Inf

=== end example ===

but in epidemiology authors often
prefer to present risk ratios.

Using the facility on CRAN to search
the site I find packages epitools and Epi
which both offer confidence intervals
for the risk ratio

=== Epi example ===

  library(Epi)
  twoby2(outcome[c(2,1),c(2,1)])
2 by 2 table analysis:
--
Outcome   : Col 1
Comparing : Row 1 vs. Row 2

           Col 1  Col 2  P(Col 1)  95% conf. interval
Row 1          8    500    0.0157    0.0079    0.0312
Row 2          0    500    0.0000    0.0000       NaN

                                  95% conf. interval
             Relative Risk:    Inf     NaN      Inf
         Sample Odds Ratio:    Inf     NaN      Inf
Conditional MLE Odds Ratio:    Inf  1.6948      Inf
    Probability difference: 0.0157  0.0027   0.0337

             Exact P-value: 0.0076
        Asymptotic P-value:    NaN
--

=== end example ===

So Epi gives me a lower limit of NaN but the same confidence
interval and p-value as fisher.test

=== epitools example ===

  library(epitools)
  riskratio(outcome)
$data
          Outcome
Predictor  Disease1 Disease2 Total
  Exposed1      500        0   500
  Exposed2      500        8   508
  Total        1000        8  1008

$measure
          risk ratio with 95% C.I.
Predictor  estimate lower upper
  Exposed1        1    NA    NA
  Exposed2      Inf   NaN   Inf

$p.value
   two-sided
Predictor  midp.exact fisher.exact  chi.square
   Exposed1 NA   NA  NA
   Exposed2 0.00404821  0.007610478 0.004843385

$correction
[1] FALSE

attr(,"method")
[1] "Unconditional MLE & normal approximation (Wald) CI"
Warning message:
Chi-squared approximation may be incorrect in: chisq.test(xx, correct =
correction)

=== end example ===

And epitools also gives a lower limit
of NaN.

=== end all examples ===

I would prefer not to have to tell the authors of the
paper I am refereeing that
I think they are wrong unless I can help them with what they
should have done.

Is there another package I should have tried?

Is there some other way of doing this?

Am I doing something fundamentally wrong-headed?
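
For what it is worth, a minimal sketch of suggestion 1, the Wald-type
interval on the log relative risk scale, shows where the NaN lower limit
in both packages comes from (a and b are event counts out of n1 and n2):

rr.wald <- function(a, n1, b, n2, conf.level = 0.95) {
  rr <- (a / n1) / (b / n2)              # relative risk estimate
  se <- sqrt(1/a - 1/n1 + 1/b - 1/n2)    # standard error of log(rr)
  z  <- qnorm(1 - (1 - conf.level) / 2)
  exp(log(rr) + c(-1, 1) * z * se)
}
rr.wald(8, 508, 0, 500)                  # NaN Inf

With a zero cell the standard error of log(RR) is infinite, so the Wald
interval degenerates, which is exactly what Epi and epitools report.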




Michael Dewey
http://www.aghmed.fsnet.co.uk

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Questions regarding integrate function

2006-11-22 Thread Le Wang
A follow-up. Thanks to Matthias, the codes worked very well.

Le

 On 11/18/06, Matthias Kohl [EMAIL PROTECTED] wrote:
  Hello,
 
  your integrand needs to be a function which accepts a numeric vector as
  first argument and returns a vector of the same length (see ?integrate).
  Your function does not fulfill this requirement. Hence, you have to
  rewrite your function or use sapply, apply or friends; something like
 
  newintegrand <- function(x) sapply(x, integrand)
 
  By the way, you use a very old R version; I would recommend updating
  to R 2.4.0.
 
  hth
  Matthias
 
  Le Wang schrieb:
 
  Hi there. Thanks for your time in advance.
  
  I am using R 2.2.0 and OS: Windows XP.
  
  My final goal is to calculate 1/2*integral of
  (f1(x)^1/2-f2(x)^(1/2))^2dx (Latex codes:
  $\frac{1}{2}\int^{{\infty}}_{\infty}
  (\sqrt{f_1(x)}-\sqrt{f_2(x)})^2dx $.) where f1(x) and f2(x) are two
  marginal densities.
  
  My problem:
  
  I have the following R codes using adapt package. Although adapt
  function is mainly designed for more than 2 dimensions, the manual
  says it will also call up integrate if the number of dimension
  equals one. I feed in the data x1 and x2 and bandwidths h1 and h2.
  These codes worked well when my final goal was to take double
  integrals.
  
 integrand <- function(x) {
 # x input is evaluation point for x1 and x2, a 2x1 vector
 x1.eval <- x[1]
 x2.eval <- x[2]
 # n is the number of observations
 n <- length(x1)
 # x1 and x2 are the vectors read from data.dat
 # Compute the marginal densities
 f.x1 <- sum(dnorm((x1.eval-x1)/h1))/(n*h1)
 f.x2 <- sum(dnorm((x2.eval-x2)/h2))/(n*h2)
 # Return the integrand #
 return((sqrt(f.x1)-sqrt(f.x2))**2)

 }

 estimate <- 0.5*adapt(1, lo=lo.default, up=up.default,
 minpts=minpts.default, maxpts=maxpts.default,
 functn=integrand, eps=eps.default, x1, x2, h1, h2)$value
  
  
  But when I used it for one-dimension, it failed. Some of my
  colleagues suggested getting rid of x2.eval in the integrand
  because it is only one integral. But after I changed it, it still
  didn't work. R gave the error msg: "evaluation of function gave a
  result of wrong length".
  
  I am not a frequent R user..although I looked up the mailing list
  for a while and there were few postings asking similar questions, I
  can't still figure out why my codes won't work. Any help will be
  appreciated.
  
  Le
  -
  ~~
  Le Wang, Ph.D
  Population Center
  University of Minnesota
  
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide 
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
  
  
 
 
  --
  Dr. rer. nat. Matthias Kohl
  E-Mail: [EMAIL PROTECTED]
  Home: www.stamats.de
 
-- 
~~
Le Wang, Ph.D
Population Center
University of Minnesota
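
For reference, a self-contained sketch of the fix Matthias describes:
integrate() hands the integrand a vector of evaluation points, so a
scalar-valued integrand must be wrapped with sapply().  The data x1, x2
and bandwidths h1, h2 below are illustrative, not the original data.

set.seed(1)
x1 <- rnorm(100)                 # illustrative sample 1
x2 <- rnorm(100, mean = 0.5)     # illustrative sample 2
h1 <- bw.nrd0(x1); h2 <- bw.nrd0(x2)

integrand <- function(x.eval) {  # scalar version of the integrand
  f.x1 <- sum(dnorm((x.eval - x1)/h1)) / (length(x1) * h1)
  f.x2 <- sum(dnorm((x.eval - x2)/h2)) / (length(x2) * h2)
  (sqrt(f.x1) - sqrt(f.x2))^2
}
newintegrand <- function(x) sapply(x, integrand)  # vectorized wrapper

0.5 * integrate(newintegrand, -Inf, Inf)$value    # the target quantity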

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sued Tuerkei.

2006-11-22 Thread Sued Tuerkei.
Dear Sir or Madam,
As SI GROUP, we have an unbeatably priced autumn and winter offer for you:
half-board 5-star hotels on the Turkish Riviera (Belek).
Departing from Zurich and Basel.
INCLUDING return flight and
an 8-day holiday on the Turkish Riviera (Antalya), 5-star half board
(breakfast and dinner).
DATES AND PRICES
06.01. bis 13.01.2007  Euro 295
13.01. bis 20.01.2007  Euro 295
20.01. bis 27.01.2007  Euro 295
27.01. bis 03.02.2007  Euro 295
03.02. bis 10.02.2007  Euro 295
10.02. bis 17.02.2007  Euro 295
17.02. bis 24.02.2007  Euro 295
24.02. bis 03.03.2007  Euro 295
03.03. bis 10.03.2007  Euro 295
10.03. bis 17.03.2007  Euro 295
17.03. bis 24.03.2007  Euro 295
24.03. bis 31.03.2007  Euro 295
31.03. bis 07.04.2007  Euro 295
07.04. bis 14.04.2007  Euro 295
14.04. bis 21.04.2007  Euro 295
21.04. bis 28.04.2007  Euro 395
28.04. bis 05.05.2007  Euro 390
Included services:
6 nights in a 5-star hotel, double room.
1 night in a thermal hotel in Pamukkale.
7 x half board.
Welcome cocktail.
One full-day excursion to Antalya with a boat trip.
All entrance fees during the trip.
Air-conditioned buses for the 2 excursions.
Tour guide.

With kind regards,
SI GROUP
www.si-groupreisen.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Statistical Software Comparison

2006-11-22 Thread Thomas Lumley
On Tue, 21 Nov 2006, Kenneth Cabrera wrote:

 Hi R users:

 I want to know if any of you had used
 Stata or Statgraphics.

We use Stata for teaching courses aimed at graduate students in other 
departments, and also (as a consequence) on a lot of medical/public health 
research projects.  It is easier to learn than R, and has good support for 
all the methods we teach in the service courses  [unlike, eg, SPSS or 
Minitab].  Part of the reason it is easier to learn is that there is a 
very regular syntax. [There is also a GUI, now, but it isn't a very good 
one and we were using Stata for teaching before it had a GUI].

 What are the advantages and disadvantages with
 respect to R on the following aspects?

 1. Statistical functions or options for advanced
experimental design (fractional, mixed models,
greco-latin squares, split-plot, etc).

Stata is not very good at this sort of thing. Neither is R, yet, since 
lme() is really for longitudinal data and lmer() is still developing.

 2. Bayesian approach to experimental design.

Not much here, either, in Stata

 3. Experimental design planning options.
Or here.

 4. Manuals (theory included in the manuals).

Stata is excellent. They usually give formulas as well as references (and 
sometimes algorithms and computational notes that are not in the 
references).  The only problem is they keep growing and dividing, so the 
cost of a complete set goes up quite rapidly with each release (and the 
volume that you want is always on the other side of the room or lent out 
to someone).

The online help is also good. It suffers relative to R from the examples 
not necessarily being directly executable.

 5. Support (in this aspect there is no comparison with R,
the R list is the best known support).

The Stata list is pretty good, too.  You can see it at
http://www.hsph.harvard.edu/statalist/

 6. Numerical stability.

For most purposes this is not really an issue and I haven't pushed Stata 
to the edge. I haven't seen any problems.

Stata does have a smaller range of built-in optimizers, and they seem to 
have stopped at the Marquardt algorithm.  This has only once been a problem 
for me (in fitting log-binomial generalized linear models), but could be a 
problem in implementing new methods.

 7. Implementation of modern statistical approaches.


It depends on the area. It's not bad at all in biostatistics and in some 
areas of econometrics. As with R there is also a lot of user-written code, 
some of it of excellent quality.

The Stata language is better than it looks, but some things can be easily 
programmed in it and some can't. The last two versions of Stata have 
introduced language changes in order to be able to implement better 
graphics and linear mixed models, and you can also now call C code from 
Stata, so things are improving.

Algorithms that are suited to a `one rectangular dataset' view of the 
world are often very fast in Stata, but the penalty for not vectorizing is 
even stiffer than in R.


-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problems with garchFit

2006-11-22 Thread T Mu
Hi all,

Thank you for the responses. If anyone needs the data, I can email it to
you. I don't think I can attach it to R-help. It is only SP 500 monthly
returns I downloaded from Yahoo finance, with only the date and adjusted
close kept.

Thank you,

Tian


On 11/22/06, T Mu [EMAIL PROTECTED] wrote:

 Hi all,

 I post it on both r-help and r-finance since I don't know where is most
 appropriate for this topic. Sorry if it bothers you.

 I did GARCH fitting on SP 500 monthly returns with garchFit from fSeries.
 I got the same coefficients from all cond.dist choices except normal. I
 thought that is probably usual for the data. But when I played with it, I
 got another question.

 I plot a skew normal with skew = 1 and a standard normal; they overlap
 each other, so I think they are the same. Skew = 1 means no skewness (I
 cannot find the paper defining the distribution).

 library(fSeries)
 curve(dsnorm(x, 0, 1, 1), -2, 2, add = F, col = 'red') #skew normal with
 skew 1
 curve(dnorm(x, 0, 1), -2, 2, add = T, col = 'blue') #normal

 Then I try them as innovations,

 #normal innovation
 garch_norm <- garchFit(series = logr, include.mean = F)

 #skew normal innovation

 #this line does not include skew, so it gives the same result as normal
 garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean = F,
 include.skew = F)

 #this line includes skew, but uses the default skew = 1, and it gives
 results different from normal, which I don't understand

 garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean = F,
 include.skew = T)

 Have I done something wrong? I am attaching the code, thank you.

 Tian

 #GARCH analysis of monthly return
 rm(list=ls(all=TRUE))
 sp500 <- read.csv('sp_m90.csv', header=TRUE)
 sp500 <- sp500[,2] #only adjusted close
 n <- length(sp500)

 logr <- log(sp500[1:n-1] / sp500[2:n])
 acf(logr)
 ar5 <- arima(logr, order = c(5, 0, 0), include.mean = T)
 logr <- ar5$res
 acf(logr)

 #fit GARCH distribution
 hist(logr, freq = F, ylim = c(0, 12), breaks = 'FD')

 norm_fit <- normFit(logr)
 curve(dnorm(x, norm_fit$est[1], norm_fit$est[2]), -.15, .15, add = TRUE,
 col=2)

 t_fit <- stdFit(logr)
 curve(dstd(x, t_fit$est[1], t_fit$est[2], t_fit$est[3]), -.15, .15, add =
 TRUE, col=6)

 snorm_fit <- snormFit(logr)
 curve(dsnorm(x, snorm_fit$est[1], snorm_fit$est[2], snorm_fit$est[3]),
 -.25, .15, add = TRUE, col=4)

 st_fit <- sstdFit(logr)
 curve(dsstd(x, st_fit$est[1], st_fit$est[2], st_fit$est[3],
 st_fit$est[4]), -.25, .15, add = TRUE, col=3)

 library(fSeries)

 #normal innovation
 garch_norm <- garchFit(series = logr, include.mean = F)

 #t innovation
 garch_t <- garchFit(series = logr, cond.dist = 'dstd', include.mean = F,
 include.shape = T)
 garch_t1 <- garchFit(series = logr, cond.dist = 'dstd', include.mean = F,
 shape = t_fit$est[3], include.shape = T)

 #skew normal innovation
 garch_snorm <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean = F,
 include.skew = T)
 garch_snorm1 <- garchFit(series = logr, cond.dist = 'dsnorm', include.mean =
 F, skew = snorm_fit$est[3],
 include.skew = T)

 #skew t innovation
 garch_st <- garchFit(series = logr, cond.dist = 'dsstd', include.mean = F,
 include.skew = T, include.shape = T)
 garch_st1 <- garchFit(series = logr, cond.dist = 'dsstd', include.mean =
 F, skew = st_fit$est[4], shape = st_fit$est[3], include.skew = T,
 include.shape = T)

 vix <- read.csv('D:/Documents and Settings/Mu Tian/Desktop/8780/8780
 project/vix_m.csv', header=TRUE)
 vix <- (vix[,2]/100) / (12^.5)

 plot_sd <- function(x, ylim = NULL, col = NULL, ...) {
 xcsd = [EMAIL PROTECTED]
 plot(xcsd, type = "l", col = col, ylab = "x",
 main = "Conditional SD", ylim = ylim)
 abline(h = 0, col = "grey", lty = 3)
 grid()
 }

 plot_sd(garch_norm, ylim = c(0.02, 0.13), col = 2)
 xcsd = [EMAIL PROTECTED]
 lines(xcsd, col = 3)
 lines(1:n, vix)

 #predict
 predict(garch_norm)
 predict(garch_t)

 #demonstration of skew distributions
 #skew normal
 curve(dsnorm(x, 0, 1, .1), -2, 2, add = F, col = 'green')
 curve(dsnorm(x, 0, 1, snorm_fit$est[3]), type = 'l', col = 'blue', add =
 T)
 curve(dsnorm(x, 0, 1, 1), -2, 2, add = T, col = 'red') #normal

 #skew t
 curve(dsstd(x, 0, 1, 4, 1), -2, 2, add = F, col = 'red')
 curve(dsstd(x, 0, 1, st_fit$est[3], st_fit$est[4]), type = 'l', col =
 'blue', add = T)
 curve(dsstd(x, 0, 1, 100, .5), -2, 2, add = T, col = 'green')

 #t
 curve(dstd(x, 0, 1, 4), -2, 2, add = T, col = 'red')
 curve(dstd(x, 0, 1, t_fit$est[3]), type = 'l', col = 'blue', add = T)
 curve(dstd(x, 0, 1, 100), -2, 2, add = T, col = 'green')

 curve(dsnorm(x, 0, 1, 1), -2, 2, add = F, col = 'red') #normal
 curve(dnorm(x, 0, 1), -2, 2, add = T, col = 'blue') #normal
 curve(dsnorm(x, 0, 1, .1), -2, 2, add = T, col = 'green') #normal


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 

Re: [R] Latent models

2006-11-22 Thread Bi-Info (http://members.home.nl/bi-info)
Hi,

Wrong button. The email went out too quickly...

This might be the one, Brian.

The problem was, and is, that the data is composed of two interrelated
sets of variables, from two completely different ways of measurement,
which I have to connect through some latent structure. Most variables
are dichotomous / slightly scaled (1,2,3) and only one is numerically
scaled. It's not reasonable to assume a linear relation, because I
looked at the Rank linearity condition and it is not satisfied, probably
none of the relations is linear. The statistics book advised me to look
for latent models.

I'll take a look at these packages. Additional suggestions are welcome.

Thanks,

Wilfred

Prof Brian Ripley schreef:
 Those are only a few of the possible latent models.  Factor analysis 
 and SEM are also latent variable models, as indeed are the mixed 
 models of nlme, lme4, mmlcr, polr models ...  Then there are the 
 graphical models of (at least) packages ggm and latentnet, polytomous 
 LDA in package poLCA ...

 help.search("latent", agrep=FALSE) on a comprehensive installation 
 helped me find some of these.

 On Wed, 22 Nov 2006, ronggui wrote:

 Is ltm what you want?
 packageDescription("ltm")
 Package: ltm
 Title: Latent Trait Models under IRT
 Version: 0.6-1
 Date: 2006-10-02
 Author: Dimitris Rizopoulos [EMAIL PROTECTED]
 Maintainer: Dimitris Rizopoulos [EMAIL PROTECTED]
 Description: Analysis of multivariate dichotomous and polytomous data
   using latent trait models under the Item Response Theory
   approach. It includes the Rasch, the Two-Parameter Logistic,
   the Birnbaum's Three-Parameter, and the Graded Response Models.
 Depends: R(>= 2.3.0), MASS, gtools, msm
 LazyLoad: yes
 LazyData: yes
 License: GPL version 2 or later
 URL: http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:ltm
 Packaged: Tue Oct 3 09:07:38 2006; dimitris
 Built: R 2.4.0; ; 2006-10-03 17:16:04; windows


 If you are looking for a function for Latent Class Analysis (LCA), then
 lca in package e1071 is what you want.


 On 11/22/06, Bi-Info (http://members.home.nl/bi-info) 
 [EMAIL PROTECTED] wrote:
  Hi,

 Can anyone help me with some literature (R related) about latent 
 models?

 Thanx,

 Wilfred

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.






__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] NEWBIE: Help explaining use of lm()?

2006-11-22 Thread Jeff Newmiller
James W. MacDonald wrote:
 Hi Kevin,
 
 The hint here would be the fact that you are coding your 'Groups' 
 variable as a factor, which implies (at least in this situation) that 
 the Groups are unordered levels.
 
  Your understanding of a linear model fit is correct when the independent 
  variable is continuous (like the amount of hormone administered). 
 However, in this case, with unordered factor levels, a linear model is 
 the same as fitting an analysis of variance (ANOVA) model.
 
 Note that the statistic for an ANOVA (the F-statistic) is a 
 generalization of the t-statistic to more than two groups. In essence 
 the question you are asking with the lm() fit is 'Is the mean age at 
 walking different from a baseline mean for any of these groups?' as 
 compared to the t-test which asks 'Is the mean age at walking different 
 between Group x and Group y?'.
 
 The way you have set up the analysis uses the 'active' group as a 
 baseline and compares all others to that. In the output you supply, the 
 intercept is the mean age that the active group started walking, and it 
 appears that there is moderate evidence that the 'ctr.8w' group started 
 walking later (the estimate being a little over 2 months later).
 
 If you were to do individual t-tests making the same comparisons you 
 would end up with the same conclusion, but slightly different p-values, 
 etc, because the degrees of freedom would be different.

One concept that this explanation seems to skip lightly over (and that
took me awhile to absorb) is what it means to substitute a factor in
place of a continuous independent variable in lm().  I don't claim
to be an expert in the field, though, so comments on my elaboration
are welcome.

The use of continuous independent variables in lm() leads to a linear
combination of these variables (m_1*x_1 + m_2*x_2 + ... + m_n*x_n) added
to an intercept (b).  If you replace one continuous variable with a factor
(x_n -> f_1), you divide the independent variable space ([x_1, x_2, ..., x_{n-1}])
into separate compartments, each with its own intercept.  That
intercept is represented as a combination of an intercept
specific to the applicable value of the factor on top of an
overall intercept:

   m_1*x_1 + m_2*x_2 + ... + m_{n-1}*x_{n-1} + b_{f_1} + b

so that the term that used to represent the influence of a continuous
variable (m_n*x_n) is replaced with one of a list of constants
(b_a, b_b, b_c if factor f_1 has three levels a, b, and c).

In Kevin's example, no continuous variables are included at all, so
the solution coefficients simply consist of a list of
intercepts, of which the first is present by default because the
model doesn't specify a -1 term, and the remaining ones are the list
of intercepts b_f_1.

It is important to understand clearly that a single scaling factor
m_n applied to a continuous variable has been replaced by a specific
value defined for each level of the factor.  The loss of continuity
means that the influence of the variable no longer shows up in a
simple scaling factor (a single number) and must be called out
individually.

The compartmentalization suggests that separate lm() fits could
be applied for each level of the factor, combining the two intercept
terms for each fit; but the use of a single fit permits the uncertainty
in separate components of the intercept to be identified with the
individual factor levels as well as overall, which (I believe)
explains Jim's assertion that your example amounts to an ANOVA.
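
A minimal sketch of the point above, with made-up numbers (the group names
echo the zelazo setup, but the data are invented):

set.seed(42)
age <- c(rnorm(6, 10), rnorm(6, 12), rnorm(6, 11))   # months at walking
grp <- factor(rep(c("active", "ctr.8w", "ctr.none"), each = 6))
fit <- lm(age ~ grp)
coef(fit)    # (Intercept) = mean of "active"; the rest are offsets from it
anova(fit)   # the same fit viewed as a one-way ANOVA (overall F-test)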

I, too, HTH :)

 HTH,
 
 Jim
 
 
 
 Zembower, Kevin wrote:
 
I'm attempting the herculean task of teaching myself Introductory
Statistics and R at the same time. I'm working through Peter Dalgaard's
Introductory Statistics with R, but don't understand why the answer to
one of the exercises works. I'm hoping someone will have the patience to
explain the answer to me, both in the statistics and R areas.

Exercise 6.1 says:
The zelazo data are in the form of a list of vectors, one for each of
the four groups. Convert the data to a form suitable for the use of lm,
and calculate the relevant tests. ...

This stumped me right from the beginning. I thought I understood that
linear models tried to correlate an independent variable (such as the
amount of a hormone administered) to a dependent variable (such as the
height of a cornstalk). Its output was a model that could state, for
every 10% increase in the hormone, the height increased by X%.

The zelazo data are the ages at walking (in months) of four groups of
infants, two controls and two experimentals subjected to different
exercise regimens. I don't understand why lm() can be used at all in
this circumstance. My initial attempt was to use t.test(), which the
answer key does also. I would have never thought to use lm() except for
the requirement in the problem. I've pasted in the output of the
exercise below, for those without the dataset. Would someone explain why
lm() is appropriate to use in this situation, and what the results mean
'in 

Re: [R] Fitting mixed-effects models with lme with fixed error term variances

2006-11-22 Thread Douglas Bates
On 11/22/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
 Douglas Bates wrote:
  On 11/21/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
  Douglas Bates bates at stat.wisc.edu writes:
  ...
   Can you be more specific about which parameters you want to fix and
   which you want to vary in the optimization?
 
  It would be nice to have the ability to fix all variances, i.e. the
  variances of the random effects.
 
  That gets tricky in terms of the parameterization of the
  variance-covariance matrices for vector-valued random effects.  These
  matrices are not expressed in the conventional parameterization of
  variances and covariances or even variances and correlations, because
  the conditions for the resulting matrix to be positive definite are
  not simple bounds or easily expressed transformations when the matrix
  is larger than 2 by 2.  I suppose what I could do is to allow these
  matrices to be specified in the parameterization that is used in the
  optimization and provide a utility function to map from the
  conventional parameters to these.  That would mean that you couldn't
  fix, say, the variance of the intercept term for vector-valued random
  effects but allow the variance of a slope for the same grouping factor
  to be estimated.  Well, you could, but only in the fortune("Yoda")
  sense.
 

 Yes, I agree here. Thank you for the detailed answer.

  By the way, if you fix all the variances then what are you optimizing
  over?  The fixed effects?  In that case the solution can be calculated
  explicitly for a linear mixed model.  The conditional estimates of the
  fixed effects given the variance components are the solution to a
  penalized linear least squares problem.  (Yes, the solution can also
  be expressed as a generalized linear least squares problem, but there
  are advantages to using the penalized least squares representation.)

 Yup. It would really be great to be able to do that nicely in R, say use
 lmer() once and since this might take some time use estimates of
 variance components next time to get fixed and random effects. Is this
 possible with lmer or any related function - not in the fortune("Yoda") sense ;)

Not quite.  There is now a capability in lmer to specify starting
estimates for the relative precision matrices which means that you can
use the estimates from one fit as starting estimates for another fit.
It looks like

fm1 <- lmer(...)
fm2 <- lmer(y ~ x + (...), start = fm1@Omega)

I should say that in my experience this has not been as effective as I
had hoped it would be.  What I see in the optimization is that the
first few iterations reduce the deviance quite quickly but the
majority of the time is spent refining the estimates near the optimum.
 So, for example, if it took 40 iterations to converge from the rather
crude starting estimates calculated within lmer it might instead take
35 iterations if you give it good starting estimates.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Probit Analysis

2006-11-22 Thread Dipjyoti Chakraborty

Respected Sir/Madam,



I have a question regarding calculation of LD50 (Lethal Dose) and IC50 (50%
inhibitory concentration) of an antimicrobial experiment.



I have used a compound isolated from a plant and observed its effect on the
fungus *Fusarium oxysporum* by the food poisoning method. Solutions of the
compound at concentrations of 0, 50, 100, 150, 200 and 250 µg/ml were added
to sterilized medium in Petri dishes (9 cm dia.) and, after transferring the
mycelia of the fungal strains, the dishes were incubated in the dark. When the
mycelium of the fungi reached the edges of the control (0 µg/ml) dishes, the
antifungal indices were calculated. Each test was repeated three times and
the average was calculated. The formula to calculate the antifungal index
(AI) was



AI% = (1 - Da/Db) * 100

Where Da = diameter of growth zone in experiment (cm); Db = diameter of
growth zone in control (cm).



The results are as follows:

Concentration (µg/ml):           0    50    100      150      200      250
AI% (average of 3 replicates):   0    10    21.59    33.89    47.96    59.93

I have version 2.3.1 (2006-06-01) of R loaded on my computer and want to
calculate LD50 and, if possible, IC50 values with the data. I have checked
the Help menu---Manuals in PDF---An Introduction to R. There is an example
of calculating LD50 in Chapter 11, page 63.




Please suggest some R functions, or the method by which I can do the
calculations. I am not at all used to programming languages, so a detailed
solution would be very helpful.

I am sending the full analysis that I am doing just below this letter. I am
also sending it as an attachment.
There is a warning message that I have marked in red--if it is not in color
in your mail--then just see the message after the command fmp <-
glm(Ymat ~ x, family = binomial(link = "probit"), data = Fusarium).
It says "non-integer counts in a binomial glm!" in: eval(expr, envir, enclos)

Actually I am not quite sure I am doing the correct thing for probit
analysis. You can see that I do get a value of LD50 at the last step, but I
do not know whether my methodology is correct.



Fusarium <- data.frame(x = c(0,50,100,150,200,250), n = rep(90,6),
+ y = c(90,81,75.4,58.5,51.3,65.1))

Fusarium$Ymat <- cbind(Fusarium$y, Fusarium$n - Fusarium$y)



fmp <- glm(Ymat ~ x, family = binomial(link = "probit"), data = Fusarium)


Warning message:

non-integer counts in a binomial glm! in: eval(expr, envir, enclos)


summary(fmp)




Call:

glm(formula = Ymat ~ x, family = binomial(link = probit), data = Fusarium)



Deviance Residuals:

       1        2        3        4        5        6
 3.26268 -0.06454 -0.15297 -2.48265 -1.99479  3.13791



Coefficients:

             Estimate Std. Error z value Pr(>|z|)
(Intercept)  1.5767662  0.1345939  11.715  < 2e-16 ***
x           -0.0056714  0.0007908  -7.172 7.41e-13 ***

---

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1



(Dispersion parameter for binomial family taken to be 1)



   Null deviance: 84.795  on 5  degrees of freedom

Residual deviance: 30.662  on 4  degrees of freedom

AIC: 58.283



Number of Fisher Scoring iterations: 5




summary(fml)




Call:

glm(formula = Ymat ~ x, family = binomial, data = kalythos)



Deviance Residuals:

1   2   3   4   5

1.386   0.838  -2.021  -2.151   2.398



Coefficients:

            Estimate Std. Error z value Pr(>|z|)
(Intercept)  2.069867   0.284782   7.268 3.64e-13 ***
x           -0.006615   0.001591  -4.157 3.22e-05 ***

---

Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1



(Dispersion parameter for binomial family taken to be 1)



   Null deviance: 35.262  on 4  degrees of freedom

Residual deviance: 17.083  on 3  degrees of freedom

AIC: 44.426



Number of Fisher Scoring iterations: 4




ld50 <- function(b) -b[1]/b[2]



ldp <- ld50(coef(fmp)); ldl <- ld50(coef(fml)); c(ldp, ldl)


(Intercept) (Intercept)
   278.0220    312.8854
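
A possible cross-check, not part of the original analysis: dose.p() in the
MASS package returns the dose at a chosen response probability together with
a standard error; for p = 0.5 it should agree with the -b[1]/b[2] value above.

library(MASS)
dose.p(fmp, p = 0.5)   # LD50 estimate with standard error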


Thanking you in anticipation.

Sincerely yours.


--
Dipjyoti Chakraborty
C/o Dr Adinpunya Mitra
Natural Products Biotechnology Lab
Agriculture  Food Technology Depart.
IIT-KGP
Kharagpur-721 302-India
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Newbie problem ... Forest plot

2006-11-22 Thread Michael Dewey
At 15:16 22/11/2006, you wrote:
When I use the studlab parameter I don't have (don't know how to put) 
names inside the graph or on the X and Y axes, and I get a black-and-white 
forest plot, but I need a colored one.

1 - please cc to r-help so that others can learn from your experience
2 - when you use the studlab parameter what happens, and how does 
that differ from what you wanted to happen?



Michael Dewey
http://www.aghmed.fsnet.co.uk

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] plotting a groupedData object

2006-11-22 Thread Lisa Avery
Hello all,

I am plotting a groupedData object and the key at the top of the graph runs
off the page. I don't want to set key=F because it is useful (or would be if
I could see it).

 

Is it possible to explicitly cause the key to wrap? I have used this
function before with no trouble but now I have just 5 groups with rather
long descriptions (which I can't meaningfully shorten).

 

Thank you for your help,

 

Lisa Avery


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problems with garchFit

2006-11-22 Thread T Mu
Hi all,

Got a few inquiries for data, thank you.

I attach code and data below. Copy it to your R script and it should be
runnable.

Now the Snorm with skew = 1 does give the same result as Norm (for this set
of data). But all the other innovations still give the same coefficients.

Thank you,

Tian

#GARCH analysis of monthly return
rm(list=ls(all=TRUE))

logr <- c(0.0107098575567282, 0.0238439568947517, 0.0170914961728296,
0.0138732445655177, -0.00210495583051609, -0.00709127573708259,
-0.0385827936083661, 0.00490449365266998, 0.00385685394698455,
-0.00672488595832236, 0.0179700812080439, -0.00813073003671577,
0.0274033576545142, -0.0250778743362483, -0.000252972595615477,
-0.0184633479948336, 0.0281585718422547, -0.00732056749937242,
0.0223343433626314, -0.0274913993701448, -0.0264806322770031,
0.0115490548518633, -0.0327936279909903, 0.0247646111707177,
0.0306908994386586, 0.00673907585573934, 0.00214245677973209,
-0.00489315945076054, -0.0420701182378048, 0.0106513092268381,
0.00483314418308997, -0.0241112735170185, -0.0236721006924634,
0.00495722080665149, 0.00995100224049281, 0.0423410192839968,
-7.4654666429242e-05, 0.0463263884424721, -0.0191941125904601,
0.0105374637681618, 0.00891562645629928, 0.00408074598837779,
0.0424677854668133, 0.0707494700070023, 0.00114499423035526,
-0.0243277242813142, -0.0349753736938974, -0.0694071571523498,
0.0483227046536378, 0.0757365410062828, -0.123739048470341, -0.00230833563215873,
-0.0894777518603734, -0.0823922232983805, -0.0163008223196004,
-0.0705625628068879, 0.028902196230284, -0.0281627666976421,
-0.0228742536894077, 0.000367412011421635, 0.0653064704108988,
0.0107593083066410, -0.0924344952694643, -0.0734334859099414,
-0.0179761012283746, -0.0325320322718984, -0.00210059232386598,
0.066829130533506, -0.0735364239526057, -0.104008970439016,
0.0268723664276674,
-0.00313268679975119, -0.0906340137330617, -0.0121396649961373,
-0.0621441723072229, 0.0517502775657374, -0.0236541332868366,
0.0164737511342563, -0.029336578252438, -0.0384578572805523,
0.0851459320997492, -0.0274909426329228, -0.0594227029752583,
0.0490549196316311, 0.0117045927385518, 0.0534838867245415, -0.036145147104809,
-0.0134516584002713, -0.0397486953757553, 0.0458303597691911,
-0.0324653458761952, 0.0300639359263621, 0.0308827205446822,
-0.0399929706878522, 0.0330129512566845, 0.0476656477094465,
0.0502661919846773, 0.0700555274728492, 0.0533484191777131, -0.164763950096769,
-0.0188612611651085, 0.0315025148659792, -0.026183523443084,
0.00185764564623504, 0.0415605496104671, 0.0609005489249022,
0.00292109288076270, 0.00843129126020518, 0.0364435424654694,
-0.0422639215772581, 0.0446111390279706, -0.0663607453035695,
0.0680648616436016, 0.0353571745939384, 0.0497475635284525,
0.0495856850474945,
-0.0507265004938753, -0.00126783225527763, 0.052332767500398,
-0.0289178666588803, 0.0636310849271901, 0.0185883019590537,
0.0456074218283171, 0.0114612960614454, -0.0540054008094362,
-0.00492372717398046, 0.0154182797009216, 0.0061641662077013,
0.000707504517465771, -0.000268063662287813, 0.0249188086513136,
0.0101155993444056, 0.0330509893647089, -0.0121696993613180,
0.0321366073123792, -0.00749818182068220, 0.0241037519137133,
0.013877482054341, 0.0284900964619952, 0.0203986640653641,
0.0197845872016367,
0.0282608312416226, 0.0168097602528238, 0.00504624695886614,
-0.047483970806859, 0.0134418482374845, -0.0344232233373743,
0.0297312626164135, 0.0238263428912005, -0.0343340943399759,
0.00514305696494622, 0.00428675908876259, -0.0540037546954394,
-0.0376835393117851, 0.0248059444691968, 0.00286271109440267,
-0.0201726197962942, 0.0120294090576544, -0.0172160135048806,
0.0266745775509609, -0.0125191786357240, -0.00642292839233706,
0.0152853839659078, -0.0329232528272489, 0.0113467545039784,
0.00325116163241795, -0.000156615943755373, 0.00287917888713352,
0.0226350405134303, -0.00507382415825686, 0.00188653413193280,
-0.0314680606930909, 0.0314404828152606, -0.0246891655066154,
-0.00621437278185084, 0.0203328943393480, -0.0292515600983759,
0.00236594272243002, -0.0273027918584042, 0.0986116252116223,
-0.0520744999798322, 0.00458681078225353, -0.0265072106468637,
0.0122803714060437, 0.036704412456513, -0.0562553901943863,
0.0307005847712896,
-0.00685811877481276, 0.0147820750313374, 0.057936537214,
0.0335011692685766, 0.0173466769183716, 0.0510289606341445, -0.0138986659133316,
-0.0597185581683447, -0.106240705003888, -0.0124147403601571,
-0.0161039038326663, 0.0808230314002266, -0.0344330483221435,
0.0167876627118019, 0.00132482639172523)

hist(logr, freq = F, ylim = c(0, 13), breaks = 'FD')

norm_fit <- normFit(logr)
curve(dnorm(x, norm_fit$est[1], norm_fit$est[2]), -.15, .15, add = TRUE,
col=2)

t_fit <- stdFit(logr)
curve(dstd(x, t_fit$est[1], t_fit$est[2], t_fit$est[3]), -.15, .15, add =
TRUE, col=6)

snorm_fit <- snormFit(logr)
curve(dsnorm(x, snorm_fit$est[1], snorm_fit$est[2], snorm_fit$est[3]), -.25,
.15, add = TRUE, col=4)

st_fit <- sstdFit(logr)
curve(dsstd(x, st_fit$est[1], st_fit$est[2], 

Re: [R] lme - plot - labels

2006-11-22 Thread Douglas Bates
On 11/22/06, Doktor, Daniel [EMAIL PROTECTED] wrote:
 Hello there,

 I am using the 'nlme' package to analyse among group and in between
 group variances. I do struggle a little using the respective
 plot-functions. I created a grouped object:

 group.lme <- groupedData(obsday ~ oro | id,
 data = read.table("data-lme.txt", header = T),
 labels = list(x = "Day of Year", y = "ID number"))

 When I plot, however

 plot(group.lme)

 the y-labels appear on the x-axis and the x-labels are not drawn
 at all.

I can't reproduce the problem.  Try

> library(nlme)
> data(sleepstudy, package = "lme4")
> sleep <- groupedData(Reaction ~ Days | Subject, sleepstudy,
+    labels = list(x = "Days of sleep deprivation",
+                  y = "Average reaction time (ms)"))
> plot(sleep)

Under R-2.4.0 that produces a plot with correctly labeled axes.

 Any advice?

Well, there is always the old "read the manual" approach.  The formula
for a groupedData object is of the form "response ~ covariate | groups".
 I am using R 2.2.1 on Windows XP

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plotting a groupedData object

2006-11-22 Thread Douglas Bates
On 11/22/06, Lisa Avery [EMAIL PROTECTED] wrote:
 Hello all,

 I am plotting a groupedData object and the key at the top of the graph runs
 off the page. I don't want to set key=F because it is useful (or would be if
 I could see it).

 Is it possible to explicitly cause the key to wrap? I have used this
 function before with no trouble but now I have just 5 groups with rather
 long descriptions (which I can't meaningfully shorten).

Plotting a groupedData object is just a convenient way of accessing
the 'xyplot' function from the lattice package.  If you need finer
control over the plot than is available through groupedData then I
suggest that you call xyplot directly.  Wading through the
descriptions of all the options in xyplot is a formidable task but the
results are worth it.  In particular it took me a long time to work
out how to use the auto.key argument to get exactly what I wanted but
now I can generally get what I want on the first or second try.

The nlme package was initially written for S and the graphics used the
trellis formulation.  Later it was ported to R and lattice graphics.
With Deepayan's continuing development of the lattice package it is
now far beyond the capabilities of trellis graphics so I would
recommend using lattice directly.
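
As a minimal sketch with a built-in data set (CO2 here is only illustrative),
the auto.key components below control the key directly; columns = 1 stacks
long group labels vertically so the key no longer runs off the page:

library(lattice)
xyplot(uptake ~ conc, data = CO2, groups = Treatment, type = "b",
       auto.key = list(space = "top", columns = 1, lines = TRUE,
                       points = FALSE))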

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] dataframe manipulation

2006-11-22 Thread antonio rodriguez
Hi,

Having a dataframe 'l1' (dput output is below):

> dim(l1)
[1] 1274    2

> l1[1:12,]

         Var1 Freq
1  1988-01-13    1
2  1988-01-16    1
3  1988-01-20    3
4  1988-01-25    2
5  1988-01-30    1
6  1988-02-01    5
7  1988-02-08    4
8  1988-02-14    1
9  1988-02-16    1
10 1988-02-18    4
11 1988-02-24    2
12 1988-03-04    1


I want to extract the times "1988-01", "1988-02", ..., "2005-12" that are 
present, in order to know, for example, that 1988-01 = 5 days and 1988-02 
= 6 days, and so on. The Freq column doesn't matter in this case, just 
the Var1 column.

I've been trying with:

regexpr("2005-03-", l1$Var1)

But the result is not satisfactory since I need to find and count where 
these values are.

Thanks

Antonio


> dput(l1, control = "all")

structure(list(Var1 = structure(as.integer(c(1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,
71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101,
102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127,
128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140,
141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153,
154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166,
167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192,
193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218,
219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231,
232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244,
245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257,
258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270,
271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283,
284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296,
297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309,
310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322,
323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335,
336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348,
349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361,
362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374,
375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387,
388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400,
401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413,
414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426,
427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439,
440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452,
453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465,
466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478,
479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491,
492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504,
505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517,
518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530,
531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543,
544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556,
557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569,
570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582,
583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595,
596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608,
609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621,
622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634,
635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647,
648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660,
661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673,
674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686,
687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699,
700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712,
713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725,
726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738,
739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751,
752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764,
765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777,
778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790,
791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803,
804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816,
817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829,
830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842,
843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855,
856, 857, 858, 859, 860, 861, 862, 863, 

Re: [R] dataframe manipulation

2006-11-22 Thread Marc Schwartz
On Wed, 2006-11-22 at 20:27 +0100, antonio rodriguez wrote:
 Hi,
 
 Having a dataframe 'l1' (dput output is below):
 
  dim(l1)
 1274   2
 
  l1[1:12,]
 
          Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    5
 7  1988-02-08    4
 8  1988-02-14    1
 9  1988-02-16    1
 10 1988-02-18    4
 11 1988-02-24    2
 12 1988-03-04    1
 
 
 I want to extract the times 1988-01, 1988-02,...,2005-12  are 
 present, in order to know, for example, that 1988-01 = 5 days   1988-02 
 = 6 days and so on. The Freq column doesn't matter in this case, just 
 the Var1 column
 
 I've been trying with:
 
 regexpr("2005-03-", l1$Var1) 
 
 But the result is not satisfactory since I need to find and count where 
 these values are.
 
 Thanks
 
 Antonio

Using just the example data:

> table(substr(DF$Var1, 1, 7))

1988-01 1988-02 1988-03 
      5       6       1 

See ?substr for more information.
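
An equivalent approach, assuming Var1 is (or parses as) a Date, which avoids
relying on fixed character positions:

table(format(as.Date(DF$Var1), "%Y-%m"))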

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] dataframe manipulation

2006-11-22 Thread Gabor Grothendieck
If x is the object at the end of your post:

library(zoo)
z <- zoo(x$Freq, as.Date(x$Var1))
aggregate(z, as.yearmon, length)

If out contains the result of aggregate then coredata(out) are the
counts and time(out) are the corresponding year-months.
See ?yearmon
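
A minimal runnable version of the same idea, with a few illustrative rows in
place of the full l1 data frame:

library(zoo)
x <- data.frame(Var1 = c("1988-01-13", "1988-01-16", "1988-02-01"),
                Freq = c(1, 1, 5))
z   <- zoo(x$Freq, as.Date(x$Var1))
out <- aggregate(z, as.yearmon, length)
coredata(out)   # 2 1  (days present per month)
time(out)       # the matching year-months (printed per the current locale)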

On 11/22/06, antonio rodriguez [EMAIL PROTECTED] wrote:
 Hi,

 Having a dataframe 'l1' (dput output is below):

  dim(l1)
 1274   2

  l1[1:12,]

           Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    5
 7  1988-02-08    4
 8  1988-02-14    1
 9  1988-02-16    1
 10 1988-02-18    4
 11 1988-02-24    2
 12 1988-03-04    1


 I want to extract the times 1988-01, 1988-02,...,2005-12  are
 present, in order to know, for example, that 1988-01 = 5 days   1988-02
 = 6 days and so on. The Freq column doesn't matter in this case, just
 the Var1 column

 I've been trying with:

 regexpr("2005-03-", l1$Var1)

 But the result is not satisfactory since I need to find and count where
 these values are.

 Thanks

 Antonio


 dput(l1,control=all)

 structure(list(Var1 = structure(as.integer(c(1, 2, 3, 4, 5, 6,
 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,
 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101,
 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127,
 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140,
 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153,
 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166,
 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192,
 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218,
 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231,
 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244,
 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257,
 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270,
 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283,
 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296,
 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309,
 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322,
 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335,
 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348,
 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361,
 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374,
 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387,
 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400,
 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413,
 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426,
 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439,
 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452,
 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465,
 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478,
 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491,
 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504,
 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517,
 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530,
 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543,
 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556,
 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569,
 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582,
 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595,
 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608,
 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621,
 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634,
 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647,
 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660,
 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673,
 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686,
 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699,
 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712,
 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725,
 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738,
 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751,
 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764,
 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777,
 778, 779, 780, 781, 782, 783, 

Re: [R] dataframe manipulation

2006-11-22 Thread antonio rodriguez
Marc Schwartz wrote:
 Using just the example data:

 table(substr(DF$Var1, 1, 7))

 1988-01 1988-02 1988-03 
       5       6       1 
This is just what I wanted!

Many thanks Marc!

BR

Antonio


 See ?substr for more information.

 HTH,

 Marc Schwartz





__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] dataframe manipulation[Solved]

2006-11-22 Thread antonio rodriguez
Gabor Grothendieck wrote:
 If x is the object at the end of your post:

 library(zoo)
 z <- zoo(x$Freq, as.Date(x$Var1))
 aggregate(z, as.yearmon, length)
Gabor,

Very nice output of this option. I get:

ene 1988 feb 1988 mar 1988 abr 1988 may 1988 jun 1988 jul 1988 ago 1988
       5        6        5        6        5        3        7        4
sep 1988 oct 1988 nov 1988 dic 1988 ene 1989 feb 1989 mar 1989 abr 1989
       7        8        6        6        6        3        4        6

Thanks!

Antonio



 If out contains the result of aggregate then coredata(out) are the
 counts and time(out) are the corresponding year-months.
 See ?yearmon

 On 11/22/06, antonio rodriguez [EMAIL PROTECTED] wrote:
 Hi,

 Having a dataframe 'l1' (dput output is below):

  dim(l1)
 1274   2

  l1[1:12,]

           Var1 Freq
 1  1988-01-13    1
 2  1988-01-16    1
 3  1988-01-20    3
 4  1988-01-25    2
 5  1988-01-30    1
 6  1988-02-01    5
 7  1988-02-08    4
 8  1988-02-14    1
 9  1988-02-16    1
 10 1988-02-18    4
 11 1988-02-24    2
 12 1988-03-04    1


 I want to extract the times 1988-01, 1988-02,...,2005-12  are
 present, in order to know, for example, that 1988-01 = 5 days   1988-02
 = 6 days and so on. The Freq column doesn't matter in this case, just
 the Var1 column

 I've been trying with:

 regexpr("2005-03-", l1$Var1)

 But the result is not satisfactory since I need to find and count where
 these values are.

 Thanks

 Antonio


 dput(l1,control=all)

 structure(list(Var1 = structure(as.integer(c(1, 2, 3, 4, 5, 6,
 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,
 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101,
 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127,
 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140,
 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153,
 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166,
 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192,
 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218,
 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231,
 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244,
 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257,
 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270,
 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283,
 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296,
 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309,
 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322,
 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335,
 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348,
 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361,
 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374,
 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387,
 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400,
 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413,
 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426,
 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439,
 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452,
 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465,
 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478,
 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491,
 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504,
 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517,
 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530,
 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543,
 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556,
 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569,
 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582,
 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595,
 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608,
 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621,
 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634,
 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647,
 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660,
 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673,
 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686,
 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699,
 700, 701, 702, 703, 704, 705, 

[R] data in form of a date

2006-11-22 Thread James J. Roper
Dear all,

I often use dates and times in analyses.  I just can't figure out how to 
format my date or time column in R.  So, apparently R sees the date as 
something other than date (character).  Let's say I am opening a CSV 
file, one of the columns of which is a date or time.  How do I specify 
that when opening the file?

Thanks for the help,

Jim

-- 
-
James J. Roper, Ph.D.
Universidade Federal do Paraná
Depto. de Zoologia
Caixa Postal 19020
81531-990 Curitiba, Paraná, Brasil
=
E-mail: [EMAIL PROTECTED]
Phone/Fone/Teléfono:   55 41 33611764
celular:   55 41 99870543
=
Zoologia na UFPR
http://zoo.bio.ufpr.br/zoologia/
Ecologia e Conservação na UFPR
http://www.bio.ufpr.br/ecologia/
-
http://jjroper.sites.uol.com.br
Currículo Lattes
http://lattes.cnpq.br/2553295738925812

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] data in form of a date

2006-11-22 Thread Leeds, Mark (IED)
There are probably many other ways, but check out read.zoo. I can send you
an example of read.zoo also if you like.
Let me know.
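
A rough sketch of what that could look like; the file name, separator,
and date format below are all assumptions:

  library(zoo)
  # Hypothetical CSV whose first column holds ISO dates
  z <- read.zoo("mydata.csv", header = TRUE, sep = ",",
                format = "%Y-%m-%d")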



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of James J. Roper
Sent: Wednesday, November 22, 2006 4:18 PM
To: r-help@stat.math.ethz.ch
Subject: [R] data in form of a date

Dear all,

I often use dates and times in analyses.  I just can't figure out how to format 
my date or time column in R.  So, apparently R sees the date as something other 
than date (character).  Let's say I am opening a CSV file, one of the columns 
of which is a date or time.  How do I specify that when opening the file?

Thanks for the help,

Jim

--
-
James J. Roper, Ph.D.
Universidade Federal do Paraná
Depto. de Zoologia
Caixa Postal 19020
81531-990 Curitiba, Paraná, Brasil
=
E-mail: [EMAIL PROTECTED]
Phone/Fone/Teléfono:   55 41 33611764
celular:   55 41 99870543
=
Zoologia na UFPR
http://zoo.bio.ufpr.br/zoologia/
Ecologia e Conservação na UFPR
http://www.bio.ufpr.br/ecologia/
-
http://jjroper.sites.uol.com.br
Currículo Lattes
http://lattes.cnpq.br/2553295738925812

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] What training algorithm does nnet package use?

2006-11-22 Thread Wee-Jin Goh
Just to add to this, I also need to know what language the nnet
package is written in. Is it pure R, or is it a wrapper for a C
library? It really is performing very quickly, going through 200
epochs in seconds when the neural package took minutes, and neural
is written in R.

It all may sound like a trivial question, but it is important as I  
need to know all this for the analysis I'm doing for a paper.

Cheers,
Wee-Jin


On 22 Nov 2006, at 15:41, Wee-Jin Goh wrote:

 Greetings list,

 I've just swapped from the neural package to the nnet package and
 I've noticed that the training is orders of magnitude faster, and the
 results are way more accurate.

 This leads me to wonder, what training algorithm is nnet using? Is
 it a modification on the standard backpropagation? Or a completely
 different algorithm? I'm trying to account for the speed differences
 between neural and nnet, and the documentation on the nnet package is
 rather sparse on what training algorithm is used (either that, or I'm
 getting blind and missed it totally).

 Any help would be much appreciated.

 Regards,
 Wee-Jin

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] installation step for RSperl

2006-11-22 Thread Yuandan Zhang
Hi,

I am trying to use R within Perl; however, I am having a bit of difficulty
installing RSPerl. I followed the steps from
http://www.omegahat.org/RSPerl/, but still can't make it work.

Could someone list a few clean steps that I can follow to install it?

cheers


-- 

Yuandan Zhang
张元旦
.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] question about the solve function in library Matrix

2006-11-22 Thread yyan liu
Hi:
I have some problems when I use the solve function in a loop. In
the following code, I have a diagonal matrix ttt whose elements change in
every iteration of a loop. I define an object of class "dpoMatrix" before the
loop so I do not need to create it in every iteration; the reason is to save
some computing time. The code is below. The inverse of the matrix ttt
should change according to the change of ttt in the loop. However, the
values in sigma.dpo.solve, which is the inverse of ttt, do not change.
Can anybody figure out what is wrong with my code? Thanks a lot in advance!
 
 liu 
 
 library(Matrix)
 ttt <- diag(1, 2)
 sigma.dpo <- as(ttt, "dpoMatrix")
 for (i in 1:5) {
   ttt <- diag(i, 2)
   sigma.dpo@x <- as.vector(ttt)  # overwrite the values slot in place
   sigma.dpo.solve <- solve(sigma.dpo)
   print("-")
   print(sigma.dpo)
   print(sigma.dpo.solve)
 }
 ##
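
A hedged guess at the cause, for anyone hitting the same behaviour:
"dpoMatrix" objects cache factorizations in their factors slot, and
solve() reuses that cache, so overwriting the x slot by hand can leave
a stale factorization behind. Re-coercing inside the loop sidesteps
the issue:

  library(Matrix)
  for (i in 1:5) {
    sigma.dpo <- as(diag(i, 2), "dpoMatrix")  # fresh object, no cached factors
    print(solve(sigma.dpo))                   # inverse now tracks diag(i, 2)
  }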
 
 


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] data in form of a date

2006-11-22 Thread Gabor Grothendieck
Read the help desk article in R News 4/1.
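
In the meantime, one common approach is to read the column as character
and convert it afterwards; a minimal sketch (the column names and the
data below are invented):

  # Self-contained stand-in for a CSV file with a date column
  csv <- textConnection("day,count\n2006-11-22,5\n2006-11-23,3")
  dat <- read.csv(csv, stringsAsFactors = FALSE)
  dat$day <- as.Date(dat$day, format = "%Y-%m-%d")  # now a Date column
  str(dat)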

On 11/22/06, James J. Roper [EMAIL PROTECTED] wrote:
 Dear all,

 I often use dates and times in analyses.  I just can't figure out how to
 format my date or time column in R.  So, apparently R sees the date as
 something other than date (character).  Let's say I am opening a CSV
 file, one of the columns of which is a date or time.  How do I specify
 that when opening the file?

 Thanks for the help,

 Jim

 --
 -
 James J. Roper, Ph.D.
 Universidade Federal do Paraná
 Depto. de Zoologia
 Caixa Postal 19020
 81531-990 Curitiba, Paraná, Brasil
 =
 E-mail: [EMAIL PROTECTED]
 Phone/Fone/Teléfono:   55 41 33611764
 celular:   55 41 99870543
 =
 Zoologia na UFPR
 http://zoo.bio.ufpr.br/zoologia/
 Ecologia e Conservação na UFPR
 http://www.bio.ufpr.br/ecologia/
 -
 http://jjroper.sites.uol.com.br
 Currículo Lattes
 http://lattes.cnpq.br/2553295738925812

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] DSC 2007 registration and scientific program

2006-11-22 Thread Paul Murrell
Hi

DSC 2007, a conference on systems and environments for statistical
computing, will take place in Auckland, New Zealand on February 15 & 16,
2007.

This is a reminder that the deadline for early bird DSC registration is
December 1, 2006.

Also, the accepted abstracts for DSC are now available on the web site
http://www.stat.auckland.ac.nz/dsc-2007/abstracts/abstracts.html

Finally, a reminder that there is a small amount of financial assistance
being offered to help people attend DSC.  For details, please see
http://www.stat.auckland.ac.nz/dsc-2007/#funding

Please visit the conference web page at
http://www.stat.auckland.ac.nz/dsc-2007/

Paul
(on behalf of the Organising Committee)
-- 
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/

___
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-announce

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fitting mixed-effects models with lme with fixed error term variances

2006-11-22 Thread Gregor Gorjanc
Hello!

Douglas Bates wrote:
 On 11/22/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
 Douglas Bates wrote:
  On 11/21/06, Gregor Gorjanc [EMAIL PROTECTED] wrote:
  Douglas Bates bates at stat.wisc.edu writes:
  ...
   Can you be more specific about which parameters you want to fix and
   which you want to vary in the optimization?
 
  It would be nice to have the ability to fix all variances i.e.
  variances of
  random effects.
 
...
  effects but allow the variance of a slope for the same grouping factor
  to be estimated.  Well, you could but only in the fortune(Yoda)
  sense.
 

 Yes, I agree here. Thank you for the detailed answer.

  By the way, if you fix all the variances then what are you optimizing
  over?  The fixed effects?  In that case the solution can be calculated
  explicitly for a linear mixed model.  The conditional estimates of the
  fixed effects given the variance components are the solution to a
  penalized linear least squares problem.  (Yes, the solution can also
  be expressed as a generalized linear least squares problem but there
  are advantages to using the penalized least squares representation.

 Yup. It would really be great to be able to do that nicely in R, say use
 lmer() once and since this might take some time use estimates of
 variance components next time to get fixed and random effects. Is this
 possible with lmer or any related function - not in fortune(Yoda)
 sense ;)
 
 Not quite.  There is now a capability in lmer to specify starting
 estimates for the relative precision matrices which means that you can
 use the estimates from one fit as starting estimates for another fit.
 It looks like
 
 fm1 <- lmer(...)
 fm2 <- lmer(y ~ x + (...), start = fm1@Omega)
 
 I should say that in my experience this has not been as effective as I
 had hoped it would be.  What I see in the optimization is that the
 first few iterations reduce the deviance quite quickly but the
 majority of the time is spent refining the estimates near the optimum.
 So, for example, if it took 40 iterations to converge from the rather
 crude starting estimates calculated within lmer it might instead take
 35 iterations if you give it good starting estimates.

Yes, I also have the same experience with other programs, say VCE [1].
What I was hoping for was just the solutions of the linear mixed model, i.e.
via either the GLS or the PLS representation, with no optimization when
values for the variance components are passed as input - there should be a
way to distinguish that from just passing starting values.

[1]http://vce.tzv.fal.de/index.pl

Gregor
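
For reference, when the variance components (and hence V) are taken as
known, the GLS solution mentioned above can be computed directly; a small
generic sketch, not code from the thread:

  # Conditional fixed-effect estimates given a known covariance V (GLS)
  gls_beta <- function(X, y, V) {
    Vinv <- solve(V)
    solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)
  }

  # Toy usage: ordinary least squares is the special case V = identity
  X <- cbind(1, 1:5); y <- c(1.1, 2.0, 2.9, 4.2, 5.1)
  gls_beta(X, y, diag(5))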

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] integrate R code to WinEdt file

2006-11-22 Thread Aimin Yan
I want to integrate R code into my WinEdt file.
Can someone tell me how to do this?
I copied R code into my WinEdt file, but it doesn't work.
Aimin Yan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sweave question

2006-11-22 Thread Aimin Yan
I try Sweave
and  get Sweave-test-1.tex

but how do I run LaTeX on 'Sweave-test-1.tex'?

I am using WinEdt.

thanks,

Aimin

  Sweave(testfile)
Writing to file Sweave-test-1.tex
Processing code chunks ...
  1 : print term verbatim
  2 : term hide
  3 : echo print term verbatim
  4 : term verbatim
  5 : echo term verbatim
  6 : echo term verbatim eps pdf
  7 : echo term verbatim eps pdf

You can now run LaTeX on 'Sweave-test-1.tex'
  LaTeX()
Error: could not find function "LaTeX"
 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sweave question

2006-11-22 Thread Schweitzer, Markus
Hello,

R actually creates a LaTeX file. I am using MiKTeX and the editor
TeXnicCenter, so I open the *.tex file there and then I can do whatever
I want. Usually I use the LaTeX => PDF output profile to create PDF
files.

So I think you have to handle your .tex file with another application.
(How do you usually create something out of LaTeX files?)

Hope that helped.
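
One way to do the whole round trip without leaving R; a sketch that
assumes the source file Sweave-test-1.Rnw sits in the working directory
and that pdflatex is installed and on the PATH:

  Sweave("Sweave-test-1.Rnw")           # writes Sweave-test-1.tex
  system("pdflatex Sweave-test-1.tex")  # produces Sweave-test-1.pdf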




Markus Schweitzer http://www.pokertips.tk 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Aimin Yan
Sent: Donnerstag, 23. November 2006 07:16
To: r-help@stat.math.ethz.ch
Subject: [R] Sweave question

I try Sweave
and  get Sweave-test-1.tex

but how do I run LaTeX on 'Sweave-test-1.tex'?

I am using WinEdt.

thanks,

Aimin

  Sweave(testfile)
Writing to file Sweave-test-1.tex
Processing code chunks ...
  1 : print term verbatim
  2 : term hide
  3 : echo print term verbatim
  4 : term verbatim
  5 : echo term verbatim
  6 : echo term verbatim eps pdf
  7 : echo term verbatim eps pdf

You can now run LaTeX on 'Sweave-test-1.tex'
  LaTeX()
Error: could not find function "LaTeX"
 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] data in form of a date

2006-11-22 Thread antonio rodriguez
James J. Roper wrote:
 Dear all,

 I often use dates and times in analyses.  I just can't figure out how to 
 format my date or time column in R.  So, apparently R sees the date as 
 something other than date (character).  Let's say I am opening a CSV 
 file, one of the columns of which is a date or time.  How do I specify 
 that when opening the file?

 Thanks for the help,

 Jim

   
Jim,

Suppose a matrix F of dimension 6575 x 189, where rows are days and
columns are grid points:

First we set the starting date of the series (6575 is the total number 
of days):

library(zoo)
x.Date <- as.Date("1987-12-31") + c(1:6575)
F.zoo <- zoo(F, x.Date)

#then take a look

F.zoo[,1]

HTH

Antonio

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] installation step for RSperl

2006-11-22 Thread Duncan Temple Lang
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


You have to give us some details about
what went wrong and details about the
platform/operating system you are using.

  D.

Yuandan Zhang wrote:
 Hi,
 
 I try to use R within perl. however, I have a bit difficulty to install
 RSperl. I followd steps from
 http://www.omegahat.org/RSPerl/. but still can' t make it work.
 
 could someone list a fe w clean steps that I can follow to install it?
 
 cheers
 
 
 
 
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

- --
Duncan Temple Lang[EMAIL PROTECTED]
Department of Statistics  work:  (530) 752-4782
4210 Mathematical Sciences Building   fax:   (530) 752-7099
One Shields Ave.
University of California at Davis
Davis,
CA 95616,
USA
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (Darwin)

iD8DBQFFZUvo9p/Jzwa2QP4RAqYtAJ98rZE2w2RSIIdNShh1xf1OX2Z4jgCfSryK
nrrDdgBNhMNH9Gi/iUTalWM=
=euUt
-END PGP SIGNATURE-

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] What training algorithm does nnet package use?

2006-11-22 Thread Dieter Menne
Wee-Jin Goh wjgoh at brookes.ac.uk writes:

 
 Just to add to this, I also need to know what language the nnet
 package is written in. Is it pure R, or is it a wrapper for a C
 library?

As usual, you can download the full source to find out what you want, but it's a
bit hidden. Simply said, nnet (R+C) is a package in the VR bundle (which also
contains MASS), and can be downloaded as a tar.gz from

http://cran.at.r-project.org/src/contrib/Descriptions/VR.html

(No private flames, please, in case I should have mixed up package and bundle.)

/*  nnet/nnet.c by W. N. Venables and B. D. Ripley  Copyright (C) 1992-2002
 *
 * weights are stored in order of their destination unit.
 * the array Conn gives the source unit for the weight (0 = bias unit)
 * the array Nconn gives the number of first weight connecting to each unit,
 * so the weights connecting to unit i are Nconn[i] ... Nconn[i+1] - 1.
 *
 */

#include <R.h>
#include <R_ext/Applic.h>
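
For anyone who wants to reproduce this kind of speed comparison, a small
purely illustrative nnet run; the data and settings below are invented:

  library(nnet)  # part of the VR bundle

  set.seed(1)
  # Invented regression problem: a noisy sine of one input
  x <- matrix(runif(200), ncol = 1)
  y <- sin(2 * pi * x) + rnorm(200, sd = 0.1)

  # One hidden layer of 5 units; maxit plays the role of training epochs
  fit <- nnet(x, y, size = 5, linout = TRUE, maxit = 200, trace = FALSE)
  summary(fit)  # weights listed per destination unit, as in nnet.c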

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.