Re: [R] R-help Digest, Vol 54, Issue 30

2007-08-31 Thread David Duffy
Ron Crump wrote:
 Hi,
 
 I have a dataframe that contains pedigree information;
 that is individual, sire and dam identities as separate
 columns. It also has date of birth.
 
 These identifiers are not numeric, or not sequential.
 
 Obviously, an identifier can appear in one or two columns,
 depending on whether it was a parent or not. These should
 be consistent.
 
 Not all identifiers appear in the individual column - it
 is possible for a parent not to have its own record if its
 parents were not known.
 
 Missing parental (sire and/or dam) identifiers can occur.
 
 I need to export the data for use in another program that
 requires the pedigree to be coded as integers, increasing
 with date of birth (therefore sire and dam always have
 lower identifiers than their offspring) and with missing
 values coded as 0.
 
 How would I go about doing this?


You might look at http://www.qimr.edu.au/davidD/sib-pair.R,
specifically the read.pedigree() and wrlink() functions.  The former is not
very impressive speedwise -- I usually perform these tasks in my
Sib-pair (Fortran) program, which is on the same webpage.  It will order
the pedigree by generational position, so a DOB is not required to do the sort.
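
For a small pedigree, a plain-R sketch of the recoding the original poster asks
for (this is not David Duffy's code; it assumes hypothetical column names id,
sire, dam and dob, and that every non-missing parent has its own row) might be:

recode_pedigree <- function(ped) {
  ped <- ped[order(ped$dob), ]           # parents are born before offspring
  code <- seq_len(nrow(ped))             # integer codes increasing with DOB
  names(code) <- as.character(ped$id)
  lookup <- function(x) {
    out <- code[as.character(x)]
    out[is.na(out)] <- 0                 # missing (or unlisted) parent -> 0
    unname(out)
  }
  data.frame(id = code, sire = lookup(ped$sire), dam = lookup(ped$dam))
}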

Terry Therneau's kinship package does that ordering, but doesn't include
output routines for the Linkage format.

David Duffy.


| David Duffy (MBBS PhD) ,-_|\
| email: [EMAIL PROTECTED]  ph: INT+61+7+3362-0217 fax: -0101  / *
| Epidemiology Unit, Queensland Institute of Medical Research   \_,-._/
| 300 Herston Rd, Brisbane, Queensland 4029, Australia  GPG 4D0B994A v

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R-help Digest, Vol 46, Issue 27

2006-12-27 Thread Grant Izmirlian
On Wednesday 27 December 2006 06:00, [EMAIL PROTECTED] wrote:
 jingjiangyan

I agree, you can use 'assign'. To be more explicit, you could use the 
following function. 

jingjiangyan <-
function(formula, data)
{
  m <- match.call()
  "%,%" <- function(x, y) paste(x, y, sep = "")
  d.nm <- as.character(m$data)
  y.nm <- as.character(formula[[2]])
  x.nm <- as.character(formula[[3]])
  for (i in levels(data[[x.nm]])) {
    var.name <- d.nm %,% "." %,% i
    var.val  <- data[[y.nm]][data[[x.nm]] == i]
    # create a variable called <data>.<level> in the global environment
    assign(var.name, var.val, globalenv())
  }
}

Next, assuming the data.frame 'df' listed in the previous posting
exists in your workspace, the call

  jingjiangyan(bb ~ aa, data=df)

would produce the desired results.

Cheers,
Grant Izmirlian
-- 
Հրանդ Իզմիրլյան

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R-help Digest, Vol 39, Issue 13

2006-05-13 Thread Alan Cobo-Lewis
r-help@stat.math.ethz.ch on Saturday, May 13, 2006 at 6:00 AM -0500 wrote:
 lme(biomass~age, random=~woods/age)?

Jörn

Consult Pinheiro and Bates (2000, Mixed-Effects Models in S and S-PLUS,
Springer, ISBN 0-387-98957-0; ref. 7 at
http://www.r-project.org/doc/bib/R-books.html ) for how to fit more elaborate
models, but two straightforward ones that might be adequate
are
lme( biomass~age, random=~1|woods )
and
lme( biomass~age, random=~age|woods )

In the lme4 library the corresponding syntax is
lmer( biomass~age+(1|woods) )
and
lmer( biomass~age+(age|woods) )
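
As a self-contained, hedged illustration, the sleepstudy data shipped with
lme4 can stand in for the poster's biomass/age/woods variables (that mapping
is an assumption made here, not part of the original question):

data(sleepstudy, package = "lme4")

## nlme: random intercept, then random intercept and slope
library(nlme)
fit1 <- lme(Reaction ~ Days, random = ~ 1 | Subject,    data = sleepstudy)
fit2 <- lme(Reaction ~ Days, random = ~ Days | Subject, data = sleepstudy)

## lme4 equivalents (lme4::lmer avoids attaching lme4 alongside nlme)
fit3 <- lme4::lmer(Reaction ~ Days + (1 | Subject),    data = sleepstudy)
fit4 <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)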

For vignettes on the lme4 library see the mlmRev library and
@ARTICLE{Rnews:Bates:2005,
  AUTHOR = {Douglas Bates},
  TITLE = {Fitting Linear Mixed Models in {R}},
  JOURNAL = {R News},
  YEAR = 2005,
  VOLUME = 5,
  NUMBER = 1,
  PAGES = {27--30},
  MONTH = {May},
  URL = {http://CRAN.R-project.org/doc/Rnews/}
}

alan

--
Alan B. Cobo-Lewis, Ph.D.   (207) 581-3840 tel
Department of Psychology(207) 581-6128 fax
University of Maine
Orono, ME 04469-5742[EMAIL PROTECTED]

http://www.umaine.edu/visualperception

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 38, Issue 30

2006-04-30 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 38, Issue 19

2006-04-19 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 38, Issue 9

2006-04-09 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 37, Issue 26

2006-03-26 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 37, Issue 15

2006-03-15 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 37, Issue 12

2006-03-12 Thread Ferran Carrascosa
Hi r-users,

I would like to know if R has any solution for address standardization.
The problem is to match a database of addresses against the real
street addresses of Spain. Ideally, I would like to assign the
postal code, census data and other geographic information.

If this is not possible, I would like to know about solutions in R for text
mining, text classification, distances between text data, ...

Any help will be appreciated

Thanks in advance

Ferran Carrascosa

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 37, Issue 12

2006-03-12 Thread Liaw, Andy
 From: Ferran Carrascosa
 
 Hi r-users,
 
 I would like to know if R have any solution to the Address 
 standardization. The problem is to classify a database of 
 addresses with the real addresses of a streets of Spain. 
 Ideally, I would like to assign Postal code, census data and 
 other geographic information.

I have no idea about this one...
 
 If this is not possible I would like to know solutions in R 
 about text mining, text classification, distance within text data,...

RSiteSearch("text mining") produced hits that look relevant.

Andy
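
As a concrete, hedged starting point for the fuzzy-matching part of the
question, base R's approximate string matching (adist/agrep in current R)
can score raw address strings against a reference street list; the street
names below are made up for illustration and this is not a full geocoder:

streets <- c("CALLE MAYOR", "GRAN VIA", "PASEO DE GRACIA")
raw     <- c("C. Mayor", "Gran Bia", "Pso de Gracia")

clean <- toupper(raw)
d     <- adist(clean, streets)            # edit distances, raw x reference
best  <- streets[apply(d, 1, which.min)]  # closest reference street
data.frame(raw, matched = best, distance = apply(d, 1, min))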
 
 Any help will be appreciate
 
 Thanks in advance
 
 Ferran Carrascosa
 
 __
 R-help@stat.math.ethz.ch mailing list 
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 37, Issue 1

2006-03-01 Thread isaac . martin
Mi nueva dirección de correo es: [EMAIL PROTECTED]

New e-mail address: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 36, Issue 21

2006-02-21 Thread Evgeniy Kachalin
Hello, dear R users.

I've already sent a question here, but I'm not sure that it had been read.

I need to visualize a classification of my numerical data based on 2-3 
factors. As I suppose, the best way is a tree,
with an arbitrary function at the ends (leaves), or at least with the means 
of my data at the ends.

What is the way to do it? As far as I found, ctree offers binary 
classification, but is that the only way? Of course, a tree is not the only 
way; maybe you could suggest other approaches.

Thank you.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 36, Issue 21

2006-02-21 Thread Evgeniy Kachalin
Evgeniy Kachalin wrote:
 Hello, dear R users.
 
 I've already sent a question here, but I'm not sure that it had been read.
 
 I need to visualize classification of my numerical data based on 2-3 
 factors. As I suppose, the best way is a tree.
 With an orbitrary function at the ends (leaves), or at least with means 
 of my data at the ends.
 
 What is the way to do it? As I found, ctree offers binary 
 classification, but it that the only way? Of course, tree is not only 
 way, may be you could offer other ways.
 
Or perhaps the best way is to do it instead like a 'heatmap', 
but with the means in the cells instead of colors, if that is possible.

Sorry for the second letter.
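
One hedged way to get the "means in the cells" display described above, using
only base R (tapply for the cell means plus image() and text(); the data below
are simulated for illustration, not the poster's):

set.seed(1)
d <- data.frame(y  = rnorm(100),
                f1 = factor(sample(letters[1:3], 100, replace = TRUE)),
                f2 = factor(sample(LETTERS[1:4], 100, replace = TRUE)))

m <- tapply(d$y, list(d$f1, d$f2), mean)          # matrix of cell means
image(seq_len(nrow(m)), seq_len(ncol(m)), m,
      axes = FALSE, xlab = "f1", ylab = "f2")
axis(1, at = seq_len(nrow(m)), labels = rownames(m))
axis(2, at = seq_len(ncol(m)), labels = colnames(m))
text(rep(seq_len(nrow(m)), times = ncol(m)),
     rep(seq_len(ncol(m)), each  = nrow(m)),
     labels = round(as.vector(m), 2))             # the means in the cells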

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-25 Thread Ted Harding
I've been reluctant to step into this topic, but now
feel that it may be helpful to make a certain point.

On the internet, for the most part, the person behind
the email is invisible and intangible. It is therefore
possible, when someone puts their foot down, to stamp
inadvertently on someone else's already broken toes.

A friend of mine, very intelligent, very knowledgeable
and creative, very articulate, nevertheless when writing
uses spelling which can be a close approximation to
random, and some interesting variants of grammar and
vocabulary as well.

The reason: dyslexia.

While most of us hit the wrong keys at times (and when
we read back over what we've written tend to see what we
intended to write rather than what we did write), and
when backed against the wall would admit that we could
have got it right if we had paid better attention, there
are some people who can't help getting it wrong.

But, on the internet, one cannot readily recognise who
they are (though in some cases, if one knows the signs,
one may guess).

Best wishes to all,
Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 25-Jan-06   Time: 10:06:35
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-25 Thread François Pinard
[Gabor Grothendieck]

[...] this list is inhabited by some rather rude participants but
everyone puts up with them in the hope that they do have some useful
remarks.

I've been witnessing this list for about one year, and have also read *lots* 
of archived messages.  While it is true that a few members do not use 
white gloves, are rather fond of concise replies, and do express strong 
opinions at times, they never went overboard insulting people and always 
kept a reasonable measure, at least as far as I could see (yet who 
knows, outliers might happen! :-).

(*) Our whole society is a bit shy and shivers easily when opinions are 
expressed nowadays; I have often observed that people quickly get insecure,
feel attacked, and overreact (by running away or starting a fight).

there is even a group of thought that feels it is a justifiable way to
keep the list volume under control.

This may work because of the starred paragraph above, that is, for wrong 
reasons.  Best is, and this often occurs on the R list, when everything 
(facts, opinions) is being shared efficiently, without useless arguing.  
Then, threads quickly fade out.

-- 
François Pinard   http://pinard.progiciels-bpi.ca

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-24 Thread Keith . Chamberlain
Dear Prof Ripley,

First of all, unless you are an english professor, then I do not think you have
any business policing language. I'm still very much a student, both in R, and
regarding signal analysis. My competence on the subject as compared too your
own level of expertise, or my spelling for that matter, may be a contension for
you, but it would have been better had you kept that opinion too yourself. There
are plenty of other reasons besides laziness or carelessness that people will
consistently error in language use, such as learning disorders, head injuries,
and/or vertigo.

On the contrary, I am aware of the definition of a periodogram, and I know what
the unnormalized periodogram in the data I presented looks like. Spec.pgram()
is actually normalized too something, because it's discrete integral is not
well above the SS amplitude of the signal it computed the periodogram for. In
other words, the powers are not in units of around 4,000, which the peak would
be if the units were merely the modulus squared of the Fourier coeficients of
the data I presented. Alas, the modulus squared of the Fourier coeficients IS
the TWO SIDED unnormalized periodogram, ranging from [-fc, fc] | fc=nyquist
critical frequency. The definition of the ONE SIDED periodogram IS the modulus
squared of the Fourier coeficients ranging over [0, fc], but since the function
is even, data points in (0, fc) non-inclusive, need to be multiplied by 2. Thus
is according too the definition given by Press, et al (1988, 1992, 2002, c.f.
cp 12  13). I'm assuming that R returns an FFT in the same layout as Press, et
al describe.

Press, et al. are also very clear about the existence of far too many ways of
normalizing the periodogram too document, which they stated before delving into
particularly how they normalized to the mean squared amplitude of the signal
that the periodogram was computed from. In the page before, and perhaps this is
where some of the confusion arises from, they document the calculations for MS
and SS amplitudes and time integral squared amplitude of the signal in the
time domain, not the frequency domain. The page after that, their example
only shows how to normalize a periodogram so its sum is equal too the MS
amplitude. In short, but starting from SS amplitude:

a). sum(a[index=(1:N) or t=(0:N-1)]^2) = SS amplitude calculated in time domain

b). 1/N * sum(Mod(fft[-fc:fc])^2) = two sided periodogram that sums too the SS
amplitude

c). Same as b but over the range [0, fc], and (0, fc) multiplied by 2 is the one
sided periodogram, also sums too the SS amplitude

For MS amplitude, the procedures are identical, only the time domain is divided
by N, and the frequency domain figures are divided by N^2 instead of N.
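
As a quick, hedged numeric check of relations a) and b) above (an editorial
addition, not part of the original post), Parseval's relation in R terms:

set.seed(42)
N <- 64
x <- rnorm(N)
ss.time <- sum(x^2)                # a) SS amplitude in the time domain
ss.freq <- sum(Mod(fft(x))^2) / N  # b) two-sided unnormalized periodogram, summed
all.equal(ss.time, ss.freq)        # TRUE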

When the periodogram is in power per unit time, as in the above, so that the
power is interpretable at N/2+1 independent frequencies, it is a normalized
periodogram. spec.pgram() IS normalized, I just do not know what it's
normalized too because I can not seem to get spec.pgram to stop tapering (at
which point the normalization should be dead on, not just close).

By the way, normalized does not automatically mean anything unless to what
is stated. I could normalize something arbitrarily to the number of tics on my
dogs back side, and still call it normed, or erroneously refer too it as
unnormed. If normalized is suposed to mean something specific, then I am
confident that more than 90% of undergraduates are not familiar with what the
term should mean. Stats and coding and using programs are a human endeavor.
This human seems to have made meaning out of terms differently than what those
who wrote the documentation seem to have intended. Only, I do not know where
the documentation or my understanding may have been missled (R docs, Numerical
Recipes, or any other source I looked at since I started).

Cheers,
KeithC.

First, please look up `too' in your dictionary.

Second, please study the references on the help page, which give the
details.  That is what references are for!  The references will also
answer your question about the reference distribution.

The help page does not say it is `normalized' at all: it says it computes
the peridogram, and you seem unaware of the definitions of the latter (and
beware, there are more than one).

On Tue, 24 Jan 2006, Keith Chamberlain wrote:

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-24 Thread Spencer Graves
Dear Mr. Chamberlain:

  You asked for free consulting, and as near as I can tell, you got 
pretty good advice.  Now you complain that you don't like the packaging. 
  If you can't stand the heat, get out of the kitchen.

  Professor Brian Ripley has an international reputation based on solid 
contributions to human knowledge over many years.  He is an expert in 
statistical science, not diplomacy.  Professor Ripley has been 
incredibly generous in donating substantial portions of his time for 
many years both to help make R what it is today and to answering 
questions on this listserve.  I think he deserves a great deal of 
respect for not only the time he has devoted to this but to how much he 
has achieved with that time.

  What would you like him to do as a result of your email?  Retire? 
Stop contributing to this listserve and to the R project more generally? 
  I sincerely hope he does not consider such.  It would be a great loss 
to humanity if he did.

  Mr. Chamberlain, if English (or as Prof. Ripley might say, 
American) is your mother tongue, then your deplorable lack of skill in 
its use raises serious questions about the standard of academic 
excellence at the University of Colorado, which I had previously thought 
was a great university and the finest Colorado had to offer.  Of course, 
if English is a second language for you, then I would not complain. 
Rather, I would be humbled and honored that you chose to meet the rest 
of the world in my native tongue.  Another question:  The web lists you 
as a senior in psychology.  Have you learned anything in your study of 
psychology?  I would think that psychology students should meet a much 
higher standard for social skills and communications than you have 
displayed today.  Would you like me to forward your correspondence to, 
say, the editor of the Flatiron News there in Boulder or Prof. W. Edward 
Craighead, the chair of the Psychology Dept., asking if a degree from 
the once-great University of Colorado is supposed to imply that the 
degree holder meets any standard for academic excellence in comportment 
and the use of language?

  Sincerely,
  Spencer Graves

[EMAIL PROTECTED] wrote:

 Dear Prof Ripley,
 
 First of all, unless you are an english professor, then I do not think you 
 have
 any business policing language. I'm still very much a student, both in R, and
 regarding signal analysis. My competence on the subject as compared too your
 own level of expertise, or my spelling for that matter, may be a contension 
 for
 you, but it would have been better had you kept that opinion too yourself. 
 There
 are plenty of other reasons besides laziness or carelessness that people will
 consistently error in language use, such as learning disorders, head injuries,
 and/or vertigo.
 
 On the contrary, I am aware of the definition of a periodogram, and I know 
 what
 the unnormalized periodogram in the data I presented looks like. Spec.pgram()
 is actually normalized too something, because it's discrete integral is not
 well above the SS amplitude of the signal it computed the periodogram for. In
 other words, the powers are not in units of around 4,000, which the peak would
 be if the units were merely the modulus squared of the Fourier coeficients of
 the data I presented. Alas, the modulus squared of the Fourier coeficients IS
 the TWO SIDED unnormalized periodogram, ranging from [-fc, fc] | fc=nyquist
 critical frequency. The definition of the ONE SIDED periodogram IS the modulus
 squared of the Fourier coeficients ranging over [0, fc], but since the 
 function
 is even, data points in (0, fc) non-inclusive, need to be multiplied by 2. 
 Thus
 is according too the definition given by Press, et al (1988, 1992, 2002, 
 c.f.
 cp 12  13). I'm assuming that R returns an FFT in the same layout as Press, 
 et
 al describe.
 
 Press, et al. are also very clear about the existence of far too many ways of
 normalizing the periodogram too document, which they stated before delving 
 into
 particularly how they normalized to the mean squared amplitude of the signal
 that the periodogram was computed from. In the page before, and perhaps this 
 is
 where some of the confusion arises from, they document the calculations for MS
 and SS amplitudes and time integral squared amplitude of the signal in the
 time domain, not the frequency domain. The page after that, their example
 only shows how to normalize a periodogram so its sum is equal too the MS
 amplitude. In short, but starting from SS amplitude:
 
 a). sum(a[index=(1:N) or t=(0:N-1)]^2) = SS amplitude calculated in time 
 domain
 
 b). 1/N * sum(Mod(fft[-fc:fc])^2) = two sided periodogram that sums too the SS
 amplitude
 
 c). Same as b but over the range [0, fc], and (0, fc) multiplied by 2 is the 
 one
 sided periodogram, also sums too the SS amplitude
 
 For MS amplitude, the procedures are identical, only the time domain is 
 divided
 by N, and 

Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-24 Thread François Pinard
[EMAIL PROTECTED], addressing to Brian Ripley]

First of all, unless you are an english professor, then I do not think
you have any business policing language.

We all make mistakes (in English or otherwise).  I'm very grateful that 
people forgive my own errors, and I try to be tolerant to others.  (Yet, 
it happens that people lacking good will ask for stronger reactions.)

This is the business of everybody, really, building a better community 
in every possible aspect, and the means for this go through interaction 
and collaboration.  Let's all be humble enough to ponder the criticism 
of others, improve ourselves, and so increase the value of our share.

-- 
François Pinard   http://pinard.progiciels-bpi.ca

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 24

2006-01-24 Thread Gabor Grothendieck
It's not really you.  It's a fact of life that this list is inhabited by
some rather rude participants but everyone puts up with
them in the hope that they do have some useful remarks.
This has been discussed repeatedly on the list and there
is even a group of thought that feels it is a justifiable way
to keep the list volume under control.

On 1/24/06, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Dear Prof Ripley,

 First of all, unless you are an english professor, then I do not think you 
 have
 any business policing language. I'm still very much a student, both in R, and
 regarding signal analysis. My competence on the subject as compared too your
 own level of expertise, or my spelling for that matter, may be a contension 
 for
 you, but it would have been better had you kept that opinion too yourself. 
 There
 are plenty of other reasons besides laziness or carelessness that people will
 consistently error in language use, such as learning disorders, head injuries,
 and/or vertigo.

 On the contrary, I am aware of the definition of a periodogram, and I know 
 what
 the unnormalized periodogram in the data I presented looks like. Spec.pgram()
 is actually normalized too something, because it's discrete integral is not
 well above the SS amplitude of the signal it computed the periodogram for. In
 other words, the powers are not in units of around 4,000, which the peak would
 be if the units were merely the modulus squared of the Fourier coeficients of
 the data I presented. Alas, the modulus squared of the Fourier coeficients IS
 the TWO SIDED unnormalized periodogram, ranging from [-fc, fc] | fc=nyquist
 critical frequency. The definition of the ONE SIDED periodogram IS the modulus
 squared of the Fourier coeficients ranging over [0, fc], but since the 
 function
 is even, data points in (0, fc) non-inclusive, need to be multiplied by 2. 
 Thus
 is according too the definition given by Press, et al (1988, 1992, 2002, 
 c.f.
 cp 12  13). I'm assuming that R returns an FFT in the same layout as Press, 
 et
 al describe.

 Press, et al. are also very clear about the existence of far too many ways of
 normalizing the periodogram too document, which they stated before delving 
 into
 particularly how they normalized to the mean squared amplitude of the signal
 that the periodogram was computed from. In the page before, and perhaps this 
 is
 where some of the confusion arises from, they document the calculations for MS
 and SS amplitudes and time integral squared amplitude of the signal in the
 time domain, not the frequency domain. The page after that, their example
 only shows how to normalize a periodogram so its sum is equal too the MS
 amplitude. In short, but starting from SS amplitude:

 a). sum(a[index=(1:N) or t=(0:N-1)]^2) = SS amplitude calculated in time 
 domain

 b). 1/N * sum(Mod(fft[-fc:fc])^2) = two sided periodogram that sums too the SS
 amplitude

 c). Same as b but over the range [0, fc], and (0, fc) multiplied by 2 is the 
 one
 sided periodogram, also sums too the SS amplitude

 For MS amplitude, the procedures are identical, only the time domain is 
 divided
 by N, and the frequency domain figures are divided by N^2 instead of N.

 When the periodogram is in power per unit time, as in the above, so that the
 power is interpretable at N/2+1 independent frequencies, it is a normalized
 periodogram. spec.pgram() IS normalized, I just do not know what it's
 normalized too because I can not seem to get spec.pgram to stop tapering (at
 which point the normalization should be dead on, not just close).

 By the way, normalized does not automatically mean anything unless to what
 is stated. I could normalize something arbitrarily to the number of tics on my
 dogs back side, and still call it normed, or erroneously refer too it as
 unnormed. If normalized is suposed to mean something specific, then I am
 confident that more than 90% of undergraduates are not familiar with what the
 term should mean. Stats and coding and using programs are a human endeavor.
 This human seems to have made meaning out of terms differently than what those
 who wrote the documentation seem to have intended. Only, I do not know where
 the documentation or my understanding may have been missled (R docs, Numerical
 Recipes, or any other source I looked at since I started).

 Cheers,
 KeithC.

 First, please look up `too' in your dictionary.

 Second, please study the references on the help page, which give the
 details.  That is what references are for!  The references will also
 answer your question about the reference distribution.

 The help page does not say it is `normalized' at all: it says it computes
 the peridogram, and you seem unaware of the definitions of the latter (and
 beware, there are more than one).

 On Tue, 24 Jan 2006, Keith Chamberlain wrote:

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the 

Re: [R] R-help Digest, Vol 35, Issue 23

2006-01-23 Thread Dr. Herwig Meschke
 summary.aov(aovRes, split = list(interval = list("i1 vs i2" = 1, "i2 vs
 i3" = 2, "i3 vs i4" = 3, "i4 vs i5" = 4, "i5 vs i6" = 5)))
 
try
class(aovRes) # -> "aovlist"!
summary.aovlist(aovRes, split = ...)
or simply
summary(aovRes, split = ...)

Hoping this helps,
Herwig

-- 
Dr. Herwig Meschke
Wissenschaftliche Beratung
Hagsbucher Weg 27
D-89150 Laichingen

phone +49 7333 210 417 / fax +49 7333 210 418
email [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 14

2006-01-16 Thread Achim Zeileis
On Sun, 15 Jan 2006, Werner Wernersen wrote:

 Dear all,

 Is anybody aware of a tutorial, introduction, overview
 or alike  for cluster
 analysis with R? I have been searching for something
 like that but it seems
 there are only a few rather specialized articles
 around.

As an overview (rather than an introduction or tutorial), the Cluster task
view might be helpful to you:
  http://CRAN.R-project.org/src/contrib/Views/Cluster.html
Z

 I would very much appreciate any hint.

 Thanks a million,
Werner







 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 14

2006-01-14 Thread Werner Wernersen
Dear all,

Is anybody aware of a tutorial, introduction, overview
or alike  for cluster 
analysis with R? I have been searching for something
like that but it seems 
there are only a few rather specialized articles
around.

I would very much appreciate any hint.

Thanks a million,
   Werner







__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 14

2006-01-14 Thread Prof Brian Ripley
On Sun, 15 Jan 2006, Werner Wernersen wrote:

 Is anybody aware of a tutorial, introduction, overview
 or alike  for cluster
 analysis with R? I have been searching for something
 like that but it seems
 there are only a few rather specialized articles
 around.

Chapter 11 of MASS (the book discussed in the FAQ).

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 7

2006-01-08 Thread Evgeniy Kachalin
Uwe Ligges wrote:
 Evgeniy Kachalin wrote:
 
 Hello, dear participants!

 Could you tip me, is there any simple and nice way to build 
 scatter-plot for three different types of data (, and o and * - signs, 
 for example) with legend.

 Now i can guess only that way:

 plot(x~y,data=subset(mydata,factor1=='1'), pch='.',col='blue')
 points(x~y,data=subset(mydata,factor1=='2'), pch='*',col='green')
 points( etc

 What is the simple and nice way?
 Thank you very much for your kindness and help.

 
 
 Example:
 
 
 with(iris,
   plot(Sepal.Length, Sepal.Width, pch = as.integer(Species)))
 with(iris,
   legend(7, 4.4, legend = unique(as.character(Species)),
 pch = unique(as.integer(Species))))
 

Uwe, sorry for my stupid question. You mean that when pch=factor , plot 
can recycle the factor and use it for subscripts or marks.

Then pch=as.integer(Species) results in c(1,2,3) for 3 factor levels. 
And I need symbols 15,16,17 and colors red, blue, green.

So then I do:
iris$Species -> spec.symb
iris$Species -> spec.col
levels(spec.symb) <- c(15,16,17)
levels(spec.col) <- c('red','green','blue')

That's the only way?
More of that!!! 'Plot' does not like factors in 'pch'. So it must be so:
plot(x~y,data, pch=as.integer(as.character(spec.symb))).
That's totally crazy...

-- 
Evgeniy

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] R-help Digest, Vol 35, Issue 7

2006-01-08 Thread Kyosti H Kurikka
Hi!

Just use your factors for indexing c(15,16,17) and
c("red","green","blue"). So, with the iris data:

with(iris, plot(Sepal.Length, Sepal.Width,
   pch = c(15,16,17)[as.integer(Species)],
   col = c("red","green","blue")[as.integer(Species)] ))

Best regards,
Kyosti Kurikka


  Evgeniy Kachalin wrote:
 
  Hello, dear participants!
 
  Could you tip me, is there any simple and nice way to build
  scatter-plot for three different types of data (, and o and * - signs,
  for example) with legend.
 
  Now i can guess only that way:
 
  plot(x~y,data=subset(mydata,factor1=='1'), pch='.',col='blue')
  points(x~y,data=subset(mydata,factor1=='2'), pch='*',col='green')
  points( etc
 
  What is the simple and nice way?
  Thank you very much for your kindness and help.
 
 
 
  Example:
 
 
  with(iris,
plot(Sepal.Length, Sepal.Width, pch = as.integer(Species)))
  with(iris,
legend(7, 4.4, legend = unique(as.character(Species)),
  pch = unique(as.integer(Species))))
 

 Uwe, sorry for my stupid question. You mean that when pch=factor , plot
 can recycle the factor and use it for subscripts or marks.

 Then pch=as.integer(Species) results in c(1,2,3) for 3 factor levels.
 And I need symbols 15,16,17 and colors red, blue, green.

 So then I do:
  iris$Species -> spec.symb
  iris$Species -> spec.col
  levels(spec.symb) <- c(15,16,17)
  levels(spec.col) <- c('red','green','blue')

 That's the only way?
 More of that!!! 'Plot' does not like factors in 'pch'. So it must be so:
 plot(x~y,data, pch=as.integer(as.character(spec.symb))).
 That's totally crazy...

 --
 Evgeniy

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 7

2006-01-08 Thread Uwe Ligges
Evgeniy Kachalin wrote:

 Uwe Ligges wrote:
 
 Evgeniy Kachalin wrote:

 Hello, dear participants!

 Could you tip me, is there any simple and nice way to build 
 scatter-plot for three different types of data (, and o and * - 
 signs, for example) with legend.

 Now i can guess only that way:

 plot(x~y,data=subset(mydata,factor1=='1'), pch='.',col='blue')
 points(x~y,data=subset(mydata,factor1=='2'), pch='*',col='green')
 points( etc

 What is the simple and nice way?
 Thank you very much for your kindness and help.



 Example:


 with(iris,
   plot(Sepal.Length, Sepal.Width, pch = as.integer(Species)))
 with(iris,
   legend(7, 4.4, legend = unique(as.character(Species)),
  pch = unique(as.integer(Species))))

 
 Uwe, sorry for my stupid question. You mean that when pch=factor , plot 
 can recycle the factor and use it for subscripts or marks.

Yes, it can recycle, but in the example above it does not recycle but 
takes the whole Species vector.


 Then pch=as.integer(Species) results in c(1,2,3) for 3 factor levels. 
 And I need symbols 15,16,17 and colors red, blue, green.

What about adding 14 as in as.integer(Species)+14, or 1 for the colors, 
respectively?
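
A hedged one-liner spelling that suggestion out (shift the symbol codes by 14
and index a colour vector by the factor):

with(iris,
     plot(Sepal.Length, Sepal.Width,
          pch = as.integer(Species) + 14,
          col = c("red", "green", "blue")[as.integer(Species)]))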



 So then I do:
  iris$Species -> spec.symb
  iris$Species -> spec.col
  levels(spec.symb) <- c(15,16,17)
  levels(spec.col) <- c('red','green','blue')
 
 That's the only way?

This is one way of many.


 More of that!!! 'Plot' does not like factors in 'pch'. So it must be so:
 plot(x~y,data, pch=as.integer(as.character(spec.symb))).
 That's totally crazy...

You can set up your own pch variable of course, if you don't like this 
fast and easy way.

Uwe Ligges

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] R-help Digest, Vol 35, Issue 7

2006-01-07 Thread Evgeniy Kachalin
Hello, dear participants!

Could you give me a tip: is there any simple and nice way to build a 
scatter-plot for three different types of data ('.', 'o' and '*' signs, 
for example) with a legend?

Now I can only guess this way:

plot(x~y,data=subset(mydata,factor1=='1'), pch='.',col='blue')
points(x~y,data=subset(mydata,factor1=='2'), pch='*',col='green')
points( etc

What is the simple and nice way?
Thank you very much for your kindness and help.

-- 
Evgeniy Kachalin

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 35, Issue 7

2006-01-07 Thread Uwe Ligges
Evgeniy Kachalin wrote:

 Hello, dear participants!
 
 Could you tip me, is there any simple and nice way to build scatter-plot 
 for three different types of data (, and o and * - signs, for example) 
 with legend.
 
 Now i can guess only that way:
 
 plot(x~y,data=subset(mydata,factor1=='1'), pch='.',col='blue')
 points(x~y,data=subset(mydata,factor1=='2'), pch='*',col='green')
 points( etc
 
 What is the simple and nice way?
 Thank you very much for your kindness and help.
 


Example:


with(iris,
   plot(Sepal.Length, Sepal.Width, pch = as.integer(Species)))
with(iris,
   legend(7, 4.4, legend = unique(as.character(Species)),
 pch = unique(as.integer(Species))))


Uwe Ligges

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 34, Issue 14

2005-12-11 Thread Dominik Schaub
Good day,

I am on military service (Militär-WK) from 12 to 23 December 2005,
so I will only be able to answer mail with some delay.

For urgent matters:
During these two weeks I can be reached by mobile phone (preferably by SMS)
at 079 438 27 68.

Kind regards,
Dominik Schaub

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 33, Issue 27

2005-11-27 Thread A.J. Rossini
 From: Duncan Murdoch [EMAIL PROTECTED]

 I'd recommend using the RWinEdt package instead for a different way to
 integrate winedit with R.

winedit and winedt are two different editors, last I checked.

best,
-tony

[EMAIL PROTECTED]
Muttenz, Switzerland.
Commit early,commit often, and commit in a repository from which we can easily
roll-back your mistakes (AJR, 4Jan05).

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 32, Issue 26

2005-10-26 Thread Alan Cobo-Lewis
r-help@stat.math.ethz.ch on Wednesday, October 26, 2005 at 6:00 AM -0500 wrote:

Ronaldo,
Try Harold's suggestion. The df still won't agree, because lmer (at least in 
its current version) just puts an upper bound on the df. But that should be OK, 
because all those t tests are approximations anyway, and you can get better 
confidence intervals (credible intervals, whatever) by using the mcmcsamp() 
function that works with lmer().
alan
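
A hedged, historical sketch of the mcmcsamp() step mentioned above (mcmcsamp()
existed in lme4 versions of that era but was later removed, so this
illustrates the idea rather than the current lme4 API; 'mydata' is a
placeholder for the poster's data):

library(lme4)
fm   <- lmer(yield ~ irrigation * density * fertilizer +
             (1 | fertilizer:density) + (1 | density), data = mydata)
samp <- mcmcsamp(fm, n = 1000)   # posterior samples of the model parameters
## e.g. 95% intervals for the sampled parameters, in place of the t tests
apply(as.matrix(samp), 2, quantile, probs = c(0.025, 0.975))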


Doran, Harold [EMAIL PROTECTED] responded:


There is an issue with implicit nesting in lmer. In your lme() model you nest
block/irrigation/density/fertilizer. In lmer you need to do something like
(I didn't include all of your variables, but I think this makes the point)

lmer(yield~irrigation*density*fertilizer+(1|fertilizer:density)+(1|density), 
data)

Which notes that fertilizer is nested in density. 

Try this and then compare the results. 

Ronaldo Reis-Jr. [EMAIL PROTECTED], wrote:

I made the correct model with aov and lme to compare with lmer.

But I can't make a correct model in lmer. Note that the aov and lme results are
similar, but very different from lmer. In aov and lme the correct DF is used
for each variable; in lmer does it use the same DF for all? Denom=54.


--
Alan B. Cobo-Lewis, Ph.D.   (207) 581-3840 tel
Department of Psychology(207) 581-6128 fax
University of Maine
Orono, ME 04469-5742[EMAIL PROTECTED]

http://www.umaine.edu/visualperception

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 32, Issue 26

2005-10-26 Thread Doran, Harold
In addition to the response below, Doug Bates has talked about this on
this list previously. I did

 RSiteSearch('bates degrees of freedom lmer')

The first one that came up has Doug's response to this question as well

Harold
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan Cobo-Lewis
Sent: Wednesday, October 26, 2005 8:53 AM
To: r-help@stat.math.ethz.ch
Subject: Re: [R] R-help Digest, Vol 32, Issue 26

r-help@stat.math.ethz.ch on Wednesday, October 26, 2005 at 6:00 AM -0500
wrote:

Ronaldo,
Try Harold's suggestion. The df still won't agree, because lmer (at
least in its current version) just puts an upper bound on the df. But
that should be OK, because all those t tests are approximations anyways,
and you can get better confidence intervals (credible intervals,
whatever) by using the mcmcsamp() function that works with lmer() alan


Doran, Harold [EMAIL PROTECTED] responded:


There is an issue with implicit nesting in lmer. In your lme() model 
you nest block/irrigation/density/fertilizer. In lmer you need to do 
something like (I didn't include all of your variables, but I think 
this makes the point)

lmer(yield~irrigation*density*fertilizer+(1|fertilizer:density)+(1|den
sity), data)

Which notes that fertilizer is nested in density. 

Try this and then compare the results. 

Ronaldo Reis-Jr. [EMAIL PROTECTED], wrote:

I make the correct model with aov, lme do compare with lmer.

But I cant make a correct model in lmer. Look that the aov and lme 
results are similars, but very different from lmer. In aov and lme is 
used the correct DF for each variable, in lmer it use a same DF for
all? Denom=54.


--
Alan B. Cobo-Lewis, Ph.D.   (207) 581-3840 tel
Department of Psychology(207) 581-6128 fax
University of Maine
Orono, ME 04469-5742[EMAIL PROTECTED]

http://www.umaine.edu/visualperception

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 31, Issue 30

2005-09-30 Thread John Maindonald
With lme4, use of mcmcsamp can be insightful.  (Douglas Bates
drew my attention to this function in a private exchange of emails.)
The distributions of random effects are simulated on a log scale,
where the distributions are much closer to symmetry than on the
scale of the random effects themselves.  As far as I can see, this is
a straightforward use of MCMC to estimate model parameters; it is
not clear to me whether the results from the lmer() fit are used.
John Maindonald.


On 30 Sep 2005, at 8:00 PM, [EMAIL PROTECTED] wrote:

 From: Roel de Jong [EMAIL PROTECTED]
 Date: 29 September 2005 11:19:38 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] standard error of variances and covariances of the  
 randomeffects with LME


 Hello,

 how do I obtain standard errors of variances and covariances of the  
 random effects with LME comparable to those of for example MlWin? I  
 know you shouldn't use them because the distribution of the  
 estimator isn't symmetric blablabla, but I need a measure of the  
 variance of those estimates for pooling my multiple imputation  
 results.

 Regards,
   Roel.

John Maindonald email: [EMAIL PROTECTED]
phone : +61 2 (6125)3473fax  : +61 2(6125)5549
Centre for Bioinformation Science, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 31, Issue 9

2005-09-10 Thread Wuming Gong
?summary.lm and check the Value section.

Wuming

On 9/10/05, Ping Yao [EMAIL PROTECTED] wrote:
 Hi:
 I use lm (linear model) to analyze 47 variables , 8 responses
 So I use loop to finish it .
 I want the program to show the results that P-value is less than 0.05.
 How can I cite the P-valus from lm result ?
 
 Ping
 
 The code:
 
 
 # using LM to model general fatigue
 for (j in 48:52) {
   for (i in 3:46) {
     gen.fat <- y_x[, j]
     gen.fat <- as.numeric(gen.fat)

     snp_marker <- y_x[, i]

     x <- colnames(y_x)

     # snp_marker <- as.matrix(snp_marker)
     # mode(snp_marker)
     cat("phenotype is = ", x[j], "\n")
     cat("snp marker is = ", x[i], "\n")

     zz <- summary(lm.D9 <- lm(gen.fat ~ snp_marker))

     print(zz)
   }
 }
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 31, Issue 9

2005-09-10 Thread Wuming Gong
Hi Ping, 

You can use zz$coefficients[, 4] to get the p-values for each estimated
coefficient in your context.

Wuming 
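
For the overall model p-value asked about in the message quoted below, a
hedged sketch (reusing the objects from the posted loop): summary.lm stores
the F statistic but not its p-value, so compute it with pf():

zz <- summary(lm.D9 <- lm(gen.fat ~ snp_marker))
pvals.coef <- zz$coefficients[, 4]                  # per-coefficient p-values
f          <- zz$fstatistic                         # value, numdf, dendf
pval.model <- pf(f[1], f[2], f[3], lower.tail = FALSE)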

On 9/11/05, Ping Yao [EMAIL PROTECTED] wrote:
 Wuming:
 Thanks for your help.
 I use the function:
   call("fstatistic", zz)
   call("p-value", zz)

 I can get each variable's P-value, but I can't get the P-value of the model.
 How can I do that?
   
  one of the results is the following:
  
  Call:
  lm(formula = gen.fat ~ snp_marker)
  
  Residuals:
   Min   1Q   Median   3Q  Max 
  -10.5455  -3.0481   0.4545   3.9519   6.9519 
  
  Coefficients:
                      Estimate Std. Error t value Pr(>|t|)
  (Intercept)          13.0481     0.4518  28.881   <2e-16 ***
  snp_markerallele2     0.5107     0.9102   0.561   0.5753
  snp_markerBoth        1.4974     0.6927   2.162   0.0318 *
  ---
  Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
  
  Residual standard error: 4.607 on 212 degrees of freedom
  Multiple R-Squared: 0.02166,Adjusted R-squared: 0.01244 
  F-statistic: 2.347 on 2 and 212 DF,  p-value: 0.0981 
  
  I use the code:

  zz <- summary(lm.D9 <- lm(gen.fat ~ snp_marker))
  coe <- coef(lm.D9)   # the bare coefficients
  if (coe[2] <= .05 || coe[3] <= .05 || coe[4] <= .05 || coe[5] <= .05) {
    cat("phenotype is  = ", x[j], "\n")
    cat("snp marker is  = ", x[i], "\n")
    sign <- call("fstatistic", zz)
    call("p-value", zz)

    # print(coe)
    print(zz)
  }
  
  
  
  
  
 
 On 9/10/05, Wuming Gong [EMAIL PROTECTED] wrote:
  ?summary.lm and check the Value section.
  
  Wuming
  
  On 9/10/05, Ping Yao [EMAIL PROTECTED] wrote:
   Hi:
   I use lm (linear model) to analyze 47 variables , 8 responses 
   So I use loop to finish it .
   I want the program to show the results that P-value is less than 0.05.
   How can I cite the P-valus from lm result ?
  
   Ping
  
   The code:
   
  
    # using LM to model general fatigue
    for (j in 48:52) {
      for (i in 3:46) {
        gen.fat <- y_x[, j]
        gen.fat <- as.numeric(gen.fat)

        snp_marker <- y_x[, i]

        x <- colnames(y_x)

        # snp_marker <- as.matrix(snp_marker)
        # mode(snp_marker)
        cat("phenotype is = ", x[j], "\n")
        cat("snp marker is = ", x[i], "\n")

        zz <- summary(lm.D9 <- lm(gen.fat ~ snp_marker))

        print(zz)
      }
    }
  
  
   __ 
   R-help@stat.math.ethz.ch mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
  
  
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 31, Issue 9

2005-09-09 Thread Ping Yao
Hi:
I use lm (linear model) to analyze 47 variables and 8 responses,
so I use a loop to do it.
I want the program to show only the results where the P-value is less than 0.05.
How can I extract the P-values from the lm result?

Ping

The code:


# using LM to model general fatigue
for (j in 48:52) {
  for (i in 3:46) {
    gen.fat <- y_x[, j]
    gen.fat <- as.numeric(gen.fat)

    snp_marker <- y_x[, i]

    x <- colnames(y_x)

    # snp_marker <- as.matrix(snp_marker)
    # mode(snp_marker)
    cat("phenotype is = ", x[j], "\n")
    cat("snp marker is = ", x[i], "\n")

    zz <- summary(lm.D9 <- lm(gen.fat ~ snp_marker))

    print(zz)
  }
}


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 30, Issue 26

2005-08-26 Thread Jean-Marc Ottorini
Dear R helpers,

   For me (i.e. R 2.1.1 on Mac OS X), using trellis.device("postscript",
onefile = FALSE, etc.) with the lattice library within an R
function works fine to obtain the desired graph as an EPS file,
provided that:

1) the command dev.off() is not included in this function,

2) and it is issued at the command level after the function has
been exited.

I would like to know if there is a way to close the EPS file within the
function itself, sparing the user from issuing the closing command (I
already tried trellis.device(), and trellis.device("null"), without any
success).
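
One hedged way to do this (an editorial sketch, not a reply from the digest)
is to register the closing call with on.exit() inside the function, so the
EPS file is closed when the function returns; the file name and size
arguments below are assumptions:

library(lattice)
make_eps <- function(file = "plot.eps") {
  trellis.device("postscript", file = file, onefile = FALSE,
                 horizontal = FALSE, paper = "special",
                 width = 6, height = 4)
  on.exit(dev.off())                 # closes the EPS device on exit
  print(xyplot(Sepal.Width ~ Sepal.Length, data = iris))
}
make_eps("iris.eps")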

Regards,

J.-M.

  
Jean-Marc Ottorini   LERFoB, UMR INRA-ENGREF 1092
  email  [EMAIL PROTECTED]  INRA - Centre de Nancy
  voice  +33-0383-394046F54280 - Champenoux
  fax+33-0383-394034 France

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 30, Issue 6

2005-08-08 Thread David Duffy
On Fri, 5 Aug 2005 Julia Reid  wrote:

 Subject: [R] GAP pointer

 I am trying to do a simple segregation analysis using the GAP package. I
 have the documentation for pointer but I desperately need an example so
 that I can see how to format the datfile and the jobfile. For each
 individual, I have FamilyId, SubjectId, FatherId, MotherId, and
 AffectedStatus (0/1). I would like to obtain the likelihood ratio
 statistic for transmission.
 I would greatly appreciate any help on this subject.
 Best to all,
 Julia Reid

I wouldn't use Pointer myself (there are lots of more recent packages*),
but look at the examples in
http://cedar.genetics.soton.ac.uk/pub/PROGRAMS/pointer/pointer.tar.Z
and the manual, which is in the book:

Morton N.E., Rao D.C  Lalouel J-M (1983).
Methods in Genetic Epidemiology. Karger
PO Box, CH-4009 Basel (Switzerland).
ISBN 3-8055-3668-2

which you will find in many academic libraries.

David Duffy.


* Don't you use Pap or JPap at Myriad?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 28, Issue 28

2005-06-28 Thread A. Mani
On Tuesday 28 June 2005 15:30, [EMAIL PROTECTED] wrote:
Re :   37. Re: A. Mani : colours in Silhouette (Mulholland, Tom)
   
 Message: 37
 Date: Tue, 28 Jun 2005 09:08:24 +0800
 From: Mulholland, Tom [EMAIL PROTECTED]
 Subject: Re: [R] A. Mani : colours in Silhouette
 To: [EMAIL PROTECTED], r-help@stat.math.ethz.ch
 Message-ID:
  [EMAIL PROTECTED]
 Content-Type: text/plain; charset=iso-8859-1

 It's not so much a problem, as not working the way you expected.
 cluster:::plot.partition is used to do the plotting. If you look at the
 code for this you can see the difficulty in putting every possible
 permutation into the code. If for example you want the silhouette plot to
be red, using col = "red" is not intuitive as the cluster plot (which comes
 up first) has more than one colour. If you have a look at methods(plot)
 (assuming that you have loaded the cluster package) you will see that there
 is a specific piece of code in the form of plot.silhouette. It has an
 asterisk next to it so you need to use cluster:::plot.silhouette to see the
 code. It has what you need.

 args(cluster:::plot.silhouette)

 function (x, nmax.lab = 40, max.strlen = 5, main = NULL, sub = NULL,
 xlab = expression("Silhouette width " * s[i]), col = "gray",
 do.col.sort = length(col) > 1, border = 0, cex.names = par("cex.axis"),
 do.n.k = TRUE, do.clus.stat = TRUE, ...)


  data(ruspini)
   pr4 <- pam(ruspini, 4)
   si <- silhouette(pr4)
   plot(si, col = "red")

I tried that before with many more options and got a blank image. It must have 
been due to the options.
 The issue is that whenever code is written there is always a choice as to
 what functionality is put in place. Just because something can be done,
 does not mean it will or in some cases should be done. In this case the
help for plot.partition notes that "For more flexibility, use
'plot(silhouette(x), ...)', see 'plot.silhouette'."

 Tom

 Thanks for that. I found out something I will find useful in the future.

  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] Behalf Of A. Mani
  Sent: Tuesday, 28 June 2005 4:30 AM
  To: r-help@stat.math.ethz.ch
  Subject: [R] A. Mani : colours in Silhouette
 
 
  Hello,
 In cluster analysis with cluster, how does one colour
  the silhouette
  plots ? For example in using pam. There seems to be some
  problem there.
  Everything else can be coloured.
 
  Thanks,
 

 A. Mani
 Member, Cal. Math. Soc

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 28, Issue 11

2005-06-11 Thread dwfu
Dear all,
I'm new to R and to (geo)statistics. I have a problem with solving
my homework questions. We are working with variograms and trying to
write down the basic equations for different models (spherical,
exponential, Gaussian). I tried to use the 'gstat' and 'geoR' packages
to solve the questions, but as I said I'm new to R and always
run into syntax errors (I can send some specific examples
later). If any of you has used these packages and could help me, I would be
very glad.
Best Wishes,
Emre Duran

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 28, Issue 11

2005-06-11 Thread Uwe Ligges
dwfu wrote:
 Dear all,
 I'm new using R and in (geo)statistics. I have a problem with solving
 my homework questions. We are working with variograms and trying to
 write down basic  equations for different models (spherical,
 exponential, Gaussian). I tried to use the 'gstat' and 'geoR' packages
 to solve the questions but as I said before I'm new in R and always
 encountered with some syntax errors (I can send some specific examples
 later). If one of you used this packages and could help me,  I will be
 very glad.

Please read the posting guide which tells you:
  - Use an informative subject line.
  - Basic statistics and classroom homework: R-help is not intended for 
these.
  - Provide small reproducible examples.

Uwe Ligges

 Best Wishes,
 Emre Duran
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R-help Digest

2005-05-06 Thread Sebastian Schoenherr
Hi folks,
I have to create my own time series. Is it possible to generate an ARIMA time
series where I can define the range of the values on the y axis (e.g. values
only between 0 and 1)?

Best regards
Sebastian

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest

2005-05-06 Thread Prof Brian Ripley
On Fri, 6 May 2005, Sebastian Schoenherr wrote:
Hi folks,
I have to create my own time series, Is it possible to generate ARIMA time
series, where i can define the range of the values in the y axis. (e.g: Values
only between 0 and 1)
No.
Take a look at the definition of an ARIMA process.  Suppose e.g. you have 
an AR(1) process.  Then if innovations are positive and the coefficient is 
positive the value can be arbitrarily large.  You can construct all sorts 
of similar counter-examples.

This isn't a real problem, is it?
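
A quick, hedged illustration of this point (arima.sim is the standard
simulator; the coefficient and seed below are arbitrary):

set.seed(1)
x <- arima.sim(model = list(ar = 0.9), n = 1000)
range(x)    # typically far outside (0, 1)
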
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R-help Digest, Vol 4, Issue 27 ( -Reply)

2003-06-28 Thread Peter Dalgaard BSA
Leo Wang-Kit Cheung [EMAIL PROTECTED] writes:

 Hi,
 
 I am out of town and will get back to you on the 13th of July.
 
 Leo
 
  [EMAIL PROTECTED] 06/27/03 00:32 
 
 Send R-help mailing list submissions to
   [EMAIL PROTECTED]
 
 To subscribe or unsubscribe via the World Wide Web, visit
   https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 or, via email, send a message with subject or body 'help' to
   [EMAIL PROTECTED]
 
 You can reach the person managing the list at
   [EMAIL PROTECTED]
 
 When replying, please edit your Subject line so it is more specific
 than Re: Contents of R-help digest...
 
 
 Today's Topics:
 
1. create help files ([EMAIL PROTECTED])

...and a full week of digested messages gets quoted back to the list.
Let's hope that not every single digest-subscriber does likewise when
he/she goes on holiday!

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help