Re: [R] customizing RGui/Rconsole

2009-08-24 Thread Dieter Menne



Philip A. Viton wrote:
 
 
 Is it possible to change the behavior of RGui/RConsole under MS 
 Windows so that, if you submit a bunch of commands from a script 
 window, when they've finished, focus is given to the Console, and not 
 (as now) to the script?
 
 

That's normally the job of the sending side, because RGui does not know when the
commands have finished. You might have a look at the source code of Tinn-R, which
offers the required feature as an option.

Dieter


-- 
View this message in context: 
http://www.nabble.com/customizing-RGui-Rconsole-tp25106128p2579.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generate random sample interval

2009-08-24 Thread Tammy Ma

Hi, R users,

I have a question about how to generate a random sample interval from a duration.

For example, within a time duration of 0-70 s, I want to generate a sample that
lasts 10 s.
The sample could be 0-10 s, 30-40 s, or 25-35 s, etc.
How could I do this in R?

Thanks a lot.

Tammy

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: When factor is better than other types, such as vector and frame?

2009-08-24 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 23.08.2009 05:00:11:

 Hi,
 
 It is easy to understand the types vector and frame.
 
 But I am wondering why the type factor is designed in R. What is the
 advantage of factor compare with other data types in R? Can somebody
 give an example in which case the type factor is much better than
 other data types?

Although your wording does not quite match the naming conventions
in R, a factor is sometimes preferable to character values.

consider e.g.

set.seed(111)
df <- data.frame(1:5, fac=sample(letters[1:2], 5, replace=T))
plot(df[,1], pch=as.numeric(df[,2]))
df[,2] <- as.character(df[,2])
plot(df[,1], pch=as.numeric(df[,2]))

Warning message:
In plot.xy(xy, type, ...) : NAs introduced by coercion

Another advantage is simple and straightforward manipulation with levels.

levels(df[,2]) <- c("yes", "no")
> df
  X1.5 fac
1    1  no
2    2  no
3    3 yes
4    4  no
5    5 yes

together with the easy option of ordering levels, and hence the plotting order
in boxplots and similar displays.

> factor(df$fac, levels=levels(df$fac))
[1] no  no  yes no  yes
Levels: yes no
> factor(df$fac, levels=levels(df$fac)[2:1])
[1] no  no  yes no  yes
Levels: no yes

You need to get used to some features which are sometimes surprising but
have a reason, such as levels persisting after subsetting.

> str(df[df$fac=="no",])
'data.frame':   3 obs. of  2 variables:
 $ X1.5: int  1 2 4
 $ fac : Factor w/ 2 levels "yes","no": 2 2 2
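
If the unused level is not wanted after subsetting, one simple idiom (an illustration, not part of the original message) is to re-apply factor():

sub <- df[df$fac == "no", ]
sub$fac <- factor(sub$fac)   # drops the unused "yes" level
str(sub$fac)                 # Factor w/ 1 level "no": 1 1 1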


Regards
Petr



 
 Regards,
 Peng
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Convert list to data frame while controlling column types

2009-08-24 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 23.08.2009 17:29:48:

 On 8/23/2009 9:58 AM, David Winsemius wrote:
 I still have problems with this statement. As I understand R, this
 should be impossible. I have looked at both your postings and neither of
 them clarifies the issues. How can you have blanks or spaces in an R
 numeric vector?
 
 
 Just because I search numeric columns doesn't mean that my regex matches
 them!  I posted some info on my data frame in an earlier email:
 
 str(final_dataf)
 'data.frame':   1127 obs. of  43 variables:
  $ block  : Factor w/ 1 level 2: 1 1 1 1 1 1 1 1 1 1 ...
  $ treatment  : Factor w/ 4 levels I,M,N,T: 1 1 1 1 1 1 ...
  $ transect   : Factor w/ 1 level 4: 1 1 1 1 1 1 1 1 1 1 ...
  $ tag: chr  NA 121AL 122AL 123AL ...
 ...
  $ h1 : num  NA NA NA NA NA NA NA NA NA NA ...
 ...
 
 You can see that I do indeed have some numeric columns.  And while I

Well, AFAICS you have a data frame with 3 columns which are factors and 1 
which is character. I do not see any numeric column. If you want to change 
block and transect to numeric you can use

df$block <- as.numeric(as.character(df$block))


 search them for spaces, I only do so because my dataset isn't so large
 as to require me to exclude them from the search.  If my dataset grows
 too big at some point, I will exclude numeric columns, and other columns
 which cannot hold blanks or spaces.
 
 To clarify further with an example:
 
 > df = data.frame(a=c(1,2,3,4,5), b=c("a","","c","d"," "))
 > df = as.data.frame(lapply(df, function(x){ is.na(x) <-
 + grep('^\\s*$',x); return(x) }), stringsAsFactors = FALSE)
 > df
   a    b
 1 1    a
 2 2 <NA>
 3 3    c
 4 4    d
 5 5 <NA>

which can also be done with

levels(df[,2])[1:2] <- NA

but maybe with less generality


 > str(df)
 'data.frame':   5 obs. of  2 variables:
  $ a: num  1 2 3 4 5
  $ b: Factor w/ 5 levels ""," ","a","c",..: 3 NA 4 5 NA
 
 And one final clarification: I left out as.data.frame in my previous
 solution.  So it now becomes:
 
 final_dataf = as.data.frame(lapply(final_dataf, function(x){ is.na(x) <-
     grep('^\\s*$',x); return(x) }), stringsAsFactors = FALSE)

Again, not too much of a clarification: in your first data frame the second
column is a factor with some levels you want to convert to NA, which can
be done by different approaches.

Your final_dataf case is the same as df.

Columns which should be numeric but are read as factor/character by
read.table most likely contain some values which strictly cannot be considered
numeric. You can see them quite often in Excel-like programs; some
examples are

1..2, o.5, 12.o5, stray spaces, "-", etc.

and you usually need to handle them by hand.
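
For illustration (made-up values, not from the thread), coercion makes such entries easy to locate before fixing them by hand:

x <- c("1.2", "1..2", "o.5", "12.o5", " ", "-", "3.4")
bad <- which(is.na(suppressWarnings(as.numeric(x))))   # indices that do not coerce
x[bad]   # "1..2" "o.5" "12.o5" " " "-"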

Regards
Petr

 
 Hope that clarifies things, and thanks for your help.
 
 Thanks,
 Allie
 
 
 On 8/23/2009 9:58 AM, David Winsemius wrote:
  
  On Aug 23, 2009, at 2:47 AM, Alexander Shenkin wrote:
  
  On 8/21/2009 3:04 PM, David Winsemius wrote:
 
  On Aug 21, 2009, at 3:41 PM, Alexander Shenkin wrote:
 
  Thanks everyone for their replies, both on- and off-list.  I should
  clarify, since I left out some important information.  My original
  dataframe has some numeric columns, which get changed to character 
by
  gsub when I replace spaces with NAs.
 
  If you used is.na()<- that would not happen to a true _numeric_ vector
  (but, of course, a numeric vector in a data.frame could not have spaces,
  so you are probably not using precise terminology).
 
  I do have true numeric columns, but I loop through my entire 
dataframe
  looking for blanks and spaces for convenience.
  
  I still have problems with this statement. As I understand R, this
  should be impossible. I have looked at both your postings and neither of
  them clarifies the issues. How can you have blanks or spaces in an R
  numeric vector?
  
  
 
  You would be well
  advised to include the actual code rather than applying loose
  terminology subject to your and our misinterpretation.
 
  I did include code in my previous email.  Perhaps you were looking 
for
  different parts.
 
 
  ?is.na
 
 
  I am guessing that you were using read.table() on the original data, 
in
  which case you should look at the colClasses parameter.
 
 
  yep - I use read.csv, and I do use colClasses.  But as I mentioned
  earlier, gsub converts those columns to characters.  Thanks for the tip
  about is.na()<-.  I'm now using the following, thus side-stepping the
  whole issue of controlling as.data.frame's column conversion:
 
  final_dataf = lapply(final_dataf, function(x){ is.na(x) <-
      grep('^\\s*$',x); return(x) })
  
  
  Good that you have a solution.
  
  David Winsemius, MD
  Heritage Laboratories
  West Hartford, CT
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generate random sample interval

2009-08-24 Thread Tammy Ma

 I need a simple algorithm to generate random ranges. 

 From: metal_lical...@live.com
 To: r-help@r-project.org
 Date: Mon, 24 Aug 2009 09:55:35 +0300
 Subject: [R] Generate random sample interval
 
 
 Hi, R users,
 
 I have a question about how to generate a random sample interval from a duration.

 For example, within a time duration of 0-70 s, I want to generate a sample that
 lasts 10 s.
 The sample could be 0-10 s, 30-40 s, or 25-35 s, etc.
 How could I do this in R?
 
 Thanks a lot.
 
 Tammy
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generate random sample interval

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 2:55 AM, Tammy Ma wrote:



Hi, R users,

I have a question about how to generate a random sample interval from a
duration.


For example, within a time duration of 0-70 s, I want to generate a
sample that lasts 10 s.

The sample could be 0-10 s, 30-40 s, or 25-35 s, etc.
How could I do this in R?


?runif

Wouldn't you just generate a runif variable for the starting time that  
was in the range of 0-60 and then add 10 to each such start time? If  
the problem is any more difficult, then you will need to be more  
precise.
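
A minimal sketch of that suggestion:

start <- runif(1, min = 0, max = 60)   # left endpoint anywhere in [0, 60]
c(start, start + 10)                   # a random 10 s window inside [0, 70]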


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to add 95% confidence interval as vertical lines to the x axis in a density plot

2009-08-24 Thread Mao Jianfeng
Dear R-help listers,

I want to add a 95% confidence interval as vertical lines on the x
axis in a density plot. I have found that the hdrcde package can do this,
but I do not know
how to use the functions of this package when I use ggplot2 to draw the graph.

Thank you in advance.

The data and code follow:

# dummy data
factor <- rep(c("Alice", "Jone", "Mike"), each=100)
factor <- factor(factor)
traits1 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits2 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits3 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits4 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits5 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits6 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits7 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits8 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits9 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits10 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits11 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits12 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits13 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits14 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits15 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits16 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits17 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits18 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits19 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits20 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits21 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))

myda <- data.frame(factor, traits1, traits2, traits3, traits4, traits5, traits6, traits7,
traits8, traits9, traits10, traits11, traits12, traits13, traits14, traits15, traits16,
traits17, traits18, traits19, traits20, traits21)


library(ggplot2)
d = melt(myda, id = "factor")

str(d)

pdf("test33.pdf")
p =
ggplot(data=d, mapping=aes(x=value, y=..density..)) +
facet_wrap(~variable) +
stat_density(aes(fill=factor), alpha=0.5, col=NA, position = 'identity') +
stat_density(aes(colour = factor), geom="path", position = 'identity')
print(p)
dev.off()
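
One possible way (not from the original post) to mark per-panel limits, using empirical 2.5% and 97.5% quantiles rather than hdrcde output, would be to compute them per trait and group and add them with geom_vline:

ci <- do.call(rbind, lapply(split(d, list(d$variable, d$factor)), function(s)
    data.frame(variable = s$variable[1], factor = s$factor[1],
               value = quantile(s$value, c(0.025, 0.975)))))
print(p + geom_vline(data = ci, aes(xintercept = value, colour = factor),
                     linetype = "dashed"))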

Mao J-F

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Help in building new function

2009-08-24 Thread Ana Paula Mora
Hi:

I've installed the precompiled binary for Windows. I need to use an existing
function, but I want to introduce some slight changes to it.

1. Is there a way for me to find the source files through windows explorer?
I know I can see it using edit(object name) but I want to know if I can see
it via explorer in some location under the R directory.

2. I don't want to modify the code of that function until I'm sure that my
changes are not causing any harm. So, I got the R-2.9.1.tar.gz file and opened
the arima.R file. I changed the name of the function to myarima, the name of
the file to myarima.R, saved it, and then loaded it using the source command.
So far, so good. When I try to execute my function I get the error message
"Error in Delta %+% c(1, -1) : object 'R_TSconv' not found". So far, the
only change I made is to add a "Hello world" in the first line, so my change is
not the source of the problem. It looks like my function (although it is
exactly the same) is not able to see this object.

Can someone help me out? Am I missing something?

Thanks a lot in advance.

Regards,

Ana

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help in building new function

2009-08-24 Thread Gabor Grothendieck
1. Just enter
  arima
at the R console to see its source code (without comments).  The source
tar.gz for R is found by googling for R, clicking on CRAN in the left column
and choosing a mirror.  Or to view it online or get it via svn:
  https://svn.r-project.org/R/

2. You want myarima's free variables to be found in the stats package, so
set the environment of myarima like this:

environment(myarima) <- asNamespace("stats")
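
A minimal sketch of the whole sequence (assuming myarima.R contains the renamed copy of arima):

source("myarima.R")
environment(myarima) <- asNamespace("stats")
fit <- myarima(lh, order = c(1, 0, 0))   # 'lh' is a built-in example series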


On Mon, Aug 24, 2009 at 2:18 AM, Ana Paula Mora <anamor...@gmail.com> wrote:
 Hi:

 I've installed the precompiled binary for Windows. I need to use an existing
 function, but I want to introduce some slight changes to it.

 1. Is there a way for me to find the source files through windows explorer?
 I know I can see it using edit(object name) but I want to know if I can see
 it via explorer in some location under the R directory.

 2. I don't want to modify the code of that function until I'm sure that my
 changes are not causing any harm. So, I got the R-2.9.1.tar.gz file and opened
 the arima.R file. I changed the name of the function to myarima, the name of
 the file to myarima.R, saved it, and then loaded it using the source command.
 So far, so good. When I try to execute my function I get the error message
 "Error in Delta %+% c(1, -1) : object 'R_TSconv' not found". So far, the
 only change I made is to add a "Hello world" in the first line, so my change is
 not the source of the problem. It looks like my function (although it is
 exactly the same) is not able to see this object.

 Can someone help me out? Am I missing something?

 Thanks a lot in advance.

 Regards,

 Ana

        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R 2.9.2 is released

2009-08-24 Thread Peter Dalgaard
I've rolled up R-2.9.2.tar.gz a short while ago.

This is a maintenance release and fixes a number of mostly minor issues.

See the full list of changes below.

You can get it from

http://cran.r-project.org/src/base/R-2/R-2.9.2.tar.gz

or wait for it to be mirrored at a CRAN site nearer to you. Binaries
for various platforms will appear in due course (Duncan Murdoch is out
of town, so Windows binaries may take a few days longer than usual).

For the R Core Team

Peter Dalgaard


These are the md5sums for the freshly created files, in case you wish
to check that they are uncorrupted:

70447ae7f2c35233d3065b004aa4f331  INSTALL
433182754c05c2cf7a04ad0da474a1d0  README
4f004de59e24a52d0f500063b4603bcb  OONEWS
ff4bd9073ef440b1eb43b1428ce96872  ONEWS
17cbe5399dc80f39bd8c8b3c244d6a50  NEWS
7abcbbc7480df75a11a00bb09783db90  THANKS
070cca21d9f8a6af15f992edb47a24d5  AUTHORS
a6f89e2100d9b6cdffcea4f398e37343  COPYING.LIB
eb723b61539feef013de476e68b5c50a  COPYING
020479f381d5f9038dcb18708997f5da  RESOURCES
4767b922bb20620946ee9a4275a6ee6c  FAQ
112e2a1306cf71320e45d14e87e5b913  R-2.9.2.tar.gz
112e2a1306cf71320e45d14e87e5b913  R-latest.tar.gz


Here is the relevant part of the NEWS file:

CHANGES IN R VERSION 2.9.2

NEW FEATURES

o   install.packages(NULL) now lists packages only once even if they
occur in more than one repository (as the latest compatible
version of those available will always be downloaded).

o   approxfun() and approx() now accept a 'rule' of length two, for
easy specification of different interpolation rules on left and
right.

They no longer segfault for invalid zero-length specification
of 'yleft', 'yright', or 'f'.

o   seq_along(x) is now equivalent to seq_len(length(x)) even where
length() has an S3/S4 method; previously it (intentionally)
always used the default method for length().

o   PCRE has been updated to version 7.9 (for bug fixes).

o   agrep() uses 64-bit ints where available on 32-bit platforms
and so may do a better job with complex matches.
(E.g. PR#13789, which failed only on 32-bit systems.)

DEPRECATED & DEFUNCT

o   R CMD Rd2txt is deprecated, and will be removed in 2.10.0.
(It is just a wrapper for R CMD Rdconv -t txt.)

o   tools::Rd_parse() is deprecated and will be removed in 2.10.0
(which will use only Rd version 2).

BUG FIXES

o   parse_Rd() still did not handle source reference encodings
properly.

o   The C utility function PrintValue no longer attempts to print
attributes for CHARSXPs as those attributes are used
internally for the CHARSXP cache.  This fixes a segfault when
calling it on a CHARSXP from C code.

o   PDF graphics output was producing two instances of anything
drawn with the symbol font face. (Report from Baptiste Auguie.)

o   length(x) <- newval and grep() could cause memory corruption.
(PR#13837)

o   If model.matrix() was given too large a model, it could crash
R. (PR#13838, fix found by Olaf Mersmann.)

o   gzcon() (used by load()) would re-open an open connection,
leaking a file descriptor each time. (PR#13841)

o   The checks for inconsistent inheritance reported by setClass()
now detect inconsistent superclasses and give better warning
messages.

o   print.anova() failed to recognize the column labelled
P(>|Chi|) from a Poisson/binomial GLM anova as a p-value
column in order to format it appropriately (and as a
consequence it gave no significance stars).

o   A missing PROTECT caused rare segfaults during calls to
load().  (PR#13880, fix found by Bill Dunlap.)

o   gsub() in a non-UTF-8 locale with a marked UTF-8 input
could in rare circumstances overrun a buffer and so segfault.

o   R CMD Rdconv --version was not working correctly.

o   Missing PROTECTs in nlm() caused random errors. (PR#13381 by
Adam D.I. Kramer, analysis and suggested fix by Bill Dunlap.)

o   Some extreme cases of pbeta(log.p = TRUE) are more accurate
(finite values < -700 rather than -Inf).  (PR#13786)

pbeta() now reports on more cases where the asymptotic
expansions lose accuracy (the underlying TOMS708 C code was
ignoring some of these, including the PR#13786 example).

o   new.env(hash = TRUE, size = NA) now works the way it has been
documented to for a long time.

o   tcltk::tk_choose.files(multi = TRUE) produces better-formatted
output with filenames containing spaces.  (PR#13875)

o   R CMD check --use-valgrind did not run valgrind on the package
tests.

o   The tclvalue() and the print() and as.xxx methods for class
tclObj crashed R with an invalid object -- seen with an
object saved from an earlier session.

o   R CMD BATCH garbled options -d <debugger>

[R] Using R CMD Batch similarly to gnuplot pause mouse

2009-08-24 Thread W Eryk Wolski
I have a gnuplot script:

plot 'gp.dat' using 1:2 with line
pause mouse

and would like to have an R CMD BATCH script that works similarly...

x <- read.table("gp.dat")
options(device="windows")
plot(x[,1], x[,2])


what I am looking for now is an R function which behaves like "pause mouse"
in gnuplot, i.e.

pause - the command displays any text associated with it and then waits
a specified amount of time or until the carriage return is pressed.
If the current terminal supports mousing, then "pause mouse" will terminate on
either a mouse click or on ctrl-C.
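
A sketch of one possibility (assuming an interactive graphics device such as windows() or X11() is open when the script runs): locator(1) blocks until the device receives a mouse click, which is roughly what gnuplot's "pause mouse" does.

x <- read.table("gp.dat")
plot(x[, 1], x[, 2], type = "l")
invisible(locator(1))   # returns after one mouse click on the device
## or pause for a fixed time instead:
## Sys.sleep(10)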


Thanks in advance


-- 
Witold Eryk Wolski

Heidmark str 5
D-28329 Bremen
tel.: 04215261837
www.brementango.de

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to add 95% confidence interval as vertical lines to the x axis in a density plot

2009-08-24 Thread ONKELINX, Thierry
Show us how you extract the confidence interval from the functions in
the hdrcde library and then we might be able to help you.

HTH,

Thierry
 




ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
Cel biometrie, methodologie en kwaliteitszorg / Section biometrics,
methodology and quality assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium
tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to
say what the experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of
data.
~ John Tukey

-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On behalf of Mao Jianfeng
Sent: Monday, 24 August 2009 9:40
To: r-help@r-project.org
Subject: [R] how to add 95% confidence interval as vertical lines to the
x axis in a density plot

Dear R-help listers,

I want to add a 95% confidence interval as vertical lines on the x axis in
a density plot. I have found that the hdrcde package can do this, but I
do not know how to use the functions of this package when I use ggplot2
to draw the graph.

Thank you in advance.

The data and code follow:

# dummy data
factor <- rep(c("Alice", "Jone", "Mike"), each=100)
factor <- factor(factor)
traits1 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits2 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits3 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits4 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits5 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits6 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits7 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits8 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits9 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits10 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits11 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits12 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits13 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits14 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits15 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits16 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits17 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits18 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits19 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits20 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))
traits21 <- c(rnorm(100, mean=1, sd=1), rnorm(100, mean=3, sd=3), rnorm(100, mean=6, sd=6))

myda <- data.frame(factor, traits1, traits2, traits3, traits4, traits5, traits6, traits7,
traits8, traits9, traits10, traits11, traits12, traits13, traits14, traits15, traits16,
traits17, traits18, traits19, traits20, traits21)


library(ggplot2)
d = melt(myda, id = "factor")

str(d)

pdf("test33.pdf")
p =
ggplot(data=d, mapping=aes(x=value, y=..density..)) +
facet_wrap(~variable) +
stat_density(aes(fill=factor), alpha=0.5, col=NA, position = 'identity') +
stat_density(aes(colour = factor), geom="path", position = 'identity')
print(p)
dev.off()

Mao J-F

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Druk dit bericht a.u.b. niet onnodig af.
Please do not print this message unnecessarily.

Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer 
en binden het INBO onder geen enkel beding, zolang dit bericht niet bevestigd is
door een geldig ondertekend document. The views expressed in  this message 
and any annex are purely those of the writer and may not be regarded as stating 
an official position of INBO, as long as the message is not confirmed by a duly 
signed document.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and 

[R] Lattice xyplot: modify line width of plot lines

2009-08-24 Thread ukoenig

# Hi all,
# I want to increase the line width of the plotted lines
# in a xy-lattice plot. My own attempts were all in vain.
# Without the group option the line width is modified -
# with the option it is funnily enough not.
# Please have a look at my syntax.
#
# Many thanks in advance
# Udo




library(lattice)

data <- data.frame(cbind(1:2, c(1,1,2,2), c(0.5,0.9,1.0,1.8)))
names(data) <- c("BMI","time","Choline")

data$BMI <- factor(data$BMI)
levels(data$BMI) <- c("<=17.5",">17.5")
data$time <- factor(data$time)
levels(data$time) <- c("Admission","Discharge")


#Show names of settings
names(trellis.par.get())

#Try to set the line width of the two plotted colored lines
line <- trellis.par.get("plot.line")
line
line$lwd=3
trellis.par.set("plot.line", line)
line


#Without group option: Line width is changed
xyplot(Choline ~ time,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group", border=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"))

#With group option: Line width is not changed
xyplot(Choline ~ time,
   group=BMI,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group", border=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"))

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice xyplot: modify line width of plot lines

2009-08-24 Thread Chuck Cleland
On 8/24/2009 4:47 AM, ukoe...@med.uni-marburg.de wrote:
 # Hi all,
 # I want to increase the line width of the plotted lines
 # in a xy-lattice plot. My own attempts were all in vain.
 # Without the group option the line width is modified -
 # with the option it is funnily enough not.
 # Please have a look at my syntax.
 #
 # Many thanks in advance
 # Udo

  You need to change the superpose.line setting:

xyplot(Choline ~ time,
   groups=BMI,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group",
 border=FALSE, lines=TRUE, points=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"),
   par.settings = list(superpose.line = list(lwd=3)))

 
 
 library(lattice)

 data <- data.frame(cbind(1:2, c(1,1,2,2), c(0.5,0.9,1.0,1.8)))
 names(data) <- c("BMI","time","Choline")

 data$BMI <- factor(data$BMI)
 levels(data$BMI) <- c("<=17.5",">17.5")
 data$time <- factor(data$time)
 levels(data$time) <- c("Admission","Discharge")


 #Show names of settings
 names(trellis.par.get())

 #Try to set the line width of the two plotted colored lines
 line <- trellis.par.get("plot.line")
 line
 line$lwd=3
 trellis.par.set("plot.line", line)
 line


 #Without group option: Line width is changed
 xyplot(Choline ~ time,
    data=data,
    type="l",
    scales=list(relation="free"),
    auto.key=list(title="BMI Group", border=FALSE),
    xlab=c("Point in Time"),
    ylab=c("Concentration of Choline"))

 #With group option: Line width is not changed
 xyplot(Choline ~ time,
    group=BMI,
    data=data,
    type="l",
    scales=list(relation="free"),
    auto.key=list(title="BMI Group", border=FALSE),
    xlab=c("Point in Time"),
    ylab=c("Concentration of Choline"))
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 


-- 
Chuck Cleland, Ph.D.
NDRI, Inc. (www.ndri.org)
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice xyplot: modify line width of plot lines

2009-08-24 Thread ukoenig

Now it works.
Many thanks, Chuck!



Quoting Chuck Cleland cclel...@optonline.net:


On 8/24/2009 4:47 AM, ukoe...@med.uni-marburg.de wrote:

# Hi all,
# I want to increase the line width of the plotted lines
# in a xy-lattice plot. My own attempts were all in vain.
# Without the group option the line width is modified -
# with the option it is funnily enough not.
# Please have a look at my syntax.
#
# Many thanks in advance
# Udo


  You need to change the superpose.line setting:

xyplot(Choline ~ time,
   groups=BMI,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group",
 border=FALSE, lines=TRUE, points=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"),
   par.settings = list(superpose.line = list(lwd=3)))




library(lattice)

data <- data.frame(cbind(1:2, c(1,1,2,2), c(0.5,0.9,1.0,1.8)))
names(data) <- c("BMI","time","Choline")

data$BMI <- factor(data$BMI)
levels(data$BMI) <- c("<=17.5",">17.5")
data$time <- factor(data$time)
levels(data$time) <- c("Admission","Discharge")


#Show names of settings
names(trellis.par.get())

#Try to set the line width of the two plotted colored lines
line <- trellis.par.get("plot.line")
line
line$lwd=3
trellis.par.set("plot.line", line)
line


#Without group option: Line width is changed
xyplot(Choline ~ time,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group", border=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"))

#With group option: Line width is not changed
xyplot(Choline ~ time,
   group=BMI,
   data=data,
   type="l",
   scales=list(relation="free"),
   auto.key=list(title="BMI Group", border=FALSE),
   xlab=c("Point in Time"),
   ylab=c("Concentration of Choline"))

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




--
Chuck Cleland, Ph.D.
NDRI, Inc. (www.ndri.org)
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] compare observed and fitted GAM values

2009-08-24 Thread Simon Wood
You haven't given quite enough information to be sure, but I would guess that
this is not really a problem, but rather the interesting property of GLMs
fitted with a canonical link, described in e.g. section 2.1.8 of Wood (2006)
Generalized Additive Models: An Introduction with R, or at the beginning of
the GLM chapter in Venables and Ripley's MASS.
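
A small simulated illustration of that property (not the poster's data): with a canonical link, the observed and fitted means agree exactly within each level of a factor covariate.

set.seed(1)
dat <- data.frame(year = factor(rep(2001:2005, each = 40)), x = runif(200))
dat$y <- rpois(200, lambda = exp(-1 + 0.5 * dat$x + as.numeric(dat$year) / 10))
fit <- glm(y ~ x + year, family = poisson, data = dat)   # log link is canonical for Poisson
cbind(observed = tapply(dat$y, dat$year, mean),
      fitted   = tapply(fitted(fit), dat$year, mean))    # the two columns match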

On Friday 21 August 2009 22:03, Lucía Rueda wrote:
 Hi,

 I am comparing the observed and fitted values of my GAM model, which
 includes the explanatory variables: longitude, depth, ssh, year and month.
 When I compare observed and fitted values for longitude, depth and ssh it
 works. But when I try to do it for month and year (which are as factors in
 the GAM model) it doesn't work. My observed and fitted values are exactly
 the same. How is that possible? Thanks

 > Obs_factor1 <- aggregate(x=albdata$turtles, by=list(albdata$Year), FUN=mean)
 > names(Obs_factor1) = c("Bin","Observed")
 > Obs_factor1

     Bin    Observed
 1  1997 0.017094017
 2  1998 0.010652463
 3  1999 0.023000000
 4  2000 0.017167382
 5  2001 0.030465950
 6  2002 0.007446809
 7  2003 0.010568032
 8  2004 0.011450382
 9  2005 0.016270338
 10 2006 0.017006803
 11 2007 0.030969031
 12 2008 0.066455696

 > Fit_factor1 <- aggregate(x=predict(gam.def.lon, type="response"),
 +                          by=list(albdata$Year), FUN=mean)
 > names(Fit_factor1) = c("Bin","Fitted")
 > Fit_factor1

     Bin      Fitted
 1  1997 0.017094017
 2  1998 0.010652463
 3  1999 0.023000000
 4  2000 0.017167382
 5  2001 0.030465950
 6  2002 0.007446809
 7  2003 0.010568032
 8  2004 0.011450382
 9  2005 0.016270338
 10 2006 0.017006803
 11 2007 0.030969031
 12 2008 0.066455696






   [[alternative HTML version deleted]]

-- 
 Simon Wood, Mathematical Sciences, University of Bath, Bath, BA2 7AY UK
 +44 1225 386603  www.maths.bath.ac.uk/~sw283 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] natural sorting a data frame /vector by row

2009-08-24 Thread Moumita Das
How do I NATURAL-sort a vector or data frame *by row*, in ascending order?


                     V1           V2            V3         V4
i1         5.000000e-01 1.036197e-17  4.825338e+16 0.00000000
i10        4.001692e-18 1.365740e-17  2.930053e-01 0.76973827
i12       -1.052843e-17 1.324484e-17 -7.949081e-01 0.42735000
i13        2.571236e-17 1.357336e-17  1.894325e+00 0.05922715
i2        -5.630739e-18 1.638267e-17 -3.437010e-01 0.73133282
i3         4.291387e-18 1.207522e-17  3.553879e-01 0.72257050
i4         1.472662e-17 1.423051e-17  1.034863e+00 0.30163897
i5         5.000000e-01 1.003323e-17  4.983441e+16 0.00000000
i6         5.147966e-18 1.569095e-17  3.280850e-01 0.74309614
i7         1.096044e-17 1.555829e-17  7.044760e-01 0.48173041
i8        -1.166290e-18 1.287370e-17 -9.059482e-02 0.92788026
i9         1.627371e-17 1.540567e-17  1.056345e+00 0.29173427
recmeanC2  9.275880e-17 6.322780e-17  1.467057e+00 0.14349903
                     NA           NA            NA         NA
recmeanC3  1.283534e-17 2.080644e-17  6.168929e-01 0.53781390
recmeanC4 -3.079466e-17 2.565499e-17 -1.200338e+00 0.23103743



I want the rows sequenced as recmeanC2, recmeanC3, recmeanC4, with the NA
row in the third position from the top (presently it is third from the bottom).
-- 
Thanks
Moumita

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] image() generates many border lines in pdf, not on screen (quartz) - R 2.9.1 GUI 1.28 Tiger build 32-bit (5444) - OS X 10.5.8

2009-08-24 Thread Stefan Evert


On 23 Aug 2009, at 20:26, Uwe Ligges wrote:


Since it looks like nobody answered so far:

Your code is not reproducible, we do not have rfc, y, zVals nor  
NoCols.


It's much easier to reproduce: just type in the first example from the  
image help page


x <- y <- seq(-4*pi, 4*pi, len=27)
r <- sqrt(outer(x^2, y^2, "+"))
image(z = z <- cos(r^2)*exp(-r/6), col=gray((0:32)/32))

then save from the quartz() display (I used the menu) and view with  
Adobe Reader 9 (I seem to have 9.0.0).  Instead of the fine white  
lines you always get with Preview.app and other inaccurate PDF  
renderers, there are now huge gaps between the pixels (around 1/10th  
of pixel width).


This is very probably a bug in the Quartz device (or Quartz itself),  
as the lines go away if you save the plot with dev.copy2pdf(), which I  
normally use.


@OP: Do you have any particular reason for using quartz.save() or the  
menu item instead of dev.copy2pdf()?


You could also try to place a screenshot somewhere on a webpage  
including the info about the settings of the corresponding viewer.


I've tried switching off _all_ of the numerous anti-aliasing options  
of Adobe Reader 9; absolutely no difference.


Hope this helps,
Stefan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] calculating probability

2009-08-24 Thread maram salem
Hi all,
I have a trivial question. If q is a continuous variable, actually a vector of
1000 values, how do I calculate the probability that q is greater than a specific
value, i.e. P(q > 45)?
Thanks
Maram


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Import/export ENVI files

2009-08-24 Thread Lucas Sevilla García

Hi! I'm a beginner on this mailing list, so I don't know if I'm sending my
question to the correct place. Anyway, I'm working with R and I need to import
and export ENVI files (*.HDR files). A colleague told me that there is a
package to import/export ENVI files, but I haven't found that package, so does
anyone know something about this? Thank you so much :) . Ciaooo


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R with MPI

2009-08-24 Thread polemon
Hello, I plan to use R with my cluster with OpenMPI.
I need the packages 'snow' and 'Rmpi' for that; however, I get an error
while downloading and installing them.
When I do:
install.packages("Rmpi", dependencies=TRUE)

I get this error:
checking for mpi.h... no
Try to find libmpi.so or libmpich.a
checking for main in -lmpi... no
libmpi not found. exiting...

However, mpi.h is present via the openmpi-devel package on my RHEL 5.3.

Some of those packages need sprng 2.0 (rsprng, for instance, which is a
dependency for another MPI-related package). Sprng 2.0, however, hasn't been
in development for years; I wonder how I am supposed to keep my software up to
date...

Any ideas on how to work around that mpi.h problem?
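
One commonly suggested workaround (treat the flags and paths below as assumptions to check against Rmpi's installation notes; they depend on where openmpi-devel puts its files) is to point Rmpi's configure script at the MPI headers and libraries:

## paths are examples only -- adjust to your OpenMPI install
install.packages("Rmpi",
                 configure.args = c("--with-Rmpi-include=/usr/include/openmpi",
                                    "--with-Rmpi-libpath=/usr/lib64/openmpi/lib",
                                    "--with-Rmpi-type=OPENMPI"))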

Please help,

--polemon

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] hdf5 package segfault when processing large data

2009-08-24 Thread Budi Mulyono
Hi there,

I am currently working on something that uses hdf5 library. I think
hdf5 is a great data format, I've used it somewhat extensively in
python via PyTables. I was looking for something similar to that in R.
The closest I can get is this library: hdf5. While it does not work
the same way as PyTables did, but it's good enough to let them
exchange data via hdf5 file.

There is just one problem: I keep getting a segfault error when trying to
process large files (>10 MB), although this is by no means large when we
talk about hdf5 capabilities. I have included the example code and
data below. I have tried different OSes (WinXP and Ubuntu 8.04),
architectures (32 and 64 bit) and R versions (2.7.1, 2.7.2, and 2.9.1),
but all of them show the same problem. I was wondering if anyone
has any clue as to what's going on here and maybe can advise me on how
to handle it.

Thank you, I appreciate any help I can get.

Cheers,

Budi

The example script

library(hdf5)
fileName <- "sample.txt"
myTable <- read.table(fileName, header=TRUE, sep="\t", as.is=TRUE)
hdf5save("test.hdf", "myTable")


The data example; the list continues for more than 250,000 rows: sample.txt

Date        Time   f1      f2      f3      f4      f5
20070328    07:56  463     463.07  462.9   463.01  1100
20070328    07:57  463.01  463.01  463.01  463.01  200


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] transforming data glm

2009-08-24 Thread Mcdonald, Grant
Dear sir,

I am fitting a glm with the default identity link:


model <- glm(timetoacceptsecs ~ maleage*maletub*relweight*malemobtrue*femmobtrue)

The model is overdispersed and plot(model) shows poor linearity of the
residuals. The overdispersion and the non-linearity of the residuals on the
normal Q-Q plot are corrected well by using:


model <- glm(log(timetoacceptsecs) ~ maleage*maletub*relweight*malemobtrue*femmobtrue)

boxcox() on my model also suggests that the log transformation is what I should
do.

I am asking how I can do this by changing the link function or error family of
my glm rather than directly taking the log of the response variable.

For instance:
model <- glm(log(timetoacceptsecs) ~ maleage*maletub*relweight*malemobtrue*femmobtrue,
 family=poisson)
does not improve my model in terms of overdispersion etc. as much as taking the
log.
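
One option sometimes used for positive, skewed durations (an illustration of the idea, not necessarily appropriate for these data) is a family with a log link, so the mean is modelled on the log scale while the response stays untransformed:

model <- glm(timetoacceptsecs ~ maleage*maletub*relweight*malemobtrue*femmobtrue,
             family = Gamma(link = "log"))
## for overdispersed count-like responses, family = quasipoisson(link = "log") is another option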

Thank you

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Copy Paste from tktext on Mac

2009-08-24 Thread Anne Skoeries

Hi there,

a text window is supposed to map the shortcuts for copying and pasting
(Ctrl-C, Ctrl-V) automatically.
I'm working under Mac OS X and my text window doesn't really map these
functions automatically - it works fine under Windows. Is there an
easy way to map copy & paste functions to a text window under Mac OS X?


This is what I'm doing with my text window:

tt <- tktoplevel()
txt <- tktext(tt, bg = "white", height=30, width=100, borderwidth=2)
scr <- tkscrollbar(tt, orient = "vertical", repeatinterval = 1,
command = function(...) tkyview(txt, ...))

tkconfigure(txt, yscrollcommand = function(...) tkset(scr, ...))
tkgrid(txt, column=0, row=0, columnspan=2, sticky="nwse")

Session Info:
R version 2.9.1 (2009-06-26)
i386-apple-darwin8.11.1

locale:
de_DE.UTF-8/de_DE.UTF-8/C/C/de_DE.UTF-8/de_DE.UTF-8

attached base packages:
[1] tcltk stats graphics  grDevices utils datasets   
methods   base


other attached packages:
[1] tkrplot_0.0-18 rpart_3.1-44   relimp_1.0-1

Thanks so much!
--
Anne Skoeries

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] error in creating gantt chart.

2009-08-24 Thread rajclinasia

Hi everyone,
I have an Excel sheet like this:

   labels   starts ends
1  first task 1-Jan-04 3-Mar-04
2 second task 2-Feb-04 5-May-04
3  third task 3-Mar-04 6-Jun-04
4 fourth task 4-Apr-04 8-Aug-04
5  fifth task 5-May-04 9-Sep-04

Now I converted this Excel sheet into a CSV file and read the CSV file into
R with the code below:

my.gantt.info <- read.csv("C:/Documents and Settings/balakrishna/Desktop/one.csv")

and to create the Gantt chart I used the code below:

gantt.chart(my.gantt.info)

If I run the above code I get the following error:

Error in x$starts : $ operator is invalid for atomic vectors

Can anybody help with this? It would be much appreciated.
Thanks in advance.
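
If gantt.chart() here is the plotrix function, it expects a list with POSIXct start/end times rather than the raw data frame returned by read.csv(); a sketch (the date format and the assumption that the CSV columns are named labels/starts/ends come from the sheet shown above):

library(plotrix)
raw <- read.csv("C:/Documents and Settings/balakrishna/Desktop/one.csv",
                stringsAsFactors = FALSE)
my.gantt.info <- list(labels = raw$labels,
                      starts = as.POSIXct(strptime(raw$starts, format = "%d-%b-%y")),
                      ends   = as.POSIXct(strptime(raw$ends,   format = "%d-%b-%y")))
gantt.chart(my.gantt.info)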
-- 
View this message in context: 
http://www.nabble.com/error-in-creating-gantt-chart.-tp25115102p25115102.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Text in barplot.

2009-08-24 Thread koj

Dear all,

I want to display text in a barplot.

Currently I simply work with the following lines:
text(0.3,0.2,substring(teammembers.paint,1,140),cex=0.45,pos=4,srt=90)
text(0.48,0.2,substring(teammembers.paint,141,1000),cex=0.45,pos=4,srt=90)

The problem is that teammembers.paint can be very long, so I split the text
with substring - but this is not a good solution because the split position
must be fixed.

Is there any other possibility to display a long text in a nicer way, e.g. in
a box with the possibility of line breaks?

Thank you very much in advance,

Jens.
-- 
View this message in context: 
http://www.nabble.com/Text-in-barplot.-tp25111937p25111937.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Text in barplot.

2009-08-24 Thread jim holtman
?strwrap
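
For instance (wrap width chosen arbitrarily), the long label could be broken into lines before plotting:

lbl <- paste(strwrap(teammembers.paint, width = 40), collapse = "\n")
text(0.3, 0.2, lbl, cex = 0.45, pos = 4)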

On Mon, Aug 24, 2009 at 3:53 AM, koj <jens.k...@gmx.li> wrote:

 Dear all,

 I want to display text in a barplot.

 Currently I simply work with the following lines:
 text(0.3,0.2,substring(teammembers.paint,1,140),cex=0.45,pos=4,srt=90)
 text(0.48,0.2,substring(teammembers.paint,141,1000),cex=0.45,pos=4,srt=90)

 The problem is that teammembers.paint can be very long, so I split the text
 with substring - but this is not a good solution because the split position
 must be fixed.

 Is there any other possibility to display a long text in a nicer way, e.g. in
 a box with the possibility of line breaks?

 Thank you very much in advance,

 Jens.
 --
 View this message in context: 
 http://www.nabble.com/Text-in-barplot.-tp25111937p25111937.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] discriminant analysis

2009-08-24 Thread Rubén Roa Ureta

Beatriz Yannicelli wrote:

Dear all:

Is it possible to conduct a discriminant analysis in R with categorical and
continuous variables as predictors?

Beatriz
  

Beatriz,
Simply doing this in the R console:
RSiteSearch("discriminant")
yields many promising links. In particular, check the documentation of the
mda package.

HTH
Rubén

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] image() generates many border lines in pdf, not on screen (quartz) - R 2.9.1 GUI 1.28 Tiger build 32-bit (5444) - OS X 10.5.8

2009-08-24 Thread Uwe Ligges



Stefan Evert wrote:


On 23 Aug 2009, at 20:26, Uwe Ligges wrote:


Since it looks like nobody answered so far:

Your code is not reproducible, we do not have rfc, y, zVals nor NoCols.


It's much easier to reproduce: just type in the first example from the 
image help page


x <- y <- seq(-4*pi, 4*pi, len=27)
r <- sqrt(outer(x^2, y^2, "+"))
image(z = z <- cos(r^2)*exp(-r/6), col=gray((0:32)/32))

then save from the quartz() display 



Well, that is not possible for me on Windows.

Anyway, please use the pdf() device directly and see if it works:

pdf("path/to/file.pdf")
   x <- y <- seq(-4*pi, 4*pi, len=27)
   r <- sqrt(outer(x^2, y^2, "+"))
   image(z = z <- cos(r^2)*exp(-r/6), col=gray((0:32)/32))
dev.off()





(I used the menu) and view with 
Adobe Reader 9 (I seem to have 9.0.0).  


You should upgrade (even just for security reasons and many bugfixes), 
9.1.3 is recent on Windows.


Uwe Ligges



Instead of the fine white lines 
you always get with Preview.app and other inaccurate PDF renderers, 
there are now huge gaps between the pixels (around 1/10th of pixel width).


This is very probably a bug in the Quartz device (or Quartz itself), as 
the lines go away if you save the plot with dev.copy2pdf(), which I 
normally use.


@OP: Do you have any particular reason for using quartz.save() or the 
menu item instead of dev.copy2pdf()?


You could also try to place a screenshot somewhere on a webpage 
including the info about the settings of the corresponding viewer.


I've tried switching off _all_ of the numerous anti-aliasing options of 
Adobe Reader 9; absolutely no difference.


Hope this helps,
Stefan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with passing a string to subset

2009-08-24 Thread Sebastien Bihorel

Thanks Baptiste

The eval(parse()) combination is just what I need.

baptiste auguie wrote:

Try this,

mystr <- "c==1"
subset(foo, eval(parse(text = mystr)) )

library(fortunes)
fortune("parse") # try several times

# I prefer this, but there is probably a better way
mycond <- quote(c==1)
subset(foo, eval(bquote(.(mycond))) )

HTH,

baptiste

2009/8/21 Sebastien Bihorel sebastien.biho...@cognigencorp.com:
  

Dear R-users,

The following question bothered me for the whole afternoon: how can one pass
a string as the conditioning argument to subset? I tried plain mystr,
eval(mystr), expression(mystr), etc... I don't seem to be able to find the
correct syntax.



foo <- data.frame(a=1:10, b=10:1, c=rep(1:2,5))
mystr <- "c==1"
subset(foo, c==1)


  a  b c
1 1 10 1
3 3  8 1
5 5  6 1
7 7  4 1
9 9  2 1


subset(foo,mystr)
  

Error in subset.data.frame(foo, mystr) :
 'subset' must evaluate to logical



Any help would be greatly appreciated.

Sebastien

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.









__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] calculating probability

2009-08-24 Thread Uwe Ligges



maram salem wrote:

Hi all,
I have a trivial question. If q is a continuous variable, actually a vector of 1000
values, how do I calculate the probability that q is greater than a specific value,
i.e. P(q > 45)?


Do you want to estimate a distribution or do you just want the
empirical proportion of values greater than 45? For the
latter:


  mean(q > 45)

Uwe Ligges




Thanks
Maram


  
	[[alternative HTML version deleted]]






__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] calculating probability

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 8:11 AM, maram salem wrote:


Hi all,
I have a trivial question. If q is a continuous variable, actually a
vector of 1000 values, how do I calculate the probability that q is
greater than a specific value, i.e. P(q > 45)?


sum(q > 45)/1000  # if no NA's in vector
sum(q > 45, na.rm=TRUE) / sum(!is.na(q))  # if NA's in vector

--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] natural sorting a data frame /vector by row

2009-08-24 Thread Ottorino-Luca Pantani

Moumita Das ha scritto:

How do I NATURAL-sort a vector or data frame *by row*, in ascending order?


                     V1           V2            V3         V4
i1         5.000000e-01 1.036197e-17  4.825338e+16 0.00000000
i10        4.001692e-18 1.365740e-17  2.930053e-01 0.76973827
i12       -1.052843e-17 1.324484e-17 -7.949081e-01 0.42735000
i13        2.571236e-17 1.357336e-17  1.894325e+00 0.05922715
i2        -5.630739e-18 1.638267e-17 -3.437010e-01 0.73133282
i3         4.291387e-18 1.207522e-17  3.553879e-01 0.72257050
i4         1.472662e-17 1.423051e-17  1.034863e+00 0.30163897
i5         5.000000e-01 1.003323e-17  4.983441e+16 0.00000000
i6         5.147966e-18 1.569095e-17  3.280850e-01 0.74309614
i7         1.096044e-17 1.555829e-17  7.044760e-01 0.48173041
i8        -1.166290e-18 1.287370e-17 -9.059482e-02 0.92788026
i9         1.627371e-17 1.540567e-17  1.056345e+00 0.29173427
recmeanC2  9.275880e-17 6.322780e-17  1.467057e+00 0.14349903
                     NA           NA            NA         NA
recmeanC3  1.283534e-17 2.080644e-17  6.168929e-01 0.53781390
recmeanC4 -3.079466e-17 2.565499e-17 -1.200338e+00 0.23103743



I want the rows sequenced as recmeanC2, recmeanC3, recmeanC4, with the NA
row in the third position from the top (presently it's third from the bottom).
  
I do not understand what NATURAL stands for, but I'm not a native English 
speaker.

Is this the order you want ?

recmeanC2  9.275880e-17 6.322780e-17  1.467057e+00 0.14349903
recmeanC3  1.283534e-17 2.080644e-17  6.168929e-01 0.53781390
recmeanC4-3.079466e-17 2.565499e-17 -1.200338e+00 0.23103743
NA   NANA NA
i1 5.00e-01 1.036197e-17  4.825338e+16 0.
i2-5.630739e-18 1.638267e-17 -3.437010e-01 0.73133282
i3 4.291387e-18 1.207522e-17  3.553879e-01 0.72257050
i4 1.472662e-17 1.423051e-17  1.034863e+00 0.30163897
i5 5.00e-01 1.003323e-17  4.983441e+16 0.
i6 5.147966e-18 1.569095e-17  3.280850e-01 0.74309614
i7 1.096044e-17 1.555829e-17  7.044760e-01 0.48173041
i8-1.166290e-18 1.287370e-17 -9.059482e-02 0.92788026
i9 1.627371e-17 1.540567e-17  1.056345e+00 0.29173427
i104.001692e-18 1.365740e-17  2.930053e-01 0.76973827
***no i11 ? ***
i12   -1.052843e-17 1.324484e-17 -7.949081e-01 0.42735000
i132.571236e-17 1.357336e-17  1.894325e+00 0.05922715

If so, I'm afraid there's no simple way to do it.
This a possible solution

df.newdata <- cbind.data.frame(df.yourdata,
                               foo = c(13, 15, 16, 17, 1, 2:12, 13:15))
df.newdataOrdered <- df.newdata[order(df.newdata$foo), ]  # order(), not sort(), to reorder rows

another solution could be to rename the items in column 1 
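If "natural" just means that i2 should come before i10 (numeric suffixes sorted numerically), the mixedorder() helper in the gtools package is another option -- a sketch, assuming the labels are the row names of df.yourdata:

library(gtools)
df.yourdata[mixedorder(rownames(df.yourdata)), ]

It will not by itself move the recmean* rows to the top, though; for that the manual index above is still needed.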
--

Ottorino

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Copy Paste from tktext on Mac

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 8:21 AM, Anne Skoeries wrote:


Hi there,

a text Window is supposed to map the shortcuts for copying and  
pasting (Ctrl-C, Ctrl-V) automatically.
I'm working under Mac OS X and my text window doesn't really map  
these functions automatically - it works fine under Windows.


That is a bit vague. When discussing user interface behavior, it would  
make sense to be more explicit about keystrokes. It's not clear here  
that you are aware that the Mac (which did start delivering this GUI  
pizzazz long before MS-Windows was even halfway stable) uses cmd-C and  
cmd-V as the to- and from-clipboard key operations, rather than ctrl-C  
and ctrl-V.


Is there an easy way to map copypaste functions to a text window  
under Mac OS X?


??clipboard

Brings up two hits on my machine: one for connections and one for  
TkCommands. The first has a section on functions to access the  
Clipboard from R.  The second help page should appear if you have  
tcltk loaded, although I don't at the moment so I cannot test that.


?connections
?TkCommands  # probably
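If the goal is simply to wire the Mac-style shortcuts to the widget, something along these lines may work (a rough, untested sketch; it assumes an Aqua build of Tk, where the Command modifier and the <<Copy>>/<<Paste>> virtual events are available):

library(tcltk)
tt  <- tktoplevel()
txt <- tktext(tt, bg = "white", height = 30, width = 100, borderwidth = 2)
tkgrid(txt)

# forward Cmd-C / Cmd-V / Cmd-X to Tk's built-in clipboard events
tkbind(txt, "<Command-c>", function() tkevent.generate(txt, "<<Copy>>"))
tkbind(txt, "<Command-v>", function() tkevent.generate(txt, "<<Paste>>"))
tkbind(txt, "<Command-x>", function() tkevent.generate(txt, "<<Cut>>"))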

There is a specific help list for Mac questions and a lot of the  
really knowledgeable people read it more regularly than r-help.






This is what I'm doing with my text window:

tt <- tktoplevel()
txt <- tktext(tt, bg = "white", height=30, width=100, borderwidth=2)
scr <- tkscrollbar(tt, orient = "vertical", repeatinterval = 1,  
command = function(...) tkyview(txt, ...))

tkconfigure(txt, yscrollcommand = function(...) tkset(scr, ...))
tkgrid(txt, column=0, row=0, columnspan=2, sticky="nwse")

Session Info:
R version 2.9.1 (2009-06-26)
i386-apple-darwin8.11.1

locale:
de_DE.UTF-8/de_DE.UTF-8/C/C/de_DE.UTF-8/de_DE.UTF-8

attached base packages:
[1] tcltk stats graphics  grDevices utils datasets   
methods   base


other attached packages:
[1] tkrplot_0.0-18 rpart_3.1-44   relimp_1.0-1

Thanks so much!
--
Anne Skoeries

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to specify between group variance in lme

2009-08-24 Thread Bond, Stephen
Hello r-help,

I am using lme with two specs for the variance func

varComb(varFixed(~1/n), varPower(~Age))

this produces worse forecasts than the lm model with simple

weights=n

I think due to the fact that the lme spec works on variance inside the group. I 
need to show it that 1/n scales the variance between groups.

Is that possible?

I cannot disclose my dataset, but could post plots if that is possible somehow, 
let me know. Anova shows that everything in random=~poly(age,2) is significant.

Thank you all very much.
Stephen

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] image() generates many border lines in pdf, not on screen (quartz) - R 2.9.1 GUI 1.28 Tiger build 32-bit (5444) - OS X 10.5.8

2009-08-24 Thread Stefan Evert
Your code is not reproducible, we do not have rfc, y, zVals nor  
NoCols.
It's much easier to reproduce: just type in the first example from  
the image help page

   x <- y <- seq(-4*pi, 4*pi, len=27)
   r <- sqrt(outer(x^2, y^2, "+"))
   image(z = z <- cos(r^2)*exp(-r/6), col=gray((0:32)/32))
then save from the quartz() display


Well, not possible for me on Windows .

Anyway, please use the pdf() device directly and see if it works:


Works just as well as dev.copy2pdf() ... it really appears to be a bug  
in the Quartz framework.


Unfortunately, I don't know why the OP doesn't just use the pdf()  
device.
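For reference, a minimal sketch of sending that test image straight to a PDF file (file name arbitrary), bypassing the Quartz device entirely:

pdf("image_test.pdf", width = 6, height = 6)
x <- y <- seq(-4*pi, 4*pi, len = 27)
r <- sqrt(outer(x^2, y^2, "+"))
image(z = cos(r^2) * exp(-r/6), col = gray((0:32)/32))
dev.off()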


(I used the menu) and view with Adobe Reader 9 (I seem to have  
9.0.0).


You should upgrade (even just for security reasons and many  
bugfixes), 9.1.3 is recent on Windows.


The newest I could get from Adobe is 9.1.0.  Strangely enough, even  
though I have configured it to check for updates on startup, Adobe  
Reader never suggested to me that an update may be available.


Thanks for mentioning this.

Cheers,
Stefan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Import/export ENVI files

2009-08-24 Thread Barry Rowlingson
2009/8/24 Lucas Sevilla García luckocto...@hotmail.com:

 Hi! I'm a beginner with this webpage so, I don't know if I'm sending my 
 question to the correct site. Anyway, I'm working with R and I need to import 
 and export ENVI files, (*.HDR files). A colleague told me that there is a 
 package to import/export envi files but I haven't found that package, so does 
 anyone know something about this? thank you so much :) . Ciaooo

 I'm guessing they are geographic data? Gridded, raster data perhaps?
This information might have helped... Remember there are 36^3 = 46656
possible alpha-numeric three-character file endings and I'm not sure I
know all of them. Lucky for you this one rang a bell...

 So, you probably want the rgdal package. If you've not got it
already, then do this in R:

 install.packages("rgdal")

 when that's done, you need to load it:

 library(rgdal)

 and then try and read your file:

 map = readGDAL("file.hdr")

 and then try:

 summary(map)
 image(map)

 That's a start.

Barry

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to specify between group variance in lme

2009-08-24 Thread Bond, Stephen
Clarification:

Lm is much better than the base forecast from lme level=0,

Level=1 produces a much tighter fit than lm.

I was expecting that level=0 would produce something very close to lm, but it 
does not.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] error in creating gantt chart.

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 8:22 AM, rajclinasia wrote:



hi every one,
i have a excel sheet like this

  labels   starts ends
1  first task 1-Jan-04 3-Mar-04
2 second task 2-Feb-04 5-May-04
3  third task 3-Mar-04 6-Jun-04
4 fourth task 4-Apr-04 8-Aug-04
5  fifth task 5-May-04 9-Sep-04

now i converted this excel sheet into csv file and i read the csv  
file into

R with the below code.

my.gantt.info <- read.csv("C:/Documents and
Settings/balakrishna/Desktop/one.csv")


 my.gantt.info <- read.csv(textConnection("  labels   starts ends
+ 1  first task 1-Jan-04 3-Mar-04
+ 2 second task 2-Feb-04 5-May-04
+ 3  third task 3-Mar-04 6-Jun-04
+ 4 fourth task 4-Apr-04 8-Aug-04
+ 5  fifth task 5-May-04 9-Sep-04"))
 my.gantt.info
 labels...starts.ends
1 1  first task 1-Jan-04 3-Mar-04
2 2 second task 2-Feb-04 5-May-04
3 3  third task 3-Mar-04 6-Jun-04
4 4 fourth task 4-Apr-04 8-Aug-04
5 5  fifth task 5-May-04 9-Sep-04

So that may look successful to you but that data.frame contains all of  
that data in a single (character) column. Why? Because a function was  
expecting commas on a file that did not have any. You will probably  
get further along if you use read.table with header=TRUE. Maybe you  
did something different or the file did have commas. With such a small  
file, you really should present the results of


dput(my.gantt.info)

That will contain all the values and attributes of the R object  ...  
no more guessing, which is what we are doing now.


and for create gantt chart i used below code.

gantt.chart("my.gantt.info")


That looks wrong. my.gantt.info is an R object. Is gantt.chart (from  
whatever unspecified package) really expecting to have its arguments  
quoted? I would bet against that possibility.


I would also guess that, even if the data input issues are not a  
problem and the quotes are removed, you still have not converted those  
character values that you think look like dates into objects that R  
will interpret as dates.


?as.Date
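For illustration, a hedged sketch of both steps; it assumes the file really is whitespace-separated with a header row, and the "%d-%b-%y" format matches strings like "1-Jan-04" (month abbreviations are locale-dependent):

my.gantt.info <- read.table("C:/Documents and Settings/balakrishna/Desktop/one.csv",
                            header = TRUE, stringsAsFactors = FALSE)
my.gantt.info$starts <- as.Date(my.gantt.info$starts, format = "%d-%b-%y")
my.gantt.info$ends   <- as.Date(my.gantt.info$ends,   format = "%d-%b-%y")
str(my.gantt.info)   # check that starts/ends are now Date, not character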



if i run this above code i am getting the error like this

Error in x$starts : $ operator is invalid for atomic vectors.

can anybody help in this aspect it would be very appreciable.
Thanks in Advance.



David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R survival package error message - bug?!

2009-08-24 Thread Damjan Krstajic

Dear all,

I have encountered a weird behaviour in R 
survival package which seems to me to be a bug. 

The weird behaviour happens when I am using 
100 variables in the ridge function when calling 
coxph with following formula Surv(time = futime,
event = fustat, type = "right") ~ ridge(X1, X2, 
X3, X4, X5, X6, X7, X8, X9, X10, X11, X12, X13, 
X14, X15, X16, X17, X18, X19, X20, X21, X22, X23,
X24, X25, X26, X27, X28, X29, X30, X31, X32, X33,
X34, X35, X36, X37, X38, X39, X40, X41, X42, X43, 
X44, X45, X46, X47, X48, X49, X50, X51, X52, X53, 
X54, X55, X56, X57, X58, X59, X60, X61, X62, X63, 
X64, X65, X66, X67, X68, X69, X70, X71, X72, X73,
X74, X75, X76, X77, X78, X79, X80, X81, X82, X83,
X84, X85, X86, X87, X88, X89, X90, X91, X92, X93, 
X94, X95, X96, X97, X98, X99, X100, 
theta = lambda, scale = FALSE)

and get the error message
Error in model.frame.default(formula = form,
data = sdf) : invalid variable names
Calls: coxph -> eval -> eval -> model.frame -> model.frame.default
Execution halted

I am using R 2.8.0 and the latest version of 
the survival package 2.35-4.

Here is how you can re-create the error message

 library(survival)
Loading required package: splines
 x <- as.data.frame(matrix(rnorm(100*500), ncol=100))
 x$y <- 1:500
 x$status <- 1
 lambda <- 1.0
 ff <- as.formula(paste("Surv(y,status)~ridge(",
paste(names(x)[1:100], collapse=","),
",theta = lambda, scale = FALSE)"))
 coxph(ff,x)
Error in model.frame.default(formula = ff, 
data = x) : invalid variable names
 print (ff)
Surv(y, status) ~ ridge(V1, V2, V3, V4, V5, V6, V7, V8, V9, V10,
V11, V12, V13, V14, V15, V16, V17, V18, V19, V20, V21, V22,
V23, V24, V25, V26, V27, V28, V29, V30, V31, V32, V33, V34,
V35, V36, V37, V38, V39, V40, V41, V42, V43, V44, V45, V46,
V47, V48, V49, V50, V51, V52, V53, V54, V55, V56, V57, V58,
V59, V60, V61, V62, V63, V64, V65, V66, V67, V68, V69, V70,
V71, V72, V73, V74, V75, V76, V77, V78, V79, V80, V81, V82,
V83, V84, V85, V86, V87, V88, V89, V90, V91, V92, V93, V94,
V95, V96, V97, V98, V99, V100, theta = lambda, scale = FALSE)

Also I have found that the code breaks with number of variables greater
than 97. For 97 and less it works fine. I have found that it breaks at the
following line of the coxph function

if (is.R())
m <- eval(temp, parent.frame())

I have been trying to understand why it breaks there and have not
progressed much so far.

With kind regards
DK



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R with MPI

2009-08-24 Thread polemon
On Mon, Aug 24, 2009 at 2:20 PM, polemon pole...@gmail.com wrote:

 Hello, I plan to use R with my cluster with OpenMPI.
 I need the packaged 'snow' and 'Rmpi' for that, however, I get an error
 while downloading and installing them:
 When I do a:
 install.packages("Rmpi", dependencies=TRUE)

 I get this error:
 checking for mpi.h... no
 Try to find libmpi.so or libmpich.a
 checking for main in -lmpi... no
 libmpi not found. exiting...

 However, mpi.h is present via the openmpi-devel package on my RHEL 5.3.

 Some of those packages need sprng 2.0 (rsprng, for instance, which is a
 dependency for another MPI-related package). Sprng 2.0, however, isn't in
 developement for years, I wonder how I am supposed to keep my software up to
 date...

 Any ideas on how to workaround that mpi.h problem?

 Please help,

 --polemon


I did as described here:
http://www.cybaea.net/Blogs/Data/R-tips-Installing-Rmpi-on-Fedora-Linux.html

Since Fedora and RHEL are pretty equal, I gave that installation a shot, and
from what I can tell, I got pretty far.
The package installed well, but when I try to load it with library(Rmpi):

 library(Rmpi)
Error in dyn.load(file, DLLpath = DLLpath, ...) :
  unable to load shared library '/opt/R/lib64/R/library/Rmpi/libs/Rmpi.so':
  libmpi.so.0: cannot open shared object file: No such file or directory
Error in library(Rmpi) : .First.lib failed for 'Rmpi'
Error in dyn.unload(file.path(libpath, libs, paste(Rmpi,
.Platform$dynlib.ext,  :
  dynamic/shared library '/opt/R/lib64/R/library/Rmpi/libs/Rmpi.so' was not
loaded

As you can see, R is installed in /opt/R, libmpi.so.0 is available:

/usr/lib/lam/lib/libmpi.so.0
/usr/lib/lam/lib/libmpi.so.0.0.0
/usr/lib/openmpi/1.2.7-gcc/lib/libmpi.so
/usr/lib/openmpi/1.2.7-gcc/lib/libmpi.so.0
/usr/lib/openmpi/1.2.7-gcc/lib/libmpi.so.0.0.0
/usr/lib64/lam/lib/libmpi.so.0
/usr/lib64/lam/lib/libmpi.so.0.0.0
/usr/lib64/openmpi/1.2.7-gcc/lib/libmpi.so
/usr/lib64/openmpi/1.2.7-gcc/lib/libmpi.so.0
/usr/lib64/openmpi/1.2.7-gcc/lib/libmpi.so.0.0.0

What should I do, to make Rmpi available in R?

Cheers,

--polemon

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Convert list to data frame while controlling column types

2009-08-24 Thread Alexander Shenkin
On 8/24/2009 2:06 AM, Petr PIKAL wrote:
 Hi
 
 r-help-boun...@r-project.org napsal dne 23.08.2009 17:29:48:
 
 On 8/23/2009 9:58 AM, David Winsemius wrote:
 I still have problems with this statement. As I understand R, this
 should be impossible. I have looked at both you postings and neither of
 them clarify the issues. How can you have blanks or spaces in an R
 numeric vector?


 Just because I search numeric columns doesn't mean that my regex matches
 them!  I posted some info on my data frame in an earlier email:

 str(final_dataf)
 'data.frame':   1127 obs. of  43 variables:
 $ block  : Factor w/ 1 level "2": 1 1 1 1 1 1 1 1 1 1 ...
 $ treatment  : Factor w/ 4 levels "I","M","N","T": 1 1 1 1 1 1 ...
 $ transect   : Factor w/ 1 level "4": 1 1 1 1 1 1 1 1 1 1 ...
 $ tag: chr  NA "121AL" "122AL" "123AL" ...
 ...
  $ h1 : num  NA NA NA NA NA NA NA NA NA NA ...
 ...

 You can see that I do indeed have some numeric columns.  And while I
 
 Well, AFAICS you have a data frame with 3 columns which are factors and 1 
 which is character. I do not see any numeric column. If you want to change 
 block and transect to numeric you can use
 
 df$block <- as.numeric(as.character(df$block))

If you take a closer look at my data frame listing, you'll see that it
is 1127 obs. of  43 variables.  I edited the column listing for
readability, and you'll see even in my editing listing I do indeed have
one numeric column - h1.  And as I mentioned earlier, I use
colClasses, so no need to change anything to numeric here.

 
 search them for spaces, I only do so because my dataset isn't so large
 as to require me to exclude them from the search.  If my dataset grows
 too big at some point, I will exclude numeric columns, and other columns
 which cannot hold blanks or spaces.

 To clarify further with an example:

 df = data.frame(a=c(1,2,3,4,5), b=c("a", "", "c", "d", " "))
 df = as.data.frame(lapply(df, function(x){ is.na(x) <-
 + grep('^\\s*$',x); return(x) }), stringsAsFactors = FALSE)
 df
   ab
 1 1a
 2 2 NA
 3 3c
 4 4d
 5 5 NA
 
 which can be done also by
 levels(df[,2])[1:2] <- NA
 
 but maybe with less generalization

Yes - my point was to show how I looped through an entire data frame
looking for \\s*, even when some of the columns were numeric.  I gave
this simple example with a 2-column data frame to illustrate that point.

 str(df)
 'data.frame':   5 obs. of  2 variables:
  $ a: num  1 2 3 4 5
 $ b: Factor w/ 5 levels ""," ","a","c",..: 3 NA 4 5 NA

 And one final clarification: I left out as.data.frame in my previous
 solution.  So it now becomes:

 final_dataf = as.data.frame(lapply(final_dataf, function(x){ is.na(x)
 + <- grep('^\\s*$',x); return(x) }), stringsAsFactors = FALSE)
 
 Again not too much of clarification, in your first data frame second 
 column is a factor with some levels you want to convert to NA, which can 
 be done by different approaches.

This clarification was to show the code that worked (for posterity), as
my previous post left out an argument.  It seems that perhaps you missed
the previous emails.

 Your final_dataf is same as df.

Yes, that is the point.  As I mentioned in the first email of this
thread, I was trying to get around as.data.frame's automatic conversion
routines, in order to retain the original column types.  And it turned
out that gsub() was more of the problem than as.data.frame() was.
Please refer to the earlier emails for more information on that.

 Columns which shall be numeric and are read as factor/character by 
 read.table likely contain some values which strictly can not be considered 
 numeric. You can see them quite often in Excel like programs and some 
 examples are
 
 1..2, o.5, 12.o5, and/or spaces, -, etc.
 
 and you usually need handle them by hand.
 
 Regards
 Petr
 
 Hope that clarifies things, and thanks for your help.

 Thanks,
 Allie


 On 8/23/2009 9:58 AM, David Winsemius wrote:
 On Aug 23, 2009, at 2:47 AM, Alexander Shenkin wrote:

 On 8/21/2009 3:04 PM, David Winsemius wrote:
 On Aug 21, 2009, at 3:41 PM, Alexander Shenkin wrote:

 Thanks everyone for their replies, both on- and off-list.  I should
 clarify, since I left out some important information.  My original
 dataframe has some numeric columns, which get changed to character 
 by
 gsub when I replace spaces with NAs.
 If you used is.na() -  that would not happen to a true _numeric_ 
 vector
 (but, of course, a numeric vector in a data.frame could not have 
 spaces,
 so you are probably not using precise terminology).
 I do have true numeric columns, but I loop through my entire 
 dataframe
 looking for blanks and spaces for convenience.
 I still have problems with this statement. As I understand R, this
 should be impossible. I have looked at both you postings and neither 
 of
 them clarify the issues. How can you have blanks or spaces in an R
 numeric vector?


 You would be well
 advised to include the actual code rather than applying loose
 

Re: [R] Selecting groups with R

2009-08-24 Thread Michael A. Miller
To drop empty factor levels from a subset, I use the following:

a.subset <- subset(dataset, Color!='BLUE')
ifac <- sapply(a.subset, is.factor)
a.subset[ifac] <- lapply(a.subset[ifac], factor)

Mike


 dataset
  Color Score
1   RED10
2   RED13
3   RED12
4 WHITE22
5 WHITE27
6 WHITE25
7  BLUE18
8  BLUE17
9  BLUE16
 table(dataset)
   Score
Color   10 12 13 16 17 18 22 25 27
  BLUE   0  0  0  1  1  1  0  0  0
  RED1  1  1  0  0  0  0  0  0
  WHITE  0  0  0  0  0  0  1  1  1
 
 a.subset <- subset(dataset, Color!='BLUE')
 a.subset
  Color Score
1   RED10
2   RED13
3   RED12
4 WHITE22
5 WHITE27
6 WHITE25
 
 table(a.subset)
   Score
Color   10 12 13 22 25 27
  BLUE   0  0  0  0  0  0
  RED1  1  1  0  0  0
  WHITE  0  0  0  1  1  1
 
 ifac <- sapply(a.subset,is.factor)
 a.subset[ifac] <- lapply(a.subset[ifac],factor)
 
 table(a.subset)
   Score
Color   10 12 13 22 25 27
  RED1  1  1  0  0  0
  WHITE  0  0  0  1  1  1

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] error in creating gantt chart.

2009-08-24 Thread Stefan Grosse
On Mon, 24 Aug 2009 05:22:17 -0700 (PDT) rajclinasia r...@clinasia.com
wrote:

R my.gantt.info <- read.csv("C:/Documents and
R Settings/balakrishna/Desktop/one.csv")
R 
R and for create gantt chart i used below code.
R 
R  gantt.chart("my.gantt.info")

This again is why others have pointed you first to have a look at the
basics of R which can be read at:
http://cran.r-project.org/manuals.html
or more extensively in:
http://cran.r-project.org/other-docs.html

You start programming without knowing the very basics of R, which in
this case means the data structures. my.gantt.info is a data frame. But with
the quotation marks you don't even pass this object to
gantt.chart; instead you have created a single
character value, namely the text between the quotation marks, which has
nothing to do with the data - hence the "atomic
vector" error message...

If you had used the real data you would need the object's name, i.e.
gantt.chart(my.gantt.info), but this would not work either since,
as I already pointed out in an earlier mail last week, you need a
list and not a data.frame. So please also consider the documentation,
which you can read with
?gantt.chart
which directly points you towards this.

So what you need is a list. What that looks like you can see in the
example of
example(gantt.chart)
What is a list? That's basics: it's an object type (there are data.frames,
matrices, lists and so on).
?list
?as.list
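For concreteness, a sketch of the list form that gantt.chart() expects (labels, starts, ends, priorities), rebuilt from the data in the original post; this assumes the gantt.chart in question is the one from plotrix, and the "%d-%b-%y" format assumes English month abbreviations:

library(plotrix)
gantt.info <- list(
  labels = c("first task", "second task", "third task", "fourth task", "fifth task"),
  starts = as.POSIXct(strptime(c("1-Jan-04", "2-Feb-04", "3-Mar-04", "4-Apr-04", "5-May-04"),
                               format = "%d-%b-%y")),
  ends   = as.POSIXct(strptime(c("3-Mar-04", "5-May-04", "6-Jun-04", "8-Aug-04", "9-Sep-04"),
                               format = "%d-%b-%y")),
  priorities = rep(1, 5))
gantt.chart(gantt.info)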

Please learn the basics before asking such questions; it saves a lot of
time for you, and for us as well, because we would not need to answer
such questions.

Stefan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] CRAN (and crantastic) updates this week

2009-08-24 Thread Hadley Wickham
CRAN (and crantastic) updates this week

New packages



Updated packages




New reviews
---



This email provided as a service for the R community by
http://crantastic.org.

Like it?  Hate it?  Please let us know: crana...@gmail.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CRAN (and crantastic) updates this week

2009-08-24 Thread hadley wickham
Sorry, we had some problems with the initial sending of our weekly digest
which resulted in a rather empty email. Here is the correct version:

CRAN (and crantastic) updates this week

New packages


* atm (0.1.0)
Charlotte Maia
http://crantastic.org/packages/atm

An R package for creating additive models with semiparametric
predictors, emphasizing term objects, especially (1) implementation
of a term class hierarchy, and (2) interpretation and evaluation of
term estimates as functions of explanatories.

* gnumeric (0.5-1)
Karoly Antal
http://crantastic.org/packages/gnumeric

Read data files readable into R. Can read whole sheet or a range, from
several file formats, including the native format of gnumeric.
Reading is done by using ssconvert (a file converter utility
included in the gnumeric distribution) to convert the requested part
to CSV.

* KFAS (0.3.1)
Jouni Lehtonen
http://crantastic.org/packages/KFAS

Fast multivariate Kalman filter, smoother, simulation smoother and
forecasting. Uses exact diffuse initialisation when distributions of
some or all elements of initial state vector are unknown.

* munsell (0.1)
Charlotte Wickham
http://crantastic.org/packages/munsell

Functions for exploring and using the Munsell colour system

* oosp (0.1.0)
Charlotte Maia
http://crantastic.org/packages/oosp

An R package designed to support object oriented statistical
programming, especially by extending S3 capabilities, providing
pointer and component objects, and providing basic support for
symbolic-numeric statistical programming.

* PLIS (1.0)
Zhi Wei
http://crantastic.org/packages/PLIS

PLIS is a multiple testing procedure for testing several groups of
hypotheses. Linear dependency is expected from the hypotheses within
the same group and is modeled by hidden Markov Models. It is noted
that, for PLIS, a smaller p value does not necessarily imply more
significance because of dependency among the hypotheses. A typical
application of PLIS is to analyze genome wide association studies
datasets, where SNPs from the same chromosome are treated as a group
and exhibit strong linear genomic dependency.

* rrv (0.0.1)
Charlotte Maia
http://crantastic.org/packages/rrv

An incomplete R package for working with random return variables. The
current package provides limited support for formatting money. More
features will be added in the future.

* ttrTests (1.0)
David St John
http://crantastic.org/packages/ttrTests

Four core functions evaluate the efficacy of a technical trading rule.
- Conditional return statistics - Bootstrap resampling statistics -
Reality Check for data snooping bias among parameter choices -
Robustness, or Persistence, of parameter choices


Updated packages


AdMit (1-01.03), alr3 (1.1.9), alr3 (1.1.10), cmprskContin (1.1),
dataframes2xls (0.4.3), digeR (1.2), doBy (4.0.1), DoE.base (0.7),
FrF2 (0.97-1), FrF2 (0.97-3), hdrcde (2.10), HH (2.1-30), JM (0.4-0),
memisc (0.95-21), minet (2.0.0), MLDS (0.2-0), monomvn (1.7-3), mrt
(0.3), nsRFA (0.6-9), PBSmodelling (2.21), plink (1.2-0), plotrix
(2.7), RaschSampler (0.8-2), RCurl (1.0-0), rgdal (0.6-14), RSiena
(1.0.5), rtv (0.3.0), sdef (1.1), SIS (0.2), spdep (0.4-36)

New reviews
---

* sqldf, by m.e.driscoll
http://crantastic.org/reviews/26

* SensoMineR, by padmanabhan.vijayan
http://crantastic.org/reviews/25

* plyr, by eamani
http://crantastic.org/reviews/24



This email provided as a service for the R community by
http://crantastic.org.

Like it? Hate it? Please let us know: crana...@gmail.com.

On Mon, Aug 24, 2009 at 4:18 PM, Hadley Wickham crana...@gmail.com wrote:

 CRAN (and crantastic) updates this week

 New packages
 


 Updated packages
 



 New reviews
 ---



 This email provided as a service for the R community by
 http://crantastic.org.

 Like it?  Hate it?  Please let us know: crana...@gmail.com.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CRAN (and crantastic) updates this week

2009-08-24 Thread Ronggui Huang
I like it. Thanks.

2009/8/24 hadley wickham crana...@gmail.com:
 Sorry, we had some problems with the initial sending of our weekly digest
 which resulted in a rather empty email. Here is the correct version:

 CRAN (and crantastic) updates this week

 New packages
 

 * atm (0.1.0)
 Charlotte Maia
 http://crantastic.org/packages/atm

 An R package for creating additive models with semiparametric
 predictors, emphasizing term objects, especially (1) implementation
 of a term class hierarchy, and (2) interpretation and evaluation of
 term estimates as functions of explanatories.

 * gnumeric (0.5-1)
 Karoly Antal
 http://crantastic.org/packages/gnumeric

 Read data files readable into R. Can read whole sheet or a range, from
 several file formats, including the native format of gnumeric.
 Reading is done by using ssconvert (a file converter utility
 included in the gnumeric distribution) to convert the requested part
 to CSV.

 * KFAS (0.3.1)
 Jouni Lehtonen
 http://crantastic.org/packages/KFAS

 Fast multivariate Kalman filter, smoother, simulation smoother and
 forecasting. Uses exact diffuse initialisation when distributions of
 some or all elements of initial state vector are unknown.

 * munsell (0.1)
 Charlotte Wickham
 http://crantastic.org/packages/munsell

 Functions for exploring and using the Munsell colour system

 * oosp (0.1.0)
 Charlotte Maia
 http://crantastic.org/packages/oosp

 An R package designed to support object oriented statistical
 programming, especially by extending S3 capabilities, providing
 pointer and component objects, and providing basic support for
 symbolic-numeric statistical programming.

 * PLIS (1.0)
 Zhi Wei
 http://crantastic.org/packages/PLIS

 PLIS is a multiple testing procedure for testing several groups of
 hypotheses. Linear dependency is expected from the hypotheses within
 the same group and is modeled by hidden Markov Models. It is noted
 that, for PLIS, a smaller p value does not necessarily imply more
 significance because of dependency among the hypotheses. A typical
 application of PLIS is to analyze genome wide association studies
 datasets, where SNPs from the same chromosome are treated as a group
 and exhibit strong linear genomic dependency.

 * rrv (0.0.1)
 Charlotte Maia
 http://crantastic.org/packages/rrv

 An incomplete R package for working with random return variables. The
 current package provides limited support for formatting money. More
 features will be added in the future.

 * ttrTests (1.0)
 David St John
 http://crantastic.org/packages/ttrTests

 Four core functions evaluate the efficacy of a technical trading rule.
 - Conditional return statistics - Bootstrap resampling statistics -
 Reality Check for data snooping bias among parameter choices -
 Robustness, or Persistence, of parameter choices


 Updated packages
 

 AdMit (1-01.03), alr3 (1.1.9), alr3 (1.1.10), cmprskContin (1.1),
 dataframes2xls (0.4.3), digeR (1.2), doBy (4.0.1), DoE.base (0.7),
 FrF2 (0.97-1), FrF2 (0.97-3), hdrcde (2.10), HH (2.1-30), JM (0.4-0),
 memisc (0.95-21), minet (2.0.0), MLDS (0.2-0), monomvn (1.7-3), mrt
 (0.3), nsRFA (0.6-9), PBSmodelling (2.21), plink (1.2-0), plotrix
 (2.7), RaschSampler (0.8-2), RCurl (1.0-0), rgdal (0.6-14), RSiena
 (1.0.5), rtv (0.3.0), sdef (1.1), SIS (0.2), spdep (0.4-36)

 New reviews
 ---

 * sqldf, by m.e.driscoll
 http://crantastic.org/reviews/26

 * SensoMineR, by padmanabhan.vijayan
 http://crantastic.org/reviews/25

 * plyr, by eamani
 http://crantastic.org/reviews/24



 This email provided as a service for the R community by
 http://crantastic.org.

 Like it? Hate it? Please let us know: crana...@gmail.com.

 On Mon, Aug 24, 2009 at 4:18 PM, Hadley Wickham crana...@gmail.com wrote:

 CRAN (and crantastic) updates this week

 New packages
 


 Updated packages
 



 New reviews
 ---



 This email provided as a service for the R community by
 http://crantastic.org.

 Like it?  Hate it?  Please let us know: crana...@gmail.com.


        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
HUANG Ronggui, Wincent
PhD Candidate
Dept of Public and Social Administration
City University of Hong Kong
Home page: http://asrr.r-forge.r-project.org/rghuang.html

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Recurrent neural network (RNN) package?

2009-08-24 Thread J_Laberga

Hello,

I'm trying to tackle a problem that would require the implementation of a
recurrent NN. However, even though CRAN is very big, I can't seem to
find a package for this. Does anybody here know if one exists?


BR,
John
-- 
View this message in context: 
http://www.nabble.com/Recurrent-neural-network-%28RNN%29-package--tp25116968p25116968.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CRAN (and crantastic) updates this week

2009-08-24 Thread John Kerpel
Great idea - thx!

On Mon, Aug 24, 2009 at 9:30 AM, hadley wickham crana...@gmail.com wrote:

 Sorry, we had some problems with the initial sending of our weekly digest
 which resulted in a rather empty email. Here is the correct version:

 CRAN (and crantastic) updates this week

 New packages
 

 * atm (0.1.0)
 Charlotte Maia
 http://crantastic.org/packages/atm

 An R package for creating additive models with semiparametric
 predictors, emphasizing term objects, especially (1) implementation
 of a term class hierarchy, and (2) interpretation and evaluation of
 term estimates as functions of explanatories.

 * gnumeric (0.5-1)
 Karoly Antal
 http://crantastic.org/packages/gnumeric

 Read data files readable into R. Can read whole sheet or a range, from
 several file formats, including the native format of gnumeric.
 Reading is done by using ssconvert (a file converter utility
 included in the gnumeric distribution) to convert the requested part
 to CSV.

 * KFAS (0.3.1)
 Jouni Lehtonen
 http://crantastic.org/packages/KFAS

 Fast multivariate Kalman filter, smoother, simulation smoother and
 forecasting. Uses exact diffuse initialisation when distributions of
 some or all elements of initial state vector are unknown.

 * munsell (0.1)
 Charlotte Wickham
 http://crantastic.org/packages/munsell

 Functions for exploring and using the Munsell colour system

 * oosp (0.1.0)
 Charlotte Maia
 http://crantastic.org/packages/oosp

 An R package designed to support object oriented statistical
 programming, especially by extending S3 capabilities, providing
 pointer and component objects, and providing basic support for
 symbolic-numeric statistical programming.

 * PLIS (1.0)
 Zhi Wei
 http://crantastic.org/packages/PLIS

 PLIS is a multiple testing procedure for testing several groups of
 hypotheses. Linear dependency is expected from the hypotheses within
 the same group and is modeled by hidden Markov Models. It is noted
 that, for PLIS, a smaller p value does not necessarily imply more
 significance because of dependency among the hypotheses. A typical
  application of PLIS is to analyze genome wide association studies
 datasets, where SNPs from the same chromosome are treated as a group
 and exhibit strong linear genomic dependency.

 * rrv (0.0.1)
 Charlotte Maia
 http://crantastic.org/packages/rrv

 An incomplete R package for working with random return variables. The
 current package provides limited support for formatting money. More
 features will be added in the future.

 * ttrTests (1.0)
 David St John
 http://crantastic.org/packages/ttrTests

 Four core functions evaluate the efficacy of a technical trading rule.
 - Conditional return statistics - Bootstrap resampling statistics -
 Reality Check for data snooping bias among parameter choices -
 Robustness, or Persistence, of parameter choices


 Updated packages
 

 AdMit (1-01.03), alr3 (1.1.9), alr3 (1.1.10), cmprskContin (1.1),
 dataframes2xls (0.4.3), digeR (1.2), doBy (4.0.1), DoE.base (0.7),
 FrF2 (0.97-1), FrF2 (0.97-3), hdrcde (2.10), HH (2.1-30), JM (0.4-0),
 memisc (0.95-21), minet (2.0.0), MLDS (0.2-0), monomvn (1.7-3), mrt
 (0.3), nsRFA (0.6-9), PBSmodelling (2.21), plink (1.2-0), plotrix
 (2.7), RaschSampler (0.8-2), RCurl (1.0-0), rgdal (0.6-14), RSiena
 (1.0.5), rtv (0.3.0), sdef (1.1), SIS (0.2), spdep (0.4-36)

 New reviews
 ---

 * sqldf, by m.e.driscoll
 http://crantastic.org/reviews/26

 * SensoMineR, by padmanabhan.vijayan
 http://crantastic.org/reviews/25

 * plyr, by eamani
 http://crantastic.org/reviews/24



 This email provided as a service for the R community by
 http://crantastic.org.

 Like it? Hate it? Please let us know: crana...@gmail.com.

 On Mon, Aug 24, 2009 at 4:18 PM, Hadley Wickham crana...@gmail.com
 wrote:

  CRAN (and crantastic) updates this week
 
  New packages
  
 
 
  Updated packages
  
 
 
 
  New reviews
  ---
 
 
 
  This email provided as a service for the R community by
  http://crantastic.org.
 
  Like it?  Hate it?  Please let us know: crana...@gmail.com.
 

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.htmlhttp://www.r-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Two lines, two scales, one graph

2009-08-24 Thread Rick


First of all, thanks to everyone who answers these questions - it's
most helpful.

I'm new to R and despite searching have not found an example of what I
want to do (there are some good beginner's guides and a lot of complex
plots, but  I haven't found this).

I would like to plot two variables against the same abscissa values. They
have different scales. I've found how to make a second axis on the right
for labeling, but not how to plot two lines at different scales.

 Thanks,


 - Rick


r...@ece.pdx.edu
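One common approach is to overlay a second plot with par(new = TRUE) and draw its scale on the right with axis(4). A minimal sketch with made-up data (the variable names are only illustrative):

x  <- 1:10
y1 <- x^2          # first series, left-hand scale
y2 <- exp(x/2)     # second series, right-hand scale

par(mar = c(5, 4, 4, 4) + 0.1)   # leave room on the right for the second axis
plot(x, y1, type = "l", col = "blue", ylab = "y1")
par(new = TRUE)                  # overlay a second plot in the same region
plot(x, y2, type = "l", col = "red", axes = FALSE, xlab = "", ylab = "")
axis(side = 4)                   # right-hand axis for the second scale
mtext("y2", side = 4, line = 3)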

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] robust method to obtain a correlation coeff?

2009-08-24 Thread Christian Meesters
Hi,

Being a R-newbie I am wondering how to calculate a correlation
coefficient (preferably with an associated p-value) for data like:

 d[,1]
 [1] 25.5 25.3 25.1   NA 23.3 21.5 23.8 23.2 24.2 22.7 27.6 24.2 ...
 d[,2]
[1]  0.0 11.1  0.0   NA  0.0 10.1 10.6  9.5  0.0 57.9  0.0  0.0  ...

Apparently corr(d) from the boot-library fails with NAs in the data,
also cor.test cannot cope with a different number of NAs. Is there a
solution to this problem (calculating a correlation coefficient and
ignoring different number of NAs), e.g. Pearson's corr coeff?

If so, please point me to the relevant piece of documentation.

TIA
Christian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] robust method to obtain a correlation coeff?

2009-08-24 Thread Ted Harding
On 24-Aug-09 14:47:02, Christian Meesters wrote:
 Hi,
 Being a R-newbie I am wondering how to calculate a correlation
 coefficient (preferably with an associated p-value) for data like:
 
 d[,1]
  [1] 25.5 25.3 25.1   NA 23.3 21.5 23.8 23.2 24.2 22.7 27.6 24.2 ...
 d[,2]
 [1]  0.0 11.1  0.0   NA  0.0 10.1 10.6  9.5  0.0 57.9  0.0  0.0  ...
 
 Apparently corr(d) from the boot-library fails with NAs in the data,

Yes, apparently corr() has no option for dealing with NAs.

 also cor.test cannot cope with a different number of NAs.

On the other hand, cor.test() does have an option na.action
which, by default, is the same as what is in getOption(na.action).

In my R installation, this, by default, is na.omit. This has the
effect that, for any pair in (x,y) where at least one of the pair
is NA, that pair will be omitted from the calculation. For example,
basing two vectors x,y on your data above, and a third z which is y
with an extra NA:

  x <- c(25.5,25.3,25.1,NA,23.3,21.5,23.8,23.2,24.2,22.7,27.6,24.2)
  y <- c( 0.0,11.1, 0.0,NA, 0.0,10.1,10.6, 9.5, 0.0,57.9, 0.0, 0.0)
  z <- y; z[8] <- NA

I get
  cor.test(x,y)
  # Pearson's product-moment correlation
  # data:  x and y 
  # t = -1.3986, df = 9, p-value = 0.1954
  # alternative hypothesis: true correlation is not equal to 0 
  # 95 percent confidence interval:
  #  -0.8156678  0.2375438 
  # sample estimates:
  #   cor 
  # -0.422542 
  # cor.test(x,z)
  # Pearson's product-moment correlation
  # data:  x and z 
  # t = -1.3466, df = 8, p-value = 0.215
  # alternative hypothesis: true correlation is not equal to 0 
  # 95 percent confidence interval:
  #  -0.8338184  0.2738824 
  # sample estimates:
  #cor 
  # -0.4298726 

So it has worked in both cases (see the difference in 'df'), despite
the different numbers of NAs in x and z.

For functions such as corr() which do not have provision for omitting
NAs, you can fix it up for yourself before calling the function.
In the case of your two series d[,1], d[,2] you could use an index
variable to select cases:

  ix <- (!is.na(d[,1])) & (!is.na(d[,2]))
  corr(d[ix,])

With my variables x,y,z I get

  ix.1 <- (!is.na(x)) & (!is.na(y))
  ix.2 <- (!is.na(x)) & (!is.na(z))
  d.1  <- cbind(x,y)
  corr(d.1[ix.1,])
  # [1] -0.422542  ## (and -0.422542 from cor.test above as well)
  d.2  <- cbind(x,z)
  corr(d.2[ix.2,])
  # [1] -0.4298726 ## (and -0.4298726 from cor.test above as well)

Hoping this helps,
Ted.

 Is there a
 solution to this problem (calculating a correlation coefficient and
 ignoring different number of NAs), e.g. Pearson's corr coeff?
 
 If so, please point me to the relevant piece of documentation.
 
 TIA
 Christian
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
Fax-to-email: +44 (0)870 094 0861
Date: 24-Aug-09   Time: 16:26:53
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] robust method to obtain a correlation coeff?

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 11:26 AM, (Ted Harding) wrote:


On 24-Aug-09 14:47:02, Christian Meesters wrote:

Hi,
Being a R-newbie I am wondering how to calculate a correlation
coefficient (preferably with an associated p-value) for data like:


d[,1]

[1] 25.5 25.3 25.1   NA 23.3 21.5 23.8 23.2 24.2 22.7 27.6 24.2 ...

d[,2]

[1]  0.0 11.1  0.0   NA  0.0 10.1 10.6  9.5  0.0 57.9  0.0  0.0  ...

Apparently corr(d) from the boot-library fails with NAs in the data,


Yes, apparently corr() has no option for dealing with NAs.


also cor.test cannot cope with a different number of NAs.


On the other hand, cor.test() does have an option na.action
which, by default, is the same as what is in getOption(na.action).

In my R installation, this, by default, is na.omit. This has the
effect that, for any pair in (x,y) where at least one of the pair
is NA, that pair will be omitted from the calculation. For example,
basing two vectors x,y on your data above, and a third z which is y
with an extra NA:

  x <- c(25.5,25.3,25.1,NA,23.3,21.5,23.8,23.2,24.2,22.7,27.6,24.2)
  y <- c( 0.0,11.1, 0.0,NA, 0.0,10.1,10.6, 9.5, 0.0,57.9, 0.0, 0.0)
  z <- y; z[8] <- NA

I get
 cor.test(x,y)
snipped unneeded output
 # sample estimates:
 #cor
 # -0.4298726

So it has worked in both cases (see the difference in 'df'), despite
the different numbers of NAs in x and z.


You may not need to go through the material that follows. There are  
already a set of functions to handle such concerns:


?na.omit will bring a help page describing:

na.fail(object, ...) na.omit(object, ...) na.exclude(object, ...)  
na.pass(object, ...)


It reminded me that:

na.action: the name of a function for treating missing values (NA's)  
for certain situations.


... but I do not know what those certain situations really are.


For functions such as corr() which do not have provision for omitting
NAs, you can fix it up for yourself before calling the function.
In the case of your two series d[,1], d[,2] you could use an index
variable to select cases:

  ix <- (!is.na(d[,1])) & (!is.na(d[,2]))
  corr(d[ix,])

With my variables x,y,z I get

  ix.1 <- (!is.na(x)) & (!is.na(y))
  ix.2 <- (!is.na(x)) & (!is.na(z))
  d.1  <- cbind(x,y)
  corr(d.1[ix.1,])
  # [1] -0.422542  ## (and -0.422542 from cor.test above as well)
  d.2  <- cbind(x,z)
  corr(d.2[ix.2,])
  # [1] -0.4298726 ## (and -0.4298726 from cor.test above as well)

Hoping this helps,
Ted.


Is there a
solution to this problem (calculating a correlation coefficient and
ignoring different number of NAs), e.g. Pearson's corr coeff?

If so, please point me to the relevant piece of documentation.



David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] robust method to obtain a correlation coeff?

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 11:38 AM, David Winsemius wrote:



On Aug 24, 2009, at 11:26 AM, (Ted Harding) wrote:


On 24-Aug-09 14:47:02, Christian Meesters wrote:

Hi,
Being a R-newbie I am wondering how to calculate a correlation
coefficient (preferably with an associated p-value) for data like:


d[,1]

[1] 25.5 25.3 25.1   NA 23.3 21.5 23.8 23.2 24.2 22.7 27.6 24.2 ...

d[,2]

[1]  0.0 11.1  0.0   NA  0.0 10.1 10.6  9.5  0.0 57.9  0.0  0.0  ...

Apparently corr(d) from the boot-library fails with NAs in the data,


Yes, apparently corr() has no option for dealing with NAs.


also cor.test cannot cope with a different number of NAs.


On the other hand, cor.test() does have an option na.action
which, by default, is the same as what is in getOption(na.action).

In my R installation, this, by default, is na.omit. This has the
effect that, for any pair in (x,y) where at least one of the pair
is NA, that pair will be omitted from the calculation. For example,
basing two vectors x,y on your data above, and a third z which is y
with an extra NA:

x <- c(25.5,25.3,25.1,NA,23.3,21.5,23.8,23.2,24.2,22.7,27.6,24.2)
y <- c( 0.0,11.1, 0.0,NA, 0.0,10.1,10.6, 9.5, 0.0,57.9, 0.0, 0.0)
z <- y; z[8] <- NA

I get
cor.test(x,y)
snipped unneeded output
# sample estimates:
#cor
# -0.4298726

So it has worked in both cases (see the difference in 'df'), despite
the different numbers of NAs in x and z.


You may not need to go through the material that follows. There are  
already a set of functions to handle such concerns:


?na.omit will bring a help page describing:

na.fail(object, ...) na.omit(object, ...) na.exclude(object, ...)  
na.pass(object, ...)




Apologies; this was a bit hastily constructed. What I was quoting in  
what follows was from the Options help page and Options set in  
package stats section of that help page.


na.action: the name of a function for treating missing values (NA's)  
for certain situations.


... but I do not know what those certain situations really are.
So there are some function that may be affected by settings of  
options(na.action) but I cannot tell you where to find a list of  
such functions.





For functions such as corr() which do not have provision for omitting
NAs, you can fix it up for yourself before calling the function.
In the case of your two series d[,1], d[,2] you could use an index
variable to select cases:

ix <- (!is.na(d[,1])) & (!is.na(d[,2]))
corr(d[ix,])

With my variables x,y,z I get

ix.1 <- (!is.na(x)) & (!is.na(y))
ix.2 <- (!is.na(x)) & (!is.na(z))
d.1  <- cbind(x,y)
corr(d.1[ix.1,])
# [1] -0.422542  ## (and -0.422542 from cor.test above as well)
d.2  <- cbind(x,z)
corr(d.2[ix.2,])
# [1] -0.4298726 ## (and -0.4298726 from cor.test above as well)

Hoping this helps,
Ted.






David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Multiply List by a Numeric

2009-08-24 Thread Brigid Mooney
I apologize for what seems like it should be a straighforward query.

I am trying to multiply a list by a numeric and thought there would be a
straightforward way to do this, but the best solution I found so far has a
for loop.
Everything else I try seems to throw an error non-numeric argument to
binary operator

Consider the example:

a - 1
b - 1:2
c - 1:3
abc - list(a,b,c)
To multiply every element of abc by a numeric, say 3, I wrote a for-loop:

for (i in 1:length(abc))
{
abc[[i]] - 3*abc[[i]]
}

Is this really the simplest way or am I missing something?

Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] transforming data glm

2009-08-24 Thread Ben Bolker



Mcdonald, Grant wrote:
 
 Dear sir,
 
 I am fitting a glm with default identity link:
 
 
 
model <- glm(timetoacceptsecs~maleage*maletub*relweight*malemobtrue*femmobtrue)
 
 the model is overdisperesed and plot model shows a low level of linearity
 of the residuals. 
 
I don't see how the model can be *over*dispersed unless you are using
 a family
 with a fixed scale parameter (binomial/Poisson/etc.) ?
 
  The overdispersion and linearity of residulas on the normal Q-Q plot is
 corrected well by using:
 
 
 
model <- glm(log(timetoacceptsecs)~maleage*maletub*relweight*malemobtrue*femmobtrue)
 
 Boxcox of my model also suggests that the log transformation is what i
 should do.
 
 I ask how i am able to do this by changing the link function or error
 family of my glm and not diretly taking the log of the response variable.  
 
 For instance:
 model <- glm(log(timetoacceptsecs)~maleage*maletub*relweight*malemobtrue*femmobtrue,
 family=poisson)
 does not improve my model in terms of overdispersion etc as much as taking
 the log.
 
 

I don't see why you are using a Poisson family for data that are
(apparently, based on their name
"time to accept in seconds") -- unless you have some particular reason to
believe that in your
system they should follow a Poisson. It seems unlikely -- some form of
waiting time distribution
(exponential, gamma, Weibull) seems more plausible.

   the difference between

  glm(y~x,family=gaussian(link=log))

and

  glm(log(y)~x, family=gaussian(link=identity))

(which is essentially equivalent to glm(log(y)~x) or lm(log(y)~x))

  is in whether the error is assumed to be normal with a constant
variance on the original scale (the first method) or on the
log-transformed scale (the second method)
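A sketch of that distinction, with purely hypothetical names (data frame d, response y > 0, predictor x); the third line shows a Gamma alternative that is often more natural for waiting times:

m1 <- glm(y ~ x, family = gaussian(link = "log"), data = d)  # normal errors on the original scale
m2 <- lm(log(y) ~ x, data = d)                               # normal errors on the log scale
m3 <- glm(y ~ x, family = Gamma(link = "log"), data = d)     # gamma errors, log link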

  note that you have to be careful about model comparisons between
continuous data transformed to different scales.

  Bottom line: I don't see what's wrong with your second model.  Why not
just use it?

-- 
View this message in context: 
http://www.nabble.com/transforming-data-glm-tp25115147p25118604.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] table function

2009-08-24 Thread Inchallah Yarab
hi,

I want to use the function table to build a table not of frequencies (the number of
times a value is repeated in a list or a data frame) but according to classes of
values.
I don't find a clear explanation in the examples of ?table.

example

x   y   z
1   0    100
5   1   1500
6   1   1200
2   2    500
1   1   3500
5   2   2000
8   5   4500

I want to make a table summarizing the number of observations where z is in
[0-1000], [1000-3000], [> 3000].

thank you very much for your help


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] robust method to obtain a correlation coeff?

2009-08-24 Thread Bert Gunter

Inline below.

Bert Gunter
Genentech Nonclinical Biostatisics

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of David Winsemius
Sent: Monday, August 24, 2009 8:53 AM
To: David Winsemius
Cc: r-help@r-project.org Help; ted.hard...@manchester.ac.uk
Subject: Re: [R] robust method to obtain a correlation coeff?


On Aug 24, 2009, at 11:38 AM, David Winsemius wrote:

 ... but I do not know what those certain situations really are.
So there are some function that may be affected by settings of  
options(na.action) but I cannot tell you where to find a list of  
such functions.
---
Because this is up to the whim of package developers, there is no such list.
In general, most of the modeling functions of base R and recommended
packages, e.g. lm, glm, rlm in MASS, lme in nlme,... have an na.action
argument. But not universally: e.g. loess() has an na.action argument, but
lowess() does not.

A real gotcha with missing values is that some of R's core arithmetic
functions like mean(), median(), sum() etc. use a logical argument, na.rm =
TRUE or FALSE to control the handling of missings (as I'm sure you know).

-- Bert

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multiply List by a Numeric

2009-08-24 Thread Marc Schwartz

On Aug 24, 2009, at 10:58 AM, Brigid Mooney wrote:


I apologize for what seems like it should be a straighforward query.

I am trying to multiply a list by a numeric and thought there would  
be a
straightforward way to do this, but the best solution I found so far  
has a

for loop.
Everything else I try seems to throw an error non-numeric argument to
binary operator

Consider the example:

a <- 1
b <- 1:2
c <- 1:3
abc <- list(a,b,c)
To multiply every element of abc by a numeric, say 3, I wrote a for- 
loop:


for (i in 1:length(abc))
{
abc[[i]] <- 3*abc[[i]]
}

Is this really the simplest way or am I missing something?

Thanks!


Try:

 abc
[[1]]
[1] 1

[[2]]
[1] 1 2

[[3]]
[1] 1 2 3


 lapply(abc, "*", 3)
[[1]]
[1] 3

[[2]]
[1] 3 6

[[3]]
[1] 3 6 9


See ?lapply for more information.

HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multiply List by a Numeric

2009-08-24 Thread Peter Ehlers

Try

lapply(abc, function(x) x*3)

Peter Ehlers

Brigid Mooney wrote:

I apologize for what seems like it should be a straighforward query.

I am trying to multiply a list by a numeric and thought there would be a
straightforward way to do this, but the best solution I found so far has a
for loop.
Everything else I try seems to throw the error "non-numeric argument to
binary operator".

Consider the example:

a <- 1
b <- 1:2
c <- 1:3
abc <- list(a,b,c)
To multiply every element of abc by a numeric, say 3, I wrote a for-loop:

for (i in 1:length(abc))
{
abc[[i]] <- 3*abc[[i]]
}

Is this really the simplest way or am I missing something?

Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Combining matrices

2009-08-24 Thread Daniel Nordlund
If I have two matrices like

x <- matrix(rep(c(1,2,3),3),3)
y <- matrix(rep(c(4,5,6),3),3)

How can I combine  them to get ?

1 1 1 4 4 4
1 1 1 5 5 5
1 1 1 6 6 6
2 2 2 4 4 4
2 2 2 5 5 5
2 2 2 6 6 6
3 3 3 4 4 4
3 3 3 5 5 5
3 3 3 6 6 6

The number of rows and the actual numbers above are unimportant, they are given 
so as to illustrate how I want to combine the matrices.  I.e., I am looking for 
a general way to combine the first row of x with each row of y, then the second 
row of x with y, 

Thanks,

Dan

Daniel Nordlund
Bothell, WA USA

Thanks for 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem with BRugs

2009-08-24 Thread R Heberto Ghezzo, Dr
Hello, I am sorry, I have this problem before and Uwe send me the answer but I 
misplaced it and can not find it.
writing a model for BRugs

> library(BRugs)
Loading required package: coda
Loading required package: lattice
Welcome to BRugs running on OpenBUGS version 3.0.3
> setwd("c:/tmp")
Error in setwd("c:/tmp") : cannot change working directory
> mo <- function(){
+   for (k in 1:p){
+ delta[1,k] ~ dnorm(0,0.1)I(,delta[2,k])
Error: unexpected symbol in:
"  for (k in 1:p){
delta[1,k] ~ dnorm(0,0.1)I"
> delta[2,k] ~ dnorm(0,0.1)I(delta[1,k],delta[3,k])
Error: unexpected symbol in "delta[2,k] ~ dnorm(0,0.1)I"
> delta[3,k] ~ dnorm(0,0.1)I(delta[2,k],)}
Error: unexpected symbol in "delta[3,k] ~ dnorm(0,0.1)I"
> }
Error: unexpected '}' in "}"
 

So the R parser does not like the I(,) construct. What is the alternative way of
programming the constraint I(lower,upper)?
Thanks
Heberto Ghezzo
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table function

2009-08-24 Thread Erik Iverson
You need to create a factor that indicates which group the values in 'z' belong 
to.  The easiest way to do that based on your situation is to use the 'cut' 
function to construct the factor, and then call 'table' using the result 
created by 'cut'.  See ?cut and ?factor 
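
Something along these lines, using the numbers from your example (the break
points and labels below are just one possible choice):

z <- c(100, 1500, 1200, 500, 3500, 2000, 4500)
zcls <- cut(z, breaks = c(0, 1000, 3000, Inf),
            labels = c("0-1000", "1000-3000", ">3000"))
table(zcls)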

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Inchallah Yarab
Sent: Monday, August 24, 2009 10:59 AM
To: r-help@r-project.org
Subject: [R] table function

hi,

i want to use the function table to build a table not of frequency (the number
of times the variable is repeated in a list or a data frame) but as a function of
classes.
I don't find a clear explanation in the examples of ?table.

example

x    y    z
1    0    100
5    1    1500
6    1    1200
2    2    500
1    1    3500
5    2    2000
8    5    4500

I want to do a table summarizing the number of observations where z is in
[0-1000], [1000-3000], [>3000]

thank you very much for your help


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table function

2009-08-24 Thread Marc Schwartz

On Aug 24, 2009, at 10:59 AM, Inchallah Yarab wrote:


hi,

i want to use the function table to build a table not of frequency (the
number of times the variable is repeated in a list or a data frame) but as
a function of classes.

I don't find a clear explanation in the examples of ?table.

example

x    y    z
1    0    100
5    1    1500
6    1    1200
2    2    500
1    1    3500
5    2    2000
8    5    4500

I want to do a table summarizing the number of observations where z is
in [0-1000], [1000-3000], [>3000]


thank you very much for your help



See ?cut, which bins a continuous variable.

> DF
  x y    z
1 1 0  100
2 5 1 1500
3 6 1 1200
4 2 2  500
5 1 1 3500
6 5 2 2000
7 8 5 4500


> table(cut(DF$z, breaks = c(-Inf, 1000, 3000, Inf),
            labels = c("0 - 1000", "1000 - 3000", "> 3000")))

   0 - 1000 1000 - 3000      > 3000 
          2           3           2 

HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table function

2009-08-24 Thread Glen Sargeant


Inchallah Yarab wrote:
 
 I want to do a table summarizing the number of observations where z is in
 [0-1000], [1000-3000], [>3000]
 

You can use cut to create a new vector of labels and tabulate the result. 
Options control closed/open endpoints (see ?cut):

> z <- c(100,1500,1200,500,3500,2000,4500)

> table(cut(z,c(0,1000,3000,max(z))))

      (0,1e+03]   (1e+03,3e+03] (3e+03,4.5e+03] 
              2               3               2 



-- 
View this message in context: 
http://www.nabble.com/table-function-tp25118909p25119226.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [Rd] Formulas in gam function of mgcv package

2009-08-24 Thread Gavin Simpson
[Note R-Devel is the wrong list for such questions. R-Help is where this
should have been directed - redirected there now]

On Mon, 2009-08-24 at 17:02 +0100, Corrado wrote:
 Dear R-experts,
 
 I have a question on the formulas used in the gam function of the mgcv 
 package.
 
 I am trying to understand the relationships between:
 
 y~s(x1)+s(x2)+s(x3)+s(x4)
 
 and 
 
 y~s(x1,x2,x3,x4)
 
 Does the latter contain the former? what about the smoothers of all 
 interaction terms?

I'm not 100% certain how this scales to smooths of more than 2
variables, but Sections 4.10.2 and 5.2.2 of Simon Wood's book GAM: An
Introduction with R (2006, Chapman Hall/CRC) discuss this for smooths of
2 variables.

Strictly y ~ s(x1) + s(x2) is not nested in y ~ s(x1, x2) as the bases
used to produce the smoothers in the two models may not be the same in
both models. One option to ensure nestedness is to fit the more
complicated model as something like this:

## if simpler model were: y ~ s(x1, k=20) + s(x2, k = 20)
y ~ s(x1, k=20) + s(x2, k = 20) + s(x1, x2, k = 60)
                                  ^^^^^^^^^^^^^^^^^
where the last term (^^^ above) has the same k as used in s(x1, x2)

Note that these are isotropic smooths; are x1 and x2 measured in the
same units etc.? Tensor product smooths may be more appropriate if not,
and if we specify the bases when fitting models s(x1) + s(x2) *is*
strictly nested in te(x1, x2), eg.

y ~ s(x1, bs = "cr", k = 10) + s(x2, bs = "cr", k = 10)

is strictly nested within

y ~ te(x1, x2, k = 10)
## is the same as y ~ te(x1, x2, bs = "cr", k = 10)

[Note that bs = "cr" is the default basis in te() smooths, hence we
don't need to specify it, and k = 10 refers to each individual smooth in
the te().]

HTH

G

  
 
 I have (tried to) read the manual pages of gam, formula.gam, smooth.terms, 
 linear.functional.terms but could not understand properly.
 
 Regards
-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
 Dr. Gavin Simpson [t] +44 (0)20 7679 0522
 ECRC, UCL Geography,  [f] +44 (0)20 7679 0565
 Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
 Gower Street, London  [w] http://www.ucl.ac.uk/~ucfagls/
 UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combining matrices

2009-08-24 Thread Henrique Dallazuanna
Try this;

do.call(rbind, lapply(split(x, seq(nrow(x))), cbind, y))

On Mon, Aug 24, 2009 at 1:16 PM, Daniel Nordlund djnordl...@verizon.netwrote:

 If I have two matrices like

 x <- matrix(rep(c(1,2,3),3),3)
 y <- matrix(rep(c(4,5,6),3),3)

 How can I combine  them to get ?

 1 1 1 4 4 4
 1 1 1 5 5 5
 1 1 1 6 6 6
 2 2 2 4 4 4
 2 2 2 5 5 5
 2 2 2 6 6 6
 3 3 3 4 4 4
 3 3 3 5 5 5
 3 3 3 6 6 6

 The number of rows and the actual numbers above are unimportant, they are
 given so as to illustrate how I want to combine the matrices.  I.e., I am
 looking for a general way to combine the first row of x with each row of y,
 then the second row of x with y, 

 Thanks,

 Dan

 Daniel Nordlund
 Bothell, WA USA

 Thanks for

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] table function

2009-08-24 Thread Daniel Malter
Your question is a little vague. Do you just want to know how often z falls
in one of the three classes? If so, you could either code an indicator variable
(e.g. z.cat) that expresses the three categories and then do table(z.cat).
Alternatively, you could just do

sum(z > 0 & z < 1000)
sum(z > 1000 & z < 3000)
sum(z > 3000)

It's unclear to me whether you want to summarize the other variables (x and
y) for each of the categories of z. If you want to do that, use tapply (see
?tapply).
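
A small sketch of both steps, with vectors like the ones in the example (the
break points are one possible choice):

x <- c(1, 5, 6, 2, 1, 5, 8)
z <- c(100, 1500, 1200, 500, 3500, 2000, 4500)
z.cat <- cut(z, breaks = c(0, 1000, 3000, Inf))
table(z.cat)            # counts of z per class
tapply(x, z.cat, mean)  # e.g. the mean of x within each class of z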

Daniel

-
cuncta stricte discussurus
-

-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Im
Auftrag von Inchallah Yarab
Gesendet: Monday, August 24, 2009 11:59 AM
An: r-help@r-project.org
Betreff: [R] table function

hi,

i want to use the function table to build a table not of frequency (the number
of times the variable is repeated in a list or a data frame) but as a function
of classes. I don't find a clear explanation in the examples of ?table.

example

x    y    z
1    0    100
5    1    1500
6    1    1200
2    2    500
1    1    3500
5    2    2000
8    5    4500

I want to do a table summarizing the number of observations where z is in
[0-1000], [1000-3000], [>3000]

thank you very much for your help


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] image plot

2009-08-24 Thread Paul Hiemstra

ogbos okike schreef:

Hi,
I am trying to use the image function to do a color plot. My matrix columns
are labeled y and x. I tried image(y, x) but I got an error message (Error in
image.default(y, x) : increasing 'x' and 'y' values expected).
Could anybody please tell me how to add these increasing 'x' and 'y' values.
Thanks

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
  

Hi,

Please provide a reproducible example.

An example that works with image:

x = 1:10
y = seq(1,100,by =10)
z = matrix(runif(100), 10, 10)
image(x,y,z)
x = sort(runif(10))
y = sort(runif(10))
image(x,y,z)

So z is a matrix with the values, and x and y give the coordinates of each
cell, often with equal spacing.
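
If your x and y are not already increasing, one way to satisfy image() is to
sort them and reorder z to match (invented data, just a sketch):

x <- runif(10)
y <- runif(10)
z <- matrix(runif(100), 10, 10)  # z[i, j] belongs to x[i], y[j]
ox <- order(x)
oy <- order(y)
image(x[ox], y[oy], z[ox, oy])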


cheers,
Paul

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there a fast way to do several hundred thousand ANOVA tests?

2009-08-24 Thread Benilton Carvalho

have you tried:

fits <- lm(a ~ b)
fstat <- sapply(summary(fits), function(x) x[["fstatistic"]][["value"]])

it takes 3secs for 100K columns on my machine (running on batt)

b

On Aug 23, 2009, at 9:55 PM, big permie wrote:


Dear R users,

I have a matrix a and a classification vector b such that


str(a)

num [1:50, 1:80]
and

str(b)

Factor w/ 3 levels cond1,cond2,cond3

I'd like to do an anova on all 80 columns and record the F  
statistic for

each test; I currently do this using

f.stat.vec <- numeric(length(a[1,]))

for (i in 1:length(a[1,])) {
  f.test.frame <- data.frame(nums = a[,i], cond = b)
  aov.vox <- aov(nums ~ cond, data = f.test.frame)
  f.stat <- summary(aov.vox)[[1]][1,4]
  f.stat.vec[i] <- f.stat
}

The problem is that this code takes about 70 minutes to run.

Is there a faster way to do an anova & record the F stat for each
column?


Any help would be appreciated.

Thanks
Heath

   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combining matrices

2009-08-24 Thread Marc Schwartz


On Aug 24, 2009, at 11:16 AM, Daniel Nordlund wrote:


If I have two matrices like

x <- matrix(rep(c(1,2,3),3),3)
y <- matrix(rep(c(4,5,6),3),3)

How can I combine  them to get ?

1 1 1 4 4 4
1 1 1 5 5 5
1 1 1 6 6 6
2 2 2 4 4 4
2 2 2 5 5 5
2 2 2 6 6 6
3 3 3 4 4 4
3 3 3 5 5 5
3 3 3 6 6 6

The number of rows and the actual numbers above are unimportant,  
they are given so as to illustrate how I want to combine the  
matrices.  I.e., I am looking for a general way to combine the first  
row of x with each row of y, then the second row of x with y, 


Thanks,

Dan




nr.x <- nrow(x)
nr.y <- nrow(y)

> cbind(x[rep(1:nr.x, each = nr.x), ], y[rep(1:nr.y, nr.y), ])
      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    1    1    4    4    4
 [2,]    1    1    1    5    5    5
 [3,]    1    1    1    6    6    6
 [4,]    2    2    2    4    4    4
 [5,]    2    2    2    5    5    5
 [6,]    2    2    2    6    6    6
 [7,]    3    3    3    4    4    4
 [8,]    3    3    3    5    5    5
 [9,]    3    3    3    6    6    6



HTH,

Marc Schwartz

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Selecting groups with R

2009-08-24 Thread Glen Sargeant


jlwoodard wrote:
 
 
 Each of the above lines successfully excludes the BLUE subjects, but the
 BLUE category is still present in my data set; that is, if I try
 table(Color)  I get 
 
   RED  WHITE  BLUE
    82    151     0
 
 How can I eliminate the BLUE category completely so I can do a t-test
 using Color (with just the RED and WHITE subjects)?
 
 

A simpler example.  See details in the help file for factor() for an
explanation.

#Factor with 3 levels
> x <- rep(c("blue","red","white"), c(1,1,2))

> x <- factor(x)

> table(x)
x
 blue   red white 
    1     1     2 

#Subset is still a factor with 3 levels
> y <- x[x != "blue"]

> table(y)
y
 blue   red white 
    0     1     2 

#Drops unused levels; result a factor with 2 levels
> table(factor(y))

  red white 
    1     2 

-- 
View this message in context: 
http://www.nabble.com/Selecting-groups-with-R-tp25088073p25119474.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help with recalling data points in a specific region of the plot

2009-08-24 Thread Edward Chen
Hi all,

Is there a quick way to display or recall data points from a specific region
on the plot? For example I want the points from x>5 and y>5?
Thank you very much!

-- 
Edward Chen

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combining matrices

2009-08-24 Thread Marc Schwartz


On Aug 24, 2009, at 11:46 AM, Marc Schwartz wrote:



On Aug 24, 2009, at 11:16 AM, Daniel Nordlund wrote:


If I have two matrices like

x <- matrix(rep(c(1,2,3),3),3)
y <- matrix(rep(c(4,5,6),3),3)

How can I combine  them to get ?

1 1 1 4 4 4
1 1 1 5 5 5
1 1 1 6 6 6
2 2 2 4 4 4
2 2 2 5 5 5
2 2 2 6 6 6
3 3 3 4 4 4
3 3 3 5 5 5
3 3 3 6 6 6

The number of rows and the actual numbers above are unimportant,  
they are given so as to illustrate how I want to combine the  
matrices.  I.e., I am looking for a general way to combine the  
first row of x with each row of y, then the second row of x with  
y, 


Thanks,

Dan




nr.x <- nrow(x)
nr.y <- nrow(y)

> cbind(x[rep(1:nr.x, each = nr.x), ], y[rep(1:nr.y, nr.y), ])
      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    1    1    4    4    4
 [2,]    1    1    1    5    5    5
 [3,]    1    1    1    6    6    6
 [4,]    2    2    2    4    4    4
 [5,]    2    2    2    5    5    5
 [6,]    2    2    2    6    6    6
 [7,]    3    3    3    4    4    4
 [8,]    3    3    3    5    5    5
 [9,]    3    3    3    6    6    6




Actually, correction...that will work in this case, but in the general  
case, I believe that it needs to be:


x <- matrix(rep(c(1,2,3),3),3)
y <- matrix(rep(c(4,5,6,7),3),4)

> x
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2
[3,]    3    3    3

> y
     [,1] [,2] [,3]
[1,]    4    4    4
[2,]    5    5    5
[3,]    6    6    6
[4,]    7    7    7


nr.x <- nrow(x)
nr.y <- nrow(y)


> cbind(x[rep(1:nr.x, each = nr.y), ], y[rep(1:nr.y, nr.x), ])
      [,1] [,2] [,3] [,4] [,5] [,6]
 [1,]    1    1    1    4    4    4
 [2,]    1    1    1    5    5    5
 [3,]    1    1    1    6    6    6
 [4,]    1    1    1    7    7    7
 [5,]    2    2    2    4    4    4
 [6,]    2    2    2    5    5    5
 [7,]    2    2    2    6    6    6
 [8,]    2    2    2    7    7    7
 [9,]    3    3    3    4    4    4
[10,]    3    3    3    5    5    5
[11,]    3    3    3    6    6    6
[12,]    3    3    3    7    7    7


We need to replicate each row by the number of rows in the other matrix.

HTH,

Marc

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] robust method to obtain a correlation coeff?

2009-08-24 Thread John Kane
I may be misunderstanding the question but would
cor(d1, use='complete.obs') or some other variant of use help?
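
A small sketch with values like the ones posted:

x <- c(25.5, 25.3, 25.1, NA, 23.3, 21.5)
y <- c( 0.0, 11.1,  0.0, NA,  0.0, 10.1)
cor(x, y)                        # NA: the default propagates missing values
cor(x, y, use = "complete.obs")  # drops pairs with an NA in either vector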

--- On Mon, 8/24/09, Christian Meesters meest...@imbie.uni-bonn.de wrote:

 From: Christian Meesters meest...@imbie.uni-bonn.de
 Subject: [R] robust method to obtain a correlation coeff?
 To: r-help@r-project.org Help r-help@r-project.org
 Received: Monday, August 24, 2009, 10:47 AM
 Hi,
 
 Being a R-newbie I am wondering how to calculate a
 correlation
 coefficient (preferably with an associated p-value) for
 data like:
 
  d[,1]
  [1] 25.5 25.3 25.1   NA 23.3 21.5 23.8 23.2
 24.2 22.7 27.6 24.2 ...
  d[,2]
 [1]  0.0 11.1  0.0   NA  0.0
 10.1 10.6  9.5  0.0 57.9  0.0  0.0 
 ...
 
 Apparently corr(d) from the boot-library fails with NAs in
 the data,
 also cor.test cannot cope with a different number of NAs.
 Is there a
 solution to this problem (calculating a correlation
 coefficient and
 ignoring different number of NAs), e.g. Pearson's corr
 coeff?
 
 If so, please point me to the relevant piece of
 documentation.
 
 TIA
 Christian
 
 __
 R-help@r-project.org
 mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained,
 reproducible code.
 



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combining matrices

2009-08-24 Thread Gabor Grothendieck
Try this:

kronecker(cbind(x, y), rep(1, 3))


On Mon, Aug 24, 2009 at 12:16 PM, Daniel Nordlunddjnordl...@verizon.net wrote:
 If I have two matrices like

 x - matrix(rep(c(1,2,3),3),3)
 y - matrix(rep(c(4,5,6),3),3)

 How can I combine  them to get ?

 1 1 1 4 4 4
 1 1 1 5 5 5
 1 1 1 6 6 6
 2 2 2 4 4 4
 2 2 2 5 5 5
 2 2 2 6 6 6
 3 3 3 4 4 4
 3 3 3 5 5 5
 3 3 3 6 6 6

 The number of rows and the actual numbers above are unimportant, they are 
 given so as to illustrate how I want to combine the matrices.  I.e., I am 
 looking for a general way to combine the first row of x with each row of y, 
 then the second row of x with y, 

 Thanks,

 Dan

 Daniel Nordlund
 Bothell, WA USA

 Thanks for

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Combining matrices

2009-08-24 Thread Daniel Nordlund
 -Original Message-
 From: Marc Schwartz [mailto:marc_schwa...@me.com] 
 Sent: Monday, August 24, 2009 9:57 AM
 To: Daniel Nordlund
 Cc: r help
 Subject: Re: [R] Combining matrices
 
 
 On Aug 24, 2009, at 11:46 AM, Marc Schwartz wrote:
 
 
  On Aug 24, 2009, at 11:16 AM, Daniel Nordlund wrote:
 
  If I have two matrices like
 
  x - matrix(rep(c(1,2,3),3),3)
  y - matrix(rep(c(4,5,6),3),3)
 
  How can I combine  them to get ?
 
  1 1 1 4 4 4
  1 1 1 5 5 5
  1 1 1 6 6 6
  2 2 2 4 4 4
  2 2 2 5 5 5
  2 2 2 6 6 6
  3 3 3 4 4 4
  3 3 3 5 5 5
  3 3 3 6 6 6
 
  The number of rows and the actual numbers above are unimportant,  
  they are given so as to illustrate how I want to combine the  
  matrices.  I.e., I am looking for a general way to combine the  
  first row of x with each row of y, then the second row of x with  
  y, 
 
  Thanks,
 
  Dan
 
 
 
  nr.x - nrow(x)
  nr.y - nrow(y)
 
   cbind(x[rep(1:nr.x, each = nr.x), ], y[rep(1:nr.y, nr.y), ])
   [,1] [,2] [,3] [,4] [,5] [,6]
  [1,]111444
  [2,]111555
  [3,]111666
  [4,]222444
  [5,]222555
  [6,]222666
  [7,]333444
  [8,]333555
  [9,]333666
 
 
 
 Actually, correction...that will work in this case, but in 
 the general  
 case, I believe that it needs to be:
 
 x - matrix(rep(c(1,2,3),3),3)
 y - matrix(rep(c(4,5,6,7),3),4)
 
   x
   [,1] [,2] [,3]
 [1,]111
 [2,]222
 [3,]333
 
   y
   [,1] [,2] [,3]
 [1,]444
 [2,]555
 [3,]666
 [4,]777
 
 
 nr.x - nrow(x)
 nr.y - nrow(y)
 
 
   cbind(x[rep(1:nr.x, each = nr.y), ], y[rep(1:nr.y, nr.x), ])
[,1] [,2] [,3] [,4] [,5] [,6]
   [1,]111444
   [2,]111555
   [3,]111666
   [4,]111777
   [5,]222444
   [6,]222555
   [7,]222666
   [8,]222777
   [9,]333444
 [10,]333555
 [11,]333666
 [12,]333777
 
 
 We need to replicate each row by the number of rows in the 
 other matrix.
 
 HTH,
 
 Marc
 

Thanks to all who responded (including those off-list).  I now have options
to apply to solving my programming task.  

Thanks,

Dan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help with recalling data points in a specific region of the plot

2009-08-24 Thread John Kane
I am assuming that you want to desplay all the data and highlight the subset. 
Set up a vector to indicate the breakdown of the data and you can do it fairly 
easily in ggplot2 if you treat the vector as a factor.

library(ggplot2)
mydata <- data.frame(x=1:21, y= -10:10)
z <- ifelse(mydata[,1] > 5 & mydata[,2] > 5, 1, 0) # identify points > 5
mydata <- data.frame(mydata, z)
ggplot(mydata, aes(x=x, y=y, colour= factor(z))) + geom_point() 

--- On Mon, 8/24/09, Edward Chen edche...@gmail.com wrote:

 From: Edward Chen edche...@gmail.com
 Subject: [R] help with recalling data points in a specific region of the plot
 To: r-help@r-project.org
 Received: Monday, August 24, 2009, 12:55 PM
 Hi all,
 
 Is there a quick way to display or recall data points from
 a specific region
 on the plot? For example I want the points from x>5 and
 y>5?
 Thank you very much!
 
 -- 
 Edward Chen
 
     [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org
 mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained,
 reproducible code.
 



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RMySQL - overwrite record, not table

2009-08-24 Thread Adrian Dusa


whizvast wrote:
 
 Hi, Adrian-
 
 If you use overwrite=T parameter, you will overwrite the entire table,
 not each record. this is the essence of my problem and i still haven't
 found out right solution. i am thinking of writing my own MySQLwriteTable
 function...
 
 Thank you for your answer anyway!
 

Sorry for the late reply (I'm on my vacation). If you want to replace a
variable instead of the whole dataframe, I wrote a function about a year ago
and I used it successfully a few times.
Try this:

dbUpdateVars <-
function(conn, dbtable, dataframe=NULL, primary, vars) {
    if (!dbExistsTable(conn, dbtable)) {
        stop("The target table \"", dbtable, "\" doesn't exist in the database \"",
             dbGetInfo(conn)$dbname, "\"\n\n", call. = FALSE)
    }
    if (is.null(dataframe)) {
        stop("The source dataframe is missing, with no default\n\n", call. = FALSE)
    }
    if (!(toupper(primary) %in% toupper(names(dataframe)))) {
        stop("The primary key variable doesn't exist in the source dataframe\n\n",
             call. = FALSE)
    }
    if (!all(toupper(vars) %in% toupper(names(dataframe)))) {
        stop("One or more variables don't exist in the source dataframe\n\n",
             call. = FALSE)
    }
    if (!(toupper(primary) %in% toupper(dbListFields(conn, dbtable)))) {
        stop("The primary key variable doesn't exist in the target table\n\n",
             call. = FALSE)
    }
    if (!all(toupper(vars) %in% toupper(dbListFields(conn, dbtable)))) {
        stop("One or more variables don't exist in the target table\n\n",
             call. = FALSE)
    }

    if (length(vars) > 1) {
        pastedvars <- paste("'", apply(dataframe[, vars], 1, paste, collapse="', '"),
                            "'", sep="")
    }
    else {
        pastedvars <- paste("'", dataframe[, vars], "'", sep="")
    }

    varlist <- paste(dbtable, " (", paste(c(primary, vars), collapse=", "), ")", sep="")
    datastring <- paste("(", paste(paste(dataframe[, primary], pastedvars, sep=", "),
                        collapse="), ("), ")", sep="")
    toupdate <- paste(paste(vars, "=VALUES(", vars, ")", sep=""), collapse=", ")

    sqlstring <- paste("INSERT INTO", varlist, "VALUES", datastring,
                       "ON DUPLICATE KEY UPDATE", toupdate)
    dbSendQuery(conn, sqlstring)
}
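
A hypothetical call would look something like this (the connection, table and
column names are invented); note that the function relies on MySQL's
INSERT ... ON DUPLICATE KEY UPDATE, so the target table needs a primary or
unique key on the 'primary' column:

## con <- dbConnect(MySQL(), dbname = "mydb")
## dbUpdateVars(con, "mytable", dataframe = newdata,
##              primary = "id", vars = c("price", "qty"))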

I hope it helps you,
Adrian



-- 
View this message in context: 
http://www.nabble.com/RMySQL---overwrite-record%2C-not-table-tp24870097p25120044.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] hdf5 package segfault when processing large data

2009-08-24 Thread William Dunlap
 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of Budi Mulyono
 Sent: Monday, August 24, 2009 3:38 AM
 To: r-help@r-project.org
 Subject: [R] hdf5 package segfault when processing large data
 
 Hi there,
 
 I am currently working on something that uses hdf5 library. I think
 hdf5 is a great data format, I've used it somewhat extensively in
 python via PyTables. I was looking for something similar to that in R.
 The closest I can get is this library: hdf5. While it does not work
 the same way as PyTables did, but it's good enough to let them
 exchange data via hdf5 file.
 
 There is just 1 problem, I keep getting Segfault error when trying to
process large files (>10MB), although this is by no means large when we
 talk about hdf5 capabilities. I have included the example code and
 data below. I have tried with different OS (WinXP and Ubuntu 8.04),
 architecture (32 and 64bit) and R versions (2.7.1, 2.72, and 2.9.1),
 but all of them present the same problem. I was wondering if anyone
 have any clue as to what's going on here and maybe can advice me to
 handle it.

This sort of problem should be sent to the package's maintainer.
packageDescription("hdf5")
   Package: hdf5
   Version: 1.6.9
   Title: HDF5
   Author: Marcus G. Daniels mdani...@lanl.gov
   Maintainer: Marcus G. Daniels mdani...@lanl.gov
   Description: Interface to the NCSA HDF5 library
   ...

This is probably due to the code in hdf5.c allocating a huge
matrix, buf, on the stack with

883   unsigned char buf[rowcount][size];

It dies with the segmentation fault (stack overflow, in particular)
at line 898, where it tries to access this buf.

885   for (ri = 0; ri < rowcount; ri++)
886     for (pos = 0; pos < colcount; pos++)
887       {
888         SEXP item = VECTOR_ELT (val, pos);
889         SEXPTYPE type = TYPEOF (item);
890         void *ptr = &buf[ri][offsets[pos]];
891
892         switch (type)
893           {
894           case REALSXP:
895             memcpy (ptr, &REAL (item)[ri], sizeof (double));
896             break;
897           case INTSXP:
898             memcpy (ptr, &INTEGER (item)[ri], sizeof (int));
899             break;

The code should use one of the allocators in the R API instead
of putting the big memory block on the stack.
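
A sketch of the kind of change meant here (not the package author's actual
fix): take the buffer off the C stack and put it in R-managed transient
memory, e.g.

/* hypothetical replacement for the stack array at line 883 */
unsigned char *buf = (unsigned char *) R_alloc((size_t) rowcount * size, 1);
/* ... and the element access at line 890 would then become */
void *ptr = buf + (size_t) ri * size + offsets[pos];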

Bill Dunlap
TIBCO Software Inc - Spotfire Division
wdunlap tibco.com  

 
 Thank you, appreciate any help i can get.
 
 Cheers,
 
 Budi
 
 The example script
 
 library(hdf5)
 fileName <- "sample.txt"
 myTable <- read.table(fileName, header=TRUE, sep="\t", as.is=TRUE)
 hdf5save("test.hdf", "myTable")
 
 
 The data example, the list continue for more than 250,000 
 rows: sample.txt
 
 Date      Time    f1      f2      f3      f4      f5
 20070328  07:56   463     463.07  462.9   463.01  1100
 20070328  07:57   463.01  463.01  463.01  463.01  200
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] trouble with Vista reading files

2009-08-24 Thread spencerg
Dear Mike: 



 I don't know. 



   1.  What specific error message do you get? 



  2.  Your example is too long for me to parse, especially with 
the color being stripped before I saw it. 



  3.  Have you tried using debug(cro.etest.grab), then 
walking through the code line by line until you get to the offending 
line?  Then you can study the error message and try different things 
quickly.  This may not work, but typically provides access to more 
information on the problem AND often substantially shortens the test 
cycle for potential fixes. 



 Hope this helps. 
 Spencer



Mike Williamson wrote:

All,

I am having trouble with a read.table() function that is inside of
another function.  But if I call the function by itself, it works fine.
Moreover, if I run the script on a Mac OS X (with the default Mac OS X
version of R installed, rev 2.8), it works fine.  But it does not work if I
run it on windows vista (also default Windows version of R, rev. 2.8).

Again, both calls shown below work fine in Mac, but only the call by
itself works in Vista.  The other call embedded in a function does not.

Thanks in advance for all the help!!
  Regards, Mike

Below are the calls:

#
Below is the call which DOES work, as long as it is called by itself.
##



  eTestData <-
read.table("C:/Users/userID/Documents/R/eTestDataDir/EtestExample.csv",
           header = TRUE, as.is = TRUE)


#
Below is the call which does NOT work.  The problem function call
highlighted in *red*
Especially strange with this is that there is a call below to ask for all
the files in the directory, which I
have highlighted in *purple*, and that call works fine.  So it is some sort
of permissions thing.
##

cro.etest.grab <- function(dataDir="raw.etest.data", header="hdr",
                           dataHeaders="datasets/eTestDataHeaders.txt",
                           slotCol="Wafer", dateFormat="%m/%d/%Y %H:%M:%S",
                           lotCol="eTestLotID") {
### Function: grab data in its raw form from SVTC's HP electrical tester and
###           munge it into a format more friendly for analysis in R.
### Requires: dataDir     -- the directory where the raw SVTC data set is stored
###           header      -- the differentiation between the names of the data
###                          files and the header files.  E.g., if data file
###                          is "CORR682..18524" and header file is
###                          "CORR682.hdr.18524", then the header is "hdr".
###           dataHeaders -- Sometimes the labels for the data is missing, but
###                          they are NEARLY always the same.  If the labels
###                          are ever missing, this fills them in with the
###                          vector of headers given here.  E.g., c("Wafer",
###                          "Site", "R2_ET1_M1", etc.)
###           slotCol     -- In the data file, typically column "Wafer" is
###                          actually the slot ID. This renames it to "Slot".
###                          So, SlotCol is the name IN THE RAW DATA.
###           lotCol      -- The data files have no lot ID column, the lot ID
###                          is grabbed from the file name. This provides a
###                          column header name for the lot ID.
###           dateFormat  -- The test data header file has eTest time, written
###                          in the format month/day/year hour:minute:sec.
###                          If another format is being read, it can be
###                          altered here.
  dataHeaders <- read.table(dataHeaders, stringsAsFactors = FALSE)[,1]

  print(paste("dataDir:", dataDir, "header:", header,
              "slotCol:", slotCol,
              "lotCol:", lotCol))
  allFiles <- list.files(path = dataDir)
  tmp <- grep("hdr", allFiles, ignore.case = TRUE)
  dataFiles <- allFiles[-tmp]
  hdrFiles <- sub("\\.(.*)\\.", "\\.hdr\\1\\.", dataFiles)
  eTestData <- read.table(paste(dataDir, "/", dataFiles[1], sep=""),
                          header = TRUE,
                          as.is = TRUE)
  eTestData[,slotCol] <- as.character(eTestData[,slotCol])
  eTestData[,lotCol] <- rep(dataFiles[1], length(eTestData[,1]))
  tmp <- try(scan(paste(dataDir, "/", hdrFiles[1], sep=""), what = "character",
                  sep="\n", quiet=TRUE), silent=TRUE)
  if (is.null(attr(tmp, "class"))) {
    dateCols <- grep("[0-9][0-9]/[0-9][0-9]/20[01][0-9]", tmp)
    hdrDF <-
      data.frame(tmp[(dateCols-1)], tmp[dateCols], stringsAsFactors=FALSE)
    hdrDF$LotDate <- rep(hdrDF[1,2], length(hdrDF[,1])) ; hdrDF <- hdrDF[-1,]
    hdrDF[1,1] <- tmp[(dateCols-2)][1]
    names(hdrDF) <- c(slotCol, "Date", "LotDate")
    hdrDF[,slotCol] <- substring(hdrDF[,slotCol],
                                 (regexpr("=", hdrDF[,slotCol])+2),

Re: [R] help with recalling data points in a specific region of the plot

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 12:55 PM, Edward Chen wrote:


Hi all,

Is there a quick way to display or recall data points from a  
specific region

on the plot? For example I want the points from x>5 and y>5?
Thank you very much!

Your question is pretty light on specifics but assuming that you have  
columns  x and y  in a dataframe, df1, then in ordinary graphics you  
could just execute this after another plotting function has been  
performed:


with(subset(df1, x > 5 & y > 5), points(x, y, col="red") )

# I generally make my points smaller with cex=0.2 or cex=0.1

subset(df1, x > 5 & y > 5)[ , c("x","y")]
# should recall the values, if my wetware R interpreter is working  
properly.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, lmer, gls, and spatial autocorrelation

2009-08-24 Thread Timothy_Handley

Hello folks,

I have some data where spatial autocorrelation seems to be a serious
problem, and I'm unclear on how to deal with it in R. I've tried to do my
homework - read through 'The R Book,' use the online help in R, search the
internet, etc. - and I still have some unanswered questions. I'd greatly
appreciate any help you could offer. The super-super short explanation is
that I'd like to draw a straight line through my data, accounting for
spatial autocorrelation and using Poisson errors (I have count data).
There's a longer explanation at the end of this e-mail, I just didn't want
to overdo it at the start.

There are three R functions that do at least some of what I would like, but
I'm unclear on some of their specifics.

1. lme - Maybe models spatial autocorrelation, but doesn't allow for
Poisson errors. I get mixed messages from The R Book. On p. 647, there's an
example that uses lme with temporal autocorrelation, so it seems that you
can specify a correlation structure. On the other hand, on p.778, The R
Book says, the great advantage of the gls function is that the errors are
allowed to be correlated. This suggests that only gls (not lme or lmer)
allows specification of a corStruct class. Though it may also suggest that
I have an incomplete understanding of these functions.

2. lmer - Allows specification of a Poisson error structure. However, it
seems that lmer does not yet handle correlated errors.

3. gls - Surely works with spatial autocorrelation, but doesn't allow for
Poisson errors. Does allow the spatial autocorrelation to be assessed
independently for different groups (I have two groups, one at each of two
different spatial scales).

Since gls is what The R Book uses in the example of spatial
autocorrelation, this seems like the best option. I'd rather have Poisson
errors, but Gaussian would be OK. However, I'm still somewhat confused by
these three functions. In particular, I'm unclear on the difference between
lme and gls. I'd feel more confident in my results if I had a better
understanding of these choices. I'd greatly appreciate advice on the matter
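
For concreteness, the kind of gls call I have in mind is something like this
(simulated data and invented names, so just a sketch):

library(nlme)
set.seed(1)
dat <- data.frame(xcoord = runif(50), ycoord = runif(50),
                  logarea = rep(log10(c(1, 100)), 25))
dat$nspecies <- rpois(50, lambda = 3 + 2 * dat$logarea)
fit <- gls(nspecies ~ logarea, data = dat,
           correlation = corExp(form = ~ xcoord + ycoord))
summary(fit)  # slope of logarea and its standard error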


More detailed explanation of the data/problem is below:

The data:
[1] A count of the number of plant species present on each of 96 plots that
are 1m^2 in area.
[2] A count of the number of plant species present on each of 24 plots that
are 100m^2 in area.
[3] X,Y coordinates for the centroid of all plots (both sizes).

Goal:
1. A best fit straight-line relating log10(area) to #species.
2. The slope of that line, and the standard error of that slope. (I want to
compare the slope of this line with the slope of another line)

The problem:
Spatial autocorrelation. Across our range of plot-separation-distances,
Moran's I ranges from -.5 to +.25. Depending on the size of the
distance-bins, about 1 out of 10 of these I values are statistically
significant. Thus, there seems to be a significant degree of spatial
autocorrelation. if I want 'good' values for my line parameters, I need to
account for this somehow.


Tim Handley
Fire Effects Monitor
Santa Monica Mountains National Recreation Area
401 W. Hillcrest Dr.
Thousand Oaks, CA 91360
805-370-2347

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Unique command not deleting all duplicate rows

2009-08-24 Thread Mehdi Khan
Hello everyone, when I run the unique command on my data frame, it deletes
the majority of duplicate rows, but not all of them.  Here is a sample of my
data. How do I get it to delete all the rows?

 6 -115.38 32.894 195 162.94 D 8419 D

 7 -115.432 32.864 115 208.91 D 8419 D

 8 -115.447 32.773 1170 264.57 D 8419 D

 9 -115.447 32.773 1170 264.57 D 8419 D

 10 -115.447 32.773 1170 264.57 D 8419 D

 11 -115.447 32.773 1170 264.57 D 8419 D

 12 -115.447 32.773 149 186.21 D 8419 D

 13 -115.466 32.855 114 205.63 D 8419 D

 14 -115.473 32.8 1121 207.469 D 8419 D


Thanks a bunch!

Mehdi Khan

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unique command not deleting all duplicate rows

2009-08-24 Thread Erik Iverson
I really don't think this is the issue.  I think the issue is that some columns 
of the data.frame, specifically V1, V2, and V4 should be checked versus R FAQ 
7.31.  
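
A small illustration of the FAQ 7.31 point (invented numbers, not the poster's
data): values that print identically need not be identical in binary, so
duplicated()/unique() can "miss" rows that merely look equal.

a <- 0.1 + 0.2
b <- 0.3
a == b                  # FALSE
print(a, digits = 20)
print(b, digits = 20)
all.equal(a, b)         # TRUE -- comparison with a tolerance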

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Don McKenzie
Sent: Monday, August 24, 2009 1:35 PM
To: Mehdi Khan
Cc: r-help@r-project.org
Subject: Re: [R] Unique command not deleting all duplicate rows

duplicated()

  test.df
 V1 V2   V3  V4 V5   V6 V7
1 -115.380 32.894  195 162.940  D 8419  D
2 -115.432 32.864  115 208.910  D 8419  D
3 -115.447 32.773 1170 264.570  D 8419  D
4 -115.447 32.773 1170 264.570  D 8419  D
5 -115.447 32.773 1170 264.570  D 8419  D
6 -115.447 32.773 1170 264.570  D 8419  D
7 -115.447 32.773  149 186.210  D 8419  D
8 -115.466 32.855  114 205.630  D 8419  D
9 -115.473 32.800 1121 207.469  D 8419  D

  test.df[!duplicated(test.df),]
 V1 V2   V3  V4 V5   V6 V7
1 -115.380 32.894  195 162.940  D 8419  D
2 -115.432 32.864  115 208.910  D 8419  D
3 -115.447 32.773 1170 264.570  D 8419  D
7 -115.447 32.773  149 186.210  D 8419  D
8 -115.466 32.855  114 205.630  D 8419  D
9 -115.473 32.800 1121 207.469  D 8419  D


On 24-Aug-09, at 11:23 AM, Mehdi Khan wrote:

 Hello everyone, when I run the unique command on my data frame,  
 it deletes
 the majority of duplicate rows, but not all of them.  Here is a  
 sample of my
 data. How do I get it to delete all the rows?

  6 -115.38 32.894 195 162.94 D 8419 D

  7 -115.432 32.864 115 208.91 D 8419 D

  8 -115.447 32.773 1170 264.57 D 8419 D

  9 -115.447 32.773 1170 264.57 D 8419 D

  10 -115.447 32.773 1170 264.57 D 8419 D

  11 -115.447 32.773 1170 264.57 D 8419 D

  12 -115.447 32.773 149 186.21 D 8419 D

  13 -115.466 32.855 114 205.63 D 8419 D

  14 -115.473 32.8 1121 207.469 D 8419 D


 Thanks a bunch!

 Mehdi Khan

   [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

Don McKenzie, Research Ecologist
Pacific Wildland Fire Sciences Lab
US Forest Service

Affiliate Professor
School of Forest Resources, College of the Environment
CSES Climate Impacts Group
University of Washington

desk: 206-732-7824
cell: 206-321-5966
d...@u.washington.edu
donaldmcken...@fs.fed.us

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, lmer, gls, and spatial autocorrelation

2009-08-24 Thread Bert Gunter
Have you looked at the Spatial task view on CRAN? That would seem to me
the logical first place to go.

Bert Gunter
Genentech Nonclinical Biostatistics


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of timothy_hand...@nps.gov
Sent: Monday, August 24, 2009 11:12 AM
To: r-help@r-project.org
Subject: [R] lme, lmer, gls, and spatial autocorrelation


Hello folks,

I have some data where spatial autocorrelation seems to be a serious
problem, and I'm unclear on how to deal with it in R. I've tried to do my
homework - read through 'The R Book,' use the online help in R, search the
internet, etc. - and I still have some unanswered questions. I'd greatly
appreciate any help you could offer. The super-super short explanation is
that I'd like to draw a straight line through my data, accounting for
spatial autocorrelation and using Poisson errors (I have count data).
There's a longer explanation at the end of this e-mail, I just didn't want
to overdo it at the start.

There are three R functions that do at least some of what I would like, but
I'm unclear on some of their specifics.

1. lme - Maybe models spatial autocorrelation, but doesn't allow for
Poisson errors. I get mixed messages from The R Book. On p. 647, there's an
example that uses lme with temporal autocorrelation, so it seems that you
can specify a correlation structure. On the other hand, on p.778, The R
Book says, the great advantage of the gls function is that the errors are
allowed to be correlated. This suggests that only gls (not lme or lmer)
allows specification of a corStruct class. Though it may also suggest that
I have an incomplete understanding of these functions.

2. lmer - Allows specification of a Poisson error structure. However, it
seems that lmer does not yet handle correlated errors.

3. gls - Surely works with spatial autocorrelation, but doesn't allow for
Poisson errors. Does allow the spatial autocorrelation to be assessed
independently for different groups (I have two groups, one at each of two
different spatial scales).

Since gls is what The R Book uses in the example of spatial
autocorrelation, this seems like the best option. I'd rather have Poisson
errors, but Gaussian would be OK. However, I'm still somewhat confused by
these three functions. In particular, I'm unclear on the difference between
lme and gls. I'd feel more confident in my results if I had a better
understanding of these choices. I'd greatly appreciate advice on the matter


More detailed explanation of the data/problem is below:

The data:
[1] A count of the number of plant species present on each of 96 plots that
are 1m^2 in area.
[2] A count of the number of plant species present on each of 24 plots that
are 100m^2 in area.
[3] X,Y coordinates for the centroid of all plots (both sizes).

Goal:
1. A best fit straight-line relating log10(area) to #species.
2. The slope of that line, and the standard error of that slope. (I want to
compare the slope of this line with the slope of another line)

The problem:
Spatial autocorrelation. Across our range of plot-separation-distances,
Moran's I ranges from -.5 to +.25. Depending on the size of the
distance-bins, about 1 out of 10 of these I values are statistically
significant. Thus, there seems to be a significant degree of spatial
autocorrelation. if I want 'good' values for my line parameters, I need to
account for this somehow.


Tim Handley
Fire Effects Monitor
Santa Monica Mountains National Recreation Area
401 W. Hillcrest Dr.
Thousand Oaks, CA 91360
805-370-2347

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unique command not deleting all duplicate rows

2009-08-24 Thread Mehdi Khan
Duplicated did not work, I agree with Erik. Is there any way I can specify a
tolerance limit and then delete?

On Mon, Aug 24, 2009 at 11:41 AM, Erik Iverson eiver...@nmdp.org wrote:

 I really don't think this is the issue.  I think the issue is that some
 columns of the data.frame, specifically V1, V2, and V4 should be checked
 versus R FAQ 7.31.

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 On Behalf Of Don McKenzie
 Sent: Monday, August 24, 2009 1:35 PM
 To: Mehdi Khan
 Cc: r-help@r-project.org
 Subject: Re: [R] Unique command not deleting all duplicate rows

 duplicated()

   test.df
 V1 V2   V3  V4 V5   V6 V7
 1 -115.380 32.894  195 162.940  D 8419  D
 2 -115.432 32.864  115 208.910  D 8419  D
 3 -115.447 32.773 1170 264.570  D 8419  D
 4 -115.447 32.773 1170 264.570  D 8419  D
 5 -115.447 32.773 1170 264.570  D 8419  D
 6 -115.447 32.773 1170 264.570  D 8419  D
 7 -115.447 32.773  149 186.210  D 8419  D
 8 -115.466 32.855  114 205.630  D 8419  D
 9 -115.473 32.800 1121 207.469  D 8419  D

   test.df[!duplicated(test.df),]
 V1 V2   V3  V4 V5   V6 V7
 1 -115.380 32.894  195 162.940  D 8419  D
 2 -115.432 32.864  115 208.910  D 8419  D
 3 -115.447 32.773 1170 264.570  D 8419  D
 7 -115.447 32.773  149 186.210  D 8419  D
 8 -115.466 32.855  114 205.630  D 8419  D
 9 -115.473 32.800 1121 207.469  D 8419  D


 On 24-Aug-09, at 11:23 AM, Mehdi Khan wrote:

  Hello everyone, when I run the unique command on my data frame,
  it deletes
  the majority of duplicate rows, but not all of them.  Here is a
  sample of my
  data. How do I get it to delete all the rows?
 
   6 -115.38 32.894 195 162.94 D 8419 D
 
   7 -115.432 32.864 115 208.91 D 8419 D
 
   8 -115.447 32.773 1170 264.57 D 8419 D
 
   9 -115.447 32.773 1170 264.57 D 8419 D
 
   10 -115.447 32.773 1170 264.57 D 8419 D
 
   11 -115.447 32.773 1170 264.57 D 8419 D
 
   12 -115.447 32.773 149 186.21 D 8419 D
 
   13 -115.466 32.855 114 205.63 D 8419 D
 
   14 -115.473 32.8 1121 207.469 D 8419 D
 
 
  Thanks a bunch!
 
  Mehdi Khan
 
[[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide http://www.R-project.org/posting-
  guide.html
  and provide commented, minimal, self-contained, reproducible code.

 Don McKenzie, Research Ecologist
 Pacific WIldland Fire Sciences Lab
 US Forest Service

 Affiliate Professor
 School of Forest Resources, College of the Environment
 CSES Climate Impacts Group
 University of Washington

 desk: 206-732-7824
 cell: 206-321-5966
 d...@u.washington.edu
 donaldmcken...@fs.fed.us

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Between-group variance from ANOVA

2009-08-24 Thread Mark Difford

Hi Emma,

 

R gives you the tools to work this out.

## Example
set.seed(7)
TDat - data.frame(response = c(rnorm(100, 5, 2), rnorm(100, 20, 2)))
TDat$group - gl(2, 100, labels=c(A,B))
with(TDat, boxplot(split(response, group)))
summary(aov(response ~ group, data=TDat))

Regards, Mark.


emj83 wrote:
 
 can anyone advise me please?
 
 
 emj83 wrote:
 
 I have done some ANOVA tables for some data that I have, from this I can
 read the within-group variance. can anyone tell me how i may find out the
 between-group variance?
 
 Thanks Emma
 
 
 

-- 
View this message in context: 
http://www.nabble.com/Between-group-variance-from-ANOVA-tp24954045p25121532.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] create list entry from variable

2009-08-24 Thread rami jiossy

Hi;

assume i <- 10

how can I create a list having key=10 and value=11?

list(i=11) generates a list with 

'i'
[1] 11

and not 

10
[1] 11

any help?

Thanks

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unique command not deleting all duplicate rows

2009-08-24 Thread Bert Gunter
?round
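
For example, something along these lines (assuming the test.df shown earlier,
and that the coordinate columns only differ beyond the third decimal place):

rounded <- test.df
rounded[c("V1", "V2", "V4")] <- lapply(rounded[c("V1", "V2", "V4")], round, 3)
test.df[!duplicated(rounded), ]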

Bert Gunter
Genentech Nonclinical Biostatistics

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Mehdi Khan
Sent: Monday, August 24, 2009 11:52 AM
To: Erik Iverson
Cc: r-help@r-project.org
Subject: Re: [R] Unique command not deleting all duplicate rows

Duplicated did not work, I agree with Erik. Is there any way I can specify a
tolerance limit and then delete?

On Mon, Aug 24, 2009 at 11:41 AM, Erik Iverson eiver...@nmdp.org wrote:

 I really don't think this is the issue.  I think the issue is that some
 columns of the data.frame, specifically V1, V2, and V4 should be checked
 versus R FAQ 7.31.

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 On Behalf Of Don McKenzie
 Sent: Monday, August 24, 2009 1:35 PM
 To: Mehdi Khan
 Cc: r-help@r-project.org
 Subject: Re: [R] Unique command not deleting all duplicate rows

 duplicated()

   test.df
 V1 V2   V3  V4 V5   V6 V7
 1 -115.380 32.894  195 162.940  D 8419  D
 2 -115.432 32.864  115 208.910  D 8419  D
 3 -115.447 32.773 1170 264.570  D 8419  D
 4 -115.447 32.773 1170 264.570  D 8419  D
 5 -115.447 32.773 1170 264.570  D 8419  D
 6 -115.447 32.773 1170 264.570  D 8419  D
 7 -115.447 32.773  149 186.210  D 8419  D
 8 -115.466 32.855  114 205.630  D 8419  D
 9 -115.473 32.800 1121 207.469  D 8419  D

   test.df[!duplicated(test.df),]
 V1 V2   V3  V4 V5   V6 V7
 1 -115.380 32.894  195 162.940  D 8419  D
 2 -115.432 32.864  115 208.910  D 8419  D
 3 -115.447 32.773 1170 264.570  D 8419  D
 7 -115.447 32.773  149 186.210  D 8419  D
 8 -115.466 32.855  114 205.630  D 8419  D
 9 -115.473 32.800 1121 207.469  D 8419  D


 On 24-Aug-09, at 11:23 AM, Mehdi Khan wrote:

  Hello everyone, when I run the unique command on my data frame,
  it deletes
  the majority of duplicate rows, but not all of them.  Here is a
  sample of my
  data. How do I get it to delete all the rows?
 
   6 -115.38 32.894 195 162.94 D 8419 D
 
   7 -115.432 32.864 115 208.91 D 8419 D
 
   8 -115.447 32.773 1170 264.57 D 8419 D
 
   9 -115.447 32.773 1170 264.57 D 8419 D
 
   10 -115.447 32.773 1170 264.57 D 8419 D
 
   11 -115.447 32.773 1170 264.57 D 8419 D
 
   12 -115.447 32.773 149 186.21 D 8419 D
 
   13 -115.466 32.855 114 205.63 D 8419 D
 
   14 -115.473 32.8 1121 207.469 D 8419 D
 
 
  Thanks a bunch!
 
  Mehdi Khan
 
[[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide http://www.R-project.org/posting-
  guide.html
  and provide commented, minimal, self-contained, reproducible code.

 Don McKenzie, Research Ecologist
 Pacific WIldland Fire Sciences Lab
 US Forest Service

 Affiliate Professor
 School of Forest Resources, College of the Environment
 CSES Climate Impacts Group
 University of Washington

 desk: 206-732-7824
 cell: 206-321-5966
 d...@u.washington.edu
 donaldmcken...@fs.fed.us

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
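
A minimal sketch of the rounding idea behind Bert Gunter's ?round hint, using the test.df shown above; the choice of columns (the numeric ones flagged against FAQ 7.31) and the number of digits are illustrative, not from the thread:

tol <- 3                                    # decimal places to keep
num <- c("V1", "V2", "V4")                  # numeric columns to compare within tolerance
key <- test.df
key[num] <- lapply(key[num], round, digits = tol)
test.df[!duplicated(key), ]                 # drop rows that coincide after rounding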


Re: [R] Unique command not deleting all duplicate rows

2009-08-24 Thread Don McKenzie

duplicated()

 test.df
V1 V2   V3  V4 V5   V6 V7
1 -115.380 32.894  195 162.940  D 8419  D
2 -115.432 32.864  115 208.910  D 8419  D
3 -115.447 32.773 1170 264.570  D 8419  D
4 -115.447 32.773 1170 264.570  D 8419  D
5 -115.447 32.773 1170 264.570  D 8419  D
6 -115.447 32.773 1170 264.570  D 8419  D
7 -115.447 32.773  149 186.210  D 8419  D
8 -115.466 32.855  114 205.630  D 8419  D
9 -115.473 32.800 1121 207.469  D 8419  D

 test.df[!duplicated(test.df),]
V1 V2   V3  V4 V5   V6 V7
1 -115.380 32.894  195 162.940  D 8419  D
2 -115.432 32.864  115 208.910  D 8419  D
3 -115.447 32.773 1170 264.570  D 8419  D
7 -115.447 32.773  149 186.210  D 8419  D
8 -115.466 32.855  114 205.630  D 8419  D
9 -115.473 32.800 1121 207.469  D 8419  D


On 24-Aug-09, at 11:23 AM, Mehdi Khan wrote:

Hello everyone, when I run the unique command on my data frame,  
it deletes
the majority of duplicate rows, but not all of them.  Here is a  
sample of my

data. How do I get it to delete all the rows?

 6 -115.38 32.894 195 162.94 D 8419 D

 7 -115.432 32.864 115 208.91 D 8419 D

 8 -115.447 32.773 1170 264.57 D 8419 D

 9 -115.447 32.773 1170 264.57 D 8419 D

 10 -115.447 32.773 1170 264.57 D 8419 D

 11 -115.447 32.773 1170 264.57 D 8419 D

 12 -115.447 32.773 149 186.21 D 8419 D

 13 -115.466 32.855 114 205.63 D 8419 D

 14 -115.473 32.8 1121 207.469 D 8419 D


Thanks a bunch!

Mehdi Khan

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting- 
guide.html

and provide commented, minimal, self-contained, reproducible code.


Don McKenzie, Research Ecologist
Pacific Wildland Fire Sciences Lab
US Forest Service

Affiliate Professor
School of Forest Resources, College of the Environment
CSES Climate Impacts Group
University of Washington

desk: 206-732-7824
cell: 206-321-5966
d...@u.washington.edu
donaldmcken...@fs.fed.us

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, lmer, gls, and spatial autocorrelation

2009-08-24 Thread Timothy_Handley
Bert -

 I took a look at that page just now, and I'd classify my problem as
spatial regression. Unfortunately, I don't think the spdep library fits my
needs. Or at least, I can't figure out how to use it for this problem. The
examples I have seen all use spdep with networks. They build a graph,
connecting each location to something like the nearest N neighbors, attach
some set of weights, and then do an analysis. The plots in my data have a
very irregular, semi-random, yet somewhat clumped (several isolated
islands), spatial distribution. Honestly, it's quite weird looking. I don't
know how to cleanly turn this into a network, and even if I did, I don't
know that I ought to. To me (and please feel free to disagree) it seems
more natural to use a matrix of distances and associated correlations,
which is what the gls function appears to do.

In the ecological analysis section, it looks like both 'ade4' and 'vegan'
may have helpful tools. I'll explore that some more. However, I still think
that one of lme or gls already has the functionality I need, and I just
need to learn how to use them properly.

Tim Handley
Fire Effects Monitor
Santa Monica Mountains National Recreation Area
401 W. Hillcrest Dr.
Thousand Oaks, CA 91360
805-370-2347


   
From: Bert Gunter gunter.ber...@gene.com
To: timothy_hand...@nps.gov, r-help@r-project.org
Date: 08/24/2009 11:43 AM
Subject: RE: [R] lme, lmer, gls, and spatial autocorrelation




Have you looked at the Spatial task view on CRAN? That would seem to me
the logical first place to go.

Bert Gunter
Genentech Nonclinical Biostatistics


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of timothy_hand...@nps.gov
Sent: Monday, August 24, 2009 11:12 AM
To: r-help@r-project.org
Subject: [R] lme, lmer, gls, and spatial autocorrelation


Hello folks,

I have some data where spatial autocorrelation seems to be a serious
problem, and I'm unclear on how to deal with it in R. I've tried to do my
homework - read through 'The R Book,' use the online help in R, search the
internet, etc. - and I still have some unanswered questions. I'd greatly
appreciate any help you could offer. The super-super short explanation is
that I'd like to draw a straight line through my data, accounting for
spatial autocorrelation and using Poisson errors (I have count data).
There's a longer explanation at the end of this e-mail, I just didn't want
to overdo it at the start.

There are three R functions that do at least some of what I would like, but
I'm unclear on some of their specifics.

1. lme - Maybe models spatial autocorrelation, but doesn't allow for
Poisson errors. I get mixed messages from The R Book. On p. 647, there's an
example that uses lme with temporal autocorrelation, so it seems that you
can specify a correlation structure. On the other hand, on p.778, The R
Book says, "the great advantage of the gls function is that the errors are
allowed to be correlated." This suggests that only gls (not lme or lmer)
allows specification of a corStruct class. Though it may also suggest that
I have an incomplete understanding of these functions.

2. lmer - Allows specification of a Poisson error structure. However, it
seems that lmer does not yet handle correlated errors.

3. gls - Surely works with spatial autocorrelation, but doesn't allow for
Poisson errors. Does allow the spatial autocorrelation to be assessed
independently for different groups (I have two groups, one at each of two
different spatial scales).

Since gls is what The R Book uses in the example of spatial
autocorrelation, this seems like the best option. I'd rather have Poisson
errors, but Gaussian would be OK. However, I'm still somewhat confused by
these three functions. In particular, I'm unclear on the difference between
lme and gls. I'd feel more confident in my results if I had a better
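
For what it's worth, a minimal sketch of the gls route with an exponential spatial correlation structure; the data are simulated stand-ins and none of the variable names come from the thread:

library(nlme)

set.seed(1)
plots <- data.frame(x = runif(40), y = runif(40),        # plot coordinates
                    elev = runif(40, 100, 500))          # a covariate
plots$count <- rpois(40, exp(1 + 0.002 * plots$elev))    # fake count response

## Gaussian errors, correlation decaying exponentially with distance
fit <- gls(count ~ elev, data = plots,
           correlation = corExp(form = ~ x + y))
summary(fit)

If Poisson errors are essential, MASS::glmmPQL accepts the same corStruct classes through its correlation argument (it fits by penalized quasi-likelihood), which may be one way to combine both requirements.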

Re: [R] create list entry from variable

2009-08-24 Thread Henrique Dallazuanna
Try this:

l <- list(i + 1)
names(l) <- i


On Mon, Aug 24, 2009 at 4:01 PM, rami jiossy sra...@hotmail.com wrote:


 Hi;

 assume i <- 10

 how can  i create a list having key=10 and value=11

 list(i=11) generates a list with

 'i'
 [1] 11

 and not

 10
 [1] 11

 any help?

 Thanks

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] create list entry from variable

2009-08-24 Thread rami jiossy

Yep; 

great thanks :)

Date: Mon, 24 Aug 2009 16:11:20 -0300
Subject: Re: [R] create list entry from variable
From: www...@gmail.com
To: sra...@hotmail.com
CC: r-help@r-project.org

Try this:

l <- list(i + 1)
names(l) <- i


On Mon, Aug 24, 2009 at 4:01 PM, rami jiossy sra...@hotmail.com wrote:

 Hi;

 assume i <- 10

 how can  i create a list having key=10 and value=11

 list(i=11) generates a list with

 'i'
 [1] 11

 and not

 10
 [1] 11

 any help?

 Thanks

 [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] create list entry from variable

2009-08-24 Thread William Dunlap
 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of rami jiossy
 Sent: Monday, August 24, 2009 12:01 PM
 To: R-Help
 Subject: [R] create list entry from variable
 
 
 Hi;
 
  assume i <- 10
 
 how can  i create a list having key=10 and value=11
 
 list(i=11) generates a list with 
 
 'i'
 [1] 11
 
 and not 
 
 10
 [1] 11
 
 any help?

You can use [[<- with a character argument for the key
myList <- list()
i <- "10"
myList[[i]] <- 11
myList
   $`10`
   [1] 11
If you use an integer argument, as in,
myList[[10]] <- 11
then 11 becomes the 10'th element of myList, not the
element named '10'.

myList[[i]] <- NULL
will remove the element.

Bill Dunlap
TIBCO Software Inc - Spotfire Division
wdunlap tibco.com 

 
 Thanks
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
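
A small sketch contrasting the two indexing forms; nothing beyond the idea comes from the thread:

a <- list(); a[[10]]   <- 11   # integer index: a grows to length 10, elements 1-9 are NULL
b <- list(); b[["10"]] <- 11   # character key: b has a single element named "10"
length(a)   # 10
length(b)   # 1
names(b)    # "10"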


Re: [R] Two lines, two scales, one graph

2009-08-24 Thread Patrick Connolly
On Mon, 24-Aug-2009 at 08:00AM -0700, Rick wrote:


 First of all, thanks to everyone who answers these questions - it's
 most helpful.

 I'm new to R and despite searching have not found an example of what I
 want to do (there are some good beginner's guides and a lot of complex
 plots, but  I haven't found this).

 I would like to plot two variables against the same abscissa values. They
 have different scales. I've found how to make a second axis on the right
 for labeling, but not how to plot two lines at different scales.

The idea is that you rescale the second lot of y values to fit into
the same range as the first lot.  

If your first ones range from 0 to 10 and your second ones from 0 to
1000, you draw the second line (using the lines() function) after dividing
every value by 100, and I think you will have found how to use axis()
with side = 4 to draw the right-hand axis.  If the zeros don't coincide,
you need to make further adjustments, which should become obvious.

HTH



-- 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.   
   ___Patrick Connolly   
 {~._.~}   Great minds discuss ideas
 _( Y )_ Average minds discuss events 
(:_~*~_:)  Small minds discuss people  
 (_)-(_)  . Eleanor Roosevelt
  
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
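
A minimal sketch of the rescaling idea with made-up data; the factor of 100 and the margin setting are illustrative only:

op <- par(mar = c(5, 4, 4, 4) + 0.1)      # leave room for a right-hand axis
x  <- 1:20
y1 <- runif(20, 0, 10)                    # first series, scale 0-10
y2 <- runif(20, 0, 1000)                  # second series, scale 0-1000
plot(x, y1, type = "l", ylim = c(0, 10), ylab = "y1")
lines(x, y2 / 100, col = "red")           # rescaled second series
axis(side = 4, at = seq(0, 10, 2), labels = seq(0, 1000, 200))
mtext("y2", side = 4, line = 2.5)
par(op)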


Re: [R] Help on comparing two matrices

2009-08-24 Thread Michael Kogan
David: Well, e.g. the first row has 2 ones in your output while there 
were no rows with 2 ones in the original matrix. Since the row and 
column sums can't be changed by sorting them, the output matrix can't be 
equivalent to the original one. But that means nothing, maybe it's 
intended and just for comparison reasons? :) But I don't get how the 
ones can get lost by making a string out of the row values...


Steve: The two matrices I want to compare really are graph matrices, 
just not adjacency but incidence matrices. There should be a way to get 
an adjacency matrix of a graph out of its incidence matrix but I don't 
know it...
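
For the incidence-to-adjacency step, a sketch that applies only to a plain vertex-by-edge incidence matrix of a simple undirected graph (every column containing exactly two 1s), which may or may not match these matrices:

incidence_to_adjacency <- function(B) {
  A <- B %*% t(B)   # off-diagonal entries count edges shared by each vertex pair
  diag(A) <- 0      # the diagonal holds vertex degrees; zero it out
  A
}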


David Winsemius schrieb:


On Aug 23, 2009, at 4:14 PM, Michael Kogan wrote:


Thanks for all the replies!

Steve: I don't know whether my suggestion is a good one. I'm quite 
new to programming, have absolutely no experience and this was the 
only one I could think of. :-) I'm not sure whether I'm able to put 
your tips into practice, unfortunately I had no time for much reading 
today but I'll dive into it tomorrow.


David: To be honest I don't understand your code yet, but the result 
is not equivalent to the original matrix since the row sums don't 
match, isn't it? Or is it intended like this? I'll try to ask Google 
(or rather RSeek) tomorrow to understand your code. :-)


Not sure what you mean by the row sums don't match.

All I did (working from the inside of that function outward) was:

a) concatenate the row values into a string:
 Reduce(paste, sm[1,])
[1] "0 0 0 0 1 1 0 0"  # the innermost function applied to the first row.

b) do it for every row
 sapply(1:7, function(x) Reduce(paste, sm[x,]))
[1] "0 0 0 0 1 1 0 0" "1 1 1 1 0 1 1 0" "1 1 1 1 0 0 1 0" "1 0 0 1 0 0 0 0" "0 0 1 1 1 0 0 1"
[6] "1 0 0 0 0 0 0 1" "1 1 0 0 1 1 0 1"

c) create a sorting vector from that vector (of characters):
 order(sapply(1:7, function(x) Reduce(paste, sm[x,])) )
[1] 1 5 6 4 7 3 2

d) use that sort vector to order the rows:
 sm[order(sapply(1:7, function(x) Reduce(paste, sm[x,])) ), ]
 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]00001100
[2,]00111001
[3,]10000001
[4,]10010000
[5,]11001101
[6,]11110010
[7,]11110110

All of the original vectors are output, just in a different order, so 
I am therefore confused: why do you think the rowSums don't match? 
Don't match what?


I assumed you would take this process and apply it to _each_ of the 
two matrices in question and then see if you got a TRUE result with 
the identical function or perhaps the == function.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] plotting a grid with grid() ?

2009-08-24 Thread John Kane
I am trying to come up with a way of shading-in a grid for a simple pattern 

So far I can draw a square where I want but I cannot seem to draw a complete 
grid. I am just drawing them along the diagonal!! 

Clearly I am missing something simple but what?

Any suggestions gratefully accepted.

Example
#
op <- par(xaxs="i", yaxs="i")
plot(c(1, 11), c(1,11), type="n", xlab="", ylab="")

x1 <- rep(1:10, each=10)
x2 <- rep(2:11, each=10)
y1 <- rep(1:10, each=10)
y2 <- rep(2:11, each=10)

# no grid :(
rect(x1,y1,x2,y2, border="blue")

rect(2,2,3,3, col="red")

x1 <- rep(1:10,10)
x2 <- rep(2:11, 10)
y1 <- rep(1:10, 10)
y2 <- rep(2:11, 10)

# no grid again :(
rect(x1,y1,x2,y2, border="blue")

par(op)

#=






__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
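
A sketch of one possible fix (not from the thread): x and y have to vary independently, so one of them should use times= while the other uses each=:

op <- par(xaxs = "i", yaxs = "i")
plot(c(1, 11), c(1, 11), type = "n", xlab = "", ylab = "")
x1 <- rep(1:10, times = 10)   # x cycles fastest
y1 <- rep(1:10, each  = 10)   # y steps once per row of squares
rect(x1, y1, x1 + 1, y1 + 1, border = "blue")   # all 100 cells
rect(2, 2, 3, 3, col = "red")                   # shade one cell
par(op)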


Re: [R] Two lines, two scales, one graph

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 11:00 AM, Rick wrote:



First of all, thanks to everyone who answers these questions - it's
most helpful.

I'm new to R and despite searching have not found an example of what I
want to do (there are some good beginner's guides and a lot of complex
plots, but  I haven't found this).

I would like to plot two variables against the same abscissa values.  
They
have different scales. I've found how to make a second axis on the  
right

for labeling, but not how to plot two lines at different scales.


After using R site search I would ask: have you looked at as.layer in  
{latticeExtra}?



David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help on comparing two matrices

2009-08-24 Thread David Winsemius


On Aug 24, 2009, at 4:01 PM, Michael Kogan wrote:

David: Well, e.g. the first row has 2 ones in your output while  
there were no rows with 2 ones in the original matrix. Since the row  
and column sums can't be changed by sorting them, the output matrix  
can't be equivalent to the original one. But that means nothing,  
maybe it's intended and just for comparison reasons? :) But I don't  
get how the ones can get lost by making a string out of the row  
values...


OK, so shoot me.  I screwed up and forgot to use byrow=TRUE in my scan  
operation. So I ended up with a different starting matrix than you.  
This is what it should have looked like:


 sm <- matrix(scan(textConnection("
+ 01110110
+ 11000101
+ 10100011
+ 11001000
+ 10111000
+ 01011000
+ 00000111")), 7, 8, byrow=TRUE)
Read 56 items
 sm
 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]01110110
[2,]11000101
[3,]10100011
[4,]11001000
[5,]10111000
[6,]01011000
[7,]00000111
 order(sapply(1:7, function(x) Reduce(paste, sm[x,])) )
[1] 7 6 1 3 5 2 4
 sm[order(sapply(1:7, function(x) Reduce(paste, sm[x,])) ), ]
 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]00000111
[2,]01011000
[3,]01110110
[4,]10100011
[5,]10111000
[6,]11000101
[7,]11001000

The process creates a sorted index and then just outputs rows from the  
original matrix, so there cannot be any row that was not there at the  
start. Gabor's solution will do the same operation and certainly looks  
more elegant than mine. (His input operation did the same mutilation  
on your input string as did mine.)


--
David




Steve: The two matrices I want to compare really are graph matrices,  
just not adjacency but incidence matrices. There should be a way to  
get an adjacency matrix of a graph out of its incidence matrix but I  
don't know it...


David Winsemius schrieb:


On Aug 23, 2009, at 4:14 PM, Michael Kogan wrote:


Thanks for all the replies!

Steve: I don't know whether my suggestion is a good one. I'm quite  
new to programming, have absolutely no experience and this was the  
only one I could think of. :-) I'm not sure whether I'm able to  
put your tips into practice, unfortunately I had no time for much  
reading today but I'll dive into it tomorrow.


David: To be honest I don't understand your code yet, but the  
result is not equivalent to the original matrix since the row sums  
don't match, isn't it? Or is it intended like this? I'll try to  
ask Google (or rather RSeek) tomorrow to understand your code. :-)


Not sure what you mean by the row sums don't match.

All I did (working from the inside of that function outward) was:

a) concatenate the row values into a string:
 Reduce(paste, sm[1,])
[1] "0 0 0 0 1 1 0 0"  # the innermost function applied to the first row.


b) do it for every row
 sapply(1:7, function(x) Reduce(paste, sm[x,]))
[1] "0 0 0 0 1 1 0 0" "1 1 1 1 0 1 1 0" "1 1 1 1 0 0 1 0" "1 0 0 1 0 0 0 0" "0 0 1 1 1 0 0 1"
[6] "1 0 0 0 0 0 0 1" "1 1 0 0 1 1 0 1"

c) create a sorting vector from that vector (of characters):
 order(sapply(1:7, function(x) Reduce(paste, sm[x,])) )
[1] 1 5 6 4 7 3 2

d) use that sort vector to order the rows:
 sm[order(sapply(1:7, function(x) Reduce(paste, sm[x,])) ), ]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]00001100
[2,]00111001
[3,]10000001
[4,]10010000
[5,]11001101
[6,]11110010
[7,]11110110

All of the original vectors are output, just in a different order,  
so I am therefore confused: why do you think the rowSums don't  
match? Don't match what?


I assumed you would take this process and apply it to _each_ of the  
two matrices in question and then see if you got a TRUE result with  
the identical function or perhaps the == function.




David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

