Re: [R] fitdistr question

2011-02-11 Thread Ingmar Visser
The ML estimate of lambda is the mean, so no need for (iterative)
optimization. See eg:
http://mathworld.wolfram.com/MaximumLikelihood.html
hth, Ingmar
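
A minimal sketch of both points (toy data; assumes the MASS and bbmle
packages are installed): fitdistr()'s lambda is just the sample mean, and
for mle()/mle2() the function to minimize is the negative log-likelihood,
not a residual sum of squares:

library(MASS)    # fitdistr()
library(bbmle)   # mle2()

set.seed(1)
x <- rpois(200, lambda = 3.2)        # toy Poisson sample

fitdistr(x, "Poisson")$estimate      # lambda ...
mean(x)                              # ... equals the sample mean

# mle2() expects the NEGATIVE log-likelihood:
nll <- function(lambda) -sum(dpois(x, lambda, log = TRUE))
mle2(nll, start = list(lambda = mean(x)))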

On Fri, Feb 11, 2011 at 8:52 AM, Antje Niederlein niederlein-rs...@yahoo.de
 wrote:

 Hello,

 I tried to fit a Poisson distribution, but looking at the function
 fitdistr() it does not optimize lambda; it simply estimates the mean
 of the data and returns it as lambda. I'm a bit confused because I was
 expecting an optimization of this parameter to obtain a good fit...
 If I used mle() from the stats4 package or mle2() from the bbmle package, I
 would have to write the function to be optimized myself.
 But what should I return?

 -sum((y_observed - y_fitted)^2)

 ?

 Any other suggestions or comments on my solution?

 Antje

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting characters from string

2011-02-11 Thread Soumendra
Well, I believe, given the original statement of the problem, that it
is philosophically wrong to use the gsub approach. What if there are
50 underscores instead of 5, and you want to extract the characters
after the 23rd underscore? By using gsub, you are trying to fight
against the pattern of underscores. By using strsplit, we are using
that pattern to our advantage. Kind of. :)

Besides, breaking it up using strsplit will also give us the option to
iterate through it, though it is not relevant here.
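
A small illustration of that point with a made-up vector, using sapply() to
apply strsplit() over several strings at once:

v <- c("abcd_efgh_X_12ab3_dfsfd", "one_two_three_four_five")
sapply(strsplit(v, "_"), `[`, 3)
# [1] "X"     "three"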




--
Soumendra Prasad Dhanee
Quantitative Analyst, Neural Technologies and Software Pvt. Ltd.

soumen...@neuraltechsoft.com, soumen...@maths.org.in, soumen...@gmail.com
+91-7498076111, +91-8100428686

--
When you understand why you dismiss all the other possible gods, you
will understand why I dismiss yours. - Stephen Roberts



On 11 February 2011 05:55, jim holtman jholt...@gmail.com wrote:
 A safer way to make sure you don't match the underscore:

 gsub("[^_]*_[^_]*_([^_]*).*", "\\1",  "abcd_efgh_X_12ab3_dfsfd")
 [1] "X"


 On Thu, Feb 10, 2011 at 2:06 PM, Henrique Dallazuanna www...@gmail.com 
 wrote:
 So, a way could be:

 gsub("(.*)_(.*)_(.*)_.*", "\\3",  "abcd_efgh_X_12ab3_dfsfd")

 On Thu, Feb 10, 2011 at 3:47 PM, Soumendra soumen...@gmail.com wrote:

 Hi Henrique,

 I believe your solution is wrong as it is fitted to find 12ab3,
 whereas Yan seems to be asking for the characters after the second
 underscore and before the third underscore.

 For example, gsub(".*_.*_(.*)_.*", "\\1",
 "abcd_efgh_X_12ab3_dfsfd") would still yield "12ab3" even though, as
 I understand it, it should have output "X".

 I think a straightforward solution would do the job:

 strsplit("abcd_efgh_12ab3_dfsfd", "_")[[1]][3]

 strsplit("abcd_efgh_X_12ab3_dfsfd", "_")[[1]][3] has the output
 "X", for example.

 Of course, I would be wrong if Yan specifically wanted to find the
 string 12ab3. But in that case, he would have been asking for matching
 (and locating) that substring instead of extracting it.

 Regards,

 Soumendra


 --
 Soumendra Prasad Dhanee
 Quantitative Analyst, Neural Technologies and Software Pvt. Ltd.

 soumen...@neuraltechsoft.com, soumen...@maths.org.in, soumen...@gmail.com
 +91-7498076111, +91-8100428686

 --
 When you understand why you dismiss all the other possible gods, you
 will understand why I dismiss yours. - Stephen Roberts



 On 10 February 2011 11:52, Henrique Dallazuanna www...@gmail.com wrote:
  Try this:
 
  gsub(".*_.*_(.*)_.*", "\\1", "abcd_efgh_12ab3_dfsfd")
 
  On Thu, Feb 10, 2011 at 9:42 AM, Yan Jiao y.j...@ucl.ac.uk wrote:
 
  Dear R gurus,
 
 
 
  If I got a vector with string characters like "abcd_efgh_12ab3_dfsfd",
  how could I extract "12ab3", which is the characters after the second
  underscore and before the third underscore?
 
 
 
  Tons of thanks
 
 
 
  yan
 
 
 
 
 
 
  **
  This email and any files transmitted with it are
 confide...{{dropped:10}}
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 
 
 
  --
  Henrique Dallazuanna
  Curitiba-Paraná-Brasil
  25° 25' 40 S 49° 16' 22 O
 
         [[alternative HTML version deleted]]
 
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 




 --
 Henrique Dallazuanna
 Curitiba-Paraná-Brasil
 25° 25' 40 S 49° 16' 22 O

        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





 --
 Jim Holtman
 Data Munger Guru

 What is the problem that you are trying to solve?


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Tobias Verbeke

On 02/10/2011 07:44 PM, David Smith wrote:

The SAS import/export feature of Revolution R Enterprise 4.2 isn't
open-source, so we can't release it in open-source Revolution R
Community, or to CRAN as we do with the ParallelR packages (foreach,
doMC, etc.).

It is, though, available for download free of charge to members of the
academic community (as is all of Revolution Analytics' software) from
http://www.revolutionanalytics.com/downloads/


timeo Danaos et dona ferentes ("I fear the Greeks, even bearing gifts")


On Wed, Feb 9, 2011 at 5:46 PM, Daniel Nordlund djnordl...@frontier.com  wrote:

Has anyone heard whether Revolution Analytics is going to release this 
capability to the R community?

http://www.businesswire.com/news/home/20110201005852/en/Revolution-Analytics-Unlocks-SAS-Data

Dan

Daniel Nordlund
Bothell, WA USA






VP of Marketing, Revolution Analytics  http://blog.revolutionanalytics.com


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitdistr question

2011-02-11 Thread Antje Niederlein
Hi Ingmar, hi Dennis,

okay, you're right. I was expecting that the result would give the
best fit to my data even if it's not really a Poisson distribution. It
looks somewhat similar...
But how do I judge the goodness of fit? I was using the residual sum of
squares. I'm not a statistician, so I'm not sure whether this method
is the one to choose...
If I estimate lambda with mle2() and use the RSS as the criterion to
minimize, my lambda is much smaller than with fitdistr().

I'm happy about any suggestion!

Antje



On 11 February 2011 09:16, Ingmar Visser i.vis...@uva.nl wrote:
 The ML estimate of lambda is the mean, so no need for (iterative)
 optimization. See eg:
 http://mathworld.wolfram.com/MaximumLikelihood.html
 hth, Ingmar

 On Fri, Feb 11, 2011 at 8:52 AM, Antje Niederlein
 niederlein-rs...@yahoo.de wrote:

 Hello,

 I tried to fit a poisson distribution but looking at the function
 fitdistr() it does not optimize lambda but simply estimates the mean
 of the data and returns it as lambda. I'm a bit confused because I was
 expecting an optimization of this parameter to gain a good fit...
 If I would use mle() of stats4 package or mle2() of bbmle package, I
 would have to write the function by myself which should be optimized.
 But what shall I return?

 -sum((y_observed - y_fitted)^2)

 ?

 Any other suggestions or comments on my solution?

 Antje

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimal choice of the threshold u in Peak Over Threshold (POT)Approach

2011-02-11 Thread Pfaff, Bernhard Dr.
Dear Fir,

for instance, have a look at the package 'ismev' and the function mrl.plot(). 
The CRAN task view 'Finance' lists many more packages that address EVT under 
the topic 'Risk management'.

Best,
Bernhard
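
For the archives, a minimal sketch of the mean residual life plot Bernhard
mentions, using the 'rain' example data shipped with ismev (assuming the
package is installed):

library(ismev)
data(rain)
mrl.plot(rain)     # look for the lowest threshold above which the plot is roughly linear
gpd.fitrange(rain, umin = 10, umax = 40)   # parameter stability over a range of thresholds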

 -----Original Message-----
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On behalf of FMH
 Sent: Thursday, 10 February 2011 19:28
 To: r-help@r-project.org
 Subject: [R] Optimal choice of the threshold u in Peak Over 
 Threshold (POT) Approach
 
 Dear All,
 
  Could someone please suggest a way to calculate the 
  optimal threshold in the POT method via any available packages in R?
 
 Thanks,
 Fir
 
 
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitdistr question

2011-02-11 Thread Antje Niederlein
Yes, I understand.
Suppose I have a distribution which is not listed in fitdistr() but I still
would like to compute the ML estimate.
Would it be correct to maximize the following function?

sum( log( dens_mydistr(x, my_distr_param)))

As I said, I am trying to get into this field by reading and trying things,
and I'm not sure whether I have understood how to find the ML function of
a more complex distribution...
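
For what it is worth, a hedged sketch of that idea with mle2(), where
dens_mydistr and my_distr_param are the placeholders used above and x is
the observed data; mle2() wants the negative of that sum:

library(bbmle)
nll <- function(my_distr_param) -sum(log(dens_mydistr(x, my_distr_param)))
fit <- mle2(nll, start = list(my_distr_param = 1))   # pick a sensible start value
summary(fit)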

Antje



On 11 February 2011 10:14, Ingmar Visser i.vis...@uva.nl wrote:
 Antje,

 On Fri, Feb 11, 2011 at 9:58 AM, Antje Niederlein
 niederlein-rs...@yahoo.de wrote:

 Hi Ingmar, hi Dennis,

 okay, you're right. I was expecting that the result would give the
 best fit to my data even if it's not a real poisson distribution. It
 looks somehow similar...

 The ML estimate is of course made under the assumption that the data stems
 from a Poisson distribution, and under that assumption, the ML estimate is
 most efficient and unbiased compared with other estimates.

 Best, Ingmar


 But how to judge the goodness of fit? I was using the residual sum of
 squares. I'm not a statistician, so I'm not sure whether this method
 is the one to choose...
 If I estimate lambda with mle2() and use the RSS as criteria to
 minimize, my lambda is much smaller that with fitdistr().

 I'm happy about any suggestion!

 Antje



 On 11 February 2011 09:16, Ingmar Visser i.vis...@uva.nl wrote:
  The ML estimate of lambda is the mean, so no need for (iterative)
  optimization. See eg:
  http://mathworld.wolfram.com/MaximumLikelihood.html
  hth, Ingmar
 
  On Fri, Feb 11, 2011 at 8:52 AM, Antje Niederlein
  niederlein-rs...@yahoo.de wrote:
 
  Hello,
 
  I tried to fit a poisson distribution but looking at the function
  fitdistr() it does not optimize lambda but simply estimates the mean
  of the data and returns it as lambda. I'm a bit confused because I was
  expecting an optimization of this parameter to gain a good fit...
  If I would use mle() of stats4 package or mle2() of bbmle package, I
  would have to write the function by myself which should be optimized.
  But what shall I return?
 
  -sum((y_observed - y_fitted)^2)
 
  ?
 
  Any other suggestions or comments on my solution?
 
  Antje
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] directed MST

2011-02-11 Thread amir
Hi everybody,

Is there any function in R to find the directed minimum spanning tree?

There are some for undirected graphs, but I am looking for a directed minimum 
spanning tree.

Regards,

Amir

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ggplot: free x-scales in a facet-grid

2011-02-11 Thread Strategische Analyse CSD Hasselt

Hello,

Here is the code with example data, as an addendum to my question (see mail 
below).

Thank you!
Ann

library(ggplot2)
library(grid)
library(RColorBrewer)
library(car)
library(reshape)

#make dataframe
ID=c("a","b","c","d","e","f","g","h","i","j")
type=c("type1","type2","type3","type2","type2","type1","type2","type1","type1","type3")
dat_feit_lo=c(13229222400,13510803600,13463193600,13491619200,13502732400,13514315400,13463193600,13514718600,13514497200,13515031800)
dat_feit_hi=c(13502591940,13510803600,13464798000,13508697600,13514036100,13514315400,13507862400,13514719380,13514432400,13515036600)
dat_pol=c(13512488400,13510877580,13468415940,13508697600,13514036100,13514315400,13513528800,13514719380,13514809800,13515037260)
dat_avv_start=c(13512502320,13510936200,13513705980,13514227440,13514217300,13514396280,13514636520,13514810580,13514909640,13515099060)
feiten <- data.frame(ID,type,dat_feit_lo,dat_feit_hi,dat_pol,dat_avv_start)

#make POSIX of date variables
feiten$dat_feit_lo <- as.POSIXct(feiten$dat_feit_lo,
origin="1582-10-14",tz="GMT")
feiten$dat_feit_hi <- as.POSIXct(feiten$dat_feit_hi,
origin="1582-10-14",tz="GMT")
feiten$dat_pol <- as.POSIXct(feiten$dat_pol, origin="1582-10-14",tz="GMT")
feiten$dat_avv_start <- as.POSIXct(feiten$dat_avv_start,
origin="1582-10-14",tz="GMT")

#sort & melt data#
feiten$ID <- with(feiten,reorder(reorder(reorder(ID,1/as.numeric(dat_pol)),1/as.numeric(dat_avv_start)),as.numeric(type)))
sortframe=function(df,...)df[do.call(order,list(...)),]
data_sort <- with(feiten,sortframe(feiten,as.numeric(type),1/as.numeric(dat_avv_start),1/as.numeric(dat_pol)))
data.melt <- melt.data.frame(data_sort, id=c("ID","type"), variable_name =
"time")
levels(data.melt$time) <- c("fact low","fact high","complaint","hearing")


#make plot#
data.melt$pos <- data.melt$value < as.POSIXlt("2010-12-01 00:00:00")
data.melt$pos[is.na(data.melt$pos)] <- 'FALSE'

plot <-
ggplot(data.melt,aes(value,ID)) +
geom_point(aes(groups=time,colour=time,shape=time)) +
facet_grid(type~pos,scales="free",space="free") +
opts(strip.text.y=theme_text())+
xlab(NULL) + ylab(NULL)+
opts(axis.text.x = theme_text(angle = 90, hjust = 1, size = 8)) +
opts(legend.text = theme_text(hjust=1, size = 8))+
opts(legend.position="top",legend.direction="horizontal")+
scale_shape_manual(values = c(1,3,0,2),name="")  +
scale_colour_manual(values =
c("red","red","royalblue4","mediumvioletred"),name="")


- Original Message - 
From: Strategische Analyse CSD Hasselt csd...@fedpolhasselt.be

To: r-help@R-project.org
Sent: Thursday, February 10, 2011 2:40 PM
Subject: Ggplot: free x-scales in a facet-grid



Hello,

I have a ggplot that has the looks of the plot that I want, but it doesn't 
have the right layout.


The data is an ordered melted dataframe:
- ID
- type (to use for a facet grid)
- time - type
- time - value (POSIXct)
- pos (to use for a facet grid, this is an index to split the plot)

The goal of the plot is to create a time line for each ID (different 
points in time). The IDs are split into facets according to their type.


The plot will look like this (the numbers refer to the ID, the letters to 
the time values):


1 xosTYPE1
2xo   s
3 xosTYPE2
4xos TYPE3

The data are ordered within each type, according to date 's'.

Now here's the problem. Most of the data are in the period 01/12/2010 
to 31/01/2011. But there are some outliers, going back to 2003.
Now I would like to split the plot in 2 (based on the index 'pos', split 
date = 01/12/2010), so the left part of the plot shows the time values 
before this date (scale_x_datetime major = 1 year), and the right part of 
the plot shows the time values after this date (scale_x_datetime major = 1 
day).


Hereby also the R-code (simplified):
ggplot(data_plot.melt,aes(timevalue,ID)) +
geom_point(aes(groups=timetype,colour=timetype,shape=timetype)) +
facet_grid(type ~pos,scales=free,space=free) +
xlab(NULL) + ylab(NULL)

The y scales have to be free, because the number of IDs per type 
differs. The x scales have to be free, so the scales differ in the left 
and right part of the plot.
This code achieves my goal, but the left part of the plot is very big 
and the right part very small. However, the most important part of 
the plot is the right part. The left part is only there to show the outliers, 
so the plot can be read correctly.


I don't know if it's possible to get a plot like I want?

Before, I added the following code to make the plot, but then I lose the 
information of every time value before 01/12/2010:
+ scale_x_datetime(major = "1 days",
  limits = c(as.numeric(as.POSIXlt("2010-12-01 00:00:00")),
             as.numeric(as.POSIXlt("2011-01-31 22:00:00"))),
  format = "%b-%d", expand = c(0,0))


Thank you very much in advance!

Ann Frederix


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 

[R] How to compute yaxp and usr without plotting ?

2011-02-11 Thread Yves REECHT
  Dear all,

I'd like to know how I could compute the parameters "yaxp" and (the y 
components of) "usr" without having to plot the data first. Note that 
ylim is /a priori/ fixed.

The aim is to automatically adjust the parameter "mgp" without having to 
make the plot twice. Then, with "yaxp" and "usr" known, it should be 
easy to calculate a suitable "mgp" with the axTicks and strwidth functions.
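
A rough sketch of one possible approach (an approximation, not an exact
replacement for par("usr") and par("yaxp")): with the default yaxs = "r"
the user y-range is just ylim extended by 4% at each end, and the default
tick positions are close to what pretty() returns:

ylim  <- c(0, 13.7)                   # hypothetical, a priori fixed ylim
usr.y <- extendrange(ylim, f = 0.04)  # approximates par("usr")[3:4]
ticks <- pretty(usr.y)                # approximates axTicks(2)
usr.y
ticks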

Many thanks in advance,
Yves

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Simulation of Multivariate Fractional Gaussian Noise and Fractional Brownian Motion

2011-02-11 Thread Wonsang You
Dear Kjetil,

Thank you so much for your advice on my question.

Best Regards,
Wonsang



2011/2/10 Kjetil Halvorsen kjetilbrinchmannhalvor...@gmail.com

 What you can do to find out is to type into your R session
 RSiteSearch("multivariate fractional gaussian")

 That seems to give some useful results.

 Kjetil

 On Tue, Feb 8, 2011 at 1:51 PM, Wonsang You y...@ifn-magdeburg.de wrote:
 
  Dear R Helpers,
 
  I have searched for any R package or code for simulating multivariate
  fractional Brownian motion (mFBM) or multivariate fractional Gaussian
 noise
  (mFGN) when a covariance matrix are given. Unfortunately, I could not
 find
  such a package or code.
  Can you suggest any solution for multivariate FBM and FGN simulation?
 Thank
  you for your help.
 
  Best Regards,
  Ryan
 
 
  -
  Wonsang You
  Leibniz Institute for Neurobiology
  --
  View this message in context:
 http://r.789695.n4.nabble.com/Simulation-of-Multivariate-Fractional-Gaussian-Noise-and-Fractional-Brownian-Motion-tp3276296p3276296.html
  Sent from the R help mailing list archive at Nabble.com.
 
 [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Barry Rowlingson
On Fri, Feb 11, 2011 at 7:52 AM, Dr. Michael Wolf m-w...@muenster.de wrote:
 Dear R colleagues,

 is there an easier way to write R packages for the own use - without RTools
 and TeX?

 There are simpler ways of maintaining R source code than building
packages. I bashed out a quick way of keeping code in directories as
source and reading it in on demand. All you need is two functions:

import <- function(dir){
  e = attach(NULL,name=dir)
  assign("__path__",dir,envir=e)
  reload(e)
  invisible(e)
}

reload <- function(e){
  path = get("__path__",e)
  files = list.files(path,".R$",full.names=TRUE,recursive=TRUE,ignore.case=TRUE)
  for(f in files){
    sys.source(f,envir=e)
  }
}

 Now put the source code in some folder/directory somewhere. You can
even make subdirs if you wish to organise it that way.

 I've got some .R files in "lib1". I do:

  import("lib1")

 and that runs 'source' on all the .R files in there and loads them
into position 2 on the search list. ls(pos=2) shows them.

 If you edit the source code, just do reload(2) (assuming it hasn't
moved because you've loaded another package) and your stuff will be
updated. It just sources everything again. Do detach(2) to get rid of
it.

 If you want to distribute your code in source format, just make a
zip or tar of the folder with the code in, and make sure your users
have the import and reload functions (I should probably put them in a
package one day...).

 To distribute in 'binary' format, do an import, and then save as .RData:

  save(list=ls(pos=2),file="lib1.RData", envir=as.environment(2)) #
not well tested

Then your users just need to attach("lib1.RData") to get your functions.

 Now this scheme could get more complex - for example it doesn't deal
with C or Fortran code, or documentation, or tests, or examples, or
version numbering. Adding any of those would be pointless - if you
want any of that either use packages and learn to write R docs
(roxygen helps) or add it to my code yourself. You'll end up
rebuilding the R package system anyway.

 Hope this helps. Doubtless someone else will come up with a simple
way to build proper packages without all the bondage and discipline
that annoys you.

Barry

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ggplot: free x-scales in a facet-grid

2011-02-11 Thread ONKELINX, Thierry
Dear Ann,

The easiest way is to separate the plot into two subplots and then use 
viewports to paste them together.

Best regards,

Thierry


p1 <- 
ggplot(subset(data.melt, pos == FALSE),aes(value,ID)) +
geom_point(aes(groups=time,colour=time,shape=time)) +
facet_grid(type~.,scales="free",space="free") +
opts(strip.text.y=theme_text())+
xlab(NULL) + ylab(NULL)+
opts(axis.text.x = theme_text(angle = 90, hjust = 1, size = 8)) +
opts(legend.text = theme_text(hjust=1, size = 8))+
opts(legend.position="top",legend.direction="horizontal")+
scale_shape_manual(values = c(1,3,0,2),name="")  +
scale_colour_manual(values =
c("red","red","royalblue4","mediumvioletred"),name="")

p2 <- 
ggplot(subset(data.melt, pos == TRUE),aes(value,ID)) +
geom_point(aes(groups=time,colour=time,shape=time)) +
facet_grid(type~.,scales="free",space="free") +
opts(strip.text.y=theme_text())+
xlab(NULL) + ylab(NULL)+
opts(axis.text.x = theme_text(angle = 90, hjust = 1, size = 8)) +
opts(legend.text = theme_text(hjust=1, size = 8))+
opts(legend.position="top",legend.direction="horizontal")+
scale_shape_manual(values = c(1,3,0,2),name="")  +
scale_colour_manual(values =
c("red","red","royalblue4","mediumvioletred"),name="")

vp1 <- viewport(width = 1/3, height = 1, x = 1/6, y = 0.5)
vp2 <- viewport(width = 2/3, height = 1, x = 4/6, y = 0.5)
print(p1, vp = vp1)
print(p2, vp = vp2)



ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey
  

 -----Original Message-----
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On behalf of Strategische 
 Analyse CSD Hasselt
 Sent: Friday, 11 February 2011 10:09
 To: r-help@r-project.org
 Subject: Re: [R] Ggplot: free x-scales in a facet-grid
 
 Hello,
 
 hereby the code with example data, as an attach to my 
 question (see mail 
 below).
 Thank you!
 Ann
 
 [quoted code from the original message snipped]
 
 
 - Original Message - 
 From: Strategische Analyse CSD Hasselt csd...@fedpolhasselt.be
 To: r-help@R-project.org
 Sent: Thursday, February 10, 2011 2:40 PM
 

Re: [R] Matrix of Matrices?

2011-02-11 Thread Petr Savicky
On Thu, Feb 10, 2011 at 11:54:50PM -0800, Alaios wrote:
 Dear all, I have a few matrices that I would like to store all together under a 
 bigger object.
 My matrices with the same name were calculated inside a loop like
 
 for (few times){
 
    estimatedsr <- ...   # this is my matrix
    savematrixasimagefile()
 
 }
 
 which means that I was losing all those instances.
 What can I do to keep all these matrices? (I do not know in advance their 
 number, so I cannot preallocate space.) 
 How can I store them and address them back again?
 
 I would like to thank you in advance in your help

Hello.

A possible approach is to use list (see ?list).

  lst <- list()
  lst[[1]] <- rbind(c(1, 2), c(1, 2))
  lst[[2]] <- rbind(c(3, 3), c(4, 4))
  lst

  [[1]]
       [,1] [,2]
  [1,]    1    2
  [2,]    1    2
  
  [[2]]
       [,1] [,2]
  [1,]    3    3
  [2,]    4    4

If you know in advance an upper bound on the number of matrices,
then it is possible to use an array (see ?array). For example
storing two matrices 2 x 2 may be done as follows

  a <- array(dim=c(2, 2, 2))
  a[,,1] <- rbind(c(1, 2), c(1, 2))
  a[,,2] <- rbind(c(3, 3), c(4, 4))
  a

  , , 1
  
       [,1] [,2]
  [1,]    1    2
  [2,]    1    2
  
  , , 2
  
       [,1] [,2]
  [1,]    3    3
  [2,]    4    4

Hope this helps.

Petr Savicky.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Spencer Graves

Dear Dr. Wolf:


  I understand your concern that the mechanics of writing an R 
package can be difficult.  It was hard for me when I started.



  I came to embrace it, because I actually got more done in less 
time doing so.  In my previous experience, as code I wrote got more 
complicated, it became more difficult to find bugs.  Then I started 
writing R packages, beginning by writing the documentation including 
examples that became unit tests 
(http://en.wikipedia.org/wiki/Unit_testing).  Then I wrote code to the 
test.  For example, I would first write function A.  Then I write B that 
calls A.  Then I write C that also uses A.  In the process of writing C, 
I modify A.  When I run R CMD check, I learn that my change to A broke 
B.  I learn that instantly.  Without the R package discipline, it could 
be 6 months before I learned that I had a bug.  Then finding and fixing 
it was very difficult and time consuming.  Now, whenever I stumble over 
a new bug, I add it to my tests in the examples on the help page.  
People tell me I shouldn't put too much in the examples, because the 
complicated examples can confuse users.  Fortunately, with \dontshow, 
not all examples need to be shown to the users of the help page.



  By using the R package development process, I get more 
trustworthy software -- fewer bugs -- with less time, work, blood, sweat 
and tears.  The savings comes from less time spent debugging.  Moreover, 
the resulting software is generally better designed, because the process 
of writing the standard help page with the standard parts helps me think 
through what each function should do and how they all should work 
together.  I save so much debugging time that I essentially get the 
documentation for free.  This makes it much easier to share my code with 
others -- and to revisit code I wrote previously but had forgotten many 
details.



  I started writing Fortran in 1963, and I had many bad software 
development habits.  I had read a lot about software productivity and 
the virtues various software productivity practices.  It was only when I 
started writing R packages that I began to actually use those 
recommended practices productively.



  I've never read the entire Writing R Extensions manual, though 
I have used it repeatedly as a reference.  Fortunately, there are 
simpler introductions to R package development.  Have you checked the 
contributed page on CRAN for the word "package"?  I just found 9 
matches, including 4 documents on creating R packages, two in English, 
one in French, and one in Italian.  Unfortunately, none in German.



  Hope this helps.
  Spencer


On 2/11/2011 3:03 AM, Barry Rowlingson wrote:

On Fri, Feb 11, 2011 at 7:52 AM, Dr. Michael Wolfm-w...@muenster.de  wrote:

Dear R colleagues,

is there an easier way to write R packages for the own use - without RTools
and TeX?

  There are simpler ways of maintaining R source code than building
packages. I bashed out a quick way of keeping code in directories as
source and reading in on demand. All you need is two functions:

import <- function(dir){
   e = attach(NULL,name=dir)
   assign("__path__",dir,envir=e)
   reload(e)
   invisible(e)
}

reload <- function(e){
   path = get("__path__",e)
   files = 
list.files(path,".R$",full.names=TRUE,recursive=TRUE,ignore.case=TRUE)
   for(f in files){
     sys.source(f,envir=e)
   }
}

  Now put the source code in some folder/directory somewhere. You can
even make subdirs if you wish to organise it that way.

  I've got some .R files in lib1. I do;

import("lib1")

  and that runs 'source' on all the .R files in there and loads them
into position 2 on the search list. ls(pos=2) shows them.

  If you edit the source code, just do reload(2) (assuming it hasn't
moved because you've loaded another package) and your stuff will be
updated. It just sources everything again. Do detach(2) to get rid of
it.

  If you want to distribute your code in source format, just make a
zip or tar of the folder with the code in, and make sure your users
have the import and reload functions (I should probably put them in a
package one day...).

  To distribute in 'binary' format, do an import, and then save as .RData:

save(list=ls(pos=2),file="lib1.RData", envir=as.environment(2)) #
not well tested

Then your users just need to attach("lib1.RData") to get your functions.

  Now this scheme could get more complex - for example it doesn't deal
with C or Fortran code, or documentation, or tests, or examples, or
version numbering. Adding any of those would be pointless - if you
want any of that either use packages and learn to write R docs
(roxygen helps) or add it to my code yourself. You'll end up
rebuilding the R package system anyway.

  Hope this helps. Doubtless someone else will come up with a simple
way to build proper packages without all the bondage and discipline
that annoys you.

Barry

__
R-help@r-project.org mailing list

Re: [R] Writing R packages in an easier way?

2011-02-11 Thread S Ellison


 Dr. Michael Wolf m-w...@muenster.de 11/02/2011 07:52 
is there an easier way to write R packages for the own use - without
RTools and TeX?

Installing Rtools is not hard, and doesn't have to happen often; the
hardest bit in Windows is making sure that the requisite executables are
on the path, and that just involves adding the directory names to the
path environment variable. If I understand you, the problem is the time
spent hacking about in the .Rd help files. That can certainly be
simplified - eliminated, in fact.

Use package.skeleton() once you have a good starting set of functions
and data in R. That creates all the necessary directories, creates
skeleton (but valid) .Rd files, and exports your functions and data
objects for you. You can then edit the code directly, use RCMD check to
check the package (useful anyway) and use RCMD build to build it. (In
fact if all you want is the zip, you can - or at least could - zip the
package directory created by RCMD check). 
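
A minimal sketch of that workflow ('myfun', 'mydata' and "mypkg" are
placeholders):

# Generate a valid package skeleton from objects in the current workspace
myfun  <- function(x) x + 1
mydata <- data.frame(a = 1:3)
package.skeleton(name = "mypkg", list = c("myfun", "mydata"))
# Then, at the command line (with Rtools on the PATH under Windows):
#   R CMD check mypkg
#   R CMD build mypkg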

S Ellison


***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Barry Rowlingson
On Fri, Feb 11, 2011 at 12:33 PM, Spencer Graves
spencer.gra...@structuremonitoring.com wrote:
 Dear Dr. Wolf:


      I understand your concern that the mechanics of writing an R package
 can be difficult.  It was hard for me when I started.

 I should add that although I did write that import/reload code for
simple development of R code in folders - I don't actually use it.
It's one of those things I wrote because someone on R-help asked a
similar question to Dr Wolf's.

 I write proper packages with examples and tests. Of course I do.

Barry

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Large Datasets

2011-02-11 Thread John Filben
I have recently been using R - more specifically the GUI packages Rattle 
and Rcmdr.

I like these products a lot and want to use them for some projects - the 
problem that I run into is when I start to try to run large datasets through 
them. The data sets are 10-15 million records and usually have 15-30 fields 
(both numerical and categorical).

I saw that there were some packages that can deal with large datasets in R - 
bigmemory, ff, ffdf, biganalytics. My problem is that I am not much of a coder 
(which is the reason I use the above-mentioned GUIs). These GUIs do show 
the executable R code in the background - my thought was to run a small sample 
through the GUI, copy the code, and then incorporate some of the large-data 
packages mentioned above - has anyone ever tried to do this, and would you have 
working examples? In terms of what I am trying to do to the data - really 
simple stuff - descriptive statistics, k-means clustering, and possibly some 
decision trees. Any help would be greatly appreciated.
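
One possible sketch of that idea, assuming the bigmemory/biganalytics
packages and a hypothetical numeric file big.csv (categorical fields would
need to be coded numerically first); the argument names follow my reading
of the package docs and should be checked against the current help pages:

library(bigmemory)
library(biganalytics)

x <- read.big.matrix("big.csv", header = TRUE, type = "double",
                     backingfile = "big.bin", descriptorfile = "big.desc")

colmean(x)                                   # simple descriptive statistics
cl <- bigkmeans(x, centers = 3, nstart = 5)  # k-means on the big.matrix
table(cl$cluster)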

Thank you - John
John Filben
Cell Phone - 773.401.2822
Email - johnfil...@yahoo.com 


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Ordinal logistic regression (lrm)- checking model assumptions

2011-02-11 Thread Anna Berthinussen

Dear all,

I have been using the lrm function in R to run an ordinal logistic  
regression and I am a bit confused about the methods for checking the  
model assumptions.


I have produced residual plots in R of the score.binary type which I  
think look ok. However, the partial type plots show bell shaped  
patterns and have crossing lines, indicating violation of parallelism.  
However, I noticed on the help page that for ordinal models,  
simulations where proportional odds are satisfied have also produced  
similar patterns.


I have also run the regression in SPSS and found that the test of  
parallel lines shows that the assumption of parallelism has not been  
violated. However, I have read that this is not a reliable method.


I am finding it very confusing to determine if my model meets the  
necessary assumptions. Does anybody know the best way to do this?


Any advice would be much appreciated,

Thank you,

Anna

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can we make a vector call a function element-wise efficiently?

2011-02-11 Thread Eik Vettorazzi
Hi,
you compute the same results for logx many times. So it is easier and
time-saving to tabulate all the intermediate results.

Something like
 n <- 10
 CT=6000       # assignment to CT
 NT=29535210   # assignment to NT
 i <- 0:(n-1)
 lookup <- lchoose(NT-n, CT-i) + lchoose(n, i)
 lgmax <- cummax(lookup)
 calsta2 <- function(c) lgmax[c] + log(sum(exp(lookup[1:c] - lgmax[c])))

should help for a start, but I think you are running into numerical
trouble, since you are dealing with very high and low (on a log scale)
numbers, and calsta constantly returns 57003.6 for c > 38 (the summands in
sum(exp(logx - logmax)) will become 0 for c > 38).

#check
sapply(1:50,calsta2)
sapply(1:50,calsta)

hth

On 11.02.2011 06:12, zhaoxing731 wrote:
 Hello
   I have a time-consuming program which I need to simplify; I have tested 
 the annotated program as follows:
 
 #define the function which will be called
 
 calsta <- function(c, n=10) 
 + { 
 +   i <- seq(from=0, length=c) 
 +   logx <- lchoose(NT-n, CT-i) + lchoose(n, i) 
 +   logmax <- max(logx) 
 +   logmax + log(sum(exp(logx - logmax))) 
 + } 
 CT=6000  #assignment to CT
 NT=29535210  #assignment to NT

 vec <- c(2331,524,918,218,1100,547,289,1167,450,1723)
 vec
  [1] 2331  524  918  218 1100  547  289 1167  450 1723
 vec <- rep(vec,1000)   #replicate the vec 1000 times
 length(vec)
 [1] 10000 
 
 #then I'd like to make vector vec call function calsta element-wise
 #and save the output to vector result
 
 system.time(result <- sapply(vec,calsta))
    user  system elapsed 
   26.45    0.03   26.70 

 system.time(for (i in 1:10000) result[i]=calsta(vec[i]))
    user  system elapsed 
   27.74    0.14   28.94 
 
 I have about 300,000 such runs of 26.70/28.94 seconds, so the approximate 
 computation time is 100 days.
 What a terrible thing to do!!!
 Any modification, no matter how subtle, will be a big help for me
 
 Thank you in advance
 
 Yours sincerely
  
 
 
 
 ZhaoXing
 Department of Health Statistics
 West China School of Public Health
 Sichuan University
 No.17 Section 3, South Renmin Road
 Chengdu, Sichuan 610041
 P.R.China
 
 __
 
 
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


-- 
Eik Vettorazzi
Institut für Medizinische Biometrie und Epidemiologie
Universitätsklinikum Hamburg-Eppendorf

Martinistr. 52
20246 Hamburg

T ++49/40/7410-58243
F ++49/40/7410-57790

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R example code of Split-plot Manova

2011-02-11 Thread John Fox
Dear Xiang Gao,

See the OBrienKaiser example in ?Anova in the car package.

I hope this helps,
 John
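
For convenience, the general shape of that example (abridged from memory of
?Anova; see the help page for the authoritative version):

library(car)
example(Anova)   # runs the documented examples, including OBrienKaiser

# The repeated-measures / split-plot call looks roughly like:
#   mod <- lm(cbind(pre.1, ..., fup.5) ~ treatment*gender, data = OBrienKaiser)
#   Anova(mod, idata = idata, idesign = ~phase*hour)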

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 On Behalf Of Xiang Gao
 Sent: March-15-10 4:35 PM
 To: r-help@r-project.org
 Subject: [R] R example code of Split-plot Manova
 
 Hi,
 
 Urgent help- I have not been using R and statistics in my research for a
 long time, but still remember some concept. I would like to have a sample
 code for Manova analysis of Split-plot experiment. Could someone please
 post a sample code and a short input sample as well?
 
 Thank you so much!
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using merge

2011-02-11 Thread Ronaldo Reis Junior
Hi,

I have two tables and I need to merge both. I use the merge command, but 
this way the names must match exactly. How can I make the comparison 
independent of upper or lower case?

Look:

data1 <- data.frame(journal=c("Ecology","Environmental 
Entomology","Neotropical Biology And Conservation"))

data2 <- data.frame(journal=c("Ecology","Environmental 
Entomology","Neotropical Biology and 
Conservation","Sociobiology"),qualis=c("A1","A2","B1","B5"))

merge(data1,data2)

> merge(data1,data2)
                   journal qualis
1                  Ecology     A1
2 Environmental Entomology     A2

the expected result is:

                               journal qualis
1                              Ecology     A1
2             Environmental Entomology     A2
3 Neotropical Biology And Conservation     B1

Note that the result is wrong because of the "And" vs "and" in the name 
"Neotropical Biology And Conservation".

How can I fix this automatically? Is there any function to make all names 
lowercase, or any other means to make this work?

Thanks
Ronaldo

-- 
16th law - Remember, it is your dissertation. You (!) are the one who has to do it.

   --Herman, I. P. 2007. Following the law. NATURE, Vol 445, p. 228.

  Prof. Ronaldo Reis Júnior
|  .''`. UNIMONTES/DBG/Lab. Ecologia Comportamental e Computacional
| : :'  : Campus Universitário Prof. Darcy Ribeiro, Vila Mauricéia
| `. `'` CP: 126, CEP: 39401-089, Montes Claros - MG - Brasil
|   `- Fone: (38) 3229-8192 | ronaldo.r...@unimontes.br
| http://www.ppgcb.unimontes.br/lecc | LinuxUser#: 205366


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] censReg or tobit: testing for assumptions in R?

2011-02-11 Thread E Hofstadler
Hello!

I'm thinking of applying a censored regression model to
cross-sectional data, using either the tobit (package survival) or the
censReg function (package censReg). The dependent variable is left and
right-censored.

My hopefully not too silly question is this: I understand that
heteroskedasticity and nonnormal errors are even more serious problems
in a censored regression than in an ols-regression. But I'm not sure
how to test for these assumptions in R? Is there a way to get to the
residuals of censored regression models (given that corresponding
functions for lm, such as rstandard, are not applicable)?

(Or perhaps I'm on the wrong track in a more fundamental sense and
shouldn't be looking for equivalents to lm?)

Many thanks for any help.

EH

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using filled.contour and contour functions together

2011-02-11 Thread Xavier Bodin

Dear R help contributors,

I'd like to plot ground temperature with time on the X-axis and depth on the Y-axis
for this dataset ( http://r.789695.n4.nabble.com/file/n3301033/NEdaily.csv
NEdaily.csv ), and to do so I use the following commands:

library(RSEIS) 

xNE <- seq(1, as.numeric(as.Date(max(NEdaily[[1]])) -
as.Date(min(NEdaily[[1]]))), 1)
yNE <- rev(c(-0.3, -0.5, -0.7, -0.9, -1.1, -1.4, -1.7, -2, -2.5, -3, -4,
-5, -7, -9, -10))
zNE <-
mirror.matrix(as.matrix(NEdaily[1:(nrow(NEdaily)-1),2:length(NEdaily)]))

filled.contour(xNE,yNE,zNE
, col = myPal(20)
, zlim = c(-20,20)
, ylab = "Depth [m]"
, xlab = paste("Days since ", as.Date(min(NEdaily[[1]]), format
="%d.%m.%Y"))
)
contour(xNE,yNE,zNE, lty = 3, add = T)
contour(xNE,yNE,zNE, nlevels = 1, level = 0, add = T, lwd = 1.5)

I get this graph ( http://r.789695.n4.nabble.com/file/n3301033/NEdaily.png
NEdaily.png ) and don't understand why the filled.contour and contour plots are
not set on the same dimensions and why they don't exactly overlay. Does
anyone have an idea and a solution?
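
Not a full answer, but one workaround suggested by ?filled.contour is to
draw the contour lines inside the plot.axes argument, so they share the
coordinate system of the filled plot rather than the full device region
(filled.contour() reserves part of the device for its colour key, which is
why add = TRUE from outside does not line up):

filled.contour(xNE, yNE, zNE,
               zlim = c(-20, 20),
               ylab = "Depth [m]",
               plot.axes = {
                 axis(1); axis(2)
                 contour(xNE, yNE, zNE, lty = 3, add = TRUE)
                 contour(xNE, yNE, zNE, levels = 0, lwd = 1.5, add = TRUE)
               })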

Thanks in advance,

Xavier
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Using-filled-contour-and-contour-functions-together-tp3301033p3301033.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] extracting p-values from the Manova function (car library)

2011-02-11 Thread Bettina Kulle Andreassen

hi,

I am not able to extract the p-values from the
Manova function in the car library. I need
to use this function in a high-throughput setting
and somehow need the p-values it produces.

Any ideas?

Best regards

Bettina Kulle Andreassen

--

Bettina Kulle Andreassen

University of Oslo

Department of Biostatistics

and

Institute for Epi-Gen (Faculty Division Ahus)

tel:
+47 22851193
+47 67963923

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Extract a slot value from a 'SpatialPolygons' class object

2011-02-11 Thread Xavier Hoenner
Dear R-users,

I’m currently trying to extract the value of a slot (area) but can’t find out 
how to do that.

str(overlperc)

List of 1
 $ :Formal class 'SpatialPolygons' [package sp] with 4 slots
  .. ..@ polygons   :List of 1
  .. .. ..$ :Formal class 'Polygons' [package sp] with 5 slots
  .. .. .. .. ..@ Polygons :List of 1
  .. .. .. .. .. ..$ :Formal class 'Polygon' [package sp] with 5 slots
  .. .. .. .. .. .. .. ..@ labpt  : num [1:2] 709374 -1507888
  .. .. .. .. .. .. .. ..@ area   : num 11542604
  .. .. .. .. .. .. .. ..@ hole   : logi FALSE
  .. .. .. .. .. .. .. ..@ ringDir: int 1
  .. .. .. .. .. .. .. ..@ coords : num [1:23, 1:2] 706840 706657 706840 707294 
707585 ...
  .. .. .. .. .. .. .. .. ..- attr(*, dimnames)=List of 2
  .. .. .. .. .. .. .. .. .. ..$ : chr [1:23] 1 2 3 4 ...
  .. .. .. .. .. .. .. .. .. ..$ : chr [1:2] x y
  .. .. .. .. ..@ plotOrder: int 1
  .. .. .. .. ..@ labpt: num [1:2] 709374 -1507888
  .. .. .. .. ..@ ID   : chr 1
  .. .. .. .. ..@ area : num 11542604
  .. ..@ plotOrder  : int 1
  .. ..@ bbox   : num [1:2, 1:2] 706657 -1509411 711710 -1506189
  .. .. ..- attr(*, dimnames)=List of 2
  .. .. .. ..$ : chr [1:2] x y
  .. .. .. ..$ : chr [1:2] min max
  .. ..@ proj4string:Formal class 'CRS' [package sp] with 1 slots
  .. .. .. ..@ projargs: chr NA

I’d like to extract the area value so as to be able to use it for 
further analysis. Below is what I get when I just type the name of the 
object:

overlperc

[[1]]
An object of class SpatialPolygons
Slot polygons:
[[1]]
An object of class Polygons
Slot Polygons:
[[1]]
An object of class Polygon
Slot labpt:
[1]   709374 -1507888

Slot area:
[1] 11542604

Slot hole:
[1] FALSE

Slot ringDir:
[1] 1

Slot coords:
          x        y
1  706839.8 -1508654
2  706657.2 -1508029
3  706839.8 -1507634
4  707293.6 -1507284
5  707584.7 -1507174
6  708329.6 -1506851
7  709013.3 -1506539
8  709074.5 -1506513
9  709819.5 -1506189
10 710564.4 -1506289
11 711021.1 -1506539
12 711309.3 -1506769
13 711642.0 -1507284
14 711709.8 -1508029
15 711309.3 -1508690
16 711246.4 -1508774
17 710564.4 -1509347
18 709819.5 -1509411
19 709074.5 -1509277
20 708329.6 -1509072
21 707584.7 -1509039
22 706927.4 -1508774
23 706839.8 -1508654



Slot plotOrder:
[1] 1

Slot labpt:
[1]   709374 -1507888

Slot ID:
[1] 1

Slot area:
[1] 11542604



Slot plotOrder:
[1] 1

Slot bbox:
 minmax
x   706657.2   711709.8
y -1509411.3 -1506189.0

Slot proj4string:
CRS arguments: NA

I’m stuck and have spent more or less the whole afternoon online trying to find a 
solution to my problem, but I couldn’t find anything. I look forward to hearing 
from some of you. Thanks in advance for your kind help.
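
For reference, a sketch of how the "@" / slot() accessors walk down that
structure (based on the str() output above):

overlperc[[1]]@polygons[[1]]@area                    # area slot of the single Polygons object
slot(slot(overlperc[[1]], "polygons")[[1]], "area")  # the same via slot()

# all areas, if there were several Polygons objects:
sapply(overlperc[[1]]@polygons, slot, "area")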

Have a good weekend.

Cheers


Xavier


Xavier Hoenner
PhD Student
Nesting Ecology, Migrations and Diving Behaviour of Hawksbill Turtles
School of Environmental Research, Charles Darwin University
Darwin, NT 0909
Ph: (08)8946.7721
email:xavier.hoen...@cdu.edu.au

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] When is *interactive* data visualization useful to use?

2011-02-11 Thread Mike Marchywka



 From: tal.gal...@gmail.com
 Date: Fri, 11 Feb 2011 08:26:16 +0200
 To: r-help@r-project.org
 Subject: [R] When is *interactive* data visualization useful to use?

 Hello all,

 Before getting to my question, I would like to apologize for asking this
 question here. My question is not directly an R question, however, I still
 find the topic relevant to R community of users - especially due to only *
 partial* (current) support for interactive data visualization (see here:
 http://cran.r-project.org/web/views/Graphics.html were with iplots we are
 waiting for iplots extreme, and with rggobi, it currently can not run with R
 2.12 and windows 7 OS).


I guess I would just mention a few related issues, central to R,
that I have encountered. This is not well organized, but if there is a point
here, I'm suggesting that maybe the thing to do is to make R work better with
streaming data and to provide a way to pipe text data to and from other
graphically oriented tools that could be taken from many unrelated sources.

One issue is the concept of streaming for dealing with unlimited data,
and the other is playing nicely with other tools. I recently ran into
your concerns with R (a few days ago), wondering whether interactivity might be
a good way to survey a plot I had: many thousands of points that were hard to
explore without interactive zoom seemed to be a natural fit for this. People
here often complain about memory limits with large data sets, and it is not
unreasonable to want to work with indefinitely long data streams and examine
results in real time. I had encountered this in the past; IIRC I wanted to
watch histograms from a Monte Carlo simulation and wanted to know right away
if things were going wrong.

Probably you would want to consider R capabilities along with those of
related tools and means for sharing data. Even complex models or data
are normally reducible to text that can piped around to various tools so
having a feature like this in any tools or packages is important.


If you want to author fixed results but let the viewer interact with them,
maybe look at things like PDF once there are more open-source tools for
dealing with it. I have grown up hating PDF, but apparently the viewers can
offer reasonable interactivity with properly authored PDF files. The standard
is hardly well supported by open-source tools, and many features of the
standard get described as "only available if you buy this from Adobe".
This creates two issues: one is just cost and annoyance, but the other is the
ability to check results. If you suspect something is wrong with open source,
you can always look; as for taking someone's word for software correctness,
well, take a look at the credit rating agencies LOL. And there is always a
concern about an attitude problem here too, as web designers seem to think
"well, we created a huge brand-name file that is also a 'standard'; if it is
that big and from a big company, there must be lots of information in all
those bytes" - as if they get paid by the megabyte, when often just a csv
file would be more useful to R users.


If you really want professional graphics with good interactivity and are willing
to dig a little as part of a larger survey, I'd be curious to know if there is 
anything
that can be extracted from all the interactive games LOL...




 And now for my question:

 While preparing for a talk I will give soon, I recently started digging into
 two major (Free) tools for interactive data visualization:
 GGobi
 and mondrian  - both offer a great range of
 capabilities (even if they're a bit buggy).

 I wish to ask for your help in articulating (both to myself, and for my
 future audience) *When is it helpful to use interactive plots? Either for
 data exploration (for ourselves) and data presentation (for a client)?*

 For when explaining the data to a client, I can see the value of animation
 for:

 - Using identify/linking/brushing for seeing which data point in the
 graph is what.
 - Presenting a sensitivity analysis of the data (e.g: if we remove this
 point, here is what we will get)
 - Showing the effect of different groups in the data (e.g: let's look at
 our graphs for males and now for the females)
 - Showing the effect of time (or age, or in general, offering another
 dimension to the presentation)

 For when exploring the data ourselves, I can see the value of
 identify/linking/brushing when exploring an outlier in a dataset we are
 working on.

 But other than these two examples, I am not sure what other practical use
[[elided Hotmail spam]]

 It could be argued that the interactive part is good for exploring (For
 example) a different behavior of different groups/clusters in the data. But
 when (in practice) I approached such situation, what I tended to do was to
 run the relevant statistical procedures (and post-hoc tests) - and what I
 found to be significant I would then plot with colors clearly dividing the
 data 

[R] Passing function arguments

2011-02-11 Thread Michael Pearmain
Hi All,

I'm looking for some help passing function arguments and referencing them.
I've made a replica, less complicated function to show my problem, and how
I've made a workaround for it. However, I suspect there is a _FAR_ better
way of doing this.

If i do:
BuildDecayModel <- function(x = "this", y = "that", data = model.data) {
  model <- nls(y ~ SSexp(x, y0, b), data = model.data)
  return(model)
}
...

Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
  0 (non-NA) cases

This function returns an error because the args are passed as the strings "this"
and "that" to the model, and so it fails (correct?)

If i do the following:
BuildDecayModel <- function(x = "total.reach", y = "lift", data =
model.data) {
  x <- data[[x]]
  y <- data[[y]]
  model.data <- as.data.frame(cbind(x, y))
  model <- nls(y ~ SSexp(x, y0, b), data = model.data)
  return(model)
}

This works for me, but it seems that I'm missing a trick with just
manipulating the args rather than making an entire new data.frame to work
from.

Can anyone offer some advice?

Thanks in advance

Mike

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using merge

2011-02-11 Thread Ronaldo Reis Junior
Hi,

ignore my e-mail, I just used the tolower function.
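
For the record, a sketch along the lines of what worked (same toy data as
in the message below):

data1 <- data.frame(journal=c("Ecology", "Environmental Entomology",
                              "Neotropical Biology And Conservation"))
data2 <- data.frame(journal=c("Ecology", "Environmental Entomology",
                              "Neotropical Biology and Conservation", "Sociobiology"),
                    qualis=c("A1", "A2", "B1", "B5"))
data1$journal <- tolower(data1$journal)
data2$journal <- tolower(data2$journal)
merge(data1, data2)   # now matches regardless of the And/and difference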

thanks and sorry

Ronaldo

Em 11-02-2011 11:24, Ronaldo Reis Junior escreveu:
 Hi,

 I have two tables and I need to merge both. I use the merge command, 
 but in this way the name must be exactly. How I can make to compare 
 independently of upper or lower-case?

 Look:

 data1 <- data.frame(journal=c("Ecology", "Environmental Entomology",
 "Neotropical Biology And Conservation"))

 data2 <- data.frame(journal=c("Ecology", "Environmental Entomology",
 "Neotropical Biology and Conservation", "Sociobiology"),
 qualis=c("A1", "A2", "B1", "B5"))

 merge(data1, data2)

                    journal qualis
 1                  Ecology     A1
 2 Environmental Entomology     A2

 the expected result is:

                                 journal qualis
 1                               Ecology     A1
 2              Environmental Entomology     A2
 3  Neotropical Biology And Conservation     B1

 Note that the result is wrong because of the "And" versus "and" in the name
 "Neotropical Biology And Conservation".

 How can I fix this automatically? Is there any function to make all names
 lowercase, or any other means to make this work?

 Thanks
 Ronaldo
 -- 
 16ª lei - Lembre-se, é a sua dissertação. Você (!) é quem precisa fazê-la.

--Herman, I. P. 2007. Following the law. NATURE, Vol 445, p. 228.

   Prof. Ronaldo Reis Júnior
 |  .''`. UNIMONTES/DBG/Lab. Ecologia Comportamental e Computacional
 | : :'  : Campus Universitário Prof. Darcy Ribeiro, Vila Mauricéia
 | `. `'` CP: 126, CEP: 39401-089, Montes Claros - MG - Brasil
 |   `- Fone: (38) 3229-8192 |ronaldo.r...@unimontes.br
 |http://www.ppgcb.unimontes.br/lecc  | LinuxUser#: 205366


-- 
1ª lei - Suas férias começam após a defesa e entrega de sua dissertação.

   --Herman, I. P. 2007. Following the law. NATURE, Vol 445, p. 228.

  Prof. Ronaldo Reis Júnior
|  .''`. UNIMONTES/DBG/Lab. Ecologia Comportamental e Computacional
| : :'  : Campus Universitário Prof. Darcy Ribeiro, Vila Mauricéia
| `. `'` CP: 126, CEP: 39401-089, Montes Claros - MG - Brasil
|   `- Fone: (38) 3229-8192 | ronaldo.r...@unimontes.br
| http://www.ppgcb.unimontes.br/lecc | LinuxUser#: 205366


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitdistr question

2011-02-11 Thread Ben Bolker
Antje Niederlein niederlein-rstat at yahoo.de writes:

 
 Hi Ingmar, hi Dennis,
 
 okay, you're right. I was expecting that the result would give the
 best fit to my data even if it's not a real Poisson distribution. It
 looks somehow similar...
 But how do I judge the goodness of fit? I was using the residual sum of
 squares. I'm not a statistician, so I'm not sure whether this method
 is the one to choose...
 If I estimate lambda with mle2() and use the RSS as the criterion to
 minimize, my lambda is much smaller than with fitdistr().
 

   There are many ways to define the best fit; RSS is one reasonable
option, and maximum likelihood (which in the case of a Poisson distribution
is equivalent to least-squares weighted by a variance equal to the expected
mean, i.e. minimizing sum((y.obs-y.fitted)^2/y.fitted)) is another.
Which you choose really depends
on why you are calculating the estimates in the first place and
what you intend to use them for, although for Poisson data
maximum likelihood approaches are more widely accepted.
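
For illustration, a quick sketch of the two criteria on simulated Poisson
counts (here "RSS" means squared differences between observed and fitted
count proportions; your version may differ):

set.seed(1)
y <- rpois(500, lambda = 4)
mean(y)                              # the ML estimate, i.e. what fitdistr() returns
obs <- table(factor(y, levels = 0:max(y))) / length(y)
rss <- function(lambda) sum((as.numeric(obs) - dpois(0:max(y), lambda))^2)
optimize(rss, c(0.1, 20))$minimum    # the least-squares estimate; close, but not identical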

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Large Datasets

2011-02-11 Thread Wonsang You
I have never tried to use any GUI package, so I cannot give you much
help there.
Instead, I would like to report my experience of using the 'ff' package
to get access to large datasets.

To achieve your goal, I think you will need to write functions that
handle ff objects.
In my experience, when I created a function that handled ff objects,
it did not recognize those ff objects correctly inside the
function. If you encounter such problems, you can refer to this article:
http://wonsangyou.blogspot.com/2011/01/fast-access-to-large-database-in-r.html
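
As a rough starting point, a sketch of reading a large csv into an ff data
frame (the file and column names here are only placeholders):

library(ff)
big <- read.csv.ffdf(file = "bigdata.csv",   # placeholder file name
                     header = TRUE,
                     next.rows = 500000)     # read and append in chunks
dim(big)
summary(big$field1[])    # '[]' pulls one column into RAM as an ordinary vector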


2011/2/11 John Filben johnfil...@yahoo.com

 I have recently been using R - more specifically the GUI packages Rattle
 and Rcmdr.

 I like these products a lot and want to use them for some projects - the
 problem
 that I run into is when I start to try and run large datasets through
 them.  The
 data sets are 10-15 million in record quantity and usually have 15-30
 fields
 (both numerical and categorical).

 I saw that there were some packages that could deal with large datasets in
 R -
 bigmemory, ff, ffdf, biganalytics.  My problem is that I am not much of a
 coder
 (and the reason I use the above mentioned GUIs).  These GUIs do show
 the executable R code in the background - my thought was to run a small
 sample
 through the GUI, copy the code, and then incorporate some of the large-data
 packages mentioned above - has anyone ever tried to do this, and would you
 have working examples?  In terms of what I am trying to do to the data - really
 simple stuff - descriptive statistics, k-means clustering, and possibly some
 decision trees.  Any help would be greatly appreciated.

 Thank you - John
 John Filben
 Cell Phone - 773.401.2822
 Email - johnfil...@yahoo.com



[[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Comparison of glm.nb and negbin from the package aod

2011-02-11 Thread Ben Bolker
sabwo sabsiw at gmx.at writes:

[big snip; comparing aod::negbin and MASS::glm.nb fits]

 The thing I really don't understand is why there is such a big difference
 between the deviances (glm.nb = 30.67 and negbin = 52.09). Shouldn't they be
 nearly the same??
 

  I don't have time to dig into this right now, but calculations of
log-likelihoods or deviances often drop additive constants (such as
the normalizing constant in a probability distribution), and different
implementations often make different choices about which constant
terms to include or not.  If you dig around in the code you should
be able to find out which terms are included or not (although admittedly
this would be a nice thing to have included in the documentation). This
does make it hard to compare across fits in different packages. The important
thing (and the thing I'm fairly certain of, since I've used both packages
and they both seem to be well-written) is that the **differences** in
deviances when comparing models A and B both fitted in the same package
should be the same (because the additive constants that are included
or not cancel out in this case).
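
  A small sketch of the kind of check I mean (simulated data; the absolute
deviances may differ between implementations, but the difference between two
nested fits within one package is the comparable quantity):

library(MASS)
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rnbinom(200, mu = exp(1 + 2 * d$x), size = 1.5)
m0 <- glm.nb(y ~ 1, data = d)
m1 <- glm.nb(y ~ x, data = d)
deviance(m0) - deviance(m1)   # difference of deviances for the nested models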

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can we make a vector call a function element-wise efficiently?

2011-02-11 Thread Eik Vettorazzi
Hi ZhaoXing,
without knowledge about the ultimate purpose of your calculations its
quite difficult to give further hints.

best regards

Am 11.02.2011 14:35, schrieb zhaoxing731:
 Dear Eik
  
 What a great idea!!! Thank you so much for the colossal improvement.
 Yes, you have a keen eye for the numerical problem; I am worrying about
 this problem right now and hope you can give me new ideas again.
  
 Hi,
 you compute the same results for logx many times. So it is easier and
 time saving tabulating all intermediate results.
 smth. like
  n <- 10
  CT=6000 #assignment to CT
  NT=29535210 #assignment to NT
  i <- 0:(n-1)
  lookup <- lchoose(NT-n, CT-i) + lchoose(n, i)
  lgmax <- cummax(lookup)
  calsta2 <- function(c) lgmax[c] + log(sum(exp(lookup[1:c] - lgmax[c])))
 should help for a start, but I think you are running into numerical
 troubles, since you are dealing with very high and low (on a log scale)
 numbers, and calsta constantly returns 57003.6 for c>38 (the summands in
 sum(exp(logx - logmax)) will become 0 for c>38).
 #check
 sapply(1:50,calsta2)
 sapply(1:50,calsta)
 hth
 /Yours sincerely/
 / /
 
 /ZhaoXing
 Department of Health Statistics
 West China School of Public Health
 Sichuan University
 No.17 Section 3, South Renmin Road
 Chengdu, Sichuan 610041
 P.R.China/


-- 
Eik Vettorazzi
Institut für Medizinische Biometrie und Epidemiologie
Universitätsklinikum Hamburg-Eppendorf

Martinistr. 52
20246 Hamburg

T ++49/40/7410-58243
F ++49/40/7410-57790

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Re. When is *interactive* data visualization useful to use?

2011-02-11 Thread Antony Unwin
Hello Tal,

You asked *When is it helpful to use interactive plots? Either for data 
exploration (for ourselves) and data presentation (for a client)?*

My answer: It's helpful for checking data quality, for exploration with and 
without clients, for checking results, and for presenting data.

Notes:
(1) It's difficult to explain interactive data visualization in print; 
demonstrations are so much more effective.
(2) Interactive data visualization is fun, both for the analyst and, more 
importantly, for the dataset owners.  You not only get better interaction with 
the data, you get better interaction with the scientists you cooperate with.  
They are prepared to contribute, because they can understand what is going on.  
That is not always the case with statistical models.
(3) The key is not animation but direct manipulation.  The aim is to be 
able to directly interact with all statistical objects in a graphic: querying, 
linking, reordering, reformatting, zooming, whatever.
(4) You write of point-based graphics, what about area-based graphics like 
histograms, barcharts and mosaicplots?  For categorical data the ability to 
select groups and look at spineplots of other variables to compare proportions 
is very effective. (And don't forget linking to maps for spatial data.)
(5) You mention outliers.  How do you decide what is an outlier?  Interactive 
parallel coordinate plots are extremely useful, either for identifying outliers 
or for checking ones found with an analytic approach.
(6) Interactive data visualization is not in competition with other approaches, 
it complements them.  Results found with models should be checked graphically 
and results found graphically should be checked analytically.  Your comment 
about data dredging is important, though why people think this only happens 
with graphics and not with modelling approaches always puzzles me!
(7) There are often interesting features of a dataset (not just errors and 
outlier groups) that can be found graphically that would be difficult or 
impossible to find analytically.

Have a look at Interactive Graphics for Data Analysis: Principles and Examples 
by Martin Theus and Simon Urbanek (Chapman & Hall).  There are some excellent 
explanations and case studies there.

I could go on (and on), but what you really need is a good demo.

Best regards

Antony

PS Have you reported the bugs in GGobi and Mondrian you have found to the 
software authors?

Antony Unwin
Professor of Computer-Oriented Statistics and Data Analysis,
Mathematics Institute,
University of Augsburg, 
86135 Augsburg, Germany

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] RCurl - HTTP request of header ONLY

2011-02-11 Thread Janko Thyson
Hi everyone,

I'm trying to send an HTTP request using RCurl that only requests the
response header, not the actual content. 
http://curl.haxx.se/docs/httpscripting.html says you can do this by using
the following option: curl --head http://www.something.com/

However, I can't figure out how to do this when using 'getURL()', for
example. 

Here's what I tried:
FIRST TRY
txt <- getURL("http://www.something.com/", verbose=TRUE, header=TRUE)
cat(txt)
This gives me header AND content.

SECOND TRY
headers <- basicTextGatherer()
txt <- getURL("http://www.something.com/", header=TRUE, trace=TRUE,
headerfunction=headers$update)
cat(headers$value())
This gives me the header, but the content is also requested and sent to
'txt'.

I was looking for a RCurl option like 'head', but only found 'headerdata',
which I assumed is not what I want.

Then I also tried to understand what the individual RCurl options correspond
to in terms of the original libcurl options and found a respective section
in http://www.omegahat.org/RCurl/RCurlJSS.pdf (p. 10, The Request Options).
Since the name of the libcurl option is 'head', a corresponding RCurl
function should also be 'head'. Since it doesn't exist, I take it that it
hasn't been implemented (yet), correct? Is there another way to request
headers only?
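
A possible route, assuming RCurl exposes libcurl's CURLOPT_NOBODY option
under its libcurl name 'nobody' (a sketch, untested):

library(RCurl)
h <- basicTextGatherer()
getURL("http://www.something.com/",
       nobody = 1L,                   # libcurl CURLOPT_NOBODY: do not fetch the body
       headerfunction = h$update)
cat(h$value())                        # response headers only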

Thanks a lot for any advice,
Janko

 Sys.info()
 sysname  release 
   Windows XP 
 version nodename 
build 2600, Service Pack 3   ASHB-109C-02 
 machinelogin 
   x86 wwa418 
user 
wwa418

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Matrix of Matrices?

2011-02-11 Thread Alaios
Thanks, that did the work. Once I have that list, what is the easiest way to 
export the structure as well as the contents (numbers) into a file?

The purpose is to share that file with a colleague and ask him to load that 
variable with its contents and structure.

Best Regards
Alex

--- On Fri, 2/11/11, Petr Savicky savi...@praha1.ff.cuni.cz wrote:

 From: Petr Savicky savi...@praha1.ff.cuni.cz
 Subject: Re: [R] Matrix of Matrices?
 To: r-help@r-project.org
 Date: Friday, February 11, 2011, 12:22 PM
 On Thu, Feb 10, 2011 at 11:54:50PM
 -0800, Alaios wrote:
  Dear all I have a few matrices that I would like to
 store alltogether under a bigger object.
  My matrixes with the same name were calculated inside
 a loop like
  
  for (few times){
  
     estimatedsr- this is my matrix
     savematrixasimagefile();
     
  
  }
  
  which means that I was losing all those instances.
  What can I do to keep all these matrices? (I do not
 know in advance their number, so I cannot preallocate
 space.)
  How can I store them and address them back again?
  
  I would like to thank you in advance in your help
 
 Hello.
 
 A possible approach is to use list (see ?list).
 
   lst <- list()
   lst[[1]] <- rbind(c(1, 2), c(1, 2))
   lst[[2]] <- rbind(c(3, 3), c(4, 4))
   lst
 
   [[1]]
        [,1] [,2]
   [1,]    1    2
   [2,]    1    2
   
   [[2]]
        [,1] [,2]
   [1,]    3    3
   [2,]    4    4
 
 If you know in advance an upper bound on the number of
 matrices,
 then it is possible to use an array (see ?array). For
 example
 storing two matrices 2 x 2 may be done as follows
 
   a <- array(dim=c(2, 2, 2))
   a[,,1] <- rbind(c(1, 2), c(1, 2))
   a[,,2] <- rbind(c(3, 3), c(4, 4))
   a
 
   , , 1
   
        [,1] [,2]
   [1,]    1    2
   [2,]    1    2
   
   , , 2
   
        [,1] [,2]
   [1,]    3    3
   [2,]    4    4
 
 Hope this helps.
 
 Petr Savicky.
 
 __
 R-help@r-project.org
 mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained,
 reproducible code.
 


 


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Matrix of Matrices?

2011-02-11 Thread Petr Savicky
On Fri, Feb 11, 2011 at 06:17:16AM -0800, Alaios wrote:
 Thanks that did the work. Once I have that list what is the easiest way to 
 export the structure as well as the contents (numbers) into a file.
 
 The purpose is to share that file with a colleague and ask him to load that 
 variable with its contents and structure.

Any R object can be stored to a file using save() and read back
using load().
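
For example, a minimal sketch with a list of two matrices:

lst <- list(rbind(c(1, 2), c(1, 2)), rbind(c(3, 3), c(4, 4)))
save(lst, file = "matrices.RData")   # binary file; keeps both structure and contents
## your colleague then runs:
load("matrices.RData")               # recreates the object 'lst' in the workspace
## a plain-text alternative:
dput(lst, file = "matrices.txt"); lst2 <- dget("matrices.txt")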

Petr Savicky.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Abhijit Dasgupta, PhD


I'm sure the legal ground is tricky. However, OpenOffice and LibreOffice 
and KWord have been able to open the (proprietary) MS Word doc format 
for a while now, and they are open source (and Libre Office might even 
be GPL'd), so the algorithm is in fact published in Jeremy's sense, 
and has been for several years. I figure the reason for keeping the SAS 
reading functionality proprietary is Revolution's (perfectly legitimate) 
wish to make money by separating their product from GNU R and adding 
features that would make people want to buy rather than just download 
from CRAN.


Within GNU R there is of course sas.get in the Hmisc package (which 
requires SAS). It should also be quite easy to write a wrapper around 
dsread, a closed-source command-line product, freely downloadable in a 
limited form, which will convert sas7bdat files to csv or tsv format (and 
SQL if you pay). This latter path won't require SAS locally.


I'm also sure that SAS has a way to export its datasets into R, since 
the current version of IML Studio will in fact interact with R.



On 02/10/2011 03:11 PM, Jeremy Miles wrote:

On 10 February 2011 12:01, Matt Shotwellm...@biostatmatt.com  wrote:

On Thu, 2011-02-10 at 10:44 -0800, David Smith wrote:

The SAS import/export feature of Revolution R Enterprise 4.2 isn't
open-source, so we can't release it in open-source Revolution R
Community, or to CRAN as we do with the ParallelR packages (foreach,
doMC, etc.).

Judging by the language of Dr. Nie's comments on the page linked below,
it seems unlikely this feature is the result of a licensing agreement
with SAS. Is that correct?



There was some discussion of this on the SAS email list.  People who
seem to know what they were talking about said that they would have
had to reverse engineer it to decode the file format.  It's slightly
tricky legal ground - the file format can't be copyrighted but
publishing the algorigthm might not be allowed.  I guess if they
release it as open source, that could be construed as publishing the
algorithm. (SPSS and WPS both can open SAS files, and I'd be surprised
if SAS licensed to them.  [Esp WPS, who SAS are (or were) suing for
all kinds of things in court in London.)

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] cycle in a directed graph

2011-02-11 Thread amir
Hi,

I have a directed graph and want to find out whether there is any cycle in it.
If there is one, which nodes or edges are in the cycle?
Is there any way to find a cycle in a directed graph in R?

Regards,
Amir

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Creating a ragged array

2011-02-11 Thread Barth B. Riley
Dear list

I am trying to figure out how to create a ragged array that consists of groups 
of array elements (indices from the original array) of similar values. I would 
like to create a ragged array that might look something like this:

S[1]
11, 19, 14,7

S[2]
29,4,1,13,44

S[3]
56,9,2,35

S[4]
3,5

...

Thanks in advance

Barth

PRIVILEGED AND CONFIDENTIAL INFORMATION
This transmittal and any attachments may contain PRIVILEGED AND
CONFIDENTIAL information and is intended only for the use of the
addressee. If you are not the designated recipient, or an employee
or agent authorized to deliver such transmittals to the designated
recipient, you are hereby notified that any dissemination,
copying or publication of this transmittal is strictly prohibited. If
you have received this transmittal in error, please notify us
immediately by replying to the sender and delete this copy from your
system. You may also call us at (309) 827-6026 for assistance.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lattice auto.key gives mismatch colors

2011-02-11 Thread John Smith
Hello All,

I am using the following code to draw a figure, but the legend given by
auto.key has mismatched colors. Could anyone help me?

I am using R2.12.1 and most current lattice on windows XP.

Thanks

John

library(lattice)

src <- data.frame(t=rep(c('A','B','C','D'), rep(8,4)),
  s=rep(c(8132,8140,8178,8180,8224,8230,8337,8345), 4),
  v=c(55.10, 56.00, 206.00, 5.86, 164.00, 102.00, 171.00,
280.00, 236.00,
91.10, 238.00, 102.00, 59.30, 227.00, 280.00, 316.00,
205.00, 120.00,
273.00, 98.80, 167.00, 104.00, 155.00, 370.00, 215.00,
97.60, 133.00,
135.00, 48.60, 135.00, 77.10, 91.90))
colors <- rgb(c(228,  55,  77, 152, 255, 255, 166, 247),
  c(26,  126, 175,  78, 127, 255,  86, 129),
  c(28,  184,  74, 163,   0,  51,  40, 191), maxColorValue=255)
xyplot(v~t, groups=s, type='o', data=src, col=colors, auto.key =
list(points=TRUE, columns = 4, col=colors))

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Loop in variable names

2011-02-11 Thread Eik Vettorazzi
Hi Stella,
if you just want to print the tables, this should also work
for(i in angus) {
   tab <- paste("table", i, sep="")
   cut <- paste("P", i, sep="")
   print(table(StoreData$CompanyID, !is.na(StoreData[,cut])))
   }

If you want to keep them, your approach works, but you can also store
them in a list
tabs <- list()
for(i in angus) {
   tab <- paste("table", i, sep="")
   cut <- paste("P", i, sep="")
   tabs[[i]] <- table(StoreData$CompanyID, !is.na(StoreData[,cut]))
   }
this produces a list of 5 elements, where only  2 and 5 are populated.

Changing the last line to
tabs[[which(angus==i)]] <- table(StoreData$CompanyID, !is.na(StoreData[,cut]))
produces a list of 2 elements, but the information on angus is lost.
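
A third variation keeps that information by using the angus value itself as
the element name (a sketch along the same lines):

tabs <- list()
for (i in angus) {
   cut <- paste("P", i, sep = "")
   tabs[[as.character(i)]] <- table(StoreData$CompanyID, !is.na(StoreData[, cut]))
   }
tabs[["2"]]   # look up a table by its angus value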

best regards

Am 11.02.2011 15:41, schrieb Rita Carreira:
 Thanks so much, that worked well; however, it did not print the tables.
 I went around it and did the following, which worked:
 
 for(i in angus) {
    tab <- paste("table", i, sep="")
    cut <- paste("P", i, sep="")
    t <- table(StoreData$CompanyID, !is.na(StoreData[,cut]))
  assign(tab, t)
   }
 
 table2
 table5
 
 Is this the only way? Could I not have put table2 and table5 with an i
 index inside the loop?
 
 Sorry to bother you. 
 Stella
 
 
 Date: Wed, 9 Feb 2011 20:15:23 +0100
 From: e.vettora...@uke.uni-hamburg.de
 To: ritacarre...@hotmail.com
 CC: r-help@r-project.org
 Subject: Re: [R] Loop in variable names

 Hi Stella,
 in your coding 'cut' is a string, not a data object.

 something like
 cut <- paste("P", i, sep="")
 table(StoreData$CompanyID, !is.na(StoreData[,cut]))

 should work.

 hth.

 Am 09.02.2011 19:02, schrieb Rita Carreira:
 
 
  Hello!
  I would like to do some tables for several variables and I would
 like to write a loop that does the table for each variable. I should
 also point out that my data set has several missing observations and
 sometimes the observations that are missing are not the same for all my
 variables.
 
  What I would like to do:
 
  table(StoreData$CompanyID,
  !is.na(StoreData$P2))
  table(StoreData$CompanyID,
  !is.na(StoreData$P5))
 
  If I run the above code, I get:
 
  table(StoreData$CompanyID,
  + !is.na(StoreData$P2))
 
      FALSE TRUE
  2     940    0
  3       0  323
  4     288    0
  5     306    0
 
  table(StoreData$CompanyID,
  + !is.na(StoreData$P5))
 
      FALSE TRUE
  2     940    0
  3       0  323
  4     288    0
  5     306    0
 
 
  Here's the loop that I wrote, which does not work:
 
  angus <- c(2,5)

  for(i in angus) {
  cut <- paste("StoreData$P", i, sep="")
  table(StoreData$CompanyID, !is.na(cut))
  }
 
  When I run the above, I get the following error message:
 
  Error in table(StoreData$CompanyID, !is.na(cut)) :
  all arguments must have the same length
  source(.trPaths[5], echo=TRUE, max.deparse.length=150)
 
  Any help is greatly appreciated!
  Stella
 
 
 
 
  [[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.


 --
 Eik Vettorazzi
 Institut für Medizinische Biometrie und Epidemiologie
 Universitätsklinikum Hamburg-Eppendorf

 Martinistr. 52
 20246 Hamburg

 T ++49/40/7410-58243
 F ++49/40/7410-57790


-- 
Eik Vettorazzi
Institut für Medizinische Biometrie und Epidemiologie
Universitätsklinikum Hamburg-Eppendorf

Martinistr. 52
20246 Hamburg

T ++49/40/7410-58243
F ++49/40/7410-57790

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Creating a ragged array

2011-02-11 Thread Gabor Grothendieck
On Fri, Feb 11, 2011 at 10:39 AM, Barth B. Riley bbri...@chestnut.org wrote:
 Dear list

 I am trying to figure out how to create a ragged array that consists of 
 groups of array elements (indices from the original array) of similar values. 
 I would like to create a ragged array that might look something like this:

 S[1]
 11, 19, 14,7

 S[2]
 29,4,1,13,44

 S[3]
 56,9,2,35

 S[4]
 3,5


Try stack and unstack:

S <- list(A = c(11,19,14,7), B = c(29,4,1,13,44), C = c(56,9,2,35), D = c(3,5)); S
stk <- stack(S); stk
ustk <- unstack(stk); ustk
identical(ustk, S) # TRUE

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Extract a slot value from a 'SpatialPolygons' class object

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 5:11 AM, Xavier Hoenner wrote:


Dear R-users,

I’m currently trying to extract the value of a slot (area) but can’t  
find out how to do that.


Generally the authors of S4 methods provide extractor functions, so the  
first question would be what functions were used to create the object.  
Fortunately there are big hints in the first line of the str() results.  
The documentation for package sp says there is a summary method for  
that class as well as a "[" method. Those results may offer avenues  
for progress.


These might yield objects that could be further used:  
area(overlperc[1]) or polygons(overlperc), but my efforts at finding an  
area-extractor in the sp vignettes and via RSiteSearch were not fulfilled.  
A brute-force method is to use the primitive S4 extractor function @  
to traverse the tree:


So you might try either of these:

polygons(overlperc)[[1]]@area  #didn't work on the test object I made  
with one of the examples in pkg::sp


This appears more likely to succeed:

overlperc@polygons[[1]]@area

There's also a specific mailing list for geospatial stats.

--
David.




str(overlperc)


List of 1
$ :Formal class 'SpatialPolygons' [package sp] with 4 slots
 .. ..@ polygons   :List of 1
 .. .. ..$ :Formal class 'Polygons' [package sp] with 5 slots
 .. .. .. .. ..@ Polygons :List of 1
 .. .. .. .. .. ..$ :Formal class 'Polygon' [package sp] with 5  
slots

 .. .. .. .. .. .. .. ..@ labpt  : num [1:2] 709374 -1507888
 .. .. .. .. .. .. .. ..@ area   : num 11542604
 .. .. .. .. .. .. .. ..@ hole   : logi FALSE
 .. .. .. .. .. .. .. ..@ ringDir: int 1
 .. .. .. .. .. .. .. ..@ coords : num [1:23, 1:2] 706840 706657  
706840 707294 707585 ...

 .. .. .. .. .. .. .. .. ..- attr(*, dimnames)=List of 2
 .. .. .. .. .. .. .. .. .. ..$ : chr [1:23] 1 2 3 4 ...
 .. .. .. .. .. .. .. .. .. ..$ : chr [1:2] x y
 .. .. .. .. ..@ plotOrder: int 1
 .. .. .. .. ..@ labpt: num [1:2] 709374 -1507888
 .. .. .. .. ..@ ID   : chr 1
 .. .. .. .. ..@ area : num 11542604
 .. ..@ plotOrder  : int 1
 .. ..@ bbox   : num [1:2, 1:2] 706657 -1509411 711710 -1506189
 .. .. ..- attr(*, dimnames)=List of 2
 .. .. .. ..$ : chr [1:2] x y
 .. .. .. ..$ : chr [1:2] min max
 .. ..@ proj4string:Formal class 'CRS' [package sp] with 1 slots
 .. .. .. ..@ projargs: chr NA

I’d like to extract the area value so as to be able to use this  
value for further analysis. Here’s below what I get when I only type  
the name of the object:



overlperc


[[1]]
An object of class SpatialPolygons
Slot polygons:
[[1]]
An object of class Polygons
Slot Polygons:
[[1]]
An object of class Polygon
Slot labpt:
[1]   709374 -1507888

Slot area:
[1] 11542604

Slot hole:
[1] FALSE

Slot ringDir:
[1] 1

Slot coords:
 xy
1  706839.8 -1508654
2  706657.2 -1508029
3  706839.8 -1507634
4  707293.6 -1507284
5  707584.7 -1507174
6  708329.6 -1506851
7  709013.3 -1506539
8  709074.5 -1506513
9  709819.5 -1506189
10 710564.4 -1506289
11 711021.1 -1506539
12 711309.3 -1506769
13 711642.0 -1507284
14 711709.8 -1508029
15 711309.3 -1508690
16 711246.4 -1508774
17 710564.4 -1509347
18 709819.5 -1509411
19 709074.5 -1509277
20 708329.6 -1509072
21 707584.7 -1509039
22 706927.4 -1508774
23 706839.8 -1508654



Slot plotOrder:
[1] 1

Slot labpt:
[1]   709374 -1507888

Slot ID:
[1] 1

Slot area:
[1] 11542604



Slot plotOrder:
[1] 1

Slot bbox:
minmax
x   706657.2   711709.8
y -1509411.3 -1506189.0

Slot proj4string:
CRS arguments: NA

I’m stuck and spent more or less the whole afternoon online trying  
to find a solution to my problem but I couldn’t find anything. I  
look forward to hear from some of you. Thanks in advance for your  
kind help.


Have a good weekend.

Cheers


Xavier


Xavier Hoenner
PhD Student
Nesting Ecology, Migrations and Diving Behaviour of Hawksbill Turtles
School of Environmental Research, Charles Darwin University
Darwin, NT 0909
Ph: (08)8946.7721
email:xavier.hoen...@cdu.edu.au

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Chao(Charlie) Huang
I am right now using Revolution R Enterprise 4.2. Could somebody show
me how to import/export SAS datasets. Thanks.



On Fri, Feb 11, 2011 at 8:52 AM, Abhijit Dasgupta, PhD
aikidasgu...@gmail.com wrote:

 I'm sure the legal ground is tricky. However, OpenOffice and LibreOffice and
 KWord have been able to open the (proprietary) MS Word doc format for a
 while now, and they are open source (and Libre Office might even be GPL'd),
 so the algorithm is in fact published in Jeremy's sense, and has been for
 several years. I figure the reason for keeping the SAS reading functionality
 proprietary is Revolution's (perfectly legitimate) wish to make money by
 separating their product from GNU R and adding features that would make
 people want to buy rather than just download from CRAN.

 Within GNU R there are of course sas.get in the Hmisc package (which
 requires SAS). It should also be quite easy to write a wrapper around
 dsread, a command-line closed source product freely downloadable in a
 limited form which will convert sas7bdat files to csv or tsv format (and SQL
 if you pay). This latter path won't require SAS locally.

 I'm also sure that SAS has a way to export its datasets into R, since the
 current version of IML Studio will in fact interact with R.


 On 02/10/2011 03:11 PM, Jeremy Miles wrote:

 On 10 February 2011 12:01, Matt Shotwellm...@biostatmatt.com  wrote:

 On Thu, 2011-02-10 at 10:44 -0800, David Smith wrote:

 The SAS import/export feature of Revolution R Enterprise 4.2 isn't
 open-source, so we can't release it in open-source Revolution R
 Community, or to CRAN as we do with the ParallelR packages (foreach,
 doMC, etc.).

 Judging by the language of Dr. Nie's comments on the page linked below,
 it seems unlikely this feature is the result of a licensing agreement
 with SAS. Is that correct?


 There was some discussion of this on the SAS email list.  People who
 seem to know what they were talking about said that they would have
 had to reverse engineer it to decode the file format.  It's slightly
 tricky legal ground - the file format can't be copyrighted but
 publishing the algorithm might not be allowed.  I guess if they
 release it as open source, that could be construed as publishing the
 algorithm. (SPSS and WPS both can open SAS files, and I'd be surprised
 if SAS licensed to them.  [Esp WPS, who SAS are (or were) suing for
 all kinds of things in court in London.)

 Jeremy

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Passing function arguments

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 6:14 AM, Michael Pearmain wrote:


Hi All,

Im looking for some help passing function arguments and referencing  
them,
I've made a replica, less complicated function to show my problem,  
and how
i've made a work around for this. However i suspect there is a _FAR_  
better

way of doing this.

If i do:
BuildDecayModel <- function(x = "this", y = "that", data =  
model.data) {

  model <- nls(y ~ SSexp(x, y0, b), data = model.data)
 return(model)
}
...

Error in lm.fit(x, y, offset = offset, singular.ok =  
singular.ok, ...) :

 0 (non-NA) cases

This function returns an error because the args are passed as this  
and

that to the model, and so fails (correct?)

If  i do the following:
BuildDecayModel <- function(x = "total.reach", y = "lift", data =
model.data) {
  x <- data[[x]]
  y <- data[[y]]
  model.data <- as.data.frame(cbind(x, y))
  model <- nls(y ~ SSexp(x, y0, b), data = model.data)
 return(model)
}

This works for me, but it seems that i'm missing a trick with just
manipulating the args rather than making an entire new data.frame to  
work

off,


The trick you are missing is how to build a formula from component  
character objects. The usual approach is something like this:


?formula

Perhaps:
form <- as.formula( paste(y, "~ SSexp(", x, ", y0, b)") )   # (untested)

model - nls(form, data = model.data)

paste() should result in evaluation of the arguments to return "this"  
and "that", which will then be bundled into a proper language object  
that is not just a character string. I should say I hope this works,  
but there are mysteries regarding the environment of evaluation that  
continue to trip me up.
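
A self-contained check of the same trick with a selfStart model that ships  
with R (SSlogis), since I don't have SSexp here:

DNase1 <- subset(DNase, Run == 1)    # built-in example data
BuildModel <- function(x = "conc", y = "density", data = DNase1) {
  form <- as.formula(paste(y, "~ SSlogis(log(", x, "), Asym, xmid, scal)"))
  nls(form, data = data)
}
coef(BuildModel())                   # the formula was built from character args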


--
David.


Can anyone offer some advice?


(Some further advice: Set your client to post in plain text.)


Thanks in advance

Mike



David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Chao(Charlie) Huang
Liao,

Thanks for your reply. Those solutions you mentioned used CSV or 3rd
party middleware.

I have used Revolution R for a while.  Since last week Revolution R
Enterprise 4.2 can read/write SAS native datasets (.sas7bdat format),
and I am looking for any documents to try this feature out.

Can anybody give me a clue?

Thanks,

Charlie





On Fri, Feb 11, 2011 at 10:42 AM, Gong-Yi Liao gong-yi.l...@uconn.edu wrote:
 If you have SAS, You can read Dr. Harrell's page:

 http://biostat.mc.vanderbilt.edu/wiki/Main/SASexportHowto

 if not, you can take a look on WPS:

 http://www.teamwpc.co.uk/products



 On Fri, 2011-02-11 at 10:32 -0600, Chao(Charlie) Huang wrote:
 I am right now using Revolution R Enterprise 4.2. Could somebody show
 me how to import/export SAS datasets. Thanks.



 --
 Gong-Yi Liao

 Department of Statistics
 University of Connecticut
 215 Glenbrook Road  U4120
 Storrs, CT 06269-4120

 860-486-9478



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 11:32 AM, Chao(Charlie) Huang wrote:


I am right now using Revolution R Enterprise 4.2. Could somebody show
me how to import/export SAS datasets. Thanks.


Should you be asking the company from whom you obtained this  
proprietary product?


--
David.




On Fri, Feb 11, 2011 at 8:52 AM, Abhijit Dasgupta, PhD
aikidasgu...@gmail.com wrote:


I'm sure the legal ground is tricky. However, OpenOffice and  
LibreOffice and
KWord have been able to open the (proprietary) MS Word doc format  
for a
while now, and they are open source (and Libre Office might even be  
GPL'd),
so the algorithm is in fact published in Jeremy's sense, and has  
been for
several years. I figure the reason for keeping the SAS reading  
functionality
proprietary is Revolution's (perfectly legitimate) wish to make  
money by
separating their product from GNU R and adding features that would  
make

people want to buy rather than just download from CRAN.

Within GNU R there are of course sas.get in the Hmisc package (which
requires SAS). It should also be quite easy to write a wrapper around
dsread, a command-line closed source product freely downloadable in a
limited form which will convert sas7bdat files to csv or tsv format  
(and SQL

if you pay). This latter path won't require SAS locally.

I'm also sure that SAS has a way to export its datasets into R,  
since the

current version of IML Studio will in fact interact with R.


On 02/10/2011 03:11 PM, Jeremy Miles wrote:


On 10 February 2011 12:01, Matt Shotwellm...@biostatmatt.com   
wrote:


On Thu, 2011-02-10 at 10:44 -0800, David Smith wrote:


The SAS import/export feature of Revolution R Enterprise 4.2 isn't
open-source, so we can't release it in open-source Revolution R
Community, or to CRAN as we do with the ParallelR packages  
(foreach,

doMC, etc.).


Judging by the language of Dr. Nie's comments on the page linked  
below,
it seems unlikely this feature is the result of a licensing  
agreement

with SAS. Is that correct?



There was some discussion of this on the SAS email list.  People who
seem to know what they were talking about said that they would have
had to reverse engineer it to decode the file format.  It's slightly
tricky legal ground - the file format can't be copyrighted but
publishing the algorithm might not be allowed.  I guess if they
release it as open source, that could be construed as publishing the
algorithm. (SPSS and WPS both can open SAS files, and I'd be  
surprised

if SAS licensed to them.  [Esp WPS, who SAS are (or were) suing for
all kinds of things in court in London.)

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] foreach with registerDoMC on R 2.12.0 OSX 10.6 --- errors and warnings

2011-02-11 Thread ivo welch
some hints for the search engines.

I just did
   install.packages("foreach")
   install.packages("doMC")
   library(doMC)
   registerDoMC()
   library(foreach)
 foreach(i = 1:3) %dopar% sqrt(i)
The process has forked and you cannot use this CoreFoundation
functionality safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
to debug.
The process has forked and you cannot use this CoreFoundation
functionality safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
to debug.
The process has forked and you cannot use this CoreFoundation
functionality safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
to debug.
The process has forked and you cannot use this CoreFoundation
functionality safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
to debug.
Tcl_ServiceModeHook: Notifier not initialized
[[1]]
NULL
[[2]]
[1] 1.414
[[3]]
[1] 1.732
The first element is obviously wrong, and the warning messages are scary.

Restarting R eliminated all the problems.  This seems to be something
odd about the install and load process.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Large Datasets

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 7:51 AM, John Filben wrote:

I have recently been using R - more specifically the GUI packages  
Rattle

and Rcmdr.

I like these products a lot and want to use them for some projects -  
the problem
that I run into is when I start to try and run large datasets  
through them.  The
data sets are 10-15 million in record quantity and usually have  
15-30 fields

(both numerical and categorical).


You could instead just buy memory. 32GB ought to be sufficient for  
descriptives and regression. Might even get away with 24.




I saw that there were some packages that could deal with large  
datasets in R -
bigmemory, ff, ffdf, biganalytics.  My problem is that I am not much  
of a coder

(and the reason I use the above mentioned GUIs).  These GUIs do show
the executable R code in the background - my thought was to run a  
small sample
through the GUI, copy the code, and then incorporate some of the  
large data
 packages mentioned above - has anyone ever tried to do this and  
would you have
working examples.  In terms of what I am trying to do to the data -  
really

 simple stuff - descriptive statistics,


Should be fine here.


k-means clustering, and possibly some decision trees.


Not sure how well those scale to tasks as large as what you propose,  
especially since you don't mention packages or functions. Not sure  
they don't, either.


--
David.

  Any help would be greatly appreciated.

Thank you - John
John Filben

--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Martin Maechler
 CH == Chao(Charlie) Huang hch...@gmail.com
 on Fri, 11 Feb 2011 10:32:06 -0600 writes:

CH I am right now using Revolution R Enterprise 4.2. Could
CH somebody show me how to import/export SAS
CH datasets. Thanks.

but not primarily on R-help, please.

First, note that R is GNU R,
and R-help has been about R as Free (Libre) Software
for all its many years and hundreds of thousands of messages.

Revolution's product may be fine for some, in some situations,
but supporting non-Free parts of it really does not belong to R
and R-help in my view.

Martin Maechler, 
R Core and R mailing list administrator since 1996


CH On Fri, Feb 11, 2011 at 8:52 AM, Abhijit Dasgupta, PhD
CH aikidasgu...@gmail.com wrote:
 
 I'm sure the legal ground is tricky. However, OpenOffice
 and LibreOffice and KWord have been able to open the
 (proprietary) MS Word doc format for a while now, and
 they are open source (and Libre Office might even be
 GPL'd), so the algorithm is in fact published in
 Jeremy's sense, and has been for several years. I figure
 the reason for keeping the SAS reading functionality
 proprietary is Revolution's (perfectly legitimate) wish
 to make money by separating their product from GNU R and
 adding features that would make people want to buy rather
 than just download from CRAN.
 
 Within GNU R there are of course sas.get in the Hmisc
 package (which requires SAS). It should also be quite
 easy to write a wrapper around dsread, a command-line
 closed source product freely downloadable in a limited
 form which will convert sas7bdat files to csv or tsv
 format (and SQL if you pay). This latter path won't
 require SAS locally.
 
 I'm also sure that SAS has a way to export its datasets
 into R, since the current version of IML Studio will in
 fact interact with R.
 
 
 On 02/10/2011 03:11 PM, Jeremy Miles wrote:
 
 On 10 February 2011 12:01, Matt
 Shotwellm...@biostatmatt.com  wrote:
 
 On Thu, 2011-02-10 at 10:44 -0800, David Smith wrote:
 
 The SAS import/export feature of Revolution R
 Enterprise 4.2 isn't open-source, so we can't release
 it in open-source Revolution R Community, or to CRAN
 as we do with the ParallelR packages (foreach, doMC,
 etc.).
 
 Judging by the language of Dr. Nie's comments on the
 page linked below, it seems unlikely this feature is
 the result of a licensing agreement with SAS. Is that
 correct?
 
 
 There was some discussion of this on the SAS email
 list.  People who seem to know what they were talking
 about said that they would have had to reverse engineer
 it to decode the file format.  It's slightly tricky
 legal ground - the file format can't be copyrighted but
 publishing the algorithm might not be allowed.  I guess
 if they release it as open source, that could be
 construed as publishing the algorithm. (SPSS and WPS
 both can open SAS files, and I'd be surprised if SAS
 licensed to them.  [Esp WPS, who SAS are (or were) suing
 for all kinds of things in court in London.)
 
 Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting p-values from the Manova function (car library)

2011-02-11 Thread Ista Zahn
Hi,
one approach is to modify

getAnywhere(print.Anova.mlm)

to return the information you want.
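
Roughly, the pattern would be something like this (a sketch; exactly where
the table of statistics is built depends on car's internals):

library(car)
pm <- getAnywhere("print.Anova.mlm")$objs[[1]]   # grab the non-exported print method
body(pm)                                          # inspect where the table is assembled
## copy that code into your own function and, instead of printing the table,
## end with something like invisible(tab) so the p-value column can be captured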

Best,
Ista

On Fri, Feb 11, 2011 at 7:16 AM, Bettina Kulle Andreassen
b.k.andreas...@medisin.uio.no wrote:
 hi,

 I am not able to extract the p-values from the
 Manova function in the car library. I need
 to use this function in a high-throughput setting
 and somehow need the p-values produced.

 Any ideas?

 Best regards

 Bettina Kulle Andreassen

 --

 Bettina Kulle Andreassen

 University of Oslo

 Department of Biostatistics

 and

 Institute for Epi-Gen (Faculty Division Ahus)

 tel:
 +47 22851193
 +47 67963923

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Help optimizing EMD::extrema()

2011-02-11 Thread Mike Lawrence
Hi folks,

I'm attempting to use the EMD package to analyze some neuroimaging
data (timeseries with 64 channels sampled across 1 million time points
within each of 20 people). I found that processing a single channel of
data using EMD::emd() took about 8 hours. Exploration using Rprof()
suggested that most of the compute time was spent in EMD::extrema().
Looking at the code for EMD:extrema(), I managed to find one obvious
speedup (switching from employing rbind() to c()) and I suspect that
there may be a way to further speed things up by pre-allocating all
the objects that are currently being created with c(), but I'm having
trouble understanding the code sufficiently to know when/where to try
this and what sizes to set as the default pre-allocation length. Below
I include code that demonstrates the speedup I achieved by eliminating
calls to rbind(), and also demonstrates that only a few calls to c()
seem to be responsible for most of the compute time. The files
extrema_c.R and extrema_c2.R are available at:
https://gist.github.com/822691
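
As a side note, a toy illustration (not EMD-specific) of why I expect
pre-allocation to help: growing a vector with c() copies it over and over,
while filling a pre-allocated vector does not.

n <- 1e5
system.time({ x <- numeric(0); for (i in 1:n) x <- c(x, i) })   # grows: repeated copying
system.time({ x <- numeric(n); for (i in 1:n) x[i] <- i })      # pre-allocated: much faster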

Any suggestions/help would be greatly appreciated.


#load the EMD library for the default version of extrema
library(EMD)

#some data to process
values = rnorm(1e4)

#profile the default version of extrema
Rprof(tmp <- tempfile())
temp = extrema(values)
Rprof()
summaryRprof(tmp)
#1.2s total with most time spend doing rbind
unlink(tmp)

#load a rbind-free version of extrema
source('extrema_c.R')
Rprof(tmp <- tempfile())
temp = extrema_c(values)
Rprof()
summaryRprof(tmp) #much faster! .5s total
unlink(tmp)

#still, it encounters slowdowns with lots of data
values = rnorm(1e5)
Rprof(tmp <- tempfile())
temp = extrema_c(values)
Rprof()
summaryRprof(tmp)
#44s total, hard to see what's taking up so much time
unlink(tmp)

#load an rbind-free version of extrema that labels each call to c()
source('extrema_c2.R')
Rprof(tmp <- tempfile())
temp = extrema_c2(values)
Rprof()
summaryRprof(tmp)
#same time as above, but now we see that it spends more time in
certain calls to c() than others
unlink(tmp)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] series of boxplots

2011-02-11 Thread Juliet Hannah
If you could provide a small example of an actual data set (using
dput), you may get some suggestions
specific to your goals.

Here are a few examples of boxplots. If these look along the lines of
what you are looking for, you may want to search
the ggplot2 mailing list for more examples.

library(ggplot2)
qplot(factor(cyl), mpg, data=mtcars, geom="boxplot")

# example 2

mydata <- data.frame(group1=sample(c("C","D"), size=100, replace=TRUE),
                     group2=sample(c("E","F"), size=100, replace=TRUE),
                     y=rnorm(100))

qplot(group2, y, data=mydata, facets = ~ group1, geom="boxplot")

On Mon, Feb 7, 2011 at 6:16 AM, syrvn ment...@gmx.net wrote:

 hi group,

 imagine the following data frame df:

 1 2 3 4 ...
 A 5 1 ..
 A 4 3 ..
 A 3 4 ..
 B 7 9 ..
 B 8 1 ..
 B 6 8 ..

 I tried the following and some variations to plot this matrix as boxplots:


 boxplot(df[1:3,2]~df[1:3,1], xlim=c(1,10))
 par(new=TRUE)
 boxplot(cpd12[4:6,2]~df[1:3,1], xlim=c(2,10))
 par(new=TRUE)
 boxplot(df[1:3,3]~df[1:3,1], xlim=c(1,10))
 par(new=TRUE)
 boxplot(cpd12[4:6,3]~df[1:3,1], xlim=c(2,10))


 can anybody help?
 Cheers
 --
 View this message in context: 
 http://r.789695.n4.nabble.com/series-of-boxplots-tp3263938p3263938.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Where can I download/install grDevices

2011-02-11 Thread Peter Ehlers

On 2011-02-10 18:41, Dennis Murphy wrote:

Hi:

Here's one way:

  plot(1~1, ylab=expression("Areas (" ~mu*m^2~ ")"))

The tildes incorporate space between the math and text elements; they're
optional, but useful. Another way that also works is

plot(1~1, ylab=expression(paste("Areas (", mu*m^2, ")", sep = ' ')))


 plot(1:5, type="n", axes=FALSE, ann=FALSE)
 abline(v=3, col="lightgray")
 text(3, 3.5, expression(
   paste("Areas (", mu * m^2, ")", sep ="")))
 text(3, 3.0, expression(
   paste("Areas (", mu * m^2, ")", sep = " ")))
 text(3, 2.5, expression(
   paste("Areas (", mu * m^2, ")", " ")))


Peter Ehlers



HTH,
Dennis

On Thu, Feb 10, 2011 at 4:32 PM, Lizbliphome-sick_al...@hotmail.comwrote:




Duncan Murdoch-2 wrote:



There is no plotmath function.  plotmath is the name of the help
topic; it describes how various other functions plot text that includes
math.


grDevices is a base package, so if you've got R, you've got it.

Duncan Murdoch




ok thanks.

I originally thought this was the case and first tried:


plot(areas~cell,ylab=expression(Areas (mu*m^2))


but the text wasn't re-formatted so I thought plotmath must be a separate
function. How is it supposed to work?
--
View this message in context:
http://r.789695.n4.nabble.com/Where-can-I-download-install-grDevices-tp2401415p3300559.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Uwe Ligges



On 11.02.2011 13:38, S Ellison wrote:




Dr. Michael Wolfm-w...@muenster.de  11/02/2011 07:52

is there an easier way to write R packages for the own use - without

RTools and TeX?

Installing Rtools is not hard, and doesn't have to happen often; the
hardest bit in Windows is making sure that the requisite executables are
on the path, and that just involves adding the directory names to the
path environment variable. If I understand you, the problem is the time
spent hacking about in the .Rd help files. That can certainly be
simplified - eliminated, in fact.

Use package.skeleton() once you have a good starting set of functions
and data in R. That creates all the necessary directories, creates
skeleton (but valid) .Rd files, and exports your functions and data
objects for you. You can then edit the code directly, use RCMD check to
check the package (useful anyway) and use RCMD build to build it. (In
fact if all you want is the zip, you can - or at least could - zip the
package directory created by RCMD check).



Actually, just say

R CMD INSTALL --build  package

which will generate the zip in a supported way.
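
[Editorial note: a minimal sketch pulling together the steps described above; the function, data, and package names are made up for illustration.]

myfun  <- function(x) x + 1                      # some function worth packaging
mydata <- data.frame(a = 1:3)                    # and a data object
package.skeleton(name = "mypkg", list = c("myfun", "mydata"))
## then, from a shell in the directory that now contains mypkg/:
##   R CMD check mypkg
##   R CMD build mypkg
##   R CMD INSTALL --build mypkg                 # Windows binary zip, as suggested above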

Uwe Ligges



S Ellison


***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calling symbols from dataframe for xyplot

2011-02-11 Thread Greg Snow
The more common way to do this is to use groups, the default is to have a 
different color for each group, but you can change that using trellis.par.set:


library(lattice)
tmp <- trellis.par.get()
tmp$superpose.symbol$pch = 0:10
trellis.par.set(tmp)
xyplot(Sepal.Width ~ Petal.Width, data=iris, groups=Species, auto.key=TRUE)


If you need more control of the symbols than that, then look at the 
panel.my.symbols function in the TeachingDemos package.
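
[Editorial note: a hedged sketch of the per-row symbol idea from the question below -- ask lattice for 'subscripts' in the panel function and index the pch column with it. Variable names follow the poster's example.]

library(lattice)
x <- 1:12; y <- rpois(12, 4); z <- rep(c(1, 2), each = 6); p <- rep(1:3, 4)
xyplot(y ~ x | factor(z),
       panel = function(x, y, subscripts, ...) {
         panel.xyplot(x, y, pch = p[subscripts], col = "black", ...)
       })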

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
 project.org] On Behalf Of John Poulsen
 Sent: Thursday, February 10, 2011 2:22 PM
 To: r-help@r-project.org
 Subject: [R] Calling symbols from dataframe for xyplot
 
 Hello,
 
 I am trying to make a xyplot plot with points that are different
 symbols. I want to call the symbol type (pch) from a column in my
 dataframe.  Here is a simplified example.  In my real example I also
 have groups, which I have not included here.  This example doesn't
 change the symbols or colors.
 
 Any help you can provide would be appreciated.
 
 Thanks,
 John
 
 x <- c(1:12)
 y <- c(rpois(12,4))
 grp <- c(rep(c(3,4), each=6))
 z <- c(rep(c(1,2), each=6))
 p <- rep(1:3,4)
 
 xyplot(y~x|z, cex=1.2,
    panel=function(x,y,...){
    panel.xyplot(x,y,...)
 pch=p
 fill=list("black","blue")})
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] foreach with registerDoMC on R 2.12.0 OSX 10.6 --- errors and warnings

2011-02-11 Thread David Smith
Are you using doMC within the Mac GUI or from the Terminal? The doMC
package doesn't work within the GUI, you need to run R directly from
the command line.

# David Smith

On Fri, Feb 11, 2011 at 8:56 AM, ivo welch ivo...@gmail.com wrote:
 some hints for the search engines.

 I just did
   install.packages("foreach")
   install.packages("doMC")
   library(doMC)
   registerDoMC()
   library(foreach)
 foreach(i = 1:3) %dopar% sqrt(i)
 The process has forked and you cannot use this CoreFoundation
 functionality safely. You MUST exec().
 Break on 
 __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.
 The process has forked and you cannot use this CoreFoundation
 functionality safely. You MUST exec().
 Break on 
 __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.
 The process has forked and you cannot use this CoreFoundation
 functionality safely. You MUST exec().
 Break on 
 __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.
 The process has forked and you cannot use this CoreFoundation
 functionality safely. You MUST exec().
 Break on 
 __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.
 Tcl_ServiceModeHook: Notifier not initialized
 [[1]]
 NULL
 [[2]]
 [1] 1.414
 [[3]]
 [1] 1.732
 The first element is obviously wrong, and the warning messages are scary.

 Restarting R eliminated all the problems.  This seems to be something
 odd about the install and load process.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
David M Smith da...@revolutionanalytics.com
VP of Marketing, Revolution Analytics  http://blog.revolutionanalytics.com
Tel: +1 (650) 646-9523 (Palo Alto, CA, USA)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Dr. Michael Wolf

Dear colleagues,

thanks for your helpful and persuasive comments. I see that all of you propose 
working with the official method of building R packages. The code for importing 
functions which Barry Rowlingson posted to the forum is very interesting, and 
perhaps I can use it for solving other problems.
I'm thinking about a monitoring project with R at the centre of my work, 
so I need help files documenting my programming code.


As a consequence, I have to accept that using the official R way of 
writing a package will be best in the long run - even if it will take me some 
time, especially to write the help files. So I will reactivate my RTools and TeX!


Best regards

Dr. Michael Wolf
(m-w...@muenster.de)


Am 11.02.2011 18:49, schrieb Uwe Ligges:



On 11.02.2011 13:38, S Ellison wrote:




Dr. Michael Wolfm-w...@muenster.de 11/02/2011 07:52

is there an easier way to write R packages for the own use - without

RTools and TeX?

Installing Rtools is not hard, and doesn't have to happen often; the
hardest bit in Windows is making sure that the requisite executables are
on the path, and that just involves adding the directory names to the
path environment variable. If I understand you, the problem is the time
spent hacking about in the .Rd help files. That can certainly be
simplified - eliminated, in fact.

Use package.skeleton() once you have a good starting set of functions
and data in R. That creates all the necessary directories, creates
skeleton (but valid) .Rd files, and exports your functions and data
objects for you. You can then edit the code directly, use RCMD check to
check the package (useful anyway) and use RCMD build to build it. (In
fact if all you want is the zip, you can - or at least could - zip the
package directory created by RCMD check).



Actually, just say

R CMD INSTALL --build package

which will generate the zip in a supported way.

Uwe Ligges



S Ellison


***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] When is *interactive* data visualization useful to use?

2011-02-11 Thread Claudia Beleites

Dear Tal, dear list,

I think the importance of interactive graphics has a lot to do with how visually 
your scientific discipline works. I'm a spectroscopist, and I think we are very 
visually oriented: if I think of a spectrum I mentally see a graph.


So for that kind of work, I need a lot of interaction (of the type: plot, change 
a bit, plot again).
One example is the removal of spikes from Raman spectra (caused e.g. by cosmic 
rays hitting the detector). It is fairly easy to compute a list of suspicious 
signals. It is already much more complicated to find the actual beginning and 
end of the spike. And it is really difficult not to have false positives by some 
automatic procedure, because the spectra can look very different for different 
samples. It would just take me far longer to find a computational description of 
what is a spike than interactively accepting/rejecting the automatically marked 
suspicions. Even though it feels like slave work ;-)


Roughly the same applies for the choice of pre-processing like baseline 
correction. A number of different physical causes can produce different kinds of 
baselines, and usually you don't know which process contributes to what extent. 
In practice, experience suggests a method, I apply it and look whether the 
result looks as expected. I'm not aware of any performance measure that would 
indicate success here.


The next point where interaction is needed pops up as my data has e.g. spatial 
and spectral dimensions. So do the models usually: e.g. in a PCA, the loadings 
would usually capture the spectroscopic direction, whereas the scores belong to 
the spatial domain. So I have connected graphs: the spatial distribution 
(intensity map, score map, etc.), and the spectra (or loadings).

As soon as I have such connections I wish for interactive visualization:
I go back and forth between the plots: what is the spectrum that belongs to this 
region of the map? Where on the sample are high intensities of this band? What 
is the substance behind that: if it is x, the intensities at that other spectral 
band should correlate. And then I want to compare this to the scatterplot (pairs 
plot of the PCA score) or to a dendrogram of HCA...


Also, exploration is not just prerequisite for models, but it frequently is 
already the very proper scientific work (particularly in basic science). The 
more so, if you include exploring the models: Now, which of the bands are 
actually used by my predictive models? Which samples do get their predictions 
because of which spectral feature?
Also, the statistical outliers may very well be just the interesting part of 
the sample, and the outlier statistics cannot interpret the data in terms of 
interesting vs. crap.


For presentation* of results, I personally think that most of the time a careful 
selection of static graphs is much better than live interaction.
*The thing where you talk to an audience far away from your work computer. As 
opposed to sitting down with your client/colleague and analysing the data together.



It could be argued that the interactive part is good for exploring (For
example) a different behavior of different groups/clusters in the data. But
when (in practice) I approached such situation, what I tended to do was to
run the relevant statistical procedures (and post-hoc tests)

As long as the relevant measure exists, sure.
Yet as a non-statistician, my work is focused on the physical/chemical 
interpretation. Summary statistics are one set of tools for me, and interactive 
visualisation is another set of tools (overlapping though).


I may want to subtract the influence of the overall unchanging sample matrix 
(that would be the minimal intensity for each wavelength). But the minimum 
spectrum is too noisy. So I use a quantile. Which one? Depends on the data. I'll 
have a look at a series (say, the 2nd to 10th percentile) and decide trading off 
noise and whether any new signals appear. I honestly think there's nothing 
gained if I sit down and try to write a function scoring the similarity to the 
minimum spectrum and the noise level: the more so as it just shifts the need for 
a decision (How much noise outweighs what intensity of real signal being 
subtracted?). It is a decision I need to take, with numbers or by eye. And 
after all, my professional training was meant to enable me to take this 
decision, and I'm paid (also) for being able to take this decision efficiently 
(i.e. making a reasonably good choice within a reasonable time).


After all, it may also have to do with a complaint a colleague from a 
computational data analysis group once had. He said the bad thing with us 
spectroscopists is that our problems are either so easy that there's no fun in 
solving them, or they are too hard to solve.



- and what I
found to be significant I would then plot with colors clearly dividing the
data to the relevant groups. From what I've seen, this is a safer approach
then wondering around the data 

Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Gabor Grothendieck
On Fri, Feb 11, 2011 at 7:38 AM, S Ellison s.elli...@lgc.co.uk wrote:


 Dr. Michael Wolf m-w...@muenster.de 11/02/2011 07:52 
is there an easier way to write R packages for the own use - without
 RTools and TeX?

 Installing Rtools is not hard, and doesn't have to happen often; the
 hardest bit in Windows is making sure that the requisite executables are
 on the path, and that just involves adding the directory names to the
 path environment variable. If I understand you, the problem is the time

Also you can avoid setting environment variables for R by grabbing
Rcmd.bat from http://batchfiles.googlecode.com and placing it anywhere
on your path -- note: entering the one word, path, from the Windows
cmd line shows you your path.

Rcmd.bat looks up R in the registry and then passes its arguments to
the real Rcmd.

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Steve Lianoglou
Hi,

Another option is to go ahead and make the package structure you're
used to, but try to load and use it via Hadley's devtools package,
instead of installing it for use.

It might do the trick for you:

https://github.com/hadley/devtools

-steve


On Fri, Feb 11, 2011 at 2:14 PM, Dr. Michael Wolf m-w...@muenster.de wrote:
 Dear collegues,

 thanks for your helpfull and persuasive comments. I see that all of you
 propose to work with the official method building R packages. The code of
 importing functions which Barry Rowlingson posted to the forum is very
 interesting and perhaps I can use this for solving other problems.
 I'm thinking about a monitoring project with R in the center of my working.
 Therefore I need help files for describing my programming code.

 In the consequence of this I have to accept that using the official R way of
 writing a package will be the best in the long run - even it will take some
 time to me especially to write the help files. SO I will reactivate my
 RTools and TeX!

 Best regards

 Dr. Michael Wolf
 (m-w...@muenster.de)


 Am 11.02.2011 18:49, schrieb Uwe Ligges:


 On 11.02.2011 13:38, S Ellison wrote:


 Dr. Michael Wolfm-w...@muenster.de 11/02/2011 07:52

 is there an easier way to write R packages for the own use - without

 RTools and TeX?

 Installing Rtools is not hard, and doesn't have to happen often; the
 hardest bit in Windows is making sure that the requisite executables are
 on the path, and that just involves adding the directory names to the
 path environment variable. If I understand you, the problem is the time
 spent hacking about in the .Rd help files. That can certainly be
 simplified - eliminated, in fact.

 Use package.skeleton() once you have a good starting set of functions
 and data in R. That creates all the necessary directories, creates
 skeleton (but valid) .Rd files, and exports your functions and data
 objects for you. You can then edit the code directly, use RCMD check to
 check the package (useful anyway) and use RCMD build to build it. (In
 fact if all you want is the zip, you can - or at least could - zip the
 package directory created by RCMD check).


 Actually, just say

 R CMD INSTALL --build package

 which will generate the zip in a supported way.

 Uwe Ligges


 S Ellison


 ***
 This email and any attachments are confidential. Any use...{{dropped:8}}

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Getting p-value from summary output

2011-02-11 Thread Alice Lin
Awesome! Thanks so much!

On Thu, Feb 10, 2011 at 6:13 PM, Dennis Murphy djmu...@gmail.com wrote:

 Hi:

 Try
 summary(myprobit)$coefficients[, 4]

 HTH,
 Dennis
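
[Editorial note, not part of the thread: coef() applied to the summary object returns the same coefficient matrix, so the p-value column can also be selected by name.]

coef(summary(myprobit))[, "Pr(>|z|)"]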

 On Thu, Feb 10, 2011 at 3:46 PM, Allie818 alice...@gmail.com wrote:


 I can get this summary of a model that I am running:

 summary(myprobit)

 Call:
 glm(formula = Response_Slot ~ trial_no, family = binomial(link = "probit"),
     data = neg_data, na.action = na.pass)

 Deviance Residuals:
     Min       1Q   Median       3Q      Max
 -0.9528  -0.8934  -0.8418   1.4420   1.6026

 Coefficients:
              Estimate Std. Error z value Pr(>|z|)
 (Intercept) -0.340528   0.371201  -0.917    0.359
 trial_no    -0.005032   0.012809  -0.393    0.694

 (Dispersion parameter for binomial family taken to be 1)

     Null deviance: 62.687  on 49  degrees of freedom
 Residual deviance: 62.530  on 48  degrees of freedom
 AIC: 66.53

 Number of Fisher Scoring iterations: 4

 But I would like to get the p-value [column heading Pr(>|z|)] for the
 estimate.
 I can get the coefficient estimates with myprobit$coefficients. Is there
 something similar to get the p-value?

 Thank you in advance.
 --
 View this message in context:
 http://r.789695.n4.nabble.com/Getting-p-value-from-summary-output-tp3300503p3300503.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] linear models with factors

2011-02-11 Thread ATANU

I am trying to fit a linear model with both continuous covariates and
factors. When the model is fitted with an intercept term, the first level of the
factor is treated by R as the intercept, and the estimates of the effects of the
remaining levels (say the i-th level) are given as the difference: the estimate
of the i-th level minus the estimate of the 1st level. Can anyone please help me?
Thanks in advance.
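
[Editorial note: a small illustration of the behaviour described above, with made-up data. Under the default treatment contrasts the intercept absorbs the first factor level and the remaining coefficients are differences from it; dropping the intercept reports one estimate per level instead.]

set.seed(1)
dat <- data.frame(y = rnorm(30), x = runif(30),
                  f = gl(3, 10, labels = c("a", "b", "c")))
coef(lm(y ~ x + f, data = dat))       # (Intercept) corresponds to level "a"; fb, fc are differences
coef(lm(y ~ x + f - 1, data = dat))   # fa, fb, fc reported directly, plus the slope for x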
-- 
View this message in context: 
http://r.789695.n4.nabble.com/linear-models-with-factors-tp3301811p3301811.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] take my name from the list

2011-02-11 Thread Fernanda Melo Carneiro
How can I take my name off this list?

Fernanda Melo Carneiro contato: (62) 3521-1480 e 8121-7374www.ecoevol.ufg.br
Laboratório de Ecologia Teórica e Síntese (UFG)  


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-11 Thread Gong-Yi Liao
If you have SAS, you can read Dr. Harrell's page:

http://biostat.mc.vanderbilt.edu/wiki/Main/SASexportHowto

If not, you can take a look at WPS:

http://www.teamwpc.co.uk/products



On Fri, 2011-02-11 at 10:32 -0600, Chao(Charlie) Huang wrote: 
 I am right now using Revolution R Enterprise 4.2. Could somebody show
 me how to import/export SAS datasets? Thanks.
 


-- 
Gong-Yi Liao

Department of Statistics
University of Connecticut
215 Glenbrook Road  U4120
Storrs, CT 06269-4120

860-486-9478

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Package distr and define your own distribution

2011-02-11 Thread Gabriel.Cardi

Hi all
I am using the package distr (and related packages).

Do you know if it is possible to define your own distribution (object),
GIVEN that you have an analytical form of the probability density
function (pdf)?

I would then like to use the standard features of the distr and related
packages.
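
[Editorial note: if I recall the distr API correctly, there is a generating function AbscontDistribution() whose 'd' argument takes a density function, with p/q/r then filled in numerically; the exact arguments are an assumption here, so please check ?AbscontDistribution and the distr vignette.]

library(distr)
D <- AbscontDistribution(d = function(x) 0.5 * exp(-abs(x)))  # a Laplace density, supplied analytically
p(D)(0)    # distribution function evaluated at 0, roughly 0.5
r(D)(3)    # three random draws from the new distribution object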


Best regards

Giuseppe Gabriel Cardi

Visit our website at http://www.ubs.com

This message contains confidential information and is in...{{dropped:21}}
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] About classification methods.

2011-02-11 Thread Jaeik Cho
Dear R users,

I'm new to R and really don't know much.

I want to classify some data (two classes, many features and a huge amount of 
data) using R.

In this case, I want to use at least a Support Vector Machine, a Bayes-theory-based 
classifier, Discriminant Analysis, and a regression-based approach.

Which packages should I use, and can I compare each classifier's results by 
their predictions?

Thank you.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Need help merging two dataframes

2011-02-11 Thread B77S

## i didn't try this, but I would think it would work

newAB <- data.frame(AB$id, AB$age, AB$sex, AB$area)
colnames(newAB) <- c("id", "age", "sex", "area")
uni.newAB <- unique(newAB)
t3 <- merge(t2, uni.newAB, by="id", all=FALSE)
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Need-help-merging-two-dataframes-tp3297313p3301627.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem with installing packages

2011-02-11 Thread xin shi

Dear:
 
I have recently been trying to install some packages. However, I found this issue on 
both my laptop and desktop, even after I uninstall and install again.
 
I even cannot update R now.
 
I wonder if you have had a similar issue.
 
Thanks!
 
Xin
 
 chooseCRANmirror()
Warning message:
In open.connection(con, "r") :
  unable to connect to 'cran.r-project.org' on port 80.
 setRepositories()
 utils:::menuInstallPkgs()
Warning: unable to access index for repository 
http://www.stats.bris.ac.uk/R/bin/windows/contrib/2.12
Warning: unable to access index for repository 
http://www.stats.ox.ac.uk/pub/RWin/bin/windows/contrib/2.12
Error in install.packages(NULL, .libPaths()[1L], dependencies = NA, type = 
type) : 
  no packages were specified
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can we make a vector call a function element-wise efficiently?

2011-02-11 Thread zhaoxing731
Dear Eik

What a great idea!!! Thank you so much for your colossal improvement.
Yes, you have a keen eye for the numerical problem. I am worrying about this 
problem right now and hope you can give me new ideas again.

Hi,
you compute the same results for logx many times, so it is easier and
time-saving to tabulate all intermediate results,
something like
 n <- 10
 CT=6000 #assignment to CT
 NT=29535210 #assignment to NT
 i <- 0:(n-1)
 lookup <- lchoose(NT-n, CT-i) + lchoose(n, i)
 lgmax <- cummax(lookup)
 calsta2 <- function(c) lgmax[c] + log(sum(exp(lookup[1:c] - lgmax[c])))
should help for a start, but I think, you are running into numerical
troubles, since you are dealing with very high and low (on log scale)
numbers and calsta constantly returns 57003.6 for c > 38 (the summands in
sum(exp(logx - logmax)) will become 0 for c > 38).
#check
sapply(1:50,calsta2)
sapply(1:50,calsta)
hth
Yours sincerely




ZhaoXing
Department of Health Statistics
West China School of Public Health
Sichuan University
No.17 Section 3, South Renmin Road
Chengdu, Sichuan 610041
P.R.China

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Tinn R

2011-02-11 Thread klatinez

Hi Dieter,
It works for me.
Thanks
Karen
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Tinn-R-tp878805p3301466.html
Sent from the R help mailing list archive at Nabble.com.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] cycle in a directed graph

2011-02-11 Thread rex.dwyer
If the graph has n nodes and is represented by an adjacency matrix, you can 
add the identity matrix and square the result about (log_2 n)+1 times, so that an 
entry ends up positive exactly when one node can reach the other.  Then you can 
multiply that reachability matrix element-wise by its transpose.  The positive 
off-diagonal entries in the 7th row will tell you all nodes sharing a cycle with 
node 7.  This assumes all edge weights are positive.
Are you sure we're not doing your graph theory homework?  You asked about MSTs 
yesterday.
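
[Editorial note: a rough sketch of the reachability idea above, on a tiny made-up digraph in which 1 -> 2 -> 3 -> 1 is a cycle and node 4 hangs off it.]

A <- matrix(0, 4, 4)
A[1, 2] <- A[2, 3] <- A[3, 1] <- A[1, 4] <- 1
n <- nrow(A)
B <- (diag(n) + A) > 0                                    # paths of length 0 or 1
for (k in seq_len(ceiling(log2(n)))) B <- (B %*% B) > 0   # now covers paths of length 0..n
R <- (A %*% B) > 0                                        # paths of length >= 1
which(diag(R))            # nodes lying on some directed cycle: 1 2 3
shares <- R & t(R)        # shares[i, j]: i and j sit on a common cycle
which(shares[1, ])        # nodes sharing a cycle with node 1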

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of amir
Sent: Friday, February 11, 2011 10:11 AM
To: r-help@r-project.org
Subject: [R] cycle in a directed graph

Hi,

I have a directed graph and wants to find is there any cycle in it? If
it is, which nodes or edges are in the cycle.
Is there any way to find the cycle in a directed graph in R?

Regards,
Amir

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




message may contain confidential information. If you are not the designated 
recipient, please notify the sender immediately, and delete the original and 
any copies. Any use of the message by you is prohibited. 
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] take my name from the list

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 7:25 AM, Fernanda Melo Carneiro wrote:


How I can take out my name from this list?


Please read the information on the page where you signed up.



Fernanda Melo Carneiro contato: (62) 3521-1480 e  
8121-7374www.ecoevol.ufg.br

Laboratório de Ecologia Teórica e Síntese (UFG)

--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] take my name from the list

2011-02-11 Thread Ista Zahn
See the link at the bottom of every message sent to this list...

-Ista

On Fri, Feb 11, 2011 at 7:25 AM, Fernanda Melo Carneiro
fermelcar2...@yahoo.com.br wrote:
 How I can take out my name from this list?

 Fernanda Melo Carneiro contato: (62) 3521-1480 e 8121-7374www.ecoevol.ufg.br
 Laboratório de Ecologia Teórica e Síntese (UFG)



        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] take my name from the list

2011-02-11 Thread Sarah Goslee
The answer to that question appears on each and every
message to the list, including this one.

But for your convenience:
https://stat.ethz.ch/mailman/listinfo/r-help

On Fri, Feb 11, 2011 at 7:25 AM, Fernanda Melo Carneiro
fermelcar2...@yahoo.com.br wrote:
 How I can take out my name from this list?


-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] take my name from the list

2011-02-11 Thread Steve Lianoglou
There is a link at the bottom of every email sent from this list:

https://stat.ethz.ch/mailman/listinfo/r-help

Go there, scroll to the bottom of page, and follow the unsubscribe instructions.


On Fri, Feb 11, 2011 at 7:25 AM, Fernanda Melo Carneiro
fermelcar2...@yahoo.com.br wrote:
 How I can take out my name from this list?

 Fernanda Melo Carneiro contato: (62) 3521-1480 e 8121-7374www.ecoevol.ufg.br
 Laboratório de Ecologia Teórica e Síntese (UFG)



        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] linear models with factors

2011-02-11 Thread Ista Zahn
I don't see any question here, other than "can you please help me".
Since the long-awaited esp package has still not been released, you're
going to have to be more specific than that...

Best,
Ista

On Fri, Feb 11, 2011 at 2:05 PM, ATANU ata.s...@gmail.com wrote:

 i am trying to fit a linear model with both continuous covariates and
 factors. When fitted with the intercept
 term the first level of the factor is treated by R as intercept and the
 estimate of the effects of remaining levels(say i th level)  are given as
 true estimate of i th level - estimate of 1st level.can any please help me?
 thanks in advance.
 --
 View this message in context: 
 http://r.789695.n4.nabble.com/linear-models-with-factors-tp3301811p3301811.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] About classification methods.

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 1:36 PM, Jaeik Cho wrote:


Dear R users,

I'm new of the R, I really don't know much.

I want classification some data (two class, many features and huge  
size of data) by using R.


At this case, I want using Support Vector Machine, Bayes theory  
based classifier, Discriminant Analysis, Regression based at least.


http://cran.r-project.org/web/views/MachineLearning.html
http://cran.r-project.org/web/views/Multivariate.html



Which package should I using, and can I compare each classifier  
result by predictions?


Thank you.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] About classification methods.

2011-02-11 Thread Bert Gunter


 Which package should I using, and can I compare each classifier result by
 predictions?


By prediction on the training data, emphatically no. By prediction on
new data not used for training, yes.

-- Bert
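
[Editorial note: a minimal sketch of the held-out comparison described above, using the e1071 package and the built-in iris data purely as an illustration; the other classifiers would follow the same fit/predict/table pattern.]

library(e1071)
set.seed(42)
idx   <- sample(nrow(iris), 100)                # random training rows
train <- iris[idx, ]
test  <- iris[-idx, ]
fit   <- svm(Species ~ ., data = train)
pred  <- predict(fit, newdata = test)
table(predicted = pred, actual = test$Species)  # off-diagonal cells are misclassifications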

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] adehabitatMA, LT, HR and HS version 0.1

2011-02-11 Thread Clément Calenge

Dear all,

I have just uploaded 4 new packages to CRAN, which are designed, in the long 
term, to replace the old package adehabitat:


* adehabitatMA: functions to perform spatial operations (morphology, 
buffer, etc.)

* adehabitatHS: functions for the analysis of habitat selection by wildlife
* adehabitatHR: functions for home range estimation of animals
* adehabitatLT: functions for animal movement analysis

I will still continue to maintain the old adehabitat, but in the 
long term adehabitat will be replaced by these four packages.


Detailed justification for the development of these packages is given below.

==

The R environment has changed a lot since I began the development of 
adehabitat in 2002 (development of namespace, etc.), and new classes and 
efficient methods have been developed to deal with spatial data (package 
sp). In addition, the number of functions available in the package has 
grown to more than 250 functions, implementing methods for habitat 
selection analysis, home range estimation, animal movement analysis, or 
spatial operations.


Therefore, I decided to:
(i) rewrite the package adehabitat to make it more compliant with these 
evolutions of the package R,

(ii) split adehabitat into four packages

The four new packages are:

* adehabitatMA: functions to perform spatial operations (morphology, 
buffer, etc.)

* adehabitatHS: functions for the analysis of habitat selection
* adehabitatHR: functions for home range estimation
* adehabitatLT: functions for movement analysis

I will continue to maintain the old adehabitat on CRAN for some time, 
but in the long term this package will disappear. These four packages 
are expected to become the future of adehabitat.


I now describe several major changes:

* the functions of the packages are documented precisely in a vignette 
(there is one vignette per package). Both their use and the theory 
underlying these functions are described there. To access it, type:

vignette("packagename")

* the home range estimation methods have been homogenized and return 
classes compliant with the classes of the package sp:
- the functions kernelUD, kernelbb, BRB and kernelkc all return objects 
of class estUDm, which are lists of objects of class estUD. The 
class estUD extends the class SpatialPixelsDataFrame.
- the function clusthr and LoCoH return objects of class MCHu, which 
are lists of SpatialPolygonsDataFrame

- the function mcp returns a SpatialPolygonsDataFrame

* home range estimation methods now take objects of class SpatialPoints 
as arguments


* objects of class ltraj are now characterized by an additional 
attribute infolocs, which is designed to store metadata on the 
trajectories (e.g. precision on the relocations). Most functions of the 
package adehabitatLT can be used to analyse these metadata (plotltr, 
etc. see the vignette).


* the method of characteristic hulls (Downs and Horner 2009), suggested 
by Paolo Cavallini on the list, has been added to adehabitatHR and 
returns an object of class MCHu


* the method of biased random bridges (Benhamou, 2011) has been added to 
adehabitatHR, to estimate the utilization distribution from a trajectory.


* the canonical OMI analysis, allowing exploration of habitat selection 
with radio-tracking data, has been added to the package adehabitatHS


* the autocorrelation functions described by Dray et al. (2010) for the 
analysis of movement have been added to the package adehabitatLT;


* the function rasterize.ltraj allows one to rasterize a trajectory (e.g. 
useful to identify the habitat characteristics of the steps building the 
trajectory)


* all the packages have a namespace for management of internal functions.

* Two additional functions, dl and ld, to convert the class 
ltraj efficiently to and from data frames (thanks to Mathieu Basille for the 
suggestion).


Note that the calculations performed by most functions of adehabitat 
have not changed (e.g. the algorithm implemented in kernelUD of 
adehabitat is the same as the algorithm implemented in the function 
kernelUD of the package adehabitatHR), since they have been deeply 
discussed with users and corrected during the last six years. Only the 
input and output of the functions have been changed.


=

Happy testing,


Clément Calenge
--
Clément CALENGE
Cellule d'appui à l'analyse de données
Direction des Etudes et de la Recherche
Office national de la chasse et de la faune sauvage
Saint Benoist - 78610 Auffargis
tel. (33) 01.30.46.54.14

___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] [R-pkgs] ez version 3.0

2011-02-11 Thread Mike Lawrence
Hi folks,

I'd like to announce the release of version 3.0 of the ez package.
This package was developed to aid those that are new to statistical
programming. Over the course of several years of helping colleagues
and students learn R, I observed that folks are often initially turned
off R because they have difficulty obtaining SPSS-like results quickly
(SPSS is the dominant environment in my field, psychology). ez
attempts to fill this gap, providing quick and easy analysis and
graphics for common experimental designs. By easing the early portions
of the R learning curve, ez hopes to promote the spread of R as a
means of open source and reproducible analysis.

ez may also be of interest to more advanced users as it includes the
ezMixed() function, which automates the assessment of fixed effects
in a mixed effects modelling context, and the ezPredict() function,
which obtains predictions for the fixed effects from a mixed effects
model.


Installing ez

Version 3.0 of ez requires that you have R 2.12.1 installed, which is
the latest version of R as of this notice. If you have an older
version of R you will need to update R by installing the latest
version (from http://cran.r-project.org/) before installing ez.

Windows and linux users should be able to install the latest version
by running the command:
install.packages( 'ez' )

Mac users should be able to install the latest version by running the commands:
install.packages( c( 'car' , 'reshape2' , 'plyr' , 'ggplot2' ,
'stringr' , 'lme4' , 'Matrix' ) )
install.packages( 'ez' , type='source' , dependencies=F )


Once installed, running the following commands will load ez and bring
up its help page that links to descriptions of all ez's functions:
library( ez )
?ez


Big changes in version 3.0

- A big rework of ezANOVA() to permit more flexibility, including
more nuanced handling of numeric predictor variables, specification of
sums-of-squares types when data is imbalanced, and an option to
compute/return an aov object representing the requested ANOVA for
follow-up contrast analysis. (The latter two features follow from the
discussion at 
http://stats.stackexchange.com/questions/6208/should-i-include-an-argument-to-request-type-iii-sums-of-squares-in-ezanova)

- An important bugfix for ezMixed(), which previously permitted
specification of multiple random effects but silently ignored all but
the last!

- A big rework of ezMixed(), completely changing the output
(including removal of p-values following the advice of Pinheiro & Bates,
2000, and many on the R-SIG-Mixed-Models mailing list) and providing a
new feature whereby the linearity of numeric fixed effects can be
assessed by comparison to higher polynomial degrees.


Also new

As noted above, this version fixes a big bug in ezMixed() about
which I wish I could have warned users sooner. To facilitate future
rapid notification of users, I've created a discussion group
(http://groups.google.com/group/ez4r) to which users can/should
subscribe to keep up to date on development news. Additionally, I
created a project page on github
(https://github.com/mike-lawrence/ez/issues) where users can submit
bug reports and feature requests. Finally, I encourage users to rate
or review ez (and any other packages you use) at crantastic.org
(http://crantastic.org/packages/ez).


Official CHANGES entry
3.0-0
- added urls in all documentation pointing to the
bug-report/feature-request site
(https://github.com/mike-lawrence/ez/issues) and the discussion group
(http://groups.google.com/group/ez4r).
- changed reshape dependency to reshape2
- ezANOVA
- fixed bug such that if detailed=F and car:Anova fails, the first
line of results from stats:aov is cut off.
- Added more nuanced treatment of numeric variables
- Added type argument to specify sums-of-squares type selection (1,2, or 3)
- Added white.adjust argument to permit heteroscedasticity
adjustment (see ?car::Anova)
- Added return_aov argument to permit returning a stats:aov object
along with results (this was requested by a user seeking to use the
aov object to compute contrasts)
- ezMixed
- IMPORTANT: Fixed bug such that only the last specified random
effect was implemented
- fixed bug such that an error occurred if only one fixed effect
was specified
- changed output format to a list containing a summary data frame,
a list of formulae, a list of errors, a list of warnings, and
(optionally) a list of fitted models
- Changed format of summary output including removal of p-values
(on the strong advice of many that the p-values from a likelihood
ratio test of a fixed effect is highly questionable)
- removed the return_anovas argument
- added nuanced ability to explore non-linear effects via addition
of fixed_poly and fixed_poly_max arguments
- ezPredict
- Added ability to handle models fit with I()'d variables
- Added stop error when encountering models with poly() and no
supplied prediction data 

[R] R for mac, default load package.

2011-02-11 Thread Jaeik Cho
Dear R users,


I'm looking for a solution for how I can add a package to the default load package 
list.

There are some packages I use in every analysis, and I don't want to 
type library(package) every time.

According to the R instructions, I should change the .Rprofile file, but I couldn't 
find it for R on the Mac.

How can I add a default load package?


Jaeik Cho__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fwd: About classification methods.

2011-02-11 Thread Jaeik Cho
I mean: after the testing step is done, I want to show which data were classified 
into the wrong class.
That is what I meant by predictions.

Jaeik


Begin forwarded message:

 From: Bert Gunter gunter.ber...@gene.com
 Date: February 11, 2011 3:00:47 PM CST
 To: David Winsemius dwinsem...@comcast.net
 Cc: Jaeik Cho choja...@gmail.com, r-help@r-project.org
 Subject: Re: [R] About classification methods.
 
 
 
 Which package should I using, and can I compare each classifier result by
 predictions?
 
 
 By prediction on the training data, emphastically no. By prediction on
 new data not used for training, yes.
 
 -- Bert

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Summarizing a response variable based on an irregular time period

2011-02-11 Thread Sam Albers
Hello,

I have a question about working with dates in R. I would like to summarize a
response variable based on a designated and irregular time period. The
purpose of this is to compare the summarized values (which were sampled
daily) to another variable that was sampled less frequently. Below is a
trivial example where I would like to summarize the response variable dat$x
such that I have average and sum values from Sept25-27 and Sept28-Oct1. Can
anyone suggest an efficient way to deal with dates like this? As an
extremely tedious previous effort, I simply created another grouping
variable but I had to do this manually. For a large dataset this really
isn't a good option.

Thanks in advance!

Sam

library(plyr)
dat <- data.frame(x = runif(7, 0, 125), date =
as.Date(c("2009-09-25","2009-09-26","2009-09-27","2009-09-28","2009-09-29","2009-09-30","2009-10-01"),
format="%Y-%m-%d"), yy = rep(letters[1:2], length.out = 7), stringsAsFactors = TRUE)

#If I was using a regular factor, I would do something like this, and this is
#what I would be hoping for as a result (obviously switching yy for date as
#the grouping variable)
ddply(dat, c("yy"), function(df) return(c(avg=mean(df$x), sum=sum(df$x))))

#This is the data.frame that I would like to compare to dat.
dat2 <- data.frame(y = runif(2, 0, 125), date =
as.Date(c("2009-09-27","2009-10-01"), format="%Y-%m-%d"))
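
[Editorial note, not part of the original message: one way to build the grouping variable automatically is cut() on the dates, using the sampling dates in dat2 as period ends.]

breaks     <- c(min(dat$date), dat2$date + 1)   # + 1 so each end date falls inside its period
dat$period <- cut(dat$date, breaks = breaks, right = FALSE)
ddply(dat, "period", summarise, avg = mean(x), total = sum(x))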

-- 
*
Sam Albers
Geography Program
University of Northern British Columbia
 University Way
Prince George, British Columbia
Canada, V2N 4Z9
phone: 250 960-6777
*

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rioja package, creating transfer function, WA, Error in FUN

2011-02-11 Thread mdc

Dear Peter,
Thank you very much for your suggestion. I went through the matrices and
removed the headings for pH, WTD etc., did the same for the site names
(which I made purely numerical), and now the WA function is working.
Thanks again!
Matthew

On Fri, Feb 11, 2011 at 12:44 AM, Peter Ehlers [via R] 
ml-node+3300569-1208751190-212...@n4.nabble.com wrote:

 On 2011-02-10 09:40, mdc wrote:

 
  Hi, I am a new R user and am trying to construct a palaeoenvironmental
  transfer function (weighted averaging method) using the package rioja.
  I've managed to insert the two matrices (the species abundance and the
  environmental data) and have assigned them to the y and x values
  respectively. When I try and enter the 'WA' function though, I get an
 'Error
  in FUN' message (see below for full values). Alas, I do not know what
 this
  means and have struggled to find similar problems to this online. Is
 there a
  step I've missed out between assigning the matrices and the WA function?
 
  SWED=odbcConnectExcel(file.choose())   (SWED is the environmental
 data
  file)
  sqlTables(SWED)
  Env=sqlFetch(SWED, "Sheet1")
  odbcClose(SWED)
  Env
 
  SampleId WTD  Moisture   pH EC
  1  N1_1   20 91.72700 3.496674  85.02688
  2  N1_22 93.88913 3.550794  85.69465
  3  N1_3   26 90.30269 3.948559 113.19206
  4  N1_45 94.14427 3.697213  48.56375
  5  N1_5   30 90.04269 3.745020 108.57278
  
  90 GAL_15 70 94.07849 3.777932  66.77673
 
 
  STEST=odbcConnectExcel(file.choose())
  sqlTables(STEST)  (STEST is the
  species abundance file)
  Spe=sqlFetch(STEST, "Sheet8")
  odbcClose(STEST)
  Spe
 
  (The species data contains the abundance of 32 species over 90 sites, set

  out like this)
  F1AmpFlavAmpWri  ArcCat   ArcDis
  1N1_1 22.2929936 0.000  0.000  0.000
  2N1_2 30.9677419 0.000  0.000  3.2258065
 
  library(rioja)
  y-as.matrix(Spe)
  x-as.matrix(Env)
 
  WA(y, x, tolDW = FALSE, use.N2=TRUE, check.data=TRUE, lean=FALSE)
  (the
  command from the WA section of the rioja booklet)
  Error in FUN(newX[, i], ...) : invalid 'type' (character) of argument

 Well, the error message is fairly clear: you're feeding in
 something of type 'character' where something else (presumably)
 numeric is wanted.

 I don't use rioja, but a quick glance at the documentation
 for WA shows that x should be 'a vector of environmental
 values to be modelled'. The example uses pH which is almost
 surely not a character vector.

 Your x is a *matrix* of *character* values. Possibly, you
 want to pull, say, pH out of your Env, convert to numeric
 and try that. Ditto for the other variables.

 If the above is total nonsense, please forgive my rioja
 ignorance and wait for more cogent advice from
 someone more knowledgeable than I.

 Peter Ehlers

 
 
  Any help would be most appreciated,
  Best wishes,
  Matthew

 __
 [hidden email] http://user/SendEmail.jtp?type=nodenode=3300569i=0mailing 
 list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 --
  If you reply to this email, your message will be added to the discussion
 below:

 http://r.789695.n4.nabble.com/Rioja-package-creating-transfer-function-WA-Error-in-FUN-tp3299636p3300569.html
  To unsubscribe from Rioja package, creating transfer function, WA, Error
 in FUN, click 
 herehttp://r.789695.n4.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_codenode=3299636code=bWF0dC5kLmNvZUBnbWFpbC5jb218MzI5OTYzNnwyMDMzMDk2ODg2.



-- 
View this message in context: 
http://r.789695.n4.nabble.com/Rioja-package-creating-transfer-function-WA-Error-in-FUN-tp3299636p3302063.html
Sent from the R help mailing list archive at Nabble.com.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R for mac, default load package.

2011-02-11 Thread Steve Lianoglou
Hi,

On Fri, Feb 11, 2011 at 3:21 PM, Jaeik Cho choja...@gmail.com wrote:
 Dear R users,


 I'm looking for solution about how can I add a package to default load 
 package list.

 Because, some packages, every time I use the package for analysis. I don't 
 want type load(package) every time.

 On the R instruction, I should change .Rprofile file, but I couldn't find R 
 for Mac.

 How can I add default load package?

The file to add your `library(whatever)` line to is ~/.Rprofile

~ is shorthand for your home directory, which is
/Users/YOUR_SHORT_LOGIN_NAME on OS X.
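
[Editorial note: a sketch of what ~/.Rprofile could contain; the documented route (see ?Startup) is to extend the "defaultPackages" option, which is read after the profile is sourced. The package names are just examples.]

local({
  pkgs <- getOption("defaultPackages")
  options(defaultPackages = c(pkgs, "lattice", "ggplot2"))
})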

-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] Update: googleVis 0.2.4 - Using the Google Visualisation API with R

2011-02-11 Thread Markus Gesmann
Hi all,

Version 0.2.4 of the googleVis package has been released on CRAN and 
will be available from your local CRAN mirror soon.

googleVis provides an interface between R and the Google Visualisation API. 
The functions of the package allow users to visualise data stored in R with the 
Google Visualisation API without uploading their data to Google.

Since the last version a lot of work has been carried out under the bonnet to 
make googleVis more flexible and easier to use.

The new version no longer requires the package to be installed into a folder 
with write access, and it provides a better interface for inserting the 
visualisation output into your own sites.

For more information see:
Project site: http://code.google.com/p/google-motion-charts-with-r/
Overview: 
http://google-motion-charts-with-r.googlecode.com/files/GoogleVisOverview_0.2.4.pdf
Examples: 
http://code.google.com/p/google-motion-charts-with-r/wiki/GadgetExamples
Vignette: 
http://cran.r-project.org/web/packages/googleVis/vignettes/googleVis.pdf
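
[Editorial note, not part of the announcement: the canonical demo, assuming the Fruits example data shipped with the package; argument names are from memory, so check ?gvisMotionChart.]

library(googleVis)
M <- gvisMotionChart(Fruits, idvar = "Fruit", timevar = "Year")
plot(M)                                               # opens the chart via the R HTTP help server
cat(M$html$chart, file = "myChart.html", sep = "\n")  # the building blocks mentioned below, for embedding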

And here are the NEWS:

Version 0.2.4 [2011-02-07]
==

Changes
   
   o plot.gvis no longer writes into the package folder. Instead
 temporary files are created. This overcomes the need to install
 the package into a directory with write access. Many thanks to
 Ben Bolker for this suggestion and code contribution.  
  
   o plot.gvis no longer requires the web server provided by
 the R.rsp package to display the visualisation output. Instead it
 uses the internal R HTTP help server. Many thanks to John Verzani
 for this suggestion and code contribution. 
  
   o R >= 2.11.0 is required to plot googleVis output, as it uses the
 internal R HTTP help server.
  
   o Updated vignette with a section on how to use googleVis with
 RApache and brew

NEW FEATURES

   o The plot function generates a web page which includes a link
 to the HTML code of the chart. Many thanks to Henrik Bengtsson
 for this suggestion.

   o gvis visualisation functions have a new argument 'chart id', to
 set the chart id of the exhibit manually.   

   o gvis functions return more details about the visualisation chart
 in a structured way. Suppose x is a 'gvis' object, then
 x$html$chart is a named character vector of the chart's
 JavaScript building blocks and html tags. 

   o print.gvis has a new argument 'tag', which gives the user more
 control over the output

   o New brew example files in: 
 system.file("brew", package = "googleVis")

BUG FIXES

   o gvisTable did not accept datetime columns.


Please feel free to send us an email rvisualisat...@gmail.com
if you would like to be kept informed of new versions, or if you have 
any feedback, ideas, suggestions or would like to collaborate.

Best regards,

The googleVis team:
Markus Gesmann, Diego de Castillo



[[alternative HTML version deleted]]

___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Time Series in R with ggplot2

2011-02-11 Thread info


Hi Folks,

First, the important information.

 sessionInfo()
R version 2.12.1 (2010-12-16)
Platform: i386-pc-mingw32/i386 (32-bit)

Second, my problem.

I have a series of data sets comprised in the following format.

 totsoc
   Location Year Value
1     SOUTH 1998    29
2     SOUTH 1999    20
3     SOUTH 2000    32
4     SOUTH 2001    29
5     SOUTH 2002    25
6     SOUTH 2003    28
7     SOUTH 2004    27
8     SOUTH 2005    28
9     SOUTH 2006    22
10    SOUTH 2007    31

In order to generate a time series plot in ggplot2, I ran the following
code.

qplot(Year, Value, data=totsoc, geom="line")

ggplot(totsoc, aes(x=Year, y=Value)) + geom_line()

However, neither command actually produces a plot with lines connecting the
data points. I get a blank window with the general gray background and the x
and y axes. The strange thing is that ggplot2 gives me the appropriate output
when I use "bar" or "point". For example, these commands work.

ggplot(totsoc, aes(Year, Value)) + geom_point()

qplot(Year, Value, data=totsoc, geom="point")

I also tried to generate some sample data, and that worked. However,
I'm not sure why these same commands aren't working on the earlier data
set.
Here is the sample data I was working with.

df <- data.frame(one=c(3,8,5,4,2), two=c("KS","MO","KS","CA","IA"),
three=c(2001:2005))
qplot(three, one, data=df, geom="line")


Can anyone please help?

Thank You,
A. Mathew

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fwd: About classification methods.

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 4:11 PM, Jaeik Cho wrote:

I mean, after the testing step is done, I want to show which data were
classified to the wrong class.

That is, the predictions.


At this point my suggestion is that you (re?)-read the Posting Guide  
and determine whether you have adhered to the level of detail and  
specificity that is implied to be desirable or optimal for questions  
to r-help. There may be a language issue and, without implying any  
moral issue, the provision of a worked example might be even more  
important here than it would be in a situation of a shared language.   
You might also consult the "How to ask good questions" link which IIRC  
is at the bottom of that document.


(My apologies to Bert if this was a question that he really was hoping  
to answer.)


--
David


Jaeik


Begin forwarded message:


From: Bert Gunter gunter.ber...@gene.com
Date: February 11, 2011 3:00:47 PM CST
To: David Winsemius dwinsem...@comcast.net
Cc: Jaeik Cho choja...@gmail.com, r-help@r-project.org
Subject: Re: [R] About classification methods.





Which package should I be using, and can I compare each classifier
result by predictions?



By prediction on the training data, emphatically no. By prediction on
new data not used for training, yes.

-- Bert
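
A minimal illustration of that point (my own sketch, not from the original
thread; rpart and the iris data are used purely as an example):

## hold out a test set, then inspect the misclassified rows
library(rpart)
set.seed(1)
idx   <- sample(nrow(iris), 100)          # training rows
train <- iris[idx, ]
test  <- iris[-idx, ]
fit   <- rpart(Species ~ ., data = train)
pred  <- predict(fit, newdata = test, type = "class")
test[pred != test$Species, ]              # cases classified to the wrong class
table(predicted = pred, actual = test$Species)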


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lattice auto.key gives mismatch colors

2011-02-11 Thread Dennis Murphy
Hi:

This seems to work:

mykey <- list(space = 'top',
  columns = 4,
  text = list(as.character(unique(src$s)), col = colors),
  points = list(pch = 1, col = colors)
 )
xyplot(v~t, groups=s, type='o', data=src, col=colors, key = mykey)

HTH,
Dennis

On Fri, Feb 11, 2011 at 7:56 AM, John Smith zmr...@gmail.com wrote:

 Hello All,

 I am using the following code to draw a figure. But the legend given by
 auto.key has mismatched colors. Could anyone help me?

 I am using R2.12.1 and most current lattice on windows XP.

 Thanks

 John

 library(lattice)

 src <- data.frame(t=rep(c('A','B','C','D'), rep(8,4)),
  s=rep(c(8132,8140,8178,8180,8224,8230,8337,8345), 4),
  v=c(55.10, 56.00, 206.00, 5.86, 164.00, 102.00, 171.00,
 280.00, 236.00,
91.10, 238.00, 102.00, 59.30, 227.00, 280.00, 316.00,
 205.00, 120.00,
273.00, 98.80, 167.00, 104.00, 155.00, 370.00, 215.00,
 97.60, 133.00,
135.00, 48.60, 135.00, 77.10, 91.90))
 colors <- rgb(c(228,  55,  77, 152, 255, 255, 166, 247),
  c(26,  126, 175,  78, 127, 255,  86, 129),
  c(28,  184,  74, 163,   0,  51,  40, 191), maxColorValue=255)
 xyplot(v~t, groups=s, type='o', data=src, col=colors, auto.key =
 list(points=TRUE, columns = 4, col=colors))

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How do I add a book title to the R bibliography?

2011-02-11 Thread Paul Teetor
R community:

I would like to add a new title to the bibliography on the R website
(http://www.r-project.org/doc/bib/R-books.html), but I cannot find
instructions for doing that.

Can anyone tell me, whom should I contact in order to add a new book?

(I added the title to the wiki's list of books, but the bibliography and the
list don't seem to be connected.)

Thank you!

Paul

 
Paul Teetor
Elgin, IL   USA
http://www.linkedin.com/in/paulteetor
 
For quant traders, there are no bad days in the market. It's just more
data.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Michael Friendly

On 2/11/2011 2:52 AM, Dr. Michael Wolf wrote:

Dear R colleagues,


...

 From the point of view of the costs, e.g., I had to learn to write help files
in a TeX-like language. But I'm a typical Word user. My last TeX writings
were done in the 1990s! If I change only a letter in a source file
(R file or help file) I have to build a new package. In my eyes this is a
very expensive way of seeing the results. It's easier for me to write those
files in HTML and to change the HTML source code. I don't need help
files in Rd format.



You can make things a whole lot easier by using prompt() to write the 
skeletons of the .Rd files.  Then you have a ready-made template for
your function or data and only need to fill in the details.  Once you
try this, you'll find it's not really any different than HTML markup.

?prompt
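
For instance (a hypothetical sketch; myfun and the file name are just
placeholders):

myfun <- function(x, y) x + y
prompt(myfun, filename = "myfun.Rd")  # writes an Rd skeleton to myfun.Rd
## edit myfun.Rd to fill in the details, then move it into the package's
## man/ directory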

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Time Series in R with ggplot2

2011-02-11 Thread Ista Zahn
Hi,
You probably have Year stored as a factor. See below.

totsoc <- structure(list(Location = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L), .Label = "SOUTH", class = "factor"), Year = 1998:2007,
    Value = c(29L, 20L, 32L, 29L, 25L, 28L, 27L, 28L, 22L, 31L
    )), .Names = c("Location", "Year", "Value"), class = "data.frame",
    row.names = c(NA, -10L))

qplot(Year, Value, data=totsoc, geom="line")  # works as expected
ggplot(totsoc, aes(x=Year, y=Value)) + geom_line()  # same

##convert year to a factor

dat <- totsoc
dat$Year <- factor(dat$Year)
qplot(Year, Value, data=dat, geom="line")  ## this now reproduces your problem
ggplot(dat, aes(x=Year, y=Value)) + geom_line()  ## same

## Solutions: 1) convert year to numeric, or 2) use group=1 as shown
below, or 3) convert year to date class (this always gives me problems
so I don't show an example).

qplot(Year, Value, data=dat, geom="line", group=1)
ggplot(dat, aes(x=Year, y=Value)) + geom_line(aes(group=1))
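
And a possible sketch of option 3 (added as an untested illustration, using
the dat object created above):

dat$Year <- as.Date(paste(as.character(dat$Year), "-01-01", sep = ""))
ggplot(dat, aes(x=Year, y=Value)) + geom_line()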

Best,
Ista

On Fri, Feb 11, 2011 at 5:29 PM,  i...@mathewanalytics.com wrote:


 Hi Folks,

 First, the important information.

 sessionInfo()
 R version 2.12.1 (2010-12-16)
 Platform: i386-pc-mingw32/i386 (32-bit)

 Second, my problem.

 I have a series of data sets comprised in the following format.

 totsoc
   Location Year Value
 1     SOUTH 1998    29
 2     SOUTH 1999    20
 3     SOUTH 2000    32
 4     SOUTH 2001    29
 5     SOUTH 2002    25
 6     SOUTH 2003    28
 7     SOUTH 2004    27
 8     SOUTH 2005    28
 9     SOUTH 2006    22
 10    SOUTH 2007    31

 In order to generate a time series plot in ggplot2, I ran the following
 code.

 qplot(Year, Value, data=totsoc, geom="line")

 ggplot(totsoc, aes(x=Year, y=Value)) + geom_line()

 However, neither command actually produces a plot with lines connecting the
 data points. I get a blank window with the general gray background and the x
 and y axes. The strange thing is that ggplot2 gives me the appropriate output
 when I use "bar" or "point". For example, these commands work.

 ggplot(totsoc, aes(Year, Value)) + geom_point()

 qplot(Year, Value, data=totsoc, geom="point")

 I also tried to generate some sample data, and that worked. However,
 I'm not sure why these same commands aren't working on the earlier data
 set.
 Here is the sample data I was working with.

 df <- data.frame(one=c(3,8,5,4,2), two=c("KS","MO","KS","CA","IA"),
 three=c(2001:2005))
 qplot(three, one, data=df, geom="line")


 Can anyone please help?

 Thank You,
 A. Mathew

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to compute yaxp and usr without plotting ?

2011-02-11 Thread Greg Snow
The usr parameter is either ylim, or ylim plus 4 percent on either side (see 
yaxs/xaxs). See the pretty function for possible ways to get the yaxp 
information.  Note that strwidth is based on the current coordinate system and 
will not give you the proper values unless the plot region has already been set 
up.
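
A rough sketch along those lines (my own illustration, assuming the default
yaxs = "r" and a known ylim):

ylim  <- c(0, 10)                             # example limits
usr.y <- ylim + c(-1, 1) * 0.04 * diff(ylim)  # the 4 percent expansion
ticks <- pretty(usr.y)                        # candidate tick positions
yaxp  <- c(range(ticks), length(ticks) - 1)   # same layout as par("yaxp")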

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
 project.org] On Behalf Of Yves REECHT
 Sent: Friday, February 11, 2011 3:38 AM
 To: r-help@r-project.org
 Subject: [R] How to compute yaxp and usr without plotting ?
 
   Dear all,
 
 I'd like to know how I could compute the parameters yaxp and (the y
 components of) usr without having to plot the data first. Note that
 ylim is /a priori/ fixed.
 
 The aim is to automatically adjust the parameter mgp without having
 to
 make the plot twice. Then, with yaxp and usr known, it should be
 easy to calculate a suitable mgp with the axTicks and strwidth
 functions.
 
 Many thanks in advance,
 Yves
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Writing R packages in an easier way?

2011-02-11 Thread Yihui Xie
I guess Emacs + ESS + roxygen might be the easiest way to write an R
package. Writing or modifying Rd files/templates, in my eyes, is
really time-consuming and the Rd files are difficult to maintain
(unless you really have a good memory). I became reluctant to maintain
my R packages simply because it felt painful to maintain the
documentation. After I learned a bit about roxygen and ESS a few
months ago, several of my packages came back to life again (e.g. this
picture is a piece of evidence:
https://github.com/yihui/animation/graphs/impact). The feeling was
probably like when Dr Harrell switched from SAS to S (see
library(fortunes); fortune('I quit using SAS')).

Anyway, prompt() and package.skeleton() are very helpful in the short run.
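
For anyone curious, a tiny hypothetical example of what roxygen-style
documentation looks like (the Rd file is generated from the tags written
above the function):

#' Add two numeric vectors
#'
#' @param x a numeric vector
#' @param y a numeric vector of the same length as x
#' @return the elementwise sum of x and y
#' @export
add2 <- function(x, y) x + y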

Regards,
Yihui
--
Yihui Xie xieyi...@gmail.com
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA



On Fri, Feb 11, 2011 at 5:30 PM, Michael Friendly frien...@yorku.ca wrote:
 On 2/11/2011 2:52 AM, Dr. Michael Wolf wrote:

 Dear R colleagues,

 ...

   From the point of view of the costs, e.g., I had to learn to write help
  files in a TeX-like language. But I'm a typical Word user. My last TeX
  writings were done in the 1990s! If I change only a letter in a source file
  (R file or help file) I have to build a new package. In my eyes this is a
  very expensive way of seeing the results. It's easier for me to write those
  files in HTML and to change the HTML source code. I don't need help
  files in Rd format.


 You can make things a whole lot easier by using prompt() to write the
 skeletons of the .Rd files.  Then you have a ready-made template for
 your function or data and only need to fill in the details.  Once you
 try this, you'll find it's not really any different than HTML markup.

 ?prompt


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] add mean and sd to dotplot in each panel using lattice

2011-02-11 Thread Xiaokuan Wei
Hi Phil,

This is exactly what I want; I just changed some trivial settings to make it
plot the standard error of the mean and to change the colours within each panel.
Thank you very much.
-Xiaokuan

library(lattice)

mypanel = function(x, y, ...) {
    agg <- aggregate(y, list(x),
        function(x) c(mean(x, na.rm=TRUE), sd(x, na.rm=TRUE), length(x[!is.na(x)])))
    mns = agg[,2][,1]
    sds = agg[,2][,2]
    ns  = agg[,2][,3]
    for(i in 1:nrow(agg)) llines(c(i-.2, i+.2), rep(mns[i], 2), lwd=3, col=i)
    for(i in 1:nrow(agg)) {
        llines(c(i-.1, i+.1), rep(mns[i] + sds[i]/(ns[i]^0.5), 2), col=i)  # standard error
        llines(c(i-.1, i+.1), rep(mns[i] - sds[i]/(ns[i]^0.5), 2), col=i)
    }
    panel.dotplot(x, y, col=1:nrow(agg), ...)
}

dotplot(Score ~ Dose | Sex , group=Dose, data=dat,panel=mypanel)







From: Phil Spector spec...@stat.berkeley.edu

Cc: r-help r-help@r-project.org
Sent: Wed, February 9, 2011 6:56:32 PM
Subject: Re: [R] add mean and sd to dotplot in each panel using lattice

Xiaokuan -
Maybe this will get you started:

mypanel = function(x,y,...){
agg <- aggregate(y,list(x),function(x)c(mean(x,na.rm=TRUE),sd(x,na.rm=TRUE)))
mns = agg[,2][,1]
sds = agg[,2][,2]
for(i in 1:nrow(agg))llines(c(i-.1,i+.1),rep(mns[i],2),lwd=3)
for(i in 1:nrow(agg)){llines(c(i-.1,i+.1),rep(mns[i] + 1.96 * sds[i],2));
  llines(c(i-.1,i+.1),rep(mns[i] - 1.96 * sds[i],2))}
panel.dotplot(x,y,...)
}

dotplot(Score ~ Dose | Sex, groups=Sex, data=dat,panel=mypanel)

- Phil Spector
 Statistical Computing Facility
 Department of Statistics
 UC Berkeley
spec...@stat.berkeley.edu

On Wed, 9 Feb 2011, Xiaokuan Wei wrote:

 Hi,

 I have a data frame like this:
 Score   Dose    Sex
 2.81    Dose1   M
 1.81    Dose1   M
 1.22    Dose1   M
 0.81    Dose1   M
 0.49    Dose1   M
 0.22    Dose1   M
 0.00    Dose1   M
 -0.19   Dose1   M
 -0.17   Dose1   F
 -0.32   Dose1   F
 -0.46   Dose1   F
 -0.58   Dose1   F
 -0.70   Dose1   F
 -0.81   Dose1   F
 -0.91   Dose1   F
 -1.00   Dose1   F
 -1.77   Dose2   M
 -1.85   Dose2   M
 -1.93   Dose2   M
 -2.00   Dose2   M
 -2.07   Dose2   M
 -2.14   Dose2   M
 -2.20   Dose2   M
 -2.26   Dose2   M
 -2.32   Dose2   F
 -2.38   Dose2   F
 -2.17   Dose2   F
 -2.49   Dose2   F
 -2.54   Dose2   F
 -2.58   Dose2   F
 -2.63   Dose2   F
 -2.42   Dose2   F



 I can make the dotplot using lattice package:

 library(lattice)
 dat <- read.table("test.txt", header=TRUE, sep="\t")
 dotplot(Score ~ Dose | Sex, groups=Sex, data=dat)


 How can I add a mean line and stdev bars around the mean line to this dotplot?
 I have searched the mailing lists; there are several posts about adding things
 to a dotplot, but they are not what I want.
 I know it may have something to do with a panel function, but I don't know how
 to implement it.
 I think the mean and stdev bars are popular additions to a dot plot, so I don't
 know why there is no argument to control this kind of plotting.
 Thanks.

 Xiaokuan






 [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




 



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Where can I download/install grDevices

2011-02-11 Thread Lizblip


djmuseR wrote:
 
 
 Here's one way:
 
  plot(1~1, ylab=expression("Areas ("~mu*m^2~")"))
 
 The tildes incorporate space between the math and text elements; they're
 optional, but useful. 
 
 

This worked, thanks! 
Elizabeth
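
An equivalent formulation (an untested sketch, not from the original
exchange) uses paste() inside expression():

plot(1 ~ 1, ylab = expression(paste("Areas (", mu, m^2, ")")))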

-- 
View this message in context: 
http://r.789695.n4.nabble.com/Where-can-I-download-install-grDevices-tp2401415p3302154.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] DiagnosisMed package corrupt?

2011-02-11 Thread Daryl Morris

Hi,

I was borrowing someone's code today, and they were using the package 
DiagnosisMed. I downloaded the package using the built-in package 
installer in the GUI (R 2.12.1 running on Mac OS 10.6.5). The package 
manager lists the following information: ‘DiagnosisMed’ version 0.2.3


Every time I attempted to load the package, I subsequently had R hang. 
The same commands which would cause the hanging after loading the 
package worked without a hitch when I didn't load the package.


Actually, after I wrote that last sentence I tried starting R from 
scratch, loading the package, and then typing 2+2.


This was my result:

> library(DiagnosisMed)
Loading required package: epitools
Loading required package: TeachingDemos
Loading required package: tcltk
Loading Tcl/Tk interface ...
> 2+2
Loading required package: tcltk

*** caught segfault ***
address 0x41002109, cause 'memory not mapped'

Traceback:
1: sys.nframe()
2: dynGet("__NameSpacesLoading__", NULL)
3: loadNamespace(package, c(which.lib.loc, lib.loc), keep.source = 
keep.source)

4: doTryCatch(return(expr), name, parentenv, handler)
5: tryCatchOne(expr, names, parentenv, handlers[[1L]])
6: tryCatchList(expr, classes, parentenv, handlers)
7: tryCatch(expr, error = function(e) { call <- conditionCall(e) if 
(!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- 
sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", 
dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, 
"\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = 
"w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], 
type = "b") if (w > LONG) prefix <- paste(prefix, "\n  ", sep = "") } 
else prefix <- "Error : " msg <- paste(prefix, conditionMessage(e), 
"\n", sep = "") .Internal(seterrmessage(msg[1L])) if (!silent && 
identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = 
stderr()) .Internal(printDeferredWarnings()) } invisible(structure(msg, 
class = "try-error"))})
8: try({ ns <- loadNamespace(package, c(which.lib.loc, lib.loc), 
keep.source = keep.source) dataPath <- file.path(which.lib.loc, package, 
"data") env <- attachNamespace(ns, pos = pos, dataPath = dataPath, deps)})
9: library(pkg, character.only = TRUE, logical.return = TRUE, lib.loc = 
lib.loc)

10: .getRequiredPackages2(pkgInfo, quietly = quietly)
11: library(DiagnosisMed)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace


I have a workaround. But I just thought I'd report it anyways.

thanks, Daryl Morris
FHCRC, SCHARP, UW Biostatistics

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Predictions with missing inputs

2011-02-11 Thread Axel Urbiz
Dear users,

I'll appreciate your help with this (hopefully) simple problem.

I have a model object which was fitted to inputs X1, X2, X3. Now I'd like
to use this object to make predictions on a new data set where only X1 and
X2 are available (i.e. use the estimated coefficients for these variables in
making predictions and ignore the coefficient on X3). Here's my attempt,
but, of course, it didn't work.

#fit some linear model to random data

library(splines)   # for ns()
x <- matrix(rnorm(100*3), 100, 3)
y <- sample(1:2, 100, replace=TRUE)
mydata <- data.frame(y, x)
mymodel <- lm(y ~ ns(X1, df=3) + X2 + X3, data=mydata)
summary(mymodel)

#create new data with 1 missing input

mynewdata <- data.frame(matrix(rnorm(100*2), 100, 2))
mypred <- predict(mymodel, mynewdata)
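
One possible workaround (my own sketch, not a reply from the thread): give
the new data matching column names and a constant X3, so the X3 term adds
nothing to the prediction:

names(mynewdata) <- c("X1", "X2")
mynewdata$X3 <- 0                     # coefficient on X3 is multiplied by zero
mypred <- predict(mymodel, newdata = mynewdata)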
Thanks in advance for your help!

Axel.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fwd: About classification methods.

2011-02-11 Thread David Winsemius


On Feb 11, 2011, at 9:07 PM, Jaeik Cho wrote:

Yes, on this point I can understand your suggestion, and I should read
"How to ask good questions".
I'm just new to the mailing list, and an R user doing research in
graduate school.


Anyone can make a mistake, and it can affect other people; however, a
good person teaches a good way.
Anyway, I actually couldn't understand why you are telling me this,
but sorry for my English and for my clumsiness as a first-time mailing list user.


Have your read the Posting Guide yet?



Sorry again.

ps. I also don't know why I should have put your name on CC, Bert; sorry.
Last of all, R is a kind of professional software. It means that many
R users are highly educated people, at least I think so.
Also, many foreigners who are not good at English writing use this
software. Please be a little more understanding of foreign users.


The fact that people asking and answering questions are highly  
educated is even more reason for including more background and asking  
a detailed question. My point, which seems to have been misunderstood,  
is that using a formal language such as R (or mathematics) is probably  
a superior method of getting the real questions across the language  
barrier. At the moment your questions seem too vague to allow a  
specific answer.





Thanks.

On Feb 11, 2011, at 4:57 PM, David Winsemius wrote:



On Feb 11, 2011, at 4:11 PM, Jaeik Cho wrote:

I mean, after the testing step is done, I want to show which data were
classified to the wrong class.

That is, the predictions.


At this point my suggestion is that you (re?)-read the Posting  
Guide and determine whether you have adhered to the level of detail  
and specificity that is implied to be desirable or optimal for  
questions to r-help. There may be a language issue and, without  
implying any moral issue, the provision of a worked example might  
be even more important here than it would be in a situation of a  
shared language.  You might also consult the "How to ask good  
questions" link which IIRC is at the bottom of that document.


(My apologies to Bert if this was a question that he really was  
hoping to answer.)


--
David


Jaeik


Begin forwarded message:


From: Bert Gunter gunter.ber...@gene.com
Date: February 11, 2011 3:00:47 PM CST
To: David Winsemius dwinsem...@comcast.net
Cc: Jaeik Cho choja...@gmail.com, r-help@r-project.org
Subject: Re: [R] About classification methods.





Which package should I be using, and can I compare each classifier
result by predictions?



By prediction on the training data, emphatically no. By prediction on
new data not used for training, yes.

-- Bert


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT





David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

