[R] different results of fisher.test function in R2.8.1 and R2.6.0

2009-03-27 Thread 马传香
Hi,
I ran the same fisher.test() computation in R 2.8.1 and R 2.6.0, and the
results are not identical: the last digit differs. Why?
Thank you!


Merry


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Weighting data with normal distribution

2009-03-27 Thread Alice Lin

I have a vector of binary data – a string of 0’s and 1’s. 
I want to weight these inputs with a normal kernel centered around entry x
so it is transformed into a new vector of data that takes into account the
values of the entries around it (weighting them more heavily if they are
near).

Example:
  -
   - -
-  -
0 1 0 0 1 0 0 1 1 1 1 
If x = 3, its current value is 0, but its new value with the Gaussian
weighting around it would be something like .1*0 + .5*1 + 1*0 + .5*0 + .1*1 = 0.6.

I want to be able to play with adjusting the variance to different values as
well.
I’ve found wkde in the mixtools library and think it may be useful but I
have not figured out how to use it yet.

Any tips would be appreciated.

Thanks!
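A minimal base-R sketch of the kernel weighting described above (the function name is made up for illustration; each position's dnorm weights are renormalised to sum to 1):

```r
# Smooth a binary vector with a Gaussian kernel centred on each entry.
gauss_smooth <- function(x, sd = 1) {
  n <- length(x)
  sapply(seq_len(n), function(i) {
    w <- dnorm(seq_len(n), mean = i, sd = sd)  # kernel centred at entry i
    sum(w * x) / sum(w)                        # normalised weighted average
  })
}

x <- c(0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1)
gauss_smooth(x, sd = 1)    # one smoothed value per original entry
gauss_smooth(x, sd = 3)    # larger sd -> heavier smoothing
```

Adjusting `sd` plays the role of the variance mentioned above; `wkde` in mixtools is aimed at weighted density estimation, which is a slightly different problem.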

-- 
View this message in context: 
http://www.nabble.com/Weighting-data-with-normal-distribution-tp22728289p22728289.html
Sent from the R help mailing list archive at Nabble.com.



[R] interactive image graphic

2009-03-27 Thread Abelian
Dear All,
I want to plot a kind of figure that the user can interact with.
For example, I have a matrix that can be displayed with the image() function,
i.e. values can be compared via their colors.
However, the color mapping depends on the range of the values.
Now I want to add a bar that the user can move, so that the
user can obtain an appropriate range.
Can anyone suggest a function that could solve this
problem?
Thanks
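The slider itself is outside base graphics, but the redraw step can be sketched in base R (a hypothetical helper; a real slider could come from e.g. the tcltk package, calling the helper on each move):

```r
# Redraw a matrix image with a user-chosen colour range; values outside
# the range are clamped so the palette resolves only the chosen interval.
m <- matrix(rnorm(100), 10, 10)
draw <- function(lo, hi) {
  image(pmin(pmax(m, lo), hi), zlim = c(lo, hi), col = heat.colors(64))
}
draw(-1, 1)   # call again with new limits as the bar moves
```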



[R] Data manipulation - multiplicate cases

2009-03-27 Thread MarcioRibeiro

Hi listers,
I am trying to rearrange my data and haven't found any information on how to
do it.
I have a data set with 3 variables: X, Y, Z.
1. I would like to replicate each value of X according to the count in my Y
variable.
2. Then, for each X, I want a dichotomous variable that is 1 for the first Z
replicates and 0 for the rest.
I can do the first part with
z <- rep(x, y)
but I don't know how to build the dichotomous variable from Z.
Example: I have

X    Y  Z
123  3  1
234  3  1
345  4  2
456  3  2

I want to get

X    Y  Z
123  3  1
123  3  0
123  3  0
234  3  1
234  3  0
234  3  0
345  4  1
345  4  1
345  4  0
345  4  0
456  3  1
456  3  1
456  3  0

Thanks in advance...
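A sketch of both steps in base R (column names as in the post; the indicator logic is inferred from the example output):

```r
dat <- data.frame(X = c(123, 234, 345, 456),
                  Y = c(3, 3, 4, 3),
                  Z = c(1, 1, 2, 2))

out <- data.frame(
  X = rep(dat$X, dat$Y),                     # step 1: replicate X by Y
  Y = rep(dat$Y, dat$Y),
  # step 2: per case, Z ones followed by Y - Z zeros
  Z = unlist(mapply(function(y, z) c(rep(1, z), rep(0, y - z)),
                    dat$Y, dat$Z))
)
out
```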



[R] how to quit this mailing list

2009-03-27 Thread Jiang Peng
   Hi,

   I will be closing this mailbox and starting a new one, and I want to
unsubscribe from this list. I searched the R website but found no answer.
Thanks in advance.



[R] loading and manipulating 10 data frames-simplified

2009-03-27 Thread PDXRugger

I have to load 10 different data frames and then manipulate those 10 data
frames, but would like to do this with more concise code than what I am
doing now.  I have tried a couple of approaches but cannot get it to work
correctly.

So the initial (bulky) code is:

#Bin 1
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin1_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin1_lookup.csv", header=FALSE)

#Separates Acres data from main data and converts acres to square feet
Bin1_Acres=Bin1_main[[1]]*43560

#Separates TAZ data from main data 
Bin1_TAZ=Bin1_main[[2]]

#Separates TAZ data from main data and converts acres to square feet
Bin1_TAZvacant=Bin1_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin1Acres_sum=sum(Bin1_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin1_cumper=cumsum(Bin1_Acres/Bin1Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin1_parprob=abs(1-Bin1_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin1Main.data = cbind(Bin1_Acres,Bin1_parprob,Bin1_TAZ,Bin1_TAZvacant)


#Bin 2
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin2_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin2_lookup.csv", header=FALSE)

#Separates Acres data from main data and converts acres to square feet
Bin2_Acres=Bin2_main[[1]]*43560

#Separates TAZ data from main data 
Bin2_TAZ=Bin2_main[[2]]

#Separates TAZ data from main data and converts acres to square feet
Bin2_TAZvacant=Bin2_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin2Acres_sum=sum(Bin2_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin2_cumper=cumsum(Bin2_Acres/Bin2Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin2_parprob=abs(1-Bin2_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin2Main.data = cbind(Bin2_Acres,Bin2_parprob,Bin2_TAZ,Bin2_TAZvacant)

#Bin 3 
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin3_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin3_lookup.csv", header=FALSE)

#Separates Acres data from main data and converts acres to square feet
Bin3_Acres=Bin3_main[[1]]*43560

#Separates TAZ data from main data 
Bin3_TAZ=Bin3_main[[2]]

#Separates TAZ data from main data and converts acres to square feet
Bin3_TAZvacant=Bin3_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin3Acres_sum=sum(Bin3_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin3_cumper=cumsum(Bin3_Acres/Bin3Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin3_parprob=abs(1-Bin3_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin3Main.data = cbind(Bin3_Acres,Bin3_parprob,Bin3_TAZ,Bin3_TAZvacant)

#Bin 4
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin4_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin4_lookup.csv", header=FALSE)

#Separates Acres data from main data and converts acres to square feet
Bin4_Acres=Bin4_main[[1]]*43560

#Separates TAZ data from main data 
Bin4_TAZ=Bin4_main[[2]]

#Separates TAZ data from main data and converts acres to square feet
Bin4_TAZvacant=Bin4_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin4Acres_sum=sum(Bin4_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin4_cumper=cumsum(Bin4_Acres/Bin4Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin4_parprob=abs(1-Bin4_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin4Main.data = cbind(Bin4_Acres,Bin4_parprob,Bin4_TAZ,Bin4_TAZvacant)

#Bin 5
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin5_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin5_lookup.csv", header=FALSE)

#Separates Acres data from main data and converts acres to square feet
Bin5_Acres=Bin5_main[[1]]*43560

#Separates TAZ data from main data 
Bin5_TAZ=Bin5_main[[2]]

#Separates TAZ data from main data and converts acres to square feet
Bin5_TAZvacant=Bin5_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin5Acres_sum=sum(Bin5_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin5_cumper=cumsum(Bin5_Acres/Bin5Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin5_parprob=abs(1-Bin5_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin5Main.data = cbind(Bin5_Acres,Bin5_parprob,Bin5_TAZ,Bin5_TAZvacant)

#Bin 6
#--- 

#Loads bin data frame from csv files with acres and TAZ data
Bin6_main -
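The repeated per-bin blocks above can be collapsed into a single loop; a sketch, assuming every file follows the Bin<i>_lookup.csv naming pattern shown above (the helper name is made up):

```r
base <- "I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values"

make_bin <- function(i) {
  main      <- read.csv(file.path(base, sprintf("Bin%d_lookup.csv", i)),
                        header = FALSE)
  acres     <- main[[1]] * 43560            # acres -> square feet
  taz       <- main[[2]]
  tazvacant <- main[[3]] * 43560
  parprob   <- abs(1 - cumsum(acres / sum(acres)))
  cbind(acres, parprob, taz, tazvacant)
}

# bins <- lapply(1:10, make_bin)   # bins[[1]] plays the role of Bin1Main.data
```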

[R] A beginner's question

2009-03-27 Thread minben
I am a new R user. I have set up a data frame, mydata, one of
whose columns is skill. Now I want to select the observations
whose skill value equals 1. What command will do this?



Re: [R] how to quit this mailing list

2009-03-27 Thread K. Elo
Hi,

 https://stat.ethz.ch/mailman/listinfo/r-help

and there you'll find the section:

To unsubscribe from R-help, get a password reminder, or change your
subscription options enter your subscription email address:

Hope this helps,
Kimmo



Re: [R] A beginner's question

2009-03-27 Thread Coen van Hasselt
Here's an example:

mydata <- data.frame(skill = c(1, 2, 3, 4), x = c(1, 1, 1, 1))
mydata[mydata$skill == 1, ]


On Fri, Mar 27, 2009 at 16:40, minben minb...@gmail.com wrote:
 I am a new R-language user. I have set up a data frame mydata,one of
 the colume of which is skill. Now I want to select the observations
 whose skill value is equal to 1,by what command can I get it?





Re: [R] A beginner's question

2009-03-27 Thread K. Elo
Hi,

minben wrote:
 I am a new R-language user. I have set up a data frame mydata,one of
 the colume of which is skill. Now I want to select the observations
 whose skill value is equal to 1,by what command can I get it?

Try this:
mydata1 <- subset(mydata, skill == 1)

Maybe you should also read this introduction:
http://cran.r-project.org/doc/manuals/R-intro.pdf


Kind regards,
Kimmo



Re: [R] use of @ character in variable name

2009-03-27 Thread Thomas Lumley

On Thu, 26 Mar 2009, Mike Miller wrote:

Importing data with a header row using read.delim, one variable should be named 
@5HTT but it is automatically renamed to X.5HTT, presumably because the @ is 
either unacceptable or misunderstood.  I've tried to find out what the rules 
are on variable names but have been unsuccessful.  I'll bet someone here can 
tell me where to look.  Maybe it's hidden away in here somewhere:


http://cran.r-project.org/doc/manuals/R-data.pdf


It's hidden away in: FAQ 7.14 What are valid names?
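Concretely, read.delim passes the header through make.names(); a quick illustration (check.names = FALSE keeps the name verbatim, at the cost of needing backtick quoting later):

```r
make.names("@5HTT")   # "@" is not a valid name character and names cannot
                      # start with a digit, so R substitutes "." and
                      # prepends "X", giving "X.5HTT"
# df <- read.delim("file.txt", check.names = FALSE)  # keeps "@5HTT" as-is
```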

 -thomas

Thomas Lumley   Assoc. Professor, Biostatistics
tlum...@u.washington.eduUniversity of Washington, Seattle



Re: [R] different results of fisher.test function in R2.8.1 and R2.6.0

2009-03-27 Thread Thomas Lumley

On Fri, 27 Mar 2009, 马传香 wrote:


Hi,
I ran the same fisher.test() computation in R 2.8.1 and R 2.6.0, and the
results are not identical: the last digit differs. Why?
Thank you!



Can you show us the data?  The only change listed in the NEWS file since 2.6.0 
was with the option simulate.p.value = TRUE; you don't say whether you used 
that option.
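For reference, that option only applies to tables larger than 2x2, and it makes the p-value depend on the random seed (the example table here is made up):

```r
# Exact vs Monte Carlo p-values for a 2x3 table.
m <- matrix(c(2, 9, 5, 4, 3, 1), nrow = 2)
p_exact <- fisher.test(m)$p.value                           # deterministic
set.seed(1)
p_sim <- fisher.test(m, simulate.p.value = TRUE, B = 1e4)$p.value
c(p_exact, p_sim)   # the simulated value varies slightly with seed and B
```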

 -thomas


Thomas Lumley   Assoc. Professor, Biostatistics
tlum...@u.washington.eduUniversity of Washington, Seattle



Re: [R] Physical or Statistical Explanation for the Funnel Plot?

2009-03-27 Thread Thomas Lumley

On Thu, 26 Mar 2009, Jason Rupert wrote:



The R code below produces (after running for a few minutes on a decent 
computer) the plot shown at the following location:

http://n2.nabble.com/Is-there-a-physical-and-quantitative-explanation-for-this-plot--td2542321.html

I'm just taking the mean of a given set of random variables, where the set size 
is increased.  There appears to be a quick convergence and then a pretty steady

variance out to a set size of 10,.



Part of the convergence is just that the standard deviation of a mean of N 
observations is proportional to 1/sqrt(N). In your case the distributions are 
all exactly Normal; the same convergence would occur with other distributions, 
but you would also see the change in shape from left to right as the 
distribution converged to Normal.

There are also some plotting artifacts due to the size of the points.  The 
apparent stabilization at large N (and the wide vertical bar at zero that Marc 
Schwartz commented on) is due partly to the slow convergence of 1/sqrt(N) but 
largely to the fact that the width can't be smaller than the width of a point.

When I draw funnel plots like this for whole-genome association data I use the 
'hexbin' package, which doesn't have these artifacts and is much faster and 
produces smaller graphics files.
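A sketch of that kind of funnel plot with hexbin (the package is assumed to be installed from CRAN; set sizes and bin count are made up):

```r
# Funnel of sample means vs set size, drawn with hexagonal binning.
N <- rep(seq(10, 5000, by = 10), times = 20)     # set sizes, 20 draws each
m <- vapply(N, function(n) mean(rnorm(n)), numeric(1))
if (requireNamespace("hexbin", quietly = TRUE)) {
  plot(hexbin::hexbin(N, m, xbins = 50),
       xlab = "set size N", ylab = "sample mean")
}
```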

-thomas


Thomas Lumley   Assoc. Professor, Biostatistics
tlum...@u.washington.eduUniversity of Washington, Seattle



Re: [R] R 2.8.1 and 2.9 alpha crash when running survest of Design package

2009-03-27 Thread Thomas Lumley


The Design package is incompatible with updates to the survival package 
(version 2.35 and higher) that were made for version 2.9.0. It calls some 
internal fitting functions (coxreg.fit, agreg.fit) whose arguments have changed.

According to the CRAN checks, about a dozen other packages were also affected 
by other changes in the update, but most just give an error message rather than 
crash. Maintainers of all the packages have been notified and given my best 
guess at the reason for their specific incompatibility (and an offer of further 
assistance if necessary)

You may need to downgrade to version 2.34 of the survival package until Design 
is updated.

 -thomas



On Fri, 27 Mar 2009, Nguyen Dinh Nguyen wrote:


Dear Prof Harrell and everyone,

My PC: Windows XP Service Pack 3 and Service Pack 2
R version 2.8.1 and 2.9 alpha

For the last 3 days, after updating R, my two computers have been facing
problems when running existing, previously runnable R commands that involve
the Design package.

I attempt to use 'survest', but it fails every time, with R (both 2.8.1
and 2.9 alpha) shutting down immediately and the following error report
messages:

AppName: rgui.exe  AppVer: 2.90.48212.0  ModName: survival.dll
ModVer: 0.0.0.0  Offset: 7749

However, if I run these commands on other computers which have not been
updated for 2 weeks, they run OK.

Could you please consider the matter and give me advice?

I am looking forward to hearing from you soon.

Regards

Nguyen D Nguyen
Garvan Institute of Medical Research
Sydney, Australia






Thomas Lumley   Assoc. Professor, Biostatistics
tlum...@u.washington.eduUniversity of Washington, Seattle



[R] problem with the plm package

2009-03-27 Thread Helen Chen

Dear R-help, 
   I use the plm package's function plm() to analyse panel data and
   estimate a fixed-effects model.
   I use the following code:
fe <- plm(y ~ x + z, data, model = "within")
   But my result has no intercept. Why? I consulted the plm manual (plm.pdf),
but I can't find an answer.
   Can I estimate a model with an intercept?

   Thanks, 
   Helen Chen 
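For context: the "within" estimator demeans each series, so no overall intercept is reported; switching to model = "pooling" shows one. A runnable sketch with plm's bundled Grunfeld data (package assumed installed):

```r
if (requireNamespace("plm", quietly = TRUE)) {
  library(plm)
  data("Grunfeld", package = "plm")

  fe <- plm(inv ~ value + capital, data = Grunfeld, model = "within")
  print(coef(fe))       # slopes only: the intercept is absorbed by the
                        # fixed effects (see fixef(fe))
  pool <- plm(inv ~ value + capital, data = Grunfeld, model = "pooling")
  print(coef(pool))     # includes "(Intercept)"
}
```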



[R] Remove error data and clustering analysis

2009-03-27 Thread guodong wang
Hi, all,

I'd like to do a clustering analysis on my datasets. Example data:

Dataset 1:

500, 490, 486, 490, 491, 493, 480, 461, 504, 476, 434, 500, 470, 495,
3116, 3142, 12836, 3062, 3091, 3141, 3177, 3150, 3114, 3149;

Dataset 2:

506, 473, 495, 494, 434, 459, 445, 475, 476, 128367, 470, 513, 466,
476, 482, 1201, 469, 502;

I have many datasets like these. Basically, every dataset falls into one
or two clusters (no more than 2); meanwhile, there are error data points:
for example, 12836 is an error point in Dataset 1, and 128367 and 1201
are error points in Dataset 2.

The clustered data follow the normal distribution, and the standard
deviation is known. That is, a single cluster is normally distributed when
the dataset forms one cluster (like Dataset 2), and each of the two
clusters is normally distributed when the dataset forms two clusters
(like Dataset 1). Error points lie far from the means.

I am wondering: is there any mathematical pipeline/function that can
remove the error data and cluster each dataset into 1 or 2 clusters?

Thank you for your reply.
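A sketch under stated assumptions (not an established method): clusters are tight relative to the known sd `s`, error points sit far from everything else, and there are at most 2 real clusters. It splits the sorted data at large gaps and treats groups too small to be a cluster as error points; the function name and cutoffs are made up:

```r
find_clusters <- function(x, s, gap_mult = 5, min_size = 3) {
  xs  <- sort(x)
  grp <- cumsum(c(1, diff(xs) > gap_mult * s))      # split at big gaps
  tab <- table(grp)
  real <- as.integer(names(tab)[tab >= min_size])   # real clusters only
  list(clusters = lapply(real, function(g) xs[grp == g]),
       errors   = xs[!grp %in% real])               # tiny groups = errors
}

d1 <- c(500, 490, 486, 490, 491, 493, 480, 461, 504, 476, 434, 500, 470,
        495, 3116, 3142, 12836, 3062, 3091, 3141, 3177, 3150, 3114, 3149)
res <- find_clusters(d1, s = 20)
length(res$clusters)   # two clusters recovered
res$errors             # the stray 12836
```

With a known sd, fitting 1- vs 2-component normal mixtures (e.g. via the mclust or mixtools packages) would be a more principled alternative.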



Re: [R] Plot with nested x-labels

2009-03-27 Thread Jim Lemon

Antje wrote:

Hi there,

I was wondering whether it is possible to create plots with nested 
labels like in Excel?

If yes, could anyone provide me the information how to do it?
I've attached an image of an Excel plot to show you, what I'd like to 
plot with R :-)



Hi Antje,
This is the very question, asked by Ofir Levy, that led to the new hierobarp 
function in the plotrix package.


Jim



Re: [R] A beginner's question

2009-03-27 Thread Florin Maican

You can do it like this:

1.
mydata[mydata$skill == 1, ]

2. 
   mydata[mydata[, "skill"] == 1, ]
  
/Florin


On Thu, 26 Mar 2009 23:40:32 -0700 (PDT)
minben minb...@gmail.com wrote:

 I am a new R-language user. I have set up a data frame mydata,one of
 the colume of which is skill. Now I want to select the observations
 whose skill value is equal to 1,by what command can I get it?
 
 


-- 
 Florin G. Maican
==

Ph.D. candidate,
Department of Economics,
School of Business, Economics and Law, 
Gothenburg University, Sweden   
---
P.O. Box 640 SE-405 30, 
Gothenburg, Sweden  

 Mobil:  +46 76 235 3039 
 Phone:  +46 31 786 4866 
 Fax:+46 31 786 4154  
 Home Page: http://maicanfg.googlepages.com/index.html
 E-mail: florin.mai...@handels.gu.se 

 Not everything that counts can be 
 counted, and not everything that can be 
 counted counts.
 --- Einstein ---



Re: [R] Sort by timestamp

2009-03-27 Thread Arien Lam
Good morning Johannes,

This might help. Try:

df <- data.frame(V1=as.factor(c('2008-10-14 09:10:00','2008-10-14
9:20:20','2008-10-14 08:45:00')),V2=runif(3))

df   # is a dataframe, just as yours

class(df$V1) # is a factor, just as yours. See ?factor
 # This will probably not be ordered
 # in a way you like.

df$V1 <- as.POSIXct(df$V1, tz='CET') # makes it a time. See ?POSIXct

class(df$V1) # is a POSIX time now

df2 <- df[do.call(order, df), ] # see ?order

df2  # sorted in a way you like


Cheers, Arien


On Thu, March 26, 2009 08:54, j.k wrote:

 #Good morning alltogheter. I'm using R for a short time to analyse
 TimeSeries
 and I have the following Problem:
 #I have a bunch of Time Series:
 #First of all I import them from a txt File

 data.input01 <- read.csv("./LD/20081030.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")
 data.input02 <- read.csv("./LD/20090305.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")
 data.input03 <- read.csv("./LD/20081114.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")
 data.input04 <- read.csv("./LD/20081201.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")
 data.input05 <- read.csv("./LD/20081219.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")
 data.input06 <- read.csv("./LD/20090107.txt", header = TRUE, sep = ";",
 quote = "\"", dec = ",", fill = TRUE, comment.char = "")

 #After the import they look like that:

   V1   V2
 1  2008-10-14 08:45:00 92130.68
 2  2008-10-14 08:50:00 94051.70
 3  2008-10-14 08:55:00 97050.85
 4  2008-10-14 09:00:00 81133.81
 5  2008-10-14 09:05:00 70705.40
 6  2008-10-14 09:10:00 75213.92
 7  2008-10-14 09:15:00 90876.14
 8  2008-10-14 09:20:00 85995.17

 #Next steps are to combine them with rbind and sort duplicates out

 data.troughput01 <-
 rbind(data.input03,data.input01,data.input04,data.input02,data.input05,data.input06)
 data.troughput02 <- unique(data.troughput01)

 #The Problem is that the dates are mixed and I want to sort/order them by
 the date and time.
 #The class of the Date/time is as followed:
 class(data.input01$V1)
 [1] factor

 # I've already tried sort and order but it didn't work
 #Are there any suggestions, how I can solve this issue??

 Thanks in advance
 Johannes





-- 
drs. H.A. (Arien) Lam (Ph.D. student)
Department of Physical Geography
Faculty of Geosciences
Utrecht University, The Netherlands



Re: [R] Plot with nested x-labels

2009-03-27 Thread Antje

Hi Jim,

is this plotrix package version higher than 2.2-7 ?

Antje


Jim Lemon schrieb:

Antje wrote:

Hi there,

I was wondering whether it is possible to create plots with nested 
labels like in Excel?

If yes, could anyone provide me the information how to do it?
I've attached an image of an Excel plot to show you, what I'd like to 
plot with R :-)



Hi Antje,
The very question asked by Ofir Levy that led to the new hierobarp 
function in the plotrix package.


Jim






Re: [R] Sort by timestamp

2009-03-27 Thread j.k

Works perfect!!

Thanks a lot...

Cheers
Johannes



Arien Lam wrote:
 
 Good morning Johannes,
 
 This might help. Try:
 
 df <- data.frame(V1=as.factor(c('2008-10-14 09:10:00','2008-10-14
 9:20:20','2008-10-14 08:45:00')),V2=runif(3))
 
 df   # is a dataframe, just as yours
 
 class(df$V1) # is a factor, just as yours. See ?factor
  # This will probably not be ordered
  # in a way you like.
 
 df$V1 <- as.POSIXct(df$V1, tz='CET') # makes it a time. See ?POSIXct
 
 class(df$V1) # is a POSIX time now
 
 df2 <- df[do.call(order, df), ] # see ?order
 
 df2  # sorted in a way you like
 
 
 Cheers, Arien
 
 
 
 
 -- 
 drs. H.A. (Arien) Lam (Ph.D. student)
 Department of Physical Geography
 Faculty of Geosciences
 Utrecht University, The Netherlands
 
 
 

-- 
View this message in context: 
http://www.nabble.com/Sort-by-timestamp-tp22717322p22738808.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] A beginner's question

2009-03-27 Thread Paul Hiemstra

minben schreef:

I am a new R-language user. I have set up a data frame mydata,one of
the colume of which is skill. Now I want to select the observations
whose skill value is equal to 1,by what command can I get it?

  

To add to the number of possibilities :):

subset(mydata, skill == 1)

cheers,
Paul

--
Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone:  +3130 274 3113 Mon-Tue
Phone:  +3130 253 5773 Wed-Fri
http://intamap.geo.uu.nl/~paul





[R] Plot with nested x-labels

2009-03-27 Thread Antje

Hi there,

I was wondering whether it is possible to create plots with nested labels like 
in Excel?

If yes, could anyone tell me how to do it?
I've attached an image of an Excel plot to show you what I'd like to plot with 
R :-)


Ciao,
Antje
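Base R has no built-in nested axis labels, but a common workaround is to draw the inner labels with barplot() and add the outer group labels one line lower with mtext(). A small sketch (data and label names invented):

```r
# Hypothetical data: 3 groups x 2 sub-categories.
vals <- c(3, 5, 2, 4, 6, 1)
mids <- barplot(vals, names.arg = rep(c("x1", "x2"), 3))

# Outer (nested) labels, centered under each pair of bars.
grp_at <- tapply(mids, rep(1:3, each = 2), mean)
mtext(c("Group A", "Group B", "Group C"), side = 1, line = 2.5, at = grp_at)
```

The same trick works for boxplot() or any plot that returns (or lets you compute) the x-positions of its categories.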
inline: plot.png
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Ploting a matrix

2009-03-27 Thread skrug

Hi everybody,

in a matrix consisting of 49 columns, I would like to plot all columns 
against the first in 48 different graphs.

Can you help me?

Thank you in advance
Sebastian

--
***

Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ploting a matrix

2009-03-27 Thread baptiste auguie

Something like this perhaps,


a <- matrix(rnorm(5*49), ncol=49)

pdf(width=15, height=15)

par(mfrow= c(8,6))
apply(a[,-1], 2, plot, x= a[,1])

dev.off()



HTH,

baptiste

On 27 Mar 2009, at 11:05, skrug wrote:


Hi everybody,

in a matrix consisting of 49 columns, I would like to plot all columns
against the first in 48 different graphs.
Can you help me?

Thank you in advance
Sebastian

--
***

Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] 'stretching' a binomial variable

2009-03-27 Thread imicola

Hi,

I'm carrying out some Bayesian analysis using a binomial response variable
(a proportion: 0 to 1), but most of my observations have a value of 0 and many
have very small values (e.g. 0.001).  I'm having trouble getting my MCMC
algorithm to converge, so I have decided to try normalising my response
variable to see if this helps.

I want it to stay between 0 and 1 but to have a larger range of values, or
just for them all to be slightly higher.

Does anyone know the best way to achieve this?  I could just add a value to
each observation (say 10 to increase the proportion a bit, but ensuring it
would still be between 0 and 1) - would that be ok?  Or is there a better
way to stretch the values up?

Sorry - I know it's not really an R-specific question, but I have never found
a forum with as many stats-literate people as this one :-)

Cheers - any advice much appreciated!

nicola
-- 
View this message in context: 
http://www.nabble.com/%27stretching%27-a-binomial-variable-tp22740114p22740114.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ICC question: Interrater and intrarater variability (intraclass correlation coefficients)

2009-03-27 Thread Tal Galili
Hello dear R help group.

I encountered this old thread (http://tinyurl.com/dklgsk) containing a
similar question to the one I have, left without an answer,
and I am hoping one of you might help.


A simplified situation: I have a factorial design (with 2^3 experiment
combinations) for 167 subjects; each one has answered the same question
twice (out of a bunch of question types).
Each answer could get an integer number between 0 to 3.

I wish to combine the two answers, but first to be sure I could, I would
have liked to run an ICC (Intraclass correlation) check on the two answers.
Naturally, I would use the irr, concord, or psy packages (as John Fox
suggested back then), but I can't because of the repetitions of different
design questions for each patient:
the mentioned packages (irr, concord, and psy) can take only an n*m
matrix of subjects and raters. No place is given for the repetitions in the
data, and therefore it is impossible to get results
for the INTRArater reliability.


Thanks,

Tal

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] loading and manipulating 10 data frames-simplified

2009-03-27 Thread Uwe Ligges
Put the data.frames as elements in a list and loop / sapply() over that 
list.
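A minimal sketch of that suggestion (the ten data frames are simulated here rather than read from the poster's CSV files, and the transformation is just an example):

```r
# Simulate ten small data frames standing in for the ten CSV files.
make_bin <- function() data.frame(acres = runif(5), TAZ = 1:5, vac = runif(5))
bin_list <- replicate(10, make_bin(), simplify = FALSE)

# One shared transformation applied to every data frame in the list.
result <- lapply(bin_list, function(d) {
  sqft    <- d$acres * 43560
  parprob <- abs(1 - cumsum(sqft) / sum(sqft))
  cbind(sqft, parprob, TAZ = d$TAZ, vac_sqft = d$vac * 43560)
})
length(result)   # one processed matrix per input data frame
```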


Uwe Ligges


PDXRugger wrote:

I have to load 10 different data frames and then manipulate those 10 data
frames but would like to do this in a more simplified code than what i am
doing.  I have tried a couple of approaches but cannot get it to work
correctly.  


So the initial (bulky) code is:

#Bin 1
#--- 


#Loads bin data frame from csv files with acres and TAZ data
Bin1_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin1_lookup.csv", header=FALSE);

#Separates Acres data from main data and converts acres to square feet
Bin1_Acres=Bin1_main[[1]]*43560

#Separates TAZ data from main data 
Bin1_TAZ=Bin1_main[[2]]


#Separates TAZ data from main data and converts acres to square feet
Bin1_TAZvacant=Bin1_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin1Acres_sum=sum(Bin1_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin1_cumper=cumsum(Bin1_Acres/Bin1Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin1_parprob=abs(1-Bin1_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin1Main.data = cbind(Bin1_Acres,Bin1_parprob,Bin1_TAZ,Bin1_TAZvacant)


#Bin 2
#--- 


#Loads bin data frame from csv files with acres and TAZ data
Bin2_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin2_lookup.csv", header=FALSE);

#Separates Acres data from main data and converts acres to square feet
Bin2_Acres=Bin2_main[[1]]*43560

#Separates TAZ data from main data 
Bin2_TAZ=Bin2_main[[2]]


#Separates TAZ data from main data and converts acres to square feet
Bin2_TAZvacant=Bin2_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin2Acres_sum=sum(Bin2_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin2_cumper=cumsum(Bin2_Acres/Bin2Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin2_parprob=abs(1-Bin2_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin2Main.data = cbind(Bin2_Acres,Bin2_parprob,Bin2_TAZ,Bin2_TAZvacant)

#Bin 3 
#--- 


#Loads bin data frame from csv files with acres and TAZ data
Bin3_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin3_lookup.csv", header=FALSE);

#Separates Acres data from main data and converts acres to square feet
Bin3_Acres=Bin3_main[[1]]*43560

#Separates TAZ data from main data 
Bin3_TAZ=Bin3_main[[2]]


#Separates TAZ data from main data and converts acres to square feet
Bin3_TAZvacant=Bin3_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin3Acres_sum=sum(Bin3_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin3_cumper=cumsum(Bin3_Acres/Bin3Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin3_parprob=abs(1-Bin3_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin3Main.data = cbind(Bin3_Acres,Bin3_parprob,Bin3_TAZ,Bin3_TAZvacant)

#Bin 4
#--- 


#Loads bin data frame from csv files with acres and TAZ data
Bin4_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin4_lookup.csv", header=FALSE);

#Separates Acres data from main data and converts acres to square feet
Bin4_Acres=Bin4_main[[1]]*43560

#Separates TAZ data from main data 
Bin4_TAZ=Bin4_main[[2]]


#Separates TAZ data from main data and converts acres to square feet
Bin4_TAZvacant=Bin4_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin4Acres_sum=sum(Bin4_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin4_cumper=cumsum(Bin4_Acres/Bin4Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin4_parprob=abs(1-Bin4_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin4Main.data = cbind(Bin4_Acres,Bin4_parprob,Bin4_TAZ,Bin4_TAZvacant)

#Bin 5
#--- 


#Loads bin data frame from csv files with acres and TAZ data
Bin5_main <-
read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin5_lookup.csv", header=FALSE);

#Separates Acres data from main data and converts acres to square feet
Bin5_Acres=Bin5_main[[1]]*43560

#Separates TAZ data from main data 
Bin5_TAZ=Bin5_main[[2]]


#Separates TAZ data from main data and converts acres to square feet
Bin5_TAZvacant=Bin5_main[[3]]*43560

#Sums each parcel acreage data of the bin
Bin5Acres_sum=sum(Bin5_Acres)

#Creates data frame of cumlative percentages of each parcel of bin
Bin5_cumper=cumsum(Bin5_Acres/Bin5Acres_sum)

#Calculates the probability of choosing particular parcel from bin
Bin5_parprob=abs(1-Bin5_cumper)

#Combines parcel acreage data and cumlative percentage data
Bin5Main.data = cbind(Bin5_Acres,Bin5_parprob,Bin5_TAZ,Bin5_TAZvacant)

#Bin 6
#--- 

Re: [R] Data manipulation - multiplicate cases

2009-03-27 Thread jim holtman
Is this what you are looking for:

> x
X Y Z
1 123 3 1
2 234 3 1
3 345 4 2
4 456 3 2
> new.x <- x[rep(seq(nrow(x)), times=x$Y),]
> new.x
  X Y Z
1   123 3 1
1.1 123 3 1
1.2 123 3 1
2   234 3 1
2.1 234 3 1
2.2 234 3 1
3   345 4 2
3.1 345 4 2
3.2 345 4 2
3.3 345 4 2
4   456 3 2
4.1 456 3 2
4.2 456 3 2
> new.x$Z <- ave(new.x$Z, new.x$X, FUN=function(z) c(rep(1, z[1]), rep(0,
+   length(z) - z[1])))
> new.x
  X Y Z
1   123 3 1
1.1 123 3 0
1.2 123 3 0
2   234 3 1
2.1 234 3 0
2.2 234 3 0
3   345 4 1
3.1 345 4 1
3.2 345 4 0
3.3 345 4 0
4   456 3 1
4.1 456 3 1
4.2 456 3 0



On Thu, Mar 26, 2009 at 4:26 PM, MarcioRibeiro mes...@pop.com.br wrote:

 Hi listers,
 I am trying to arrange my data and I didn't find any information how to do
 it!
 I have a data with 3 variables: X Y Z
 1-I would like to replicate the information of X according to the number I
 have for my Y variable...
 2-Then I want to flag with a dichotomous variable, according to the number in
 my variable Z, the first rows of each X...
 I can do the first part by...
 z <- rep(x, y)
 But I don't know how to set a dichotomous variable according to Z...
 Exemple...
 I have...
 X      Y    Z
 123   3    1
 234   3    1
 345   4    2
 456   3    2
 I want to get...
 X      Y    Z
 123   3    1
 123   3    0
 123   3    0
 234   3    1
 234   3    0
 234   3    0
 345   4    1
 345   4    1
 345   4    0
 345   4    0
 456   3    1
 456   3    1
 456   3    0

 Thanks in advance...
 --
 View this message in context: 
 http://www.nabble.com/Data-manipulation---multiplicate-cases-tp22730453p22730453.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Some install package fixes for Ubuntu Hardy

2009-03-27 Thread John C Nash

Thanks Dirk.

I should have noted the debian sig. My question -- as Dirk recognized -- 
was where to let folk
know. Part of the issue is that Ubuntu users who are new to R may not 
find this stuff. My

workaround may also be a pointer for those with other Linux distros.

There is now another issue: I don't find r-cran-java as an available 
package when I
run apt-cache search. (I do find r-cran-rgl; rgl popped up as a 
dependency in the

install.packages, but I should have looked for the debian package.)

I'll post my r-cran-java query to the r-sig-debian.

JN

Dirk Eddelbuettel wrote:

On 26 March 2009 at 09:45, John C Nash wrote:
| I encountered some failures in using install.packages() to install rgl 
| and rJava in some of my (multiple) Ubuntu Hardy systems. A quick search 
| of the 'Net did not show any debian packages for these. The 
| install.packages messages said header or other files were missing, 
| suggesting path and related woes. Email with Duncan Murdoch (thanks!) 
| pointed the way with rgl and led to a fix for rJava in similar fashion. 
| It may save others some frustration to know my resolution. See below.
| 
| However, I do have a question which a brief rummage of r-project did not 
| answer. Where should information like this be put? My opinion is that it 


Maybe on the r-sig-debian list that is dedicated to Debian / Ubuntu and R?

| should go on the wiki, but possibly there is a better solution if we can 
| get the right messages into the package installers, though I recognize 
| the load that puts on maintainers.
| 
| Cheers, JN
| 
| Ubuntu Hardy rgl install fix:
| 
| The headers gl.h and glu.h are installed with the dev packages 
| libgl1-mesa-dev and libglu1-mesa-dev. So the fix is to run (in at 
| terminal as root)
| 
| apt-get install libgl1-mesa-dev

| apt-get install libglu1-mesa-dev

Yes, which is why the r-cran-rgl package (available in Debian for over five
years now, and hence in Ubuntu for probably 4 1/2) has the following
Build-Depends (with my manual indentation here):

  Build-Depends: debhelper (>= 5.0.0), r-base-dev (>= 2.8.1), cdbs, \
  libgl1-mesa-dev | libgl-dev, libglu1-mesa-dev | libglu-dev, \
  libpng12-dev, libx11-dev, libxt-dev, x11proto-core-dev 
 
| then

| R
| .
| install.packages(rgl)
| 
| etc.


Let's not forget the 'sudo apt-get install r-cran-rgl' alternative.
 
| Ubuntu Hardy rJava install fix:
| 
| Needed to get Sun JDK (not JRE)
| 
| Then add new

| ln -s /usr/java/jdkx/bin/java java
| and
| ln -s /usr/java/jdkx/bin/javac javac
| 
| where xx is the version information on the jdk directory name -- in 
| my case 1.6.0_13 (see below)
| 
| Then
| 
| R CMD javareconf
| 
| still fails to find the java compiler.
| 
| Seems $JAVA_HOME may not be defined.
| 
| Try

| export JAVA_HOME=/usr/java/jdk1.6.0_13/
| 
| Then (as root)
| 
| R CMD javareconf
| 
| seems to work.
| Then rJava installed OK. I was then able to install RWeka (my original 
| objective) and it seems to run OK.


Likewise, the r-cran-rjava package has 


   Build-Depends: debhelper (>= 7.0.0), r-base-dev (>= 2.8.1), cdbs, \
  openjdk-6-jdk, automake

and R is now configured for this Java version at build time.


Again, questions on the r-sig-debian list may have been of help.

Hope this helps,  Dirk




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] anova with means and SEMs (no raw data)

2009-03-27 Thread Martin Batholdy

hi,


I have only the means and the standard errors of those means for different
groups and different conditions (and the group sizes).


Is there a function that can compute an ANOVA from this
information?



thanks!
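For a one-way layout this is possible, because the between- and within-group sums of squares can be reconstructed from the group means, SEMs, and sizes. A hedged sketch (the function name and example numbers are invented; a factorial design would need the cell means and a little more bookkeeping):

```r
# One-way ANOVA from summary statistics: means m, standard errors sem, sizes n.
anova_from_summary <- function(m, sem, n) {
  vars <- sem^2 * n                    # recover group variances (sd^2 = sem^2 * n)
  N <- sum(n); k <- length(m)
  gm  <- sum(n * m) / N                # grand mean
  ssb <- sum(n * (m - gm)^2); dfb <- k - 1
  ssw <- sum((n - 1) * vars); dfw <- N - k
  F   <- (ssb / dfb) / (ssw / dfw)
  c(F = F, df1 = dfb, df2 = dfw, p = pf(F, dfb, dfw, lower.tail = FALSE))
}

anova_from_summary(m = c(10, 12, 15), sem = c(0.5, 0.6, 0.4), n = c(20, 20, 20))
```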

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] pgmm (Blundell-Bond) sample needed

2009-03-27 Thread Millo Giovanni
Dear Ivo,
please find below some answers to your pgmm-related questions.

##

Was: Message: 70
Date: Thu, 26 Mar 2009 21:39:19 +
From: ivo...@gmail.com
Subject: [R] pgmm (Blundell-Bond) sample needed
To: r-help r-h...@stat.math.ethz.ch
Message-ID: 0016361e8962dfdfd704660c7...@google.com
Content-Type: text/plain

Dear R Experts---

Sorry for all the questions yesterday and today. I am trying to use Yves 
Croissant's pgmm function in the plm package with Blundell-Bond moments. I  
have read the Blundell-Bond paper, and want to run the simplest model  
first, d[i,t] = a*d[i,t-1] + fixed[i] + u[i,t] . no third conditioning 
variables yet. the full set of moment conditions recommended for  
system-GMM, which is (T-1)*(T-2)/2+(T-3), in which the u's interact with 
all possible lagged y's and delta y's.

I believe that pgmm operates by demanding that firm (i) and year (t) be  
the first two columns in the data set.

 Almost correct: this is the easiest way. Otherwise you can supply data 
organized as you like, but then you have to specify what the index is. See 
vignette("plm"), § 4

library(plm)
NF=20; NT=10
d= data.frame( firm= rep(1:NF, each=NT), year= rep( 1:NT, NF),  
x=rnorm(NF*NT) );

# the following fails, because dynformula magic is required; learned this  
the hard way
# v=pgmm( x ~ lag(x), data=d, gmm.inst=~x, lag.gmm=c(2,99),  
transformation="ld" )

 The reason for 'dynformula magic' is that lags in panel data are only well 
defined in conjunction with the group and time indices; therefore in 'plm' lags 
(and first differences) are best supplied through a 'dynformula' interface 
inside a model. else you get the standard time-series lag, which is incorrect 
here.

formula= dynformula( x ~ 1, list(1)); # this creates x ~ lag(x)
v=pgmm( formula, data=d, gmm.inst=~x, lag.gmm=c(2,99), transformation="ld" )

Error in solve.default(suml(Vi)) :
system is computationally singular: reciprocal condition number =  
8.20734e-20

obviously, I am confused.

 You should not, as you yourself state that the full set of moment 
conditions recommended for  
system-GMM [...] is (T-1)*(T-2)/2+(T-3). If T=10 then you have the equivalent 
of 9*8/2+7 = 43 regressors (instruments). That's why N=20 is way too little. 
The original Arellano and Bond example in UKEmpl (which is actually called 
'EmplUK'!) has N=140, T=9. I already pointed this out in another r-help post, 
not many days ago (March 9th, 17:59).

 May I suggest you give a further look at Arellano's panel data book? This 
would probably clarify how the instruments are constructed (by the way, that's 
also what I am currently reading in my spare time). See also Greene, 
Econometric analysis, § 18.5 and the Z matrix in particular. (Yves Croissant 
has put this down nicely in the package vignette as well).

 when I execute the same command on the included  
UKEmpl data set, it works. however, my inputs would seem perfectly  
reasonable. I would hope that the procedure could produce a lag(x)  
coefficient estimate of around 0, and then call it a day.

 would be nice; but your troubles aren't over yet :^)

could someone please tell me how to instruct pgmm to just estimate this  
simplest of all BB models?

 OK, you found out by yourself. Just for the benefit of other list readers, 
I reproduce the lines you sent us by private email (comments are mine):
 lagformula= dynformula(x ~ 1, list(1)) 
 # reproduces x~lag(x, 1) in standard OLS parlance
 v=pgmm(lagformula, data=d, gmm.inst=~x, lag.gmm=c(1,99), transformation="ld" )
 # means the GMM-system estimator
 # where you use both levels and differences as instruments.

[My ultimate goal is to replicate what another author has run via xtabond2  
d ld, gmm(L.(d), lag(1 3)) robust in Stata; if you know the magic of  
moving this statement into pgmm syntax, I would be even more grateful.  
Right now, I am so stuck on square 1 that I do not know how to move towards  
figuring out where I ultimately need to go.]

 GMM are a tricky subject I still don't master. I'll try to figure out what 
both Stata and plm do with the instruments and let you know. 
 Anyway, the 'plm' equivalent of Stata's Robust option, which uses the 
Windmeijer correction if I'm not mistaken, is to specify a robust covariance 
via vcovHC().

 Now to your second message:

#

Was: Message: 82
Date: Thu, 26 Mar 2009 21:45:49 -0400
From: ivo welch ivo...@gmail.com
Subject: Re: [R] pgmm (blundell-bond) help needed
To: r-help r-h...@stat.math.ethz.ch
Message-ID:
50d1c22d0903261845m7d8b321fq97faab26542a...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

I have been playing with more examples, and I now know that with
larger NF's my example code actually produces a result, instead of a
singular matrix error.  interestingly, stata's xtabond2 command seems
ok with these sorts of data sets.  either R has more stringent
requirements, or stata is too casual.  


Re: [R] 'stretching' a binomial variable

2009-03-27 Thread Robert A LaBudde

At 06:49 AM 3/27/2009, imicola wrote:


Hi,

I'm carrying out some Bayesian analysis using a binomial response variable
(a proportion: 0 to 1), but most of my observations have a value of 0 and many
have very small values (e.g. 0.001).  I'm having trouble getting my MCMC
algorithm to converge, so I have decided to try normalising my response
variable to see if this helps.

I want it to stay between 0 and 1 but to have a larger range of values, or
just for them all to be slightly higher.

Does anyone know the best way to achieve this?  I could just add a value to
each observation (say 10 to increase the proportion a bit, but ensuring it
would still be between 0 and 1) - would that be ok?  Or is there a better
way to stretch the values up?

Sorry - I know it's not really an R-specific question, but I have never found
a forum with as many stats-literate people as this one :-)

Cheers - any advice much appreciated!

nicola


Work with events instead of proportions, and use a Poisson model.
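A hedged sketch of that suggestion, assuming each proportion came from a count of events out of a known number of trials (all names and numbers here are invented): model the counts with a Poisson GLM, using log(trials) as an offset so the coefficients describe the rate.

```r
set.seed(1)
trials <- rpois(50, 200)                  # number of trials behind each proportion
x      <- rnorm(50)                       # some covariate
events <- rbinom(50, trials, plogis(-4.6 + 0.3 * x))  # rare events

# Poisson regression on the counts; the offset converts counts back to rates.
fit <- glm(events ~ x, family = poisson, offset = log(trials))
coef(fit)
```

With raw counts and trials available, glm(cbind(events, trials - events) ~ x, family = binomial) is the other standard option.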


Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: r...@lcfltd.com
Least Cost Formulations, Ltd.URL: http://lcfltd.com/
824 Timberlake Drive Tel: 757-467-0954
Virginia Beach, VA 23464-3239Fax: 757-467-2947

Vere scire est per causas scire

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] loading and manipulating 10 data frames-simplified

2009-03-27 Thread jim holtman
Look at using a 'list' as obtained from 'lapply'

fileNames <- c('your files to be read')
Bin_main <- lapply(fileNames, function(.file){
    input <- read.csv(.file, header = FALSE)
    # all your calculations; e.g.,
    acres <- ...

    cbind(acres, parprob, ...)
})

Now look at the structure ('str(Bin_main)') and it should have 10 (or
how ever many files you have) elements with the data you want.

On Thu, Mar 26, 2009 at 5:25 PM, PDXRugger j_r...@hotmail.com wrote:

 I have to load 10 different data frames and then manipulate those 10 data
 frames but would like to do this in a more simplified code than what i am
 doing.  I have tried a couple of approaches but cannot get it to work
 correctly.

 So the initial (bulky) code is:

 #Bin 1
 #---

 #Loads bin data frame from csv files with acres and TAZ data
 Bin1_main <-
 read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin1_lookup.csv", header=FALSE);

 #Separates Acres data from main data and converts acres to square feet
 Bin1_Acres=Bin1_main[[1]]*43560

 #Separates TAZ data from main data
 Bin1_TAZ=Bin1_main[[2]]

 #Separates TAZ data from main data and converts acres to square feet
 Bin1_TAZvacant=Bin1_main[[3]]*43560

 #Sums each parcel acreage data of the bin
 Bin1Acres_sum=sum(Bin1_Acres)

 #Creates data frame of cumlative percentages of each parcel of bin
 Bin1_cumper=cumsum(Bin1_Acres/Bin1Acres_sum)

 #Calculates the probability of choosing particular parcel from bin
 Bin1_parprob=abs(1-Bin1_cumper)

 #Combines parcel acreage data and cumlative percentage data
 Bin1Main.data = cbind(Bin1_Acres,Bin1_parprob,Bin1_TAZ,Bin1_TAZvacant)


 #Bin 2
 #---

 #Loads bin data frame from csv files with acres and TAZ data
 Bin2_main <-
 read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin2_lookup.csv", header=FALSE);

 #Separates Acres data from main data and converts acres to square feet
 Bin2_Acres=Bin2_main[[1]]*43560

 #Separates TAZ data from main data
 Bin2_TAZ=Bin2_main[[2]]

 #Separates TAZ data from main data and converts acres to square feet
 Bin2_TAZvacant=Bin2_main[[3]]*43560

 #Sums each parcel acreage data of the bin
 Bin2Acres_sum=sum(Bin2_Acres)

 #Creates data frame of cumlative percentages of each parcel of bin
 Bin2_cumper=cumsum(Bin2_Acres/Bin2Acres_sum)

 #Calculates the probability of choosing particular parcel from bin
 Bin2_parprob=abs(1-Bin2_cumper)

 #Combines parcel acreage data and cumlative percentage data
 Bin2Main.data = cbind(Bin2_Acres,Bin2_parprob,Bin2_TAZ,Bin2_TAZvacant)

 #Bin 3
 #---

 #Loads bin data frame from csv files with acres and TAZ data
 Bin3_main <-
 read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin3_lookup.csv", header=FALSE);

 #Separates Acres data from main data and converts acres to square feet
 Bin3_Acres=Bin3_main[[1]]*43560

 #Separates TAZ data from main data
 Bin3_TAZ=Bin3_main[[2]]

 #Separates TAZ data from main data and converts acres to square feet
 Bin3_TAZvacant=Bin3_main[[3]]*43560

 #Sums each parcel acreage data of the bin
 Bin3Acres_sum=sum(Bin3_Acres)

 #Creates data frame of cumlative percentages of each parcel of bin
 Bin3_cumper=cumsum(Bin3_Acres/Bin3Acres_sum)

 #Calculates the probability of choosing particular parcel from bin
 Bin3_parprob=abs(1-Bin3_cumper)

 #Combines parcel acreage data and cumlative percentage data
 Bin3Main.data = cbind(Bin3_Acres,Bin3_parprob,Bin3_TAZ,Bin3_TAZvacant)

 #Bin 4
 #---

 #Loads bin data frame from csv files with acres and TAZ data
 Bin4_main <-
 read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin4_lookup.csv", header=FALSE);

 #Separates Acres data from main data and converts acres to square feet
 Bin4_Acres=Bin4_main[[1]]*43560

 #Separates TAZ data from main data
 Bin4_TAZ=Bin4_main[[2]]

 #Separates TAZ data from main data and converts acres to square feet
 Bin4_TAZvacant=Bin4_main[[3]]*43560

 #Sums each parcel acreage data of the bin
 Bin4Acres_sum=sum(Bin4_Acres)

 #Creates data frame of cumlative percentages of each parcel of bin
 Bin4_cumper=cumsum(Bin4_Acres/Bin4Acres_sum)

 #Calculates the probability of choosing particular parcel from bin
 Bin4_parprob=abs(1-Bin4_cumper)

 #Combines parcel acreage data and cumlative percentage data
 Bin4Main.data = cbind(Bin4_Acres,Bin4_parprob,Bin4_TAZ,Bin4_TAZvacant)

 #Bin 5
 #---

 #Loads bin data frame from csv files with acres and TAZ data
 Bin5_main <-
 read.csv(file="I:/Research/Samba/urb_transport_modeling/LUSDR/Workspace/BizLandPrice/data/Bin_lookup_values/Bin5_lookup.csv", header=FALSE);

 #Separates Acres data from main data and converts acres to square feet
 Bin5_Acres=Bin5_main[[1]]*43560

 #Separates TAZ data from main data
 Bin5_TAZ=Bin5_main[[2]]

 #Separates TAZ data from main data and converts acres to square feet
 Bin5_TAZvacant=Bin5_main[[3]]*43560

 #Sums 

Re: [R] Snow Parallel R: makeCluster with more nodes than available

2009-03-27 Thread Uwe Ligges



Ubuntu Diego wrote:

Hi all,
I would like to know what would happen if, using snow, I create a cluster
of size 50, for example using makeCluster(50, type='SOCK'), on a machine
with 2 cores and then run a function. Does snow run 25 functions on
each of my 2 real processors, or does it just run all 50 functions on one
processor?


It will run the 50 in parallel; it is not advisable to do so on a 
machine with 2 cores - a slowdown due to administrative overhead is 
expected.
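For completeness, a small sketch with the parallel package (which in current R absorbed snow's functionality) that sizes the cluster to the hardware instead of oversubscribing it:

```r
library(parallel)

workers <- min(2, detectCores())   # don't start more workers than cores
cl <- makeCluster(workers)         # socket cluster, like snow's "SOCK" type
res <- parSapply(cl, 1:10, function(i) i^2)
stopCluster(cl)
res                                # squares of 1..10
```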


Uwe Ligges




Thanks.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] color vectors other than gray()

2009-03-27 Thread Paulo E. Cardoso
I'm trying to create a graph where different cells of a grid (a shapefile)
are painted on a color shade scale; the easiest way to get one is to use
gray().

Can I somehow get a vector (gradient) of colors by some method other than
gray()?

I'm doing this until now

 

  quad_N_sp <-
merge(sp_dist[sp_dist$sp==splist[i],], grelha_ID, by.x="quad",
by.y="quadricula", all.y=T)

  quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

  quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

  paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

  win.graph(4,5)

  plot(grelha, ol="grey80", #! Plot with sampling grid and abundance gradient

  fg=paleta,

  cex.lab=0.7,

  cex.axis=0.7,

  cex.main=0.7,

  xlab="Coord X",

  ylab="Coord Y",

  main=paste("Espécie: ", splist[i]),

  xlim=c(21,24)

  )

  col_lab <- c(max(quad_N_sp$x), min(quad_N_sp$x)) #! Vector with the min and
max of the number of individuals observed

  color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),
gradient="y", cex=0.6) #! Legend

  text(245300,130500,"Nº Indivíduos",cex=0.6)

  plot(blocos, ol="grey40", fg=NA, add=T)

 

I'd like to replace the grey shade by other colors.
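Yes: colorRampPalette() builds exactly such a gradient from any anchor colors, so the gray() call can be swapped for it. A small sketch with invented values:

```r
# A gradient function interpolating from white to red (any colors work).
pal_fun <- colorRampPalette(c("white", "red"))

x <- c(0, 2, 5, 10)                         # e.g. counts per grid cell
paleta <- pal_fun(100)[1 + round(99 * x / max(x))]
paleta                                      # one hex color per cell
```

Built-in palettes such as heat.colors(n) or terrain.colors(n) are drop-in alternatives when you don't need custom anchor colors.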

 

Thanks in advance



Paulo E. Cardoso

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Some install package fixes for Ubuntu Hardy

2009-03-27 Thread Dirk Eddelbuettel

On 27 March 2009 at 08:59, John C Nash wrote:
| Thanks Dirk.
| 
| I should have noted the debian sig. My question -- as Dirk recognized -- 
| was where to let folk know. Part of the issue is that Ubuntu users who are
| new to R may not find this stuff. 

No disrespect --- but we feel that it is easier for new (and even
experienced) users to install the r-cran-* binaries, not compile from source.
That is why we have been providing binary packages for all these years.

| workaround may also be a pointer for those with other Linux distros.
| 
| There is now another issue that I don't find r-cran-java as an available 
| package when I run the apt-cache search. (I do find r-cran-rgl; rgl popped up 
as a 
| dependency in the install.packages, but I should have looked for the debian
| package.) 

It is called r-cran-rjava, but as it was added to Debian only this winter it
has not yet made it into Ubuntu 8.10. I expect it to be in 9.04 which will be
released next month.

Dirk

| I'll post my r-cran-java query to the r-sig-debian.
| 
| JN
| 
| Dirk Eddelbuettel wrote:
|  On 26 March 2009 at 09:45, John C Nash wrote:
|  | I encountered some failures in using install.packages() to install rgl 
|  | and rJava in some of my (multiple) Ubuntu Hardy systems. A quick search 
|  | of the 'Net did not show any debian packages for these. The 
|  | install.packages messages said header or other files were missing, 
|  | suggesting path and related woes. Email with Duncan Murdoch (thanks!) 
|  | pointed the way with rgl and led to a fix for rJava in similar fashion. 
|  | It may save others some frustration to know my resolution. See below.
|  | 
|  | However, I do have a question which a brief rummage of r-project did not 
|  | answer. Where should information like this be put? My opinion is that it 
| 
|  Maybe on the r-sig-debian list that is dedicated to Debian / Ubuntu and R?
| 
|  | should go on the wiki, but possibly there is a better solution if we can 
|  | get the right messages into the package installers, though I recognize 
|  | the load that puts on maintainers.
|  | 
|  | Cheers, JN
|  | 
|  | Ubuntu Hardy rgl install fix:
|  | 
|  | The headers gl.h and glu.h are installed with the dev packages 
|  | libgl1-mesa-dev and libglu1-mesa-dev. So the fix is to run (in at 
|  | terminal as root)
|  | 
|  | apt-get install libgl1-mesa-dev
|  | apt-get install libglu1-mesa-dev
| 
|  Yes, which is why the r-cran-rgl package (available in Debian for over five
|  years now, and hence in Ubuntu for probably 4 1/2) has the following
|  Build-Depends (with my manual indentation here):
| 
|    Build-Depends: debhelper (>= 5.0.0), r-base-dev (>= 2.8.1), cdbs, \
|    libgl1-mesa-dev | libgl-dev, libglu1-mesa-dev | libglu-dev, \
|    libpng12-dev, libx11-dev, libxt-dev, x11proto-core-dev 
|   
|  | then
|  | R
|  | .
|  | install.packages("rgl")
|  | 
|  | etc.
| 
|  Let's not forget the 'sudo apt-get install r-cran-rgl' alternative.
|   
|  | Ubuntu Hardy rJava install fix:
|  | 
|  | Needed to get Sun JDK (not JRE)
|  | 
|  | Then add new
|  | ln -s /usr/java/jdkxx/bin/java java
|  | and
|  | ln -s /usr/java/jdkxx/bin/javac javac
|  | 
|  | where xx is the version information on the jdk directory name -- in 
|  | my case 1.6.0_13 (see below)
|  | 
|  | Then
|  | 
|  | R CMD javareconf
|  | 
|  | still fails to find the java compiler.
|  | 
|  | Seems $JAVA_HOME may not be defined.
|  | 
|  | Try
|  | export JAVA_HOME=/usr/java/jdk1.6.0_13/
|  | 
|  | Then (as root)
|  | 
|  | R CMD javareconf
|  | 
|  | seems to work.
|  | Then rJava installed OK. I was then able to install RWeka (my original 
|  | objective) and it seems to run OK.
| 
|  Likewise, the r-cran-rjava package has 
| 
| Build-Depends: debhelper (>= 7.0.0), r-base-dev (>= 2.8.1), cdbs, \
|    openjdk-6-jdk, automake
| 
|  and R is now configured for this Java version at build time.
| 
| 
|  Again, questions on the r-sig-debian list may have been of help.
| 
|  Hope this helps,  Dirk
| 
| 
| 
| __
| R-help@r-project.org mailing list
| https://stat.ethz.ch/mailman/listinfo/r-help
| PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
| and provide commented, minimal, self-contained, reproducible code.

-- 
Three out of two people have difficulties with fractions.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Snow Parallel R: makeCluster with more nodes than available

2009-03-27 Thread Dirk Eddelbuettel

On 27 March 2009 at 13:19, Uwe Ligges wrote:
| 
| Ubuntu Diego wrote:
|  Hi all,
|  I would like to know what would happen if using snow I create a cluster
|  of size 50, for example using makeCluster(50,type='SOCK') on a machine
|  with 2 Cores and run a function. Does snow run 25 and 25 functions on
|  each of my 2 real processors or it just run 50 functions in one
|  processor ?
| 
| It will run the 50 in parallel; it is not advisable to do so on a 
| machine with 2 cores - a slowdown due to administrative overhead is 
| expected.

Moreover, those 50 sessions have to share the existing memory allocation --
which is hardly likely to be large enough.

Dirk
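The advice above can be sketched in a few lines; a minimal example (my addition, not from the thread), assuming the snow package is installed, sizes the cluster to the two physical cores and lets snow recycle the tasks over the workers:

```r
# Match the cluster size to the physical cores (2 here), not to the
# number of tasks; clusterApply recycles tasks over the workers.
library(snow)                           # assumes snow is installed
cl <- makeCluster(2, type = "SOCK")     # 2 workers for a 2-core machine
res <- clusterApply(cl, 1:4, function(x) x^2)
stopCluster(cl)
unlist(res)                             # 1 4 9 16
```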
 
| Uwe Ligges
| 
| 
| 
|  Thanks.
|  
|  __
|  R-help@r-project.org mailing list
|  https://stat.ethz.ch/mailman/listinfo/r-help
|  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
|  and provide commented, minimal, self-contained, reproducible code.
| 
| __
| R-help@r-project.org mailing list
| https://stat.ethz.ch/mailman/listinfo/r-help
| PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
| and provide commented, minimal, self-contained, reproducible code.

-- 
Three out of two people have difficulties with fractions.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] adding matrices with common column names

2009-03-27 Thread Murali.MENON
folks,
 
if i have three matrices, a, b, cc with some colnames in common, and i
want to create a matrix which consists of the common columns added up,
and the other columns tacked on, what's a good way to do it? i've got
the following roundabout code for two matrices, but if the number of
matrices increases, then i'm a bit stymied.
 
 a <- matrix(1:20,ncol=4); colnames(a) <- c("a","b","c","d")
 b <- matrix(1:20,ncol=4); colnames(b) <- c("b","c","d","e")
 cbind(a[,!(colnames(a) %in% colnames(b)), drop = FALSE],
a[,intersect(colnames(a),colnames(b))] +
b[,intersect(colnames(a),colnames(b)), drop = FALSE],
b[,!(colnames(b) %in% colnames(a)), drop = FALSE])
 
 a  b  c  d  e
[1,] 1  7 17 27 16
[2,] 2  9 19 29 17
[3,] 3 11 21 31 18
[4,] 4 13 23 33 19
[5,] 5 15 25 35 20
 
now, what if i had a matrix cc? i want to perform the above operation on
all three matrices a, b, cc.
 
 cc <- matrix(1:10,ncol=2); colnames(cc) <- c("e","f")

i need to end up with:

 a  b  c  d  e  f
[1,] 1  7 17 27 17  6
[2,] 2  9 19 29 19  7
[3,] 3 11 21 31 21  8
[4,] 4 13 23 33 23  9
[5,] 5 15 25 35 25 10

and, in general, with multiple matrices with intersecting colnames?

thanks,

murali

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread baptiste auguie

?colorRamp

Hope this helps,

baptiste

On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:

I'm trying to create a graph where different cells of a grid (a shapefile)
will be painted with a color shade scale, where the easiest way is to use
gray().

Can I somehow get a vector (gradient) of colors, a vector of colors with
other methods but gray()?

I'm doing this until now



 quad_N_sp <-
merge(sp_dist[sp_dist$sp==splist[i],],grelha_ID,by.x="quad",by.y="quadricula"
,all.y=T,)

 quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

 quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

 paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

 win.graph(4,5)

 plot(grelha,ol="grey80", #! Plot with the sampling grid and abundance gradient

 fg=paleta,

 cex.lab=0.7,

 cex.axis=0.7,

 cex.main=0.7,

 xlab="Coord X",

 ylab="Coord Y",

 main=paste("Espécie: ",splist[i]),

 xlim=c(21,24)

 )

 col_lab <- c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector with the min and
max limits of the number of individuals observed

color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),gradient="y",cex=0.6) #! Legend

 text(245300,130500,"Nº Indivíduos",cex=0.6)

 plot(blocos,ol="grey40",fg=NA,add=T)



I'd like to replace the grey shade by other colors.



Thanks in advance



Paulo E. Cardoso




[[alternative HTML version deleted]]



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] anova with means and SEMs (no raw data)

2009-03-27 Thread Peter Dalgaard
Martin Batholdy wrote:
 hi,
 
 
 I have only the means and standard errors of this means of different
 groups and different conditions (and the group sizes).
 
 Is there a function which can compute me an anova out of this information?
 

There's an example in my book (Section 12.4), or look at fake.trypsin.R
in the rawdata subdirectory of the ISwR package.

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - (p.dalga...@biostat.ku.dk)  FAX: (+45) 35327907
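Peter points to a worked example; for the record, a one-way ANOVA table can also be rebuilt by hand from means, SEMs, and group sizes. A sketch with made-up numbers (my addition, not the book's example):

```r
# One-way ANOVA from summary statistics (hypothetical numbers).
# SEM = s/sqrt(n), so each group variance is s2 = se^2 * n.
m  <- c(10, 12, 15)          # group means
se <- c(1.0, 1.2, 0.9)       # standard errors of the means
n  <- c(8, 8, 8)             # group sizes
s2 <- se^2 * n
N  <- sum(n); k <- length(m)
gm <- sum(n * m) / N                        # grand mean
ssb <- sum(n * (m - gm)^2)                  # between-group sum of squares
ssw <- sum((n - 1) * s2)                    # within-group sum of squares
Fstat <- (ssb / (k - 1)) / (ssw / (N - k))
pval  <- pf(Fstat, k - 1, N - k, lower.tail = FALSE)
c(F = Fstat, p = pval)
```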

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread Romain Francois

See ?colorRampPalette and ?colorRamp

colorRampPalette( c("blue", "white", "red") )(100)
colorRamp( c("blue", "white", "red") )( 0:10 / 10 )
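To color actual data with such a palette, one sketch (my addition, with hypothetical values) bins the values into the palette's range:

```r
# Map numeric values in [0, 1] onto a blue-white-red gradient (sketch).
pal <- colorRampPalette(c("blue", "white", "red"))(100)
x <- c(0.1, 0.5, 0.9)                 # values already scaled to [0, 1]
idx <- pmax(1, ceiling(x * 100))      # bin each value into 1..100
cols <- pal[idx]
cols                                  # one hex color per value
```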

Romain

Paulo E. Cardoso wrote:

I'm trying to create a graph where different cells of a grid (a shapefile)
will be painted with a color shade scale, where the easiest way is to use
gray().

Can I somehow get a vector (gradient) of colors, a vector of colors with
other methods but gray()?

I'm doing this until now

 


  quad_N_sp <-
merge(sp_dist[sp_dist$sp==splist[i],],grelha_ID,by.x="quad",by.y="quadricula"
,all.y=T,)

  quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

  quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

  paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

  win.graph(4,5)

  plot(grelha,ol="grey80", #! Plot with the sampling grid and abundance gradient

  fg=paleta,

  cex.lab=0.7,

  cex.axis=0.7,

  cex.main=0.7,

  xlab="Coord X",

  ylab="Coord Y",

  main=paste("Espécie: ",splist[i]),

  xlim=c(21,24)

  )

  col_lab <- c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector with the min and
max limits of the number of individuals observed

color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),gradient="y",cex=0.6) #! Legend

  text(245300,130500,"Nº Indivíduos",cex=0.6)

  plot(blocos,ol="grey40",fg=NA,add=T)

 


I'd like to replace the grey shade by other colors.

 


Thanks in advance



Paulo E. Cardoso

 



--
Romain Francois
Independent R Consultant
+33(0) 6 28 91 30 30
http://romainfrancois.blog.free.fr

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread Tsjerk Wassenaar
Hi,

Have a look at:

?rainbow
?rgb
?heatmap

In my opinion this would've likely popped up with just a little effort
of searching. In fact, the help of grey() (?grey) already gives
pointers to the other color functions. Please show that you at least
have tried to find answers before posting questions on a user list.

Tsjerk

On Fri, Mar 27, 2009 at 2:16 PM, Paulo E. Cardoso pecard...@netcabo.pt wrote:
 I'm trying to create a graph where different cells of a grid (a shapefile)
 will be painted with a color shade scale, where the easiest way is to use
 gray().

 Can I somehow get a vector (gradient) of colors, a vector of colors with
 other methods but gray()?

 I'm doing this until now



  quad_N_sp <-
 merge(sp_dist[sp_dist$sp==splist[i],],grelha_ID,by.x="quad",by.y="quadricula"
 ,all.y=T,)

  quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

  quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

  paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

  win.graph(4,5)

  plot(grelha,ol="grey80", #! Plot with the sampling grid and abundance gradient

  fg=paleta,

  cex.lab=0.7,

  cex.axis=0.7,

  cex.main=0.7,

  xlab="Coord X",

  ylab="Coord Y",

  main=paste("Espécie: ",splist[i]),

  xlim=c(21,24)

  )

  col_lab <- c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector with the min and
 max limits of the number of individuals observed

 color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),gradient="y",cex=0.6) #! Legend

  text(245300,130500,"Nº Indivíduos",cex=0.6)

  plot(blocos,ol="grey40",fg=NA,add=T)



 I'd like to replace the grey shade by other colors.



 Thanks in advance

 

 Paulo E. Cardoso




        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding matrices with common column names

2009-03-27 Thread Dimitris Rizopoulos

one approach is:

a <- matrix(1:20,ncol=4); colnames(a) <- c("a","b","c","d")
b <- matrix(1:20,ncol=4); colnames(b) <- c("b","c","d","e")
cc <- matrix(1:10,ncol=2); colnames(cc) <- c("e","f")

f <- function (...) {
mat.lis <- list(...)
unq.cnams <- unique(unlist(lapply(mat.lis, colnames)))
out <- matrix(0, nrow(mat.lis[[1]]), length(unq.cnams),
dimnames = list(NULL, unq.cnams))
for (i in seq_along(mat.lis)) {
mm <- mat.lis[[i]]
out[, colnames(mm)] <- out[, colnames(mm)] + mm
}
out
}

f(a, b)
f(a, cc)
f(a, b, cc)
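An equivalent sketch using Reduce (my addition, not from the thread; it gives the same result as the function above) pads each matrix with zero columns for the names it lacks, then sums:

```r
# Same matrices as in the thread.
a  <- matrix(1:20, ncol = 4); colnames(a)  <- c("a", "b", "c", "d")
b  <- matrix(1:20, ncol = 4); colnames(b)  <- c("b", "c", "d", "e")
cc <- matrix(1:10, ncol = 2); colnames(cc) <- c("e", "f")

add_mats <- function(...) {
  mats <- list(...)
  cols <- unique(unlist(lapply(mats, colnames)))
  # pad each matrix with zero columns for missing names, then sum them all
  Reduce(`+`, lapply(mats, function(m) {
    out <- matrix(0, nrow(m), length(cols), dimnames = list(NULL, cols))
    out[, colnames(m)] <- m
    out
  }))
}
add_mats(a, b, cc)
```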


I hope it helps.

Best,
Dimitris


murali.me...@fortisinvestments.com wrote:

folks,
 
if i have three matrices, a, b, cc with some colnames in common, and i

want to create a matrix which consists of the common columns added up,
and the other columns tacked on, what's a good way to do it? i've got
the following roundabout code for two matrices, but if the number of
matrices increases, then i'm a bit stymied.
 
a <- matrix(1:20,ncol=4); colnames(a) <- c("a","b","c","d")
b <- matrix(1:20,ncol=4); colnames(b) <- c("b","c","d","e")

cbind(a[,!(colnames(a) %in% colnames(b)), drop = FALSE],

a[,intersect(colnames(a),colnames(b))] +
b[,intersect(colnames(a),colnames(b)), drop = FALSE],
b[,!(colnames(b) %in% colnames(a)), drop = FALSE])
 
 a  b  c  d  e

[1,] 1  7 17 27 16
[2,] 2  9 19 29 17
[3,] 3 11 21 31 18
[4,] 4 13 23 33 19
[5,] 5 15 25 35 20
 
now, what if i had a matrix cc? i want to perform the above operation on

all three matrices a, b, cc.
 

cc <- matrix(1:10,ncol=2); colnames(cc) <- c("e","f")


i need to end up with:

 a  b  c  d  e  f
[1,] 1  7 17 27 17  6
[2,] 2  9 19 29 19  7
[3,] 3 11 21 31 21  8
[4,] 4 13 23 33 23  9
[5,] 5 15 25 35 25 10

and, in general, with multiple matrices with intersecting colnames?

thanks,

murali

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center

Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] 'stretching' a binomial variable

2009-03-27 Thread Duncan Murdoch

On 3/27/2009 7:49 AM, imicola wrote:

Hi,

I'm carrying out some Bayesian analysis using a binomial response variable
(proportion: 0 to 1), but most of my observations have a value of 0 and many
have very small values (i.e. 0.001).  I'm having trouble getting my MCMC
algorithm to converge, so I have decided to try normalising my response
variable to see if this helps.


It seems to me that the problem in a situation like this is with the 
algorithm, not with the data.  Can't you modify it to get better 
convergence?  For example, set your target to be the square root of your 
posterior (or some other power between 0 and 1); this is more diffuse, 
so it's easier to sample from.  Then use importance sampling to reweight 
the sample.


Duncan Murdoch
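A toy version of this suggestion (my addition, not from the thread), using a stand-in N(0,1) "posterior" so the tempered density can be sampled exactly: sample from p^a (flatter for a < 1), then reweight by p/q.

```r
# Importance-sampling sketch: draw from the tempered density p^a,
# then reweight to recover expectations under p itself.
set.seed(1)
a <- 0.5
q_sd <- sqrt(1 / a)                  # N(0,1)^a is proportional to N(0, 1/a)
draws <- rnorm(1e5, 0, q_sd)         # exact draws from the tempered density
log_w <- dnorm(draws, 0, 1, log = TRUE) - dnorm(draws, 0, q_sd, log = TRUE)
w <- exp(log_w - max(log_w))         # stabilized importance weights p/q
est <- sum(w * draws) / sum(w)       # weighted posterior mean
est                                  # close to the target's mean of 0
```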



I want it to stay between 0 and 1 but to have a larger range of values, or
just for them all to be slightly higher.

Does anyone know the best way to acheive this?  I could just add a value to
each observation (say 10 to increase the proportion a bit, but ensuring it
would still be between 0 and 1) - would that be ok?  Or is there a better
way to stretch the values up?

Sorry - I know it's not really an R-specific question, but I have never found
a forum with as many stats-literate people as this one :-)

Cheers - any advice much appreciated!

nicola


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adding matrices with common column names

2009-03-27 Thread Nutter, Benjamin
Shucks, Dimitris beat me to it.  And his code is a bit more elegant than
mine.  But since I did the work I may as well post it, right?

This version incorporates a couple of error checks to make sure all your
arguments are matrices with the same number of rows.

add.by.name <- function(...){
  args <- list(...)
  
  mat.test <- sapply(args,is.matrix)
  if(FALSE %in% mat.test) stop("All arguments must be matrices")

  mat.row <- unique(sapply(args,nrow))
  if(length(mat.row)>1) stop("All matrices must have the same number of
rows")
  
  all.names <- unique(as.vector(sapply(args,colnames)))
  
  sum.mat <- matrix(0,nrow=mat.row,ncol=length(all.names))
  colnames(sum.mat) <- all.names

  for(i in 1:length(args)){
    tmp <- args[[i]]
    sum.mat[,colnames(tmp)] <- sum.mat[,colnames(tmp)] + tmp
  }

  return(sum.mat)
}

m1 <- matrix(1:20,ncol=4); colnames(m1) <- c("a","b","c","d")
m2 <- matrix(1:20,ncol=4); colnames(m2) <- c("b","c","d","e")
m3 <- matrix(1:20,ncol=4); colnames(m3) <- c("a","b","d","e")

add.by.name(m1,m2,m3)



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of murali.me...@fortisinvestments.com
Sent: Friday, March 27, 2009 9:25 AM
To: r-help@r-project.org
Subject: [R] adding matrices with common column names

folks,
 
if i have three matrices, a, b, cc with some colnames in common, and i
want to create a matrix which consists of the common columns added up,
and the other columns tacked on, what's a good way to do it? i've got
the following roundabout code for two matrices, but if the number of
matrices increases, then i'm a bit stymied.
 
 a <- matrix(1:20,ncol=4); colnames(a) <- c("a","b","c","d")
 b <- matrix(1:20,ncol=4); colnames(b) <- c("b","c","d","e")
 cbind(a[,!(colnames(a) %in% colnames(b)), drop = FALSE],
a[,intersect(colnames(a),colnames(b))] +
b[,intersect(colnames(a),colnames(b)), drop = FALSE],
b[,!(colnames(b) %in% colnames(a)), drop = FALSE])
 
 a  b  c  d  e
[1,] 1  7 17 27 16
[2,] 2  9 19 29 17
[3,] 3 11 21 31 18
[4,] 4 13 23 33 19
[5,] 5 15 25 35 20
 
now, what if i had a matrix cc? i want to perform the above operation on
all three matrices a, b, cc.
 
 cc <- matrix(1:10,ncol=2); colnames(cc) <- c("e","f")

i need to end up with:

 a  b  c  d  e  f
[1,] 1  7 17 27 17  6
[2,] 2  9 19 29 19  7
[3,] 3 11 21 31 21  8
[4,] 4 13 23 33 23  9
[5,] 5 15 25 35 25 10

and, in general, with multiple matrices with intersecting colnames?

thanks,

murali

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] 3D PLOT

2009-03-27 Thread bastian2507hk
Hello,

I would like to create a 3D plot with the following data formats:

a <- 1:100

b <- 1:100

c <- matrix(, 100, 100)

i.e.

c(i,j) = f ( a(i) , b(j) )

each of the 10,000 elements i,j in matrix c is a function of a(i) and
b(j). I would like to have a,b on the x and z axis and c on the y-axis. 

Does anybody have an idea how to accomplish that? Thanks in advance.

Regards

BO









__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Competing risks Kalbfleisch Prentice method

2009-03-27 Thread Terry Therneau
Ravi's last note finished with
  I am wondering why Terry Therneau's survival package doesn't
  have this option.  

  The short answer is that there are only so many hours in a day.  

  I've recently moved the code base from an internal Mayo repository to 
R-forge, 
one long term goal with this is to broaden the developer base to n>2 (me and 
Thomas Lumley).  
  
  A longer statistical answer:
  
  I'm not sure if the "this" of Ravi's question is a. smoothed hazards, b. the 
KP cumulative incidence, or c. the Fine & Gray model.
  
  b. I like the CI model and am using it more.  We also have local code. The 
latest version of survival (on rforge, likely in the next default R release) 
has 
added simple CI curves to the survfit function.  Adding code for survfit on Cox 
models is on the todo list.  But -- this release also fixes up survfit.coxph to 
handle weighted Cox models and that was on my list for approx 10 years, i.e., 
don't hold your breath.  I don't release something until it also has a set of 
worked out test cases to add to the 'tests' directory.
  
  a. smoothed hazards.  For the case at hand I don't see any particular 
advantage of this.  On the other hand, I often would like to display hazard 
functions instead of CI functions for Cox models; with time dependent 
covariates 
I don't think a survival curve makes sense.  But I haven't had the time to 
think 
through exactly which methods should be added.
  
  c. Fine & Gray model, i.e., where covariates have a direct influence on the 
competing risk.  I find the model completely untenable from a biologic point of 
view, so have no interest in adding it.  (Due to finite time, everything in the 
survival package is code that I needed for an analysis; medical research is 
what 
pays my salary.)  Assume that I have competing processes/risks, say progression 
of a tumor and heart disease;  I expect that the tumor process pays no 
attention 
whatsoever to what is going on in the heart.  But this is necessary if 
type=squamous is modeled as an absolute beta=__ increase in the CI for 
cancer. 
 The squamous cells need to step up the pace of invasion if heart failure 
threatens, like jockeys in a horse race. 
  
   Terry T.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading in files with variable parts to names

2009-03-27 Thread Steve Murray

Dear all,

Thanks for the help in the previous posts. I've considered each one and have 
nearly managed to get it working. The structure of the filelist being produced 
is correct, except for a single space which I can't seem to eradicate! This is 
my amended code, followed by the first twelve rows of the output (it really 
goes up to 120 rows).

filelist <- paste("C:\\Documents and 
Settings\\Data\\comp_runoff_hd_", do.call(paste, expand.grid(year = 
sprintf("%04d", seq(1986,1995)), month = sprintf("%02d",1:12))), ".asc", sep="")


filelist

 [1] C:\\Documents and Settings\\Data\\comp1986 01.asc
 [2] C:\\Documents and Settings\\Data\\comp1987 01.asc
 [3] C:\\Documents and Settings\\Data\\comp1988 01.asc
 [4] C:\\Documents and Settings\\Data\\comp1989 01.asc
 [5] C:\\Documents and Settings\\Data\\comp1990 01.asc
 [6] C:\\Documents and Settings\\Data\\comp1991 01.asc
 [7] C:\\Documents and Settings\\Data\\comp1992 01.asc
 [8] C:\\Documents and Settings\\Data\\comp1993 01.asc
 [9] C:\\Documents and Settings\\Data\\comp1994 01.asc
 [10] C:\\Documents and Settings\\Data\\comp1995 01.asc
 [11] C:\\Documents and Settings\\Data\\comp1986 02.asc
 [12] C:\\Documents and Settings\\Data\\comp1987 02.asc


I've tried inserting sep="" after the 'month=sprintf("%02d",1:12)' but this 
doesn't appear to solve the problem - in fact it doesn't change the output at 
all...!

Any help would be much appreciated,

Steve
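The stray space most likely comes from paste()'s default sep = " " applied inside do.call over the expand.grid columns; a sketch of a fix (my suggestion, not from the thread) passes sep = "" along with the columns:

```r
# do.call(paste, ...) needs sep = "" supplied as one of the arguments;
# c(grid, sep = "") appends it to the list of columns being pasted.
grid <- expand.grid(year  = sprintf("%04d", 1986:1995),
                    month = sprintf("%02d", 1:12))
filelist <- paste("C:\\Documents and Settings\\Data\\comp_runoff_hd_",
                  do.call(paste, c(grid, sep = "")),
                  ".asc", sep = "")
head(filelist, 2)
```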

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] snow Error.

2009-03-27 Thread jgarcia
Hello,

I have a program that used to run well in October; it uses the snow library.
Since then, one change has occurred (the snow library has been updated) and
another could have occurred (I've inadvertently modified something).

Anyway, now when I make the call:

parallel.model.results <- clusterApply(cl,processors.struct,MCexe)

 exactly as I used to do, where MCexe is my function and processors.struct
is a list containing everything required by MCexe, I obtain the following
error:

Error in checkForRemoteErrors(val) :
  2 nodes produced errors; first error: incorrect number of dimensions

Please, do you have any clue about what could be the error?

Best regards,

Javier García-Pintado

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] 3D PLOT

2009-03-27 Thread Duncan Murdoch

On 3/27/2009 9:57 AM, bastian250...@freenet.de wrote:

Hello,

I would like to create a 3D plot with the following data formats:

a <- 1:100

b <- 1:100

c <- matrix(, 100, 100)

i.e.

c(i,j) = f ( a(i) , b(j) )

each of the 10,000 elements i,j in matrix c is a function of a(i) and
b(j). I would like to have a,b on the x and z axis and c on the y-axis. 


Does anybody have an idea how to accomplish that? Thanks in advance.



persp, contour, image, wireframe, contourplot, etc. (in graphics and 
lattice); persp3d (in rgl).


Duncan Murdoch
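For the setup described in the question, outer() builds the matrix so that each element is f(a(i), b(j)); a sketch with a made-up f (my addition, not from the thread):

```r
# Build cc with outer() so that cc[i, j] = f(a[i], b[j]), then draw a
# perspective surface; `cc` avoids masking base::c, and f is illustrative.
a <- 1:100
b <- 1:100
f <- function(x, y) sin(x / 10) * cos(y / 10)
cc <- outer(a, b, f)
persp(a, b, cc, theta = 30, phi = 25,
      xlab = "a", ylab = "b", zlab = "c")
```

wireframe() in lattice and persp3d() in rgl take the same x, y, z inputs.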

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sweave-output causes error-message in pdflatex

2009-03-27 Thread Gerrit Voigt

Dear list,
LaTeX/Sweave has trouble processing Sweave output coming from the 
summary command of a linear model.

summary(lmRub)
The output line causing the trouble looks in R like this
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

In my Sweaved TeX file that line looks like this:
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (actually 
in the editor the quotation signs are replaced by bars, but they got 
lost through copy & paste. I don't know if that says anything about my 
problem.)


In the error message produced through pdflatex, the quotation signs 
reappear.

Latex error-message:
! Package inputenc Error: Keyboard character used is undefined

(inputenc) in inputencoding `Latin1'.

See the inputenc package documentation for explanation.

Type H <return> for immediate help.

...

l.465 ...*’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

You need to provide a definition with \DeclareInputText

or \DeclareInputMath before using this key.


I hope somebody knows how I can prevent that error message. Thanks in 
advance.


Gerrit
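One common workaround (a suggestion, not from the thread) is to stop R from emitting the directional quotes in the first place, since the significance-codes legend is built with sQuote():

```r
# In the first chunk of the .Rnw file: ask R for plain ASCII quotes,
# so the significance-codes legend no longer contains directional quotes.
options(useFancyQuotes = FALSE)
```

Alternatively, declaring an input encoding that defines those characters, e.g. \usepackage[utf8]{inputenc} in the preamble, also avoids the inputenc error.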

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R 2.8.1 and 2.9 alpha crash when running survest of Design package

2009-03-27 Thread Terry Therneau
A couple additions to Thomas's message.

  The 'survest' function in Design directly called C routines in the survival 
package.  The argument list to the routines changed due to the addition of 
weights; calling a C routine with the wrong arguments is one of the more 
reliable ways to crash a program.  The simplest (short term) solution is to use 
survfit for your curves rather than survest.  Frank Harrell has been aware of 
the issue for several weeks and is working hard on solving it.  The simple fix 
is a few minutes, but he's thinking about how to avoid any future problems.  
The 
C routines in survival change arguments VERY rarely, but directly calling the 
routines of another package is considered dangerous in general.
  
  Most breakage was less severe.  For instance there were a couple of errors in 
the PBC data set.  I fixed these, and also replaced all the 999 codes with NA 
to make it easier to use.  Some other packages use this data.  (My name is on 
most of the PBC papers and I have the master PBC data with all labs, patient 
id, 
etc, but I was not the source of the first data set).  
  
  We'll be keeping an eye on the R list as the package rolls out; sending a 
message directly to Thomas and/or I would also be appreciated for issues like 
this.
  
Terry Therneau

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread Achim Zeileis

On Fri, 27 Mar 2009, Tsjerk Wassenaar wrote:


Hi,

Have a look at:

?rainbow
?rgb
?heatmap


Furthermore, the packages colorspace, ggplot, plotrix, and RColorBrewer 
have useful tools for this.


For the ideas behind the palettes in colorspace, see
  Achim Zeileis, Kurt Hornik, and Paul Murrell (2009).
  Escaping RGBland: Selecting Colors for Statistical Graphics.
  Computational Statistics & Data Analysis, Forthcoming.
  doi:10.1016/j.csda.2008.11.033
  Preprint: 
http://statmath.wu-wien.ac.at/~zeileis/papers/Zeileis+Hornik+Murrell-2008.pdf

hth,
Z


In my opinion this would've likely popped up with just a little effort
of searching. In fact, the help of grey() (?grey) already gives
pointers to the other color functions. Please show that you at least
have tried to find answers before posting questions on a user list.

Tsjerk

On Fri, Mar 27, 2009 at 2:16 PM, Paulo E. Cardoso pecard...@netcabo.pt wrote:

I'm trying to create a graph where different cells of a grid (a shapefile)
will be painted with a color shade scale, where the easiest way is to use
gray().

Can I somehow get a vector (gradient) of colors, a vector of colors with
other methods but gray()?

I'm doing this until now



 quad_N_sp <-
merge(sp_dist[sp_dist$sp==splist[i],],grelha_ID,by.x="quad",by.y="quadricula"
,all.y=T,)

 quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

 quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

 paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

 win.graph(4,5)

 plot(grelha,ol="grey80", #! Plot with the sampling grid and abundance gradient

 fg=paleta,

 cex.lab=0.7,

 cex.axis=0.7,

 cex.main=0.7,

 xlab="Coord X",

 ylab="Coord Y",

 main=paste("Espécie: ",splist[i]),

 xlim=c(21,24)

 )

 col_lab <- c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector with the min and
max limits of the number of individuals observed

color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),gradient="y",cex=0.6) #! Legend

 text(245300,130500,"Nº Indivíduos",cex=0.6)

 plot(blocos,ol="grey40",fg=NA,add=T)



I'd like to replace the grey shade by other colors.



Thanks in advance



Paulo E. Cardoso












--
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623




Re: [R] Reading in files with variable parts to names

2009-03-27 Thread jim holtman
Does this give you what you want? (I just did it in two steps):

 x <- expand.grid(year = sprintf("%04d", seq(1986, 1995)), month =
   sprintf("%02d", 1:12))
 filelist <- paste("C:\\Documents and Settings\\Data\\comp_runoff_hd_",
   paste(x$year, x$month, sep=''), '.asc', sep='')




 filelist
  [1] C:\\Documents and Settings\\Data\\comp_runoff_hd_198601.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198701.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198801.asc
  [4] C:\\Documents and Settings\\Data\\comp_runoff_hd_198901.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199001.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199101.asc
  [7] C:\\Documents and Settings\\Data\\comp_runoff_hd_199201.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199301.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199401.asc
 [10] C:\\Documents and Settings\\Data\\comp_runoff_hd_199501.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198602.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198702.asc
 [13] C:\\Documents and Settings\\Data\\comp_runoff_hd_198802.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198902.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199002.asc
 [16] C:\\Documents and Settings\\Data\\comp_runoff_hd_199102.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199202.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199302.asc
 [19] C:\\Documents and Settings\\Data\\comp_runoff_hd_199402.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199502.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198603.asc
 [22] C:\\Documents and Settings\\Data\\comp_runoff_hd_198703.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198803.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198903.asc
 [25] C:\\Documents and Settings\\Data\\comp_runoff_hd_199003.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199103.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199203.asc
 [28] C:\\Documents and Settings\\Data\\comp_runoff_hd_199303.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199403.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199503.asc
 [31] C:\\Documents and Settings\\Data\\comp_runoff_hd_198604.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198704.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198804.asc
 [34] C:\\Documents and Settings\\Data\\comp_runoff_hd_198904.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199004.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199104.asc
 [37] C:\\Documents and Settings\\Data\\comp_runoff_hd_199204.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199304.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199404.asc
 [40] C:\\Documents and Settings\\Data\\comp_runoff_hd_199504.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198605.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198705.asc
 [43] C:\\Documents and Settings\\Data\\comp_runoff_hd_198805.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_198905.asc
C:\\Documents and Settings\\Data\\comp_runoff_hd_199005.asc


On Fri, Mar 27, 2009 at 9:56 AM, Steve Murray smurray...@hotmail.com wrote:

 Dear all,

 Thanks for the help in the previous posts. I've considered each one and have 
 nearly managed to get it working. The structure of the filelist being 
 produced is correct, except for a single space which I can't seem to 
 eradicate! This is my amended code, followed by the first twelve rows of the 
 output (it really goes up to 120 rows).

filelist <- paste("C:\\Documents and Settings\\Data\\comp_runoff_hd_",
  do.call(paste, expand.grid(year = sprintf("%04d", seq(1986,1995)),
  month = sprintf("%02d", 1:12))), ".asc", sep="")


filelist

  [1] C:\\Documents and Settings\\Data\\comp1986 01.asc
  [2] C:\\Documents and Settings\\Data\\comp1987 01.asc
  [3] C:\\Documents and Settings\\Data\\comp1988 01.asc
  [4] C:\\Documents and Settings\\Data\\comp1989 01.asc
  [5] C:\\Documents and Settings\\Data\\comp1990 01.asc
  [6] C:\\Documents and Settings\\Data\\comp1991 01.asc
  [7] C:\\Documents and Settings\\Data\\comp1992 01.asc
  [8] C:\\Documents and Settings\\Data\\comp1993 01.asc
  [9] C:\\Documents and Settings\\Data\\comp1994 01.asc
  [10] C:\\Documents and Settings\\Data\\comp1995 01.asc
  [11] C:\\Documents and Settings\\Data\\comp1986 02.asc
  [12] C:\\Documents and Settings\\Data\\comp1987 02.asc


 I've tried inserting sep="" after the 'month=sprintf("%02d",1:12)' but this 
 doesn't appear to solve the problem - in fact it doesn't change the output at 
 all...!

 Any help would be much appreciated,

 Steve





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?
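
The stray space in Steve's output comes from paste()'s default sep = " " inside do.call(); a minimal sketch (with the path shortened here for readability) showing the separator suppressed:

```r
## Sketch: do.call(paste, ...) inherits paste()'s default sep = " ",
## which is the source of the stray space between year and month;
## supplying a sep = "" element in the argument list removes it.
x <- expand.grid(year  = sprintf("%04d", 1986:1995),
                 month = sprintf("%02d", 1:12))
ym <- do.call(paste, c(x, sep = ""))     # "198601" "198701" ...
filelist <- paste("comp_runoff_hd_", ym, ".asc", sep = "")
filelist[1]                              # "comp_runoff_hd_198601.asc"
```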


Re: [R] Reading in files with variable parts to names

2009-03-27 Thread Romain Francois

That's because do.call wants a list.

What about this one:

 do.call(sprintf, append(
   list("C:\\Documents and Settings\\Data\\comp_runoff_hd_%04d%02d.asc"),
   expand.grid(seq(1986,1995), 1:12)))


Romain

Steve Murray wrote:

Dear all,

Thanks for the help in the previous posts. I've considered each one and have 
nearly managed to get it working. The structure of the filelist being produced 
is correct, except for a single space which I can't seem to eradicate! This is 
my amended code, followed by the first twelve rows of the output (it really 
goes up to 120 rows).

  

filelist <- paste("C:\\Documents and Settings\\Data\\comp_runoff_hd_",
  do.call(paste, expand.grid(year = sprintf("%04d", seq(1986,1995)),
  month = sprintf("%02d", 1:12))), ".asc", sep="")




  

filelist



 [1] C:\\Documents and Settings\\Data\\comp1986 01.asc
 [2] C:\\Documents and Settings\\Data\\comp1987 01.asc
 [3] C:\\Documents and Settings\\Data\\comp1988 01.asc
 [4] C:\\Documents and Settings\\Data\\comp1989 01.asc
 [5] C:\\Documents and Settings\\Data\\comp1990 01.asc
 [6] C:\\Documents and Settings\\Data\\comp1991 01.asc
 [7] C:\\Documents and Settings\\Data\\comp1992 01.asc
 [8] C:\\Documents and Settings\\Data\\comp1993 01.asc
 [9] C:\\Documents and Settings\\Data\\comp1994 01.asc
 [10] C:\\Documents and Settings\\Data\\comp1995 01.asc
 [11] C:\\Documents and Settings\\Data\\comp1986 02.asc
 [12] C:\\Documents and Settings\\Data\\comp1987 02.asc


I've tried inserting sep="" after the 'month=sprintf("%02d",1:12)' but this 
doesn't appear to solve the problem - in fact it doesn't change the output at all...!

Any help would be much appreciated,

Steve



  



--
Romain Francois
Independent R Consultant
+33(0) 6 28 91 30 30
http://romainfrancois.blog.free.fr



Re: [R] pca vs. pfa: dimension reduction

2009-03-27 Thread Michael Dewey

At 18:22 25/03/2009, Jonathan Baron wrote:

On 03/25/09 19:06, soeren.vo...@eawag.ch wrote:
 Can't make sense of calculated results and hope I'll find help here.

 I've collected answers from about 600 persons concerning three
 variables. I hypothesise those three variables to be components (or
 indicators) of one latent factor. In order to reduce data (vars), I
 had the following idea: Calculate the factor underlying these three
 vars. Use the loadings and the original var values to construct an new
 (artificial) var: (B1 * X1) + (B2 * X2) + (B3 * X3) = ArtVar (brackets
 for readability). Use ArtVar for further analysis of the data, that
 is, as predictor etc.

 In my (I realise, elementary) psychological statistics readings I was
 taught to use pca for these problems. Referring to Venables  Ripley
 (2002, chapter 11), I applied princomp to my vars. But the outcome
 shows 4 components -- which is obviously not what I want. Reading
 further I found factanal, which produces loadings on the one
 specified factor very fine. But since this is a contradiction to
 theoretical introductions in so many texts I'm completely confused
 whether I'm right with these calculations.


Perhaps I am missing something here but how do you get four 
components with three variables?




 (1) Is there an easy example, which explains the differences between
 pca and pfa? (2) Which R procedure should I use to get what I want?

Possibly what you want is the first principal component, which is the
weighted sum that accounts for the most variance of the three
variables.  It does essentially what you say in your first paragraph.
So you want something like

p1 - princomp(cbind(X1,X2,X3),scores=TRUE)
p1$scores[,1]

The trouble with factanal is that it does a rotation, and the default
is varimax.  The first factor will usually not be the same as the
first principal component (I think).  Perhaps there is another
rotation option that will give you this, but why bother even to look?
(I didn't, obviously.)

Jon
--
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron


Michael Dewey
http://www.aghmed.fsnet.co.uk
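
A small simulated sketch of Jon's suggestion (dummy data: three indicators of one latent factor; all names below are made up for illustration):

```r
## Sketch: the first principal component score is the variance-maximising
## weighted sum of X1, X2, X3 -- the "ArtVar" described in the question.
set.seed(42)
f  <- rnorm(600)                # latent factor
X1 <- f + rnorm(600)
X2 <- f + rnorm(600)
X3 <- f + rnorm(600)
p1 <- princomp(cbind(X1, X2, X3), scores = TRUE)
ArtVar <- p1$scores[, 1]        # candidate predictor for further analysis
abs(cor(ArtVar, f))             # large if the one-factor story holds
```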



[R] cannot reproduce matlab wavelet results with R

2009-03-27 Thread Martin Ivanov
Dear R users,
I want to get the D1 details reconstructed
to the level of my time series. My original time series is NH$annual[,]
and it has 118 elements. This is the code I use and the results:
library(wavelets)
NHj <- extend.series(X=NH$annual[,], method="reflection",
length="powerof2", j=7);
detach(package:wavelets)
attributes(mra(X=NHj, filter="d4", n.levels=7, boundary="reflection",
fast=TRUE, method="dwt"))$D[[1]][1:20]
[1] -0.0363166651  0.0793856487  0.0229855716 -0.0863862067  0.0586129763
 [6] -0.0552697096  0.0049741291  0.0327406169  0.0006990289 -0.0150729128
[11] -0.0203610433  0.0424490289 -0.0103379856 -0.0072504717 -0.0310480084
[16]  0.0543716287  0.0491613932 -0.1187032803  0.0317373320  0.0132924682
detach(package:wavelets)
library(waveslim)
mra(x=NHj, wf = "d4", J = 7, method = "dwt", boundary =
"reflection")$D1[1:20]
 [1] -0.0363166651  0.0793856487  0.0229855716 -0.0863862067  0.0586129763
 [6] -0.0552697096  0.0049741291  0.0327406169  0.0006990289 -0.0150729128
[11] -0.0203610433  0.0424490289 -0.0103379856 -0.0072504717 -0.0310480084
[16]  0.0543716287  0.0491613932 -0.1187032803  0.0317373320  0.0132924682
detach(package:waveslim)
library(wmtsa)
wavMRDSum(x=NHj, wavelet="d4", levels=1, xform="dwt", reflect=FALSE,
keep.smooth=FALSE, keep.details=TRUE)[1:20]
  t=0   t=1   t=2   t=3   t=4
-0.1449234169  0.0166815113  0.0229855716 -0.0863862067  0.0586129763
  t=5   t=6   t=7   t=8   t=9
-0.0552697096  0.0049741291  0.0327406169  0.0006990289 -0.0150729128
 t=10  t=11  t=12  t=13  t=14
-0.0203610433  0.0424490289 -0.0103379856 -0.0072504717 -0.0310480084
 t=15  t=16  t=17  t=18  t=19
 0.0543716287  0.0491613932 -0.1187032803  0.0317373320  0.0132924682

detach(package:wmtsa)
library(wavethresh)
NHwd.obj <- wd(data=NHj, filter.number=4, family="DaubExPhase",
type="wavelet", bc="symmetric", verbose=TRUE);
NHwd.objA0 <- putC(wd=NHwd.obj, level=6, v=rep(0,2^6), boundary=FALSE,
index=FALSE);
D1 <- accessC(wd=wr(wd=NHwd.objA0, start.level = 6, return.object = TRUE,
verbose = TRUE), level=7, boundary=FALSE);
D1[1:20]
 [1] -0.25283845  0.06657357  0.03389600 -0.04797488  0.05665413 -0.09317851
 [7]  0.06466827  0.06839502 -0.07792329 -0.06458924  0.07678030  0.04101479
[13] -0.08070069  0.06491276 -0.02459910 -0.05140745  0.07088627 -0.03537575
[19]  0.01366095 -0.01599816

As you can see, with wavethresh the results are quite different. Have I
messed something up? Is this the correct way of getting the D1 details? I
am a former Matlab user, and the results I get with it are only
reproducible in R with the other 3 packages and the "d2" or "haar"
wavelets. With any other wavelet, e.g. "d4", I get different results. I
would be very thankful if you could give me some clue.

I really apologize for taking some of your precious time. I wish you fruitful 
work.

Regards,
Martin
27.03.2009



Re: [R] color vectors other than gray()

2009-03-27 Thread Paulo E. Cardoso
I'm certainly missing something.

In fact the ramp I need must be scaled according to a vector of values (in
this case species abundance in each grid cell), as in the example vector
below:

 length(quad_N_sp$x) # where x is the abundance value
[1] 433

quad_N_sp$x
[1] 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 3
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
[101] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[201] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[301] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[401] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

I need to discriminate shading level accordingly to the abundance value
(level).

I don't know how to proceed.


Paulo E. Cardoso
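
One way to proceed, sketched with dummy abundances (this uses the function form colorRamp() that baptiste pointed to; values must be rescaled into [0, 1] first):

```r
## Sketch: colorRamp() returns a function mapping [0, 1] to RGB values;
## rescale the abundance vector and convert the result to hex colours,
## so each grid cell gets a shade proportional to its abundance.
x <- c(0, 0, 5, 2, 0, 3, 1)             # dummy abundances
ramp <- colorRamp(c("grey90", "darkred"))
cols <- rgb(ramp(x / max(x)), maxColorValue = 255)
length(cols) == length(x)               # one colour per grid cell
```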

-----Original Message-----
From: baptiste auguie [mailto:ba...@exeter.ac.uk] 
Sent: Friday, 27 March 2009 13:30
To: Paulo E. Cardoso
Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
Subject: Re: [R] color vectors other than gray()

?colorRamp

Hope this helps,

baptiste

On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:

 I'm trying to create a graph where different cells of a grid (a
 shapefile) will be painted with a color shade scale; the easiest way
 is to use gray().

 Can I somehow get a vector (gradient) of colors with methods other
 than gray()?

 This is what I'm doing so far:



 quad_N_sp <- merge(sp_dist[sp_dist$sp==splist[i],], grelha_ID,
                    by.x="quad", by.y="quadricula", all.y=TRUE)

 quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

 quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

 paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

 win.graph(4,5)

 plot(grelha, ol="grey80", #! Plot with sampling grid and abundance gradient
      fg=paleta,
      cex.lab=0.7,
      cex.axis=0.7,
      cex.main=0.7,
      xlab="Coord X",
      ylab="Coord Y",
      main=paste("Espécie: ", splist[i]),
      xlim=c(21,24)
      )

 col_lab <- c(max(quad_N_sp$x), min(quad_N_sp$x)) #! Vector with the min and max N of individuals observed

 color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),
              gradient="y", cex=0.6) #! Legend

 text(245300,130500,"Nº Indivíduos",cex=0.6)

 plot(blocos, ol="grey40", fg=NA, add=T)



 I'd like to replace the grey shade by other colors.



 Thanks in advance

 

 Paulo E. Cardoso





_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




Re: [R] Competing risks Kalbfleisch & Prentice method

2009-03-27 Thread Eleni Rapsomaniki

Dear Prof. Therneau, 

Thank you for your views on this subject. I think all R users who play
with survival analysis are most grateful for the functions you have
already supplied us with.

I'm guessing Ravi is wondering why you have not implemented the
smoothing of the baseline hazard from the Cox model. 

I actually tried to do this originally, inspired from this thread (i.e
use sm.spline to smooth the hazard):
https://stat.ethz.ch/pipermail/r-help/2004-July/053843.html

but it overestimated the CI (perhaps I implemented it wrong). I was then
advised to treat CI as a step function, rather than continuous, which
means that F(t+1, cause k)-F(t, cause k) will be 0 unless an event of
cause k has occurred in that interval (see also Competing Risks, by
Melanie Pintilie, page 62). This is obviously problematic if one wants
to estimate the CI at times that are not close to observed events for
either cause (perhaps a parametric model could be used in this case).
But then again, this was not an issue wtih my data. 

Eleni Rapsomaniki
 Research Associate
Strangeways Research Laboratory
Department of Public Health and Primary Care
University of Cambridge
 

-Original Message-
From: Terry Therneau [mailto:thern...@mayo.edu] 
Sent: 27 March 2009 13:53
To: Eleni Rapsomaniki; tuech...@gmx.at; Ravi Varadhan
Cc: r-help@r-project.org
Subject: RE: Competing risks Kalbfleisch & Prentice method

Ravi's last note finished with
  I am wondering why Terry Therneau's survival package doesn't
  have this option.  

  The short answer is that there are only so many hours in a day.  

  I've recently moved the code base from an internal Mayo repository to
R-forge, 
one long term goal with this is to broaden the developer base to n>2 (me
and 
Thomas Lumley).  
  
  A longer statistical answer:
  
  I'm not sure if the "this" of Ravi's question is a. smoothed hazards,
b. the 
KP cumulative incidence or c. the Fine & Gray model.
  
  b. I like the CI model and am using it more.  We also have local code.
The 
latest version of survival (on rforge, likely in the next default R
release) has 
added simple CI curves to the survfit function.  Adding code for survfit
on Cox 
models is on the todo list.  But -- this release also fixes up
survfit.coxph to 
handle weighted Cox models and that was on my list for approx 10 years,
i.e., 
don't hold your breath.  I don't release something until it also has a
set of 
worked out test cases to add to the 'tests' directory.
  
  a. smoothed hazards.  For the case at hand I don't see any particular 
advantage of this.  On the other hand, I often would like to display
hazard 
functions instead of CI functions for Cox models; with time dependent
covariates 
I don't think a survival curve makes sense.  But I haven't had the time
to think 
through exactly which methods should be added.
  
  c. Fine & Gray model, i.e., where covariates have a direct influence
on the 
competing risk.  I find the model completely untenable from a biologic
point of 
view, so have no interest in adding it.  (Due to finite time, everything
in the 
survival package is code that I needed for an analysis; medical research
is what 
pays my salary.)  Assume that I have competing processes/risks, say
progression 
of a tumor and heart disease;  I expect that the tumor process pays no
attention 
whatsoever to what is going on in the heart.  But this is necessary if 
type="squamous" is modeled as an absolute beta=__ increase in the CI for
cancer. 
 The squamous cells need to step up the pace of invasion if heart
failure 
threatens, like jockeys in a horse race. 
  
   Terry T. 



Re: [R] Competing risks Kalbfleisch & Prentice method

2009-03-27 Thread Ravi Varadhan
Hi Terry,

My "this" was your (a), i.e. the smoothed hazard rate function. 

I apologize if I came across as being rude.  I was only curious to see if you 
had any scientific/statistical rationale for not including the smoothed hazard 
option in your survival package, which is, by far, the most widely used tool 
for time-to-event analysis in R.  Therefore, I just felt that having this, 
fairly useful, capability in survival would be nice.  

I have a couple of questions related to your two other points:

point (b):  How would  you estimate the effect of a treatment on the cumulative 
incidence of primary outcome, adjusted for covariates, using the KP approach 
(both point and interval estimation)?

point (c):   I don't quite understand why you find the FG model completely 
biologically untenable.  I view it as mathematical trickery to obtain a compact 
summary of the impact of a covariate on the cumulative incidence.  The FG 
model is especially useful in estimating covariate adjusted treatment effect, 
provided the proportionality assumption on the sub-distribution hazard is 
reasonable.  The KP approach does not provide such compactness as you have to 
model all the cause-specific hazards.  

Best,
Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Terry Therneau thern...@mayo.edu
Date: Friday, March 27, 2009 9:52 am
Subject: RE: Competing risks Kalbfleisch & Prentice method
To: er...@medschl.cam.ac.uk, tuech...@gmx.at, Ravi Varadhan rvarad...@jhmi.edu
Cc: r-help@r-project.org


 Ravi's last note finished with
I am wondering why Terry Therneau's survival package doesn't
have this option.  
  
The short answer is that there are only so many hours in a day.  
  
I've recently moved the code base from an internal Mayo repository 
 to R-forge, 
  one long term goal with this is to broaden the developer base to n>2 
 (me and 
  Thomas Lumley).  

A longer statistical answer:

   I'm not sure if the "this" of Ravi's question is a. smoothed 
 hazards, b. the 
  KP cumulative incidence or c. the Fine & Gray model.

b. I like the CI model and am using it more.  We also have local 
 code. The 
  latest version of survival (on rforge, likely in the next default R 
 release) has 
  added simple CI curves to the survfit function.  Adding code for 
 survfit on Cox 
  models is on the todo list.  But -- this release also fixes up 
 survfit.coxph to 
  handle weighted Cox models and that was on my list for approx 10 
 years, i.e., 
  don't hold your breath.  I don't release something until it also has 
 a set of 
  worked out test cases to add to the 'tests' directory.

a. smoothed hazards.  For the case at hand I don't see any 
 particular 
  advantage of this.  On the other hand, I often would like to display 
 hazard 
  functions instead of CI functions for Cox models; with time dependent 
 covariates 
  I don't think a survival curve makes sense.  But I haven't had the 
 time to think 
  through exactly which methods should be added.

    c. Fine & Gray model, i.e., where covariates have a direct 
 influence on the 
  competing risk.  I find the model completely untenable from a 
 biologic point of 
  view, so have no interest in adding it.  (Due to finite time, 
 everything in the 
  survival package is code that I needed for an analysis; medical 
 research is what 
  pays my salary.)  Assume that I have competing processes/risks, say 
 progression 
  of a tumor and heart disease;  I expect that the tumor process pays 
 no attention 
  whatsoever to what is going on in the heart.  But this is necessary 
 if 
  type="squamous" is modeled as an absolute beta=__ increase in the CI 
 for cancer. 
   The squamous cells need to step up the pace of invasion if heart 
 failure 
  threatens, like jockeys in a horse race. 

 Terry T. 




Re: [R] color vectors other than gray()

2009-03-27 Thread baptiste auguie
Can you provide a minimal example that we can run directly after copy  
and paste (using a standard data set or dummy data)?


It's always helpful to try and nail down the core of your question  
(often you'll find the answer while formulating your question in  
minimal terms).


 baptiste



On 27 Mar 2009, at 14:36, Paulo E. Cardoso wrote:


I'm certainly missing something.

In fact the ramp I need must be scaled according to a vector of  
values (in
this case species abundance in each grid cell), as in the example  
vector

below:


length(quad_N_sp$x) # where x is the abundance value

[1] 433

quad_N_sp$x
[1] 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0  
0 0 0 3
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
[101] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0  
0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[201] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[301] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0  
0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0  
0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[401] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
0 0


I need to discriminate shading level accordingly to the abundance  
value

(level).

I don't know how to proceed.


Paulo E. Cardoso

-----Original Message-----
From: baptiste auguie [mailto:ba...@exeter.ac.uk]
Sent: Friday, 27 March 2009 13:30
To: Paulo E. Cardoso
Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
Subject: Re: [R] color vectors other than gray()

?colorRamp

Hope this helps,

baptiste

On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:


I'm trying to create a graph where different cells of a grid (a
shapefile) will be painted with a color shade scale; the easiest way is
to use gray().

Can I somehow get a vector (gradient) of colors with methods other
than gray()?

This is what I'm doing so far:



quad_N_sp <- merge(sp_dist[sp_dist$sp==splist[i],], grelha_ID,
                   by.x="quad", by.y="quadricula", all.y=TRUE)

quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Shades of grey

win.graph(4,5)

plot(grelha, ol="grey80", #! Plot with sampling grid and abundance gradient
     fg=paleta,
     cex.lab=0.7,
     cex.axis=0.7,
     cex.main=0.7,
     xlab="Coord X",
     ylab="Coord Y",
     main=paste("Espécie: ", splist[i]),
     xlim=c(21,24)
     )

col_lab <- c(max(quad_N_sp$x), min(quad_N_sp$x)) #! Vector with the min and max N of individuals observed

color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),
             gradient="y", cex=0.6) #! Legend

text(245300,130500,"Nº Indivíduos",cex=0.6)

plot(blocos, ol="grey40", fg=NA, add=T)



I'd like to replace the grey shade by other colors.



Thanks in advance



Paulo E. Cardoso






_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag



[R] Plotting a matrix

2009-03-27 Thread skrug
Unfortunately, I could not solve the problem of plotting all columns of 
a matrix against the first column.


I used:

b=read.table("d:\\programme\\R\\übungen\\Block 1b.txt", header=T)

b is a table whose first column contains dates and whose remaining 
columns contain numeric vectors.


apply(b[,-1], 2, plot, x= b[,1])

Although all columns have the same length, R states that the lengths 
are different.


Can you help me?
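
A sketch with dummy data (assuming the first column should be parsed as dates; `v1`..`v3` are made-up column names): converting the date column explicitly and looping with lapply() sidesteps the length complaint.

```r
## Sketch: parse the date column, then plot each remaining column
## against it in its own panel.
b <- data.frame(date = as.Date("2009-01-01") + 0:9,
                v1 = rnorm(10), v2 = rnorm(10), v3 = rnorm(10))
par(mfrow = c(1, 3))                     # one panel per data column
invisible(lapply(names(b)[-1], function(nm)
  plot(b$date, b[[nm]], xlab = "Date", ylab = nm)))
```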



baptiste auguie wrote:

Something like this perhaps,


a <- matrix(rnorm(5*49), ncol=49)

pdf(width=15, height=15)

par(mfrow= c(8,6))
apply(a[,-1], 2, plot, x= a[,1])

dev.off()



HTH,

baptiste

On 27 Mar 2009, at 11:05, skrug wrote:


Hi evrybody,

in a matrix consisting of 49 columns, I would like to plot all columns
against the first in 48 different graphs.
Can you help me?

Thank you in advance
Sebastian

--
*** 



Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




--
***

Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



Re: [R] R: plm and pgmm

2009-03-27 Thread ivowel
dear giovanni---

thanks for answering both on r-help and privately. I very much  
appreciate your responding. I read the plm vignette. I don't have the book,  
so I can't consult it. :-(. I am going to post this message now (rather  
than just email it privately), because other amateurs may have similar  
questions in the future, and find this message and your answers via google.  
Real Statisticians---please don't waste your time.


so here is my amateur interpretation of GMM in general and Arellano-Bond  
and Blundell-Bond specifically. I will do an example with T=4. The model is
x(i,t) = a*x(i,t-1) + u(i,t)
ie
x(i,2) = a*x(i,1) + u(i,2)
x(i,3) = a*x(i,2) + u(i,3)
x(i,4) = a*x(i,3) + u(i,4)
I view u(i,t) as a function of a: u(i,t)[a] = x(i,t)-a*x(i,t-1) . the  
Arellano-Bond method then claims that u(i,3) should be uncorrelated with  
x(i,1); u(i,4) should be uncorrelated with x(i,1) and also with x(i,2).  
Blundell Bond adds the further condition that u(i,4) should be uncorrelated  
with x(i,2)-x(i,1). so, I think of having four sums, each over all firms  
i's. Let me call cross-sectional summing as sumi. the penalty function to  
minimize is

sumi u(i,3)[a]*x(i,1) + sumi u(i,4)[a]*x(i,1) + sumi u(i,4)[a]*x(i,2) +  
sumi u(i,4)[a]*(x(i,2)-x(i,1))

I am missing the correct H weights on the terms in this sum, which is some  
GMM magic that I do not understand (though I can copy it from their  
article). for this post, the exact moment weights are not conceptually  
important. now, for this sum to be well-defined, I should not need very  
many observations at all. even with, say, N=7 firms, there should be no  
problem in finding an a that minimizes the sum. (To me, it seems that the  
more moment conditions I have, the merrier.) I was a little more encouraged  
to make such daring statements, because Stata seemed capable of running this  
and producing output.

On the other hand, the exact NF number at which pgmm() dies does suggest  
that you are right.

function( NF=7, NT=4 ) {
  d <- data.frame( firm = rep(1:NF, each=NT), year = rep(1:NT, NF),
                   x = rnorm(NF*NT) )
  lagformula <- dynformula( x ~ 1, list(1) )
  v <- pgmm( lagformula, data=d, gmm.inst=~x, model="onestep", effect=NULL,
             lag.gmm=c(1,99), transformation="ld" )
}

with NF=8, it works; and with NF=7, it dies. With NF=7, I have 28 data  
points in levels and 21 data points in differences, which are used to  
estimate only one auto-coefficient via 4 moment conditions. (Is this  
correct?)

my best guess now is that even though one can get the GMM estimates with 7  
firms, one cannot use the two-step method to learn how to best weight the  
different moment conditions. the only method that may work is the one-step  
matrix. of course, all of this is about conceptual tryouts, not about real  
data. these methods work only well when NF is very large.


now, for the plm package: the non-descriptive error messages are also what  
creates confusion when amateurs like myself want to create simple examples  
[not real data] to understand how to provide proper inputs. if one needs a  
minimum number of N, then may I strongly suggest that you trap this with a  
descriptive error message at the outset? similarly, I would add an error  
message if the formula provided to pgmm is not a dynformula, but a plain  
formula. just die with "please use dynformula instead".

there is also a small bug in the documentation. the vignette says that NULL  
is a possible input to effect, while the standard docs mention  
only "individual" or "twoways".


I also emailed Yves that it would be great if you could provide a wrapper  
for your more general function that does the simple estimation that 99% of  
all end users would ever want. this would have the following inputs:

[a] method = "arellano-bond" or "blundell-bond"
[b] fixed effects or not
[c] a set of totally exogenous variables
[d] the number of lags of the dependent variable, defaults to 1

the version omits the GMM instrument vs. non-GMM instrument lingo, (though  
after reading the vignette I have more of an inkling that all I need is to  
not tell the function about exogenous variables and leave them in the  
model), and knows that the dependent variable is dynamic by assumption, so  
no more gmm.inst specification is required. yes, it is great to have the  
implementation built on more heavy artillery that the statisticians can use  
for more flexible estimations; but for end-users, having this simplified  
function would really be terrific. (it would presumably default to using  
the two-step method, which has more intelligent standard errors.) with such  
a function wrapper, using these dynamic panel methods would become really  
easy. just a suggestion...


May I end by stating that writing such a general plm seems like a  
Herculean task, and that I want to express my thanks on behalf of the many R  
users that will benefit from it.

regards,

/ivo

[[alternative HTML version deleted]]


[R] Results sometimes in seconds with difftime units=mins

2009-03-27 Thread Ptit_Bleu

Hello,

I'm trying to calculate an integration and x-axis is a time (format :
%Y-%m-%d %H:%M:%S).
I use diff(date, units="mins") in a for loop, but sometimes the results stay
in seconds (95% of the time it is OK).

Examples for 2 sets of data are given below (the first result stays in seconds
whereas the second is in minutes, as expected).
Have you already seen this behaviour ?
Any idea to solve this problem ?

Thanks in advance.
Have a good week-end,
Ptit Bleu. 


 strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S")
 [1] 2009-03-26 11:21:31 2009-03-26 11:22:17 2009-03-26 11:27:18
2009-03-26 11:36:59 2009-03-26 11:41:59 2009-03-26 11:46:59
 [7] 2009-03-26 11:51:59 2009-03-26 11:57:00 2009-03-26 12:02:00
2009-03-26 12:07:00 2009-03-26 12:12:00 2009-03-26 12:17:00
[13] 2009-03-26 12:22:00 2009-03-26 12:27:01 2009-03-26 12:32:01
2009-03-26 12:37:01 2009-03-26 12:42:01 2009-03-26 12:47:01
[19] 2009-03-26 12:52:01 2009-03-26 12:57:01 2009-03-26 13:02:02
2009-03-26 13:07:02 2009-03-26 13:12:03 2009-03-26 13:17:03
[25] 2009-03-26 13:22:03 2009-03-26 13:27:03 2009-03-26 13:32:03
2009-03-26 13:37:03 2009-03-26 13:42:03 2009-03-26 13:47:03
[31] 2009-03-26 13:52:03 2009-03-26 13:57:04 2009-03-26 14:01:02
2009-03-26 14:06:05 2009-03-26 14:11:05 2009-03-26 14:16:06
[37] 2009-03-26 14:21:06 2009-03-26 14:26:08 2009-03-26 14:31:09
2009-03-26 14:36:10 2009-03-26 14:41:10 2009-03-26 14:46:15
[43] 2009-03-26 14:51:15 2009-03-26 14:56:15 2009-03-26 15:01:15
2009-03-26 15:06:17 2009-03-26 15:11:17 2009-03-26 15:16:19
[49] 2009-03-26 15:21:19 2009-03-26 15:26:19 2009-03-26 15:31:22
2009-03-26 15:36:23 2009-03-26 15:41:24 2009-03-26 15:46:24
[55] 2009-03-26 15:51:25 2009-03-26 15:56:25 2009-03-26 16:01:25
2009-03-26 16:06:26 2009-03-26 16:11:26 2009-03-26 16:16:26
[61] 2009-03-26 16:21:27 2009-03-26 16:26:27 2009-03-26 16:31:28
2009-03-26 16:36:28 2009-03-26 16:41:29 2009-03-26 16:46:30
[67] 2009-03-26 16:51:31 2009-03-26 16:56:31 2009-03-26 17:01:32
2009-03-26 17:06:32 2009-03-26 17:11:33 2009-03-26 17:16:33
[73] 2009-03-26 17:21:33 2009-03-26 17:26:35 2009-03-26 17:31:36
2009-03-26 17:36:36 2009-03-26 17:41:36 2009-03-26 17:46:36
[79] 2009-03-26 17:51:39 2009-03-26 17:56:40 2009-03-26 18:01:40
2009-03-26 18:06:40 2009-03-26 18:11:40 2009-03-26 18:16:40
[85] 2009-03-26 18:21:41 2009-03-26 18:26:41 2009-03-26 18:31:41
2009-03-26 18:36:41 2009-03-26 18:41:41 2009-03-26 18:46:41
[91] 2009-03-26 18:51:42 2009-03-26 18:56:42 2009-03-26 19:06:42
 
 as.numeric(diff(strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S"),
 units="mins"))
 [1]  46 301 581 300 300 300 301 300 300 300 300 300 301 300 300 300 300 300
300 301 300 301 300 300 300 300 300 300 300 300 301 238 303 300 301 300 302
301
[39] 301 300 305 300 300 300 302 300 302 300 300 303 301 301 300 301 300 300
301 300 300 301 300 301 300 301 301 301 300 301 300 301 300 300 302 301 300
300
[77] 300 303 301 300 300 300 300 301 300 300 300 300 300 301 300 600


 strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S")
 [1] 2009-03-26 11:22:24 2009-03-26 11:27:25 2009-03-26 11:37:04
2009-03-26 11:42:04 2009-03-26 11:47:04 2009-03-26 11:52:04
 [7] 2009-03-26 11:57:04 2009-03-26 12:02:05 2009-03-26 12:07:06
2009-03-26 12:12:06 2009-03-26 12:17:06 2009-03-26 12:22:06
[13] 2009-03-26 12:27:07 2009-03-26 12:32:07 2009-03-26 12:37:07
2009-03-26 12:42:07 2009-03-26 12:47:07 2009-03-26 12:52:08
[19] 2009-03-26 12:57:08 2009-03-26 13:02:08 2009-03-26 13:07:09
2009-03-26 13:12:09 2009-03-26 13:17:09 2009-03-26 13:22:09
[25] 2009-03-26 13:27:09 2009-03-26 13:32:09 2009-03-26 13:37:09
2009-03-26 13:42:09 2009-03-26 13:47:09 2009-03-26 13:52:09
[31] 2009-03-26 13:57:10 2009-03-26 14:01:08 2009-03-26 14:06:11
2009-03-26 14:11:11 2009-03-26 14:16:12 2009-03-26 14:21:12
[37] 2009-03-26 14:26:15 2009-03-26 14:31:18 2009-03-26 14:36:18
2009-03-26 14:41:19 2009-03-26 14:46:22 2009-03-26 14:51:22
[43] 2009-03-26 14:56:23 2009-03-26 15:01:24 2009-03-26 15:06:24
2009-03-26 15:11:24 2009-03-26 15:16:24 2009-03-26 15:21:24
[49] 2009-03-26 15:26:24 2009-03-26 15:31:28 2009-03-26 15:36:29
2009-03-26 15:41:29 2009-03-26 15:46:30 2009-03-26 15:51:33
[55] 2009-03-26 15:56:33 2009-03-26 16:01:33 2009-03-26 16:06:33
2009-03-26 16:11:34 2009-03-26 16:16:34 2009-03-26 16:21:35
[61] 2009-03-26 16:26:35 2009-03-26 16:31:35 2009-03-26 16:36:35
2009-03-26 16:41:35 2009-03-26 16:46:36 2009-03-26 16:51:37
[67] 2009-03-26 16:56:37 2009-03-26 17:01:37 2009-03-26 17:06:38
2009-03-26 17:11:38 2009-03-26 17:16:38 2009-03-26 17:21:38
[73] 2009-03-26 17:26:40 2009-03-26 17:31:43 2009-03-26 17:36:43
2009-03-26 17:41:43 2009-03-26 17:46:44 2009-03-26 17:51:45
[79] 2009-03-26 17:56:46 2009-03-26 18:01:46 2009-03-26 18:06:46
2009-03-26 18:11:47 2009-03-26 18:16:47 2009-03-26 18:21:48
[85] 2009-03-26 18:26:48 2009-03-26 18:31:48 2009-03-26 18:36:48
2009-03-26 18:41:48 2009-03-26 18:46:48 2009-03-26 18:51:48
[91] 2009-03-26 
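The behaviour above can be reproduced on a small scale. A sketch (the three timestamps below are taken from the data shown; the rest is an assumption about the cause): diff() on date-times has no units= argument, so the resulting difftime picks its own units. Forcing the units explicitly before converting to numeric gives consistent minutes:

```r
# Sketch: set units() on the difftime object before as.numeric()
times <- as.POSIXct(c("2009-03-26 11:21:31", "2009-03-26 11:22:17",
                      "2009-03-26 11:27:18"))
d <- diff(times)      # units are chosen automatically
units(d) <- "mins"    # force minutes
as.numeric(d)         # both gaps are now expressed in minutes
```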

Re: [R] color vectors other than gray()

2009-03-27 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 27.03.2009 15:36:23:

 I'm certainly missing something.
 
 In fact the ramp I need must be scaled according to a vector of values 
(in
 this case species abundance in each grid cell), as in the example vector
 below:
 
  length(quad_N_sp$x) # where x is the abundance value
 [1] 433
 
 quad_N_sp$x
 [1] 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 
0 3
 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
 [101] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [201] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [301] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 
0 0
 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [401] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 
 I need to discriminate shading level accordingly to the abundance value
 (level).

If I understand correctly

pal <- grey(0:max(quad_N_sp$x)/max(quad_N_sp$x))
will give you a vector of equally spaced grey values, and

pal[quad_N_sp$x+1]
will give you the shading for each quad_N_sp$x value.

Regards
Petr
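The same indexing idea works for any colour gradient, not just grey. A sketch using colorRampPalette() (the toy abundance vector and the colour endpoints are arbitrary choices for illustration):

```r
# Sketch: colorRampPalette() returns a function that interpolates between
# the given colours; calling it with max(x)+1 gives one colour per
# abundance level, which is then indexed exactly like the grey ramp above.
x   <- c(0, 5, 0, 2, 3, 1, 0, 0)                 # toy abundance values
pal <- colorRampPalette(c("white", "red", "black"))(max(x) + 1)
cell_cols <- pal[x + 1]                          # one colour per cell
```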


 
 I don't know how to proceed.
 
 
 Paulo E. Cardoso
 
 -Mensagem original-
 De: baptiste auguie [mailto:ba...@exeter.ac.uk] 
 Enviada: sexta-feira, 27 de Março de 2009 13:30
 Para: Paulo E. Cardoso
 Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
 Assunto: Re: [R] color vectors other than gray()
 
 ?colorRamp
 
 Hope this helps,
 
 baptiste
 
 On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:
 
  I'm trying to create a graph where different cells of a grid (a 
  shapefile)
  will be painted with a color share scale, where the most easy way is 
  to use
  gray().
 
  Can I somehow get a vector (gradient) of colors, a vector of colors 
  with
  other methods but gray()?
 
  I'm doing this until now
 
 
 
   quad_N_sp -
  merge(sp_dist[sp_dist 
  $sp==splist[i],],grelha_ID,by.x=quad,by.y=quadricula
  ,all.y=T,)
 
   quad_N_sp$x[is.na(quad_N_sp$x)] - 0
 
   quad_N_sp - quad_N_sp[order(quad_N_sp$id),]
 
   paleta - gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Tons de cinzento
 
   win.graph(4,5)
 
   plot(grelha,ol=grey80, #! Gráfico com grelha de amostragem e 
  gradiente
  de abundância
 
   fg=paleta,
 
   cex.lab=0.7,
 
   cex.axis=0.7,
 
   cex.main=0.7,
 
   xlab=Coord X,
 
   ylab=Coord Y,
 
   main=paste(Espécie: ,splist[i]),
 
   xlim=c(21,24)
 
   )
 
   col_lab - c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector com os 
  limites
  min e max do N de indivíduos observados
 
 
  color 
  .legend 
  (248000,12,25,128000,col_lab,sort(unique(paleta)),gradie
  nt=y,cex=0.6)#! Legenda
 
   text(245300,130500,Nº Indivíduos,cex=0.6)
 
   plot(blocos,ol=grey40,fg=NA,add=T)
 
 
 
  I'd like to replace the grey shade by other colors.
 
 
 
  Thanks in advance
 
  
 
  Paulo E. Cardoso
 
 
 
 
 [[alternative HTML version deleted]]
 
  ATT1.txt
 
 _
 
 Baptiste Auguié
 
 School of Physics
 University of Exeter
 Stocker Road,
 Exeter, Devon,
 EX4 4QL, UK
 
 Phone: +44 1392 264187
 
 http://newton.ex.ac.uk/research/emag
 __
 
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ploting a matrix

2009-03-27 Thread baptiste auguie
the result of read.table is a data.frame, not a matrix as you first  
suggested. Can you copy the result of str(b) so we know what your data  
is made of?


I'm guessing the most elegant solution will be to use the reshape  
package, followed by ggplot2 or lattice.


baptiste

On 27 Mar 2009, at 14:54, skrug wrote:

Unfortunately, I could not solve the problem of plotting all columns  
of

a matrix against the first column

I used:

b <- read.table("d:\\programme\\R\\übungen\\Block 1b.txt", header=TRUE)

b is a table with the first column using  Dates and the following
columns with vectors.

apply(b[,-1], 2, plot, x= b[,1])

Although all columns have the same length, R states that the lengths are
different.

Can you help me?



baptiste auguie schrieb:

Something like this perhaps,


a <- matrix(rnorm(5*49), ncol=49)

pdf(width=15, height=15)

par(mfrow= c(8,6))
apply(a[,-1], 2, plot, x= a[,1])

dev.off()



HTH,

baptiste

On 27 Mar 2009, at 11:05, skrug wrote:


Hi everybody,

in a matrix consisting of 49 columns, I would like to plot all  
columns

against the first in 48 different graphs.
Can you help me?

Thank you in advance
Sebastian

--
***


Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




--
***

Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] functions flagged for debugging

2009-03-27 Thread Christos Hatzis
Hi,
 
Is there a way to find which functions are flagged for debugging in a given
session?
 
Thank you.
-Christos
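A sketch of one way to list them (isdebugged() is the relevant base function; this assumes a reasonably recent R and only inspects the global environment):

```r
# Sketch: loop over the objects visible in the global environment and
# keep the names of the functions currently flagged with debug().
flagged <- Filter(function(nm) {
  f <- get(nm, envir = globalenv())
  is.function(f) && isdebugged(f)
}, ls(globalenv()))
flagged   # character vector of flagged function names
```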
 
 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ploting a matrix

2009-03-27 Thread skrug
Sorry for the mistake. As you probably already guessed, I am just 
starting to use R. I did not know the difference between a matrix and a 
data.frame.


 str(b)
'data.frame':   9 obs. of  7 variables:
$ Datum: Factor w/ 9 levels 06.03.,07.03.,..: 1 2 3 4 5 6 7 8 9
$ X1   : int  408 335 2123 4685 7669 17060 31330 70730 109667
$ X2   : int  230 241 1509 2226 7839 13997 24797 53133 93061
$ X3   : int  25 16 38 61 114 299 140 172 196
$ X4   : int  248 588 2083 2071 4563 9798 17611 38554 82354
$ X5   : int  407 201 1339 3699 8375 19200 36563 83993 123167
$ X6   : int  248 730 3056 2327 4092 8905 15931 37895 84565



Thanks





baptiste auguie schrieb:
the result of read.table is a data.frame, not a matrix as you first 
suggested. Can you copy the result of str(b) so we know what your data 
is made of?


I'm guessing the most elegant solution will be to use the reshape 
package, followed by ggplot2 or lattice.


baptiste

On 27 Mar 2009, at 14:54, skrug wrote:


Unfortunately, I could not solve the problem of plotting all columns of
a matrix against the first column

I used:

b <- read.table("d:\\programme\\R\\übungen\\Block 1b.txt", header=TRUE)

b is a table with the first column using  Dates and the following
columns with vectors.

apply(b[,-1], 2, plot, x= b[,1])

Although all columns have the same length, R states that the lengths are
different.

Can you help me?



baptiste auguie schrieb:

Something like this perhaps,


a <- matrix(rnorm(5*49), ncol=49)

pdf(width=15, height=15)

par(mfrow= c(8,6))
apply(a[,-1], 2, plot, x= a[,1])

dev.off()



HTH,

baptiste

On 27 Mar 2009, at 11:05, skrug wrote:


Hi everybody,

in a matrix consisting of 49 columns, I would like to plot all columns
against the first in 48 different graphs.
Can you help me?

Thank you in advance
Sebastian

--
*** 




Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




--
*** 



Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




--
***

Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Weighting data with normal distribution

2009-03-27 Thread Mark Lyman
Alice Lin <alice.ly at gmail.com> writes:

 
 
 I have a vector of binary data – a string of 0’s and 1’s. 
 I want to weight these inputs with a normal kernel centered around entry x
 so it is transformed into a new vector of data that takes into account the
 values of the entries around it (weighting them more heavily if they are
 near).
 
 Example:
   -
- -
 -  -
 0 1 0 0 1 0 0 1 1 1 1 
 If x = 3, its current value is 0 but its new value with the Gaussian
 weighting would be something like .1*0+.5*1+1*0+0.5*0+.1*1 = 0.6
 
 I want to be able to play with adjusting the variance to different values as
 well.
 I’ve found wkde in the mixtools library and think it may be useful but I
 have not figured out how to use it yet.
 
 Any tips would be appreciated.
 
 Thanks!
 

I don't know anything about wkde, but the filter function in the stats package 
should do what you want.

 x <- c(0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1)
 filter(x, c(.1, .5, 1, .5, .1))
Time Series:
Start = 1 
End = 11 
Frequency = 1 
 [1]  NA  NA 0.6 0.6 1.0 0.6 0.7 1.6 2.1  NA  NA

In the signal package, there is also a variety of windows, including the 
gausswin function. However, the filter function in the signal package masks the 
filter function from the stats package, so call it explicitly:

 stats::filter(x, gausswin(5, 2.68))

Mark Lyman
Statistician, ATK Launch Systems
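If you want direct control over the variance without the signal package, the kernel can also be built by hand. A sketch (the sd value below is an arbitrary choice to tune):

```r
# Sketch: construct the Gaussian weights with dnorm() and pass them to
# stats::filter(); changing sd changes the spread of the kernel.
x  <- c(0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1)
sd <- 1
w  <- dnorm(-2:2, mean = 0, sd = sd)  # 5-point window centred on 0
w  <- w / max(w)                      # scale so the centre weight is 1
r  <- stats::filter(x, w)             # NA at the edges, as with any window
```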

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] error when setting up Rcmd BATCH on new computer

2009-03-27 Thread Brigid Mooney
Hello,

I got a new computer, and am trying to reinstall R and have run into a
bit of a problem when running the BATCH command.
For reference, the OS is Windows Vista, 64 bit.

I installed R 2.8.1 and have the 4-3 files from the following link
extracted with the containing folder in my system PATH variable.

However, when I try to run the following command from the dos prompt:
Rcmd BATCH TestBatch.R testoutput.txt

Note: TestBatch.R is simply a file containing the statement:
print("hello world")

I get the error: "\Common was unexpected at this time."

If anyone can provide any insight into this problem, I would really
appreciate it as I thought I remembered all the steps from when I set
this all up on my old computer...

Thanks,
Brigid

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread Paulo E. Cardoso
OK

I got it working partially.

The plot results in a ramp varying from white - red - black, doing this:

plot(grelha, ol="grey80", #! plot with the sampling grid and abundance gradient
  fg=color.scale(1-(quad_N_sp$x)/max(quad_N_sp$x), c(0,1,1), c(0,1), c(0,1)),
  cex.lab=0.7,
  cex.axis=0.7,
  cex.main=0.7,
  xlab="Coord X",
  ylab="Coord Y",
  main=paste("Espécie:", splist[i]),
  xlim=c(21,24)
  )

Where the ramp results from the red, green, blue ranges used in:
color.scale(((1-(quad_N_sp$x)/max(quad_N_sp$x))),c(0,1,1),c(0,1),c(0,1))

I don't know how to control the ramp. In this particular case it turns out
reasonably well, but I have no idea how to control the RGB channels to produce
a ramp of interest.

Attached workspace contains a data sample to create the plot.

Use the code below:

library(maptools)
library(plotrix)
pallete <- color.scale(vec.ab.01, c(0,1,1), c(0,1), c(0,1))
win.graph(4,5)
plot(grelha, ol="grey80", #! plot with the sampling grid and abundance gradient
  fg=pallete,
  cex.lab=0.7,
  cex.axis=0.7,
  cex.main=0.7,
  xlab="Coord X",
  ylab="Coord Y",
  main=paste("Espécie:", "Carduelis carduelis"),
  xlim=c(21,24)
  )
col_lab <- c(max(vec.ab), min(vec.ab)) #! vector with the min and max number of
individuals observed
color.legend(248000,12,25,128000,col_lab,sort(unique(pallete)),
  gradient="y", cex=0.7) #! legend
text(245300,130500,"Nº Indivíduos",cex=0.6)
plot(blocos, ol="grey40", fg=NA, add=TRUE)





Paulo E. Cardoso


-Mensagem original-
De: baptiste auguie [mailto:ba...@exeter.ac.uk] 
Enviada: sexta-feira, 27 de Março de 2009 14:50
Para: Paulo E. Cardoso
Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
Assunto: Re: [R] color vectors other than gray()

Can you provide a minimal example that we can run directly after copy  
and paste (using a standard data set or dummy data)?

It's always helpful to try and nail down the core of your question  
(often you'll find the answer while formulating your question in  
minimal terms).

  baptiste



On 27 Mar 2009, at 14:36, Paulo E. Cardoso wrote:

 I'm certainly missing something.

 In fact the ramp I need must be scaled according to a vector of  
 values (in
 this case species abundance in each grid cell), as in the example  
 vector
 below:

 length(quad_N_sp$x) # where x is the abundance value
 [1] 433

 quad_N_sp$x
 [1] 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0  
 0 0 0 3
 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
 [101] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [201] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [301] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0  
 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [401] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
 0 0

 I need to discriminate shading level accordingly to the abundance  
 value
 (level).

 I don't know how to proceed.

 
 Paulo E. Cardoso

 -Mensagem original-
 De: baptiste auguie [mailto:ba...@exeter.ac.uk]
 Enviada: sexta-feira, 27 de Março de 2009 13:30
 Para: Paulo E. Cardoso
 Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
 Assunto: Re: [R] color vectors other than gray()

 ?colorRamp

 Hope this helps,

 baptiste

 On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:

 I'm trying to create a graph where different cells of a grid (a
 shapefile)
 will be painted with a color share scale, where the most easy way is
 to use
 gray().

 Can I somehow get a vector (gradient) of colors, a vector of colors
 with
 other methods but gray()?

 I'm doing this until now



 quad_N_sp -
 merge(sp_dist[sp_dist
 $sp==splist[i],],grelha_ID,by.x=quad,by.y=quadricula
 ,all.y=T,)

 quad_N_sp$x[is.na(quad_N_sp$x)] - 0

 quad_N_sp - quad_N_sp[order(quad_N_sp$id),]

 paleta - gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Tons de cinzento

 win.graph(4,5)

 plot(grelha,ol=grey80, #! Gráfico com grelha de amostragem e
 gradiente
 de abundância

 fg=paleta,

 cex.lab=0.7,

 cex.axis=0.7,

 cex.main=0.7,

 xlab=Coord X,

 ylab=Coord Y,

 main=paste(Espécie: ,splist[i]),

 xlim=c(21,24)

 )

 col_lab - c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector com os
 limites
 min e max do N de indivíduos observados


 color
 .legend
 (248000,12,25,128000,col_lab,sort(unique(paleta)),gradie
 nt=y,cex=0.6)#! Legenda

 text(245300,130500,Nº Indivíduos,cex=0.6)

 plot(blocos,ol=grey40,fg=NA,add=T)



 I'd like to replace the grey shade by other colors.



 Thanks in advance

 

Re: [R] Ploting a matrix

2009-03-27 Thread baptiste auguie
Here's my suggestion using the ggplot2 package (but you may prefer to  
stick with base functions),



date = factor(letters[1:9])
d <- data.frame(x1=seq(1, 9), x2=seq(2, 10), date=date)

head(d) # dummy data that resembles yours
str(d)

library(reshape)
md <- melt(d, id="date") # creates a data.frame in the long format
head(md)

library(ggplot2)

qplot(date, value, data=md, geom="point") + facet_wrap(~variable) #  
the layout is done automatically for you

# see Hadley's book for customisations
# http://had.co.nz/ggplot2/facet_wrap.html


HTH,

baptiste

On 27 Mar 2009, at 15:19, skrug wrote:


Sorry for the mistake. As you probably already guessed, I am just
starting to use R. I did not know the difference between a matrix
and a data.frame.


str(b)

'data.frame':   9 obs. of  7 variables:
$ Datum: Factor w/ 9 levels 06.03.,07.03.,..: 1 2 3 4 5 6 7 8 9
$ X1   : int  408 335 2123 4685 7669 17060 31330 70730 109667
$ X2   : int  230 241 1509 2226 7839 13997 24797 53133 93061
$ X3   : int  25 16 38 61 114 299 140 172 196
$ X4   : int  248 588 2083 2071 4563 9798 17611 38554 82354
$ X5   : int  407 201 1339 3699 8375 19200 36563 83993 123167
$ X6   : int  248 730 3056 2327 4092 8905 15931 37895 84565



Thanks





baptiste auguie schrieb:

the result of read.table is a data.frame, not a matrix as you first
suggested. Can you copy the result of str(b) so we know what your  
data

is made of?

I'm guessing the most elegant solution will be to use the reshape
package, followed by ggplot2 or lattice.

baptiste

On 27 Mar 2009, at 14:54, skrug wrote:

Unfortunately, I could not solve the problem of plotting all  
columns of

a matrix against the first column

I used:

b <- read.table("d:\\programme\\R\\übungen\\Block 1b.txt", header=TRUE)

b is a table with the first column using  Dates and the following
columns with vectors.

apply(b[,-1], 2, plot, x= b[,1])

Although all columns have the same length, R states that the lengths
are different.

Can you help me?



baptiste auguie schrieb:

Something like this perhaps,


a <- matrix(rnorm(5*49), ncol=49)

pdf(width=15, height=15)

par(mfrow= c(8,6))
apply(a[,-1], 2, plot, x= a[,1])

dev.off()



HTH,

baptiste

On 27 Mar 2009, at 11:05, skrug wrote:


Hi everybody,

in a matrix consisting of 49 columns, I would like to plot all  
columns

against the first in 48 different graphs.
Can you help me?

Thank you in advance
Sebastian

--
***



Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__




--
***


Dipl. Biol. Sebastian Krug
PhD - student
IFM - GEOMAR
Leibniz Institute of Marine Sciences
Research Division 2 - Marine Biogeochemistry
Düsternbrooker Weg 20
D - 24105 Kiel
Germany

Tel.: +49 431 600-4282
Fax.: +49 431 600-4446
email: sk...@ifm-geomar.de



_

Baptiste Auguié

School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK

Phone: +44 1392 264187

http://newton.ex.ac.uk/research/emag
__





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Passing parameters to R script in Rgui

2009-03-27 Thread Daren Tan
How do I pass parameters to an R script in Rgui? Currently, I am using
source("foo.R").

Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading in files with variable parts to names

2009-03-27 Thread Steve Murray

Thanks, that's great - just what I was looking for.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color vectors other than gray()

2009-03-27 Thread Paulo E. Cardoso
Petr,

I'd like to be able to change the ramp to other than grey shades.

Please see my previous message with some data.


Paulo E. Cardoso


-----Original Message-----
From: Petr PIKAL [mailto:petr.pi...@precheza.cz]
Sent: Friday, 27 March 2009 15:12
To: Paulo E. Cardoso
Cc: r-h...@stat.math.ethz.ch
Subject: Re: [R] color vectors other than gray()

Hi

r-help-boun...@r-project.org napsal dne 27.03.2009 15:36:23:

 I'm certainly missing something.
 
 In fact the ramp I need must be scaled according to a vector of values 
(in
 this case species abundance in each grid cell), as in the example vector
 below:
 
  length(quad_N_sp$x) # where x is the abundance value
 [1] 433
 
 quad_N_sp$x
 [1] 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 
0 3
 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
 [101] 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [201] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [301] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 
0 0
 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0 0 0 
0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 [401] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 
 I need to discriminate shading level accordingly to the abundance value
 (level).

If I understand correctly

pal <- grey(0:max(quad_N_sp$x)/max(quad_N_sp$x))
shall give you a vector of equally spaced grey values

pal[quad_N_sp$x+1]
shall give you shadings for each quad_N_sp$x value

Regards
Petr
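For a ramp other than grey, the same indexing idea works with colorRampPalette() from grDevices. A minimal sketch, with a toy abundance vector standing in for quad_N_sp$x:

```r
## Toy abundance vector standing in for quad_N_sp$x
x <- c(0, 0, 5, 0, 2, 3, 1)

## One colour per abundance level, white -> dark red instead of grey
pal <- colorRampPalette(c("white", "darkred"))(max(x) + 1)

## One colour per cell, indexed exactly as with the grey ramp
shades <- pal[x + 1]
```

Any pair (or longer sequence) of colours can be given to colorRampPalette() to define the gradient.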


 
 I don't know how to proceed.
 
 
 Paulo E. Cardoso
 
 -----Original Message-----
 From: baptiste auguie [mailto:ba...@exeter.ac.uk]
 Sent: Friday, 27 March 2009 13:30
 To: Paulo E. Cardoso
 Cc: r-h...@stat.math.ethz.ch; r-help@r-project.org
 Subject: Re: [R] color vectors other than gray()
 
 ?colorRamp
 
 Hope this helps,
 
 baptiste
 
 On 27 Mar 2009, at 13:16, Paulo E. Cardoso wrote:
 
  I'm trying to create a graph where different cells of a grid (a
  shapefile) will be painted with a color shade scale, where the easiest
  way is to use gray().
 
  Can I somehow get a vector (gradient) of colors with
  methods other than gray()?
 
  I'm doing this until now
 
 
 
   quad_N_sp <-
  merge(sp_dist[sp_dist$sp==splist[i],], grelha_ID,
  by.x="quad", by.y="quadricula", all.y=T)
 
   quad_N_sp$x[is.na(quad_N_sp$x)] <- 0

   quad_N_sp <- quad_N_sp[order(quad_N_sp$id),]

   paleta <- gray(1-(quad_N_sp$x)/max(quad_N_sp$x)) #! Grey shades

   win.graph(4,5)

   plot(grelha, ol="grey80", #! Plot with sampling grid and abundance gradient

   fg=paleta,

   cex.lab=0.7,

   cex.axis=0.7,

   cex.main=0.7,

   xlab="Coord X",

   ylab="Coord Y",

   main=paste("Espécie:", splist[i]),

   xlim=c(21,24)

   )

   col_lab <- c(max(quad_N_sp$x),min(quad_N_sp$x)) #! Vector with the min and
  max observed counts of individuals

   color.legend(248000,12,25,128000,col_lab,sort(unique(paleta)),gradient="y",cex=0.6) #! Legend

   text(245300,130500,"Nº Indivíduos",cex=0.6)

   plot(blocos, ol="grey40", fg=NA, add=T)
 
 
 
  I'd like to replace the grey shade by other colors.
 
 
 
  Thanks in advance
 
  
 
  Paulo E. Cardoso
 
 
 
 
 [[alternative HTML version deleted]]
 
  ATT1.txt
 
 _
 
 Baptiste Auguié
 
 School of Physics
 University of Exeter
 Stocker Road,
 Exeter, Devon,
 EX4 4QL, UK
 
 Phone: +44 1392 264187
 
 http://newton.ex.ac.uk/research/emag
 __
 
 


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] interactive image graphic

2009-03-27 Thread Greg Snow
Here is some code that may get you started (if I am understanding your question 
correctly):

library(TeachingDemos)

myfunc <- function(zmin=90, zmax=195, ncol=100, pal='heat') {
cols <- switch(pal,
heat=heat.colors(ncol),
terrain=terrain.colors(ncol),
topo=topo.colors(ncol),
cm=cm.colors(ncol)
)
image(volcano, col=cols, zlim=c(zmin,zmax))
}

mylist <- list(
zmin=list('slider',from=0, to=90, init=90, resolution=5),
zmax=list('slider',from=195, to=250, init=195, resolution=5),
ncol=list('spinbox', init=100, from=2, to=150, increment=5),
pal=list('radiobuttons', values=c('heat','terrain','topo','cm'),
init='heat')) 

tkexamp(myfunc,mylist)


hope this helps,

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
 project.org] On Behalf Of Abelian
 Sent: Thursday, March 26, 2009 9:43 PM
 To: r-help@r-project.org
 Subject: [R] interactive image graphic
 
 Dear All
 I want to plot a kind of figure that can interact with the user.
 For example, I have a matrix which can be shown by the image function,
 i.e. we can compare the values based on their different colors.
 However, the mapping of colors depends on the range of values.
 Now I want to set a bar, which can be moved by the user, so that the
 user can obtain the appropriate range.
 Can anyone suggest which function can be applied to solve this
 problem?
 Thanks
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Passing parameters to R script in Rgui

2009-03-27 Thread Bert Gunter
You are misusing source.

Write a function to do what you want. An Introduction to R documents how.
Have you read it?
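For instance, the sourced script could be rewritten as a function taking its parameters as arguments. A sketch (foo() and its arguments are made up; the body of the old foo.R goes inside):

```r
## Hypothetical rewrite of foo.R as a function taking parameters
foo <- function(infile, threshold = 0.5) {
  dat <- read.csv(infile)                    # former script body goes here
  dat[dat$value > threshold, , drop = FALSE]
}

## then call it directly instead of source("foo.R"):
## result <- foo("mydata.csv", threshold = 0.8)
```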

-- Bert 


Bert Gunter
Genentech Nonclinical Biostatistics
650-467-7374

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Daren Tan
Sent: Friday, March 27, 2009 8:45 AM
To: r-help@r-project.org
Subject: [R] Passing parameters to R script in Rgui

How do I pass parameters to an R script in Rgui? Currently, I am using
source("foo.R").

Thanks

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] about the Choleski factorization

2009-03-27 Thread 93354504
Hi there, 

Given a positive definite symmetric matrix, I can use chol(x) to obtain U where 
U is upper triangular
and x=U'U. For example,

x=matrix(c(5,1,2,1,3,1,2,1,4),3,3)
U=chol(x)
U
#          [,1]      [,2]      [,3]
#[1,] 2.236068 0.4472136 0.8944272
#[2,] 0.000000 1.6733201 0.3585686
#[3,] 0.000000 0.0000000 1.7525492
t(U)%*%U   # this is exactly x

Does anyone know how to obtain L such that L is lower triangular and x=L'L? 
Thank you.

Alex

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] print table (data.frame) to pdf

2009-03-27 Thread Paulo E. Cardoso
How can I print a data.frame to a PDF with pdf()...dev.off()



Paulo E. Cardoso

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Results sometimes in seconds with difftime unit=mins

2009-03-27 Thread jim holtman
I think the problem is that 'diff' does not have a 'units'
parameter; 'difftime' does.  Here is a way of doing it:

 x
 [1] 2009-03-27 13:00:00 EDT 2009-03-27 13:00:35 EDT 2009-03-27
13:01:10 EDT 2009-03-27 13:01:45 EDT 2009-03-27 13:02:20 EDT
2009-03-27 13:02:55 EDT 2009-03-27 13:03:30 EDT
 [8] 2009-03-27 13:04:05 EDT 2009-03-27 13:04:40 EDT 2009-03-27
13:05:15 EDT
 difftime(tail(x, -1), head(x, -1), units='mins')
Time differences in mins
[1] 0.5833333 0.5833333 0.5833333 0.5833333 0.5833333 0.5833333 0.5833333
[8] 0.5833333 0.5833333
attr(,"tzone")
[1] ""
 diff(x, units='mins')  # 'units' ignored
Time differences in secs
[1] 35 35 35 35 35 35 35 35 35
attr(,"tzone")
[1] ""



On Fri, Mar 27, 2009 at 11:04 AM, Ptit_Bleu ptit_b...@yahoo.fr wrote:

 Hello,

 I'm trying to calculate an integration and the x-axis is a time (format:
 "%Y-%m-%d %H:%M:%S").
 I use diff(date, units="mins") in a for loop, but sometimes the results stay
 in seconds (95% is ok).

 Examples for 2 sets of data are given below (the first result stays in seconds
 whereas the second is in minutes, as expected).
 Have you already seen this behaviour ?
 Any idea to solve this problem ?

 Thanks in advance.
 Have a good week-end,
 Ptit Bleu.

 
 strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S")
  [1] 2009-03-26 11:21:31 2009-03-26 11:22:17 2009-03-26 11:27:18
 2009-03-26 11:36:59 2009-03-26 11:41:59 2009-03-26 11:46:59
  [7] 2009-03-26 11:51:59 2009-03-26 11:57:00 2009-03-26 12:02:00
 2009-03-26 12:07:00 2009-03-26 12:12:00 2009-03-26 12:17:00
 [13] 2009-03-26 12:22:00 2009-03-26 12:27:01 2009-03-26 12:32:01
 2009-03-26 12:37:01 2009-03-26 12:42:01 2009-03-26 12:47:01
 [19] 2009-03-26 12:52:01 2009-03-26 12:57:01 2009-03-26 13:02:02
 2009-03-26 13:07:02 2009-03-26 13:12:03 2009-03-26 13:17:03
 [25] 2009-03-26 13:22:03 2009-03-26 13:27:03 2009-03-26 13:32:03
 2009-03-26 13:37:03 2009-03-26 13:42:03 2009-03-26 13:47:03
 [31] 2009-03-26 13:52:03 2009-03-26 13:57:04 2009-03-26 14:01:02
 2009-03-26 14:06:05 2009-03-26 14:11:05 2009-03-26 14:16:06
 [37] 2009-03-26 14:21:06 2009-03-26 14:26:08 2009-03-26 14:31:09
 2009-03-26 14:36:10 2009-03-26 14:41:10 2009-03-26 14:46:15
 [43] 2009-03-26 14:51:15 2009-03-26 14:56:15 2009-03-26 15:01:15
 2009-03-26 15:06:17 2009-03-26 15:11:17 2009-03-26 15:16:19
 [49] 2009-03-26 15:21:19 2009-03-26 15:26:19 2009-03-26 15:31:22
 2009-03-26 15:36:23 2009-03-26 15:41:24 2009-03-26 15:46:24
 [55] 2009-03-26 15:51:25 2009-03-26 15:56:25 2009-03-26 16:01:25
 2009-03-26 16:06:26 2009-03-26 16:11:26 2009-03-26 16:16:26
 [61] 2009-03-26 16:21:27 2009-03-26 16:26:27 2009-03-26 16:31:28
 2009-03-26 16:36:28 2009-03-26 16:41:29 2009-03-26 16:46:30
 [67] 2009-03-26 16:51:31 2009-03-26 16:56:31 2009-03-26 17:01:32
 2009-03-26 17:06:32 2009-03-26 17:11:33 2009-03-26 17:16:33
 [73] 2009-03-26 17:21:33 2009-03-26 17:26:35 2009-03-26 17:31:36
 2009-03-26 17:36:36 2009-03-26 17:41:36 2009-03-26 17:46:36
 [79] 2009-03-26 17:51:39 2009-03-26 17:56:40 2009-03-26 18:01:40
 2009-03-26 18:06:40 2009-03-26 18:11:40 2009-03-26 18:16:40
 [85] 2009-03-26 18:21:41 2009-03-26 18:26:41 2009-03-26 18:31:41
 2009-03-26 18:36:41 2009-03-26 18:41:41 2009-03-26 18:46:41
 [91] 2009-03-26 18:51:42 2009-03-26 18:56:42 2009-03-26 19:06:42

 as.numeric(diff(strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S"),
 units="mins"))
  [1]  46 301 581 300 300 300 301 300 300 300 300 300 301 300 300 300 300 300
 300 301 300 301 300 300 300 300 300 300 300 300 301 238 303 300 301 300 302
 301
 [39] 301 300 305 300 300 300 302 300 302 300 300 303 301 301 300 301 300 300
 301 300 300 301 300 301 300 301 301 301 300 301 300 301 300 300 302 301 300
 300
 [77] 300 303 301 300 300 300 300 301 300 300 300 300 300 301 300 600

 
 strptime(datajour$Date, format="%Y-%m-%d %H:%M:%S")
  [1] 2009-03-26 11:22:24 2009-03-26 11:27:25 2009-03-26 11:37:04
 2009-03-26 11:42:04 2009-03-26 11:47:04 2009-03-26 11:52:04
  [7] 2009-03-26 11:57:04 2009-03-26 12:02:05 2009-03-26 12:07:06
 2009-03-26 12:12:06 2009-03-26 12:17:06 2009-03-26 12:22:06
 [13] 2009-03-26 12:27:07 2009-03-26 12:32:07 2009-03-26 12:37:07
 2009-03-26 12:42:07 2009-03-26 12:47:07 2009-03-26 12:52:08
 [19] 2009-03-26 12:57:08 2009-03-26 13:02:08 2009-03-26 13:07:09
 2009-03-26 13:12:09 2009-03-26 13:17:09 2009-03-26 13:22:09
 [25] 2009-03-26 13:27:09 2009-03-26 13:32:09 2009-03-26 13:37:09
 2009-03-26 13:42:09 2009-03-26 13:47:09 2009-03-26 13:52:09
 [31] 2009-03-26 13:57:10 2009-03-26 14:01:08 2009-03-26 14:06:11
 2009-03-26 14:11:11 2009-03-26 14:16:12 2009-03-26 14:21:12
 [37] 2009-03-26 14:26:15 2009-03-26 14:31:18 2009-03-26 14:36:18
 2009-03-26 14:41:19 2009-03-26 14:46:22 2009-03-26 14:51:22
 [43] 2009-03-26 14:56:23 2009-03-26 15:01:24 2009-03-26 15:06:24
 2009-03-26 15:11:24 2009-03-26 15:16:24 2009-03-26 15:21:24
 [49] 2009-03-26 15:26:24 2009-03-26 15:31:28 2009-03-26 15:36:29
 2009-03-26 

Re: [R] Ploting a matrix

2009-03-27 Thread Jorge Ivan Velez
Dear Sebastian,
Consider matplot() for this. Here is an example (taken from Baptiste
Auguie's post):

 date <- factor(letters[1:9])
 d <- data.frame(x1=seq(1, 9), x2=seq(2, 10), date=date)
 matplot(d[,-3],pch=16,xaxt='n',las=1,ylab='Some label here',xlab='Date')
 axis(1,d[,3],d[,3])
 legend('topleft',c('x1','x2'),pch=16,col=1:2)

See ?matplot, ?axis and ?legend for more information.

HTH,

Jorge


On Fri, Mar 27, 2009 at 7:05 AM, skrug sk...@ifm-geomar.de wrote:

 Hi evrybody,

 in a matrix consisting of 49 columns, I would like to plot all columns
 against the first in 48 different graphs.
 Can you help me?

 Thank you in advance
 Sebastian

 --

 ***

 Dipl. Biol. Sebastian Krug
 PhD - student
 IFM - GEOMAR
 Leibniz Institute of Marine Sciences
 Research Division 2 - Marine Biogeochemistry
 Düsternbrooker Weg 20
 D - 24105 Kiel
 Germany

 Tel.: +49 431 600-4282
 Fax.: +49 431 600-4446
 email: sk...@ifm-geomar.de



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R 2.8.1 and 2.9 alpha crash when running survest of Design package

2009-03-27 Thread Frank E Harrell Jr
We will try to quickly get out a new version of Design that checks for 
the version of survival that is installed and uses a different .C call 
accordingly.  This will involve ignoring (for now) the new weights 
option Terry has implemented.


Frank


Terry Therneau wrote:

A couple additions to Thomas's message.

  The 'survest' function in design directly called C routines in the survival 
package.  The argument list to the routines changed due to the addition of 
weights; calling a C routine with the wrong arguments is one of the more 
reliable ways to crash a program.  The simplest (short term) solution is to use 
survfit for your curves rather than survest.  Frank Harrell has been aware of 
the issue for several weeks and is working hard on solving it.  The simple fix 
is a few minutes, but he's thinking about how to avoid any future problems.  The 
C routines in survival change arguments VERY rarely, but directly calling the 
routines of another package is considered dangerous in general.
  
  Most breakage was less severe.  For instance there were a couple of errors in 
the PBC data set.  I fixed these, and also replaced all the 999 codes with NA 
to make it easier to use.  Some other packages use this data.  (My name is on 
most of the PBC papers and I have the master PBC data with all labs, patient id, 
etc, but I was not the source of the first data set).  
  
  We'll be keeping an eye on the R list as the package rolls out; sending a 
message directly to Thomas and/or I would also be appreciated for issues like 
this.
  
  	Terry Therneau






--
Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] print table (data.frame) to pdf

2009-03-27 Thread Greg Snow
See either textplot in the gplots package or addtable2plot in the plotrix 
package, or for even more flexibility learn Sweave or its variants.

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
 project.org] On Behalf Of Paulo E. Cardoso
 Sent: Friday, March 27, 2009 9:53 AM
 To: r-help@r-project.org; r-h...@stat.math.ethz.ch
 Subject: [R] print table (data.frame) to pdf
 
 How can I print a data.frame to a PDF with pdf()...dev.off()
 
 
 
 Paulo E. Cardoso
 
 
 
 
   [[alternative HTML version deleted]]
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sweave-output causes error-message in pdflatex

2009-03-27 Thread Duncan Murdoch

On 3/27/2009 10:12 AM, Gerrit Voigt wrote:

Dear list,
Latex/Sweave has trouble processing Sweave output coming from the 
summary command of a linear model.

 summary(lmRub)
The output line causing the trouble looks in R like this
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

In my Sweaved Tex-file that line looks like this
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1’ (actually 
in the editor the quotation signs are replaced by bars, but they got 
lost through copy & paste. I don't know if that says anything about my 
problem.)


In the error message produced through pdflatex, the quotation signs 
reappear.

Latex error-message:
! Package inputenc Error: Keyboard character used is undefined

(inputenc) in inputencoding `Latin1'.

See the inputenc package documentation for explanation.

Type  H <return>  for immediate help.

...

l.465 ...*’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

You need to provide a definition with \DeclareInputText

or \DeclareInputMath before using this key.


I hope anybody knows how I can prevent that error message. Thanks in 
advance.


You are running the code on a platform where the character used for one 
of the quote characters is unrecognized by LaTeX.  The simplest solution 
is to tell R not to use those characters, via executing


options(useFancyQuotes = FALSE)

early in your document.  See ?sQuote for more details.
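In an Sweave document that could look like the following (a sketch; the chunk name and options are arbitrary):

```r
## First code chunk of the .Rnw file, run silently before any summary():
## <<setup, echo=FALSE, results=hide>>=
options(useFancyQuotes = FALSE)
## @
```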

Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] functions flagged for debugging

2009-03-27 Thread Duncan Murdoch

On 3/27/2009 11:18 AM, Christos Hatzis wrote:

Hi,
 
Is there a way to find which functions are flagged for debugging in a given
session?



The isdebugged() function (which is new in 2.9.0) will tell you this.

Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] creating a matrix as input to lowess

2009-03-27 Thread Graves, Gregory
I have a very large file with many rows and columns.  I want to create a plot 
with lowess.  
 
If I try the following it works fine:

data(PrecipGL)

plot(PrecipGL)

lines(lowess(time(PrecipGL),PrecipGL),lwd=3, col=2)

 

In my file, 2 columns are nox and sdate, and are both typeof() = double.  
If I issue command 

plot(nox ~ sdate)

I can get a nice plot.

 

However if I try

lines(lowess(time(nox~sdate),nox~sdate),lwd=3, col=2)

it returns an error that it is not a matrix

 

if I try to extract these 2 columns into a matrix

mdat <- matrix(date, nox, byrow=TRUE)

doesn't work, and search for help did not work, so am posting.  

 

Obviously I am a newbee here.  Thanks for any help!!
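A sketch of a likely fix (an editorial assumption, not from the thread): lowess() wants x and y vectors, not a formula, and time() only applies to time series, so passing the two columns directly should work. Toy stand-ins for sdate and nox are used below:

```r
## Toy stand-ins for the file's sdate and nox columns
sdate <- as.numeric(seq(as.Date("2009-01-01"), by = "day", length.out = 50))
nox   <- rnorm(50, mean = 10)

plot(nox ~ sdate)
ok <- complete.cases(sdate, nox)                 # lowess() dislikes NAs
lines(lowess(sdate[ok], nox[ok]), lwd = 3, col = 2)
```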

 
Gregory A. Graves
Lead Scientist
REstoration COoordination and VERification (RECOVER) Division
Everglades Restoration Resource Area
South Florida Water Management District
Phones:  DESK: 561 / 681 - 2563 x3730
 CELL:  561 / 719 - 8157
 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Out of memory crash on 64-bit Fedora

2009-03-27 Thread Adam Wilson
Greetings all,

First of all, thanks to all of you for creating such a useful, powerful
program.

I regularly work with very large datasets (several GB) in R in 64-bit Fedora
8 (details below).  I'm lucky to have 16GB RAM available.  However if I am
not careful and load too much into R's memory, I can crash the whole
system.  There does not seem to be a check in place that will stop R from
trying to allocate all available memory (including swap space).  I have
system status plots in my task bar, which I can watch to see when all the
ram is taken and R then reserves all the swap space. If I don't kill the R
process before the swap hits 100%, it will freeze the machine.  I don't know
if this is an R problem or a Fedora problem (I suppose the kernel should be
killing R before it crashes, but shouldn't R stop before it takes all the
memory?).

To replicate this behavior, I can crash the system by allocating more and
more memory in R:
v1=matrix(nrow=1e5,ncol=1e4)
v2=matrix(nrow=1e5,ncol=1e4)
v3=matrix(nrow=1e5,ncol=1e4)
v4=matrix(nrow=1e5,ncol=1e4)

etc. until R claims all RAM and swap space, and crashes the machine.  If I
try this on a windows machine, eventually the allocation fails with an error
in R: "Error: cannot allocate vector of size XX MB".  This is much
preferable to crashing the whole system.  Why doesn't this happen in Linux?

Is there some setting that will prevent this?  I've looked though the
archives and not found a similar problem.

Thanks for any help.
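One workaround (a sketch; the flag name and value are assumptions to be checked against ?Memory for your R version) is to start R with an explicit allocation ceiling, so oversized requests fail inside R instead of exhausting RAM plus swap:

```r
## From a shell, start R with a cap on the vector heap, e.g.:
##   R --max-vsize=12000M
## Inside R, an oversized allocation then raises a catchable error
## rather than driving the machine into swap:
tryCatch(matrix(0, 1e5, 1e5),
         error = function(e) message("refused: ", conditionMessage(e)))
```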

Adam



The facts:
 sessionInfo()
R version 2.8.0 (2008-10-20)
x86_64-redhat-linux-gnu

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base
 version
   _
platform   x86_64-redhat-linux-gnu
arch   x86_64
os linux-gnu
system x86_64, linux-gnu
status
major  2
minor  8.0
year   2008
month  10
day20
svn rev46754
language   R
version.string R version 2.8.0 (2008-10-20)

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Random Forest Variable Importance

2009-03-27 Thread Li GUO
Hello,

I have an object of Random Forest : iris.rf (importance = TRUE).
What is the difference between iris.rf$importance and importance(iris.rf)? 
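One way to compare the two empirically (a sketch; per the randomForest documentation, importance() is the accessor and by default scales the permutation measures, while $importance holds the raw stored values):

```r
library(randomForest)   # assumed installed
set.seed(1)
iris.rf <- randomForest(Species ~ ., data = iris, importance = TRUE)

iris.rf$importance                  # raw measures, as stored in the object
importance(iris.rf)                 # accessor; scale = TRUE by default
importance(iris.rf, scale = FALSE)  # should match iris.rf$importance
```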

Thank you in advance,
Best,
Li GUO



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] about the Choleski factorization

2009-03-27 Thread Duncan Murdoch

On 3/27/2009 11:46 AM, 93354504 wrote:
Hi there, 


Given a positive definite symmetric matrix, I can use chol(x) to obtain U where 
U is upper triangular
and x=U'U. For example,

x=matrix(c(5,1,2,1,3,1,2,1,4),3,3)
U=chol(x)
U
#          [,1]      [,2]      [,3]
#[1,] 2.236068 0.4472136 0.8944272
#[2,] 0.000000 1.6733201 0.3585686
#[3,] 0.000000 0.0000000 1.7525492
t(U)%*%U   # this is exactly x

Does anyone know how to obtain L such that L is lower triangular and x=L'L? 
Thank you.

Alex

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


 rev <- matrix(c(0,0,1,0,1,0,1,0,0),3,3)
 rev
     [,1] [,2] [,3]
[1,]    0    0    1
[2,]    0    1    0
[3,]    1    0    0

(the matrix that reverses the row and column order when you pre and post 
multiply it).


Then

L <- rev %*% chol(rev %*% x %*% rev) %*% rev

is what you want, i.e. you reverse the row and column order of the 
Choleski square root of the reversed x:


 x
     [,1] [,2] [,3]
[1,]    5    1    2
[2,]    1    3    1
[3,]    2    1    4

 L <- rev %*% chol(rev %*% x %*% rev) %*% rev
 L
          [,1]      [,2] [,3]
[1,] 1.9771421 0.0000000    0
[2,] 0.3015113 1.6583124    0
[3,] 1.0000000 0.5000000    2
 t(L) %*% L
     [,1] [,2] [,3]
[1,]    5    1    2
[2,]    1    3    1
[3,]    2    1    4

Duncan Murdoch
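The same trick, wrapped as a general function (an editorial sketch, not from the thread; it assumes a symmetric positive-definite input):

```r
## Lower-triangular L with x = L'L, via the reversal (exchange) matrix
lower_chol <- function(x) {
  n <- ncol(x)
  J <- diag(n)[, n:1]                 # exchange matrix: reverses rows/columns
  J %*% chol(J %*% x %*% J) %*% J     # reversed Choleski factor is lower triangular
}

x <- matrix(c(5,1,2,1,3,1,2,1,4), 3, 3)
L <- lower_chol(x)
all.equal(t(L) %*% L, x)              # TRUE
```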

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] first time poster

2009-03-27 Thread rilleyel

hi, so, please bear with me as I am new to the wonderful world of
computers...

i am trying to answer the following question, and having no luck:
Focus your analysis on a comparison between respondents labeled “Low” (coded
1) on attend4 and respondents labeled “High” (coded 4). Then, examine the
variance of distributions. That is, run a command var.test.
I feel like I need to recode somehow and create 2 new variables, one for the
low responses, one for the high responses. I do not know how to 'get into'
the variable to deal with just the answers...

I hope this makes enough sense for someone out there to help me
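A sketch of the kind of thing var.test() expects; all names here (dat, attend4, outcome) are assumptions standing in for the actual data set:

```r
## Hypothetical data frame with the grouping variable and an outcome
dat <- data.frame(attend4 = sample(1:4, 200, replace = TRUE),
                  outcome = rnorm(200))

low  <- dat$outcome[dat$attend4 == 1]   # respondents coded "Low"
high <- dat$outcome[dat$attend4 == 4]   # respondents coded "High"
var.test(low, high)                     # F test comparing the two variances
```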

-- 
View this message in context: 
http://www.nabble.com/first-time-poster-tp22745190p22745190.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] General help for a function I'm attempting to write

2009-03-27 Thread Colin Garroway
Hello,

I have written a small function ('JostD' based upon a recent molecular
ecology paper) to calculate genetic distance between populations (columns in
my data set).  As I have it now I have to tell it which 2 columns to use (X,
Y).  I would like it to automatically calculate 'JostD' for all combinations
of columns, perhaps returning a matrix of distances.  Thanks for any help or
suggestions.
Cheers
Colin



Function:

JostD <- function(DF, X, Y) {

Ni1 <- DF[,X]
Ni2 <- DF[,Y]

N1 <- sum(Ni1)
N2 <- sum(Ni2)

pi1 <- Ni1/N1
pi2 <- Ni2/N2
pisqr <- ((pi1+pi2)/2)^2

H1 <- 1 - sum(pi1^2)
H2 <- 1 - sum(pi2^2)
Ha <- 0.5*(H1+H2)

Da <- 1/(1-Ha)
Dg <- 1/sum(pisqr)
Db <- Dg/Da
D <- -2*((1/Db) - 1)
D
}

Sample data:

e <- c(0,0,0,4,27)
r <- c(0,1,0,7,16)
t <- c(1,0,0,16,44)
y <- c(0,0,0,2,39)
df <- cbind(e,r,t,y)
rownames(df) <- q  # q and w are not shown in the post; from the output below,
colnames(df) <- w  # q is "L01.1".."L01.5" and w is "P01".."P04"

 df
  P01 P02 P03 P04
L01.1   0   0   1   0
L01.2   0   1   0   0
L01.3   0   0   0   0
L01.4   4   7  16   2
L01.5  27  16  44  39

 JostD(df, 1, 2)
[1] 0.0535215

 JostD(df, 1, 3)
[1] 0.02962404
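One way to get all pairwise values (a sketch building on JostD() and df as defined above; jost_matrix() is a made-up name):

```r
jost_matrix <- function(DF) {
  k <- ncol(DF)
  out <- matrix(0, k, k, dimnames = list(colnames(DF), colnames(DF)))
  for (i in seq_len(k - 1)) {
    for (j in (i + 1):k) {
      out[i, j] <- out[j, i] <- JostD(DF, i, j)   # distance is symmetric in X, Y
    }
  }
  out
}

## jost_matrix(df) returns a symmetric 4 x 4 matrix of pairwise distances
```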






-- 
Colin Garroway (PhD candidate)
Wildlife Research and Development Section
Ontario Ministry of Natural Resources
Trent University, DNA Building
2140 East Bank Drive
Peterborough, ON, K9J 7B8
Canada

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to get all iterations if I meet NaN?

2009-03-27 Thread huiming song
hi, everybody, please help me with this question:

If I want to do an iteration 1000 times but a NaN appears at, say, the 500th
iteration, then the iteration will stop. If I don't want the stop and want all
1000 iterations to be done, what shall I do?


Suppose I have x[1:1000] and z[1:1000], and I want to do some calculation for
all of x[1] to x[1000].

z=rep(0,1000)
for (i in 1:1000){
  z[i]=sin(1/x[i])
}

If x[900] is 0, the above code will not stop when the NaN appears. But suppose
sin(1/x[900]) produces a NaN (or an error) and the loop does not fulfill the
remaining 100 iterations. How can I write the code so that all 1000 iterations
are done?

Thanks!
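For the record, sin(1/0) yields NaN with a warning but does not abort a for loop; it is errors that stop execution, and those can be caught per iteration. A sketch (the toy x stands in for the question's data):

```r
x <- c(rnorm(999), 0)            # toy input with a troublesome entry
z <- rep(NA_real_, 1000)
for (i in seq_along(x)) {
  z[i] <- tryCatch(sin(1 / x[i]),
                   error = function(e) NA_real_)  # keep looping on errors
}
sum(is.na(z))   # NaN/NA entries are recorded while the loop completes
```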

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] about the Choleski factorization

2009-03-27 Thread Ravi Varadhan
You want a factorization of the form: A = L' L.  Am I right (we may name this 
a Lochesky factorization)?

By convention, Cholesky factorization is of the form A = L L', where L is a 
lower triangular matrix, and L', its transpose, is upper triangular. So, all 
numerical routines compute L according to this definition.  R gives you U = L', 
which is obviously upper triangular.

If you want to use a different definition: A = L' L, that is fine 
mathematically.  Although there is no easy way to transform the result of 
existing routines to get what you want, you can actually derive an algorithm to 
convert the standard factorization to the form you want.  Rather than go to 
this trouble, you might as well just code it up from scratch.  

The big question, of course, is why you want the Lochesky factorization.  It 
doesn't do anything special that the traditional Cholesky factorization can't do 
for a symmetric, positive-definite matrix (mainly, solving a system of equations).

Ravi.


Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: 93354504 93354...@nccu.edu.tw
Date: Friday, March 27, 2009 11:58 am
Subject: [R] about the Choleski factorization
To: r-help r-help@r-project.org


 Hi there, 
  
  Given a positive definite symmetric matrix, I can use chol(x) to 
 obtain U where U is upper triangular
  and x=U'U. For example,
  
  x=matrix(c(5,1,2,1,3,1,2,1,4),3,3)
  U=chol(x)
  U
  #          [,1]      [,2]      [,3]
  #[1,] 2.236068 0.4472136 0.8944272
  #[2,] 0.000000 1.6733201 0.3585686
  #[3,] 0.000000 0.0000000 1.7525492
  t(U)%*%U   # this is exactly x
  
  Does anyone know how to obtain L such that L is lower triangular and 
 x=L'L? Thank you.
  
  Alex
  
  __
  R-help@r-project.org mailing list
  
  PLEASE do read the posting guide 
  and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




Re: [R] about the Choleski factorization

2009-03-27 Thread Ravi Varadhan
Very nice, Duncan.

Here is a little function called loch() that implements your idea for the 
Lochesky factorization:

loch <- function(mat) {
    n <- ncol(mat)
    rev <- diag(1, n)[, n:1]
    rev %*% chol(rev %*% mat %*% rev) %*% rev
}

x <- matrix(c(5,1,2,1,3,1,2,1,4), 3, 3)

L <- loch(x)
all.equal(x, t(L) %*% L)

A <- matrix(rnorm(36), 6, 6)
A <- A %*% t(A)
L <- loch(A)
all.equal(A, t(L) %*% L)


Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: Duncan Murdoch murd...@stats.uwo.ca
Date: Friday, March 27, 2009 1:29 pm
Subject: Re: [R] about the Choleski factorization
To: 93354...@nccu.edu.tw
Cc: r-help r-help@r-project.org


 On 3/27/2009 11:46 AM, 93354504 wrote:
   Hi there, 
   
   Given a positive definite symmetric matrix, I can use chol(x) to 
 obtain U where U is upper triangular
   and x=U'U. For example,
   
   x=matrix(c(5,1,2,1,3,1,2,1,4),3,3)
   U=chol(x)
   U
   #          [,1]      [,2]      [,3]
   #[1,] 2.236068 0.4472136 0.8944272
   #[2,] 0.000000 1.6733201 0.3585686
   #[3,] 0.000000 0.0000000 1.7525492
   t(U)%*%U   # this is exactly x
   
   Does anyone know how to obtain L such that L is lower triangular 
 and x=L'L? Thank you.
   
   Alex
   
   __
   R-help@r-project.org mailing list
   
   PLEASE do read the posting guide 
   and provide commented, minimal, self-contained, reproducible code.
  
  rev <- matrix(c(0,0,1,0,1,0,1,0,0),3,3)
  rev
       [,1] [,2] [,3]
  [1,]    0    0    1
  [2,]    0    1    0
  [3,]    1    0    0
  
  (the matrix that reverses the row and column order when you pre- and 
  post-multiply by it).
  
  Then
  
  L <- rev %*% chol(rev %*% x %*% rev) %*% rev
  
  is what you want, i.e. you reverse the row and column order of the 
  Choleski square root of the reversed x:
  
  x
       [,1] [,2] [,3]
  [1,]    5    1    2
  [2,]    1    3    1
  [3,]    2    1    4
  
  L <- rev %*% chol(rev %*% x %*% rev) %*% rev
  L
            [,1]      [,2] [,3]
  [1,] 1.9771421 0.0000000    0
  [2,] 0.3015113 1.6583124    0
  [3,] 1.0000000 0.5000000    2
  t(L) %*% L
       [,1] [,2] [,3]
  [1,]    5    1    2
  [2,]    1    3    1
  [3,]    2    1    4
  
  Duncan Murdoch
  
  __
  R-help@r-project.org mailing list
  
  PLEASE do read the posting guide 
  and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [R-sig-hpc] snow Error.

2009-03-27 Thread luke

Recent versions of snow signal an error if the value returned from a
worker indicates an error.  The error handling facilities in snow are
still evolving; for now, if you don't want an error on a worker to
become an error on the master, you need to catch the error in the
worker yourself and produce an appropriate result, e.g. by replacing
MCexe with something like

function(...) tryCatch(MCexe(...), error = function(e) NULL)

if NULL is OK as a result when the MCexe computation produces an error.
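[A self-contained illustration of this wrapper pattern (a sketch: `MCexe` here is a stand-in function that fails on some inputs, and plain lapply() stands in for the cluster call, which works the same way):]

```r
# Stand-in for a worker function that fails on some inputs
MCexe <- function(x) if (x < 0) stop("bad input") else sqrt(x)

# The wrapper turns a worker error into a NULL result instead of
# letting it propagate back as an error on the master
safeMC <- function(...) tryCatch(MCexe(...), error = function(e) NULL)

# clusterApply(cl, inputs, safeMC) behaves analogously
results <- lapply(list(4, -1, 9), safeMC)
# results[[2]] is NULL; results[[1]] and results[[3]] are 2 and 3
```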

luke

On Fri, 27 Mar 2009, jgar...@ija.csic.es wrote:


Hello,

I have a program that used to run well in October, it uses library snow.
Since then, one change has ocurred (snow library has been updated) and
another could have ocurred (I've unadvertently modified something).

Anyway, now when I make the call:

parallel.model.results <- clusterApply(cl, processors.struct, MCexe)

exactly as I used to do, where MCexe is my function and processors.struct
is a list containing everything required by MCexe, I obtain the following
error:

Error in checkForRemoteErrors(val) :
 2 nodes produced errors; first error: incorrect number of dimensions

Please, do you have any clue about what could be the error?

Best regards,

Javier García-Pintado

___
R-sig-hpc mailing list
r-sig-...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-hpc



--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                  Phone: 319-335-3386
Department of Statistics and        Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                  email: l...@stat.uiowa.edu
Iowa City, IA 52242                 WWW:   http://www.stat.uiowa.edu


[R] constraint optimization: solving large scale general nonlinear problems

2009-03-27 Thread Florin Maican
Hi

I need advice regarding constrained optimization with a large number of
variables.

I need to solve the following problem

   max   f(x1,...,xn)
 x1,...,xn

subject to
   x1 <= g1(x1,...,xn)
   ...
   xn <= gn(x1,...,xn)

I am using the Rdonlp2 package, which works well with up to about 40
variables in my case. I need to solve this problem with over 300
variables, and at that size Rdonlp2 is very, very slow. I know that in
Matlab there is Knitro (http://www.ziena.com/knitro.htm) for large
optimization problems.

It would be great if you could suggest some alternative solutions.


Thanks in advance,
Florin



-- 
 Florin G. Maican
==

Ph.D. candidate,
Department of Economics,
School of Business, Economics and Law, 
Gothenburg University, Sweden   
---
P.O. Box 640 SE-405 30, 
Gothenburg, Sweden  

 Mobil:  +46 76 235 3039 
 Phone:  +46 31 786 4866 
 Fax:+46 31 786 4154  
 Home Page: http://maicanfg.googlepages.com/index.html
 E-mail: florin.mai...@handels.gu.se 

 Not everything that counts can be 
 counted, and not everything that can be 
 counted counts.
 --- Einstein ---

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Efficiency: speeding up unlist that is currently running by row

2009-03-27 Thread Dimitri Liakhovitski
Hello everyone!
I have a piece of code that works and does what I need but...:

# I have 3 slots:
nr.of.slots <- 3

# My data frame is new.a:
new.a <- data.frame(x=c("john","mary"), y=c("pete","john"),
                    z=c("mary","pete"), stringsAsFactors=FALSE)
print(new.a)

# Creating all possible combinations of the rows of new.a with all
# possible combinations of p1 and p2 in 3 locations (3 new columns):
big.a <- cbind(new.a[rep(1:nrow(new.a), each=8), ],
               expand.grid(paste("p",1:2,sep=""), paste("p",1:2,sep=""),
                           paste("p",1:2,sep=""))[rep(1:8, nrow(new.a)), ])
print(big.a)

# Making sure the last 3 columns are characters, not factors:
for(i in 1:nr.of.slots) { big.a[[(i+3)]] <- as.character(big.a[[(i+3)]]) }
str(big.a)

# Creating a final data frame with as many columns as slots (i.e., 3);
# each cell contains the name of a person and p1 or p2:
output <- data.frame(matrix(nrow = nrow(big.a), ncol = nr.of.slots))
for(i in 1:nr.of.slots) {
  names(output)[i] <- paste("slot", i, sep=".")
}

# THIS IS THE SECTION OF THE CODE I HAVE A QUESTION ABOUT:
for(i in 1:nr.of.slots) {
  output[[i]] <- lapply(1:nrow(big.a), function(x) {
    out <- unlist(c(big.a[x, i], big.a[x, i+nr.of.slots]))
    return(out)
  })
}
print(output)

# This is exactly the output I am looking for: each cell of output
# contains just 2 words:
print(output[1,1])
str(output[1,1])


MY QUESTION:
The section of the code above in which I am running unlist loops through
rows. My problem is that in my real data frame I'll have over a million
rows and more than 3 columns in output, and this is very slow. Is it at
all possible to speed it up, for example by merging (pairwise) whole
columns of the data frame instead of going row by row?

Thank you very much for any advice!
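[A column-wise sketch (an illustration, not part of the original post): pair each name column with its p1/p2 column by applying c() down two whole columns at once with mapply(), avoiding the per-row lapply. This produces the same list-of-pairs cells as the row-by-row version.]

```r
# Rebuild the example data from the post
nr.of.slots <- 3
new.a <- data.frame(x=c("john","mary"), y=c("pete","john"),
                    z=c("mary","pete"), stringsAsFactors=FALSE)
big.a <- cbind(new.a[rep(1:nrow(new.a), each=8), ],
               expand.grid(paste("p",1:2,sep=""), paste("p",1:2,sep=""),
                           paste("p",1:2,sep=""))[rep(1:8, nrow(new.a)), ])
for (i in 1:nr.of.slots) big.a[[i+3]] <- as.character(big.a[[i+3]])

# Column-wise pairing: one mapply call per slot instead of one
# unlist call per row
output <- vector("list", nr.of.slots)
for (i in 1:nr.of.slots) {
  output[[i]] <- mapply(c, big.a[[i]], big.a[[i + nr.of.slots]],
                        SIMPLIFY = FALSE, USE.NAMES = FALSE)
}
# output[[1]][[1]] is c("john", "p1"), as in the row-by-row version
```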

-- 
Dimitri Liakhovitski
MarketTools, Inc.
dimitri.liakhovit...@markettools.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

