Hi
your code is strange and not reproducible: we do not have your data,
and h is not defined before you use it in h <- rbind(h, z).
Based on what you say about your problem, I would probably try to
split the data frame into a list according to "i". Then I would shuffle
the columns in each part of the list and then I would
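The split/shuffle idea described above might look like the sketch below. The data frame, the grouping column "i", and the other column names are all assumptions, since no data was posted; note also that an incremental h <- rbind(h, z) loop would need h <- NULL before it starts.

```r
# Hypothetical data standing in for the poster's data frame
set.seed(1)
df <- data.frame(i = rep(1:2, each = 3), a = 1:6, b = 7:12)

parts <- split(df, df$i)                  # one data frame per level of "i"
shuffled <- lapply(parts, function(p) {
  p[-1] <- p[sample(names(p)[-1])]        # permute the non-key columns
  p
})
out <- do.call(rbind, shuffled)           # reassemble in one call, no rbind loop
```

do.call(rbind, ...) avoids the grow-by-rbind loop entirely.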
Hi,
I'm trying to analyse some time series data on dissolved organic
nitrogen. Because it has gaps in it, I try to interpolate (linearly) with
the regul method from the pastecs package.
I have a number of stations with measurement series in a matrix
constructed from a data frame with the unstack
> "Daniel" == Daniel Sarrazin <[EMAIL PROTECTED]>
> on Thu, 26 Oct 2006 23:34:53 -0400 writes:
Daniel> I'm on mandriva 2006
Daniel> I did:
Daniel> urpmi R-2.0.0-1mdk.i586.rpm
I'm not really helping,
but in *ANY* case you should not install an antique version of R,
unless y
On Thu, 2006-10-26 at 17:17 -0700, lvdtime wrote:
> Please, I need this information, it's important for my work
By that, do you mean that your time/work is more important than that of
the other members of this list?
Anyway, yes, there is an equivalent of label(X[, i]). You can use
colnames, as in
> "lvdtime" == lvdtime <[EMAIL PROTECTED]>
> on Thu, 26 Oct 2006 17:17:01 -0700 (PDT) writes:
lvdtime> Please, I need this information, it's important for my work
Well, then do your homework (read the posting guide and follow it!!),
instead of sending this again to the several t
Dear all,
glht (from the multcomp package) needs a term and a model component
in its fitted model.
In fitted models from e.g. repeated-measures ANOVAs I find neither
a model nor a term component.
Is it possible to build a model and term component together myself,
so that glht will work for
I have the following Data structure
$ step45 : Factor w/ 2 levels
$ obserror : num 6.2 6.2 5.6 6.6 6.6 ...
$ Mon: num 2.2 2.0 1.0 3.2 2.0 ...
$ inc.comp : num 4 5 2 5 5 5 5 5 4 4 ...
All I wanted to do is plot Mon against obserror; the colors should
be given by step45 and the siz
Dear R users,
I would like to sort elements of a matrix by row and use this ordering to
also sort another matrix. I am trying to post-order the means of components
for a mixture model and would also like to do the same for the component
probabilities. This is what I have tried thus far, but I do
Hello.
I am trying to compute a "White-corrected" ANOVA. For this I use the command "Anova"
implemented in the car package:
Anova(modelname, white.adjust="hc3")
and receive the error message (German):
"Fehler in SS[i] <- SS.term(names[i]) : nichts zu ersetzen"
which translates to: "Error in SS[i] <- SS.term(names[i]) : nothing to replace"
Finally I got the unreasonable contour plotted.
http://www.geocities.com/useebi/data/kdecontour.jpeg
The codes to get the plot,
cdplot2d(survived ~ age + sibsp, data = titanic3)
are in http://www.geocities.com/useebi/data/cdplot2d.txt
One step that I couldn't accomplis
Since I had too much junk cluttering my computer, I recently re-set it
back to factory settings, in the process eliminating R. I now have to
reload R and packages, and would like to do that via the install.views
command. I have successfully downloaded the package "ctv" but R does
not recognize in
On Fri, 2006-10-27 at 19:36 +0900, Pierre D'Ange wrote:
> Since I had too much junk cluttering my computer, I recently re-set it
> back to factory settings, in the process eliminating R. I now have to
> reload R and packages, and would like to do that via the install.views
> command. I have succe
Hi all,
I have a .csv file of size 272 MB and 512 MB of RAM, working on Windows XP.
I am not able to import the csv file:
R hangs (stops responding), and even SciViews hangs.
I am using read.csv(FILENAME, sep=",", header=TRUE). Is there any way to
import it?
I have tried the archives already but I
for example, I have two sets, x and y.
I want to draw their histograms using different colors in a graph.
I didn't find how to do this by reading ?hist
Thanks very much.
__
R-help@stat.math.ethz.ch mailing li
On 10/26/2006 11:35 PM, lan gao wrote:
> Hi, all,
> Can anyone give me the steps to call a Fortran routine from Visual Fortran
> in R?
>
You might want to look at
http://www.stats.uwo.ca/faculty/murdoch/software/compilingDLLs/
It has a couple of entries about Compaq Visual Fortran. I did
At 19:59 26/10/2006, karen xie wrote:
>Dear List,
>
>I try to implement a latent class model with an unknown number of
>classes. I wonder whether someone can provide me with some sample code.
RSiteSearch("latent class")
brings up 160 documents some of which may be relevant
>Thank you for your hel
On Fri, 27 Oct 2006 19:36:38 +0900 Pierre D'Ange wrote:
> Since I had too much junk cluttering my computer, I recently re-set it
> back to factory settings, in the process eliminating R. I now have to
> reload R and packages, and would like to do that via the install.views
> command. I have succ
"Hu Chen" <[EMAIL PROTECTED]> writes:
> for example, I have two sets, x and y.
> I want to draw their histograms using different colors in a graph.
> I didn't find how to do this by reading ?hist
> Thanks very much.
You can't do that because it looks horrible ;-)
Actually, you can, because plot.
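A minimal base-graphics sketch of the overlay the reply hints at (the data and colors here are made up; the semi-transparent fills assume a device that supports alpha, e.g. png() or windows()):

```r
set.seed(42)
x <- rnorm(200)
y <- rnorm(200, mean = 1)

# Draw the first histogram, then add the second on the same axes
hist(x, col = rgb(1, 0, 0, 0.5), xlim = range(c(x, y)),
     main = "x (red) vs y (blue)", xlab = "value")
hist(y, col = rgb(0, 0, 1, 0.5), add = TRUE)
```

The add = TRUE argument is what puts both histograms in one plot; the shared xlim keeps the bins comparable.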
On 10/27/06, Stefan Grosse <[EMAIL PROTECTED]> wrote:
> I have the following Data structure
>
> $ step45 : Factor w/ 2 levels
> $ obserror : num 6.2 6.2 5.6 6.6 6.6 ...
> $ Mon: num 2.2 2.0 1.0 3.2 2.0 ...
> $ inc.comp : num 4 5 2 5 5 5 5 5 4 4 ...
>
> all I wanted to do is plott
> Finally I got the unreasonable contour plotted.
>
> http://www.geocities.com/useebi/data/kdecontour.jpeg
>
> The codes to get the plot,
>
> cdplot2d(survived ~ age + sibsp, data = titanic3)
>
> are in http://www.geocities.com/useebi/data/cdplot2d.txt
>
> One step that I couldn't accompl
Hi
maybe
swap<-function(x) x[,2:1]
can be of some help
sel<-which(mean.data[,1]>mean.data[,2])
dfm<-mean.data
dfm[sel,]<-swap(dfm[sel,])
all.equal(dfm, sorted.mean)
HTH
Petr
On 27 Oct 2006 at 10:10, Vumani Dlamini wrote:
From: "Vumani Dlamini" <[EMAIL PROTECTED]>
To:
Hi,
I have generated a profile likelihood for a parameter (x) and am
trying to get 95% confidence limits by calculating the two points
where the log likelihood (LogL) is 2 units less than the maximum
LogL. I would like to do this by linear interpolation and so I have
been trying to use th
Dear Kilian,
This error was introduced when the linear.hypothesis function was modified,
and I've not yet fixed it because I'm not entirely sure what's the best way
to proceed. It's a bad idea simply to allow Anova() to throw a cryptic
error, so I'll try to resolve the issue some time in the next
here an example with the car data
qplot(drat,wt,data=mtcars,shape=as.factor(carb),col=am,size=3)
it gets a bit better if one takes away the as.factor (but that of course
changes other things)
The picture is saved via metafile; a screenshot did not work (mostly black,
for whatever reason). I know it's deleted for
hadley wickham schrieb:
>
> I think you mean
>> qplot(obserror,Mon,data=obscomp, size=inc.comp,col=step45)
Yes sorry, that was a typo (from playing around with the options)
>> unfortunately the size of the is something I do not want it to be, the
>> legend for inc.comp says: 4, 2.25, 1 ,0.25 , 0
Hi all,
I have a dataset that has this stem-and-leaf plot
0 | 1123
0 | 55699
1 | 0033
1 | 6677
2 | 011123344
2 | 55566677888
What would be the R formulae for a two-sided test?
I have a formula for a one-sided test:
powertest <- function(a,m0,m1,n,s){
t1 = -qnorm(1-a)
num = abs(m0-m1) * sqrt(n)
t2 = num/s
pow = pnorm(t1 + t2)
}
Would you please let me know if you know of one?
Thank you,
ej
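A hedged sketch of a two-sided analogue of the one-sided powertest above, using the same argument names and the usual normal approximation (a starting point, not a definitive answer):

```r
powertest2 <- function(a, m0, m1, n, s) {
  crit <- qnorm(1 - a / 2)            # two-sided critical value
  z <- abs(m0 - m1) * sqrt(n) / s     # standardized effect size
  # power = P(reject) when the true mean is shifted by z
  pnorm(-crit + z) + pnorm(-crit - z)
}
```

A quick sanity check: with m0 == m1 the function returns exactly the significance level a.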
Ethan Johnsons wrote:
> What would be the R formulae for a two-sided test?
>
> I have a formula for a one-sided test:
>
> powertest <- function(a,m0,m1,n,s){
> t1 = -qnorm(1-a)
> num = abs(m0-m1) * sqrt(n)
> t2 = num/s
> pow = pnorm(t1 + t2)
> }
>
> Would you pls let me know if you know of?
RSi
"Ethan Johnsons" <[EMAIL PROTECTED]> writes:
> What would be the R formulae for a two-sided test?
>
> I have a formula for a one-sided test:
>
> powertest <- function(a,m0,m1,n,s){
> t1 = -qnorm(1-a)
> num = abs(m0-m1) * sqrt(n)
> t2 = num/s
> pow = pnorm(t1 + t2)
> }
>
> Would you pls let me k
> here an example with the car data
> qplot(drat,wt,data=mtcars,shape=as.factor(carb),col=am,size=3)
>
> it gets a bit better if one takes away the as.factor (but that of course
> changes other things)
Thanks - that's definitely a bug. I was summing the width of the
legends, instead of taking the
Thank you so much for the explanation, Chuck & Peter.
Two quick questions, please.
It states that delta = true difference in means. When the true difference
is unknown, can you use the expected difference for delta?
If you want to know the n (number of observations) from power.t.test
to have e.g. 80% power
Hi,
How do I measure goodness of fit when using the rq() function of quantreg? I
need something like an R^2 for quantile regression: a single number which tells
me whether the fit of the whole quantile process (not only a single quantile) is
OK or not.
Is it possible to compare the (condi
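For a single tau, one measure in the literature is the Koenker-Machado pseudo-R^2 (often called R1), which compares the check-loss of the fitted model to that of an intercept-only fit. A sketch using the engel data shipped with quantreg:

```r
library(quantreg)
data(engel)

tau  <- 0.5
fit  <- rq(foodexp ~ income, tau = tau, data = engel)   # full model
fit0 <- rq(foodexp ~ 1,      tau = tau, data = engel)   # intercept only

rho <- function(u, tau) sum(u * (tau - (u < 0)))        # check (pinball) loss
R1  <- 1 - rho(resid(fit), tau) / rho(resid(fit0), tau)
R1   # in [0, 1]; larger means a better fit at this tau
```

This covers one quantile at a time, not the whole quantile process, so it only partly answers the question above.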
Does this do what you want?
> f.LogL <- approxfun(dat$x, dat$LogL, ties='ordered')
> f.2 <- function(x) abs(max(dat$LogL) - f.LogL(x) - 2) # find crossing at 2
> optimize(f.2, c(0,2)) # one minimum
$minimum
[1] 1.505855
$objective
[1] 2.293820e-05
> optimize(f.2, c(0,.1)) # other minimum
$mi
I have approximated a function y = f(x) by sampling at different
values of x. The curve produced by joining up the points crosses y = 0
at two points, which are within the range of x values sampled,
and I would like to estimate (approximately) the x values at these
points. I think this
Hi,
I am new to R community and I have a question on panel configurations in
the Trellis package.
Particularly, I have the following code:
require(lattice)
plotTable <- NULL
Date <- seq(as.Date("2006-11-01"), as.Date("2009-12-01"), by = 1)
nYear <- length(unique(format(Date,"%Y")))
plotTable$Date
A quick answer to your questions:
1. Since nobody knows the "true" delta, I prefer to calculate the power for a
range of deltas, most of the time spanning -2 * expected delta up
to 2 * expected delta. This gives an idea of how the power changes if delta
changes.
2. ?power.t.test ex
Oh...
power.t.test has the magic in it, which I overlooked.
Thank you so much.
ej
On 10/27/06, ONKELINX, Thierry <[EMAIL PROTECTED]> wrote:
> A quick answer to your questions:
>
> 1. Since nobody knows the "true" delta. I prefer to calculate the power for a
> range of deltas. Most of the time
Hi all,
I have a character vector and would like to collapse the blanks if there is
more than one after the other.
Example:
Character value is: "abc  def  ghi"
The result should be: "abc def ghi"
I know that it's possible to delete the leading blanks with the command "trim".
But how can
> x <- "abc  def  ghi"
> gsub(" +", " ", x)
[1] "abc def ghi"
>
On 10/27/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Hi all
>
> I'm have a character vector and would like to suppress the blanks if there
> are more than one after the other.
>
> Example:
>
> Character value is: "abc
I have a pdf scan of several pages of data from a quite famous old
paper by
C.S. Pierce (1873). I would like (what else?) to convert it into an
R dataframe.
Somewhat to my surprise the pdf seems to already be in a character
recognized
form, since I can search for numerical strings and they a
Hi all,
how can I divide two tables of the same dimension so that all names are
preserved, ie do not become NA? I have "tab1" and "tab2", each having
names in the first column. I want "tab3" with the same names and values
"tab1/tab2".
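One sketch using merge(), assuming each table is a data frame whose first column "name" is the key (the tables and values below are made up):

```r
tab1 <- data.frame(name = c("a", "b"), value = c(2, 4))
tab2 <- data.frame(name = c("b", "a"), value = c(2, 2))

# merge() aligns rows by the key, so order differences do not produce NAs
m    <- merge(tab1, tab2, by = "name", suffixes = c(".1", ".2"))
tab3 <- data.frame(name = m$name, value = m$value.1 / m$value.2)
tab3
```

Aligning by the key first is what preserves the names; dividing the raw tables positionally would silently mismatch rows.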
Thanks,
Serguei
Hello,
I'm currently experiencing some problems with the odbcCloseAll() and
odbcClose() functions. I'm trying to connect an R script to a MySQL 5.0 database
using RODBC 1.1-7 and the MySQL ODBC Driver v3.51 on a Windows XP machine. At
first everything seems fine. The script connects, reads and
On Fri, 2006-10-27 at 14:30 +0200, Christian Hager wrote:
> Hello,
>
> I'm currently experiencing some problems with the odbcCloseAll() and
> odbcClose() function. I'm trying to connect an R script to a MySQL 5.0
> database using RODBC1.1-7 and the MySQL ODBC Driver v.3.51 on a
> Windows XP Machi
You can cut execution time by another 50% by using crossprod.
> n <- 1000
> a <- matrix(rnorm(n*n),n,n)
> b <- matrix(rnorm(n*n),n,n)
> system.time(print(sum(diag(a %*% b))))
[1] -905.0063
[1] 8.120 0.000 8.119 0.000 0.000
> system.time(print(sum(a*t(b))))
[1] -905.0063
[1] 1.510 0.000 1.514 0.0
Dear All:
I run a logistic regression (using lrm in the Design package), and
after that, I use the command "summary" to get the marginal effects
of each variable. But one strange thing happens on my binary
dependent variable: The marginal effect of it jumping from 0 to 1 is
1.77. I believe
I don't have specific experience with this but strapply
of package gsubfn can extract information from a string by content
as opposed to delimiters. e.g.
> library(gsubfn)
> strapply("abc34def56xyz", "[0-9]+", c)[[1]]
[1] "34" "56"
On 10/27/06, roger koenker <[EMAIL PROTECTED]> wrote:
> I have a
On Oct 27, 2006, at 5:54 PM, Minyu Chen wrote:
> Dear All:
>
> I run a logistic regression (using lrm in the Design package), and
> after that, I use the command "summary" to get the marginal effects
> of each variable. But one strange thing happens on my binary
> dependent variable: The ma
Hi,
Suppose I have a multivariate response Y (n x k) obtained at a set of
predictors X (n x p). I would like to perform a linear regression taking
into consideration the covariance structure of Y within each unit - this
would be represented by a specified matrix V (k x k), assumed to be the sa
Dear All:
Sorry if I duplicated the mail; I just registered and did not know
whether the former mail went through.
I run a logistic regression (using lrm in the Design package), and
after that, I use the command "summary" to get the marginal effects
of each variable. But one strange thing
On 10/27/06, Zheleznyak, Anatoley <[EMAIL PROTECTED]> wrote:
> Hi,
> I am new to R community and I have a question on panel configurations in
> the Trellis package.
> Particularly, I have the following code:
>
> require(lattice)
> plotTable <- NULL
> Date <- seq(as.Date("2006-11-01"), as.Date("2009
No, all queries are finished.
Minyu Chen wrote:
> Dear All:
>
> I run a logistic regression (using lrm in the Design package), and
> after that, I use the command "summary" to get the marginal effects
> of each variable. But one strange thing happens on my binary
> dependent variable: The marginal effect of it jumping fr
At the office I have been introduced by another company to new, complex energy
forecasting models using gams as the basic software.
I have been told by the company offering the models that gams is specialised
in dealing with huge, heavy-weight linear and non-linear modelling (see an
example in http:
Can you be more specific about what you mean by "gams"? Do you mean
generalized additive models (GAM)? If so, R is a good environment for
forecasting models and GAM. However, the link that you provided is NOT for
generalized additive modeling, but it is for General Algebraic Modeling
System (GAM
On Fri, 2006-10-27 at 19:04 +0200, Christian Hager wrote:
> No, all queries are finished.
Then, I don't know. Can you write and send a short code example which
reproduces the problem on your side so we can try to diagnose? The
simpler the code, the more likely you'll get help.
Cheers,
Jerome
--
On 10/27/06, Zheleznyak, Anatoley <[EMAIL PROTECTED]> wrote:
> Hi,
> I am new to R community and I have a question on panel configurations in
> the Trellis package.
> Particularly, I have the following code:
>
> require(lattice)
> plotTable <- NULL
> Date <- seq(as.Date("2006-11-01"), as.Date("2009
Hello,
Suppose we need a function that takes a POSIXct object and calculates
the time difference between it and GMT time:
gmtDiff <- function(time) {
time.gmt <- as.POSIXct(format(time, tz="GMT"))
time.plt <- as.POSIXlt(time)
dlstime <- ifelse(time.plt$isdst > 0, 1, 0)
tim
> c <- "abc  def  ghi"
> gsub(" +"," ", c)
[1] "abc def ghi"
>
On Fri, 27 Oct 2006 [EMAIL PROTECTED] wrote:
> Hi all
>
> I'm have a character vector and would like to suppress the blanks if there
> are more than one after the other.
>
> Example:
>
> Character value is: "abc  def  ghi"
> T
TFM (the "R Data Import/Export" manual in this case) can be a better place
to look than the archive. Trying colClasses in read.csv()
might help.
Andy
From: [EMAIL PROTECTED]
>
> hi All,
>
> i have a .csv of size 272 MB and a RAM of 512MB and working
> on windows XP.
> I am not able t
Try this:
gmtDiff <- function(time) time - as.POSIXct(format(time), tz = "GMT")
gmtDiff(Sys.time())
gmtDiff(as.POSIXct("2006-10-27", tz = "GMT"))
which both give me the correct answer currently.
The expression after the minus sign comes from the table at the end
of the help desk article in R Ne
Thank you very much: this command is exactly what I need. I only had
something in the back of my mind that there is another command, similar to trim,
to suppress blanks/spaces.
However, the gsub command works great...
> c <- "abc  def  ghi"
> gsub(" +"," ", c)
[1] "abc def ghi"
>
On F
I am so surprised to hear that "gams is specialised in dealing with huge,
heavy-weight linear and non-linear modelling". That is not what I know
about GAM, which means generalized additive model.
On 10/27/06, vittorio <[EMAIL PROTECTED]> wrote:
> At office I have been introduced by another company
On 10/27/06, Wensui Liu <[EMAIL PROTECTED]> wrote:
> I am so surprised to hear that "gams is specialised in dealing with huge,
> heavy-weight linear and non-linear modelling". That is not what I know
> about GAM, which means generalized additive model.
That would be because the original question wa
The people presenting the models cited as a reference the site www.gams.com
which, as you said, is about high-level modeling system for mathematical
programming and optimization.
Vittorio
Alle 17:52, venerdì 27 ottobre 2006, Ravi Varadhan ha scritto:
> Can you be more specific about what you me
Thanks for your suggestions. Trial-and-error experimentation
with Adobe Acrobat produced the following method:
it looks like it is possible to highlight the numerical part of the
table in Acrobat and then copy/paste into a text file, with about
98 percent accuracy. Wonders never cease.
url:
Hi,
I wonder if it would make sense to make uniroot detect zeros at the
endpoints, eg
if f(lower)==0, return lower as the root, else
if f(upper)==0, return upper as the root, else
stop if f(upper)*f(lower) > 0 (currently it stops if >=), else
proceed to the algorithm proper.
Currently I am using
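The proposed endpoint check can be sketched as a user-level wrapper (an illustration of the suggested behaviour, not a patch to uniroot itself):

```r
uniroot2 <- function(f, lower, upper, ...) {
  if (f(lower) == 0) return(list(root = lower, f.root = 0))
  if (f(upper) == 0) return(list(root = upper, f.root = 0))
  # fall through to the stock algorithm, which requires a sign change
  uniroot(f, c(lower, upper), ...)
}

uniroot2(function(x) x^2 - 1, 1, 2)$root   # zero sits at the lower endpoint
```

With the plain uniroot this call would stop, because f(lower) * f(upper) is not negative.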
System: R 2.3.1 on a Windows XP computer.
I am validating several cancer prognostic models that have been
published with a large independent dataset. Some of the models report a
probability of survival at a specified timepoint, usually at 5 and 10
years. Others report only the linear predictor
I am using the read.table function to load an Excel data set into R.
It has a few variables with very long qualitative responses (free text,
typically sentences) that I would like to keep, but I would
like to limit the "length" of the response that R shows. Is there some
sort of string or column
Since the databases that I work with are very large, I need answers to some
questions:
1) Can I work with R data frames like databases in SAS, that is, with data
frames stored in files instead of allocated in memory?
2) Can I use SQL code to work with R data frames? Note that I ask for the
d
Can I please ask a quick question again on this?
Is there a power test function for the z-test? Obviously, ?power.z.test
does not give me anything.
thx much
ej
On 10/27/06, ONKELINX, Thierry <[EMAIL PROTECTED]> wrote:
> A quick answer to your questions:
>
> 1. Since nobody knows the "true" delta.
I don't know if there is one, but if you use the t-test with df greater than 30,
you will get answers very close to those for the normal, because the tables get
pretty close after df of 30. I guess to be safe you can set df to some huge
number.
Dear list,
A while ago, I posted a question asking how to use data or subset
arguments in a user-defined function. Duncan Murdoch suggested the
following solution in the context of a data argument:
data <- data.frame(a=c(1:10),b=c(1:10))
eg.fn <- function(expr, data) {
x <- eva
Dear Davia,
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Davia Cox
> Sent: Friday, October 27, 2006 3:37 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Qualitative Data??(String command)
>
> I am using the read.table function to load an Excel
On 10/27/2006 5:18 PM, Manuel Morales wrote:
> Dear list,
>
> A while ago, I posted a question asking how to use data or subset
> arguments in a user-defined function. Duncan Murdoch suggested the
> following solution in the context of a data argument:
>
> data <- data.frame(a=c(1:10),b=c(1:10))
Hi,
I have a vector which contains the CDF of a univariate distribution.
The support of the rv is [0, 1], so e.g. if the vector has 101 elements,
they give the CDF at 0, 0.01, ..., 1. The function is quite smooth. [1]
I would like to calculate (approximate) the density. I tried fitting
a splinefun and
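One sketch of the spline route: interpolate the CDF with a monotone spline and take its derivative. pbeta() below is just a stand-in for the posted 101-element CDF vector.

```r
grid <- seq(0, 1, by = 0.01)
cdf  <- pbeta(grid, 2, 3)          # stand-in for the posted CDF values

# method = "hyman" keeps the interpolating spline monotone, so its
# derivative (the density estimate) cannot go negative
sf   <- splinefun(grid, cdf, method = "hyman")
dens <- sf(grid, deriv = 1)        # density estimate on the grid
```

An unconstrained spline fit to a CDF can wiggle and produce negative "densities"; the monotone method avoids that.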
Has anyone been able to read a Google Spreadsheet into R?
RODBC permits reading from a Microsoft Excel (.xls) file.
Copying to the clipboard also works. However, that is a nuisance unless
one does it only a few times. For ongoing repeated analysis on a
collaborative spreadsheet that changes a few tim
Serguei Kaniovski <[EMAIL PROTECTED]> writes:
> Hi all,
>
> how can I divide two tables of the same dimension so that all names are
> preserved, ie do not become NA? I have "tab1" and "tab2", each having
> names in the first column. I want "tab3" with the same names and values
> "tab1/tab2".
Exp
Any answer to this question will be insufficient without more detail
regarding the nature of the models you are investigating.
As mentioned in prior posts GAMS does solve large-scale and
computationally intensive optimization and math programming problems.
One strength is its ability to inte
"Leeds, Mark (IED)" <[EMAIL PROTECTED]> writes:
> I don't know if there is one, but if you use the t-test with df greater than
> 30, you will get answers very close to those for the normal, because the
> tables get pretty close after df of 30. I guess to be safe you can set df
> to some huge
Would someone be kind enough to paste the code below into an R session (or you
can paste it into a file and just source it) and take a look at it? I
must be doing something wrong but
I can't find it.
I start out with a zoo object that has 100 elements in it.
Then, I only want to keep the ro
On Fri, 27 Oct 2006 14:55:15 -0400,
"Gabor Grothendieck" <[EMAIL PROTECTED]> wrote:
> Try this: gmtDiff <- function(time) time - as.POSIXct(format(time), tz =
> "GMT")
> gmtDiff(Sys.time()) gmtDiff(as.POSIXct("2006-10-27", tz = "GMT"))
> which both give me the correct answer currently.
> The ex
As Jim pointed out (I think we were figuring this out simultaneously;
thanks a lot, Jim), it looks like it does have something to do with the
fact that it's a zoo object, because below I consider two cases.
In the first case, fxdatab is a zoo object and I get the length of temp
to be 1.
In the second
Also, I have a typo on the fifth-to-last line of what is pasted below
(it should be fxdatac, not fxdatab), but it doesn't matter; what I said about
the results still holds. Thanks.
From: Leeds, Mark (IED)
Sent: Friday, October 27, 2006 9:56 PM
To: r-help@stat.math
zoo objects must have unique index values but the last two in the head
output below are the same:
> head(fxdata)
bid ask
2006-04-03 03:30:00 27.21 27.26
2006-04-03 03:46:42 27.21 27.26
2006-04-03 03:46:54 27.25 27.26
2006-04-03 03:57:08 27.55 27.26
2006-04-03 04:00:00 27.50
so my guess is that if I just unique it first, everything will work
fine. Oh geez. Thanks so much.
-Original Message-
From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
Sent: Friday, October 27, 2006 10:03 PM
To: Leeds, Mark (IED)
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] really stra
To make it unique one normally uses aggregate. Depending on what you
want we aggregate over equal times using mean or tail(x, 1), say:
aggregate(fxdata, time(fxdata), mean)
or
aggregate(fxdata, time(fxdata), tail, 1)
On 10/27/06, Leeds, Mark (IED) <[EMAIL PROTECTED]> wrote:
> so my gues
Thanks, Gabor: I was filtering first and then aggregating, so I just have
to switch my lines and everything should be fine. I'm sorry for all the
confusion. Jim: I'm also sorry that you spent time on it. Thanks though.
mark
-Original
You can use gee (
http://finzi.psych.upenn.edu/R/library/geepack/html/00Index.html) or maybe
the function gls in nlme.
Ritwik.
On 10/27/06, Ravi Varadhan <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
>
>
> Suppose I have a multivariate response Y (n x k) obtained at a set of
> predictors X (n x p). I wou
Hi everyone,
I think I have found a minor issue with the R function "boxplot.stats".
But before I make such a rash comment, I'd like to check my facts by
fixing what I think is the problem. However, when I try to do this, R
does not behave as I expect. Can you tell me what I'm doing wrong?
If I
The edit operation does not change the boxplot.stats that you
are debugging. It creates a new boxplot.stats (i.e. now there
are two), and the new one does not have debugging turned on.
Try
getAnywhere("boxplot.stats")
and it finds two. If you remove the one you just created using rm, debugging
r
hi Jim,
If I partition the file, then further operations like merging the
partitioned files and then doing some analysis on the whole data set
would again require the same amount of memory. If I am not able to do that, or
if I do not have the memory, then I feel there should be serious thinking
o