It is indeed a negative value for sigma that causes the issue.
You can check this by inserting this line
if(sigma <= 0 ) cat("Negative sigma=",sigma,"\n")
after the line
mu <- x %*% beta
in function llk.mar
Negative values for sigma can be avoided with the use of a transformation.
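A minimal sketch of that transformation approach (simulated data, not the original poster's model): optimizing over log(sigma) and back-transforming inside the likelihood keeps sigma strictly positive.

```r
# Sketch: reparameterize sigma as exp(par[3]) so optim can search
# unconstrained while sigma stays positive. Data here are simulated.
set.seed(1)
x <- cbind(1, rnorm(50))
y <- x %*% c(2, 0.5) + rnorm(50, sd = 0.3)

negll <- function(par) {
  beta  <- par[1:2]
  sigma <- exp(par[3])       # always > 0 by construction
  mu    <- x %*% beta
  -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))
}

fit <- optim(c(0, 0, 0), negll)
sigma_hat <- exp(fit$par[3])  # back on the original scale
```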
Dear all,
I'm using the R function "glmmPQL" in the "MASS" package for a generalized linear mixed
model, considering temporal correlation in the random effect. There are 1825
observations in my data; the random effect is called "Date", and there
are five levels of "Date", each repeated 365 times
On 09/10/2010 01:03 AM, David Winsemius wrote:
>
> On Sep 9, 2010, at 6:34 PM, Gosse, Michelle wrote:
>
>> Greetings,
>>
>> I am using R version 2.11.1 on a Dell computer, via a VMware
>> connection to a remote server. My browser version is IE
>> 8.0.6001.18702 and the OS is some corporate version of Microsoft XP.
Daniel Brewer icr.ac.uk> writes:
>
> Hello,
>
> I have a bar plot where I am already using colour to distinguish one set
> of samples from another. I would also like to highlight a few of these
> bars as ones that should be looked at in detail. I was thinking of
> using hatching, but I can't
Ok. These operations are on a string and the result is added to a
data.frame.
I have strings of the form
"x,y,z,a,b,c,da,b,c,d,e,f,g
essentially comma separated values delimited by a
I first do a
unlist(strsplit(string,split=""))
and then a
strsplit(string,split=",")
The list of vectors i end up
For what it's worth:
I turned off the "Real time file scanning" and everything worked fine.
This is using McAfee anti-virus software.
Thanks,
Erin
On Thu, Sep 9, 2010 at 10:15 AM, peter dalgaard wrote:
>
> On Sep 9, 2010, at 13:52 , Duncan Murdoch wrote:
>
>> On 09/09/2010 12:01 AM, Erin Hodg
The first thing to do is to use Rprof to profile your code to see where
the time is being spent; then you can make a decision as to what to
change. Are you carrying out the operations on a data frame? If so, can
you change it to a matrix for some of the operations? You have
provided no idea of what you
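The Rprof step can be sketched like this (a stand-in workload, since the poster's code isn't shown):

```r
# Profile a block of code, then inspect where the time went.
Rprof(prof_file <- tempfile())
for (i in 1:20) m <- solve(matrix(runif(250000), 500, 500) + diag(500))
Rprof(NULL)
prof <- summaryRprof(prof_file)
head(prof$by.self)   # functions ranked by self time
```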
Bert,
I appreciate your comments, and I have read Doug Bates's writing about p-values in
mixed effects regression. It is precisely because I read Doug's material that I
asked "how are we to interpret the estimates" rather than "how can we compute a
p value". My question is a simple question whose
Tali
I am one of your estimated 29 Wordpress bloggers. Thanks for your RBloggers
site!!
I use Wordpress.com's site for my blog.
I use a simple method to highlight my R script in Wordpress, example
http://chartsgraphs.wordpress.com/2010/07/17/time-series-regression-of-temperature-anomaly-data-
windows Vista
R 2.10.1
Is it possible to get p-values from gee? summary(geemodel) does not appear to
produce p-values:
> fit4<- gee(y~time, id=Subject, data=data.frame(data))
Beginning Cgee S-function, @(#) geeformula.q 4.13 98/01/27
running glm to get initial regression estimate
(Intercept)
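The output above is truncated; assuming the fit succeeds, a common approach is to compute two-sided Wald p-values from the robust z-statistics in the coefficient table of summary(geemodel). A sketch with made-up numbers (not real gee output):

```r
# Hypothetical slice of a gee coefficient table; the values are invented.
coefs <- cbind(Estimate   = c(1.20, 0.35),
               `Robust z` = c(4.10, 2.25))
rownames(coefs) <- c("(Intercept)", "time")

# Two-sided Wald p-values from the robust z-statistics
pvals <- 2 * pnorm(-abs(coefs[, "Robust z"]))
round(pvals, 4)
```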
Hi,
I perform the operations unlist,strsplit,gsub and the for loop on a lot of
strings and its heavily slowing down the overall system. Is there some way
for me to speeden up these operations..maybe like alternate versions that
exist which use multiprocessors etc.
--
Rajesh.J
[[alternat
Try coef(summary(fit3))
On Thu, Sep 9, 2010 at 11:00 PM, John Sorkin
wrote:
> windows Vista
> R 2.10.1
>
>
> (1) How can I get the complete table of for the fixed effects from lmer. As
> can be seen from the example below, fixef(fit2) only give the estimates and
> not the SE or t value
>
>> fi
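The coef(summary(...)) idiom returns the full coefficient table (estimate, SE, t value), not just the estimates. A self-contained sketch with lm and a built-in dataset:

```r
fit <- lm(mpg ~ wt, data = mtcars)
tab <- coef(summary(fit))   # full table, unlike coef(fit)
colnames(tab)               # Estimate, Std. Error, t value, Pr(>|t|)
tab["wt", "Std. Error"]
```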
windows Vista
R 2.10.1
(1) How can I get the complete table for the fixed effects from lmer? As can
be seen from the example below, fixef(fit2) only gives the estimates and not the
SE or t value
> fit3<- lmer(y~time + (1|Subject) + (time|Subject),data=data.frame(data))
> summary(fit3)
Linear
This is a more general statistical question, not specific to R:
As I move through my masters curriculum in statistics, I am becoming
more and more attuned to issues of model fit and diagnostics (graphical
methods, AIC, BIC, deviance, etc.) As my regression professor always
likes to say, only dra
On 09/09/2010 08:50 PM, andre bedon wrote:
Hi,
I am attempting to graph a Kaplan Meier estimate for some claims
using the survfit function. However, I was wondering if it is
possible to plot a cdf of the kaplan meier rather than the survival
function. Here is some of my code:
Do you really
Hi Philipp,
I like to use something like
lapply(2:10, function(j) lm.fit(cbind(1, DataMatrix[,j]), DataMatrix[,1]))
for this sort of thing. I'd be curious to know if there are other
approaches that are better.
--Gray
On Wed, Sep 8, 2010 at 4:34 AM, Philipp Kunze wrote:
> Hi,
> I have huge m
On Sep 9, 2010, at 8:50 PM, andre bedon wrote:
I am attempting to graph a Kaplan Meier estimate for some claims
using the survfit function. However, I was wondering if it is
possible to plot a cdf of the kaplan meier rather than the survival
function. Here is some of my code:
It's not re
Hi,
I am attempting to graph a Kaplan Meier estimate for some claims using the
survfit function. However, I was wondering if it is possible to plot a cdf of
the kaplan meier rather than the survival function. Here is some of my code:
library(survival)
Surv(claimj,censorj==0)
survfit(Surv
I think the reason this type of computation is not performed more routinely is
the correlation issue. If two quantities are negatively correlated, the
standard deviation of a result computed with them may actually be smaller
(percentage-wise) than the larger of the original two standard deviat
Thank you Peter, yes this is what I need!
John
- Original Message
From: Peter Alspach
To: array chip ; David Winsemius
Cc: "r-help@r-project.org"
Sent: Thu, September 9, 2010 4:26:53 PM
Subject: RE: [R] "sequeeze" a data frame
Tena koe John
?aggregate
maybe?
HTH
Peter Als
Tena koe John
?aggregate
maybe?
HTH
Peter Alspach
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On Behalf Of array chip
> Sent: Friday, 10 September 2010 11:13 a.m.
> To: David Winsemius
> Cc: r-help@r-project.org
> Subject: Re:
I tried RSiteSearch("Interval arithmetic"),
which gives zero hits.
There exists a free software library for interval arithmetic at
http://www.boost.org/, which it should be
possible to link to R.
Kjetil
On Thu, Sep 9, 2010 at 6:28 PM, Carl Witthoft wrote:
>
> That won't do much good. Tolerances don'
Thank you David. I probably didn't make my question clear enough. In the end, I
want a "shorter" version of the original data frame, basically with variable
"rep" removed, then just one row per id, time and mode combination. The
original
data frame has 54 rows, I wish to have a data frame with
On Sep 9, 2010, at 6:34 PM, Gosse, Michelle wrote:
Greetings,
I am using R version 2.11.1 on a Dell computer, via a VMware
connection to a remote server. My browser version is IE
8.0.6001.18702 and the OS is some corporate version of Microsoft XP.
I'm trying to learn more about the tappl
Greetings,
I am using R version 2.11.1 on a Dell computer, via a VMware connection to a
remote server. My browser version is IE 8.0.6001.18702 and the OS is some
corporate version of Microsoft XP.
I'm trying to learn more about the tapply function, so I typed ?tapply into
the command line. Th
That won't do much good. Tolerances don't add (except in rare
circumstances), and certainly not when they're in different units.
There's nothing wrong with the first part, i.e. setting up variables
whose contents include the mean and the tolerance, but is that peak? or
sigma? and so on.
You can also plot the +'s yourself using for example matlines:
# Some data
x <- 1:10
y <- 1:10
# Height and width of the crosses
dx1 <- 0.1 # width in negative x-direction
dx2 <- 0.2 # width in positive x-direction
dy1 <- 0.2 # height in negative y-direction
dy2 <- 0.3 # height in positive y-direction
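One way to finish the idea, using segments() with the half-widths defined above (an off-screen pdf device is used here so the sketch runs anywhere):

```r
x <- 1:10
y <- 1:10
dx1 <- 0.1; dx2 <- 0.2   # widths in the negative / positive x-direction
dy1 <- 0.2; dy2 <- 0.3   # heights in the negative / positive y-direction

pdf(f <- tempfile(fileext = ".pdf"))
plot(x, y, type = "n")
segments(x - dx1, y, x + dx2, y)   # horizontal stroke of each cross
segments(x, y - dy1, x, y + dy2)   # vertical stroke of each cross
dev.off()
```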
On Sep 9, 2010, at 5:47 PM, array chip wrote:
Hi, suppose I have a data frame as below:
dat <- cbind(expand.grid(id=c(1,2,3),time=c(0,3,6),mode=c('R','L'),rep=(1:3)),y=rnorm(54))
I kind of want to "squeeze" the data frame into a new one with
averaged "y" over
"rep" for the same id,
Hi, suppose I have a data frame as below:
dat<-cbind(expand.grid(id=c(1,2,3),time=c(0,3,6),mode=c('R','L'),rep=(1:3)),y=rnorm(54))
I kind of want to "squeeze" the data frame into a new one with averaged "y"
over
"rep" for the same id, time and mode. taking average is easy with tapply:
tapply(
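As suggested elsewhere in the thread (?aggregate), the 54-row frame collapses to one row per id/time/mode in a single call; a sketch:

```r
dat <- cbind(expand.grid(id = c(1, 2, 3), time = c(0, 3, 6),
                         mode = c("R", "L"), rep = 1:3),
             y = rnorm(54))

# One row per id/time/mode combination, y averaged over rep: 54 rows -> 18 rows
sq <- aggregate(y ~ id + time + mode, data = dat, FUN = mean)
nrow(sq)   # 18
```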
This is a bug, which I've fixed in the development version (hopefully
to be released next week).
In the plyr 1.2:
OK, thank you both for your answers. I'll wait for the next version.
Regards,
Jan
__
R-help@r-project.org mailing list
https://stat
I'm having difficulty running R2WinBugs on a setup that previously
worked for me (Dell Laptop, Windows XP service pack 3, R 2.11.1, WinBugs
1.43) . When I issue the following command
smds.sim <- bugs (data, inits, parameters, "SMDSbrandLoc2.bug",
debug = T,
n.thin = n.thin,
n.chains = n.
You can record all arguments and return values of the
calls that optim(par,fn) makes to fn with a function
like the following. It takes your function and makes
a new function that returns the same thing but also
records information in its environment. Thus, after
optim is done you can see its pa
I have a plotting function that plots a multi-panel plot, with the x-axis
as a date and various y-axes.
I would like to control the frequency of the X-axis labels, ticks and grid
lines. However with the following code I get no annotation on the X-axis at
all.
Here is a minimal data.frame
The question would be whether there are performance issues from having too
many functions. We could just limit it to the reserved keywords. Another
option for the functions is to highlight anything that looks like a
function with the regular expression /[\w._]+(?=\()/ that is any
function name with periods and unde
I'm using odfWeave for reproducible research and would like to extract the R
code chunks from an OpenOffice .odt document in a manner similar to the way
Stangle is used to extract code chunks from Sweave input files. Is this
possible?
Thanks in advance,
Denné Reed
Assistant Professor
Universi
R-help,
I am interested in estimating the effect of a treatment (2 levels) on a
response. I've used a randomized blocked experiment (5 blocks). I run
the full model, let's say that it is...
lm1 <- lm(resp ~ treat + block)
...and find that there are no significant block effects. Now with
On 9/6/2010 8:46 AM, David A. wrote:
Dear list,
I am using a external program that outputs Q1, Q3, median, upper and
lower whisker values for various datasets simultaneously in a tab
delimited format. After importing this text file into R, I would like
to plot a boxplot using these given values
Yanwei!!!
Have you tried to write the likelihood function using log-normal directly?
if you haven't, you may want to check ?rlnorm
--
View this message in context:
http://r.789695.n4.nabble.com/Help-on-simple-problem-with-optim-tp2533420p2533487.html
Sent from the R help maili
This is only a guess because I don't have your data:
sigma must be positive in the dnorm function. My guess is that optim may
attempt an iteration with a negative sigma.
You may want to see help(optim) for dealing with this constraint.
Specifically see the lower argument.
If you specify th
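A sketch of the lower-bound approach with method = "L-BFGS-B" (simulated data, not the original model):

```r
set.seed(2)
y <- rnorm(100, mean = 5, sd = 2)

negll <- function(par) -sum(dnorm(y, mean = par[1], sd = par[2], log = TRUE))

# The box constraint keeps sigma (par[2]) away from zero during optimization
fit <- optim(c(0, 1), negll, method = "L-BFGS-B",
             lower = c(-Inf, 1e-6))
fit$par   # c(mean, sigma) estimates
```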
Hello Yihui,
I'd be glad to have you try and create the R brush - thanks for offering!
In case you'll come up against walls, I hope there would be people in the
mailing list that would be able to help out.
Cheers,
Tal
Hi everyone,
Thanks for the help.
On Thu, 9 Sep 2010, Peter Ehlers wrote:
The first thing to do when you get results that you don't expect is
to check the help page. The page for cor clearly states that its
input is to be a *numeric* vector, matrix or data frame (my emphasis).
I would not be happ
On 2010-09-09 11:53, Stephane Vaucher wrote:
Hi Josh,
Initially, I was expecting R to simply ignore non-numeric data. I guess I
was wrong... I copy-pasted what I observe, and I do not get an error when
The first thing to do when you get results that you don't expect is
to check the help page.
Thanks, Tal. It does not look too difficult to write such a "brush",
which is actually a JS file. However, I have a concern that R has
thousands of functions (in base R only), so it might not be worth
including all of them in the brush, which is the way that they
implemented the highlighting script fo
Hi Stephane,
According to the NEWS file, as of 2.11.0: "cor() and cov() now test
for misuse with non-numeric arguments, such as the non-bug report
PR#14207" so there is no need for a new bug report.
Here is a simple way to select only numeric columns:
# Sample data
dat <- data.frame(a = 1:10L, b
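The sample-data snippet above is truncated; a self-contained sketch of the same idea with my own toy data:

```r
dat <- data.frame(a = 1:10, b = rnorm(10), c = letters[1:10],
                  stringsAsFactors = FALSE)

# Keep only the numeric columns before calling cor()
num <- dat[sapply(dat, is.numeric)]
names(num)   # "a" "b"
cor(num)
```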
Dear all,
I ran into problems with the function "optim" when I tried to do an mle
estimation of a simple lognormal regression. Some warning message popped up
saying NANs have been produced in the optimization process. But I could not
figure out which part of my code has caused this. I wonder if
Hello, how do I center the text inside an RTclTk listbox?
The code I am using is:
require(tcltk)
tt<-tktoplevel()
tl<-tklistbox(tt,height=4,selectmode="single",background="white")
tkgrid(tklabel(tt,text="What's your favorite fruit?"))
tkgrid(tl)
fruits <- c("Apple","Orange","Banana
Okay, I misread what you wanted.
Try this
==
x <- 4:7
positions <- barplot(x)
mid <- x/2
arrows(positions-.5,mid,positions+.5,mid,angle=0)
==
--- On Thu, 9/9/10, Steve Murray wrote:
> From: St
Ok, conversion to POSIXct does the trick - why doesn't tapply work with the
other, not-obviously-improper POSIX type?
(Incidentally, now it gives me more trouble, with sorting - a reproducible
sample coming up in another thread).
On Sep 9, 2010, at 2:14 PM, Marc Schwartz wrote:
On Sep 9, 2010, at 12:59 PM, Jonathan Finlay wrote:
Ok friends, I tried but I not know! I'm a Linux SysAdmin and
Stadistical and
i working to migrate all the software in my workplace to free or open
software. The OS was easy, ofimatic suite t
On Sep 9, 2010, at 12:59 PM, Jonathan Finlay wrote:
> Ok friends, I tried but I not know! I'm a Linux SysAdmin and Stadistical and
> i working to migrate all the software in my workplace to free or open
> software. The OS was easy, ofimatic suite too, multimedia and graphics you
> know, everything
On Wed, Sep 8, 2010 at 3:03 PM, Joshua Wiley wrote:
> I got into the habit of starting a
> new R session and testing the code I was going to email, before hand.
> It helps to notice missing variables, libraries that need to be
> loaded, etc.
It was rightly pointed out to me that the correct term
Ok friends, I tried but I don't know how! I'm a Linux sysadmin and statistician,
and I'm working to migrate all the software in my workplace to free or open
software. The OS was easy, the office suite too; multimedia and graphics, you
know, everything was relatively easy. But I work with SPSS and I produce
tabl
Trafim,
You'll get more answers if you adhere to the posting guide and tell us
your version information and other necessary details. For example, this
function is in the caret package (but nobody but me probably knows
that =]).
The first argument should be a vector of outcome values (not the
possi
Hi Josh,
Initially, I was expecting R to simply ignore non-numeric data. I guess I
was wrong... I copy-pasted what I observe, and I do not get an error when
calculating correlations with text data. I can also do cor(test.n$P3,
test$P7) without an error.
If you have a function to select only
A Reproducible Research CRAN task view was recently created:
http://cran.r-project.org/web/views/ReproducibleResearch.html
I will be updating it with some of the information in this thread.
thanks,
Max
On Thu, Sep 9, 2010 at 11:41 AM, Matt Shotwell wrote:
> Well, the attachment was a dud
I've tried with other zoo series and I always have the same problem.
On Sep 9, 2010, at 12:16 PM, David Winsemius wrote:
>
> On Sep 9, 2010, at 11:20 AM, David Winsemius wrote:
>
>>
>> On Sep 8, 2010, at 7:32 PM, Jonathan Finlay wrote:
>>
>>> Thanks David, gmodels::Crosstable partially work because can show only 1 x 1
>>> tablen
>>> CrossTable(x,y,...)
>>> I ne
On Thu, Sep 9, 2010 at 12:39 PM, Dimitri Shvorob
wrote:
>
> Update: What did make a difference for me - and something that was present in
> Jim's example, but not reproduced by myself initially - was dropping columns
> other than the two involved. When I dropped all columns except for h and
> src,
try this:
i <- 0
n <- 10
repeat{
cat(i + 1) # use the value; add 1 since we start at zero
i <- (i + 1) %% n
}
On Thu, Sep 9, 2010 at 12:53 PM, cassie jones wrote:
> Dear all,
>
> I am writing a program using for loop. The loop contains i which runs from
> 1:n. There is a feature I ne
Dear all,
I am writing a program using for loop. The loop contains i which runs from
1:n. There is a feature I need to include here. That is, when i=n, i+1 would
be 1, not n+1. Basically the loop should run in a circular fashion. That
also means, if i=1, i-1=n.
Can anyone help me with this? How c
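For a 1-based for loop, the circular neighbours can be computed with modular arithmetic; a sketch:

```r
n <- 5
for (i in 1:n) {
  nxt <- i %% n + 1         # i = n wraps around to 1
  prv <- (i - 2) %% n + 1   # i = 1 wraps around to n
  cat(i, "-> next:", nxt, "prev:", prv, "\n")
}
```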
Update: What did make a difference for me - and something that was present in
Jim's example, but not reproduced by myself initially - was dropping columns
other than the two involved. When I dropped all columns except for h and
src, the sqldf call worked.
... Is it an R bug or what? (I am saying
Thanks a lot, Jim. I am not sure what difference the various POSIXes make -
in the end, you are replacing a datetime hour with a numeric value, e.g., 1
or 9. That does not work for me, unfortunately.
> g = head(x)
> dput(g)
structure(list(price = c(500L, 500L, 501L, 501L, 500L, 501L),
size
On Sep 9, 2010, at 11:20 AM, David Winsemius wrote:
On Sep 8, 2010, at 7:32 PM, Jonathan Finlay wrote:
Thanks David, gmodels::Crosstable partially work because can show
only 1 x 1
tablen
CrossTable(x,y,...)
I need something how can process at less 1 variable in X an 10 in Y.
A further th
hello,
i do not really understand what you mean, sorry. can you describe it?
tuggi
Like this?
x <- 4:7
barplot(x, density=10, angle=180)
--- On Thu, 9/9/10, Steve Murray wrote:
> From: Steve Murray
> Subject: [R] Alignment of lines within barplot bars
> To: r-help@r-project.org
> Received: Thursday, September 9, 2010, 11:35 AM
>
> Dear all,
>
> I have a barplot upon whic
On 09/09/2010 12:02 PM, james.fo...@diamond.ac.uk wrote:
Dear R community (and Duncan more specifically),
I can't work out how to make additional light sources work in rgl.
Here is the example.
First I create a cube and visualize it:
> cubo<- cube3d(col="black")
> shade3d(cubo)
Next I posit
A confidence interval around the p-value makes no sense because there is
no parameter being estimated, but the sampling distribution of the p-value
makes a lot of sense. The pre-observational P-value is a random variable
that is a function of the underlying random variable being tested. That
Barry Rowlingson lancaster.ac.uk> writes:
>
> On Wed, Sep 8, 2010 at 1:35 PM, Michael Bernsteiner
> hotmail.com> wrote:
> >
> > Dear all,
> >
> > I'm optimizing a relatively simple function. Using optimize the optimized
> > parameter value is worse than the starting. why?
I would like to stre
I don't know.
You can look at the file; it is very short.
http://r.789695.n4.nabble.com/file/n2533223/test test
Could this be a case of FAQ 7.31, where rounding error means that you are
seeing a time that is slightly before midnight (but printing shows it at
midnight)?
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Mess
On Thu, Sep 9, 2010 at 9:28 AM, Jakson A. Aquino wrote:
> On Thu, Sep 9, 2010 at 1:14 PM, Joshua Wiley wrote:
>> On Thu, Sep 9, 2010 at 7:05 AM, Bos, Roger wrote:
>>> Josh,
>>>
>>> I liked your idea of setting the repo in the .Rprofile file, so I tried it:
>>>
>>> r <- getOption("repos")
>>> r["
Hi Stephane,
When I use your sample data (e.g., test, test.number), cor() throws an
error that x must be numeric (because of the factor or character
data). Are you not getting any errors when trying to calculate the
correlation on these data? If you are not, I wonder what version of R
are you us
Something strange.
Your example works, but...
I have a zoo object.
I extract its element 21
>> index(test[21])
> [1] (05/12/05 23:00:00)
>
>> index(test[21])+1/24
> [1] (05/12/05 24:00:00)
>
>
Why 24:00 ?
>> packageDescription("chron")$Version
> [1] "2.3-35"
>> R.version.string
> [1] "R
One other case where a confidence interval on a p-value may make sense is
permutation (or other resampling) tests. The population parameter p-value
would be the p-value that would be obtained from the distribution of all
possible permutations, but in practice we just sample from that population
On Thu, Sep 9, 2010 at 1:14 PM, Joshua Wiley wrote:
> On Thu, Sep 9, 2010 at 7:05 AM, Bos, Roger wrote:
>> Josh,
>>
>> I liked your idea of setting the repo in the .Rprofile file, so I tried it:
>>
>> r <- getOption("repos")
>> r["CRAN"] <- "http://cran.stat.ucla.edu"
>> options(repos = r)
>> rm
Dear all
I would like to run in R an uncertainty/sensitivity analysis. I know that these
two are performed together. I have a geochemical model where I have the inputs,
the water variables (e.g. pH, temperature, oxygen, etc.), as well as an output of
different variables. What I would like to do i
Hi,
If your predictor variable is categorical then it should be converted
to a factor. If it is continuous or being treated as such, you do not
need to. It is generally quite easy to do:
varname <- factor(varname)
or if it is in a data frame
yourdf$varname <- factor(yourdf$varname)
Cheers,
On Thu, Sep 9, 2010 at 11:59 AM, skan wrote:
>
> hello
>
> I think I've found a bug
> I don't know if it's a chron bug or a R one.
>
> (05/12/05 23:00:00) +1/24 gives
> (05/12/05 24:00:00)
> instead of
> (05/13/05 00:00:00)
> it looks like the same but it's not because when you get the date of th
On Thu, Sep 9, 2010 at 7:05 AM, Bos, Roger wrote:
> Josh,
>
> I liked your idea of setting the repo in the .Rprofile file, so I tried it:
>
> r <- getOption("repos")
> r["CRAN"] <- "http://cran.stat.ucla.edu"
> options(repos = r)
> rm(r)
>
> And now when I open R I get an error:
>
> Error in r["
On Wed, 8 Sep 2010, Paul Johnson wrote:
run it with factor() instead of ordered(). You don't want the
"orthogonal polynomial" contrasts that result from ordered if you need
to compare against Stata.
If you don't want polynomial contrasts for ordered factors, you can just tell R
not to use th
Dear R community (and Duncan more specifically),
I can't work out how to make additional light sources work in rgl.
Here is the example.
First I create a cube and visualize it:
> cubo <- cube3d(col="black")
> shade3d(cubo)
Next I position the viewpoint at theta=0 and phi=30:
> view3d(theta=0,ph
hello
I think I've found a bug
I don't know if it's a chron bug or a R one.
(05/12/05 23:00:00) +1/24 gives
(05/12/05 24:00:00)
instead of
(05/13/05 00:00:00)
it looks the same, but it's not, because when you get the date of this
datetime it says day 12 instead of 13.
Please, forward it t
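For comparison, the same arithmetic with base POSIXct rolls over to the next day as expected (the original poster later confirmed that converting to POSIXct avoided the problem):

```r
t0 <- as.POSIXct("2005-05-12 23:00:00", tz = "UTC")
t1 <- t0 + 3600   # POSIXct arithmetic is in seconds
format(t1, "%m/%d/%y %H:%M:%S", tz = "UTC")   # "05/13/05 00:00:00"
```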
I think your main problem is that you have your time as POSIXlt which
is a multiple valued vector. I converted the 't' to POSIXct, removed
the other POSIXlt value and created a 'h' as the character for the
hour and it works fine:
> str(g)
'data.frame': 6 obs. of 5 variables:
$ price: int 500
Dear list,
I read in ?plotmath that I can use bgroup to draw scalable delimiters
such as [ ] and ( ). The same technique fails with < > however, and I
cannot find a workaround,
grid.text(expression(bgroup("<",atop(x,y),">")))
Error in bgroup("<", atop(x, y), ">") : invalid group delimiter
Regar
The image function will create a plot with the values transformed to colors. Or
the View function (note the capital V) will let you look at it in a
spreadsheet-like window with scrollbars.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.4
Dear all,
I have a barplot upon which I hope to superimpose horizontal lines extending
across the width of each bar. I am able to partly achieve this through the
following set of commands:
positions <- barplot(bar_values, col="grey")
par(new=TRUE)
plot(positions, horiz_values, col="red", pch="
Well, the attachment was a dud. Try this:
http://biostatmatt.com/R/markup_0.0.tar.gz
-Matt
On Thu, 2010-09-09 at 10:54 -0400, Matt Shotwell wrote:
> I have a little package I've been using to write template blog posts (in
> HTML) with embedded R code. It's quite small but very flexible and
> ext
Thanks a lot!
Hi, thank you very much for the help.
one more quick question: should my predictor variable be coded as a
'factor' when using either 'lm' or 'glm'?
sincerely,
karena
Hi everyone.
I'm trying to break the y axis on a plot. For instance, I have 2 series
(points and a loess). Since the loess is a "continuous" set of points, it
passes through the break section. However, with gap.plot I can't plot the loess
because of this (I got the message "some values of y will not be
> g = head(x)
> dput(g)
structure(list(price = c(500L, 500L, 501L, 501L, 500L, 501L),
size = c(221000L, 2000L, 1000L, 13000L, 3000L, 3000L), src = c("R",
"R", "R", "R", "R", "R"), t = structure(list(sec = c(24.133,
47.096, 12.139, 18.142, 10.721, 28.713), min = c(0L, 0L,
1L, 1L
I am following up on an old post. Please, comment:
it appears that
predict(glm.model,type="response",se.fit=T)
will do all the conversions and give se on the scale of the response. This
only takes into account the error in parameter estimation. What a
"prediction" interval is meant to be usual
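A sketch of the response-scale band for a logistic fit (built-in data; note this is a Wald confidence interval built on the link scale and back-transformed, not a prediction interval):

```r
fit <- glm(am ~ wt, data = mtcars, family = binomial)
pr  <- predict(fit, type = "link", se.fit = TRUE)

# 95% Wald limits on the link scale, mapped back with the inverse link
lo <- plogis(pr$fit - 1.96 * pr$se.fit)
hi <- plogis(pr$fit + 1.96 * pr$se.fit)
head(cbind(lower = lo, upper = hi))
```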
On Sep 8, 2010, at 7:32 PM, Jonathan Finlay wrote:
Thanks David, gmodels::CrossTable partially works, but it can show
only a 1 x 1 table:
CrossTable(x,y,...)
I need something that can process at least 1 variable in X and 10 in Y.
A further thought (despite a lack of clarification on what your dat
Can you set the multinomial probability to zero for p1+p2+p3 != 1 if you have to
use the multinomial distribution in guete()? Otherwise, I would say the
problem/guete() itself is problematic.
On Sep 9, 2010, at 13:52 , Duncan Murdoch wrote:
> On 09/09/2010 12:01 AM, Erin Hodgess wrote:
>> Dear R People:
>> I keep getting the "Error in normalizePath(path) :" while trying to
>> obtain the necessary packages to use with the "Applied Spatial
>> Statistics with R" book.
>> I turned off th
2010/9/8 David Winsemius
>
> I hope you mean only two factors and an n x m table.
>
> Yes David, I meant to say factor, but I am new here.
--
Jonathan.
"Jan private" wrote in message
news:1284029454.2740.361.ca...@localhost.localdomain...
> Hello Bernardo,
>
> -
> If I understood your problem this script solve your problem:
>
> q<-0.15 + c(-.1,0,.1)
> h<-10 + c(-.1,0,.1)
> 5*q*h
> [1] 2.475 7.500 12.625
> -
>
> OK, this solves
Hi
you have to provide some more info about x, e.g. str(x)
x<-data.frame(price=1, h=Sys.time())
r-help-boun...@r-project.org napsal dne 08.09.2010 10:18:52:
>
> Many thanks for the suggestions. I am afraid this is one tough dataframe...
>
> > t = sqldf("select h, count(*) from x group by h")
> Err
I have a little package I've been using to write template blog posts (in
HTML) with embedded R code. It's quite small but very flexible and
extensible, and aims to do something similar to Sweave and brew. In
fact, the package is heavily influenced by the brew package, though
implemented quite diffe
On Sep 9, 2010, at 6:50 AM, Jan private wrote:
Hello Bernardo,
-
If I understood your problem this script solve your problem:
q<-0.15 + c(-.1,0,.1)
h<-10 + c(-.1,0,.1)
5*q*h
[1] 2.475 7.500 12.625
-
OK, this solves the simple example.
But what if the example is not that si