Thanks Michael for your help. At least it's good to know that there is no
function which does what I wanted. I will definitely try to code something
that fulfills my requirements.
Regards,
S.
On Fri, Sep 17, 2010 at 2:32 AM, Michael Bedward
michael.bedw...@gmail.com wrote:
Hello Sunny,
Cool. If you get some code working and don't mind sharing it please
post it here.
Michael
On 17 September 2010 16:49, Sunny Srivastava research.b...@gmail.com wrote:
Thanks Michael for your help. At least it's good to know that there is no
function which does what I wanted. I will definitely
Use as.data.frame instead. It does what you want it to do.
newdata.df <- as.data.frame(stocks1[1:5, 2:5])
Cheers,
Michael
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Leigh E. Lommen
Sent: 16. september 2010 17:18
To:
Hi,
Well, it works for me:
x <- matrix(1:20, nrow = 5, ncol = 4)
data.frame(x[1:5, 2:4])
  X1 X2 X3
1  6 11 16
2  7 12 17
3  8 13 18
4  9 14 19
5 10 15 20
Maybe with as.data.frame(), or set the drop argument to FALSE:
data.frame(x[1:5, 2:4, drop = FALSE])
Not sure why it doesn't work for you. Check
Thanks, Ravi and Nash.
I will read up on the new package and may use it once I am familiar with it. I may
bother both of you when I have questions; thanks for that in advance.
Nan
from Montreal
Hi Nan,
You can take a look at the optimx package on CRAN. John Nash and I wrote this
package to
On 14.09.2010 12:25, Paulo Teles wrote:
Dear Sirs,
I have been using the package staRt but it has disappeared from the latest R
versions. I emailed the author but he never replied. Is it possible to let me
know if that package has been removed or if it has been replaced by another or
what
Sunny -
I don't think mapply is needed:
lapply(1:length(mylist), function(x) rep(x, length(mylist[[x]])))
[[1]]
[1] 1 1 1
[[2]]
[1] 2
[[3]]
[1] 3 3
- Phil Spector
Statistical Computing Facility
Try this:
relist(rep(1:length(l1), sapply(l1, length)), l1)
On Mon, Sep 13, 2010 at 3:41 PM, Sunny Srivastava
research.b...@gmail.com wrote:
Dear R-Helpers,
I have a list l1 like:
l1[[1]]
a b c
l1[[2]]
d
l1[[3]]
e f
I want an output res like:
res[[1]]
1 1 1
res[[2]]
2
Thank you very much list!
On Mon, Sep 13, 2010 at 5:04 PM, Phil Spector spec...@stat.berkeley.edu wrote:
On 09/13/2010 08:41 PM, Sunny Srivastava wrote:
Dear R-Helpers,
I have a list l1 like:
l1[[1]]
a b c
l1[[2]]
d
l1[[3]]
e f
I want an output res like:
res[[1]]
1 1 1
res[[2]]
2
res[[3]]
3 3
Essentially, I want to replicate each index equal to the number of elements
...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Hey Sky
Sent: Tuesday, September 07, 2010 2:48 PM
To: Ben Bolker; r-h...@stat.math.ethz.ch
Subject: Re: [R] question on optim
thanks. Ben
after reading your email, I realized the initial value of w[5] = 0 was a stupid
mistake, and I
...@infolink.com.br]
Sent: Wednesday, September 08, 2010 4:26 PM
To: Ravi Varadhan
Cc: 'Hey Sky'; 'Ben Bolker'; r-h...@stat.math.ethz.ch
Subject: Re: [R] question on optim
Dear R-list members,
I am using R 2.11.1 on Windows XP. When I try to install package
optimx through the GUI menu Packages / Install
Hey Sky heyskywalker at yahoo.com writes:
I do not know how to describe my question. I am a new R user and wrote the
following code for a dynamic labor economics model, using optim to obtain the
optimum and the parameter values. The following code does not work because of
this equation:
Sorry, there is a typo in the code: there should be no w[5]*acwrk[,i] term in the
regw equation. The correct line is as follows:
regw[,i] <- w[1] + w[2]*eta[m] + exp(w[3] + w[4]*eta[m])*actr[,i]
- Original Message
From: Hey Sky heyskywal...@yahoo.com
To: R r-help@r-project.org
Hi,
I do not see how `data' is used in your objective function.
The objective function is not even evaluable at the initial guess.
myfunc1(guess, mydata)
[1] NaN
I also think that some of the parameters may have to be constrained; for
example, par[1] > 0. At a minimum, make sure that the
recommend some R books on optimization, with tips for setting up
gradients and avoiding common mistakes? Thanks
Nan
- Original Message
From: Ben Bolker bbol...@gmail.com
To: r-h...@stat.math.ethz.ch
Sent: Tue, September 7, 2010 11:15:43 AM
Subject: Re: [R] question on optim
On 10-09-07 02:48 PM, Hey Sky wrote:
thanks. Ben
after reading your email, I realized the initial value of w[5] = 0 was a
stupid mistake, and I have changed it. But I am sorry that I cannot
reproduce the result (convergence) that you get; the error message is
"non-finite finite-difference value [12]". Any
David Winsemius wrote:
That is different from my understanding of AIC. I thought that the AIC
and BIC both took as input the difference in -2LL and then adjusted
those differences for the differences in number of degrees of freedom.
David! Your words make sense to me now. Sorry
In 2010-08-30, C. Peng button...@hotmail.com wrote:
What statistical measure(s) tend to be answering ALL(?) question of
practical interest?
None. All I had said was that significance testing doesn't really
answer any questions of practical interest. Unfortunately, that doesn't
mean there's
What statistical measure(s) tend to be answering ALL(?) question of practical
interest?
--
View this message in context:
http://r.789695.n4.nabble.com/Re-Question-regarding-significance-of-a-covariate-in-a-coxme-survival-tp2399386p2399577.html
Sent from the R help mailing list archive at
The likelihood ratio test is more reliable when one model is nested in the
other. This is true for your case.
AIC/SBC are usually used when two models are in a hierarchical structure.
Please also note that any decision made based on AIC/SBC scores is
very subjective, since no sampling
My suggestion:
If you compare model 1 or model 2 with model 0, respectively, the (penalized)
likelihood ratio test is valid.
If you compare model 2 with model 3, the (penalized) likelihood ratio test
is invalid; you may want to use AIC/SBC to make a subjective decision.
Using a p-value to make any kind of decision is questionable to begin
with, and especially unreliable in choosing covariates in regression.
Old studies, e.g. by Walls and Weeks and by Bendel and Afifi, have shown
that if predictive ability is the criterion of interest and one wishes
to use
Christopher David Desjardins desja004 at umn.edu writes:
Hi,
I am running a Cox Mixed Effects Hazard model using the library coxme. I
am trying to model time to onset (age_sym1) of thought problems (e.g.
hearing voices) (sym1). As I have siblings in my dataset, I have
decided to
I'm not sure I understand exactly what you're asking but look at the truncated
normal distribution.
On Aug 20, 2010, at 5:13 PM, solafah bh wrote:
Hello
I want to know how I can sample from the upper and lower tails of a normal
distribution, in the two cases where I know the upper and lower bounds of
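For the tail-sampling part of the question, inverse-CDF sampling is one standard route. A minimal sketch (my own toy example, not from the thread; `b` is an assumed known bound):

```r
# draw from the upper tail of a standard normal, above a known bound b,
# by restricting the uniform draws to the upper-tail probability range
b <- 1.5
u <- runif(1000, pnorm(b), 1)  # uniforms in (pnorm(b), 1)
x <- qnorm(u)                  # every draw satisfies x >= b
min(x) >= b                    # TRUE
```

For the lower tail below a bound `a`, use `runif(n, 0, pnorm(a))` instead.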
Scott Compton wrote:
Hi,
It looks like about 200+ addresses from people who post on this list
have been added to my contact list without my permission. Has this
happened to other people? I use Mac OS. This is quite disconcerting
and suggests a virus floating around on this distribution list.
It doesn't happen to other lists that I belong to, just the R lists. But I will
check.
Thanks for the tip.
Best,
Scott
Hi All:
the package MuMIn can be used to select the model based on AIC or AICc.
The code is as follows:
data(Cement)
lm1 <- lm(y ~ ., data = Cement)
dd <- dredge(lm1, rank = "AIC")
print(dd)
If I want to select the model by BIC, what code do I need to use? And when
to select the best model based on
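Regarding the BIC part of the question, an untested sketch: dredge() takes the information criterion through its rank argument, and BIC is a standard stats function, so selection by BIC should look like this (note that recent MuMIn versions require na.action = na.fail on the global model):

```r
library(MuMIn)
data(Cement)
lm1 <- lm(y ~ ., data = Cement, na.action = na.fail)
dd <- dredge(lm1, rank = "BIC")  # rank the submodels by BIC instead of AICc
print(dd)
```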
Duncan and David, thank you so much.
You are right. We can use
z1 <- outer(x, y, function(x, y) x^2 + 3*y^2)
rather than
xy <- meshgrid(x, y)
z2 <- xy$x^2 + 3*xy$y^2
to get the right answer. I ran this code on my computer and found that z2 is
the transpose of z1.
So I guess in order to obtain
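A small self-contained check of the observation above: with outer(), z1[i, j] pairs x[i] with y[j], which is exactly the orientation contour() expects, so no transpose is needed.

```r
# outer() builds z1 with rows indexed by x and columns by y
x <- seq(-1, 1, 0.1)
y <- seq(-1, 1, 0.1)
z1 <- outer(x, y, function(x, y) x^2 + 3 * y^2)
contour(x, y, z1, col = "blue", xlab = "x", ylab = "y")
```

A row-wise meshgrid, as described in the thread, would produce t(z1) instead, which is why the contours came out flipped.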
Hello:
I am trying to use the function bms to select the best model (the one with the
biggest posterior probability) for different quantiles. I have one return and 16
variables. However, I can only do the selection for the linear model; how do I
change the code for quantile regression:
bma1 <- bms(rq(y ~ ., tau = 0.25, data = EQT), burn =
On 11/08/2010 11:16 AM, ba ba wrote:
Dear All,
I tried to plot contour lines using R function contour, but got the results
which are not expected.
require(RTOMO)
x <- seq(-1, 1, 0.1)
y <- seq(-1, 1, 0.1)
xy <- meshgrid(x, y)
z <- xy$x^2 + 3*xy$y^2
contour(x, y, z, col = "blue", xlab = "x", ylab = "y")
The above code
Jack,
sorry for the late answer.
I agree that my last post was misleading. Here is a new try:
Increasing the value of C (...) forces the creation of a more accurate
model, which may not generalise well. (Try to imagine the feature space with
the two mapped sets very far from each other.) A model
Hi
I made a design matrix X (in a linear regression model).
Is there any method to normalize X?
You can normalize a matrix column-wise like this:
# m is a matrix
apply(m, 2, function(x) x / max(x) )
Or normalize row-wise like this:
t(apply(m, 1, function(x) x / max(x) ))
I'm sure there are
On 04/08/2010 5:38 AM, Sander wrote:
L.S.
I am trying to get data from an excel sheet using the RODBC library, quite
like the example on page 34 of the 'Data manipulation with R' book written
by Phil Spector. It seems very straightforward, yet I get the same error
every time I try it, with
?scale
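To make the ?scale pointer concrete, a short illustration on a toy design matrix (my own example, not from the thread):

```r
# scale() normalizes column-wise: center to mean 0, rescale to sd 1
X  <- matrix(rnorm(20), nrow = 5)
Xs <- scale(X)
round(colMeans(Xs), 10)  # approximately 0 for every column
apply(Xs, 2, sd)         # exactly 1 for every column
```

The centering and scaling values used are kept in the "scaled:center" and "scaled:scale" attributes of the result.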
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
project.org] On Behalf Of leepama
Sent: Tuesday, August 03, 2010 10:55 PM
To:
?replicate
?apply
?sapply
Nikhil Kaza
Asst. Professor,
City and Regional Planning
University of North Carolina
nikhil.l...@gmail.com
On Aug 1, 2010, at 2:42 AM, leepama wrote:
hi!! I made many codes during these days..
I study Statistics.
please, one more question!!
ex1 <- function(n, p, rho) {
Pau,
Sorry for getting back to you about this again. I am getting confused about
your interpretation of 3). It is obvious from your code that increasing C
results in a smaller number of SVs; this seems to contradict your
interpretation "Increasing the value of C (...) forces the creation of"
Lexi
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Setlhare Lekgatlhamang
Sent: Saturday, July 24, 2010 1:01 PM
To: amatoallah ouchen; r-help@r-project.org
Subject: Re: [R] Question regarding panel data diagnostic
Let me correct
To: amatoallah ouchen; r-help@r-project.org
Subject: Re: [R] Question regarding panel data diagnostic
Dear Lexi,
Thanks a lot for your prompt answers.
The issue I'm confronted with is the following: I have panel data with N=17
and T=5 (annual observations) and wanted to check for stationarity to
avoid
My thought is this:
It depends on what you have in the panel. Are your data cross-section data
observed over ten years for, say, 3 countries (or regions within the same
country)? If so, yes, you can examine the integration properties (what people
usually call a unit root test) and then test for
Nordlund, Dan (DSHS/RDA) wrote:
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
project.org] On Behalf Of Peter Dalgaard
Sent: Thursday, July 22, 2010 3:13 PM
To: Pat Schmitz
Cc: r-help@r-project.org
Subject: Re: [R] Question about a perceived irregularity in R syntax
Pat Schmitz wrote:
Both vector queries can select the values from the data.frame as written;
however, in the first form, assigning a value to the selected entries fails.
Can you explain the reason this fails?
dat <- data.frame(index = 1:10, Value = c(1:4, NA, 6, NA, 8:10))
Moohwan -
It appears that you are trying to calculate a 10
by 10 matrix when all you want are the diagonal
elements. Here's one approach that might work:
s <- t(dev) %*% dev / (nr - 1)
sinv <- solve(s)
part <- sinv %*% t(dev)
t2 <- dev[,1]*part[1,] + dev[,2]*part[2,]
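The same diagonal-only idea can be written for any number of columns with rowSums (my own sketch, assuming dev holds column-centered deviations as in the thread):

```r
# diag(D %*% sinv %*% t(D)) computed without forming the n x n matrix:
# rowSums((D %*% sinv) * D) gives one Mahalanobis-type value per row
set.seed(1)
dev  <- scale(matrix(rnorm(40), ncol = 2), scale = FALSE)  # centered deviations
nr   <- nrow(dev)
s    <- t(dev) %*% dev / (nr - 1)   # sample covariance
sinv <- solve(s)
t2   <- rowSums((dev %*% sinv) * dev)
```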
Thanks for the explanation!
On Jul 20, 2010, at 2:39 AM, Nita Umashankar wrote:
Hello,
I am getting some results from my Probit estimation in R that are in the
opposite direction of what I hypothesized. In SAS, the default is the
probability that y=0 (instead of 1), so one needs to add the keyword
DESCENDING to get P(y=1).
Try:
ar(logrvar, aic=TRUE)$resid
The problem you are running into is that
'resid' is a generic function, and the 'ar'
function neither returns an object with a
class nor returns an object that the default
method works with.
Where you used the 'summary' command, you might instead use 'str'.
The
Is the 'eps' argument part of KLdiv (I was not able to find it in the
help pages) or part of a general environment (such as the graphics
parameters in 'par')? I am asking so that I can read about what it
actually does, to resolve the question you already raised about its
reliability...
Ralf
On
I just answered this but realize that I did so off-list.
So, for completeness, here's what I said:
I think I see the problem. From ?KLdiv, you're getting the
modeltools help page. What you need is the flexmix help
page for KLdiv. Just get to the flexmix index page
(you can do ?flexmix and then
On 2010-07-16 7:56, Ralf B wrote:
Hi all,
when running KL on a small data set, everything is fine:
require(flexmix)
n <- 20
a <- rnorm(n)
b <- rnorm(n)
mydata <- cbind(a, b)
KLdiv(mydata)
however, when this dataset increases
require(flexmix)
n <- 1000
a <- rnorm(n)
b <- rnorm(n)
mydata <- cbind(a, b)
hey, guys, all these methods work perfectly. thank you!!
Use the source, Luke. varImpPlot calls
randomForest:::importance.randomForest (yeah, that is three colons) and
reading about the scale= parameter in help(importance,
package=randomForest) should enlighten you. For the impatient, try
varImpPlot(mtcars.rf, scale=FALSE)
Hope this helps a
Hi Jack,
to 1) and 2): they are telling you the same thing. I recommend that you read
the first sections of the article; it is very well written and clear. There
you will read about duality.
to 3): I interpret the scatter plot as follows: "Increasing the value of C (...)
forces the creation of a more accurate
Hello Jack,
1) why did you think that a larger C is more prone to overfitting than a
smaller C?
2) if you look at the formulation of the quadratic programming problem, you
will see that C rules the error of the cutting plane (and the overfitting).
Therefore, for high C you allow that the cutting plane
Pau,
Thanks a lot for your email, I found it very helpful. Please see below for
my reply, thanks.
-Jack
On Wed, Jul 14, 2010 at 10:36 AM, Pau Carrio Gaspar paucar...@gmail.com wrote:
Hello Jack,
1) why did you think that a larger C is more prone to overfitting than a
smaller C?
There is
On Wed, Jul 14, 2010 at 2:21 PM, karena dr.jz...@gmail.com wrote:
Hi,
I have a data.frame as following:
var1 var2
1 ab_c_(ok)
2 okf789(db)_c
3 jojfiod(90).gt
4 ij_(78)__op
5 (iojfodjfo)_ab
what I want is to create a new variable
Try this:
text <- 'var1 var2
1 ab_c_(ok)
2 okf789(db)_c
3 jojfiod(90).gt
4 ij_(78)__op
5 (iojfodjfo)_ab'
df <- read.table(textConnection(text), header = TRUE, sep = " ", quote = "")
df$var3 <- gsub("(.*\\()(.*)(\\).*)", "\\2", df$var2)
-
A R learner.
--
View this message in context:
Another option could be:
df$var3 <- gsub(".*\\((.*)\\).*", "\\1", df$var2)
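A quick check of this pattern on the example values (typed in by hand here): the single capture group keeps only what sits between the parentheses.

```r
# extract the text inside the parentheses from each string
v <- c("ab_c_(ok)", "okf789(db)_c", "jojfiod(90).gt",
       "ij_(78)__op", "(iojfodjfo)_ab")
gsub(".*\\((.*)\\).*", "\\1", v)
# -> "ok" "db" "90" "78" "iojfodjfo"
```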
Dear Sarah,
[snip...]
I know that samples within each facility cannot be treated as independent,
so I need an approach that accounts for (1) clustering within facilities
and
You could just use lm() with some planning. The data from within a specific
facility can be fit with a model to generate
Hi Sarah,
We regularly undertake work in the food sector and have developed many
custom built solutions. To be more specific, the statistics we employ is
that of sensory analysis and we regularly use the sensominer package in
R.
Regards,
Richard Weeks
Mangosolutions
data analysis that delivers
Greetings to all --
As always, I am grateful for the help of the R community. The generosity of
other R users never ceases to impress me. Just to recap the responses I
have had to this question, in case they can help anyone else down the line:
Robert LaBudde suggested Applied Survey Data
On Jul 6, 2010, at 3:32 PM, Xiaoxi Gao wrote:
Hello R users,
I have two quick questions about using the lpSolve package for linear
programming. (1) The result contains both characters and numbers,
e.g., "Success: the objective function is 40.5", but I only need the
number; can I only store
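On question (1), a hedged sketch: the object returned by lp() stores the bare number in its objval component (and the variable values in solution), so nothing needs to be parsed out of the printed "Success: ..." line. The small model below is made up purely for illustration.

```r
library(lpSolve)
# maximize 2x + 3y subject to x + y <= 10 and 2x + y <= 15
res <- lp(direction    = "max",
          objective.in = c(2, 3),
          const.mat    = matrix(c(1, 1,
                                  2, 1), nrow = 2, byrow = TRUE),
          const.dir    = c("<=", "<="),
          const.rhs    = c(10, 15))
res$objval    # the optimal value as a plain number
res$solution  # the decision variables
```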
== Martin Spindler martin.spind...@gmx.de
on Mon, 5 Jul 2010 07:48:42 +0200 writes:
Hello everyone,
using the VGAM package and the following code
library(VGAM)
bp1 <- vglm(cbind(daten$anzahl_b, daten$deckung_b) ~ ., binom2.rho,
data = daten1)
summary(bp1)
Hello
On Thu, Jul 1, 2010 at 1:12 AM, amatoallah ouchen at.ouc...@gmail.com wrote:
serious issue for me. I'm currently running a panel data analysis and have
used the plm package to perform the tests of poolability; as a result,
intercepts and coefficients are assumed different. So my
The above is
Isn't that exactly what you would expect when using a _generalized_
least squares compared to a normal least squares? GLS is not the same
as WLS.
http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_least_squares_generalized.htm
Cheers
Joris
On Thu, Jun 24, 2010 at 9:16 AM, Stats Wolf
Thanks for the reply.
Yes, they do differ, but doesn't gls() with the weights argument
(correlation left unchanged) give the special version of GLS, as this
sentence from the page you provided says: "The method leading to this
result is called Generalized Least Squares estimation (GLS), of which
WLS
Indeed, WLS is a special case of GLS, where the error covariance
matrix is a diagonal matrix. OLS is a special case of GLS, where the
error is considered homoscedastic and all weights are equal to 1. And
I now realized that the varIdent() indeed makes a diagonal covariance
matrix, so the results
Indeed, they should give the same results, and hence I was worried to
see that the results were not the same. Suffice it to look at the
standard errors and p-values: they do differ, and the differences are
not really that small.
Thanks,
Stats Wolf
On Thu, Jun 24, 2010 at 2:39 PM, Joris Meys
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Stats Wolf Sent:
Thursday, June 24, 2010 15:00 To: Joris Meys
Cc: r-help@r-project.org
Subject: Re: [R] Question on WLS (gls vs lm)
Indeed, they should give the same results, and hence I was worried to
see
On Thu, Jun 24, 2010 at 9:20 AM, Viechtbauer Wolfgang (STAT)
wolfgang.viechtba...@stat.unimaas.nl wrote:
The weights in 'aa' are the inverse standard deviations. But you want to use
the inverse variances as the weights:
aa <- (attributes(summary(f1)$modelStruct$varStruct)$weights)^2
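A sketch pulling the pieces of this thread together, on simulated two-group heteroscedastic data (my own example): squaring the varStruct weights gives inverse variances, and lm() with those weights should then reproduce the gls() point estimates.

```r
library(nlme)
set.seed(42)
g <- factor(rep(c("a", "b"), each = 30))
x <- rnorm(60)
y <- 1 + 2 * x + rnorm(60, sd = ifelse(g == "a", 1, 3))  # group b is noisier
f1 <- gls(y ~ x, weights = varIdent(form = ~ 1 | g))
# inverse-variance weights for lm(), recovered from the gls fit:
aa <- (attributes(summary(f1)$modelStruct$varStruct)$weights)^2
f2 <- lm(y ~ x, weights = aa)
cbind(gls = coef(f1), lm = coef(f2))  # point estimates should agree
```

The standard errors can still differ, since gls() accounts for having estimated the variance structure.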
And then
use Rprof to determine where your function is spending time.
What is the problem you are trying to solve?
Sent from my iPhone.
On Jun 23, 2010, at 5:21, li li hannah@gmail.com wrote:
Dear all,
I have the following program for a multiple comparison procedure.
There are two functions for
Most of the computation time is spent in the function qvnorm. You can win a
little bit by optimizing the code, but the gain is relatively small.
You can also decrease the interval used to evaluate qvnorm to gain some
speed there. As you look for the upper tail, there is no need to evaluate the
function in
Changbin,
The weights don't have to sum up to one. These are the weights of the
trees in the bag used to combine them into the final fit, and if I'm
not mistaken expressed as the logit of the error for the respective
trees.
If you use a method, be sure you understand it. If you don't
understand
See below.
On Fri, Jun 18, 2010 at 7:11 PM, li li hannah@gmail.com wrote:
Dear all,
I am trying to calculate certain critical values from bivariate normal
distribution (please see the
function below).
m <- 10
rho <- 0.1
k <- 2
alpha <- 0.05
## calculate critical constants
cc_z <-
cat(out, '\n')
On Thu, Jun 17, 2010 at 10:19 AM, Adolf STIPS
adolf.st...@jrc.ec.europa.eu wrote:
Hi,
Does anybody know how to have output from print, without the leading [1]?
(Or must I use cat/write?)
out <- "r15"
print(out, quote = FALSE)
[1] r15
And I definitely do not want the leading [1] as
The cat function is probably the best approach, but if your really feel the
need to use print then you can just assign blank names (now it will be a named
vector and slower in heavy calculations, but the printing is different). Try
something like:
names(x) <- rep('', length(x))
print(x)
Two possibilities : rescale your random vector, or resample to get
numbers within the range. But neither of these solutions will give you
a true exponential distribution. I am not aware of truncated
exponential distributions that are available in R, but somebody else
might know more about that.
On 15/06/10 21:39, GL wrote:
Have the following function that is called by the statement below. Trying to
return the two dataframes, but instead get one large list including both
tables.
ReadInputDataFrames <- function() {
dbs.this <- read.delim("this.txt", header = TRUE, sep = "\t", quote = "\"",
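The usual fix for the question above is to return the two data frames in a named list and index the result with $; a minimal sketch (the toy data frames here stand in for the read.delim() calls in the original):

```r
# return both data frames from one function via a named list
ReadInputDataFrames <- function() {
  dbs.this <- data.frame(id = 1:3, v = letters[1:3])
  dbs.that <- data.frame(id = 4:6, v = letters[4:6])
  list(this = dbs.this, that = dbs.that)
}
res <- ReadInputDataFrames()
res$this  # the first data frame, still a data.frame
res$that  # the second
```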
Since there is a simple closed form for the truncated exponential CDF,
you can use inverse transform sampling. I believe this is quite common
in survival analysis methods. The first step is to compute and write an
R function to compute the inverse CDF for the truncated exponential,
say
itexp <-
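A sketch of the approach described above (the function body and argument names are my own, since the original code was cut off): for an exponential with rate m truncated to [0, t], the CDF is F(x) = (1 - exp(-m*x)) / (1 - exp(-m*t)), and inverting it gives a direct sampler.

```r
# inverse CDF of an exponential with rate m, truncated to [0, t]
itexp <- function(u, m, t) -log(1 - u * (1 - exp(-t * m))) / m
# draw n samples by inverse transform sampling
rtexp <- function(n, m, t) itexp(runif(n), m, t)
x <- rtexp(1000, m = 2, t = 1)
range(x)  # all draws lie inside [0, 1]
```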
apply(iris[, -5], 2, tapply, iris$Species, mean)
On Wed, Jun 9, 2010 at 3:43 PM, SH.Chou cls3...@gmail.com wrote:
Hi there:
I have a question about generating mean value of a data.frame. Take
iris data for example, if I have a data.frame looking like the following:
One possibility is
aggregate(iris[,-5], list(iris[,5]), mean)
     Group.1 Sepal.Length Sepal.Width Petal.Length Petal.Width
1     setosa        5.006       3.428        1.462       0.246
2 versicolor        5.936       2.770        4.260       1.326
3  virginica        6.588       2.974
Hi Carrie,
Here are two options:
# Option 1
d <- data.frame(x, t)
y <- with(d, ifelse(t == 0, rbinom(2, 1, 0.2), rbinom(3, 1, 0.8)))
y
# Option 2 -- more general case, e.g. you do not know
# how many 0's and 1's you have within each strata
spd <- with(d, split(d, x))
do.call(c, lapply(spd,
Thanks! Jorge
Just one more question; I don't get it even after checking the help.
For option 1, why does ifelse() work on the strata indexed by x automatically
when just using with(d, ...)?
Since in with() we didn't specify that the strata are indexed by x, what if you
have another categorical variable in the data?
Thanks
Hi Jorge,
I found a problem.
I just want to check if the answer is random, I change the code as follows:
d <- data.frame(x, t)
y <- with(d, ifelse(t == 0, rbinom(2, 1, 0.5), rnorm(3)))
cbind(x, t, y)
     x t         y
[1,] 1 0 0.0000000
[2,] 1 0 0.0000000
[3,] 1 1 0.8920037
[4,] 1 1
Hi Carrie,
It works just fine in this case because you have the same number of 0's and
1's within each stratum. If that were not the case, option 1 would not work.
That's why I provided a second option.
Best,
Jorge
On Thu, Jun 3, 2010 at 7:24 PM, Carrie Li wrote:
Thanks! Jorge
Just
Yes, in my case here, each stratum has the same number of 0's and 1's. But I
want y to be randomly generated within each stratum, so y should differ
across the strata (at least for the rnorm part we would see much clearer
randomness).
(I hope what I am asking here is clear to you.)