Sorry to bother you. I am somewhat confused about random number generation
from a gamma distribution. If a variable X~gamma(shape, scale), I have been
told that the mean mu can be either mu=shape/scale or mu=shape*scale. Why is
that so? It is making me doubt the random generation I am using:
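(If I understand correctly, the two formulas come from the two common
parameterizations: with a rate parameter the mean is shape/rate, and with a
scale parameter it is shape*scale, where scale = 1/rate. R's rgamma() accepts
either one. A minimal check, with made-up values for shape and rate:

shape <- 3; rate <- 2
x1 <- rgamma(10000, shape=shape, rate=rate)     # mean should be near shape/rate = 1.5
x2 <- rgamma(10000, shape=shape, scale=1/rate)  # same distribution, scale parameterization
mean(x1); mean(x2)

)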
1.- The first variable I have is the precision h (the inverse of the
variance), distributed as h~gamma((T-2)/2, SSE/2), where T is the number of
observations and SSE is the sum of squared errors. How do I draw random
numbers in R with this information? How do I plug it into
rgamma(n,shape,scale)?
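My current guess, assuming SSE/2 is meant as a rate (so that E[h] = (T-2)/SSE,
the inverse of the usual variance estimate), with made-up values for T and SSE:

T <- 69; SSE <- 3.5                        # hypothetical values, for illustration only
h <- rgamma(1, shape=(T-2)/2, rate=SSE/2)  # or equivalently scale=2/SSE

Is that right, or should SSE/2 be a scale?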
2.- The second variable is L^-1 ~ gamma(T+1, (sum(v_i)-ln(r*))^-1) (please
don't mind the lack of explanation of the terms in the shape and scale
parameters). Again, how do I plug it into
rgamma(n,shape,scale)?
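Here my guess, assuming the second parameter (sum(v_i)-ln(r*))^-1 is meant as
a scale (equivalently, rate = sum(v_i)-ln(r*)), again with made-up values:

T <- 69; v <- rep(0.05, T); rstar <- 0.85                  # hypothetical values, for illustration only
Linv <- rgamma(1, shape=T+1, scale=1/(sum(v)-log(rstar)))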
3.- I am having a problem putting the results of each iteration of
for(i in 1:11000) into samp. For some reason I get 10000 identical values in
each column. What am I doing wrong? To see what the problem is, you can try
running the program. Attached are the data I am using and the program, both
with comments and without them.
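(In case it helps, a quick way to see the symptom is to look at the spread of
each column after the run:

apply(samp, 2, sd)  # a column of identical draws shows up as sd = 0

)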
I appreciate your help.
Jorge
9.22 1.00 0.41 0.88 0.74 7.70 0.08 0.39 0.27 0.36 0.30 0.65 3.12 6.80 5.68 59.23
10.24 1.00 -0.56 -0.59 0.42 9.10 0.16 0.18 0.09 0.33 -0.23 -0.25 -5.10 -5.39 3.81 82.90
9.35 1.00 -0.56 0.88 0.54 8.41 0.16 0.39 0.15 -0.49 -0.30 0.48 -4.71 7.43 4.57 70.76
8.68 1.00 0.41 -0.06 0.92 7.02 0.08 0.00 0.42 -0.02 0.37 -0.05 2.85 -0.39 6.43 49.30
7.31 1.00 -0.26 0.88 -0.30 5.99 0.03 0.39 0.04 -0.23 0.08 -0.26 -1.55 5.29 -1.78 35.90
9.17 1.00 -0.56 0.72 0.92 5.63 0.16 0.26 0.42 -0.40 -0.51 0.66 -3.15 4.07 5.16 31.75
9.66 1.00 0.22 -0.31 0.53 8.01 0.02 0.05 0.14 -0.07 0.12 -0.16 1.79 -2.48 4.25 64.10
8.63 1.00 0.41 0.45 0.52 7.50 0.08 0.10 0.14 0.18 0.21 0.24 3.04 3.40 3.90 56.18
7.60 1.00 0.41 0.76 0.92 5.30 0.08 0.29 0.42 0.31 0.37 0.69 2.15 4.01 4.85 28.07
12.03 1.00 -0.53 -0.45 -0.09 10.97 0.14 0.10 0.00 0.24 0.05 0.04 -5.84 -4.94 -1.01 120.34
10.98 1.00 0.29 0.88 0.03 10.40 0.04 0.39 0.00 0.26 0.01 0.03 3.03 9.19 0.33 108.21
11.70 1.00 -0.56 0.70 0.53 10.65 0.16 0.25 0.14 -0.39 -0.30 0.38 -5.96 7.49 5.68 113.33
8.59 1.00 0.30 0.54 -0.07 7.96 0.04 0.14 0.00 0.16 -0.02 -0.04 2.36 4.26 -0.53 63.28
8.53 1.00 0.04 -0.59 -0.30 6.91 0.00 0.18 0.04 -0.02 -0.01 0.18 0.27 -4.09 -2.05 47.72
10.80 1.00 -0.56 0.32 -0.30 10.26 0.16 0.05 0.04 -0.18 0.17 -0.09 -5.74 3.24 -3.05 105.18
8.72 1.00 -0.21 0.37 -0.30 7.87 0.02 0.07 0.04 -0.08 0.06 -0.11 -1.68 2.94 -2.34 61.98
8.46 1.00 0.41 -0.11 -0.30 7.72 0.08 0.01 0.04 -0.04 -0.12 0.03 3.13 -0.84 -2.30 59.58
8.86 1.00 0.11 0.88 0.06 8.02 0.01 0.39 0.00 0.09 0.01 0.05 0.85 7.09 0.47 64.32
10.20 1.00 -0.13 0.56 0.92 6.91 0.01 0.16 0.42 -0.07 -0.12 0.51 -0.91 3.85 6.33 47.72
9.54 1.00 -0.17 0.88 0.15 7.50 0.01 0.39 0.01 -0.15 -0.03 0.14 -1.29 6.62 1.16 56.18
7.13 1.00 -0.56 -0.59 0.92 5.30 0.16 0.18 0.42 0.33 -0.51 -0.54 -2.97 -3.14 4.85 28.07
10.33 1.00 -0.56 -0.59 0.82 9.27 0.16 0.18 0.33 0.33 -0.46 -0.48 -5.19 -5.49 7.57 85.91
10.87 1.00 0.20 -0.59 -0.25 10.02 0.02 0.18 0.03 -0.12 -0.05 0.15 1.98 -5.94 -2.52 100.43
8.53 1.00 -0.56 -0.59 0.56 7.23 0.16 0.18 0.16 0.33 -0.31 -0.33 -4.04 -4.28 4.03 52.22
8.83 1.00 0.41 0.28 0.92 7.31 0.08 0.04 0.42 0.11 0.37 0.25 2.97 2.03 6.70 53.48
8.75 1.00 0.00 0.63 0.33 7.44 0.00 0.20 0.06 0.00 0.00 0.21 0.00 4.68 2.49 55.33
6.99 1.00 0.41 0.47 -0.30 5.01 0.08 0.11 0.04 0.19 -0.12 -0.14 2.03 2.36 -1.49 25.11
9.78 1.00 -0.56 0.88 -0.08 8.56 0.16 0.39 0.00 -0.49 0.04 -0.07 -4.79 7.56 -0.68 73.21
9.44 1.00 -0.34 0.88 0.42 7.35 0.06 0.39 0.09 -0.30 -0.14 0.37 -2.51 6.49 3.06 53.96
7.99 1.00 -0.56 0.88 0.92 5.14 0.16 0.39 0.42 -0.49 -0.51 0.81 -2.87 4.54 4.71 26.38
8.95 1.00 -0.56 -0.59 0.86 7.44 0.16 0.18 0.37 0.33 -0.48 -0.51 -4.16 -4.41 6.40 55.33
9.59 1.00 0.41 -0.59 -0.30 8.94 0.08 0.18 0.04 -0.24 -0.12 0.18 3.62 -5.29 -2.66 79.85
7.97 1.00 -0.56 0.88 -0.06 7.38 0.16 0.39 0.00 -0.49 0.04 -0.06 -4.13 6.52 -0.48 54.43
9.60 1.00 0.20 0.88 0.76 8.34 0.02 0.39 0.29 0.18 0.15 0.67 1.69 7.37 6.31 69.63
7.68 1.00 0.22 0.52 0.35 6.51 0.02 0.13 0.06 0.12 0.08 0.18 1.45 3.37 2.29 42.44
8.28 1.00 0.00 0.31 0.38 6.46 0.00 0.05 0.07 0.00 0.00 0.12 0.00 2.02 2.48 41.75
10.37 1.00 -0.01 -0.08 -0.30 9.55 0.00 0.00 0.04 0.00 0.00 0.02 -0.14 -0.74 -2.84 91.14
8.92 1.00 0.10 -0.59 0.55 7.84 0.00 0.18 0.15 -0.06 0.05 -0.33 0.77 -4.64 4.34 61.40
8.82 1.00 0.41 0.88 0.25 8.23 0.08 0.39 0.03 0.36 0.10 0.22 3.34 7.27 2.03 67.72
9.08 1.00 -0.56 -0.53 -0.30 7.92 0.16 0.14 0.04 0.30 0.17 0.16 -4.43 -4.21 -2.36 62.72
11.00 1.00 -0.15 0.88 0.92 9.17 0.01 0.39 0.42 -0.14 -0.14 0.81 -1.41 8.10 8.40 84.08
11.18 1.00 -0.56 0.88 0.13 10.62 0.16 0.39 0.01 -0.49 -0.07 0.11 -5.94 9.39 1.36 112.81
10.87 1.00 -0.56 -0.59 -0.30 10.23 0.16 0.18 0.04 0.33 0.17 0.18 -5.72 -6.06 -3.04 104.56
8.97 1.00 0.41 -0.26 -0.22 8.19 0.08 0.03 0.03 -0.11 -0.09 0.06 3.32 -2.15 -1.84 67.05
9.74 1.00 0.11 -0.59 -0.30 9.09 0.01 0.18 0.04 -0.06 -0.03 0.18 0.96 -5.38 -2.70 82.65
10.12 1.00 0.00 -0.59 -0.05 9.21 0.00 0.18 0.00 0.00 0.00 0.03 0.00 -5.46 -0.47 84.83
9.08 1.00 -0.23 0.24 -0.30 8.59 0.03 0.03 0.04 -0.06 0.07 -0.07 -1.99 2.09 -2.56 73.86
9.12 1.00 -0.05 0.08 -0.30 8.50 0.00 0.00 0.04 0.00 0.01 -0.02 -0.41 0.65 -2.53 72.20
8.59 1.00 0.41 -0.59 -0.30 8.01 0.08 0.18 0.04 -0.24 -0.12 0.18 3.25 -4.74 -2.38 64.10
9.17 1.00 0.41 -0.11 -0.30 7.82 0.08 0.01 0.04 -0.05 -0.12 0.03 3.17 -0.89 -2.33 61.22
10.47 1.00 0.41 0.88 0.92 7.44 0.08 0.39 0.42 0.36 0.37 0.81 3.02 6.57 6.82 55.33
10.30 1.00 0.22 0.08 0.92 8.70 0.02 0.00 0.42 0.02 0.20 0.08 1.94 0.73 7.97 75.68
10.94 1.00 -0.56 -0.59 0.31 10.09 0.16 0.18 0.05 0.33 -0.17 -0.18 -5.64 -5.97 3.14 101.72
11.55 1.00 -0.18 0.16 0.92 8.29 0.02 0.01 0.42 -0.03 -0.17 0.15 -1.53 1.33 7.60 68.79
8.92 1.00 0.41 -0.21 -0.30 8.30 0.08 0.02 0.04 -0.08 -0.12 0.06 3.37 -1.74 -2.47 68.96
6.92 1.00 -0.56 -0.59 0.92 3.69 0.16 0.18 0.42 0.33 -0.51 -0.54 -2.06 -2.18 3.38 13.61
9.80 1.00 0.41 0.88 0.92 8.56 0.08 0.39 0.42 0.36 0.37 0.81 3.47 7.56 7.84 73.21
9.49 1.00 -0.56 -0.59 0.92 7.78 0.16 0.18 0.42 0.33 -0.51 -0.54 -4.36 -4.61 7.13 60.58
7.72 1.00 -0.14 0.08 0.92 5.99 0.01 0.00 0.42 -0.01 -0.13 0.07 -0.86 0.47 5.49 35.90
7.89 1.00 0.41 -0.59 0.92 5.92 0.08 0.18 0.42 -0.24 0.37 -0.54 2.40 -3.51 5.43 35.10
10.19 1.00 -0.14 0.88 -0.05 9.55 0.01 0.39 0.00 -0.12 0.01 -0.05 -1.31 8.38 -0.49 91.14
9.82 1.00 0.17 -0.41 0.65 8.26 0.01 0.08 0.21 -0.07 0.11 -0.26 1.37 -3.34 5.39 68.16
10.81 1.00 -0.02 -0.59 0.92 9.10 0.00 0.18 0.42 0.01 -0.02 -0.54 -0.20 -5.39 8.34 82.90
10.46 1.00 -0.25 0.16 -0.30 9.85 0.03 0.01 0.04 -0.04 0.07 -0.05 -2.48 1.53 -2.93 97.07
9.49 1.00 -0.26 -0.32 -0.30 8.99 0.03 0.05 0.04 0.08 0.08 0.09 -2.36 -2.86 -2.67 80.77
8.19 1.00 0.00 0.88 0.86 6.68 0.00 0.39 0.37 0.00 0.00 0.76 0.00 5.91 5.78 44.68
8.96 1.00 0.41 0.12 0.92 7.31 0.08 0.01 0.42 0.05 0.37 0.11 2.97 0.89 6.70 53.48
9.37 1.00 -0.56 0.88 0.51 7.50 0.16 0.39 0.13 -0.49 -0.29 0.45 -4.19 6.62 3.83 56.18
9.61 1.00 0.34 0.42 0.03 8.99 0.06 0.09 0.00 0.14 0.01 0.01 3.09 3.74 0.27 80.83
library(MASS)  # provides mvrnorm
yx <-read.table("c:/jorge/business/bayes/trans3_1_4g1.txt")
y<-as.matrix(yx[,1])
x<-as.matrix(yx[,2:16])
nobs<-nrow(x)
nvar<-ncol(x)
sig2<-0.05
rstar<-0.85
v<-matrix(data=0.05, nrow=nobs, ncol=1)
ones<-matrix(data=1, nrow=nobs, ncol=1)
effic<-matrix(data=NA, nrow=nobs, ncol=1)
samp<-matrix(data=NA, nrow=10000, ncol=nobs+nvar+2)
#(1)STARTING BIG SAMPLING LOOP
for(i in 1:11000)
{
ystar<- y - v
betahat<-as.matrix( solve(t(x) %*% x) %*% t(x) %*% ystar)
covb<-sig2*solve(t(x)%*%x)
beta <- as.matrix(mvrnorm(n=1, betahat, covb, tol=1e-6, empirical=FALSE))
#(2)SAMPLE THE PRECISION (h) FROM A CHI-SQUARE
#CONDITIONAL ON BETA AND ESTIMATE THE VARIANCE
e<-ystar-x%*%beta
sse<-t(e)%*%e
h<- rgamma(1,shape=(nobs-2)/2,rate=sse/2)  # rate, not scale, so that E[h]=(nobs-2)/sse
sig2<-1/h
#(3) SAMPLE LAMBDA, THE MEAN OF THE INEFFICIENCY
#ERROR FROM ITS CONDITIONAL
df1<- nobs+1
df2<-1/(t(v)%*%ones-log(rstar))
lambdainv<-rgamma(1,shape=df1,scale=df2)
lambda<-1/lambdainv
#(4)FIND THE CONDITIONAL MEAN OF THE TRUNCATED
#NORMAL, I.E., SAMPLE V FROM ITS CONDITIONAL
#SAMPLE FROM THE TRUNCATED NORMAL
meanv<-y-x%*%beta-(sig2/lambda)*ones
varv<-sig2*ones
b<-meanv/sqrt(varv)
p<-matrix(data=NA, ncol=1, nrow=nobs)
for(j in 1:nobs)
{
c1<- b[j]
t4<-0.450
if(c1>t4)
{
z<- -log(runif(1))/c1  # exponential rejection sampling: proposal Exp(c1)
while(runif(1)>exp(-0.5*z*z))
{
z<- -log(runif(1))/c1
}
p[j]<-c1+z
}
else
{
p[j]<- rnorm(1)  # normal rejection sampling: redraw until the deviate clears c1
while(p[j]<c1)
{
p[j]<- rnorm(1)
}
}
}
v<-meanv+p*sqrt(varv)
effic<- exp(-v)
#DROP INITIAL 1000 POINTS AND PUT REST IN SAMP
if(i>1000)
{
samp[i-1000, ]<-c( t(effic),t(beta),sig2,lambda)
}
}
mean(samp[,86])  # column 86 = nobs+nvar+2 holds lambda
hist(samp[,86])
mean(samp[,10])  # column 10 holds the efficiency of firm 10
hist(samp[,10])
library(MASS)  # provides mvrnorm
yx <-read.table("c:/jorge/business/bayes/trans3_1_4g1.txt")
y<-as.matrix(yx[,1])
x<-as.matrix(yx[,2:16])
nobs<-nrow(x)                                       #(1)number of firms
nvar<-ncol(x)                                       #(2)number of explanatory variables
sig2<-0.05                                          #(3)prior value for the variance
rstar<-0.85                                         #(4)prior for the median efficiency level
v<-matrix(data=0.05, nrow=nobs, ncol=1)             #(5)a vector of prior values for the inefficiency parameter
ones<-matrix(data=1, nrow=nobs, ncol=1)             #(6)a vector of ones
samp<-matrix(data=NA, nrow=10000, ncol=nobs+nvar+2) #(7)a big matrix where each iteration's outcome will be dumped
#(1)STARTING BIG SAMPLING LOOP
for(i in 1:11000)
{
ystar<- y - v                                             #(8)ystar is the original y minus the prior inefficiency level (0.05)
betahat<-as.matrix( solve(t(x) %*% x) %*% t(x) %*% ystar) #(9)initial regression coefficients estimated using ystar
covb<-sig2*solve(t(x)%*%x)                                #(10)covariance matrix estimated using the prior variance (0.05)
beta <- as.matrix(mvrnorm(n=1, betahat, covb, tol=1e-6, empirical=FALSE)) #(11)draw the regression coefficients from a multivariate normal(betahat, covb)
#(2)SAMPLE THE PRECISION ESTIMATE (h) FROM ITS GAMMA CONDITIONAL ON BETA,
#AND ESTIMATE SIG2 (AN INVERSE CHI-SQUARE) AS THE INVERSE OF THE PRECISION
e<-ystar-x%*%beta                          #(12)the vector of errors (residuals)
sse<-t(e)%*%e                              #(13)the sum of squared errors (residuals)
h<-rgamma(1,shape=(nobs-2)/2,rate=sse/2)   #(14)draw the precision h; note rate, not scale, so that E[h]=(nobs-2)/sse
sig2<-1/h                                  #(15)sigma squared is defined as 1/precision
#(3)SAMPLE LAMBDA, THE MEAN OF THE INEFFICIENCY
#ERROR, FROM ITS CONDITIONAL
df1<- nobs+1                               #(16)the shape parameter of lambda inverse
df2<-1/(t(v)%*%ones-log(rstar))            #(17)the scale parameter of lambda inverse
lambdainv<-rgamma(1,shape=df1,scale=df2)   #(18)draw lambda inverse using the gamma distribution, lambdainv~gamma(df1,df2), whose mean is df1*df2
lambda<-1/lambdainv                        #(19)lambda, the mean of the inefficiency error
#(4)FIND THE CONDITIONAL MEAN OF THE TRUNCATED
#NORMAL, I.E., SAMPLE V FROM ITS CONDITIONAL
#SAMPLE FROM THE TRUNCATED NORMAL
meanv<-y-x%*%beta-(sig2/lambda)*ones       #(20)the vector of conditional means of the inefficiency errors
varv<-sig2*ones                            #(21)a vector containing the variance estimate
b<-meanv/sqrt(varv)                        #(22)endpoint of the half-line interval
p<-matrix(data=NA, ncol=1, nrow=nobs)
#(23)The for loop below draws p[j] from a normal distribution truncated
#on the left; it was adapted from the function x = nmlt_rnd(b),
#written for MATLAB by James P. LeSage, Dept of Economics,
#University of Toledo, 2801 W. Bancroft St, Toledo, OH 43606,
#[EMAIL PROTECTED]
for(j in 1:nobs)
{
c1<- b[j]
t4<-0.450
if(c1>t4)
{
z<- -log(runif(1))/c1                      # exponential rejection sampling: proposal Exp(c1)
while(runif(1)>exp(-0.5*z*z))
{
z<- -log(runif(1))/c1
}
p[j]<-c1+z
}
else
{
p[j]<- rnorm(1)                            # normal rejection sampling: redraw until the deviate clears c1
while(p[j]<c1)
{
p[j]<- rnorm(1)
}
}
}
v<-meanv+p*sqrt(varv)                      #(24)once p has been drawn, it is used to compute the inefficiency error
effic<- exp(-v)                            #(25)the estimated efficiency level of each firm
#DROP INITIAL 1000 POINTS AND PUT REST IN SAMP
if(i>1000)
{
samp[i-1000, ]<-c( t(effic),t(beta),sig2,lambda) #(26)dump the results in samp for further analysis, after dropping the 1000 burn-in iterations
}
}