[R] fast matrix-vector multiplication

2009-12-14 Thread parkbomee






Hi all,

Is there a way to do matrix multiplication faster?
I am computing the product of a matrix (composed of many dummy variables) and a
vector, and the simple X %*% y takes too long.
I know using a sparse matrix would help, but I don't know how to do it in R.


Thank you.
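(For reference, a minimal sketch of the sparse approach using the Matrix package that ships with R; the matrix size and density below are made up for illustration, not taken from the poster's data:)

```r
# Sparse vs. dense matrix-vector product with the Matrix package.
library(Matrix)

set.seed(1)
X <- matrix(rbinom(1000 * 500, 1, 0.01), nrow = 1000)  # mostly-zero dummies
y <- rnorm(ncol(X))

Xs <- Matrix(X, sparse = TRUE)  # convert once, reuse in every product
dense_res  <- X  %*% y
sparse_res <- Xs %*% y          # same numbers, far less arithmetic

all.equal(as.vector(dense_res), as.vector(sparse_res))  # TRUE
```

The one-time conversion with `Matrix(..., sparse = TRUE)` pays off when the same matrix is multiplied repeatedly, e.g. inside an optimizer.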


  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] multi-dimensional array with different number of dimensions?

2009-12-12 Thread parkbomee

Hi,

Is it possible to have an array whose slices have different dimensions?
That is, for a three-dimensional array, can the slices along the third
dimension be matrices of different sizes?

> array
, , 1
     [,1] [,2] [,3]
[1,]    1    1    1

, , 2
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    1    1    1

, , 3
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    1    1    1
[3,]    1    1    1

something like this??


Thanks,
B
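(R arrays must be rectangular, so a ragged third dimension is not possible; a list of matrices is the usual substitute. A minimal sketch of the shape asked about above:)

```r
# "Ragged array": a list whose i-th element is an i x 3 matrix of ones,
# matching the sketch in the question (slices of sizes 1x3, 2x3, 3x3).
ragged <- lapply(1:3, function(i) matrix(1, nrow = i, ncol = 3))

ragged[[2]]       # the 2 x 3 "slice"
dim(ragged[[3]])  # 3 3
```

Elements are then indexed as `ragged[[k]][i, j]` instead of `a[i, j, k]`.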
  


Re: [R] avoiding loop

2009-11-03 Thread parkbomee


Thanks for your help.


 Date: Mon, 2 Nov 2009 18:50:42 -0500
 Subject: Re: [R] avoiding loop
 From: jholt...@gmail.com
 To: bbom...@hotmail.com
 CC: mtmor...@fhcrc.org; r-help@r-project.org
 
 The first thing I would suggest is convert your dataframes to matrices
 so that you are not having to continually convert them in the calls to
 the functions.  Also I am not sure what the code:
 
   realized_prob = with(DF, {
       ind <- (CHOSEN == 1)
       n <- tapply(theta_multiple[ind], CS[ind], sum)
       d <- tapply(theta_multiple, CS, sum)
       n / d
   })
 
 is doing.  It looks like 'n' and 'd' might have different lengths
 since they are being created from two different groupings (CS and
 CS[ind]).  I have no idea why you are converting to the DF
 dataframe.  There is no need for that.  You could just leave the
 vectors (e.g., theta_multiple, CS and ind) as they are and work with
 them.  This is probably where most of your time is being spent.  So if
 you start with matrices and leave the dataframes out of the main loop
 you will probably see an increase in performance.
 
 2009/11/2 parkbomee bbom...@hotmail.com:
  This is the Rprof() report by self time.
  Is it also possible that these routines, which account for a lot of
  self.time, are causing optim() to be slow?
 
 
  $by.self
  self.time self.pct total.time total.pct
  FUN   94.16 16.5  94.16  16.5
  unlist80.46 14.1 120.54  21.1
  lapply76.94 13.5 255.48  44.7
  match 60.76 10.6  60.88  10.7
  as.matrix.data.frame  31.00  5.4  51.12   8.9
  as.character  29.28  5.1  29.28   5.1
  unique.default24.36  4.3  24.40   4.3
  data.frame21.06  3.7  55.78   9.8
  split.default 20.42  3.6  84.38  14.8
  tapply13.84  2.4 414.28  72.5
  structure 11.32  2.0  22.36   3.9
  factor11.08  1.9 127.68  22.3
 attributes<- 11.00  1.9  11.00   1.9
  ==10.56  1.8  10.56   1.8
  %*%   10.30  1.8  10.30   1.8
  as.vector 10.22  1.8  10.22   1.8
  as.integer 9.86  1.7   9.86   1.7
  list   9.64  1.7   9.64   1.7
  exp7.12  1.2   7.12   1.2
  as.data.frame.integer  5.98  1.0   8.10   1.4
 
  To: bbom...@hotmail.com
  CC: jholt...@gmail.com; r-help@r-project.org
  Subject: Re: [R] avoiding loop
  From: mtmor...@fhcrc.org
  Date: Sun, 1 Nov 2009 22:14:09 -0800
 
  parkbomee bbom...@hotmail.com writes:
 
   Thank you all.
  
   What Chuck has suggested might not be applicable since the number of
   different times is around 40,000.
  
   The object of optimization in my function is the varying value,
   which is basically data * parameter, of which parameter is the
   object of optimization..
  
   And from the r profiling with a subset of data,
   I got this report... any idea what <Anonymous> is?
  
  
   $by.total
   total.time total.pct self.time self.pct
   <Anonymous> 571.56 100.0 0.02 0.0
   optim 571.56 100.0 0.00 0.0
   fn 571.54 100.0 0.98 0.2
 
  You're giving us 'by.total', so these are saying that all the time was
  spent in these functions or the functions they called. Probably all
  are in 'optim' and its arguments; since little self.time is spent
  here, there isn't much to work with.
 
   eval 423.74 74.1 0.00 0.0
   with.default 423.74 74.1 0.00 0.0
   with 423.74 74.1 0.00 0.0
 
  These are probably in the internals of optim, where the function
  you're trying to optimize is being set up for evaluation. Again
  there's little self.time, and all these say is that a big piece of the
  time is being spent in code called by this code.
 
   tapply 414.28 72.5 13.84 2.4
   lapply 255.48 44.7 76.94 13.5
   factor 127.68 22.3 11.08 1.9
   unlist 120.54 21.1 80.46 14.1
   FUN 94.16 16.5 94.16 16.5
 
  these look like they are tapply-related calls (looking at the code for
  tapply, it calls lapply, factor, and unlist, and FUN is the function
  argument to tapply), perhaps from the function you're optimizing (did
  you implement this as suggested below? it would really help to have a
  possibly simplified version of the code you're calling).
 
  There is material to work with here, as apparently a fairly large
  amount of self.time is being spent in each of these functions. So
  here's a sample data set
 
  n <- 10
  set.seed(123)
  df <- data.frame(time=sort(as.integer(ceiling(runif(n)*n/5))),
  value=ceiling(runif(n)*5))

Re: [R] avoiding loop

2009-11-02 Thread parkbomee

This is the Rprof() report by self time.
Is it also possible that these routines, which account for a lot of self.time,
are causing optim() to be slow?


$by.self
self.time self.pct total.time total.pct
FUN   94.16 16.5  94.16  16.5
unlist80.46 14.1 120.54  21.1
lapply76.94 13.5 255.48  44.7
match 60.76 10.6  60.88  10.7
as.matrix.data.frame  31.00  5.4  51.12   8.9
as.character  29.28  5.1  29.28   5.1
unique.default24.36  4.3  24.40   4.3
data.frame21.06  3.7  55.78   9.8
split.default 20.42  3.6  84.38  14.8
tapply13.84  2.4 414.28  72.5
structure 11.32  2.0  22.36   3.9
factor11.08  1.9 127.68  22.3
attributes<- 11.00  1.9  11.00   1.9
==10.56  1.8  10.56   1.8
%*%   10.30  1.8  10.30   1.8
as.vector 10.22  1.8  10.22   1.8
as.integer 9.86  1.7   9.86   1.7
list   9.64  1.7   9.64   1.7
exp7.12  1.2   7.12   1.2
as.data.frame.integer  5.98  1.0   8.10   1.4

 To: bbom...@hotmail.com
 CC: jholt...@gmail.com; r-help@r-project.org
 Subject: Re: [R] avoiding loop
 From: mtmor...@fhcrc.org
 Date: Sun, 1 Nov 2009 22:14:09 -0800
 
 parkbomee bbom...@hotmail.com writes:
 
  Thank you all.
 
  What Chuck has suggested might not be applicable since the number of
  different times is around 40,000.
 
  The object of optimization in my function is the varying value,
  which is basically data * parameter, of which parameter is the
  object of optimization..
   
  And from the r profiling with a subset of data,
  I got this report... any idea what <Anonymous> is?
 
 
  $by.total
  total.time total.pct self.time self.pct
  <Anonymous>   571.56 100.0  0.02  0.0
  optim 571.56 100.0  0.00  0.0
  fn571.54 100.0  0.98  0.2
 
 You're giving us 'by.total', so these are saying that all the time was
 spent in these functions or the functions they called. Probably all
 are in 'optim' and its arguments; since little self.time is spent
 here, there isn't much to work with.
 
  eval  423.74  74.1  0.00  0.0
  with.default  423.74  74.1  0.00  0.0
  with  423.74  74.1  0.00  0.0
 
 These are probably in the internals of optim, where the function
 you're trying to optimize is being set up for evaluation. Again
 there's little self.time, and all these say is that a big piece of the
 time is being spent in code called by this code.
 
  tapply414.28  72.5 13.84  2.4
  lapply255.48  44.7 76.94 13.5
  factor127.68  22.3 11.08  1.9
  unlist120.54  21.1 80.46 14.1
  FUN94.16  16.5 94.16 16.5
 
 these look like they are tapply-related calls (looking at the code for
 tapply, it calls lapply, factor, and unlist, and FUN is the function
 argument to tapply), perhaps from the function you're optimizing (did
 you implement this as suggested below?  it would really help to have a
 possibly simplified version of the code you're calling).
 
 There is material to work with here, as apparently a fairly large
 amount of self.time is being spent in each of these functions. So
 here's a sample data set
 
   n <- 10
   set.seed(123)
   df <- data.frame(time=sort(as.integer(ceiling(runif(n)*n/5))),
value=ceiling(runif(n)*5))
 
 It would have been helpful for you to provide reproducible code like
 that above, so that the characteristics of your data were easily
 reproducible. Let's time tapply
 
  replicate(5, {
 + system.time(x0 <- tapply0(df$value, df$time, sum), gcFirst=TRUE)[[1]]
 + })
 [1] 0.316 0.316 0.308 0.320 0.304
 
 tapply is quite general, but in your case I think you'd be happy with
 
   tapply1 <- function(X, INDEX, FUN)
       unlist(lapply(split(X, INDEX), FUN), use.names=FALSE)
 
  replicate(5, {
 + system.time(x1 <- tapply1(df$value, df$time, sum), gcFirst=TRUE)[[1]]
 + })
 [1] 0.156 0.148 0.152 0.144 0.152
 
 so about twice the speed (timing depends quite a bit on what 'time' is,
 integer or numeric or character or factor). The vector values of the
 two calculations are identical, though tapply presents the data as an
 array with column names
 
  identical(as.vector(x0), x1)
 [1] TRUE
 
 tapply allows FUN to be anything, but if the interest is in the sum of
 each time interval

Re: [R] avoiding loop

2009-11-01 Thread parkbomee

Thank you all.

What Chuck has suggested might not be applicable since the number of different 
times is around 40,000.
The quantity in my objective function is a varying value, which is basically
data * parameter, and the parameter is what is being optimized..
 
And from the r profiling with a subset of data,
I got this report... any idea what <Anonymous> is?


$by.total
total.time total.pct self.time self.pct
<Anonymous>   571.56 100.0  0.02  0.0
optim 571.56 100.0  0.00  0.0
fn571.54 100.0  0.98  0.2
eval  423.74  74.1  0.00  0.0
with.default  423.74  74.1  0.00  0.0
with  423.74  74.1  0.00  0.0
tapply414.28  72.5 13.84  2.4
lapply255.48  44.7 76.94 13.5
factor127.68  22.3 11.08  1.9
unlist120.54  21.1 80.46 14.1
FUN94.16  16.5 94.16 16.5
...


 Date: Sun, 1 Nov 2009 15:35:41 -0400
 Subject: Re: [R] avoiding loop
 From: jholt...@gmail.com
 To: bbom...@hotmail.com
 CC: dwinsem...@comcast.net; d.rizopou...@erasmusmc.nl; r-help@r-project.org
 
 What you need to do is to understand how to use Rprof so that you can
 determine where the time is being spent.  It probably indicates that
 this is not the source of slowness in your optimization function.  How
 much time are we talking about?  You may spend more time trying to
 optimize the function than just running the current version, even if it
 is slow (slow is a relative term and does not hold much meaning
 without some context around it).
 
 On Sat, Oct 31, 2009 at 11:36 PM, parkbomee bbom...@hotmail.com wrote:
 
  Thank you both.
 
  However, using tapply() instead of a loop does not seem to improve my code
  much.
  I am using this inside an optimization function,
  and it still takes more time than it should...
 
 
 
  CC: bbom...@hotmail.com; r-help@r-project.org
  From: dwinsem...@comcast.net
  To: d.rizopou...@erasmusmc.nl
  Subject: Re: [R] avoiding loop
  Date: Sat, 31 Oct 2009 22:26:17 -0400
 
  This is pretty much equivalent:
 
  tapply(DF$value[DF$choice==1], DF$time[DF$choice==1], sum) /
   tapply(DF$value, DF$time, sum)
 
  And both will probably fail if the number of groups with choice==1 is
  different than the number overall.
 
  --
  David.
 
  On Oct 31, 2009, at 5:14 PM, Dimitris Rizopoulos wrote:
 
   one approach is the following:
  
   # say 'DF' is your data frame, then
   with(DF, {
   ind <- choice == 1
   n <- tapply(value[ind], time[ind], sum)
   d <- tapply(value, time, sum)
  n / d
   })
  
  
   I hope it helps.
  
   Best,
   Dimitris
  
  
   parkbomee wrote:
   Hi all,
   I am trying to figure out a way to improve my code's efficiency by
   avoiding the use of loop.
   I want to calculate a conditional mean(?) given time.
   For example, from the data below, I want to calculate sum((value|
   choice==1)/sum(value)) across time.
   Is there a way to do it without using a loop?
    time  cum_time  choice  value
    1     4         1       3
    1     4         0       2
    1     4         0       3
    1     4         0       3
    2     6         1       4
    2     6         0       4
    2     6         0       2
    2     6         0       4
    2     6         0       2
    2     6         0       2
    3     4         1       2
    3     4         0       3
    3     4         0       5
    3     4         0       2
   My code looks like
    objective[1] = value[1] / sum(value[1:cum_time[1]])
    for (i in 2:max(time)){
        objective[i] = value[cum_time[i-1]+1] /
            sum(value[(cum_time[i-1]+1):cum_time[i]])
    }
   sum(objective)
   Anyone have an idea that I can do this without using a loop??
   Thanks.
  
  
   --
   Dimitris Rizopoulos
   Assistant Professor
   Department of Biostatistics
   Erasmus University Medical Center
  
   Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
   Tel: +31/(0)10/7043478
   Fax: +31/(0)10/7043014
  

[R] avoiding loop

2009-10-31 Thread parkbomee

Hi all,

I am trying to figure out a way to improve my code's efficiency by avoiding the
use of a loop.
I want to calculate a conditional mean(?) given time.
For example, from the data below, I want to calculate 
sum((value|choice==1)/sum(value)) across time.
Is there a way to do it without using a loop?

time  cum_time  choice  value
1     4         1       3
1     4         0       2
1     4         0       3
1     4         0       3
2     6         1       4
2     6         0       4
2     6         0       2
2     6         0       4
2     6         0       2
2     6         0       2
3     4         1       2
3     4         0       3
3     4         0       5
3     4         0       2



My code looks like

objective[1] = value[1] / sum(value[1:cum_time[1]])
for (i in 2:max(time)){
    objective[i] = value[cum_time[i-1]+1] /
        sum(value[(cum_time[i-1]+1):cum_time[i]])
}
sum(objective)


Anyone have an idea that I can do this without using a loop??
Thanks.

  



Re: [R] avoiding loop

2009-10-31 Thread parkbomee

Thank you both.

However, using tapply() instead of a loop does not seem to improve my code much.
I am using this inside an optimization function,
and it still takes more time than it should...



 CC: bbom...@hotmail.com; r-help@r-project.org
 From: dwinsem...@comcast.net
 To: d.rizopou...@erasmusmc.nl
 Subject: Re: [R] avoiding loop
 Date: Sat, 31 Oct 2009 22:26:17 -0400
 
 This is pretty much equivalent:
 
 tapply(DF$value[DF$choice==1], DF$time[DF$choice==1], sum) /
  tapply(DF$value, DF$time, sum)
 
 And both will probably fail if the number of groups with choice==1 is  
 different than the number overall.
 
 -- 
 David.
 
 On Oct 31, 2009, at 5:14 PM, Dimitris Rizopoulos wrote:
 
  one approach is the following:
 
  # say 'DF' is your data frame, then
  with(DF, {
  ind <- choice == 1
  n <- tapply(value[ind], time[ind], sum)
  d <- tapply(value, time, sum)
 n / d
  })
 
 
  I hope it helps.
 
  Best,
  Dimitris
 
 
  parkbomee wrote:
  Hi all,
  I am trying to figure out a way to improve my code's efficiency by  
  avoiding the use of loop.
  I want to calculate a conditional mean(?) given time.
  For example, from the data below, I want to calculate sum((value| 
  choice==1)/sum(value)) across time.
  Is there a way to do it without using a loop?
  time  cum_time  choice  value
  1     4         1       3
  1     4         0       2
  1     4         0       3
  1     4         0       3
  2     6         1       4
  2     6         0       4
  2     6         0       2
  2     6         0       4
  2     6         0       2
  2     6         0       2
  3     4         1       2
  3     4         0       3
  3     4         0       5
  3     4         0       2
  My code looks like
  objective[1] = value[1] / sum(value[1:cum_time[1]])
  for (i in 2:max(time)){
      objective[i] = value[cum_time[i-1]+1] /
          sum(value[(cum_time[i-1]+1):cum_time[i]])
  }
  sum(objective)
  Anyone have an idea that I can do this without using a loop??
  Thanks.

 
  -- 
  Dimitris Rizopoulos
  Assistant Professor
  Department of Biostatistics
  Erasmus University Medical Center
 
  Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
  Tel: +31/(0)10/7043478
  Fax: +31/(0)10/7043014
 
 
 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT
 
  



Re: [R] Efficient way to code using optim()

2009-10-30 Thread parkbomee

Hi all,
 
I am trying to estimate a simple logit model.
By using MLE, I am maximizing the log likelihood, with optim().
The thing is, each observation has a different set of choice options, so I need a
loop inside the objective function,
which I think slows down the optimization process.

The data is constructed so that each row represents the characteristics of one
alternative,
and CS is a variable that indexes the choice situations (say, 1 ~ number of
observations).
cum_count is the "cumulative" count over choice situations, i.e. the number of
available alternatives up to and including each CS.
So I am maximizing the sum of [exp(U(chosen)) / sum(exp(U(all alternatives)))]

When I have 6-7 predictors, the running time is about 10 minutes, and it slows
down exponentially as I have more predictors. (More thetas to estimate)
I want to know if there is a way I can improve the running time.
Below is my code..
 
simple_logit = function(theta){
    realized_prob = rep(0, max(data$CS))
    theta_multiple = as.matrix(data[, 4:35]) %*% as.matrix(theta)
    realized_prob[1] = exp(theta_multiple[1]) /
        sum(exp(theta_multiple[1:cum_count[1]]))
    for (i in 2:length(realized_prob)){
        realized_prob[i] = exp(theta_multiple[cum_count[i-1] + 1]) /
            sum(exp(theta_multiple[(cum_count[i-1] + 1):cum_count[i]]))
    }
    -sum(log(realized_prob))
}
 
initial = rep(0, 32)
out33 = optim(initial, simple_logit, method="BFGS", hessian=TRUE)
 
 
 
Many thanks in advance!!! 
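(For what it's worth, the per-CS loop above can be replaced by a grouped computation with rowsum(); a sketch under the assumptions that the rows are sorted by CS and each choice situation has exactly one chosen row. The argument names below are hypothetical, not from the original post:)

```r
# Loop-free multinomial logit negative log-likelihood (sketch).
# Assumes: rows are sorted by CS, and 'chosen' marks exactly one row per CS.
simple_logit_vec <- function(theta, X, CS, chosen) {
    u <- as.vector(X %*% theta)   # utility of every alternative
    denom <- rowsum(exp(u), CS)   # one denominator per choice situation
    ll <- u[chosen == 1] - log(denom[, 1])
    -sum(ll)                      # negative log-likelihood, for optim()
}
```

rowsum() groups in one vectorized pass, so the cost no longer grows with the number of choice situations in R-level loop iterations; for long utilities a log-sum-exp shift (subtracting max(u) per group) would add numerical stability.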



Re: [R] Efficient way to code using optim()

2009-10-30 Thread parkbomee

Thank you.

But I'd prefer using a hand-written function, which allows me more flexible
model specification.
Later on, I could have random parameters.
So I want to know if there is a more efficient way, so that I can speed it up.


 Date: Fri, 30 Oct 2009 16:10:29 -0600
 To: bbom...@hotmail.com
 CC: r-help@r-project.org
 Subject: Re: [R] Efficient way to code using optim()
 From: gpet...@uark.edu
 
 
 Unless this is a homework problem, you would be much better off using
 glm(). 
 
 Giovanni
 
  Date: Fri, 30 Oct 2009 12:23:45 -0700
  From: parkbomee bbom...@hotmail.com
  Sender: r-help-boun...@r-project.org
  Importance: Normal
  Precedence: list
  
  
  
  
  Hi all,
   
  I am trying to estimate a simple logit model.
  By using MLE, I am maximizing the log likelihood, with optim().
  The thing is, each observation has different set of choice options, so I 
  need a loop inside the objective function,
  which I think slows down the optimization process.
   
  The data is constructed so that each row represent the characteristics for 
  one alternative,
  and CS is a variable that represents choice situations. (say, 1 ~ Number of 
  observations)
  cum_count is the "cumulative" count over choice situations, i.e. the
  number of available alternatives in each CS.
  So I am maximizing the sum of [exp(U(chosen)) / sum(exp(U(all 
  alternatives)))]
   
  When I have 6-7 predictors, the running time is about 10 minutes, and it
  slows down exponentially as I have more predictors. (More thetas to
  estimate)
  I want to know if there is a way I can improve the running time.
  Below is my code..
   
  simple_logit = function(theta){
      realized_prob = rep(0, max(data$CS))
      theta_multiple = as.matrix(data[, 4:35]) %*% as.matrix(theta)
      realized_prob[1] = exp(theta_multiple[1]) /
          sum(exp(theta_multiple[1:cum_count[1]]))
      for (i in 2:length(realized_prob)){
          realized_prob[i] = exp(theta_multiple[cum_count[i-1] + 1]) /
              sum(exp(theta_multiple[(cum_count[i-1] + 1):cum_count[i]]))
      }
      -sum(log(realized_prob))
  }
   
  initial = rep(0, 32)
  out33 = optim(initial, simple_logit, method="BFGS", hessian=TRUE)
   
   
   
  Many thanks in advance!!! 
  



[R] fixed effects regression

2009-02-06 Thread parkbomee

Hi everyone,

I am running a fixed effects model in R.
The thing is, my own code and the canned routine keep returning different
estimates for the fixed-effect factors and the intercept.
(The other variables' estimates and the R^2 are the same!!! Weird!!)

I attach my own code, just in case.
I am pretty sure it has no fundamental error. Well, hopefully. I
have been using this code for a while..

And does anyone know how I can include another fixed effect in this code? :(
Any help will be deeply appreciated.





feols = function(y, X, id) {
    n = length(y)
    uniq_id = unique(id)
    n_id = length(uniq_id)

    # dummy variables for all but the last id (the omitted baseline level)
    dummy = matrix(0, nrow = n, ncol = n_id - 1)
    for (i in 1:(n_id - 1))
        dummy[id == uniq_id[i], i] = 1

    X = cbind(rep(1, n), X, dummy)  # intercept, regressors, FE dummies
    k = ncol(X)

    invXX = solve(t(X) %*% X)
    b = as.numeric(invXX %*% (t(X) %*% y))       # OLS coefficients
    res = y - X %*% b
    s2 = as.numeric(t(res) %*% res / (n - k))    # residual variance
    omega = s2 * invXX                           # coefficient covariance
    se = sqrt(diag(omega))
    dev = y - mean(y)
    r2 = 1 - as.numeric(t(res) %*% res) / as.numeric(t(dev) %*% dev)
    t = b / se
    list(b = b, se = se, t = t, r2 = r2, s2 = s2, omega = omega, res = res)
}
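(A quick way to sanity-check output like this against a canned routine is to fit the same dummy-variable regression with lm() and factor(); a self-contained sketch on simulated data, with names and numbers that are purely illustrative:)

```r
# Simulated panel: three groups with group-specific intercept shifts.
set.seed(42)
n  <- 60
id <- rep(1:3, each = 20)
x  <- rnorm(n)
y  <- 1 + 2 * x + c(0.5, -0.3, 0)[id] + rnorm(n, sd = 0.1)

fit <- lm(y ~ x + factor(id))  # intercept + slope + two group dummies
coef(fit)["x"]                 # slope estimate, close to the true 2
```

The slope coefficients should agree between lm() and a hand-rolled dummy OLS; the intercept and dummy estimates depend on which group is the omitted baseline, which is a common source of the kind of discrepancy described above. A second fixed effect can be added the same way, e.g. `+ factor(id2)` for a hypothetical second grouping variable.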



B


