I think you need to worry about correctness before you worry about speed, 
because as it is this code makes no sense to me. For example, at the top of 
the code you define:

    R = size(X, 2)

This means R is an integer. Then, inside your loop, you do:

    tmp = randperm(R)

Calling randperm(R) on an integer R returns the integers from 1 to R in a 
random order, so now tmp is a random permutation of 1:R. Then you do:

    r = tmp[1]

So, now r is the first integer from the randomly ordered vector tmp, which 
is the same as setting r to a random integer in 1:R. Then you do:

    comp_r = setdiff(tmp, r)

This sets comp_r to be a vector of the integers in tmp that are not equal 
to r. But since tmp is a permutation of 1:R, every element is unique, so 
this is the same as comp_r = tmp[2:R]. Then:

    comp_r = comp_r[1]

So now comp_r is effectively the second element of the random permutation 
of 1:R. What you've effectively done is pick a random integer in 1:R 
(distinct from r), but you've gone through a lot of computation to do so, 
and you do this every iteration. I think you have made a mistake somewhere, 
because you then go on to:

    for k in comp_r

Since comp_r is an integer, this loop will run only once, with k taking on 
the value of comp_r. The first thing you do in the loop is:

    ind_k = find(comp_r == k)

which doesn't make any sense: comp_r and k are always the same value here, 
so comp_r == k is always true and you will always get a vector containing a 
single 1. I assume comp_r is supposed to be a vector of something, but I 
don't know enough about the algorithm you are trying to implement to tell 
what it is you're trying to do.
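To make the point concrete, here is a minimal sketch (with a made-up R, 
since I don't have your X) showing that the whole randperm/setdiff dance 
collapses to indexing the permutation directly. Note that in current Julia, 
randperm lives in the Random standard library:

```julia
using Random  # randperm is in the Random stdlib in current Julia

R = 10                       # stand-in for size(X, 2)

tmp    = randperm(R)         # the integers 1:R in random order
r      = tmp[1]              # a random integer in 1:R
comp_r = setdiff(tmp, r)[1]  # first of the remaining elements

# Because tmp has no duplicates, setdiff(tmp, r) is just tmp[2:R],
# so comp_r is always the second element of the permutation:
@assert comp_r == tmp[2]

# The same two draws, directly:
r, comp_r = tmp[1], tmp[2]   # two distinct random integers in 1:R
```

If comp_r was instead meant to stay a whole vector, so that the 
`for k in comp_r` loop visits every index other than r, then dropping the 
`comp_r = comp_r[1]` line and keeping comp_r as setdiff(tmp, r) (i.e. 
tmp[2:R]) would do that.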

On Sunday, November 30, 2014 9:29:14 AM UTC+2, Napoleon Vargas wrote:
>
> Dear Julia users,
>
> I'm new to Julia, but from what I've been reading it looks like it has a 
> very bright future. I've been working on a function in R that performs 
> Metropolis-within Gibbs to estimate linear mixing proportions. The code in 
> R works well, but I wanted to speed things up, since I have to try 
> different specifications, data, etc., and redoing analysis with several 
> data points takes some time. I attached my current R function (along with 
> simulated data), and what I think is the "equivalent" Julia code. It is not 
> yet complete (it is not wrapped in a function yet), as there are some 
> pieces of code that I couldn't quite figure out (I commented out some lines 
> in order to make it work temporarily, but the results are not what they're 
> supposed to). The R version right now takes around 11 seconds for 25000 
> iterations on a single observation, but for my validation I would probably 
> have to try it on several hundred observations, thus my wanting to speed 
> things up. 
>
> I'd appreciate any help or suggestion.
>
> Napo.
>
