[julia-users] Re: Good, short set of slides introducing Julia

2015-09-10 Thread Carlos Becker
Hi Andrew, my slides are here: 
https://sites.google.com/site/carlosbecker/a-few-notes . They are for v0.3. 

If you need the openoffice original let me know, I can send it to you.
Cheers.

On Wednesday, September 9, 2015 at 14:07:36 (UTC+2), andrew cooke 
wrote:
>
> ok, thanks everyone, I'll have a look at all those.  andrew
>
> On Tuesday, 8 September 2015 17:58:33 UTC-3, andrew cooke wrote:
>>
>>
>> I need to give a presentation at work and was wondering if slides already 
>> exist that:
>>
>>   * show how fast it is in benchmarks
>>
>>   * show that it's similar to matlab (matrix stuff)
>>
>>   * show that you can write fast inner loops
>>
>>  For bonus points:
>>
>>   * show how you can add other numerical types at no "cost"
>>
>>   * show how multiple dispatch can be useful
>>
>>   * show how someone used to OO in, say, python, won't feel too lost
>>
>> Preferably just one slide per point.  Very short.
>>
>> Thanks,
>> Andrew
>>
>>

Re: [julia-users] REPL v0.3, matlab-like completion

2014-08-20 Thread Carlos Becker
Hi Tim, yes, I do know about Ctrl-R, but it is much lazier to use the 
up/down arrows, particularly when navigating history (otherwise you have to 
use Ctrl-R repeatedly, and Ctrl-Shift-R to go back).

On Wednesday, August 20, 2014 at 00:39:33 UTC+2, Tim Holy wrote:

 You know about the Ctrl-r shortcut though, right? 
 --Tim 

 On Tuesday, August 19, 2014 10:57:19 PM Carlos Becker wrote: 
  Hi all, 
  
  I think this is a frequently asked question, but I don't know whether it is 
  possible now in julia v0.3. 
  Namely, to make the up/down arrows search completion history in the REPL. If 
  so, I will be happy to document it in the FAQ section of the docs. 
  
  Thanks. 
  
  -- 
  Carlos 



[julia-users] REPL v0.3, matlab-like completion

2014-08-19 Thread Carlos Becker
Hi all,

I think this is a frequently asked question, but I don't know whether it is
possible now in julia v0.3.
Namely, to make the up/down arrows search completion history in the REPL. If
so, I will be happy to document it in the FAQ section of the docs.

Thanks.

--
Carlos


Re: [julia-users] Re: 0.3.0 Release Candidate 4 released

2014-08-15 Thread Carlos Becker
To keep up to date with the latest changes for v0.3, would it suffice to 
check out the release-0.3 branch periodically?
Is that the branch supposed to have the latest development code for v0.3?

Thanks.

On Friday, August 15, 2014 at 07:25:28 UTC+2, Elliot Saba wrote:

 Your packages should remain untouched through upgrades on minor versions. 
 (E.g. if you were on a 0.3.0 prerelease version before, upgrading to 
 0.3.0-rc4 or even 0.3.0-final should not affect your packages)

 If you are on 0.2.1, your packages will probably need to be reinstalled, 
 as Julia separates major versions in the package manager.  So you'll just 
 need to Pkg.add() all the packages you had before.  This won't erase your 
 0.2.1 packages; they will persist as long as your `~/.julia/v0.2` directory 
 persists.
 -E


 On Fri, Aug 15, 2014 at 1:06 AM, KK Sasa genw...@gmail.com 
 wrote:

 A very basic question: How to update without losing packages? Just 
 re-install?





Re: [julia-users] Typo in local variable name not detected

2014-08-14 Thread Carlos Becker
Hi all.

I have been busy and have not been following the julia development news. Is 
there any news on this topic? 

What I find dangerous is mistakenly referencing a global variable from a 
local context, when that is not intended.
To me it seems worth adding a qualifier to specify that whatever is not 
declared as 'global' should be treated as local only (or an error should be thrown).
This could also be a julia flag. Do these ideas seem reasonable?

Cheers.

On Saturday, March 8, 2014 at 03:40:37 UTC+1, Stefan Karpinski wrote:

 How about check_locals? You can check for both unused and potentially 
 unassigned locals.

 On Mar 7, 2014, at 5:39 PM, Leah Hanson astri...@gmail.com 
 wrote:

 Adding that to TypeCheck sounds pretty reasonable. Functions already 
 provide their local variable names, so it would be a matter of finding all 
 variable usages (excluding LHS assignments). I can probably find time in 
 the next week or so to add it. Maybe check_for_unused_local_variables? 
 (which seems long, but descriptive)

 -- Leah


 On Fri, Mar 7, 2014 at 4:02 PM, Jiahao Chen jia...@mit.edu 
 wrote:

 On Fri, Mar 7, 2014 at 4:22 PM, Stefan Karpinski ste...@karpinski.org 
 wrote:
  I would prefer to have opt-in (but easy to use) code analysis that can 
 tell
  you that anwser is an unused variable (or in slight variations of this
  code, that answer or anwser is always or sometimes not assigned).

 That sounds like -Wimplicit in Fortran compilers, which forces IMPLICIT 
 NONE.
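The hazard under discussion can be sketched in a few lines of current Julia syntax; the misspelled `anwser` is the deliberate typo, and `compute` is just an illustrative name:

```julia
answer = 0                  # a global binding

function compute()
    anwser = 42             # typo: silently creates a brand-new local; the global is untouched
    return answer           # reads the global, not the intended local
end

compute()                   # returns 0, with no warning from the compiler
```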




Re: [julia-users] A[]: Accessing array with empty index

2014-06-01 Thread Carlos Becker
I see now. I thought 0-dimensional arrays could not contain any element at
all.

For consistency, wouldn't it be better to throw an error when myArray[] is
used on non-zero-dimensional arrays?
It looks like a typo that would be difficult to find.


--
Carlos


On Fri, May 30, 2014 at 11:45 PM, Steven G. Johnson stevenj@gmail.com
wrote:



 On Friday, May 30, 2014 5:19:35 PM UTC-4, Carlos Becker wrote:

 Hi Jacob,

 I get that, but what is the reasoning behind myArray[]?
 Why should it return a ref to the 1st element?


 Because you can have a 0-dimensional array, and a 0-dimensional array
 contains exactly 1 element (the empty product of the dimensions).
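Both cases can be sketched in current Julia syntax; note that later Julia releases adopted essentially the behavior proposed above, restricting empty indexing to one-element arrays:

```julia
z = fill(3.0)        # 0-dimensional array: holds exactly one element
z[]                  # empty indexing is its natural accessor; returns 3.0

v = [1, 2, 3, 4]
# In Julia 0.3, v[] returned the first element; in current Julia it
# throws a BoundsError for arrays with more than one element.
```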



[julia-users] A[]: Accessing array with empty index

2014-05-30 Thread Carlos Becker
My apologies if this is something that was addressed before; I didn't find 
it.

Why does myArray[] return the first element of the array? Is there a 
reasoning behind it, or is it an 'unexpected language feature'?
For example:

a = [1,2,3,4]
a[]    # returns 1


Thanks.


Re: [julia-users] A[]: Accessing array with empty index

2014-05-30 Thread Carlos Becker
Hi Jacob,

I get that, but what is the reasoning behind myArray[]?
Why should it return a ref to the 1st element?




--
Carlos


On Fri, May 30, 2014 at 11:09 PM, Jacob Quinn quinn.jac...@gmail.com
wrote:

 a[] is rewritten as `getindex(a)`, which has a definition in array.jl#244

 getindex(a::Array) = arrayref(a,1)

 -Jacob


 On Fri, May 30, 2014 at 5:05 PM, Carlos Becker carlosbec...@gmail.com
 wrote:

 My apologies if this is something that was addressed before, I didn't
 find it.

 Why does myArray[] return the first element of the array? Is there a
 reasoning behind it, or is it an 'unexpected language feature'?
 For example:

 a = [1,2,3,4]
 a[]    # returns 1


 Thanks.





Re: [julia-users] fill! with copies

2014-05-18 Thread Carlos Becker
Thanks all, those look like neat solutions.


--
Carlos


On Fri, May 16, 2014 at 11:36 PM, Stefan Karpinski ste...@karpinski.org wrote:

 When you write fill!(arr, ChannVals()) you are asking to fill arr with
 the one value that is the result of evaluating ChannVals() once. Doing
 anything else would be bizarre. We could have a version of fill! that takes
 a thunk so you could write

 fill!(arr) do
   ChannVals()
 end


 That would have the desired effect as well, but it seems to me that using
 a comprehension is just as easy in that case.


 On Fri, May 16, 2014 at 4:21 PM, Ivar Nesje iva...@gmail.com wrote:

 @Jameson They are immutable, but they contain references to mutable
 arrays, and all the immutable types will reference the same arrays. That
 way you would not just need a copy but a deepcopy. That will probably be
 too much overhead for fill!(), and will be problematic if someone decided
 to fill! an array with some large structure.

 On the other hand, I think it would be reasonable for fill! to take a
 shallow copy of mutable types. Not sure what others think on that subject
 though.

 Ivar

 On Friday, May 16, 2014 at 17:01:56 UTC+2, Jameson wrote:

 Since they are immutable, fill! did exactly what you wanted

 On Friday, May 16, 2014, Tim Holy tim@gmail.com wrote:

 Try

 arr = [ChannVals() for i = 1:10]

 On Friday, May 16, 2014 01:27:18 AM Carlos Becker wrote:
  Hello all,
 
  I wanted to create an array of an immutable type and initialize an
 empty
  copy in each (with the default constructor).
  I am wondering which is the best way to do it, so far:
 
 immutable ChannVals
  taus::Vector{Float64}
  alphas::Vector{Float64}
 
  ChannVals() = new( Float64[], Float64[] )
 end
 
 # create 10 new instances
 arr = ChannVals[ChannVals() for i=1:10]
 
 
  Now, a neat but incorrect way is to do
 
 arr = Array( ChannVals, 10 )
 fill!(allVals, ChannVals())
 
  because it will fill them with the same instance.
  Is there a neat way, such as a fillwithcopies!() ?
 
 
  Cheers.
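The aliasing being discussed can be shown directly; a sketch in current Julia syntax, where `immutable` has since become `struct`:

```julia
struct ChannVals
    taus::Vector{Float64}
    alphas::Vector{Float64}
    ChannVals() = new(Float64[], Float64[])
end

arr1 = [ChannVals() for _ in 1:10]   # ten distinct instances
arr2 = fill(ChannVals(), 10)         # one instance, referenced ten times

push!(arr2[1].taus, 1.0)
length(arr2[2].taus)                 # == 1: every slot shares the same vectors
push!(arr1[1].taus, 1.0)
length(arr1[2].taus)                 # == 0: comprehension slots are independent
```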





[julia-users] tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
This is probably related to openblas, but it seems that tanh() is not 
multi-threaded, which rules out a considerable speed improvement.
MATLAB, for example, does multi-thread it, and gets around a 3x 
speed-up over the single-threaded version.

For example,

  x = rand(10,200);
  @time y = tanh(x);

yields:
  - 0.71 sec in Julia
  - 0.76 sec in matlab with -singleCompThread
  - and 0.09 sec in Matlab (this one uses multi-threading by default)

The good news is that julia (w/ openblas) is competitive with matlab's 
single-threaded version,
though setting the env variable OPENBLAS_NUM_THREADS doesn't have any 
effect on the timings, nor do I see higher CPU usage with 'top'.

Is there an override for OPENBLAS_NUM_THREADS in julia? What am I missing?
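For reference, in current Julia the element-wise loop can be multi-threaded directly with `Threads.@threads` (start julia with several threads, e.g. via the `JULIA_NUM_THREADS` environment variable). This is a sketch of the idea, not the API discussed later in the thread; `ptanh` is an illustrative name:

```julia
function ptanh(x::AbstractArray)
    y = similar(x)
    # split eachindex(x) across the available threads
    Threads.@threads for i in eachindex(x)
        @inbounds y[i] = tanh(x[i])
    end
    return y
end

x = rand(1000)
ptanh(x) == tanh.(x)    # same result, computed across threads
```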


[julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
forgot to add versioninfo():

julia> versioninfo()
Julia Version 0.3.0-prerelease+2921
Commit ea70e4d* (2014-05-07 17:56 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
  LAPACK: libopenblas
  LIBM: libopenlibm


On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker wrote:

 This is probably related to openblas, but it seems to be that tanh() is 
 not multi-threaded, which hinders a considerable speed improvement.
 For example, MATLAB does multi-thread it and gets something around 3x 
 speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76 sec in matlab with -singleCompThread
   - and 0.09 sec in Matlab (this one uses multi-threading by default)

 Good news is that julia (w/openblas) is competitive with matlab 
 single-threaded version,
 though setting the env variable OPENBLAS_NUM_THREADS doesn't have any 
 effect on the timings, nor I see higher CPU usage with 'top'.

 Is there an override for OPENBLAS_NUM_THREADS in julia? what am I missing?



[julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
now that I think about it, maybe openblas has nothing to do with this, since 
@which tanh(y) leads to a call to vectorize_1arg().

If that's the case, wouldn't it be advantageous to have a 
vectorize_1arg_openmp() function (defined in C/C++) that handles 
element-wise operations on scalar arrays, 
multi-threading them with OpenMP?


On Sunday, May 18, 2014 at 11:34:11 UTC+2, Carlos Becker wrote:

 forgot to add versioninfo():

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2921
 Commit ea70e4d* (2014-05-07 17:56 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: libopenblas
   LIBM: libopenlibm


 On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker wrote:

 This is probably related to openblas, but it seems to be that tanh() is 
 not multi-threaded, which hinders a considerable speed improvement.
 For example, MATLAB does multi-thread it and gets something around 3x 
 speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76 sec in matlab with -singleCompThread
   - and 0.09 sec in Matlab (this one uses multi-threading by default)

 Good news is that julia (w/openblas) is competitive with matlab 
 single-threaded version,
 though setting the env variable OPENBLAS_NUM_THREADS doesn't have any 
 effect on the timings, nor I see higher CPU usage with 'top'.

 Is there an override for OPENBLAS_NUM_THREADS in julia? what am I missing?



Re: [julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
Hi Tobias, I saw your pull request and have been following it closely, nice
work ;)

Though, in the case of element-wise matrix operations, like tanh, there is
no need for extra allocations, since the buffer should be allocated only
once.

From your first code snippet, is julia smart enough to pre-compute i*N/2 ?
In such cases, creating a kind of array view on the original data would
probably be faster, right? (though I don't know how allocations work here).

For vectorize_1arg_openmp, I was thinking of hard-coding it for known
operations such as trigonometric ones, that benefit a lot from
multi-threading.
I know this is a hack, but it is quick to implement and brings an amazing
speed-up (8x in the case of the code I posted above).




--
Carlos


On Sun, May 18, 2014 at 12:30 PM, Tobias Knopp
tobias.kn...@googlemail.com wrote:

 Hi Carlos,

 I am working on something that will allow to do multithreading on Julia
 functions (https://github.com/JuliaLang/julia/pull/6741). Implementing
 vectorize_1arg_openmp is actually a lot less trivial, as the Julia runtime
 is not thread safe (yet).

 Your example is great. I first got a slowdown of 10 because the example
 revealed a locking issue. With a little trick I now get a speedup of 1.75
 on a 2-core machine. Not too bad, taking into account that memory allocation
 cannot be parallelized.

 The tweaked code looks like

 function tanh_core(x, y, i)
     N = length(x)
     h = div(N, 2)  # integer half-length; 1:N/2 would create a float range
     for l = 1:h
         y[l + i*h] = tanh(x[l + i*h])
     end
 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 0:1, numthreads=numthreads)

 y

 end


 I actually want this to be also fast for


 function tanh_core(x,y,i)

 y[i] = tanh(x[i])

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 1:N, numthreads=numthreads)

 y

 end

 On Sunday, May 18, 2014 at 11:40:13 UTC+2, Carlos Becker wrote:

 now that I think about it, maybe openblas has nothing to do here, since
 @which tanh(y) leads to a call to vectorize_1arg().

 If that's the case, wouldn't it be advantageous to have a
 vectorize_1arg_openmp() function (defined in C/C++) that works for
 element-wise operations on scalar arrays,
 multi-threading with OpenMP?


 On Sunday, May 18, 2014 at 11:34:11 UTC+2, Carlos Becker wrote:

 forgot to add versioninfo():

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2921
 Commit ea70e4d* (2014-05-07 17:56 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: libopenblas
   LIBM: libopenlibm


 On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker wrote:

 This is probably related to openblas, but it seems to be that tanh() is
 not multi-threaded, which hinders a considerable speed improvement.
 For example, MATLAB does multi-thread it and gets something around 3x
 speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76 sec in matlab with -singleCompThread
   - and 0.09 sec in Matlab (this one uses multi-threading by default)

 Good news is that julia (w/openblas) is competitive with matlab
 single-threaded version,
 though setting the env variable OPENBLAS_NUM_THREADS doesn't have any
 effect on the timings, nor I see higher CPU usage with 'top'.

 Is there an override for OPENBLAS_NUM_THREADS in julia? what am I
 missing?




Re: [julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
btw, the code you just sent works as is with your pull request branch?


--
Carlos


On Sun, May 18, 2014 at 1:04 PM, Carlos Becker carlosbec...@gmail.com wrote:

 Hi Tobias, I saw your pull request and have been following it closely,
 nice work ;)

 Though, in the case of element-wise matrix operations, like tanh, there is
 no need for extra allocations, since the buffer should be allocated only
 once.

 From your first code snippet, is julia smart enough to pre-compute i*N/2 ?
 In such cases, creating a kind of array view on the original data would
 probably be faster, right? (though I don't know how allocations work here).

 For vectorize_1arg_openmp, I was thinking of hard-coding it for known
 operations such as trigonometric ones, that benefit a lot from
 multi-threading.
 I know this is a hack, but it is quick to implement and brings an amazing
 speed up (8x in the case of the code I posted above).




 --
 Carlos


 On Sun, May 18, 2014 at 12:30 PM, Tobias Knopp 
 tobias.kn...@googlemail.com wrote:

 Hi Carlos,

 I am working on something that will allow to do multithreading on Julia
 functions (https://github.com/JuliaLang/julia/pull/6741). Implementing
 vectorize_1arg_openmp is actually a lot less trivial as the Julia runtime
 is not thread safe (yet)

 Your example is great. I first got a slowdown of 10 because the example
 revealed a locking issue. With a little trick I now get a speedup of 1.75
 on a 2-core machine. Not too bad, taking into account that memory allocation
 cannot be parallelized.

 The tweaked code looks like

 function tanh_core(x,y,i)

 N=length(x)

 for l=1:N/2

   y[l+i*N/2] = tanh(x[l+i*N/2])

 end

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 0:1, numthreads=numthreads)

 y

 end


 I actually want this to be also fast for


 function tanh_core(x,y,i)

 y[i] = tanh(x[i])

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 1:N, numthreads=numthreads)

 y

 end

 On Sunday, May 18, 2014 at 11:40:13 UTC+2, Carlos Becker wrote:

 now that I think about it, maybe openblas has nothing to do here, since
 @which tanh(y) leads to a call to vectorize_1arg().

 If that's the case, wouldn't it be advantageous to have a
 vectorize_1arg_openmp() function (defined in C/C++) that works for
 element-wise operations on scalar arrays,
 multi-threading with OpenMP?


 On Sunday, May 18, 2014 at 11:34:11 UTC+2, Carlos Becker wrote:

 forgot to add versioninfo():

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2921
 Commit ea70e4d* (2014-05-07 17:56 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: libopenblas
   LIBM: libopenlibm


 On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker wrote:

 This is probably related to openblas, but it seems to be that tanh()
 is not multi-threaded, which hinders a considerable speed improvement.
 For example, MATLAB does multi-thread it and gets something around 3x
 speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76 sec in matlab with -singleCompThread
   - and 0.09 sec in Matlab (this one uses multi-threading by default)

 Good news is that julia (w/openblas) is competitive with matlab
 single-threaded version,
 though setting the env variable OPENBLAS_NUM_THREADS doesn't have any
 effect on the timings, nor I see higher CPU usage with 'top'.

 Is there an override for OPENBLAS_NUM_THREADS in julia? what am I
 missing?





Re: [julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
Sounds great!
I just gave it a try, and with 16 threads I get 0.07 sec, which is impressive.

That is when I tried it in isolated code. When put together with other
julia code I have, it segfaults. Have you experienced this as well?
 On May 18, 2014 at 16:05, Tobias Knopp tobias.kn...@googlemail.com
wrote:

 sure, the function is Base.parapply though. I had explicitly imported it.

 In the case of vectorize_1arg it would be great to automatically
 parallelize comprehensions. If someone could tell me where the actual
 looping happens, this would be great. I have not found that yet. Seems to
 be somewhere in the parser.

 Am Sonntag, 18. Mai 2014 14:30:49 UTC+2 schrieb Carlos Becker:

 btw, the code you just sent works as is with your pull request branch?


 --
 Carlos


 On Sun, May 18, 2014 at 1:04 PM, Carlos Becker carlos...@gmail.com wrote:

 Hi Tobias, I saw your pull request and have been following it closely,
 nice work ;)

 Though, in the case of element-wise matrix operations, like tanh, there
 is no need for extra allocations, since the buffer should be allocated only
 once.

 From your first code snippet, is julia smart enough to pre-compute i*N/2
 ?
 In such cases, creating a kind of array view on the original data would
 probably be faster, right? (though I don't know how allocations work here).

 For vectorize_1arg_openmp, I was thinking of hard-coding it for known
 operations such as trigonometric ones, that benefit a lot from
 multi-threading.
 I know this is a hack, but it is quick to implement and brings an
 amazing speed up (8x in the case of the code I posted above).




 --
 Carlos


 On Sun, May 18, 2014 at 12:30 PM, Tobias Knopp tobias...@googlemail.com
  wrote:

 Hi Carlos,

 I am working on something that will allow to do multithreading on Julia
 functions (https://github.com/JuliaLang/julia/pull/6741). Implementing
 vectorize_1arg_openmp is actually a lot less trivial as the Julia runtime
 is not thread safe (yet)

 Your example is great. I first got a slowdown of 10 because the example
 revealed a locking issue. With a little trick I now get a speedup of 1.75
 on a 2-core machine. Not too bad, taking into account that memory allocation
 cannot be parallelized.

 The tweaked code looks like

 function tanh_core(x,y,i)

 N=length(x)

 for l=1:N/2

   y[l+i*N/2] = tanh(x[l+i*N/2])

 end

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 0:1, numthreads=numthreads)

 y

 end


 I actually want this to be also fast for


 function tanh_core(x,y,i)

 y[i] = tanh(x[i])

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 1:N, numthreads=numthreads)

 y

 end

 On Sunday, May 18, 2014 at 11:40:13 UTC+2, Carlos Becker wrote:

 now that I think about it, maybe openblas has nothing to do here,
 since @which tanh(y) leads to a call to vectorize_1arg().

 If that's the case, wouldn't it be advantageous to have a
 vectorize_1arg_openmp() function (defined in C/C++) that works for
 element-wise operations on scalar arrays,
 multi-threading with OpenMP?


 On Sunday, May 18, 2014 at 11:34:11 UTC+2, Carlos Becker wrote:

 forgot to add versioninfo():

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2921
 Commit ea70e4d* (2014-05-07 17:56 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: libopenblas
   LIBM: libopenlibm


 On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker wrote:

 This is probably related to openblas, but it seems to be that tanh()
 is not multi-threaded, which hinders a considerable speed improvement.
 For example, MATLAB does multi-thread it and gets something around
 3x speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76 sec in matlab with -singleCompThread
   - and 0.09 sec in Matlab (this one uses multi-threading by default)

 Good news is that julia (w/openblas) is competitive with matlab
 single-threaded version,
 though setting the env variable OPENBLAS_NUM_THREADS doesn't have
 any effect on the timings, nor I see higher CPU usage with 'top'.

 Is there an override for OPENBLAS_NUM_THREADS in julia? what am I
 missing?






Re: [julia-users] Re: tanh() speed / multi-threading

2014-05-18 Thread Carlos Becker
Great to see that Tobias' PR rocks ;)

I am still getting a weird segfault, and cannot reproduce it when reduced to
simpler code.
I will keep working on it, and post it as soon as I nail it.

Tobias: any pointer towards possible incompatibilities of the current state
of the PR?

thanks.


--
Carlos


On Sun, May 18, 2014 at 5:26 PM, Tobias Knopp
tobias.kn...@googlemail.com wrote:

 And I am pretty excited that it seems to scale so well on your setup. I
 have only 2 cores so could not see if it scales to more cores.

 On Sunday, May 18, 2014 at 16:40:18 UTC+2, Tobias Knopp wrote:

 Well, when I started I got segfaults all the time :-)

 Could you please send me a minimal code example that segfaults? This
 would be great! This is the only way we can get this stable.

 On Sunday, May 18, 2014 at 16:35:47 UTC+2, Carlos Becker wrote:

 Sounds great!
 I just gave it a try, and with 16 threads I get 0.07sec which is
 impressive.

 That is when I tried it in isolated code. When put together with other
 julia code I have, it segfaults. Have you experienced this as well?
  On May 18, 2014 at 16:05, Tobias Knopp tobias...@googlemail.com
 wrote:

 sure, the function is Base.parapply though. I had explicitly imported
 it.

 In the case of vectorize_1arg it would be great to automatically
 parallelize comprehensions. If someone could tell me where the actual
 looping happens, this would be great. I have not found that yet. Seems to
 be somewhere in the parser.

 On Sunday, May 18, 2014 at 14:30:49 UTC+2, Carlos Becker wrote:

 btw, the code you just sent works as is with your pull request branch?


 --
 Carlos


 On Sun, May 18, 2014 at 1:04 PM, Carlos Becker carlos...@gmail.com wrote:

 Hi Tobias, I saw your pull request and have been following it
 closely, nice work ;)

 Though, in the case of element-wise matrix operations, like tanh,
 there is no need for extra allocations, since the buffer should be
 allocated only once.

 From your first code snippet, is julia smart enough to pre-compute
 i*N/2 ?
 In such cases, creating a kind of array view on the original data
 would probably be faster, right? (though I don't know how allocations 
 work
 here).

 For vectorize_1arg_openmp, I was thinking of hard-coding it for
 known operations such as trigonometric ones, that benefit a lot from
 multi-threading.
 I know this is a hack, but it is quick to implement and brings an
 amazing speed up (8x in the case of the code I posted above).




 --
 Carlos


 On Sun, May 18, 2014 at 12:30 PM, Tobias Knopp 
 tobias...@googlemail.com wrote:

 Hi Carlos,

 I am working on something that will allow to do multithreading on
 Julia functions (https://github.com/JuliaLang/julia/pull/6741).
 Implementing vectorize_1arg_openmp is actually a lot less trivial as the
 Julia runtime is not thread safe (yet)

 Your example is great. I first got a slowdown of 10 because the
 example revealed a locking issue. With a little trick I now get a speedup
 of 1.75 on a 2-core machine. Not too bad, taking into account that memory
 allocation cannot be parallelized.

 The tweaked code looks like

 function tanh_core(x,y,i)

 N=length(x)

 for l=1:N/2

   y[l+i*N/2] = tanh(x[l+i*N/2])

 end

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 0:1, numthreads=numthreads)

 y

 end


 I actually want this to be also fast for


 function tanh_core(x,y,i)

 y[i] = tanh(x[i])

 end


 function ptanh(x;numthreads=2)

 y = similar(x)

 N = length(x)

 parapply(tanh_core,(x,y), 1:N, numthreads=numthreads)

 y

 end

 On Sunday, May 18, 2014 at 11:40:13 UTC+2, Carlos Becker wrote:

 now that I think about it, maybe openblas has nothing to do here,
 since @which tanh(y) leads to a call to vectorize_1arg().

 If that's the case, wouldn't it be advantageous to have a
 vectorize_1arg_openmp() function (defined in C/C++) that works for
 element-wise operations on scalar arrays,
 multi-threading with OpenMP?


 On Sunday, May 18, 2014 at 11:34:11 UTC+2, Carlos Becker
 wrote:

 forgot to add versioninfo():

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2921
 Commit ea70e4d* (2014-05-07 17:56 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: libopenblas
   LIBM: libopenlibm


 On Sunday, May 18, 2014 at 11:33:45 UTC+2, Carlos Becker
 wrote:

 This is probably related to openblas, but it seems to be that
 tanh() is not multi-threaded, which hinders a considerable speed
 improvement.
 For example, MATLAB does multi-thread it and gets something
 around 3x speed-up over the single-threaded version.

 For example,

   x = rand(10,200);
   @time y = tanh(x);

 yields:
   - 0.71 sec in Julia
   - 0.76

[julia-users] fill! with copies

2014-05-16 Thread Carlos Becker
Hello all,

I wanted to create an array of an immutable type and initialize an empty 
instance in each slot (with the default constructor).
I am wondering which is the best way to do it; so far:

   immutable ChannVals
taus::Vector{Float64}
alphas::Vector{Float64}

ChannVals() = new( Float64[], Float64[] )
   end

   # create 10 new instances
   arr = ChannVals[ChannVals() for i=1:10]


Now, a neat but incorrect way is to do

   arr = Array( ChannVals, 10 )
   fill!(allVals, ChannVals())

because it will fill them with the same instance.
Is there a neat way, such as a fillwithcopies!() ?


Cheers.


Re: [julia-users] fill! with copies

2014-05-16 Thread Carlos Becker
Correction: 'allVals' should be 'arr' in the last line of code.


--
Carlos


On Fri, May 16, 2014 at 10:27 AM, Carlos Becker carlosbec...@gmail.com wrote:

 Hello all,

 I wanted to create an array of an immutable type and initialize an empty
 copy in each (with the default constructor).
 I am wondering which is the best way to do it, so far:

immutable ChannVals
 taus::Vector{Float64}
 alphas::Vector{Float64}

  ChannVals() = new( Float64[], Float64[] )
end

# create 10 new instances
arr = ChannVals[ChannVals() for i=1:10]


 Now, a neat but incorrect way is to do

arr = Array( ChannVals, 10 )
fill!(allVals, ChannVals())

 because it will fill them with the same instance.
 Is there a neat way, such as a fillwithcopies!() ?


 Cheers.



Re: [julia-users] Re: Simulating fixed-size arrays

2014-05-08 Thread Carlos Becker
Thanks, that's great!

Maybe this should make it into upstream julia.
Fixed-size arrays are essential for good performance and compact memory
usage.


--
Carlos


On Wed, May 7, 2014 at 5:46 PM, Tobias Knopp tobias.kn...@googlemail.com wrote:

 see https://github.com/JuliaLang/julia/issues/5857. I have been using
 ImmutableArrays.jl which works fine.

 On Wednesday, May 7, 2014 at 17:12:04 UTC+2, Carlos Becker wrote:

 Hello, I am trying to find out the best way to deal with immutables (or
 types) that contain fixed-size arrays, such as this:

  # should have a variable number of Uint64's
  immutable Descriptor
      a1::Uint64
      a2::Uint64
      a3::Uint64
      a4::Uint64
  end

  # should work with a variable-size Descriptor
  function myDist(x1::Descriptor, x2::Descriptor)  # :inline
      return count_ones(x1.a1 $ x2.a1) + count_ones(x1.a2 $ x2.a2) +
             count_ones(x1.a3 $ x2.a3) + count_ones(x1.a4 $ x2.a4)
  end



 In this specific case, Descriptor is 32-bytes wide, but I would like to
 make the code generic for different number of elements in the immutable.
 If fixed-size arrays were available, this would be very easy, but which
 is a neat way of doing this with the current julia v0.3?

 btw, the code above runs amazingly fast, using popcount instructions when
 available, almost c-speed ;)
 This is good news for Julia indeed.

 Thanks.
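
The variable-size version can be sketched with metaprogramming until fixed-size arrays land in Base (a hedged sketch in v0.3-era syntax; `Descriptor4` and the field names are made up for illustration):

```julia
# Generate an immutable with N Uint64 fields a1..aN via @eval at top level.
N = 4
fields = [Expr(:(::), symbol("a$i"), :Uint64) for i in 1:N]
@eval immutable Descriptor4
    $(fields...)
end

names(Descriptor4)   # field names a1..a4
```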




[julia-users] Re: Memory allocation for vectorized operations

2014-04-29 Thread Carlos Becker
I just saw another part of your message; I am also wondering why memory 
consumption is so high.

On Tuesday, April 29, 2014 11:31:09 UTC+2, Carlos Becker wrote:

 This is likely to be because Julia is creating temporaries. This is 
 probably why you get increasing memory usage when increasing array size.

 This is a long topic, that will have to be solved (hopefully soon), I had 
 a previous question related to something similar here: 
 https://groups.google.com/d/topic/julia-users/Pbrm9Nn9fWc/discussion


 On Tuesday, April 29, 2014 08:05:17 UTC+2, John Aslanides wrote:

 I'm aware that evaluating a vectorized operation (say, an elementwise 
 product of two arrays) will result in the allocation of a temporary array. 
 I'm surprised, though, at just how much memory this seems to consume in 
 practice -- unless there's something I'm not understanding. Here is an 
 extreme example:

 julia a = rand(2); b = rand(2);

 julia @time a .*= b;
 elapsed time: 0.505942281 seconds (11612212 bytes allocated)

 julia @time a .*= b;
 elapsed time: 1.4177e-5 seconds (800 bytes allocated)

 julia @time a .*= b;
 elapsed time: 2.5334e-5 seconds (800 bytes allocated)

 800 bytes seems like a lot of overhead given that a and b are both only 
 16 bytes each. Of course, this overhead (whatever it is) becomes 
 comparatively less significant as we move to larger arrays, but it's still 
 sizeable:

 julia a = rand(20); b = rand(20);

 julia @time a.*= b;
 elapsed time: 1.4162e-5 seconds (944 bytes allocated)

 julia @time a.*= b;
 elapsed time: 2.3754e-5 seconds (944 bytes allocated)

 Can someone explain what's going on here?



[julia-users] Re: Memory allocation for vectorized operations

2014-04-29 Thread Carlos Becker
Besides Julia internals, I suppose there is memory overhead in terms of the 
structure holding the array itself (when temporaries are created).
I suppose an array isn't just the size in bytes of the data it holds, but 
also information about its size/type/etc. Though I doubt that would add up 
to 800 bytes, it could explain part of it.

I wonder if there is a way within julia to know the 'real' size of a julia 
object.

On Tuesday, April 29, 2014 11:32:21 UTC+2, Carlos Becker wrote:

 I just saw another part of your message, I am wondering also why memory 
 consumption is so high.

 On Tuesday, April 29, 2014 11:31:09 UTC+2, Carlos Becker wrote:

 This is likely to be because Julia is creating temporaries. This is 
 probably why you get increasing memory usage when increasing array size.

 This is a long topic, that will have to be solved (hopefully soon), I had 
 a previous question related to something similar here: 
 https://groups.google.com/d/topic/julia-users/Pbrm9Nn9fWc/discussion


 On Tuesday, April 29, 2014 08:05:17 UTC+2, John Aslanides wrote:

 I'm aware that evaluating a vectorized operation (say, an elementwise 
 product of two arrays) will result in the allocation of a temporary array. 
 I'm surprised, though, at just how much memory this seems to consume in 
 practice -- unless there's something I'm not understanding. Here is an 
 extreme example:

 julia a = rand(2); b = rand(2);

 julia @time a .*= b;
 elapsed time: 0.505942281 seconds (11612212 bytes allocated)

 julia @time a .*= b;
 elapsed time: 1.4177e-5 seconds (800 bytes allocated)

 julia @time a .*= b;
 elapsed time: 2.5334e-5 seconds (800 bytes allocated)

 800 bytes seems like a lot of overhead given that a and b are both only 
 16 bytes each. Of course, this overhead (whatever it is) becomes 
 comparatively less significant as we move to larger arrays, but it's still 
 sizeable:

 julia a = rand(20); b = rand(20);

 julia @time a.*= b;
 elapsed time: 1.4162e-5 seconds (944 bytes allocated)

 julia @time a.*= b;
 elapsed time: 2.3754e-5 seconds (944 bytes allocated)

 Can someone explain what's going on here?
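
The temporaries can be avoided entirely by writing the update as an explicit loop inside a function (a hedged sketch; `inplace_mul!` is a made-up name):

```julia
# Multiply a by b element-wise, in place: no intermediate array is allocated.
function inplace_mul!(a::Vector{Float64}, b::Vector{Float64})
    length(a) == length(b) || error("length mismatch")
    @inbounds for i in 1:length(a)
        a[i] *= b[i]
    end
    return a
end

a = rand(20); b = rand(20);
@time inplace_mul!(a, b)   # after warm-up: essentially zero allocation
```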



Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error

2014-04-27 Thread Carlos Becker
I agree with Elliot, take a look at the performance tips.
Also, you may want to move the tic()/toc() out of the function, make sure 
you compile it first, and then use @time on the function call to time it.

You may also get a considerable boost by using @simd in your for loops 
(together with @inbounds)
Let us know how it goes ;)

cheers.


On Sunday, April 27, 2014 09:39:03 UTC+2, Freddy Chua wrote:

 Alright, thanks! All this is looking very positive for Julia.

 On Sunday, April 27, 2014 3:36:23 PM UTC+8, Elliot Saba wrote:

 I highly suggest you read through the whole Performance 
 Tipshttp://julia.readthedocs.org/en/latest/manual/performance-tips/ 
 page I linked to above; it has documentation on all these little features 
 and stuff.  I did get a small improvement (~5%) by enabling SIMD extensions 
 on the two inner for loops, but that requires a very recent build of Julia 
 and is a somewhat experimental feature.  Neat to have though.
 -E


 On Sun, Apr 27, 2014 at 12:14 AM, Freddy Chua fred...@gmail.com wrote:

 wooh, this @inbounds thing is new to me... At least it does show that 
 Julia is comparable to Java.


 On Sunday, April 27, 2014 3:04:26 PM UTC+8, Elliot Saba wrote:

 Since we have made sure that our for loops have the right boundaries, 
 we can assure the compiler that we're not going to step out of the bounds 
 of an array, and surround our code in the @inbounds macro.  This is not 
 something you should do unless you're certain that you'll never try to 
 access memory out of bounds, but it does get the runtime down to 0.23 
 seconds, which is on the same order as Java.  Here's the full 
 codehttps://gist.github.com/staticfloat/11339342with all the 
 modifications made.
 -E


 On Sat, Apr 26, 2014 at 11:55 PM, Freddy Chua fred...@gmail.comwrote:

 Stochastic Gradient Descent is one of the most important optimisation 
 algorithm in Machine Learning. So having it perform better than Java is 
 important to have more widespread adoption.


 On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote:

 This code takes 60+ secs to execute on my machine. The Java 
 equivalent takes only 0.2 secs!!! Please tell me how to optimise the 
 following code:

 begin
   N = 1
   K = 100
   rate = 1e-2
   ITERATIONS = 1

   # generate y
   y = rand(N)

   # generate x
   x = rand(K, N)

   # generate w
   w = zeros(Float64, K)

   tic()
   for i=1:ITERATIONS
 for n=1:N
   y_hat = 0.0
   for k=1:K
 y_hat += w[k] * x[k,n]
   end

   for k=1:K
 w[k] += rate * (y[n] - y_hat) * x[k,n]   
   end
 end
   end
   toc()
 end

 Sorry for repeated posting, I did so to properly indent the code..
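
Putting the thread's suggestions together, the hot loops wrapped in a function with @inbounds look roughly like this (a sketch; @simd can additionally be placed on the inner loops on recent builds):

```julia
function sgd!(w, x, y, rate, iterations)
    K, N = size(x)
    for it in 1:iterations, n in 1:N
        y_hat = 0.0
        @inbounds for k in 1:K
            y_hat += w[k] * x[k, n]
        end
        step = rate * (y[n] - y_hat)
        @inbounds for k in 1:K
            w[k] += step * x[k, n]
        end
    end
    return w
end

# Call once to compile, then time the second call:
# sgd!(w, x, y, 1e-2, 1); @time sgd!(w, x, y, 1e-2, ITERATIONS)
```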





[julia-users] yet another julia presentation

2014-04-07 Thread Carlos Becker
Hi all,

just to let you know that I gave a presentation two weeks ago about Julia, 
and the slides are available online here: 
https://sites.google.com/site/carlosbecker/a-few-notes/julia-intro.pdf?attredirects=0d=1 
together with an IJulia notebook: 
https://sites.google.com/site/carlosbecker/a-few-notes/julia-intro.ipynb?attredirects=0d=1 
(pdf: https://sites.google.com/site/carlosbecker/a-few-notes/slides-rfandboosting.pdf?attredirects=0d=1 )

I found an example from Tim Salimans very interesting, where the Julia 
version is faster than even the equivalent C++ code *when using GSL* (because 
for some reason random sampling in GSL is slower than Julia's).
You can find the example in the link above.

I also found an interesting tutorial that may be worth adding to 
julialang.org: http://learnxinyminutes.com/docs/julia/


Cheers.


Re: [julia-users] Re: yet another julia presentation

2014-04-07 Thread Carlos Becker
Hi John, thanks!

On Mon, Apr 7, 2014 at 10:42 PM, John Eric Humphries 
johnerichumphr...@gmail.com wrote:

 Thanks Carlos, This is pretty interesting and I enjoyed reading through
 your ensemble code. I also really like the side-by-side comparisons. A site
 which could provide many of these with a clear layout could be pretty
 useful (and convincing) for those of us coming from R/python/matlab.


It would be great to set up a website to do many examples like that, though
I don't have the time to do so now, but I would happily contribute to it if
someone builds such a website.


(on a side note, what did you make your slides with?)


I used LibreOffice Impress, with some nice fonts like Linux Libertine.

Cheers.


Re: [julia-users] Overriding type promotion

2014-04-04 Thread Carlos Becker
Thanks Tim, that makes total sense, though I was thinking of expressing
this in a more matlab-ish way.

How about defining a macro to override type promotion, similar to @inbounds?

@nopromote b = A / uint8(2)

I would like something shorter, but we could decide on the exact name later.
Does this make sense?


--
Carlos


On Fri, Apr 4, 2014 at 11:40 AM, Tim Holy tim.h...@gmail.com wrote:

 This doesn't address your bigger question, but for truncating division by
 n I
 usually use b[i] = div(A[i], convert(eltype(A), n)). For the particular
 case
 of dividing by 2, an even better choice is b[i] = A[i] >> 1.

 --Tim

 On Friday, April 04, 2014 02:09:24 AM Carlos Becker wrote:
  I've seen previous posts in this list about this, but either I missed
 some
  of them or this particular issue is not addressed. I apologize if it is
 the
  former.
 
  I have a Uint8 array and I want to divide (still in Uint8) by 2. This is
  very handy when dealing with large images: no need to use more memory
 than
  needed.
  So, for example:
 
  A = rand(Uint8, (100,100));   # simulated image
 
  b = A / uint8(2)
 
  typeof( b )  # == returns Array{Float32,2}
 
 
  I understand why one may want that, but is there a way to override it and
  do the plain, element-wise uint8-by-uint8 division?
  (ie ignore promotion rules)
 
  Thanks.
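
Concretely, the div/shift suggestions above look like this element-wise (a hedged sketch in v0.3-era syntax; truncating integer division preserves the element type, so the result stays Uint8):

```julia
A = rand(Uint8, (100, 100))   # simulated image

# div is truncating integer division, so the elements stay Uint8:
b = map(a -> div(a, uint8(2)), A)

# For division by powers of two, a bit shift is cheaper still:
c = similar(A)
@inbounds for i in 1:length(A)
    c[i] = A[i] >> 1
end
```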



Re: [julia-users] Overriding type promotion

2014-04-04 Thread Carlos Becker
I don't see that as a viable option either.
I am trying to find out which other operations would do such promotion, but
so far the relevant one seems to be only division; that I can handle.

Now, in terms of hypothetical workarounds, what about having a macro to
override type promotion? Would that make sense? I don't know Julia
internals, but would this be difficult to implement?
Of course, in such case operations must happen between elements of the same
(scalar) type, otherwise it should throw an error.


--
Carlos


On Fri, Apr 4, 2014 at 5:42 PM, Stefan Karpinski ste...@karpinski.orgwrote:

 The change you want would be a one-line change:

 https://github.com/JuliaLang/julia/blob/master/base/int.jl#L50

 However, that change would affect all code using division of integers,
 which seems likely to wreak havoc. As others have pointed out, the operator
 for truncated integer division is div; the operator for floored integer
 division is fld.


 On Fri, Apr 4, 2014 at 5:09 AM, Carlos Becker carlosbec...@gmail.comwrote:

 I've seen previous posts in this list about this, but either I missed
 some of them or this particular issue is not addressed. I apologize if it
 is the former.

 I have a Uint8 array and I want to divide (still in Uint8) by 2. This is
 very handy when dealing with large images: no need to use more memory than
 needed.
 So, for example:

 A = rand(Uint8, (100,100));   # simulated image

 b = A / uint8(2)

 typeof( b )  # == returns Array{Float32,2}


 I understand why one may want that, but is there a way to override it and
 do the plain, element-wise uint8-by-uint8 division?
 (ie ignore promotion rules)

 Thanks.





[julia-users] show non-local function variables

2014-03-13 Thread Carlos Becker
Hello,

When writing functions, it can happen that one accidentally refers to a 
global variable, for example:

function test( x1, y1 )
  return x2 + y1  # typo, x2 instead of x1, tries to locate global var x2
end

so if x2 exists as a global variable, the typo will go unnoticed and the 
bug may be very hard to spot.

I have two questions:
  1) is there a way to search for possible global variables in a function? 
This would be a great sanity check tool.
  2) wouldn't it be desirable to have a modifier to force global variables 
to have the 'global' prefix?
  like @explicitglobals function test(x1,x2) ...

(I found a previous discussion about pure functions, but I think this is 
more specific to global/local variables)

Thanks.


Re: [julia-users] Re: show non-local function variables

2014-03-13 Thread Carlos Becker
On Thu, Mar 13, 2014 at 9:55 AM, Ivar Nesje iva...@gmail.com wrote:

 This has been discussed in
 https://groups.google.com/forum/#!searchin/julia-users/typecheck/julia-users/hTQ2KI1aaTc/fqjq-1n_ax8J


Thanks, I totally missed it, I will follow that thread instead.
Cheers.


Re: [julia-users] Re: norm() strangeness

2014-03-04 Thread Carlos Becker
Hi Andreas,

I understand, though from your email it sounds as if this was an
unintended behaviour.

It is still very error-prone. In such cases Matlab returns 6, which is
wrong from a matrix viewpoint, but probably closer to what people typically
expect.
Numpy has the same behavior. The latter is very well explained here
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html

The Matrix vs Vector strictness in norm() right now is interesting, though
there is a trade-off between what users generally expect of 1xN arrays
and what they technically are.

Another option is to add a vecnorm() function, explicitly to treat the
argument as a vector, which would be undefined for other matrix sizes.

Cheers.



--
Carlos


On Tue, Mar 4, 2014 at 9:48 AM, Andreas Noack Jensen 
andreasnoackjen...@gmail.com wrote:

 In master Julia the result is 3. I must have changed that when I
 reorganised the norm code. I think that 3 is the right answer anyway. In
 Julia [1 2 3] is a 1x3 matrix, i.e. not a vector. If you want it to behave
 as a vector, make it a vector. This causes some confusion when coming from
 MATLAB, but it is only in the transition until you get used to `Vector`s. I
 also think that having matrix and vector norm in the same function is the
 natural solution in Julia where so much is about dispatch.


 2014-03-04 1:45 GMT+01:00 Miguel Bazdresch eorli...@gmail.com:

 In Julia 0.2.1, I get 6.0.

 -- mb


 On Mon, Mar 3, 2014 at 7:22 PM, Carlos Becker carlosbec...@gmail.comwrote:

 My mistake there, I meant the L1 norm, re-typed:

 -
 X= [[1 2 3],[4 5 6]]

 # now, X[1,:] is 1x3 array, containing 1 2 3

 # but let's peek at its L1-norm:
 norm( X[1,:], 1 )   #  -- we get 3, where I would expect 6 (1+2+3)
 -

 can you try that on v0.2? I am on 0.3 from upstream.


 --
 Carlos


 On Tue, Mar 4, 2014 at 1:19 AM, Patrick O'Leary 
 patrick.ole...@gmail.com wrote:

 This is odd, as I get norm() working just fine with any of a row,
 column, or vector, and all getting exactly the same result of 3.741...
 (v0.2.0, on julia.forio.com, since it's quick for me to get to). Note
 that it will return the L2 norm by default, exactly as MATLAB does.
 Supplying a second argument with p in it (norm([1 2 3], 1)) will return the
 p-norm, exactly like MATLAB.


 On Monday, March 3, 2014 6:12:53 PM UTC-6, Carlos Becker wrote:

 Hello all,

 today I fought for an hour with a very simple piece of code, of the
 kind:

 -
 X= [[1 2 3],[4 5 6]]

 # now, X[1,:] is 1x3 array, containing 1 2 3

 # but let's peek at its L1-norm:
 norm( X[1,:] )   #  -- we get 3, where I would expect 6 (1+2+3)
 -

 I believe this comes back to the 'how 1xN matrices should be handled'.
 The point is that the current behaviour is totally non-intuitive for
 someone coming from Matlab,
 and having matrix and vector norms in the same function hides this (in
 this case) unwanted behavior.

 I am not sure what is the right way to deal with this, but seems like
 a hard wall that more than one
 will hit when coming from matlab-like backgrounds.

 Cheers.






 --
 Best regards,

 Andreas Noack Jensen
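
For reference, the row-slice pitfall and the explicit-vector workaround side by side (a sketch; on master the row slice is treated as a matrix, while v0.2 treated it as a vector, as discussed above):

```julia
X = [1 2 3; 4 5 6]

# X[1,:] is a 1x3 matrix, so on master norm(X[1,:], 1) is the matrix
# 1-norm (3), whereas v0.2 returned the vector L1-norm (6.0).
norm(vec(X[1, :]), 1)   # vec() makes the intent explicit: L1 vector norm, 6.0
```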



Re: [julia-users] Re: norm() strangeness

2014-03-04 Thread Carlos Becker
On Tue, Mar 4, 2014 at 11:02 AM, Andreas Noack Jensen 
andreasnoackjen...@gmail.com wrote:


 makes really good sense. The distinction between arrays and and matrices
 in Numpy has been confusing to me, but actually it appear that Numpy agrees
 with how Julia is doing it right now

 In [1]: import numpy as np
 In [2]: np.linalg.norm(np.matrix([1,2,3]),1)
 Out[2]: 3
 In [3]: np.linalg.norm(np.matrix([1,2,3]).transpose(),1)
 Out[3]: 6


That is correct, but the other difference with numpy is that np.array and
np.matrix are not aliases (in julia, AFAIK, Matrix is equivalent to
Array{T,2}).
What this means is the following:

X = np.array([[1,2,3],[4,5,6]])
M = np.matrix([[1,2,3],[4,5,6]])

# but norm() behaves differently for each case:

np.linalg.norm(X[0,:],1)   # -- 6
np.linalg.norm(M[0,:],1)  # -- 3


therefore this opens another question: shall a matrix be conceptually the
same as a 2D array?
It is not only numpy that takes it differently, but also Eigen for example.

in such case, we would need

norm(Vector) - vector norm
norm(Matrix) - matrix norm
norm(Array)  - maybe undefined, or vector if 1xN or Nx1 array, matrix norm
otherwise


Things get trickier then; I don't know which is the best solution, but
there will always be a trade-off.


Re: [julia-users] Re: norm() strangeness

2014-03-04 Thread Carlos Becker
I agree with Toivo's proposal.

Introducing vecnorm() would make code and behavior clearer, and at the same
time avoid the problems with the generic norm() function.
Also, since vecnorm() calls norm(), we only have to care about maintaining
the latter.

cheers.



--
Carlos


On Tue, Mar 4, 2014 at 1:05 PM, Toivo Henningsson toivo@gmail.comwrote:

 I think that this is an unpleasant gotcha right now, when I started
 reading this post I still thought that the matrix norm of a row/column
 vector would reproduce the vector norm.
 Because of multiple dispatch, we go to exceptional lengths in Julia to
 make sure to only overload the same operation on different types, not to
 create functions that do different conceptual operations based on the type.

 So I think that the heart of the matter is to settle whether the vector
 norm and matrix norm are the same operation or not. (As long as we are
 talking about row vectors, I still think that they are, right?)

 Perhaps it would be enough to leave norm as it is and introduce

 vecnorm(x::AbstractVector, args...) = norm(x, args...)

 and possibly a method that tries to discover a row/column vector disguised
 as a matrix. Or we could just have (something equivalent to)

 vecnorm(x::AbstractVector, args...) = norm(x[:], args...)

 which would allow you to treat any array as a vector for the purposes of
 norm computation.

 On Tuesday, 4 March 2014 09:48:05 UTC+1, Andreas Noack Jensen wrote:

 In master Julia the result is 3. I must have changed that when I
 reorganised the norm code. I think that 3 is the right answer anyway. In
 Julia [1 2 3] is a 1x3 matrix, i.e. not a vector. If you want it to behave
 as a vector, make it a vector. This causes some confusion when coming from
 MATLAB, but it is only in the transition until you get used to `Vector`s. I
 also think that having matrix and vector norm in the same function is the
 natural solution in Julia where so much is about dispatch.


 2014-03-04 1:45 GMT+01:00 Miguel Bazdresch eorl...@gmail.com:

  In Julia 0.2.1, I get 6.0.

 -- mb


 On Mon, Mar 3, 2014 at 7:22 PM, Carlos Becker carlos...@gmail.comwrote:

 My mistake there, I meant the L1 norm, re-typed:

 -
 X= [[1 2 3],[4 5 6]]

 # now, X[1,:] is 1x3 array, containing 1 2 3

 # but let's peek at its L1-norm:
 norm( X[1,:], 1 )   #  -- we get 3, where I would expect 6 (1+2+3)
 -

 can you try that on v0.2? I am on 0.3 from upstream.


 --
 Carlos


 On Tue, Mar 4, 2014 at 1:19 AM, Patrick O'Leary 
 patrick...@gmail.comwrote:

 This is odd, as I get norm() working just fine with any of a row,
 column, or vector, and all getting exactly the same result of 3.741...
 (v0.2.0, on julia.forio.com, since it's quick for me to get to). Note
 that it will return the L2 norm by default, exactly as MATLAB does.
 Supplying a second argument with p in it (norm([1 2 3], 1)) will return 
 the
 p-norm, exactly like MATLAB.


 On Monday, March 3, 2014 6:12:53 PM UTC-6, Carlos Becker wrote:

 Hello all,

 today I fought for an hour with a very simple piece of code, of the
 kind:

 -
 X= [[1 2 3],[4 5 6]]

 # now, X[1,:] is 1x3 array, containing 1 2 3

 # but let's peek at its L1-norm:
 norm( X[1,:] )   #  -- we get 3, where I would expect 6 (1+2+3)
 -

 I believe this comes back to the 'how 1xN matrices should be handled'.
 The point is that the current behaviour is totally non-intuitive for
 someone coming from Matlab,
 and having matrix and vector norms in the same function hides this
 (in this case) unwanted behavior.

 I am not sure what is the right way to deal with this, but seems like
 a hard wall that more than one
 will hit when coming from matlab-like backgrounds.

 Cheers.






 --
 Best regards,

 Andreas Noack Jensen




Re: [julia-users] Re: norm() strangeness

2014-03-03 Thread Carlos Becker
My mistake there, I meant the L1 norm, re-typed:

-
X= [[1 2 3],[4 5 6]]

# now, X[1,:] is 1x3 array, containing 1 2 3

# but let's peek at its L1-norm:
norm( X[1,:], 1 )   #  -- we get 3, where I would expect 6 (1+2+3)
-

can you try that on v0.2? I am on 0.3 from upstream.


--
Carlos


On Tue, Mar 4, 2014 at 1:19 AM, Patrick O'Leary patrick.ole...@gmail.comwrote:

 This is odd, as I get norm() working just fine with any of a row, column,
 or vector, and all getting exactly the same result of 3.741... (v0.2.0, on
 julia.forio.com, since it's quick for me to get to). Note that it will
 return the L2 norm by default, exactly as MATLAB does. Supplying a second
 argument with p in it (norm([1 2 3], 1)) will return the p-norm, exactly
 like MATLAB.


 On Monday, March 3, 2014 6:12:53 PM UTC-6, Carlos Becker wrote:

 Hello all,

 today I fought for an hour with a very simple piece of code, of the kind:

 -
 X= [[1 2 3],[4 5 6]]

 # now, X[1,:] is 1x3 array, containing 1 2 3

 # but let's peek at its L1-norm:
 norm( X[1,:] )   #  -- we get 3, where I would expect 6 (1+2+3)
 -

 I believe this comes back to the 'how 1xN matrices should be handled'.
 The point is that the current behaviour is totally non-intuitive for
 someone coming from Matlab,
 and having matrix and vector norms in the same function hides this (in
 this case) unwanted behavior.

 I am not sure what is the right way to deal with this, but seems like a
 hard wall that more than one
 will hit when coming from matlab-like backgrounds.

 Cheers.




[julia-users] Creating array from array type

2014-02-21 Thread Carlos Becker
Hello,

this looks like a naive question, but I cannot find my way through it.

I defined a typealias, like

   typealias IdxListType Array{Int64,1}

which I want to initialize empty, and then add elements with push!().

My question is: how do I create an empty array of type IdxListType?

I know I can do Array( Int64, 0 ), but that doesn't use the typealias I 
defined,
and IdxListType() is not defined either.


Thanks in advance,
Carlos
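
One way (a sketch, v0.3-era syntax) is to derive the element type from the alias itself, so the alias stays the single point of definition:

```julia
typealias IdxListType Array{Int64,1}

lst = Array(eltype(IdxListType), 0)   # empty Vector{Int64}
push!(lst, 42)
length(lst)                           # 1
```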


Re: [julia-users] Re: Returning julia objects with ccall

2014-02-10 Thread Carlos Becker
Hi Tobias,

I want to be able to return different types from ccall(), according to what
happens inside my C/C++ code,
without the need for telling julia what I want to return, its size, etc.
Argument passing gets complicated in such cases,
and I believe returning julia objects directly is neater.

I know this means that I have to write more C code to interface with julia,
but at the same time I don't have to simultaneously maintain C and Julia
code
if something changes in the C code (eg, I add a member in a struct, or I
change the output type)

Another example is with a boosting library I wrote, for which I wrote
matlab wrappers
(herehttps://sites.google.com/site/carlosbecker/resources/gradient-boosting-boosted-trees
).
In this case its train function accepts a structure with options, plus some
arrays. It returns an array of structs
as a 'model' that can be later used. It is nice to have an interface that
needs almost no tuning from the matlab or julia end.

I think many would consider this 'direct julia type return' as an
advantage, particularly people coming from python/matlab.
I also like very much how ccall can handle void pointers, which is
necessary in some situations, but in some other cases
it may be cleaner to work with julia object directly.

I hope I was clear. If I have nice code running I will make it available
for other developers.

Cheers.



--
Carlos


On Sun, Feb 9, 2014 at 7:23 PM, Tobias Knopp tobias.kn...@googlemail.comwrote:

 Carlos, the code that you showed can be completely written in Julia. It
 would be helpful if you could give us more insight what you want to
 achieve. Is there a specific API that you want to wrap? You said that the
 API returns a double pointer but the length of the memory is not known (if I
 get that right). How can one use this pointer if the length is unknown?

 On Sunday, February 9, 2014 12:46:24 UTC+1, Carlos Becker wrote:

 I think I finally made it work, close to what I wanted.

 This could be good for future reference for anyone trying to do something
 similar.
 The code is at https://gist.github.com/anonymous/8897943.

 I have two questions about it:
   1) calling jl_gc_disable() and jl_gc_enable() is ok in that context?
 (see link above)
   2) I still don't know how to avoid passing a pointer to the module I
 want to get the type from to the C call.

 Thanks.


 On Sunday, February 9, 2014 00:40:33 UTC+1, Carlos Becker wrote:

 Hi Steven,

 I tried that before, I know it is possible, but if the size is unknown
 to julia, it must be returned as another variable, which makes
 coding more difficult if many such return arrays are needed.

 That is why I think it would be interesting to see the julia-api side of
 it, to see if this can be avoided.
 (I prefer to write a bit more of C code but make the ccall clearer in
 julia)

 About my previous question, I am still struggling to create a custom
 type with julia's C API and fill it in, then pass it back.
 Has anyone done this in such a scenario before? I wonder if the necessary
 symbols are already exported for 3rd party use.

 Cheers.
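
For completeness, the length-as-out-parameter pattern being discussed looks roughly like this (a hedged sketch: `libmylib` and `c_get_values` are made-up names, and ownership of the returned buffer must be agreed with the C side):

```julia
# Assumed C signature:  double* c_get_values(int* len);
len = Cint[0]
ptr = ccall((:c_get_values, "libmylib"), Ptr{Float64},
            (Ptr{Cint},), len)
# Wrap the C buffer without copying; the C library still owns the memory.
vals = pointer_to_array(ptr, int(len[1]))
```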




Re: [julia-users] Re: Returning julia objects with ccall

2014-02-10 Thread Carlos Becker
Hi Tobias,

it may be better to look at an example. For instance, to train my SQB
classifier:

model = SQBMatrixTrain( featureMatrix, labelVector, maxIters, options )

where featureMatrix and labelVector are matrices (2-dimensional) and
vectors (1-dimensional), maxIters is a uint32 and options a structure of
the form:
options.something = value, where value may be a uint32, string, array,
etc.

Lastly, model is an array of structs, where each struct also contains
arrays in some of its members.
In this case most types/structs are known at compile-time, except for the
length of featureMatrix, labelVector and model.
(and in the future maybe the size of some arrays inside the structures of
the array model may vary as well).

Wrapping such a call in Julia with ccall seems overly complicated to me.
About structs, I can declare an immutable struct in Julia and do the exact
equivalent in C, but I fear that at some time
I can mess up the order or types, and I have no way (AFAIK) of knowing if I
made a mistake from the Julia or C side.
I would get garbage, or hopefully a segfault.

On the other hand, if I write some general wrappers for Julia, I could
check whether the passed Julia type is an array or type
of struct I am looking for, check if a struct has field 'name' of type
'type', etc. I think this is a great advantage.
Once something like this is written, modifying structs is straightforward,
and if an error happens, I can know what is happening.

Does this make sense? Or am I missing something fundamental from ccall's
design?

Thanks.




--
Carlos


On Mon, Feb 10, 2014 at 1:56 PM, Tobias Knopp
tobias.kn...@googlemail.comwrote:

 Carlos,

 So if I understand you correctly you have some C functions. Ontop of that
 you put a C wrapper that dispatches different Julia objects based on some
 internal state.
 If this is the case it would be much cleaner if you would make this state
 accessible via ccalls and do the dispatching on the Julia side. I also
 don't see the maintainance argument. It is IMHO much nicer to maintain
 Julia code instead of C code. But if you add new types to the C (core) code
 you will have to touch the wrapper anyway.

 There is a bunch of code out there in base/ and many different packages
 and I have never seen that extra C wrappers have been written just to
 expose functionality of the core libraries.
 Wrapping is always done in Julia. Often in a way that there is a thin
 wrapper that directly exposes the C API and ontop of that a nicer Julia API
 is written.

 By the way: If you are dealing with arrays of structs, making the
 according type immutable will guarantee that the memory layout is
 compatible with C.

 Cheers

 Tobi


 On Monday, February 10, 2014 12:24:27 UTC+1, Carlos Becker wrote:

 Hi Tobias,

 I want to be able to return different types from ccall(), according to
 what happens inside my C/C++ code,
 without the need for telling julia what I want to return, its size, etc.
 Argument passing gets complicated in such cases,
 and I believe returning julia objects directly is neater.

 I know this means that I have to write more C code to interface with
 julia, but at the same time I don't have to simultaneously maintain C and
 Julia code
 if something changes in the C code (eg, I add a member in a struct, or I
 change the output type)

 Another example is with a boosting library I wrote, for which I wrote
 matlab wrappers 
 (herehttps://sites.google.com/site/carlosbecker/resources/gradient-boosting-boosted-trees
 ).
 In this case its train function accepts a structure with options, plus
 some arrays. It returns an array of structs
 as a 'model' that can be later used. It is nice to have an interface that
 needs almost no tuning from the matlab or julia end.

 I think many would consider this 'direct julia type return' as an
 advantage, particularly people coming from python/matlab.
 I also like very much how ccall can handle void pointers, which is
 necessary in some situations, but in some other cases
 it may be cleaner to work with julia object directly.

 I hope I was clear. If I have nice code running I will make it available
 for other developers.

 Cheers.



 --
 Carlos


 On Sun, Feb 9, 2014 at 7:23 PM, Tobias Knopp tobias...@googlemail.comwrote:

 Carlos, the code that you showed can be completely written in Julia. It
 would be helpful if you could give us more insight what you want to
 achieve. Is there a specific API that you want to wrap? You said that the
 API returns a double pointer but the length of the memory is not know (if I
 get that right) How can one use this pointer if the length is unknown?

 On Sunday, February 9, 2014 12:46:24 UTC+1, Carlos Becker wrote:

 I think I finally made it work, close to what I wanted.

 This could be good for future reference for anyone trying to do
 something similar.
 The code is at https://gist.github.com/anonymous/8897943.

 I have two questions

Re: [julia-users] Re: Returning julia objects with ccall

2014-02-10 Thread Carlos Becker
Hi Tobias,

model = SQBMatrixTrain( featureMatrix, labelVector, maxIters, options )

That is how the matlab call looks, so it is very transparent.
That is how I want the Julia call to look, and I think that it is better
to pass Julia objects directly to ccall() in those cases.

It also offers more freedom for C/C++ programmers. I have some code already
working, and to me it looks neater than
creating extra function calls to pass/return array sizes etc., and on top of
that there is my fear of mismatched structs between C and Julia.

For example, with what I have now working in C++, to create a struct of
type MyStruct from a module,
with members *str1*, *str2* and *num*, one does:

extern "C" {
void * createStructTest( void *module )
{
    JL::Struct::Type s = JL::Struct::create( module, "MyStruct" );
    JL::Struct::setField( s, "str2", "This is str2" );
    JL::Struct::setField( s, "num", (int64_t)1234 );
    JL::Struct::setField( s, "str1", "another string" );

    return s;
}
}

and it automatically checks that MyStruct is a valid type, that str2 is a
member and is a string,
that num is a member and is an int64, etc. If not, it throws a Julia
exception and exits cleanly.
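For reference, the Julia side of such a wrapper might look roughly like
this (a sketch only; the library name "libwrapper" and the way the module
handle is passed are assumptions, not taken from the original code):

```julia
# Hypothetical: MyModule defines MyStruct (fields str1, str2, num) and
# "libwrapper" exports the createStructTest function shown above.
s = ccall((:createStructTest, "libwrapper"), Any, (Any,), MyModule)
# s would then be a MyStruct instance built entirely on the C++ side.
```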

As I said before, I think this could help Julia newcomers when wrapping
their C/C++ libraries.

Thanks.


--
Carlos


On Mon, Feb 10, 2014 at 5:20 PM, Tobias Knopp
tobias.kn...@googlemail.com wrote:


 On Monday, 10 February 2014, 14:17:02 UTC+1, Carlos Becker wrote:


 model = SQBMatrixTrain( featureMatrix, labelVector, maxIters, options )


 Is this the function declaration of the C function, or is this how you want
 your Julia API to look?
 If it's the C signature and model is a pointer to an array of structs, this
 function cannot be used at all, as the user cannot know the length of model.
 Note, by the way, that the length/size of an array is not a compile-time
 parameter. It can be changed during runtime.





Re: [julia-users] Re: Returning julia objects with ccall

2014-02-10 Thread Carlos Becker
Hi Tobias, thanks for your support!

I am still working on it, but when I have something ready for pushing I
will let you know.
I have two specific questions now:

1) I think there would be a few issues with the lack of julia's
exported symbols (non-DLLEXPORTed symbols in julia.h).
   The ones related to arrays and basic types are exported now, but others
are not, and therefore runtime linking errors occur.

  Shall I create a diff file with a proposal of the additional ones to
DLLEXPORT, to check if you agree and we can add that to the main repo?
  Having those symbols exported would also be great for anyone calling
Julia from C (aka Embedding Julia).


2) I am still not sure how to get a pointer to a module, given its name. Is
this possible with the current julia api?
I mean being able to do something like jl_module_t *moduleptr =
jl_get_module_by_name("MyModule") or similar.


Thanks.
--
Carlos


On Mon, Feb 10, 2014 at 6:01 PM, Tobias Knopp
tobias.kn...@googlemail.com wrote:

 Fair enough; if you wrap the C-API in C++ this can get quite neat. And
 having myself pushed the documentation of the Julia C-API, I definitely
 think that Julia's C-API is quite good. My intention at that time was,
 however, embedding Julia in C, in which case the C-API is definitely
 required.

 So, it seems that you know what you are doing, and for a C++ library the
 answer might not be as simple as "do it using ccall in Julia".

 I would welcome it if you could push your c++ wrapper into the main Julia
 source code!

 Cheers,

 Tobi







[julia-users] Returning julia objects with ccall

2014-02-08 Thread Carlos Becker
Hello everyone, 

I just got started with Julia, and I wanted to try to wrap a C/C++ library
for Julia to check whether it would work out for my purposes.

I tried out many ways of passing arrays and other objects from C back to 
Julia.
So far it seems that it takes a lot of extra code if I want to return, for
example, a simple double array or an array of types (e.g. structs).

Then I thought that I could call the Julia API from the ccalled binary, to 
allocate an array and return it to julia,
then use unsafe_pointer_to_objref() and get a neat Julia object directly.

You can see a very simple example here 
https://gist.github.com/anonymous/647

This would simplify _significantly_ a lot of code on the C side, at least
with what I am working on right now.

Now, my question is: is it safe to call functions such as jl_alloc_array_1d()
from the C binary?
Would this be a problem in some situations?

I understand that it may mess memory up if those functions are called 
outside the main thread, but I would certainly not do that.

Thanks in advance,
Carlos


Re: [julia-users] Returning julia objects with ccall

2014-02-08 Thread Carlos Becker
Hi Jeff, thanks for the quick reply.

If Any is used as the return type of the ccall, the result will be 
 treated as a julia reference and you can skip 
 unsafe_pointer_to_objref. 


If returning Any, would the GC take care of it?
 


 A variant of this is to allocate the array in julia, and pass it to 
 the C function to be filled in (ccall will effectively call 
 jl_array_ptr for you to pass the array to C). 


Right, but if the final size is not known before ccall(), then it may not 
be as easy as creating it from C.
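When the size is known up front, the allocate-in-Julia pattern mentioned
above is indeed simple. A runnable sketch, using libc's memset to stand in
for a user-supplied C fill function:

```julia
# Allocate the buffer in Julia; ccall passes a pointer to its data.
a = zeros(UInt8, 8)

# The C side (here plain memset from libc) fills the buffer in place.
ccall(:memset, Ptr{Cvoid}, (Ptr{UInt8}, Cint, Csize_t), a, 0xff, length(a))

# a now holds the bytes written on the C side: all 0xff.
```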

Now, I am also looking for something a bit more involved.
For example, I need to return what in C is an array of structs
(std::vector<struct type>).
Do you see any problems with creating and filling a specific Julia type,
then adding several of them
into a Julia array? Would you point me to some of the functions of the API
that could be used for this?

Thanks,
Carlos


 





Re: [julia-users] Returning julia objects with ccall

2014-02-08 Thread Carlos Becker
To formulate my question a bit more specifically (about the last part of my
email):

Suppose I want to call jl_new_struct() to instantiate a 'type'; I need to
provide the type, so I guess
I have to pass the right jl_datatype_t *, which might be found
with jl_get_global().
Now, jl_get_global() needs the module pointer, which is not available as a
linking symbol, since this would come from an
external module (let's say it is called MyModule). Is there a function to
look up the pointer of a module given its string name?

Thanks.





[julia-users] Re: Returning julia objects with ccall

2014-02-08 Thread Carlos Becker
Hi Steven,

I tried that before; I know it is possible, but if the size is unknown to
julia, it must be returned as another variable, which makes
coding more difficult if many such returned arrays are needed.

That is why I think it would be interesting to see the julia-api side of
it, to see if this can be avoided.
(I prefer to write a bit more C code but make the ccall clearer in julia.)
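One common shape for the "size returned as another variable" pattern,
sketched in v0.3-era Julia (the C function get_data and the library name
"libmylib" are hypothetical, only for illustration):

```julia
# Hypothetical C function:
#   double *get_data(size_t *len);  /* allocates, writes length into *len */
len = Csize_t[0]                     # one-element array as an out-parameter
ptr = ccall((:get_data, "libmylib"), Ptr{Cdouble}, (Ptr{Csize_t},), len)

# Wrap the C memory as a Julia array without copying; the final `false`
# means Julia's GC does not take ownership of the C-allocated buffer.
data = pointer_to_array(ptr, int(len[1]), false)
```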

About my previous question: I am still struggling to create a custom type
with julia's C API and fill it in, then pass it back.
Has anyone done this in such a scenario before? I wonder whether the
necessary symbols are already exported for third-party use.

Cheers.