[julia-users] Re: how to replace , to . in Array{Any,2}:

2015-05-17 Thread paul analyst
I see. Any hints?
Paul

On Sunday, May 17, 2015 at 01:28:50 UTC+2, ele...@gmail.com wrote:

 I don't think replace broadcasts; you have to write a loop applying 
 replace to each string, one at a time.

 On Sunday, May 17, 2015 at 4:40:05 AM UTC+10, paul analyst wrote:

 I have a file with a decimal separator like ","
 x=readdlm("x.txt",'\t')

 julia> x=x[2:end,:]
 6390x772 Array{Any,2}:

 some columns look like:
 julia> x[:,69]
 6390-element Array{Any,1}:
  0.0
   0,33
   0,72
   1,09
   0,95
   3,57
   2,27
   2,42



 julia> replace(x[:,69],',','.')
 ERROR: `replace` has no method matching replace(::Array{Any,1}, ::Char, 
 ::Char)

 julia> replace(string(x[:,69],',','.'))
 ERROR: `replace` has no method matching replace(::ASCIIString)


 how can I replace "," with "."?

 I can replace all "," with "." 

 Paul
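A loop along these lines might look like the following sketch (written in current Julia syntax, where `replace` takes a `',' => '.'` pair; the 0.3-era call in this thread was `replace(s, ',', '.')`, and the sample matrix here is hypothetical stand-in data):

```julia
# Hypothetical stand-in for the readdlm result: cells are either numbers
# or strings that use a comma as the decimal separator.
x = Any[0.0 "0,33"; "0,72" 1.09]

for i in eachindex(x)
    if isa(x[i], AbstractString)
        # swap the comma for a dot, then parse the string as a Float64
        x[i] = parse(Float64, replace(x[i], ',' => '.'))
    end
end

# x now holds only Float64 values
```

After the loop, `convert(Array{Float64,2}, x)` would give a concretely typed matrix.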




[julia-users] How to convert a matrix in Julia ?

2015-05-17 Thread Lytu
Hello Julia users,

How can I convert an Array{Int32,2} matrix H into an Array{Float64,2} matrix?

For example: H=[5 3 6;3 9 12;6 12 17] 

I ask this because I would like to do this:
H=[5 3 6;3 9 12;6 12 17]
val=-7.046128291045994
H[2,1] = val
but it gives an error: InexactError()

I think it is because val is a *Float64* and the matrix H is an 
*Array{Int32,2}*.

Does anybody know how I can do this?

Thanks


[julia-users] Re: How to convert a matrix in Julia ?

2015-05-17 Thread paul analyst
H=Float64[5 3 6;3 9 12;6 12 17]
Paul
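For an already-constructed Int matrix, a conversion might be sketched like this (`convert` and `float` are both in Base; `float(H)` is the form suggested elsewhere in this thread):

```julia
H = [5 3 6; 3 9 12; 6 12 17]        # an integer matrix
Hf = convert(Array{Float64,2}, H)   # explicit conversion to Float64
Hf[2,1] = -7.046128291045994        # no InexactError now

Hf2 = float(H)                      # same result via the generic float function
```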

On Sunday, May 17, 2015 at 10:03:28 UTC+2, Lytu wrote:

 Hello Julia users,

 How can i convert an Array{Int32,2} matrix H in an Array{Float64,2} matrix?

 For example: H=[5 3 6;3 9 12;6 12 17] 

 I ask this because i would like to do this:   H=[5 3 6;3 9 12;6 12 17] 
   
 val=-7.046128291045994
   H[2,1] = val
 but it gives an error: InexactError()

 I think it is because val is a* float64* and the matrix H is an 
 *Array{Int32,2}*

 Does anybody know how i can do this?

 Thanks



[julia-users] Problem with plot in Julia

2015-05-17 Thread Lytu
I have an issue with my code in the attachment. When I run it, it plots nothing.
The attachment contains the code (2 files) and the plot that I was 
supposed to get (the one I got in Matlab).

Can someone please tell me why?

Thank you



CoDeNMFSymORIGINAL.jl
Description: Binary data


Run.jl
Description: Binary data


Re: [julia-users] llvmcall printf

2015-05-17 Thread andrew cooke
thanks, but the reason i need llvmcall is because i want to call a specific 
intel instruction (which llvm actually supports - i've found the patch for 
it).  but before i do that i am trying to just get something to work, and 
printf seemed like a good intermediate goal.

i am going to look at the implementing code and work out what is generated, 
i think, and then ask llvmdev for help.  there's something i don't 
understand about the context in which the llvm ir is inserted.  declare 
instructions don't seem to be accepted, for example.  once i can pin that 
down i think i can ask a sensible question on llvmdev...

cheers,
andrew


On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org wrote: 
  
  Does anyone have a working example that calls printf via llvmcall? 
  
  I realise I'm uncomfortably inbetween llvmdev and julia-users, but I'm 
  asking here first because I suspect my limitations are still more 
  julia-related. 
  
  In particular, 
  
  julia> g() = Base.llvmcall( 
   """call i32 (i8*, ...)* @printf(i8* c"hello world\00") 
   ret""", 
   Void, Tuple{}) 
  g (generic function with 2 methods) 
  
  julia> g() 
  ERROR: error compiling g: Failed to parse LLVM Assembly: 
  julia: llvmcall:3:35: error: expected string 
  call i32 (i8*, ...)* @printf(i8* c"hello world 
^ 
  
  seems like it's *almost* there...? 
  
  Thanks, 
  Andrew 
  
  (I suspect I also need something other than @printf, like 
  IntrinsicsX86.printf or something, but I can't find where I saw an 
 example 
  like that...  Related, declare doesn't seem to be accepted, or 
 assignment to 
  global variables.  But I am completely new to all this...) 
  

 I was also interested in knowing how to use `llvmcall` in general, but 
 at least for this limited case (and I guess you probably know 
 already) it is easier to use `ccall`: 

 ```julia 
 julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n") 
 hellow world 
 13 
 ``` 



Re: [julia-users] Any Julians attending CVPR 2015?

2015-05-17 Thread Dahua Lin
I will be attending CVPR 2015.

Best,
Dahua

On Sunday, May 17, 2015 at 8:14:50 AM UTC+8, Kevin Squire wrote:

 Hi Tracy, Sebastian, I'll be there as well (with my company, no talk). 
 Would be nice to meet up!

 Cheers,
Kevin 

 On Saturday, May 16, 2015, Sebastian Nowozin now...@gmail.com wrote:


 Hi Tracy,

 I am a regular Julia user and will be attending CVPR, feel free to find 
 me at the conference.
 (Also, I have seen Dahua Lin at previous CVPRs, but I am not sure he will 
 attend this year.)

 Best,
 Sebastian Nowozin


 On Saturday, 16 May 2015 12:19:30 UTC+1, Tracy Wadleigh wrote:

 It turns out my organization will be sending me to CVPR 2015 
 http://www.pamitc.org/cvpr15/. The decision is sort of last minute, 
 and I won't be travelling with anyone or presenting anything, and I've made 
 no introduction to anyone I've seen in the program, so I'm a little extra 
 eager to find excuses to talk to people.

 I won't be able to make it to JuliaCon this year (I'll be on a camping 
 trip with my son), but I'd still like to try to meet some other Julians in 
 Boston in June.




Re: [julia-users] llvmcall printf

2015-05-17 Thread Isaiah Norton

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.


You should start with lli and make sure that you are writing the IR
correctly; if it works in lli, then the issue is with Julia (as is most
likely -- llvmcall is kind of brittle).


 here's something i don't understand about the context in which the llvm ir
 is inserted.  declare instructions don't seem to be accepted, for example.


See
https://github.com/JuliaLang/julia/pull/8740


On Sun, May 17, 2015 at 10:17 AM, andrew cooke and...@acooke.org wrote:

 thanks, but the reason i need llvmcall is because i want to call a specific
 intel instruction (which llvm actually supports - i've found the patch for
 it).  but before i do that i am trying to just get something to work, and
 printf seemed like a good intermediate goal.

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.  there's something i
 don't understand about the context in which the llvm ir is inserted.
 declare instructions don't seem to be accepted, for example.  once i can
 pin that down i think i can ask a sensible question on llvmdev...

 cheers,
 andrew


 On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org wrote:
 
  Does anyone have a working example that calls printf via llvmcall?
 
  I realise I'm uncomfortably inbetween llvmdev and julia-users, but I'm
  asking here first because I suspect my limitations are still more
  julia-related.
 
  In particular,
 
  julia> g() = Base.llvmcall(
   """call i32 (i8*, ...)* @printf(i8* c"hello world\00")
   ret""",
   Void, Tuple{})
  g (generic function with 2 methods)
 
  julia> g()
  ERROR: error compiling g: Failed to parse LLVM Assembly:
  julia: llvmcall:3:35: error: expected string
  call i32 (i8*, ...)* @printf(i8* c"hello world
^
 
  seems like it's *almost* there...?
 
  Thanks,
  Andrew
 
  (I suspect I also need something other than @printf, like
  IntrinsicsX86.printf or something, but I can't find where I saw an
 example
  like that...  Related, declare doesn't seem to be accepted, or
 assignment to
  global variables.  But I am completely new to all this...)
 

 I was also interested in knowing how to use `llvmcall` in general, but
 at least for this limited case (and I guess you probably know
 already) it is easier to use `ccall`:

 ```julia
 julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n")
 hellow world
 13
 ```




[julia-users] Re: Problem with plot in Julia

2015-05-17 Thread Mohammed El-Beltagy
It would help if you were to present a minimal example, not as an 
attachment. 
I also noticed that your code is not working... there is no 
include("CoDeNMFSysORIGINAL.jl") in Run.jl. 
If you are running Run.jl outside the REPL, you are unlikely to see 
anything, as Julia would immediately exit. 

On Sunday, May 17, 2015 at 11:34:40 AM UTC+2, Lytu wrote:

 I have an issue with my code in attachment. When i run, it plots nothing.
 In attachment, there are the code (2 files) and the plot that i was 
 supposed to have (that i had in Matlab).

 Can someone please tell me why?

 Thank you



Re: [julia-users] llvmcall printf

2015-05-17 Thread andrew cooke

ah, thanks - that issue is a huge help.

On Sunday, 17 May 2015 11:38:20 UTC-3, Isaiah wrote:

 i am going to look at the implementing code and work out what is 
 generated, i think, and then ask llvmdev for help.


 You should start with lli and make sure that you are writing the IR 
 correctly; if it works in lli, then the issue is with Julia (as is most 
 likely -- llvmcall is kind of brittle). 
  

 here's something i don't understand about the context in which the llvm 
 ir is inserted.  declare instructions don't seem to be accepted, for 
 example.


 See
 https://github.com/JuliaLang/julia/pull/8740


 On Sun, May 17, 2015 at 10:17 AM, andrew cooke and...@acooke.org wrote:

 thanks, but the reason i need llvmcall is because i want to call a 
 specific intel instruction (which llvm actually supports - i've found the 
 patch for it).  but before i do that i am trying to just get something to 
 work, and printf seemed like a good intermediate goal.

 i am going to look at the implementing code and work out what is 
 generated, i think, and then ask llvmdev for help.  there's something i 
 don't understand about the context in which the llvm ir is inserted.  
 declare instructions don't seem to be accepted, for example.  once i can 
 pin that down i think i can ask a sensible question on llvmdev...

 cheers,
 andrew


 On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org 
 wrote: 
  
  Does anyone have a working example that calls printf via llvmcall? 
  
  I realise I'm uncomfortably inbetween llvmdev and julia-users, but I'm 
  asking here first because I suspect my limitations are still more 
  julia-related. 
  
  In particular, 
  
  julia> g() = Base.llvmcall( 
   """call i32 (i8*, ...)* @printf(i8* c"hello world\00") 
   ret""", 
   Void, Tuple{}) 
  g (generic function with 2 methods) 
  
  julia> g() 
  ERROR: error compiling g: Failed to parse LLVM Assembly: 
  julia: llvmcall:3:35: error: expected string 
  call i32 (i8*, ...)* @printf(i8* c"hello world 
^ 
  
  seems like it's *almost* there...? 
  
  Thanks, 
  Andrew 
  
  (I suspect I also need something other than @printf, like 
  IntrinsicsX86.printf or something, but I can't find where I saw an 
 example 
  like that...  Related, declare doesn't seem to be accepted, or 
 assignment to 
  global variables.  But I am completely new to all this...) 
  

 I was also interested in knowing how to use `llvmcall` in general, but 
 at least for this limited case (and I guess you probably know 
 already) it is easier to use `ccall`: 

 ```julia 
 julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n") 
 hellow world 
 13 
 ``` 




[julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Mohammed El-Beltagy
Today, while trying to optimize a piece of code, I came across some rather 
curious memory-allocation behavior when accessing a DataArray. 

x=rand(1:10,100);
function countGT(x::Array{Int,1})
    count=0
    for i=1:length(x)
        count += (x[i] > 5) ? 1 : 0
    end
    count
end

Here is what you get after running @time (compilation excluded) 

@time countGT(x);
elapsed time: 0.00847156 seconds (96 bytes allocated)

That is not too bad: @time itself allocated 80 bytes, and the extra 16 
bytes are for creating the variable count; so far so good.
Now let's see if we do the same with a floating-point array. 
x=rand(100);
function countGT(x::Array{Float64,1})
    count=0.0
    for i=1:length(x)
        count += (x[i] > 5.0) ? 1.0 : 0.0
    end
    count
end

countGT(x)
@time countGT(x)

You get 
elapsed time: 0.00177126 seconds (96 bytes allocated)
which is still pretty good. Now, the problem starts to show up when I have a 
DataArray:
x=@data rand(100);
function countGT(x::DataArray{Float64,1})
    count=0.0
    for i=1:length(x)
        count += (x[i] > 5.0) ? 1.0 : 0.0
    end
    count
end

countGT(x)
@time countGT(x)

We get
elapsed time: 0.23610454 seconds (1696 bytes allocated)

The bytes allocated seem to scale with the size of the DataArray. So it 
seems that the mere act of accessing an element in a DataArray allocates 
memory. 

I am wondering whether there could be a better way. 
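One way to measure this kind of per-call allocation more directly is Base's @allocated macro, which reports the bytes allocated by a single expression. A sketch on a plain Array (the threshold 0.5 is chosen here so the count is nonzero; call the function once first so compilation is excluded, as above):

```julia
x = rand(100)

function countGT(x)
    count = 0.0
    for i = 1:length(x)
        count += (x[i] > 0.5) ? 1.0 : 0.0
    end
    count
end

countGT(x)                      # warm-up call, so compilation is not measured
bytes = @allocated countGT(x)   # bytes allocated by the call itself
# for a plain Array this loop allocates essentially nothing
```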









Re: [julia-users] Re: Problem with plot in Julia

2015-05-17 Thread El suisse
The following posts are related to your code:

https://groups.google.com/forum/#!searchin/julia-users/$20Equivalent$20of$20MATLAB$27s$20nargout$20in$20Julia

And maybe it is a good idea to encapsulate the convergence history in a type,
like in this package:

https://github.com/JuliaLang/IterativeSolvers.jl/blob/master/src/common.jl#L37-L43

https://github.com/JuliaLang/IterativeSolvers.jl/blob/master/src/lanczos.jl#L34



2015-05-17 11:26 GMT-03:00 Mohammed El-Beltagy mohammed.elbelt...@gmail.com:

 It would help if you were to present a minimal example, not as an
 attachment.
 I also noticed that your code is not working... there is no
 include("CoDeNMFSysORIGINAL.jl") in Run.jl.
 If you are running Run.jl outside the REPL, you are unlikely to see
 anything, as Julia would immediately exit.

 On Sunday, May 17, 2015 at 11:34:40 AM UTC+2, Lytu wrote:

 I have an issue with my code in attachment. When i run, it plots nothing.
 In attachment, there are the code (2 files) and the plot that i was
 supposed to have (that i had in Matlab).

 Can someone please tell me why?

 Thank you




[julia-users] Re: How to convert a matrix in Julia ?

2015-05-17 Thread Mohammed El-Beltagy
float(H) would yield a converted matrix. 

On Sunday, May 17, 2015 at 10:03:28 AM UTC+2, Lytu wrote:

 Hello Julia users,

 How can i convert an Array{Int32,2} matrix H in an Array{Float64,2} matrix?

 For example: H=[5 3 6;3 9 12;6 12 17] 

 I ask this because i would like to do this:   H=[5 3 6;3 9 12;6 12 17] 
   
 val=-7.046128291045994
   H[2,1] = val
 but it gives an error: InexactError()

 I think it is because val is a* float64* and the matrix H is an 
 *Array{Int32,2}*

 Does anybody know how i can do this?

 Thanks



Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Yichao Yu
On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy
mohammed.elbelt...@gmail.com wrote:
 Today while trying optimize a piece code I came across a rather curious
 behavior of when allocation memory when accessing a DataArray.

 x=rand(1:10,100);
 function countGT(x::Array{Int,1})

Since the algorithm is the same for both types, I think you don't need
the type assert here. Julia will automatically specialize on the type
you pass in.

 count=0
 for i=1:length(x)
   count+= (x[i] > 5) ? 1 : 0

Adding `@inbounds` here will improve the performance for `Array`. Not
sure if it can help with `DataArray` yet, though.

 end
 count
 end

 Here is what you get after running @time (compilation excluded)

 @time countGT(x);
 elapsed time: 0.00847156 seconds (96 bytes allocated)

 That is not too bad. @time at least allocated 80 bytes and the extra 16
 bytes is for creating the variable count, so far so good.
 Now lets see if we do the same a floating point array.
 x=rand(100);
 function countGT(x::Array{Float64,1})
 count=0.0
 for i=1:length(x)
   count+= (x[i] > 5.0) ? 1.0 : 0.0
 end
 count
 end

 countGT(x)
 @time countGT(x)

 You get
 elapsed time: 0.00177126 seconds (96 bytes allocated)
 Which still pretty good. Now, the problem start to show up when I have a
 DataArray
 x=@data rand(100);
 function countGT(x::DataArray{Float64,1})
 count=0.0
 for i=1:length(x)
   count+= (x[i] > 5.0) ? 1.0 : 0.0
 end
 count
 end

`getindex` on a DataArray appears not to be type stable: it returns
either `NAType` or the data type. I think this is probably the reason
for the allocation.


 countGT(x)
 @time countGT(x)

 You we get
 elapsed time: 0.23610454 seconds (1696 bytes allocated)

 The bytes allocated seems to scale with the size of the DataArray. So it
 seems that mere act of accessing an element in a DataArray allocates memory.

 I am wondering there could be a better way.



I'm not familiar with DataArrays and its API, but I would guess it could
use Nullable or something similar.
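For reference, in current Julia the role that Nullable was intended to play is handled by `missing` (Nullable itself was later removed from Base); a type-stable count over a vector with missing entries might be sketched as:

```julia
x = [0.1, missing, 0.7, 0.95, missing, 0.6]

function count_gt(x, threshold)
    c = 0
    for v in x
        # skip missing entries explicitly; otherwise the comparison
        # v > threshold would propagate missing instead of returning Bool
        if v !== missing && v > threshold
            c += 1
        end
    end
    c
end

count_gt(x, 0.5)  # counts 0.7, 0.95 and 0.6
```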








Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Mohammed El-Beltagy
You are quite right about the type assertions, and @inbounds would 
certainly speed things up. 
However, I am concerned here with how memory is being allocated. I hope 
somebody who is familiar with DataArrays can explain this behavior. 

On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote:

 On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy 
 mohammed@gmail.com wrote: 
  Today while trying optimize a piece code I came across a rather curious 
  behavior of when allocation memory when accessing a DataArray. 
  
  x=rand(1:10,100); 
  function countGT(x::Array{Int,1}) 

 Since the algorithm is the same for both types, I think you don't need 
 the type assert here. Julia will automatically specialize on the type 
 you pass in. 

  count=0 
  for i=1:length(x) 
 count+= (x[i] > 5) ? 1 : 0 

 add `@inbounds` here will improve the performance for `Array`. Not 
 sure if it can help with `DataArray` yet though. 

  end 
  count 
  end 
  
  Here is what you get after running @time (compilation excluded) 
  
  @time countGT(x); 
  elapsed time: 0.00847156 seconds (96 bytes allocated) 
  
  That is not too bad. @time at least allocated 80 bytes and the extra 16 
  bytes is for creating the variable count, so far so good. 
  Now lets see if we do the same a floating point array. 
  x=rand(100); 
  function countGT(x::Array{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
 count+= (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 
  
  countGT(x) 
  @time countGT(x) 
  
  You get 
  elapsed time: 0.00177126 seconds (96 bytes allocated) 
  Which still pretty good. Now, the problem start to show up when I have a 
  DataArray 
  x=@data rand(100); 
  function countGT(x::DataArray{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
 count+= (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 

 `getindex` of DataArray appears to be not type stable. It returns 
 either `NAType` or the data type. I think this is probably the reason 
 for the allocation. 

  
  countGT(x) 
  @time countGT(x) 
  
  You we get 
  elapsed time: 0.23610454 seconds (1696 bytes allocated) 
  
  The bytes allocated seems to scale with the size of the DataArray. So it 
  seems that mere act of accessing an element in a DataArray allocates 
 memory. 
  
  I am wondering there could be a better way. 
  
  

 I'm not familiar with DataArrays and it's API but I would guess it can 
 use Nullable or sth similar. 

  
  
  
  
  



Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Milan Bouchet-Valat
On Sunday, May 17, 2015 at 09:25 -0700, Mohammed El-Beltagy wrote:
 You are quite right about the type assertions and that @inbounds would
 certainly speed things up. 
 However, I am concerned here with how memory was being allocated. I
 wish that somebody who is familiar with DataArray would explain this
 behavior.

That's a known design issue with DataArrays, and the reason why John
Myles White has started working on Nullable and NullableArrays to
replace them. As Yichao noted, []/getindex is type-unstable for
DataArrays as it can return NA, and this kills performance in Julia.

To improve performance, you can access the internals of the DataArray,
doing something like:

function countGT(x::DataArray{Float64,1})
    count=0.0
    for i=1:length(x)
        if !isna(x, i)
            count += (x.data[i] > 5.0) ? 1.0 : 0.0
        end
    end
    count
end

Always write isna(x, i) instead of isna(x[i]), since the latter suffers
from type instability.

Regards
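The instability can be demonstrated without DataArrays at all. The following hypothetical stand-in (in current Julia syntax; the type and function names are invented for illustration) mimics an indexing function that returns either the value or an NA-style sentinel, so inference can only derive a Union return type:

```julia
# Sentinel type playing the role of DataArrays' NA
struct NAtype end

# Indexing stand-in: returns NAtype() for masked entries, the value otherwise.
# The two branches have different types, so the return type is a Union.
unstable_get(v, mask, i) = mask[i] ? NAtype() : v[i]

v = rand(10)
mask = falses(10)
unstable_get(v, mask, 1)   # a Float64 at runtime, but inferred as a Union
# @code_warntype unstable_get(v, mask, 1) highlights the Union return type
```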



 On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote:
 
 On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy 
 mohammed@gmail.com wrote: 
  Today while trying optimize a piece code I came across a
 rather curious 
  behavior of when allocation memory when accessing a
 DataArray. 
  
  x=rand(1:10,100); 
  function countGT(x::Array{Int,1}) 
 
 Since the algorithm is the same for both types, I think you
 don't need 
 the type assert here. Julia will automatically specialize on
 the type 
 you pass in. 
 
  count=0 
  for i=1:length(x) 
  count+= (x[i] > 5) ? 1 : 0 
 
 add `@inbounds` here will improve the performance for `Array`.
 Not 
 sure if it can help with `DataArray` yet though. 
 
  end 
  count 
  end 
  
  Here is what you get after running @time (compilation
 excluded) 
  
  @time countGT(x); 
  elapsed time: 0.00847156 seconds (96 bytes allocated) 
  
  That is not too bad. @time at least allocated 80 bytes and
 the extra 16 
  bytes is for creating the variable count, so far so good. 
  Now lets see if we do the same a floating point array. 
  x=rand(100); 
  function countGT(x::Array{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
  count+= (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 
  
  countGT(x) 
  @time countGT(x) 
  
  You get 
  elapsed time: 0.00177126 seconds (96 bytes allocated) 
  Which still pretty good. Now, the problem start to show up
 when I have a 
  DataArray 
  x=@data rand(100); 
  function countGT(x::DataArray{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
  count+= (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 
 
 `getindex` of DataArray appears to be not type stable. It
 returns 
 either `NAType` or the data type. I think this is probably the
 reason 
 for the allocation. 
 
  
  countGT(x) 
  @time countGT(x) 
  
  You we get 
  elapsed time: 0.23610454 seconds (1696 bytes allocated) 
  
  The bytes allocated seems to scale with the size of the
 DataArray. So it 
  seems that mere act of accessing an element in a DataArray
 allocates memory. 
  
  I am wondering there could be a better way. 
  
  
 
 I'm not familiar with DataArrays and it's API but I would
 guess it can 
 use Nullable or sth similar. 
 
  
  
  
  
  



Re: [julia-users] known, working, recent examples for Clang.jl?

2015-05-17 Thread Isaiah Norton
This was recently posted:
https://github.com/denizyuret/CUDNN.jl

(please file issues...)

On Sun, May 17, 2015 at 10:44 AM, Tim Holy tim.h...@gmail.com wrote:

 I don't know if it's currently working, but CUDArt has a Clang script that
 was
 working a couple of months ago.

 --Tim

 On Sunday, May 17, 2015 02:46:38 AM Andreas Lobinger wrote:
  Hello colleagues,
 
  I seem to have problems with Clang.jl (master, on 0.4-dev). The listed and
  included examples give me all kinds of 'deprecated' or missing warnings
  and errors. I recognize that the underlying technology is not really easy
  to understand; still, I'm looking for an out-of-the-box running example.
 
  Wishing a happy day,
 Andreas




Re: [julia-users] known, working, recent examples for Clang.jl?

2015-05-17 Thread Andreas Lobinger


On Sunday, May 17, 2015 at 4:44:29 PM UTC+2, Tim Holy wrote:

 I don't know if it's currently working, but CUDArt has a Clang script that 
 was 
 working a couple of months ago. 

 Thank you. I wasn't looking into it because I'm not part of the CUDA 
club, but the structure helped me start my own wrap_c... 




Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Yichao Yu
On Sun, May 17, 2015 at 12:52 PM, Milan Bouchet-Valat nalimi...@club.fr wrote:
 Le dimanche 17 mai 2015 à 09:25 -0700, Mohammed El-Beltagy a écrit :

 You are quite right about the type assertions and that @inbounds would
 certainly speed things up.
 However, I am concerned here with how memory was being allocated. I wish
 that somebody who is familiar with DataArray would explain this behavior.

 That's a known design issue with DataArrays, and the reason why John Myles
 White has started working on Nullable and NullableArrays to replace them. As

Didn't know about this part of the story.

P.S. Your example led me to hit
https://github.com/JuliaLang/julia/issues/11313. Thank you for
exposing it.

 Yichao noted, []/getindex is type-unstable for DataArrays as it can return
 NA, and this kills performance in Julia.

 To improve performance, you can access the internals of the DataArray, doing
 something like:

 function countGT(x::DataArray{Float64,1})
 count=0.0
 for i=1:length(x)
 if !isna(x, i)
 count+= (x.data[i]5.0)? 1.0 : 0.0
 end
 end
 count

 end

 Always write isna(x, i) instead of isna(x[i]), since the latter suffers from
 type instability.

 Regards



 On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote:

 On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy
 mohammed@gmail.com wrote:
 Today while trying optimize a piece code I came across a rather curious
 behavior of when allocation memory when accessing a DataArray.

 x=rand(1:10,100);
 function countGT(x::Array{Int,1})

 Since the algorithm is the same for both types, I think you don't need
 the type assert here. Julia will automatically specialize on the type
 you pass in.

 count=0
 for i=1:length(x)
   count+= (x[i] > 5) ? 1 : 0

 add `@inbounds` here will improve the performance for `Array`. Not
 sure if it can help with `DataArray` yet though.

 end
 count
 end

 Here is what you get after running @time (compilation excluded)

 @time countGT(x);
 elapsed time: 0.00847156 seconds (96 bytes allocated)

 That is not too bad. @time at least allocated 80 bytes and the extra 16
 bytes is for creating the variable count, so far so good.
 Now lets see if we do the same a floating point array.
 x=rand(100);
 function countGT(x::Array{Float64,1})
 count=0.0
 for i=1:length(x)
   count+= (x[i] > 5.0) ? 1.0 : 0.0
 end
 count
 end

 countGT(x)
 @time countGT(x)

 You get
 elapsed time: 0.00177126 seconds (96 bytes allocated)
 Which still pretty good. Now, the problem start to show up when I have a
 DataArray
 x=@data rand(100);
 function countGT(x::DataArray{Float64,1})
 count=0.0
 for i=1:length(x)
   count+= (x[i] > 5.0) ? 1.0 : 0.0
 end
 count
 end

 `getindex` of DataArray appears to be not type stable. It returns
 either `NAType` or the data type. I think this is probably the reason
 for the allocation.


 countGT(x)
 @time countGT(x)

 You we get
 elapsed time: 0.23610454 seconds (1696 bytes allocated)

 The bytes allocated seems to scale with the size of the DataArray. So it
 seems that mere act of accessing an element in a DataArray allocates
 memory.

 I am wondering there could be a better way.



 I'm not familiar with DataArrays and it's API but I would guess it can
 use Nullable or sth similar.










Re: [julia-users] initialize nested arrays

2015-05-17 Thread Yichao Yu
On Sun, May 17, 2015 at 7:50 PM, David P. Sanders dpsand...@gmail.com wrote:


 On Sunday, May 17, 2015 at 16:58:19 (UTC-5), Kuba Roth wrote:

 I see, so similarly this works with any level of nesting:
 julia> Array[[1,2],Array[[3],Array[[4],[6,7]]]]
 2-element Array{Array{T,N},1}:
  [1,2]
  Array[[3],Array[[4],[6,7]]]

 Thanks

 On Sunday, May 17, 2015 at 12:59:28 PM UTC-7, Mauro wrote:

 This works:

 julia> Array[[1,2],Array[[3],[4,5]]]
 2-element Array{Array{T,N},1}:
  [1,2]
  Array[[3],[4,5]]


 However, the type is incompletely specified (as given away by Array{T,N},
 where T and N are parameters that have not been specified).

Specifying Array as the element type is actually more specific than
specifying Any, since `Array <: Any`. If you don't need to push
anything other than an array to that array, you should probably use
Array. Otherwise, Any is probably the right type to use.
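The practical difference shows up when pushing (a small sketch; in an Array-typed container only arrays can be stored, while an Any-typed one accepts everything):

```julia
a = Array[[1,2], [3,4]]    # element type is Array
b = Any[[1,2], [3,4]]      # element type is Any

push!(b, "not an array")   # fine: an Any container accepts any value
# push!(a, "not an array") would fail: a String cannot be converted to an Array
```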


 An alternative, that seems to me neater, is to specify that the type stored
 in the array be Any -- literally, the elements of the array can be of any
 type:

 Any[ [3], [4,5] ]

 Any[ [3], Any[ [4], [6,7] ] ]

 This prints out different things on v0.3 and v0.4, but I believe it has the
 same result (and will not be affected by the change to concatenation
 referred to by Mauro).

 It's perhaps more readable to build it up one element at a time:

 L = Any[ ]
 push!(L, [3])
 push!(L, Any[ [4], [6,7] ])

 David.





 (BTW, automatic concatenation is being changed in 0.4 and 0.5)

 On Sun, 2015-05-17 at 21:41, Kuba Roth kuba...@gmail.com wrote:
  I've seen similar replies to my question on the forum but I still can't
  figure out how to initialize nested arrays of a depth higher then 2?
 
  For instance this works fine:
  julia> Array[[1,2],[3,4]]
  2-element Array{Array{T,N},1}:
   [1,2]
   [3,4]
 
 
  But the following does not return what I expect. The [[3],[4,5]] part
  is
  flattened into a single array.
  julia> Array[[1,2],[[3],[4,5]]]
  2-element Array{Array{T,N},1}:
   [1,2]
   [3,4,5]
 
 
  So far my workaround is to store the intermediate result and then
  assign it
  to the final array. Can that be shortened to a single line?
  b = Array[[3],[4,5]]
  julia> Array[[1,2],b]
  2-element Array{Array{T,N},1}:
   [1,2]
   Array[[3],[4,5]]
 
 
  Thank you.




Re: [julia-users] initialize nested arrays

2015-05-17 Thread David P. Sanders


On Sunday, May 17, 2015 at 16:58:19 (UTC-5), Kuba Roth wrote:

 I see, so similarly this works with any level of nesting:
 julia> Array[[1,2],Array[[3],Array[[4],[6,7]]]]
 2-element Array{Array{T,N},1}:
  [1,2]  
  Array[[3],Array[[4],[6,7]]]

 Thanks

 On Sunday, May 17, 2015 at 12:59:28 PM UTC-7, Mauro wrote:

 This works: 

 julia> Array[[1,2],Array[[3],[4,5]]] 
 2-element Array{Array{T,N},1}: 
  [1,2]   
  Array[[3],[4,5]] 


However, the type is incompletely specified (as given away by Array{T,N}, 
where T and N are parameters that have not been specified).

An alternative, that seems to me neater, is to specify that the type stored 
in the array be Any -- literally, the elements of the array can be of any 
type:

Any[ [3], [4,5] ] 

Any[ [3], Any[ [4], [6,7] ] ]

This prints out different things on v0.3 and v0.4, but I believe it has the 
same result (and will not be affected by the change to concatenation 
referred to by Mauro).

It's perhaps more readable to build it up one element at a time:

L = Any[ ]
push!(L, [3])
push!(L, Any[ [4], [6,7] ])

David.


 


 (BTW, automatic concatenation is being changed in 0.4 and 0.5) 

 On Sun, 2015-05-17 at 21:41, Kuba Roth kuba...@gmail.com wrote: 
  I've seen similar replies to my question on the forum but I still can't 
  figure out how to initialize nested arrays of a depth higher then 2? 
  
  For instance this works fine: 
  julia> Array[[1,2],[3,4]] 
  2-element Array{Array{T,N},1}: 
   [1,2] 
   [3,4] 
  
  
  But the following does not return what I expect. The [[3],[4,5]] part 
 is 
  flattened into a single array. 
  julia> Array[[1,2],[[3],[4,5]]]   
  2-element Array{Array{T,N},1}: 
   [1,2]   
   [3,4,5] 
  
  
  So far my workaround is to store the intermediate result and then 
 assign it 
  to the final array. Can that be shortened to a single line? 
  b = Array[[3],[4,5]]   
  julia> Array[[1,2],b] 
  2-element Array{Array{T,N},1}: 
   [1,2]   
   Array[[3],[4,5]] 
  
  
  Thank you. 



Re: [julia-users] llvmcall printf

2015-05-17 Thread andrew cooke

more exactly, i think it's called llvm.x86.pclmulqdq and it may be possible 
to call it from ccall, as that seems to support llvm intrinsics (i am still 
reading around and am busy next week, so will probably try in about a 
week's time).  andrew

On Sunday, 17 May 2015 15:42:55 UTC-3, andrew cooke wrote:




 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150126/255137.html

 On Sunday, 17 May 2015 15:23:31 UTC-3, Isaiah wrote:

 What instruction are you trying to call?

 On Sun, May 17, 2015 at 11:21 AM, andrew cooke and...@acooke.org wrote:


 ah, thanks - that issue is a huge help.

 On Sunday, 17 May 2015 11:38:20 UTC-3, Isaiah wrote:

 i am going to look at the implementing code and work out what is 
 generated, i think, and then ask llvmdev for help.


 You should start with lli and make sure that you are writing the IR 
 correctly; if it works in lli, then the issue is with Julia (as is most 
 likely -- llvmcall is kind of brittle). 
  

 here's something i don't understand about the context in which the 
 llvm ir is inserted.  declare instructions don't seem to be accepted, 
 for 
 example.


 See
 https://github.com/JuliaLang/julia/pull/8740


 On Sun, May 17, 2015 at 10:17 AM, andrew cooke and...@acooke.org 
 wrote:

 thanks, but the reason i need lvmcall is because i want to call a 
 specific intel instruction (which llvm actually supports - i've found the 
 patch for it).  but before i do that i am trying to just get something 
 to 
 work, and printf seemed like a good intermediate goal.

 i am going to look at the implementing code and work out what is 
 generated, i think, and then ask llvmdev for help.  there's something i 
 don't understand about the context in which the llvm ir is inserted.  
 declare instructions don't seem to be accepted, for example.  once i 
 can 
 pin that down i think i can ask a sensible question on llvmdev...

 cheers,
 andrew


 On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org 
 wrote: 
  
  Does anyone have a working example that calls printf via llvmcall? 
  
  I realise I'm uncomfortably inbetween llvmdev and julia-users, but 
 I'm 
  asking here first because I suspect my limitations are still more 
  julia-related. 
  
  In particular, 
  
  julia> g() = Base.llvmcall( 
   """call i32 (i8*, ...)* @printf(i8* c"hello world\00") 
   ret""", 
   Void, Tuple{}) 
  g (generic function with 2 methods) 
  
  julia> g() 
  ERROR: error compiling g: Failed to parse LLVM Assembly: 
  julia: llvmcall:3:35: error: expected string 
  call i32 (i8*, ...)* @printf(i8* chello world 
^ 
  
  seems like it's *almost* there...? 
  
  Thanks, 
  Andrew 
  
  (I suspect I also need something other than @printf, like 
  IntrinsicsX86.printf or something, but I can't find where I saw an 
 example 
  like that...  Related, declare doesn't seem to be accepted, or 
 assignment to 
  global vsariables.  But I am completely new to all this...) 
  

 I was also interested in knowing how to use `llvmcall` in general, but 
 at least for this limited case (and I guess you probably know 
 already), it is easier to use `ccall`: 

 ```julia 
 julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n") 
 hellow world 
 13 
 ``` 





[julia-users] SSL certification problem

2015-05-17 Thread Aleksander
Hello!
I recently deinstalled JuliaStudio and started to use the Juno IDE. 
However, there appeared the problem that scripts which use Plotly started 
to throw an error: `Error executing request: Couldn't resolve host name in 
exec_as_multi at HTTPC.jl:702 ...` 
after a command like 
response = Plotly.plot([data1, data2], ["filename" => fn, "fileopt" => 
"overwrite"])

I'm almost certain that it is caused by problems with SSL certificates in 
my system, but I don't know how to solve it.

Info: system - Win7; Juno is not installed and works like a portable 
application.




Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Tim Holy
To clarify, there were actually two issues: one thing that may not be clear is 
that
 elapsed time: 0.23610454 seconds (1696 bytes allocated)

tells you how many bytes were allocated, but it omits mentioning that most/all 
of those were (or will be) freed. In other words, this was _not_ symptomatic 
of a leak---it was just a message whose meaning could have been clearer. That 
behavior just changed in https://github.com/JuliaLang/julia/pull/11186. 
Hopefully it will be clearer going forward.

The issue that Yichao mentioned was more subtle and a much bigger problem, but 
I don't think this is what you were noticing, Mohammed. That more serious 
issue seems to be fixed by https://github.com/JuliaLang/julia/pull/11314

--Tim

On Sunday, May 17, 2015 05:53:14 PM Yichao Yu wrote:
 On Sun, May 17, 2015 at 5:05 PM, Mohammed El-Beltagy
 
 mohammed.elbelt...@gmail.com wrote:
  Many thanks Milan and Yichao, this was very informative. I am also
  delighted that I helped in a very  small way expose what appears to be a
  problem with memory leakage.
 
 It was actually much worse than a memory leakage. It was actually
 freeing memory that is in use. (AFAICT, given how a GC works, it
 usually won't leak anything when it fires, but it can free something
 by mistake if the code that uses it is badly written.)
 See the explanation in the comment on this issue[1] for why GC roots (and
 friends) are important.
 
 [1] https://github.com/JuliaLang/julia/pull/11190#issuecomment-100066267
 
  I love this community!
  
  On Sunday, May 17, 2015 at 7:51:59 PM UTC+2, Yichao Yu wrote:
  On Sun, May 17, 2015 at 12:52 PM, Milan Bouchet-Valat nali...@club.fr
  
  wrote:
   Le dimanche 17 mai 2015 à 09:25 -0700, Mohammed El-Beltagy a écrit :
   
   You are quite right about the type assertions and that @inbounds would
   certainly speed things up.
   However, I am concerned here with how memory was being allocated. I
   wish
   that somebody who is familiar with DataArray would explain this
   behavior.
   
   That's a known design issue with DataArrays, and the reason why John
   Myles
   White has started working on Nullable and NullableArrays to replace
   them. As
  
  Didn't know about this part of the story.
  
  P.S. your example leads me to hit
  https://github.com/JuliaLang/julia/issues/11313 . Thank you for
  exposing it
  
   Yichao noted, []/getindex is type-unstable for DataArrays as it can
   return
   NA, and this kills performance in Julia.
   
   To improve performance, you can access the internals of the DataArray,
   doing
   something like:
   
   function countGT(x::DataArray{Float64,1})
   
   count=0.0
   for i=1:length(x)
   
   if !isna(x, i)
   
    count += (x.data[i] > 5.0) ? 1.0 : 0.0
   
   end
   
   end
   count
   
   end
   
   Always write isna(x, i) instead of isna(x[i]), since the latter suffers
   from
   type instability.
   
   Regards
   
   
   
   On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote:
   
   On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy
   
   mohammed@gmail.com wrote:
    Today, while trying to optimize a piece of code, I came across rather 
    curious memory-allocation behavior when accessing a DataArray.
   
   x=rand(1:10,100);
   function countGT(x::Array{Int,1})
   
   Since the algorithm is the same for both types, I think you don't need
   the type assert here. Julia will automatically specialize on the type
   you pass in.
   
   count=0
   for i=1:length(x)
   
  count += (x[i] > 5) ? 1 : 0
   
   add `@inbounds` here will improve the performance for `Array`. Not
   sure if it can help with `DataArray` yet though.
   
   end
   count
   
   end
   
   Here is what you get after running @time (compilation excluded)
   
   @time countGT(x);
   elapsed time: 0.00847156 seconds (96 bytes allocated)
   
    That is not too bad. @time at least allocated 80 bytes, and the extra 16 
    bytes are for creating the variable count; so far so good. 
    Now let's see if we do the same with a floating point array.
   x=rand(100);
   function countGT(x::Array{Float64,1})
   
   count=0.0
   for i=1:length(x)
   
  count += (x[i] > 5.0) ? 1.0 : 0.0
   
   end
   count
   
   end
   
   countGT(x)
   @time countGT(x)
   
   You get
   elapsed time: 0.00177126 seconds (96 bytes allocated)
    Which is still pretty good. Now, the problem starts to show up when I have 
    a DataArray:
   x=@data rand(100);
   function countGT(x::DataArray{Float64,1})
   
   count=0.0
   for i=1:length(x)
   
  count += (x[i] > 5.0) ? 1.0 : 0.0
   
   end
   count
   
   end
   
   `getindex` of DataArray appears to be not type stable. It returns
   either `NAType` or the data type. I think this is probably the reason
   for the allocation.
   
   countGT(x)
   @time countGT(x)
   
    Then we get
   elapsed time: 0.23610454 seconds (1696 bytes 

Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Yichao Yu
On Sun, May 17, 2015 at 6:06 PM, Tim Holy tim.h...@gmail.com wrote:
 To clarify, there were actually two issues: one thing that may not be clear is
 that
 elapsed time: 0.23610454 seconds (1696 bytes allocated)

 tells you how many bytes were allocated, but it omits mentioning that most/all
 of those were (or will be) freed. In other words, this was _not_ symptomatic
 of a leak---it was just a message whose meaning could have been clearer. That
 behavior just changed in https://github.com/JuliaLang/julia/pull/11186.
 Hopefully it will be clearer going forward.

 The issue that Yichao mentioned was more subtle and a much bigger problem, but
 I don't think this is what you were noticing, Mohammed. That more serious
 issue seems to be fixed by https://github.com/JuliaLang/julia/pull/11314

Thanks for the clarification. That's exactly what I meant.

Sorry for the confusion =(


 --Tim

 On Sunday, May 17, 2015 05:53:14 PM Yichao Yu wrote:
 On Sun, May 17, 2015 at 5:05 PM, Mohammed El-Beltagy

 mohammed.elbelt...@gmail.com wrote:
  Many thanks Milan and Yichao, this was very informative. I am also
  delighted that I helped in a very  small way expose what appears to be a
  problem with memory leakage.

 It was actually much worse than a memory leakage. It was actually
 freeing memory that is in use. (AFAICT, given how a GC works, it
 usually won't leak anything when it fires, but it can free something
 by mistake if the code that uses it is badly written.)
 See the explanation in the comment on this issue[1] for why GC roots (and
 friends) are important.

 [1] https://github.com/JuliaLang/julia/pull/11190#issuecomment-100066267

  I love this community!
 
  On Sunday, May 17, 2015 at 7:51:59 PM UTC+2, Yichao Yu wrote:
  On Sun, May 17, 2015 at 12:52 PM, Milan Bouchet-Valat nali...@club.fr
 
  wrote:
   Le dimanche 17 mai 2015 à 09:25 -0700, Mohammed El-Beltagy a écrit :
  
   You are quite right about the type assertions and that @inbounds would
   certainly speed things up.
   However, I am concerned here with how memory was being allocated. I
   wish
   that somebody who is familiar with DataArray would explain this
   behavior.
  
   That's a known design issue with DataArrays, and the reason why John
   Myles
   White has started working on Nullable and NullableArrays to replace
   them. As
 
  Didn't know about this part of the story.
 
  P.S. your example leads me to hit
  https://github.com/JuliaLang/julia/issues/11313 . Thank you for
  exposing it
 
   Yichao noted, []/getindex is type-unstable for DataArrays as it can
   return
   NA, and this kills performance in Julia.
  
   To improve performance, you can access the internals of the DataArray,
   doing
   something like:
  
   function countGT(x::DataArray{Float64,1})
  
   count=0.0
   for i=1:length(x)
  
   if !isna(x, i)
  
    count += (x.data[i] > 5.0) ? 1.0 : 0.0
  
   end
  
   end
   count
  
   end
  
   Always write isna(x, i) instead of isna(x[i]), since the latter suffers
   from
   type instability.
  
   Regards
  
  
  
   On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote:
  
   On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy
  
   mohammed@gmail.com wrote:
    Today, while trying to optimize a piece of code, I came across rather
    curious memory-allocation behavior when accessing a DataArray.
  
   x=rand(1:10,100);
   function countGT(x::Array{Int,1})
  
   Since the algorithm is the same for both types, I think you don't need
   the type assert here. Julia will automatically specialize on the type
   you pass in.
  
   count=0
   for i=1:length(x)
  
  count += (x[i] > 5) ? 1 : 0
  
   add `@inbounds` here will improve the performance for `Array`. Not
   sure if it can help with `DataArray` yet though.
  
   end
   count
  
   end
  
   Here is what you get after running @time (compilation excluded)
  
   @time countGT(x);
   elapsed time: 0.00847156 seconds (96 bytes allocated)
  
    That is not too bad. @time at least allocated 80 bytes, and the extra 16
    bytes are for creating the variable count; so far so good.
    Now let's see if we do the same with a floating point array.
   x=rand(100);
   function countGT(x::Array{Float64,1})
  
   count=0.0
   for i=1:length(x)
  
  count += (x[i] > 5.0) ? 1.0 : 0.0
  
   end
   count
  
   end
  
   countGT(x)
   @time countGT(x)
  
   You get
   elapsed time: 0.00177126 seconds (96 bytes allocated)
    Which is still pretty good. Now, the problem starts to show up when I have
    a DataArray:
   x=@data rand(100);
   function countGT(x::DataArray{Float64,1})
  
   count=0.0
   for i=1:length(x)
  
  count += (x[i] > 5.0) ? 1.0 : 0.0
  
   end
   count
  
   end
  
   `getindex` of DataArray appears to be not type stable. It returns
   either `NAType` or the data type. I think this is probably the reason
   for the allocation.
  
   countGT(x)
   @time 

Re: [julia-users] llvmcall printf

2015-05-17 Thread Jameson Nash
In many cases, you can write generic code and LLVM will attempt to match it to
the optimal instruction sequence. (To your second question: no, ccall does
not substantially support LLVM intrinsics, although LLVM does intercept
some stdlib calls and replace them with intrinsics.)

julia f(x,y) = widemul(x,y)
f (generic function with 1 method)

julia code_llvm(f, (Int64,Int64))
define i128 @julia_f_20526(i64, i64) {
top:
  %2 = sext i64 %0 to i128
  %3 = sext i64 %1 to i128
  %4 = mul i128 %3, %2
  ret i128 %4
}

julia code_native(f,(Int64,Int64))
.section __TEXT,__text,regular,pure_instructions
Filename: none
Source line: 1
pushq %rbp
movq %rsp, %rbp
Source line: 1
movq %rsi, %rax
imulq %rdi
popq %rbp
ret
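
Another instance of the same pattern, assuming an x86-64 CPU with the POPCNT feature (a sketch, not from the original thread): the generic `count_ones` lowers to the `llvm.ctpop` intrinsic and, from there, to a single instruction.

```julia
# Generic Julia code that LLVM matches to a hardware intrinsic.
g(x) = count_ones(x)          # population count

# Inspect the generated code; with POPCNT available the body is
# essentially a single popcntq instruction.
code_llvm(g, (Int64,))        # shows a call to @llvm.ctpop.i64
code_native(g, (Int64,))
```

So before reaching for `llvmcall`, it is worth checking whether a generic Base function already compiles to the instruction you want.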


On Sun, May 17, 2015 at 8:33 PM andrew cooke and...@acooke.org wrote:


 more exactly, i think it's called llvm.x86.pclmulqdq and it may be
 possible to call it from ccall, as that seems to support llvm intrinsics (i
 am still reading around and am busy next week, so will probably try in
 about a week's time).  andrew


 On Sunday, 17 May 2015 15:42:55 UTC-3, andrew cooke wrote:




 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150126/255137.html

 On Sunday, 17 May 2015 15:23:31 UTC-3, Isaiah wrote:

 What instruction are you trying to call?

 On Sun, May 17, 2015 at 11:21 AM, andrew cooke and...@acooke.org
 wrote:


 ah, thanks - that issue is a huge help.

 On Sunday, 17 May 2015 11:38:20 UTC-3, Isaiah wrote:

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.


 You should start with lli and make sure that you are writing the IR
 correctly; if it works in lli, then the issue is with Julia (as is most
 likely -- llvmcall is kind of brittle).


 here's something i don't understand about the context in which the
 llvm ir is inserted.  declare instructions don't seem to be accepted, 
 for
 example.


 See
 https://github.com/JuliaLang/julia/pull/8740


 On Sun, May 17, 2015 at 10:17 AM, andrew cooke and...@acooke.org
 wrote:

 thanks, but the reason i need lvmcall is because i want to call a
 specific intel instruction (which llvm actually supports - i've found the
 patch for it).  but before i do that i am trying to just get something 
 to
 work, and printf seemed like a good intermediate goal.

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.  there's something i
 don't understand about the context in which the llvm ir is inserted.
 declare instructions don't seem to be accepted, for example.  once i 
 can
 pin that down i think i can ask a sensible question on llvmdev...

 cheers,
 andrew


 On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org
 wrote:
 
  Does anyone have a working example that calls printf via llvmcall?
 
  I realise I'm uncomfortably inbetween llvmdev and julia-users, but
 I'm
  asking here first because I suspect my limitations are still more
  julia-related.
 
  In particular,
 
  julia g() = Base.llvmcall(
   call i32 (i8*, ...)* @printf(i8* chello world\00)
   ret,
   Void, Tuple{})
  g (generic function with 2 methods)
 
  julia g()
  ERROR: error compiling g: Failed to parse LLVM Assembly:
  julia: llvmcall:3:35: error: expected string
  call i32 (i8*, ...)* @printf(i8* chello world
^
 
  seems like it's *almost* there...?
 
  Thanks,
  Andrew
 
  (I suspect I also need something other than @printf, like
  IntrinsicsX86.printf or something, but I can't find where I saw an
 example
  like that...  Related, declare doesn't seem to be accepted, or
 assignment to
  global vsariables.  But I am completely new to all this...)
 

 I was also interested in knowing how to use `llvmcall` in general, but
 at least for this limited case (and I guess you probably know
 already), it is easier to use `ccall`:

 ```julia
 julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n")
 hellow world
 13
 ```






[julia-users] [ANN] SparseVectors.jl

2015-05-17 Thread Dahua Lin
Dear all,

I am pleased to announce a new package SparseVectors: 
https://github.com/lindahua/SparseVectors.jl

Sparse data has become increasingly common in machine learning and related 
areas. For example, in document analysis, each document is often 
represented as a sparse vector, where each entry represents the number of 
occurrences of a certain word. However, support for sparse vectors 
remains quite limited in Julia Base.

This package provides two types SparseVector and SparseVectorView and a 
series of methods to work with sparse vectors. Specifically, this package 
provides the following functionalities:

   - Construction of sparse vectors
   - Get a view of a column in a sparse matrix (of CSC format), or a view 
   of a range of columns.
   - Specialized arithmetic functions on sparse vectors, e.g. +, -, *, etc.
   - Specialized reduction functions on sparse vectors, e.g. sum, vecnorm, 
   etc.
   
It is also worth noting that there's been some discussion (see 
https://github.com/JuliaLang/julia/pull/8718) of adding SparseVector to 
Julia Base. Hopefully, this can happen in 0.4. Even if it does, the 
package remains useful, as it also works with Julia 0.3 (which won't have 
SparseVector support).

Best,
Dahua
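
A quick usage sketch, based on my reading of the package README at the time (the constructor signature is an assumption and may differ between versions):

```julia
using SparseVectors

# A length-10 vector with nonzeros at indices 2, 5 and 9.
x = SparseVector(10, [2, 5, 9], [1.0, 2.5, -0.5])

# Specialized methods operate only on the stored entries:
sum(x)        # sums just the three nonzeros
vecnorm(x)
y = 2.0 * x   # scaling keeps the result sparse
```

The point of the specialized arithmetic and reductions is that they never touch the implicit zeros, so cost scales with the number of stored entries rather than the full length.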




Re: [julia-users] known, working, recent examples for Clang.jl?

2015-05-17 Thread Tim Holy
I don't know if it's currently working, but CUDArt has a Clang script that was 
working a couple of months ago.

--Tim

On Sunday, May 17, 2015 02:46:38 AM Andreas Lobinger wrote:
 Hello colleagues,
 
 I seem to have problems with Clang.jl (master, on 0.4-dev). The listed and
 included examples give me all kinds of 'deprecated' or missing warnings and
 errors. I recognize that the underlying technology is not really easy
 to understand; still, I'm looking for an out-of-the-box running example.
 
 Wishing a happy day,
Andreas



Re: [julia-users] Re: how to replace , to . in Array{Any,2}:

2015-05-17 Thread Tim Holy
You might not realize it, but replace is a function, and that's what napisał 
was referring to.

--Tim

On Sunday, May 17, 2015 01:03:30 AM paul analyst wrote:
I see. Any hints?
 Paul
 
 On Sunday, 17 May 2015 at 01:28:50 UTC+2, ele...@gmail.com napisał (wrote):
  I don't think replace broadcasts, you have to write a loop applying
  replace to each string at a time.
  
  On Sunday, May 17, 2015 at 4:40:05 AM UTC+10, paul analyst wrote:
  I have a file with decimal separator like ","
  x=readdlm("x.txt",'\t')
  
  julia> x=x[2:end,:]
  6390x772 Array{Any,2}:
  
  some columns look like:
  julia> x[:,69]
  
  6390-element Array{Any,1}:
   0.0
   
0,33
0,72
1,09
0,95
3,57
2,27
2,42
  
  julia> replace(x[:,69],',','.')
  ERROR: `replace` has no method matching replace(::Array{Any,1}, ::Char,
  
  ::Char)
  
  julia> replace(string(x[:,69],',','.'))
  ERROR: `replace` has no method matching replace(::ASCIIString)
  
  
  how to replace "," with "."?
  
  I can replace all "," with "."
  
  Paul
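
For completeness, the loop form suggested earlier in the thread might look like this (a sketch; it mutates `x` in place and only touches string cells -- `AbstractString` on 0.4, plain `String` on 0.3):

```julia
# Replace the decimal comma cell-by-cell over the whole Any matrix.
for j = 1:size(x, 2), i = 1:size(x, 1)
    if isa(x[i, j], AbstractString)   # numeric cells have no comma
        x[i, j] = replace(x[i, j], ',', '.')
    end
end
```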



[julia-users] Re: Julia Summer of Code

2015-05-17 Thread Páll Haraldsson

I saw an interesting project:
https://github.com/JuliaLang/julialang.github.com/blob/master/gsoc/2015/index.md#project-simple-persistent-distributed-storage
This project proposes to implement a very simple persistent storage 
mechanism for Julia variables so that data can be saved to and loaded from 
disk with a consistent interface that is agnostic of the underlying storage 
layer.

[time-stamped versioning - meaning? Schema evolution was one purported 
downside to:]

This SoC project may or may not be necessary. That is, it could be agnostic and just 
a wrapper/API. What about implementing (the very small system) Prevayler in 
Julia? I've seen it ported to some languages, but since I heard about it in 
2001, I can't say it has taken the world by storm (I do not know anyone who 
uses it..). Business types and banks, etc. are very conservative and cling 
to SQL..

https://en.wikipedia.org/wiki/Prevayler
Prevayler requires enough RAM to keep the entire system state.

This was perceived as a drawback at the time.. Lots of databases fit in RAM now 
(and will in the future). This seems to be the requirement in HPC anyway. You do not 
want to work on virtual memory..

Is there a reason Prevayler can't be done in Julia? I do not think so. 
Julia has types, not OO in the conventional sense, and I'm not sure that's an 
issue. If I recall correctly, the only requirement was that objects must be 
serializable. I only know it's not appropriate for PHP (because of no state).
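
On the serialization point: Julia's built-in `serialize`/`deserialize` (in Base on 0.3/0.4; `using Serialization` on later versions) already covers the mechanics. A minimal, purely illustrative snapshot sketch -- a real Prevayler-style system would log commands and replay them on startup, not just dump snapshots:

```julia
# Persist a Julia value to disk and load it back (illustrative only).
state = Dict("accounts" => [100.0, 250.5], "version" => 1)

open("snapshot.bin", "w") do io
    serialize(io, state)     # write the whole in-memory state
end

restored = open(deserialize, "snapshot.bin")
restored == state            # true: round-trip preserves the value
```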

At least look at this article from the horse's mouth:

http://www.advogato.org/article/398.html
Transparent Persistence, Fault-Tolerance and Load-Balancing for Java 
Systems.

Orders of magnitude FASTER and SIMPLER than a traditional DBMS. No pre or 
post-processing required, no weird proprietary VM required, no base-class 
inheritance or clumsy interface definition required: just PLAIN JAVA CODE.
[..]
Question: RAM is getting cheaper every day. Researchers are announcing 
major breakthroughs in memory technology. Even today, servers with 
multi-gigabyte RAM are commonplace. For many systems, it's already feasible 
to keep all business objects in RAM. Why can't I simply do that and forget 
all the database hassle?

Answer: You can, actually.
[..]
If all my objects stay in RAM, will I be able to use SQL-based tools to 
query my objects' attributes?

No. You will be able to use object-based tools. The good news is you will 
no longer be breaking your objects' encapsulation.


I thought it might be a dead project but seems not:

https://github.com/jsampson/prevayler/tree/master
authored on Aug 12, 2014


There was some controversy at the time:

http://c2.com/cgi/wiki?ThePrevayler
Orders of magnitude faster and simpler than a traditional database. Write 
plain java classes (albeit in accordance with a CommandPattern and a few 
constraints): [..] no inheritance from base-class required. Clear 
documentation and demo included.
 3251 times faster than MySQL.
 9983 times faster than ORACLE.

Aren't you simply dropping all code from DBMs and stuffing it all on the 
application?
No. I have seen prevalent systems that were thousands of lines of code. 
Most DBMs are hundreds of thousands of lines of code. --KlausWuestefeld

[This one I didn't know..:]
KentBeck, [..] and KlausWuestefeld paired up for a weekend in december 
2002, on the island of Florianopolis, Brazil, and implemented Florypa, a 
minimal prevalence layer for Smalltalk (VisualWorks) based on Prevayler.


After having tested Prevayler (and Prevayler-like IMDBs) we came to this 
conclusion: It is useful for prototyping, but fails miserably in terms of 
a lot of OODBMS/RDBMS issues.

In fact, we concluded that when all the problems with Prevayler was solved, 
we had an object-oriented database.

* Sounds like an OOP version of GreencoddsTenthRuleOfProgramming.


http://c2.com/cgi/wiki?GreencoddsTenthRuleOfProgramming
A database-oriented twist on GreenspunsTenthRuleOfProgramming in which, 
Every sufficiently complex application/language/tool will either have to 
use a database or reinvent one the hard way. I don't know who originally 
said it, so I combined DrCodd's name with Greenspuns'. Ironically, even 
Lisp, the original greenspun language, reinvents one by having an 
internal complex data structure (nested list in byte-code form) that takes 
on database-like characteristics.

http://c2.com/cgi/wiki?GreenspunsTenthRuleOfProgramming
Any sufficiently complicated C or Fortran program contains an ad-hoc, 
informally-specified, bug-ridden, slow implementation of half of 
CommonLisp. As Julia has Lisp-like macros, and is considered a Lisp by some, I 
think this rule may not apply too well to Julia..



[julia-users] Re: Gadfly: Geom.smooth and Scale.x_discrete (labels) conflict

2015-05-17 Thread mbob
Scale.x_continuous (labels) *does* allow one to use Geom.smooth and get 
string (date) labels at the same time.

Sorry for an unnecessary post!
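
Put together, the working combination might look like the following sketch (the data and the `dayofyeartostring` formatter are made up for illustration):

```julia
using Gadfly

days = collect(1.0:30.0)            # day-of-year numbers (plain Floats)
vals = cumsum(randn(30))

# Hypothetical label function: day number -> date string.
dayofyeartostring(d) = string("day ", round(Int, d))

# Geom.smooth gets the numeric x values it needs, while the axis
# displays the formatted strings.
plot(x=days, y=vals, Geom.point, Geom.smooth,
     Scale.x_continuous(labels=dayofyeartostring))
```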



On Saturday, May 16, 2015 at 8:58:43 PM UTC-6, mbob wrote:

 Geom.smooth and Scale.x_discrete (labels) don't seem to play nicely 
 together.

 Geom.smooth requires  x (and y) to be bound to an array of plain numbers.

 So, if the x values are an array of dates, one needs to convert these 
 dates to (for example) day of year numbers, and Geom.smooth will be happy.

 But, if you don't want to use these day of year numbers as the x labels, 
 and would prefer to use date strings for labels, then the Scale.x_discrete 
 (labels= dayofyeartostring) option works, where dayofyeartostring is a 
 user-supplied function to output a desired string representation of the 
 date.

 But you can't do both of these things at the same time! (At least I can't.)

 Geom.smooth seems to try to use the labels (i.e., strings) that 
 Scale.x_discrete (labels) produces -- and then complains that these values 
 aren't plain numbers.

 Is it possible to use Geom.smooth when the x values are essentially dates, 
 and to use string date labels for the x-values at the same time?

 Thanks!

 (Please forgive me if this topic has been covered somewhere -- I couldn't 
 find it addressed.)



Re: [julia-users] Unnecessary Memory allocation when iterating through a DataArray

2015-05-17 Thread Mohammed El-Beltagy
Many thanks Milan and Yichao, this was very informative. I am also 
delighted that I helped in a very  small way expose what appears to be a 
problem with memory leakage. 
I love this community!

On Sunday, May 17, 2015 at 7:51:59 PM UTC+2, Yichao Yu wrote:

 On Sun, May 17, 2015 at 12:52 PM, Milan Bouchet-Valat nali...@club.fr 
 wrote: 
  Le dimanche 17 mai 2015 à 09:25 -0700, Mohammed El-Beltagy a écrit : 
  
  You are quite right about the type assertions and that @inbounds would 
  certainly speed things up. 
  However, I am concerned here with how memory was being allocated. I wish 
  that somebody who is familiar with DataArray would explain this 
 behavior. 
  
  That's a known design issue with DataArrays, and the reason why John 
 Myles 
  White has started working on Nullable and NullableArrays to replace 
 them. As 

 Didn't know about this part of the story. 

 P.S. your example leads me to hit 
 https://github.com/JuliaLang/julia/issues/11313 . Thank you for 
 exposing it 

  Yichao noted, []/getindex is type-unstable for DataArrays as it can 
 return 
  NA, and this kills performance in Julia. 
  
  To improve performance, you can access the internals of the DataArray, 
 doing 
  something like: 
  
  function countGT(x::DataArray{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
  if !isna(x, i) 
   count += (x.data[i] > 5.0) ? 1.0 : 0.0 
  end 
  end 
  count 
  
  end 
  
  Always write isna(x, i) instead of isna(x[i]), since the latter suffers 
 from 
  type instability. 
  
  Regards 
  
  
  
  On Sunday, May 17, 2015 at 6:12:11 PM UTC+2, Yichao Yu wrote: 
  
  On Sun, May 17, 2015 at 11:28 AM, Mohammed El-Beltagy 
  mohammed@gmail.com wrote: 
  Today, while trying to optimize a piece of code, I came across rather curious 
  memory-allocation behavior when accessing a DataArray. 
  
  x=rand(1:10,100); 
  function countGT(x::Array{Int,1}) 
  
  Since the algorithm is the same for both types, I think you don't need 
  the type assert here. Julia will automatically specialize on the type 
  you pass in. 
  
  count=0 
  for i=1:length(x) 
 count += (x[i] > 5) ? 1 : 0 
  
  add `@inbounds` here will improve the performance for `Array`. Not 
  sure if it can help with `DataArray` yet though. 
  
  end 
  count 
  end 
  
  Here is what you get after running @time (compilation excluded) 
  
  @time countGT(x); 
  elapsed time: 0.00847156 seconds (96 bytes allocated) 
  
   That is not too bad. @time at least allocated 80 bytes, and the extra 16 
   bytes are for creating the variable count; so far so good. 
   Now let's see if we do the same with a floating point array. 
  x=rand(100); 
  function countGT(x::Array{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
 count += (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 
  
  countGT(x) 
  @time countGT(x) 
  
  You get 
  elapsed time: 0.00177126 seconds (96 bytes allocated) 
   Which is still pretty good. Now, the problem starts to show up when I have 
   a DataArray: 
  x=@data rand(100); 
  function countGT(x::DataArray{Float64,1}) 
  count=0.0 
  for i=1:length(x) 
 count += (x[i] > 5.0) ? 1.0 : 0.0 
  end 
  count 
  end 
  
  `getindex` of DataArray appears to be not type stable. It returns 
  either `NAType` or the data type. I think this is probably the reason 
  for the allocation. 
  
  
  countGT(x) 
  @time countGT(x) 
  
   Then we get 
  elapsed time: 0.23610454 seconds (1696 bytes allocated) 
  
   The bytes allocated seem to scale with the size of the DataArray. So it 
   seems that the mere act of accessing an element in a DataArray allocates 
   memory. 
   
   I am wondering if there could be a better way. 
  
  
  
   I'm not familiar with DataArrays and its API, but I would guess it can 
   use Nullable or something similar. 
  
  
  
  
  
  
  
  

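The instability Yichao points out can be inspected directly on 0.4 (a sketch; DataArrays' internals and display may have changed since):

```julia
using DataArrays

x = @data rand(100)
# getindex can return either Float64 or NAType, so the inferred return
# type is a Union -- which is what forces boxing/allocation in hot loops.
@code_warntype x[1]
```

Indexing through `x.data` (or using `isna(x, i)` as suggested above) avoids the Union return and keeps the loop allocation-free.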


[julia-users] known, working, recent examples for Clang.jl?

2015-05-17 Thread Andreas Lobinger
Hello colleagues,

I seem to have problems with Clang.jl (master, on 0.4-dev). The listed and 
included examples give me all kinds of 'deprecated' or missing warnings and 
errors. I recognize that the underlying technology is not really easy 
to understand; still, I'm looking for an out-of-the-box running example.

Wishing a happy day,
   Andreas


[julia-users] Re: how to replace , to . in Array{Any,2}:

2015-05-17 Thread wildart
map(e->replace(string(e), ',', '.'), x)
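
To go one step further and get `Float64`s rather than strings (a sketch; `float(::String)` works on 0.3, `parse(Float64, s)` on 0.4):

```julia
# Replace the decimal comma, then parse each cell to Float64.
col  = x[:, 69]
vals = Float64[float(replace(string(e), ',', '.')) for e in col]
```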

On Saturday, May 16, 2015 at 2:40:05 PM UTC-4, paul analyst wrote:

  I have a file with decimal separator like ","
  x=readdlm("x.txt",'\t')

  julia> x=x[2:end,:]
  6390x772 Array{Any,2}:

  some columns look like:
  julia> x[:,69]
 6390-element Array{Any,1}:
  0.0
   0,33
   0,72
   1,09
   0,95
   3,57
   2,27
   2,42



  julia> replace(x[:,69],',','.')
 ERROR: `replace` has no method matching replace(::Array{Any,1}, ::Char, 
 ::Char)

  julia> replace(string(x[:,69],',','.'))
 ERROR: `replace` has no method matching replace(::ASCIIString)


  how to replace "," with "."?

  I can replace all "," with "."

 Paul




Re: [julia-users] llvmcall printf

2015-05-17 Thread Isaiah Norton
What instruction are you trying to call?

On Sun, May 17, 2015 at 11:21 AM, andrew cooke and...@acooke.org wrote:


 ah, thanks - that issue is a huge help.

 On Sunday, 17 May 2015 11:38:20 UTC-3, Isaiah wrote:

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.


 You should start with lli and make sure that you are writing the IR
 correctly; if it works in lli, then the issue is with Julia (as is most
 likely -- llvmcall is kind of brittle).


 here's something i don't understand about the context in which the llvm
 ir is inserted.  declare instructions don't seem to be accepted, for
 example.


 See
 https://github.com/JuliaLang/julia/pull/8740


 On Sun, May 17, 2015 at 10:17 AM, andrew cooke and...@acooke.org wrote:

  thanks, but the reason i need llvmcall is because i want to call a
 specific intel instruction (which llvm actually supports - i've found the
 patch for it).  but before i do that i am trying to just get something to
 work, and printf seemed like a good intermediate goal.

 i am going to look at the implementing code and work out what is
 generated, i think, and then ask llvmdev for help.  there's something i
 don't understand about the context in which the llvm ir is inserted.
 declare instructions don't seem to be accepted, for example.  once i can
 pin that down i think i can ask a sensible question on llvmdev...

 cheers,
 andrew


 On Saturday, 16 May 2015 22:01:25 UTC-3, Yichao Yu wrote:

 On Sat, May 16, 2015 at 5:01 PM, andrew cooke and...@acooke.org
 wrote:
 
  Does anyone have a working example that calls printf via llvmcall?
 
   I realise I'm uncomfortably in between llvmdev and julia-users, but I'm 
   asking here first because I suspect my limitations are still more 
   Julia-related. 
 
  In particular,
 
  julia> g() = Base.llvmcall( 
   """call i32 (i8*, ...)* @printf(i8* c"hello world\00") 
   ret""", 
   Void, Tuple{}) 
  g (generic function with 2 methods)
 
  julia> g() 
  ERROR: error compiling g: Failed to parse LLVM Assembly: 
  julia: llvmcall:3:35: error: expected string 
  call i32 (i8*, ...)* @printf(i8* c"hello world 
^ 
 
  seems like it's *almost* there...?
 
  Thanks,
  Andrew
 
  (I suspect I also need something other than @printf, like 
  IntrinsicsX86.printf or something, but I can't find where I saw an 
  example like that...  Related, declare doesn't seem to be accepted, or 
  assignment to global variables.  But I am completely new to all this...) 
 

  I was also interested in knowing how to use `llvmcall` in general, but 
  at least for this limited case (and I guess you probably know 
  already) it is easier to use `ccall` 

 ```julia
  julia> ccall(:printf, Int, (Ptr{Cchar},), "hellow world\n")
 hellow world
 13
 ```
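For what it's worth, here is a minimal `llvmcall` that does parse on current
Julia (a sketch, version-dependent): pure IR with no external symbols, so no
`declare` is needed; arguments arrive as `%0`, `%1`, and so on.

```julia
# Add 1 to an Int32 entirely in LLVM IR.
add1(x::Int32) = Base.llvmcall("""
    %r = add i32 %0, 1
    ret i32 %r""", Int32, Tuple{Int32}, x)

add1(Int32(41))  # -> 42
```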





Re: [julia-users] llvmcall printf

2015-05-17 Thread andrew cooke


http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150126/255137.html

On Sunday, 17 May 2015 15:23:31 UTC-3, Isaiah wrote:

 What instruction are you trying to call?






[julia-users] initialize nested arrays

2015-05-17 Thread Kuba Roth
I've seen similar replies to my question on the forum, but I still can't 
figure out how to initialize nested arrays of depth higher than 2.

For instance this works fine:
julia> Array[[1,2],[3,4]]
2-element Array{Array{T,N},1}:
 [1,2]
 [3,4]


But the following does not return what I expect. The [[3],[4,5]] part is 
flattened into a single array.
julia> Array[[1,2],[[3],[4,5]]]  
2-element Array{Array{T,N},1}:
 [1,2]  
 [3,4,5]


So far my workaround is to store the intermediate result and then assign it 
to the final array. Can that be shortened to a single line?
b = Array[[3],[4,5]]   
julia> Array[[1,2],b]
2-element Array{Array{T,N},1}:
 [1,2]   
 Array[[3],[4,5]]


Thank you.


Re: [julia-users] initialize nested arrays

2015-05-17 Thread Mauro
This works:

julia> Array[[1,2],Array[[3],[4,5]]]
2-element Array{Array{T,N},1}:
 [1,2]   
 Array[[3],[4,5]]

(BTW, automatic concatenation is being changed in 0.4 & 0.5)
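The same trick works with an explicit Any element type (a sketch; on current
Julia nested literals no longer auto-concatenate, so plain
`[[1,2],[[3],[4,5]]]` keeps the nesting as well):

```julia
# Tag the inner literal so it stays a single element instead of flattening.
nested = Any[[1,2], Any[[3],[4,5]]]
nested[2][2]  # -> [4,5]
```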

On Sun, 2015-05-17 at 21:41, Kuba Roth kuba.r...@gmail.com wrote:



[julia-users] (autoconf | cmake) USE_SYSTEM_LLVM=1 Cxx.jl

2015-05-17 Thread clingcurious
Hi,

When I compile julia with my Make.user :

MARCH=native
override LLDB_VER=master
override LLVM_ASSERTIONS=1
override LLVM_VER=svn
override BUILD_LLVM_CLANG=1
override BUILD_LLDB=1
override USE_LLVM_SHLIB=1
override LLDB_DISABLE_PYTHON=1
prefix = ${LOCAL}
bindir = ../

the configure script is called by ${JULIA_SRC_DIR}/deps/Makefile with the 
following flags : 



../llvm-svn/configure --prefix=$LOCAL --build=x86_64-linux-gnu 
--libdir=$LOCAL/lib "F77=gfortran -march=native -m64" "CC=gcc 
-march=native -m64" "CXX=g++ -march=native -m64" --disable-profiling 
--enable-shared --enable-static --enable-targets=host --disable-bindings 
--disable-docs --enable-assertions --enable-optimized --disable-threads 
"CXXFLAGS=-std=c++0x -DLLDB_DISABLE_PYTHON" 
--with-python=/usr/bin/python

make -j4 install compiles and links correctly (modulo the fact that I have 
to add -lpthread at the link for liblldb.so, lldb-mi and lldb-server)
${JULIA_SRC_DIR}/julia runs and 
${JULIA_SRC_DIR}/usr/lib/libLLVM-3.7.0svn.so symbolic link to 
${JULIA_SRC_DIR}/usr/lib/libLLVM-3.7svn.so
${LOCAL}/bin/julia runs (modulo adding ${LOCAL}/lib/julia/lib to the 
LD_LIBRARY_PATH)

Never mind; I would like to link against my local LLVM, because building it 
takes time and other projects also need the latest LLVM...

1) if I compile LLVM with cmake ../llvm -DCMAKE_INSTALL_PREFIX=$LOCAL 
-DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=On && make -j4 && make 
install
No need to add -lpthread manually.
I end up with ./llvm-config --cxxflags = ... -Wall -W -Wno-unused-parameter 
-Wwrite-strings -Wcast-qual -Wno-missing-field-initializers -pedantic 
-Wno-long-long -Wno-maybe-uninitialized -Wno-comment -std=c++11 
-ffunction-sections -fdata-sections ...
and I can't compile julia with these flags (because of -pedantic and 
zero-size arrays in codegen.cpp?)
2) if I compile LLVM with configure as in bold above,
I end up with ./llvm-config --cxxflags = ... -D_DEBUG -D_GNU_SOURCE 
-D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -O3 
-fomit-frame-pointer -std=c++11 -fvisibility-inlines-hidden -fno-exceptions 
-fno-rtti -fPIC -ffunction-sections -fdata-sections -Wcast-qual ...
and I can compile julia and I'm happy because everything is installed in my 
$LOCAL

My questions (thanks for reading this giant post, by the way): 
1) I would like to compile LLVM with cmake as I'm used to... which flags 
should I use?
2) Maybe there is some trick: can llvm-config --cxxflags give flags 
different from the LLVM compilation flags? 
3) Is there, in general, a one-to-one correspondence between cmake flags 
and configure flags, particularly for llvm?
4) In the end, can we adapt the build of Cxx.jl for an external LLVM? 
Because Pkg.clone(...) and Pkg.build(...) require the internal llvm.
5) bonus question: can it be avoided that sagemath comes with its 
own python, youcompleteme (vim plugin) comes with its own llvm/clang, 
cling (cern interpreter) comes with its own llvm/clang...?
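On the cmake question, a hedged guess (untested) at a CMake invocation
approximating the ./configure line above; `LLVM_ENABLE_PEDANTIC=Off` should
drop the `-pedantic` that breaks the julia build, and
`LLVM_TARGETS_TO_BUILD=host` / `BUILD_SHARED_LIBS=On` mirror
`--enable-targets=host` / `--enable-shared`:

```shell
# Sketch only -- flag names from LLVM's CMake build, not verified
# against this exact LLVM revision.
cmake ../llvm \
  -DCMAKE_INSTALL_PREFIX=$LOCAL \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=On \
  -DLLVM_ENABLE_PEDANTIC=Off \
  -DLLVM_TARGETS_TO_BUILD=host \
  -DBUILD_SHARED_LIBS=On
make -j4 && make install
```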

Thanks ! Julia is fun.

My system : 
Linux 4.0.0-1-amd64 SMP Debian 4.0.2-1
g++ (Debian 4.9.2-16) 4.9.2
ldd (Debian GLIBC 2.19-18) 2.19
julia 1b2f4fa9d75c604978920153388bc9bd62e3dc6c

Best,