Re: [julia-users] Help creating an IDL wrapper using Clang.jl

2016-09-21 Thread Luke Stagner
You have no idea how happy this makes me. 

I was able to get it running fairly quickly on Linux. This is what I did:

```
shell> export JL_IDL_TYPE=CALLABLE

julia> push!(Libdl.DL_LOAD_PATH, "/path/to/idl/idl84/bin/bin.linux.x86_64")
1-element Array{String,1}:
 "/path/to/idl/idl84/bin/bin.linux.x86_64"

julia> import IDLCall; idl = IDLCall
IDL Version 8.4 (linux x86_64 m64). (c) 2014, Exelis Visual Information 
Solutions, Inc.
Installation number: 284-1-.
Licensed for use by: Company I work for

IDLCall

julia> x = rand(3)
3-element Array{Float64,1}:
 0.227855
 0.209709
 0.260878

julia> idl.put_var(x,"x")

julia> idl.execute("print,x")
  0.22785452  0.20970923  0.26087754

IDL> print,x
  0.22785452  0.20970923  0.26087754

IDL> x=indgen(10)

IDL> plot,x,x
```

The library directory is generally in the same directory as the IDL 
executable.
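
That observation can be turned into a small sketch for deriving the load path automatically, instead of hard-coding it. This assumes `idl` is on your `PATH` and that the shared libraries live in the platform subdirectory next to the launcher script (typical for Linux installs; the `bin.linux.x86_64` name is illustrative):

```
julia> idl_dir = dirname(chomp(readstring(`which idl`)))   # e.g. ".../idl84/bin"

julia> lib_dir = joinpath(idl_dir, "bin.linux.x86_64")     # platform-specific subdirectory

julia> push!(Libdl.DL_LOAD_PATH, lib_dir)
```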

-Luke

On Wednesday, September 21, 2016 at 9:34:49 PM UTC-7, Bob Nnamtrop wrote:
>
> Yes, sorry for the delay, I have uploaded some code to github. See: 
> https://github.com/BobPortmann/IDLCall.jl.git. I have added some notes 
> there to get you started. No doubt they are too short and you may have to 
> look into the code to get things working. I do want to polish this up, so 
> please provide feedback here or preferably on a github issue. The code 
> should work on julia 0.4 or 0.5. I have only tested using IDL 8.5 on OSX.
>
> Bob
>
> On Wed, Sep 21, 2016 at 3:23 AM, Luke Stagner wrote:
>
>> Are you still planning on releasing your IDL wrapper? I would still very 
>> much like to use it.
>>
>> -Luke
>>
>> On Friday, August 5, 2016 at 10:19:16 AM UTC-7, Bob Nnamtrop wrote:
>>>
>>> I have written a (pretty) complete wrapper for IDL (years ago actually), 
>>> but have not uploaded to github. I'll try to do that this weekend (and put 
>>> an update here). I wrote it for my own use and have been waiting for Julia 
>>> to mature to promote it to the IDL community. I originally wrote it for 
>>> Callable IDL and later switched to IDL RPC because of some library 
>>> conflicts I ran into using Callable IDL. It should not be hard to make the 
>>> Callable part work again. Of course, if you are on windows only Callable 
>>> IDL is supported by IDL, unfortunately. I have also implemented a REPL 
>>> interface to IDL from julia, which is really nice in practice. I originally 
>>> implemented the REPL in Julia 0.3 which was rather difficult, but it is 
>>> easy in 0.4. Note also that my package does not use Clang.jl but instead 
>>> hand written wrappers. This is no problem for IDL since the c interface to 
>>> IDL is very limited (basically you can pass in variables and call IDL 
>>> commands from strings; that is about it). It would be awesome if IDL would 
>>> expose more of the c interface like they did in the python library they 
>>> provide in recent versions. Maybe if julia picks up traction they will do 
>>> this.
>>>
>>> Bob
>>>
>>> On Thu, Aug 4, 2016 at 12:14 PM, Luke Stagner wrote:
>>>
 It's possible to call IDL from C code (Callable IDL), so I figured I 
 could do the same from Julia.

 I have been trying to use Clang.jl to automatically create the wrapper 
 but I am running into some issues. 

 This project is a bit above my level of expertise, and I would 
 appreciate any help.

 My current progress can be found at 
 https://github.com/lstagner/JulianInsurgence

 and the output from my wrapper attempt (wrap_idl.jl) is located here.

 I am using current Clang.jl master and Julia
 julia> versioninfo()
 Julia Version 0.5.0-rc0+193
 Commit ff1b65c* (2016-08-04 04:14 UTC)
 Platform Info:
   System: Linux (x86_64-unknown-linux-gnu)
   CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
   WORD_SIZE: 64
   BLAS: libmkl_rt
   LAPACK: libmkl_rt
   LIBM: libopenlibm
   LLVM: libLLVM-3.7.1 (ORCJIT, haswell)




>>>
>

[julia-users] Re: ijulia with multiple versions

2016-09-21 Thread Roger Whitney
Thanks. That worked.
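
For reference, the approach quoted below amounts to building IJulia once under each installed Julia, which registers one Jupyter kernel per version. A sketch (the install paths are illustrative; adjust for your machine):

```shell
# Register a kernel for each installed Julia (run once per version).
/usr/local/julia-0.4/bin/julia -e 'Pkg.build("IJulia")'
/usr/local/julia-0.5/bin/julia -e 'Pkg.build("IJulia")'

# Verify that both kernels are now visible to Jupyter.
jupyter kernelspec list
```

Each notebook can then pick its version via Kernel -> Change Kernel.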

On Tuesday, September 20, 2016 at 6:55:34 PM UTC-7, Cedric St-Jean wrote:
>
> I have multiple julia versions installed (merely through never deleting 
> them). At every new version I've called `Pkg.build("IJulia")` from the 
> command line. This adds the new kernel to the list of kernels in the Kernel 
> -> Change Kernel menu of the Jupyter notebooks. Does that not work for you? 
> Maybe you need to update Jupyter (via conda?)
>
> On Tuesday, September 20, 2016 at 1:24:41 PM UTC-4, Roger Whitney wrote:
>>
>>
>>
>> On Monday, December 8, 2014 at 2:58:49 AM UTC-8, Simon Byrne wrote:
>>>
>>> I have multiple versions of julia installed on my machine. Is there an 
>>> easy way to specify which version of julia I want to use when running 
>>> ijulia?
>>>
>>> Simon
>>>
>>
>> JuliaBox has configured IJulia so one can select which version of Julia 
>> to use in a notebook. That is I can run one notebook using Julia 0.4 and 
>> another notebook using Julia 0.5 at the same time. I am currently teaching 
>> a class of ~60 students using Julia. I have students turn in their 
>> assignments as Jupyter notebooks. Now that Julia 0.5 is out it is going to 
>> be difficult to keep everyone in the class on the same version of Julia. It 
>> would be very useful if I could configure my local copy of Jupyter/IJulia 
>> to support two versions of Julia at the same time. Does anyone know how this 
>> is done?
>>
>

Re: [julia-users] Help creating an IDL wrapper using Clang.jl

2016-09-21 Thread Bob Nnamtrop
Yes, sorry for the delay, I have uploaded some code to github. See:
https://github.com/BobPortmann/IDLCall.jl.git. I have added some notes
there to get you started. No doubt they are too short and you may have to
look into the code to get things working. I do want to polish this up, so
please provide feedback here or preferably on a github issue. The code
should work on julia 0.4 or 0.5. I have only tested using IDL 8.5 on OSX.

Bob

On Wed, Sep 21, 2016 at 3:23 AM, Luke Stagner wrote:

> Are you still planning on releasing your IDL wrapper? I would still very
> much like to use it.
>
> -Luke
>
> On Friday, August 5, 2016 at 10:19:16 AM UTC-7, Bob Nnamtrop wrote:
>>
>> I have written a (pretty) complete wrapper for IDL (years ago actually),
>> but have not uploaded to github. I'll try to do that this weekend (and put
>> an update here). I wrote it for my own use and have been waiting for Julia
>> to mature to promote it to the IDL community. I originally wrote it for
>> Callable IDL and later switched to IDL RPC because of some library
>> conflicts I ran into using Callable IDL. It should not be hard to make the
>> Callable part work again. Of course, if you are on windows only Callable
>> IDL is supported by IDL, unfortunately. I have also implemented a REPL
>> interface to IDL from julia, which is really nice in practice. I originally
>> implemented the REPL in Julia 0.3 which was rather difficult, but it is
>> easy in 0.4. Note also that my package does not use Clang.jl but instead
>> hand written wrappers. This is no problem for IDL since the c interface to
>> IDL is very limited (basically you can pass in variables and call IDL
>> commands from strings; that is about it). It would be awesome if IDL would
>> expose more of the c interface like they did in the python library they
>> provide in recent versions. Maybe if julia picks up traction they will do
>> this.
>>
>> Bob
>>
>> On Thu, Aug 4, 2016 at 12:14 PM, Luke Stagner wrote:
>>
>>> It's possible to call IDL from C code (Callable IDL), so I figured I
>>> could do the same from Julia.
>>>
>>> I have been trying to use Clang.jl to automatically create the wrapper
>>> but I am running into some issues.
>>>
>>> This project is a bit above my level of expertise and I would appreciate
>>> any help
>>>
>>> My current progress can be found at https://github.com/lstagner/JulianInsurgence
>>>
>>> and the output from my wrapper attempt (wrap_idl.jl) is located here.
>>>
>>> I am using current Clang.jl master and Julia
>>> julia> versioninfo()
>>> Julia Version 0.5.0-rc0+193
>>> Commit ff1b65c* (2016-08-04 04:14 UTC)
>>> Platform Info:
>>>   System: Linux (x86_64-unknown-linux-gnu)
>>>   CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libmkl_rt
>>>   LAPACK: libmkl_rt
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
>>>
>>>
>>>
>>>
>>


Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Yaakov Borstein
:)

Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread K leo
I ran some more tests with both my original code and some test codes, now 
with the 0.5 release version.  I would conclude that memory is not leaking. 
The memory usage reported by top reflects memory that is actually in use or 
was just recently used.  Although the usage number does not drop after the 
code finishes, it does not grow further with new runs; in fact, with new 
runs the memory usage starts afresh.

It could be that the release version of 0.5 fixes this.  Anyway, barring 
further evidence, we can disregard this thread for now.

On Thursday, September 22, 2016 at 10:11:46 AM UTC+8, Yichao Yu wrote:
>
> On Wed, Sep 21, 2016 at 10:04 PM, Luke Stagner wrote: 
> > In trying to create a reduced test case I figured out the source of my 
> > memory leak. It wasn't caused by Julia but by an external library I was 
>
> Good to know. 
>
> > calling (Sundials.jl). Pulling the dev version of Sundials.jl fixed the 
> > issue for me. 
>
> And good to know it's fixed. 
>
> > 
> > K Leo, if you are using any external library, that may be the cause of 
> the 
> > memory leak you are seeing. 
> > 
> > -Luke 
> > 
> > On Wednesday, September 21, 2016 at 5:52:23 PM UTC-7, Yichao Yu wrote: 
> >> 
> >> On Wed, Sep 21, 2016 at 8:50 PM, Yichao Yu  wrote: 
> >> > On Mon, Sep 19, 2016 at 9:14 PM, Luke Stagner wrote: 
> >> >> I actually ran into this issue too. I have a routine that calculates 
> >> >> fast 
> >> >> ion orbits that uses a lot of memory (90%). Here is the code (sorry 
> its 
> >> >> not 
> >> >> very clean).  I tried to run the function `make_distribution_file` 
> in a 
> >> >> loop 
> >> >> in julia but it never released the memory between calls. I tried 
> >> >> inserting 
> >> >> `gc()` manually but that didn't do anything either. 
> >> > 
> >> > I don't have time currently but I'll try to reproduce it in a few 
> days. 
> >> > What's your versioninfo() and how did you install julia? 
> >> 
> >> In the meantime, I would also appreciate it if you can reduce it a 
> >> little, especially if you can remove some of the external 
> >> dependencies. 
> >> 
> >> > 
> >> >> 
> >> >> -Luke 
> >> >> 
> >> >> 
> >> >> On Monday, September 19, 2016 at 3:08:52 PM UTC-7, K leo wrote: 
> >> >>> 
> >> >>> The only package used (at the global level) is DataFrames.  Does 
> that 
> >> >>> not 
> >> >>> release memory? 
> >> >>> 
> >> >>> On Tuesday, September 20, 2016 at 6:05:58 AM UTC+8, K leo wrote: 
> >>  
> >>  No.  After myfunction() finished and I am at the REPL prompt, top 
> >>  shows 
> >>  Julia taking 49%.  And after I did gc(), it shows Julia taking 
> 48%. 
> >>  
> >>  On Tuesday, September 20, 2016 at 4:05:56 AM UTC+8, Randy Zwitch 
> >>  wrote: 
> >> > 
> >> > Does the problem go away if you run gc()? 
> >> > 
> >> > 
> >> > 
> >> > On Monday, September 19, 2016 at 3:55:14 PM UTC-4, K leo wrote: 
> >> >> 
> >> >> Thanks for the suggestion about valgrind. 
> >> >> 
> >> >> Can someone please let me first understand the expected 
> behaviour 
> >> >> for 
> >> >> memory usage. 
> >> >> 
> >> >> Let's say when I first starts Julia REPL it takes 5% of RAM 
> >> >> (according 
> >> >> to top).  Then I include "myfile.jl" and run myfunction(). 
>  During 
> >> >> the 
> >> >> execution of myfunction(), memory allocation of Julia reaches 
> 40% 
> >> >> of RAM 
> >> >> (again according to top).  Say running myfunction() involves no 
> >> >> allocation 
> >> >> of global objects - all object used are local.  Then when 
> >> >> myfunction() 
> >> >> finished and I am at the REPL prompt, should top show the memory 
> >> >> usage of 
> >> >> Julia drops down to the previous level (5% of RAM)?  My current 
> >> >> observation 
> >> >> is that it doesn't.  Is this the expected behaviour? 
> >> >> 
> >> >> 
>


Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Fengyang Wang
Yes, I've been meaning to submit a PR. The last one contained several other 
changes which were substantially more controversial. My plan was to split 
the PR up into smaller changes.
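
For readers unfamiliar with the term: "type piracy" means defining a method of a function you do not own, on types you do not own, which can silently change behavior for every other module. A contrived Julia sketch (not from the PR under discussion):

```julia
# Type piracy: neither the function (Base.+) nor the type (String) is
# ours, yet we add a method for their combination.  Every other module's
# code that happens to write `a + b` on two Strings now hits this method,
# which is why such definitions belong in Base, not in a package.
import Base: +
+(a::String, b::String) = string(a, b)   # the pirated method

"foo" + "bar"   # now "foobar", everywhere in the session
```

Moving the method into Base, as suggested below, makes the ownership problem disappear.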

On Wednesday, September 21, 2016 at 4:48:21 PM UTC-4, Páll Haraldsson wrote:
>
> On Wednesday, September 21, 2016 at 4:50:26 PM UTC, Fengyang Wang wrote:
>>
>> but type piracy is bad practice. This method should really be in Base.
>>
>
> As always, if something, should be in Base or Julia, then I think a PR is 
> welcome.
>
> [Maybe I do not fully understand this (yes, type piracy, I guess, equals 
> accessing an internal variable that would be private (to not violate Parnas' 
> principles) in other languages).
>
> I like how Julia avoids Hoare's self-admitted billion-dollar mistake. It 
> seems to violate Parnas', but since no type can subtype a concrete type, it 
> may not, or at least that violation can always be avoided(?).]
>
> -- 
> Palli.
>
>

Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 10:04 PM, Luke Stagner wrote:
> In trying to create a reduced test case I figured out the source of my
> memory leak. It wasn't caused by Julia but by an external library I was

Good to know.

> calling (Sundials.jl). Pulling the dev version of Sundials.jl fixed the
> issue for me.

And good to know it's fixed.

>
> K Leo, if you are using any external library, that may be the cause of the
> memory leak you are seeing.
>
> -Luke
>
> On Wednesday, September 21, 2016 at 5:52:23 PM UTC-7, Yichao Yu wrote:
>>
>> On Wed, Sep 21, 2016 at 8:50 PM, Yichao Yu wrote:
>> > On Mon, Sep 19, 2016 at 9:14 PM, Luke Stagner wrote:
>> >> I actually ran into this issue too. I have a routine that calculates
>> >> fast
>> >> ion orbits that uses a lot of memory (90%). Here is the code (sorry its
>> >> not
>> >> very clean).  I tried to run the function `make_distribution_file` in a
>> >> loop
>> >> in julia but it never released the memory between calls. I tried
>> >> inserting
>> >> `gc()` manually but that didn't do anything either.
>> >
>> > I don't have time currently but I'll try to reproduce it in a few days.
>> > What's your versioninfo() and how did you install julia?
>>
>> In the meantime, I would also appreciate it if you can reduce it a
>> little, especially if you can remove some of the external
>> dependencies.
>>
>> >
>> >>
>> >> -Luke
>> >>
>> >>
>> >> On Monday, September 19, 2016 at 3:08:52 PM UTC-7, K leo wrote:
>> >>>
>> >>> The only package used (at the global level) is DataFrames.  Does that
>> >>> not
>> >>> release memory?
>> >>>
>> >>> On Tuesday, September 20, 2016 at 6:05:58 AM UTC+8, K leo wrote:
>> 
>>  No.  After myfunction() finished and I am at the REPL prompt, top
>>  shows
>>  Julia taking 49%.  And after I did gc(), it shows Julia taking 48%.
>> 
>>  On Tuesday, September 20, 2016 at 4:05:56 AM UTC+8, Randy Zwitch
>>  wrote:
>> >
>> > Does the problem go away if you run gc()?
>> >
>> >
>> >
>> > On Monday, September 19, 2016 at 3:55:14 PM UTC-4, K leo wrote:
>> >>
>> >> Thanks for the suggestion about valgrind.
>> >>
>> >> Can someone please let me first understand the expected behaviour
>> >> for
>> >> memory usage.
>> >>
>> >> Let's say when I first starts Julia REPL it takes 5% of RAM
>> >> (according
>> >> to top).  Then I include "myfile.jl" and run myfunction().  During
>> >> the
>> >> execution of myfunction(), memory allocation of Julia reaches 40%
>> >> of RAM
>> >> (again according to top).  Say running myfunction() involves no
>> >> allocation
>> >> of global objects - all object used are local.  Then when
>> >> myfunction()
>> >> finished and I am at the REPL prompt, should top show the memory
>> >> usage of
>> >> Julia drops down to the previous level (5% of RAM)?  My current
>> >> observation
>> >> is that it doesn't.  Is this the expected behaviour?
>> >>
>> >>


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 9:49 PM, Erik Schnetter  wrote:
> I confirm that I can't get Julia to synthesize a `vfmadd` instruction
> either... Sorry for sending you on a wild goose chase.

-march=haswell does the trick for C (both clang and gcc).
The necessary bits for the machine IR optimization (this is not an LLVM
IR optimization pass) to do this are the llc option -mcpu=haswell and
the function attribute unsafe-fp-math=true.

>
> -erik
>
> On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  wrote:
>>
>> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter 
>> wrote:
>> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
>> > wrote:
>> >>
>> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and
>> >> now
>> >> I get results where g and h apply muladd/fma in the native code, but a
>> >> new
>> >> function k which is `@fastmath` inside of f does not apply muladd/fma.
>> >>
>> >>
>> >> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>> >>
>> >> Should I open an issue?
>> >
>> >
>> > In your case, LLVM apparently thinks that `x + x + 3` is faster to
>> > calculate
>> > than `2x+3`. If you use a less round number than `2` multiplying `x`,
>> > you
>> > might see a different behaviour.
>>
>> I've personally never seen llvm create fma from mul and add. We might
>> not have the llvm passes enabled if LLVM is capable of doing this at
>> all.
>>
>> >
>> > -erik
>> >
>> >
>> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding
>> >> for some reason, so I may need to just build from source.
>> >>
>> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter
>> >> wrote:
>> >>>
>> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
>> >>> wrote:
>> 
>>  Hi,
>>    First of all, does LLVM essentially fma or muladd expressions like
>>  `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
>>  explicitly use
>>  `muladd` and `fma` on these types of instructions (is there a macro
>>  for
>>  making this easier)?
>> >>>
>> >>>
>> >>> Yes, LLVM will use fma machine instructions -- but only if they lead
>> >>> to
>> >>> the same round-off error as using separate multiply and add
>> >>> instructions. If
>> >>> you do not care about the details of conforming to the IEEE standard,
>> >>> then
>> >>> you can use the `@fastmath` macro that enables several optimizations,
>> >>> including this one. This is described in the manual
>> >>>
>> >>> .
>> >>>
>> >>>
>>    Secondly, I am wondering if my setup is not applying these 
>>  operations
>>  correctly. Here's my test code:
>> 
>>  f(x) = 2.0x + 3.0
>>  g(x) = muladd(x,2.0, 3.0)
>>  h(x) = fma(x,2.0, 3.0)
>> 
>>  @code_llvm f(4.0)
>>  @code_llvm g(4.0)
>>  @code_llvm h(4.0)
>> 
>>  @code_native f(4.0)
>>  @code_native g(4.0)
>>  @code_native h(4.0)
>> 
>>  Computer 1
>> 
>>  Julia Version 0.5.0-rc4+0
>>  Commit 9c76c3e* (2016-09-09 01:43 UTC)
>>  Platform Info:
>>    System: Linux (x86_64-redhat-linux)
>>    CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>    WORD_SIZE: 64
>>    BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>    LAPACK: libopenblasp.so.0
>>    LIBM: libopenlibm
>>    LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>> >>>
>> >>>
>> >>> This looks good, the "broadwell" architecture that LLVM uses should
>> >>> imply
>> >>> the respective optimizations. Try with `@fastmath`.
>> >>>
>> >>> -erik
>> >>>
>> >>>
>> >>>
>> >>>
>> 
>>  (the COPR nightly on CentOS7) with
>> 
>>  [crackauc@crackauc2 ~]$ lscpu
>>  Architecture:  x86_64
>>  CPU op-mode(s):32-bit, 64-bit
>>  Byte Order:Little Endian
>>  CPU(s):16
>>  On-line CPU(s) list:   0-15
>>  Thread(s) per core:1
>>  Core(s) per socket:8
>>  Socket(s): 2
>>  NUMA node(s):  2
>>  Vendor ID: GenuineIntel
>>  CPU family:6
>>  Model: 79
>>  Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>  Stepping:  1
>>  CPU MHz:   1200.000
>>  BogoMIPS:  6392.58
>>  Virtualization:VT-x
>>  L1d cache: 32K
>>  L1i cache: 32K
>>  L2 cache:  256K
>>  L3 cache:  25600K
>>  NUMA node0 CPU(s): 0-7
>>  NUMA node1 CPU(s): 8-15
>> 
>> 
>> 
>>  I get the output
>> 
>>  define double @julia_f_72025(double) #0 {
>>  top:
>>    %1 = fmul double %0, 2.00e+00
>>    %2 = fadd double %1, 3.00e+00
>>    ret double %2
>>  }
>> 
>>  define double @julia_g_72027(double) #0 {
>> 

Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Luke Stagner
In trying to create a reduced test case I figured out the source of my 
memory leak. It wasn't caused by Julia but by an external library I was 
calling (Sundials.jl). Pulling the dev version of Sundials.jl fixed the 
issue for me. 

K Leo, if you are using any external library, that may be the cause of the 
memory leak you are seeing.

-Luke

On Wednesday, September 21, 2016 at 5:52:23 PM UTC-7, Yichao Yu wrote:
>
> On Wed, Sep 21, 2016 at 8:50 PM, Yichao Yu wrote: 
> > On Mon, Sep 19, 2016 at 9:14 PM, Luke Stagner wrote: 
> >> I actually ran into this issue too. I have a routine that calculates 
> fast 
> >> ion orbits that uses a lot of memory (90%). Here is the code (sorry its 
> not 
> >> very clean).  I tried to run the function `make_distribution_file` in a 
> loop 
> >> in julia but it never released the memory between calls. I tried 
> inserting 
> >> `gc()` manually but that didn't do anything either. 
> > 
> > I don't have time currently but I'll try to reproduce it in a few days. 
> > What's your versioninfo() and how did you install julia? 
>
> In the meantime, I would also appreciate it if you can reduce it a 
> little, especially if you can remove some of the external 
> dependencies. 
>
> > 
> >> 
> >> -Luke 
> >> 
> >> 
> >> On Monday, September 19, 2016 at 3:08:52 PM UTC-7, K leo wrote: 
> >>> 
> >>> The only package used (at the global level) is DataFrames.  Does that 
> not 
> >>> release memory? 
> >>> 
> >>> On Tuesday, September 20, 2016 at 6:05:58 AM UTC+8, K leo wrote: 
>  
>  No.  After myfunction() finished and I am at the REPL prompt, top 
> shows 
>  Julia taking 49%.  And after I did gc(), it shows Julia taking 48%. 
>  
>  On Tuesday, September 20, 2016 at 4:05:56 AM UTC+8, Randy Zwitch 
> wrote: 
> > 
> > Does the problem go away if you run gc()? 
> > 
> > 
> > 
> > On Monday, September 19, 2016 at 3:55:14 PM UTC-4, K leo wrote: 
> >> 
> >> Thanks for the suggestion about valgrind. 
> >> 
> >> Can someone please let me first understand the expected behaviour 
> for 
> >> memory usage. 
> >> 
> >> Let's say when I first starts Julia REPL it takes 5% of RAM 
> (according 
> >> to top).  Then I include "myfile.jl" and run myfunction().  During 
> the 
> >> execution of myfunction(), memory allocation of Julia reaches 40% 
> of RAM 
> >> (again according to top).  Say running myfunction() involves no 
> allocation 
> >> of global objects - all object used are local.  Then when 
> myfunction() 
> >> finished and I am at the REPL prompt, should top show the memory 
> usage of 
> >> Julia drops down to the previous level (5% of RAM)?  My current 
> observation 
> >> is that it doesn't.  Is this the expected behaviour? 
> >> 
> >> 
>


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
I confirm that I can't get Julia to synthesize a `vfmadd` instruction
either... Sorry for sending you on a wild goose chase.

-erik

On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  wrote:

> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter 
> wrote:
> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
> > wrote:
> >>
> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and
> now
> >> I get results where g and h apply muladd/fma in the native code, but a
> new
> >> function k which is `@fastmath` inside of f does not apply muladd/fma.
> >>
> >> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
> >>
> >> Should I open an issue?
> >
> >
> > In your case, LLVM apparently thinks that `x + x + 3` is faster to
> calculate
> > than `2x+3`. If you use a less round number than `2` multiplying `x`, you
> > might see a different behaviour.
>
> I've personally never seen llvm create fma from mul and add. We might
> not have the llvm passes enabled if LLVM is capable of doing this at
> all.
>
> >
> > -erik
> >
> >
> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding
> >> for some reason, so I may need to just build from source.
> >>
> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter
> >> wrote:
> >>>
> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
> >>> wrote:
> 
>  Hi,
>    First of all, does LLVM essentially fma or muladd expressions like
>  `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
> explicitly use
>  `muladd` and `fma` on these types of instructions (is there a macro
> for
>  making this easier)?
> >>>
> >>>
> >>> Yes, LLVM will use fma machine instructions -- but only if they lead to
> >>> the same round-off error as using separate multiply and add
> instructions. If
> >>> you do not care about the details of conforming to the IEEE standard,
> then
> >>> you can use the `@fastmath` macro that enables several optimizations,
> >>> including this one. This is described in the manual
> >>>  performance-tips/#performance-annotations>.
> >>>
> >>>
>    Secondly, I am wondering if my setup is not applying these operations
>  correctly. Here's my test code:
> 
>  f(x) = 2.0x + 3.0
>  g(x) = muladd(x,2.0, 3.0)
>  h(x) = fma(x,2.0, 3.0)
> 
>  @code_llvm f(4.0)
>  @code_llvm g(4.0)
>  @code_llvm h(4.0)
> 
>  @code_native f(4.0)
>  @code_native g(4.0)
>  @code_native h(4.0)
> 
>  Computer 1
> 
>  Julia Version 0.5.0-rc4+0
>  Commit 9c76c3e* (2016-09-09 01:43 UTC)
>  Platform Info:
>    System: Linux (x86_64-redhat-linux)
>    CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>    WORD_SIZE: 64
>    BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>    LAPACK: libopenblasp.so.0
>    LIBM: libopenlibm
>    LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
> >>>
> >>>
> >>> This looks good, the "broadwell" architecture that LLVM uses should
> imply
> >>> the respective optimizations. Try with `@fastmath`.
> >>>
> >>> -erik
> >>>
> >>>
> >>>
> >>>
> 
>  (the COPR nightly on CentOS7) with
> 
>  [crackauc@crackauc2 ~]$ lscpu
>  Architecture:  x86_64
>  CPU op-mode(s):32-bit, 64-bit
>  Byte Order:Little Endian
>  CPU(s):16
>  On-line CPU(s) list:   0-15
>  Thread(s) per core:1
>  Core(s) per socket:8
>  Socket(s): 2
>  NUMA node(s):  2
>  Vendor ID: GenuineIntel
>  CPU family:6
>  Model: 79
>  Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>  Stepping:  1
>  CPU MHz:   1200.000
>  BogoMIPS:  6392.58
>  Virtualization:VT-x
>  L1d cache: 32K
>  L1i cache: 32K
>  L2 cache:  256K
>  L3 cache:  25600K
>  NUMA node0 CPU(s): 0-7
>  NUMA node1 CPU(s): 8-15
> 
> 
> 
>  I get the output
> 
>  define double @julia_f_72025(double) #0 {
>  top:
>    %1 = fmul double %0, 2.00e+00
>    %2 = fadd double %1, 3.00e+00
>    ret double %2
>  }
> 
>  define double @julia_g_72027(double) #0 {
>  top:
>    %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
>  double 3.00e+00)
>    ret double %1
>  }
> 
>  define double @julia_h_72029(double) #0 {
>  top:
>    %1 = call double @llvm.fma.f64(double %0, double 2.00e+00,
> double
>  3.00e+00)
>    ret double %1
>  }
>  .text
>  Filename: fmatest.jl
>  pushq %rbp
>  movq %rsp, %rbp
>  Source line: 1
>  addsd %xmm0, %xmm0
>  movabsq $139916162906520, %rax  # 

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  wrote:
> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter  wrote:
>> On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
>> wrote:
>>>
>>> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now
>>> I get results where g and h apply muladd/fma in the native code, but a new
>>> function k which is `@fastmath` inside of f does not apply muladd/fma.
>>>
>>> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>>>
>>> Should I open an issue?
>>
>>
>> In your case, LLVM apparently thinks that `x + x + 3` is faster to calculate
>> than `2x+3`. If you use a less round number than `2` multiplying `x`, you
>> might see a different behaviour.
>
> I've personally never seen llvm create fma from mul and add. We might
> not have the llvm passes enabled if LLVM is capable of doing this at
> all.

Interestingly, both clang and gcc keep the mul and add with `-Ofast
-ffast-math -mavx2` and make it an fma with `-mavx512f`. This is true
even when the call is in a loop (since switching between SSE and AVX
is costly), so I'd say either the compiler is right that the fma
instruction gives no speed advantage in this case, or it's an LLVM/GCC
missed optimization rather than a Julia one.

>
>>
>> -erik
>>
>>
>>> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding
>>> for some reason, so I may need to just build from source.
>>>
>>> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter
>>> wrote:

 On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
 wrote:
>
> Hi,
>   First of all, does LLVM essentially fma or muladd expressions like
> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use
> `muladd` and `fma` on these types of instructions (is there a macro for
> making this easier)?


 Yes, LLVM will use fma machine instructions -- but only if they lead to
 the same round-off error as using separate multiply and add instructions. 
 If
 you do not care about the details of conforming to the IEEE standard, then
 you can use the `@fastmath` macro that enables several optimizations,
 including this one. This is described in the manual
 .


>   Secondly, I am wondering if my setup is not applying these operations
> correctly. Here's my test code:
>
> f(x) = 2.0x + 3.0
> g(x) = muladd(x,2.0, 3.0)
> h(x) = fma(x,2.0, 3.0)
>
> @code_llvm f(4.0)
> @code_llvm g(4.0)
> @code_llvm h(4.0)
>
> @code_native f(4.0)
> @code_native g(4.0)
> @code_native h(4.0)
>
> Computer 1
>
> Julia Version 0.5.0-rc4+0
> Commit 9c76c3e* (2016-09-09 01:43 UTC)
> Platform Info:
>   System: Linux (x86_64-redhat-linux)
>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblasp.so.0
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)


 This looks good, the "broadwell" architecture that LLVM uses should imply
 the respective optimizations. Try with `@fastmath`.

 -erik




>
> (the COPR nightly on CentOS7) with
>
> [crackauc@crackauc2 ~]$ lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):16
> On-line CPU(s) list:   0-15
> Thread(s) per core:1
> Core(s) per socket:8
> Socket(s): 2
> NUMA node(s):  2
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 79
> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
> Stepping:  1
> CPU MHz:   1200.000
> BogoMIPS:  6392.58
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  25600K
> NUMA node0 CPU(s): 0-7
> NUMA node1 CPU(s): 8-15
>
>
>
> I get the output
>
> define double @julia_f_72025(double) #0 {
> top:
>   %1 = fmul double %0, 2.00e+00
>   %2 = fadd double %1, 3.00e+00
>   ret double %2
> }
>
> define double @julia_g_72027(double) #0 {
> top:
>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
> double 3.00e+00)
>   ret double %1
> }
>
> define double @julia_h_72029(double) #0 {
> top:
>   %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double
> 3.00e+00)
>   ret double %1
> }
> .text
> Filename: fmatest.jl
> pushq %rbp
> movq 

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter  wrote:
> On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
> wrote:
>>
>> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now
>> I get results where g and h apply muladd/fma in the native code, but a new
>> function k which is `@fastmath` inside of f does not apply muladd/fma.
>>
>> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>>
>> Should I open an issue?
>
>
> In your case, LLVM apparently thinks that `x + x + 3` is faster to calculate
> than `2x+3`. If you use a less round number than `2` multiplying `x`, you
> might see a different behaviour.

I've personally never seen llvm create fma from mul and add. We might
not have the llvm passes enabled if LLVM is capable of doing this at
all.
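Since the compiler is not contracting a separate multiply and add here, the fused form has to be requested explicitly in the source, as the test functions in this thread do. A minimal hedged sketch (constants chosen to be exact in binary so all three forms agree bit-for-bit):

```julia
# Explicitly requesting a fused multiply-add rather than relying on
# LLVM to combine an fmul/fadd pair:
f(x) = 2.5x + 3.0            # separate multiply and add in the IR
g(x) = muladd(x, 2.5, 3.0)   # fused if the CPU supports it, else mul+add
h(x) = fma(x, 2.5, 3.0)      # fused rounding always (software fallback if needed)

# 2.5 and 3.0 are exact in binary, so all three agree here:
@assert f(4.0) == g(4.0) == h(4.0) == 13.0
```

With a constant that is not exactly representable, `fma`/`muladd` may differ from the unfused form in the last bit, since the intermediate product is not rounded.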

>
> -erik
>
>
>> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding
>> for some reason, so I may need to just build from source.
>>
>> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter
>> wrote:
>>>
>>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
>>> wrote:

 Hi,
   First of all, does LLVM essentially fma or muladd expressions like
 `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use
 `muladd` and `fma` on these types of instructions (is there a macro for
 making this easier)?
>>>
>>>
>>> Yes, LLVM will use fma machine instructions -- but only if they lead to
>>> the same round-off error as using separate multiply and add instructions. If
>>> you do not care about the details of conforming to the IEEE standard, then
>>> you can use the `@fastmath` macro that enables several optimizations,
>>> including this one. This is described in the manual
>>> <http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations>.
>>>
>>>
   Secondly, I am wondering if my setup is not applying these operations
 correctly. Here's my test code:

 f(x) = 2.0x + 3.0
 g(x) = muladd(x,2.0, 3.0)
 h(x) = fma(x,2.0, 3.0)

 @code_llvm f(4.0)
 @code_llvm g(4.0)
 @code_llvm h(4.0)

 @code_native f(4.0)
 @code_native g(4.0)
 @code_native h(4.0)

 Computer 1

 Julia Version 0.5.0-rc4+0
 Commit 9c76c3e* (2016-09-09 01:43 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
   WORD_SIZE: 64
   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblasp.so.0
   LIBM: libopenlibm
   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>>>
>>>
>>> This looks good, the "broadwell" architecture that LLVM uses should imply
>>> the respective optimizations. Try with `@fastmath`.
>>>
>>> -erik
>>>
>>>
>>>
>>>

 (the COPR nightly on CentOS7) with

 [crackauc@crackauc2 ~]$ lscpu
 Architecture:  x86_64
 CPU op-mode(s):32-bit, 64-bit
 Byte Order:Little Endian
 CPU(s):16
 On-line CPU(s) list:   0-15
 Thread(s) per core:1
 Core(s) per socket:8
 Socket(s): 2
 NUMA node(s):  2
 Vendor ID: GenuineIntel
 CPU family:6
 Model: 79
 Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
 Stepping:  1
 CPU MHz:   1200.000
 BogoMIPS:  6392.58
 Virtualization:VT-x
 L1d cache: 32K
 L1i cache: 32K
 L2 cache:  256K
 L3 cache:  25600K
 NUMA node0 CPU(s): 0-7
 NUMA node1 CPU(s): 8-15



 I get the output

 define double @julia_f_72025(double) #0 {
 top:
   %1 = fmul double %0, 2.000000e+00
   %2 = fadd double %1, 3.000000e+00
   ret double %2
 }

 define double @julia_g_72027(double) #0 {
 top:
   %1 = call double @llvm.fmuladd.f64(double %0, double 2.000000e+00,
 double 3.000000e+00)
   ret double %1
 }

 define double @julia_h_72029(double) #0 {
 top:
   %1 = call double @llvm.fma.f64(double %0, double 2.000000e+00, double
 3.000000e+00)
   ret double %1
 }
 .text
 Filename: fmatest.jl
 pushq %rbp
 movq %rsp, %rbp
 Source line: 1
 addsd %xmm0, %xmm0
 movabsq $139916162906520, %rax  # imm = 0x7F40C5303998
 addsd (%rax), %xmm0
 popq %rbp
 retq
 nopl (%rax,%rax)
 .text
 Filename: fmatest.jl
 pushq %rbp
 movq %rsp, %rbp
 Source line: 2
 addsd %xmm0, %xmm0
 movabsq $139916162906648, %rax  # imm = 0x7F40C5303A18
 addsd (%rax), %xmm0
 popq %rbp
 retq
 nopl (%rax,%rax)
 .text
 Filename: fmatest.jl
 pushq %rbp
 movq %rsp, %rbp
 movabsq $139916162906776, %rax  # imm = 0x7F40C5303A98
 Source line: 3
 movsd (%rax), 

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
Still no FMA?

julia> k(x) = @fastmath 2.4x + 3.0
WARNING: Method definition k(Any) in module Main at REPL[14]:1 overwritten 
at REPL[23]:1.
k (generic function with 1 method)

julia> @code_llvm k(4.0)

; Function Attrs: uwtable
define double @julia_k_66737(double) #0 {
top:
  %1 = fmul fast double %0, 2.400000e+00
  %2 = fadd fast double %1, 3.000000e+00
  ret double %2
}

julia> @code_native k(4.0)
.text
Filename: REPL[23]
pushq   %rbp
movq%rsp, %rbp
movabsq $568231032, %rax# imm = 0x21DE8478
Source line: 1
vmulsd  (%rax), %xmm0, %xmm0
movabsq $568231040, %rax# imm = 0x21DE8480
vaddsd  (%rax), %xmm0, %xmm0
popq%rbp
retq
nopw%cs:(%rax,%rax)



On Wednesday, September 21, 2016 at 6:29:44 PM UTC-7, Erik Schnetter wrote:
>
> On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas  > wrote:
>
>> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now 
>> I get results where g and h apply muladd/fma in the native code, but a new 
>> function k which is `@fastmath` inside of f does not apply muladd/fma.
>>
>> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>>
>> Should I open an issue?
>>
>
> In your case, LLVM apparently thinks that `x + x + 3` is faster to 
> calculate than `2x+3`. If you use a less round number than `2` multiplying 
> `x`, you might see a different behaviour.
>
> -erik
>
>
> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding 
>> for some reason, so I may need to just build from source.
>>
>> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter 
>> wrote:
>>>
>>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas  
>>> wrote:
>>>
 Hi,
   First of all, does LLVM essentially fma or muladd expressions like 
 `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
 `muladd` and `fma` on these types of instructions (is there a macro for 
 making this easier)?

>>>
>>> Yes, LLVM will use fma machine instructions -- but only if they lead to 
>>> the same round-off error as using separate multiply and add instructions. 
>>> If you do not care about the details of conforming to the IEEE standard, 
>>> then you can use the `@fastmath` macro that enables several optimizations, 
>>> including this one. This is described in the manual <
>>> http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations
>>> >.
>>>
>>>
>>>   Secondly, I am wondering if my setup is not applying these operations 
 correctly. Here's my test code:

 f(x) = 2.0x + 3.0
 g(x) = muladd(x,2.0, 3.0)
 h(x) = fma(x,2.0, 3.0)

 @code_llvm f(4.0)
 @code_llvm g(4.0)
 @code_llvm h(4.0)

 @code_native f(4.0)
 @code_native g(4.0)
 @code_native h(4.0)

 *Computer 1*

 Julia Version 0.5.0-rc4+0
 Commit 9c76c3e* (2016-09-09 01:43 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
   WORD_SIZE: 64
   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblasp.so.0
   LIBM: libopenlibm
   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)

>>>
>>> This looks good, the "broadwell" architecture that LLVM uses should 
>>> imply the respective optimizations. Try with `@fastmath`.
>>>
>>> -erik
>>>
>>>
>>>
>>>  
>>>
 (the COPR nightly on CentOS7) with 

 [crackauc@crackauc2 ~]$ lscpu
 Architecture:  x86_64
 CPU op-mode(s):32-bit, 64-bit
 Byte Order:Little Endian
 CPU(s):16
 On-line CPU(s) list:   0-15
 Thread(s) per core:1
 Core(s) per socket:8
 Socket(s): 2
 NUMA node(s):  2
 Vendor ID: GenuineIntel
 CPU family:6
 Model: 79
 Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
 Stepping:  1
 CPU MHz:   1200.000
 BogoMIPS:  6392.58
 Virtualization:VT-x
 L1d cache: 32K
 L1i cache: 32K
 L2 cache:  256K
 L3 cache:  25600K
 NUMA node0 CPU(s): 0-7
 NUMA node1 CPU(s): 8-15



 I get the output

 define double @julia_f_72025(double) #0 {
 top:
   %1 = fmul double %0, 2.000000e+00
   %2 = fadd double %1, 3.000000e+00
   ret double %2
 }

 define double @julia_g_72027(double) #0 {
 top:
   %1 = call double @llvm.fmuladd.f64(double %0, double 2.000000e+00, 
 double 3.000000e+00)
   ret double %1
 }

 define double @julia_h_72029(double) #0 {
 top:
   %1 = call double @llvm.fma.f64(double %0, double 2.000000e+00, double 
 3.000000e+00)
   ret double %1
 }
 .text
 Filename: 

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
wrote:

> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now
> I get results where g and h apply muladd/fma in the native code, but a new
> function k which is `@fastmath` inside of f does not apply muladd/fma.
>
> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>
> Should I open an issue?
>

In your case, LLVM apparently thinks that `x + x + 3` is faster to
calculate than `2x+3`. If you use a less round number than `2` multiplying
`x`, you might see a different behaviour.

-erik


Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding for
> some reason, so I may need to just build from source.
>
> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter wrote:
>>
>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
>> wrote:
>>
>>> Hi,
>>>   First of all, does LLVM essentially fma or muladd expressions like
>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use
>>> `muladd` and `fma` on these types of instructions (is there a macro for
>>> making this easier)?
>>>
>>
>> Yes, LLVM will use fma machine instructions -- but only if they lead to
>> the same round-off error as using separate multiply and add instructions.
>> If you do not care about the details of conforming to the IEEE standard,
>> then you can use the `@fastmath` macro that enables several optimizations,
>> including this one. This is described in the manual <
>> http://docs.julialang.org/en/release-0.5/manual/performance
>> -tips/#performance-annotations>.
>>
>>
>>   Secondly, I am wondering if my setup is not applying these operations
>>> correctly. Here's my test code:
>>>
>>> f(x) = 2.0x + 3.0
>>> g(x) = muladd(x,2.0, 3.0)
>>> h(x) = fma(x,2.0, 3.0)
>>>
>>> @code_llvm f(4.0)
>>> @code_llvm g(4.0)
>>> @code_llvm h(4.0)
>>>
>>> @code_native f(4.0)
>>> @code_native g(4.0)
>>> @code_native h(4.0)
>>>
>>> *Computer 1*
>>>
>>> Julia Version 0.5.0-rc4+0
>>> Commit 9c76c3e* (2016-09-09 01:43 UTC)
>>> Platform Info:
>>>   System: Linux (x86_64-redhat-linux)
>>>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>>   LAPACK: libopenblasp.so.0
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>>>
>>
>> This looks good, the "broadwell" architecture that LLVM uses should imply
>> the respective optimizations. Try with `@fastmath`.
>>
>> -erik
>>
>>
>>
>>
>>
>>> (the COPR nightly on CentOS7) with
>>>
>>> [crackauc@crackauc2 ~]$ lscpu
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):16
>>> On-line CPU(s) list:   0-15
>>> Thread(s) per core:1
>>> Core(s) per socket:8
>>> Socket(s): 2
>>> NUMA node(s):  2
>>> Vendor ID: GenuineIntel
>>> CPU family:6
>>> Model: 79
>>> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>> Stepping:  1
>>> CPU MHz:   1200.000
>>> BogoMIPS:  6392.58
>>> Virtualization:VT-x
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache:  256K
>>> L3 cache:  25600K
>>> NUMA node0 CPU(s): 0-7
>>> NUMA node1 CPU(s): 8-15
>>>
>>>
>>>
>>> I get the output
>>>
>>> define double @julia_f_72025(double) #0 {
>>> top:
>>>   %1 = fmul double %0, 2.000000e+00
>>>   %2 = fadd double %1, 3.000000e+00
>>>   ret double %2
>>> }
>>>
>>> define double @julia_g_72027(double) #0 {
>>> top:
>>>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.000000e+00,
>>> double 3.000000e+00)
>>>   ret double %1
>>> }
>>>
>>> define double @julia_h_72029(double) #0 {
>>> top:
>>>   %1 = call double @llvm.fma.f64(double %0, double 2.000000e+00, double
>>> 3.000000e+00)
>>>   ret double %1
>>> }
>>> .text
>>> Filename: fmatest.jl
>>> pushq %rbp
>>> movq %rsp, %rbp
>>> Source line: 1
>>> addsd %xmm0, %xmm0
>>> movabsq $139916162906520, %rax  # imm = 0x7F40C5303998
>>> addsd (%rax), %xmm0
>>> popq %rbp
>>> retq
>>> nopl (%rax,%rax)
>>> .text
>>> Filename: fmatest.jl
>>> pushq %rbp
>>> movq %rsp, %rbp
>>> Source line: 2
>>> addsd %xmm0, %xmm0
>>> movabsq $139916162906648, %rax  # imm = 0x7F40C5303A18
>>> addsd (%rax), %xmm0
>>> popq %rbp
>>> retq
>>> nopl (%rax,%rax)
>>> .text
>>> Filename: fmatest.jl
>>> pushq %rbp
>>> movq %rsp, %rbp
>>> movabsq $139916162906776, %rax  # imm = 0x7F40C5303A98
>>> Source line: 3
>>> movsd (%rax), %xmm1   # xmm1 = mem[0],zero
>>> movabsq $139916162906784, %rax  # imm = 0x7F40C5303AA0
>>> movsd (%rax), %xmm2   # xmm2 = mem[0],zero
>>> movabsq $139925776008800, %rax  # imm = 0x7F43022C8660
>>> popq %rbp
>>> jmpq *%rax
>>> nopl (%rax)
>>>
>>> It looks like explicit muladd or not ends up at the same native code,
>>> but is that native code actually doing an fma? The 

Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 8:50 PM, Yichao Yu  wrote:
> On Mon, Sep 19, 2016 at 9:14 PM, Luke Stagner  wrote:
>> I actually ran into this issue too. I have a routine that calculates fast
>> ion orbits that uses a lot of memory (90%). Here is the code (sorry it's not
>> very clean).  I tried to run the function `make_distribution_file` in a loop
>> in julia but it never released the memory between calls. I tried inserting
>> `gc()` manually but that didn't do anything either.
>
> I don't have time currently but I'll try to reproduce it in a few days.
> What's your versioninfo() and how did you install julia?

In the mean time, I would also appreciate if you can reduce it a
little, especially if you can remove some of the external
dependencies.

>
>>
>> -Luke
>>
>>
>> On Monday, September 19, 2016 at 3:08:52 PM UTC-7, K leo wrote:
>>>
>>> The only package used (at the global level) is DataFrames.  Does that not
>>> release memory?
>>>
>>> On Tuesday, September 20, 2016 at 6:05:58 AM UTC+8, K leo wrote:

 No.  After myfunction() finished and I am at the REPL prompt, top shows
 Julia taking 49%.  And after I did gc(), it shows Julia taking 48%.

 On Tuesday, September 20, 2016 at 4:05:56 AM UTC+8, Randy Zwitch wrote:
>
> Does the problem go away if you run gc()?
>
>
>
> On Monday, September 19, 2016 at 3:55:14 PM UTC-4, K leo wrote:
>>
>> Thanks for the suggestion about valgrind.
>>
>> Can someone please let me first understand the expected behaviour for
>> memory usage.
>>
>> Let's say when I first start the Julia REPL it takes 5% of RAM (according
>> to top).  Then I include "myfile.jl" and run myfunction().  During the
>> execution of myfunction(), memory allocation of Julia reaches 40% of RAM
>> (again according to top).  Say running myfunction() involves no 
>> allocation
>> of global objects - all object used are local.  Then when myfunction()
>> finished and I am at the REPL prompt, should top show the memory usage of
>> Julia drops down to the previous level (5% of RAM)?  My current 
>> observation
>> is that it doesn't.  Is this the expected behaviour?
>>
>>


Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Yichao Yu
On Mon, Sep 19, 2016 at 9:14 PM, Luke Stagner  wrote:
> I actually ran into this issue too. I have a routine that calculates fast
> ion orbits that uses a lot of memory (90%). Here is the code (sorry it's not
> very clean).  I tried to run the function `make_distribution_file` in a loop
> in julia but it never released the memory between calls. I tried inserting
> `gc()` manually but that didn't do anything either.

I don't have time currently but I'll try to reproduce it in a few days.
What's your versioninfo() and how did you install julia?

>
> -Luke
>
>
> On Monday, September 19, 2016 at 3:08:52 PM UTC-7, K leo wrote:
>>
>> The only package used (at the global level) is DataFrames.  Does that not
>> release memory?
>>
>> On Tuesday, September 20, 2016 at 6:05:58 AM UTC+8, K leo wrote:
>>>
>>> No.  After myfunction() finished and I am at the REPL prompt, top shows
>>> Julia taking 49%.  And after I did gc(), it shows Julia taking 48%.
>>>
>>> On Tuesday, September 20, 2016 at 4:05:56 AM UTC+8, Randy Zwitch wrote:

 Does the problem go away if you run gc()?



 On Monday, September 19, 2016 at 3:55:14 PM UTC-4, K leo wrote:
>
> Thanks for the suggestion about valgrind.
>
> Can someone please let me first understand the expected behaviour for
> memory usage.
>
> Let's say when I first start the Julia REPL it takes 5% of RAM (according
> to top).  Then I include "myfile.jl" and run myfunction().  During the
> execution of myfunction(), memory allocation of Julia reaches 40% of RAM
> (again according to top).  Say running myfunction() involves no allocation
> of global objects - all object used are local.  Then when myfunction()
> finished and I am at the REPL prompt, should top show the memory usage of
> Julia drops down to the previous level (5% of RAM)?  My current 
> observation
> is that it doesn't.  Is this the expected behaviour?
>
>


Re: [julia-users] Re: Does Julia 0.5 leak memory?

2016-09-21 Thread Yichao Yu
On Wed, Sep 21, 2016 at 7:44 PM, K leo  wrote:
> I am running the 0.5 release now and it behaves in the same way - not
> releasing memory.  I can't say if this only has to do with 0.5 but not 0.4.
> Probably it happened in the same way with 0.4 but I just didn't pay
> attention then.
>
> Since there is no mechanism in Julia like in C/C++ for the programmer to
> free previously allocated memory, I don't believe that this is due to my
> coding errors (I only allocate memory without freeing it in the code).  I
> hope someone can point out to situations where this problem might occur so
> that I can check them out.  For now all I know is that my code allocates
> memory when it runs and that memory is not getting released even after the
> code has finished.
>
> I will try to see if I can cook up some demo code.

There's infinitely many ways mem leak bugs can happen. The only way
you can help debugging is to post actual code that reproduce the
issue. I personally don't care how long the code is, as long as it's
easy to run.

>
>
> On Wednesday, September 21, 2016 at 9:53:07 PM UTC+8, Páll Haraldsson wrote:
>>
>> On Sunday, September 18, 2016 at 12:53:19 PM UTC, K leo wrote:
>>>
>>> Any thoughts on what might be the culprit?
>>>
>>>   | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
>>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>>
>>
>> You say 0.5 in the title (when 0.5 wasn't out). Your later posts are after
>> 0.5 is out. Maybe it doesn't matter, I don't know, maybe the final version
>> is identical (except for versioninfo function..), I just do not know that
>> for a fact, so might as well upgrade just in case.
>>
>> --
>> Palli.
>>
>>
>>
>


[julia-users] Re: Help: Printing method in julia

2016-09-21 Thread Weicheng Zhu
OK, I see. Thank you!

On Wednesday, September 21, 2016 at 6:17:30 PM UTC-5, Steven G. Johnson 
wrote:
>
>
>
> On Wednesday, September 21, 2016 at 4:24:10 PM UTC-4, Weicheng Zhu wrote:
>>
>> Hi Dr. Johnson,
>> Thank you very much for your help! Now I understand how it works.
>> This problem has confused me for a long time since I started learning julia.
>>
>> So if I want a `mytype` object to be printed in the pretty-printed lines 
>> by default, I have to define the display method, right?
>>
>
> No.  Only override show, not display.   And in Julia 0.4 the 3-argument 
> show is called writemime, not show.
>


Re: [julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Erik Schnetter
As matrix multiplication has a cost of O(N^3) while using only O(N^2)
elements, and since many integers (those smaller than 2^52 or so) can be
represented exactly as Float64 values, one approach could be to convert the
matrices to Float64, multiply them, and then convert back. For 64-bit
integers this might even be the fastest option allowed by common hardware.
For 32-bit integers, you could investigate whether using Float32 as
intermediate representation suffices.
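
The conversion approach suggested above could be sketched as follows (the function name is illustrative; the result is exact only while every intermediate value stays below 2^53, where Float64 represents integers exactly):

```julia
# Hedged sketch: multiply integer matrices via Float64 so the product
# dispatches to the optimized BLAS path, then round back to Int.
function int_matmul_via_float(a::Matrix{Int}, b::Matrix{Int})
    c = Float64.(a) * Float64.(b)   # BLAS dgemm instead of the generic loop
    return round.(Int, c)
end

a = rand(1:10, 4, 4)
b = rand(1:10, 4, 4)
@assert int_matmul_via_float(a, b) == a * b
```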

-erik

On Wed, Sep 21, 2016 at 7:18 PM, Lutfullah Tomak 
wrote:

> Float matrix multiplication uses heavily optimized openblas but integer
> matrix multiplication is a generic one from julia and there is not much you
> can do to improve it a lot because not all cpus have simd multiplication
> and addition for integers.




-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Does Julia 0.5 leak memory?

2016-09-21 Thread K leo
I am running the 0.5 release now and it behaves in the same way - not 
releasing memory.  I can't say if this only has to do with 0.5 but not 0.4. 
 Probably it happened in the same way with 0.4 but I just didn't pay 
attention then.

Since there is no mechanism in Julia like in C/C++ for the programmer to 
free previously allocated memory, I don't believe that this is due to my 
coding errors (I only allocate memory without freeing it in the code).  I 
hope someone can point out to situations where this problem might occur so 
that I can check them out.  For now all I know is that my code allocates 
memory when it runs and that memory is not getting released even after 
the code has finished.

I will try to see if I can cook up some demo code.

On Wednesday, September 21, 2016 at 9:53:07 PM UTC+8, Páll Haraldsson wrote:
>
> On Sunday, September 18, 2016 at 12:53:19 PM UTC, K leo wrote:
>
>> Any thoughts on what might be the culprit?
>>
>>   | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>>
>
> You say 0.5 in the title (when 0.5 wasn't out). Your later posts are after 
> 0.5 is out. Maybe it doesn't matter, I don't know, maybe the final version 
> is identical (except for versioninfo function..), I just do not know that 
> for a fact, so might as well upgrade just in case.
>
> -- 
> Palli.
>
>
>
>

[julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Lutfullah Tomak
Float matrix multiplication uses heavily optimized openblas but integer matrix 
multiplication is a generic one from julia and there is not much you can do to 
improve it a lot because not all cpus have simd multiplication and addition for 
integers.

[julia-users] Re: Help: Printing method in julia

2016-09-21 Thread Steven G. Johnson


On Wednesday, September 21, 2016 at 4:24:10 PM UTC-4, Weicheng Zhu wrote:
>
> Hi Dr. Johnson,
> Thank you very much for your help! Now I understand how it works.
> This problem has confused me for a long time since I started learning julia.
>
> So if I want a `mytype` object to be printed in the pretty-printed lines 
> by default, I have to define the display method, right?
>

No.  Only override show, not display.   And in Julia 0.4 the 3-argument 
show is called writemime, not show.


[julia-users] Re: Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Joaquim Masset Lacombe Dias Garcia
Thanks again!

I just found this:
http://stackoverflow.com/questions/2550281/floating-point-vs-integer-calculations-on-modern-hardware

According to the information presented there, generic multiplication is 
slower for ints than for floats. 

Em quarta-feira, 21 de setembro de 2016 20:14:34 UTC-3, Steven G. Johnson 
escreveu:
>
> Floating-point matrix multiplication uses a highly optimized BLAS library 
> (OpenBLAS), while integer matrix multiplication uses generic code.
>


[julia-users] Re: Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Steven G. Johnson
Floating-point matrix multiplication uses a highly optimized BLAS library 
(OpenBLAS), while integer matrix multiplication uses generic code.


[julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Joaquim Masset Lacombe Dias Garcia

Does anyone knows why the float matrix multiplication is much faster in the 
following code:

function test()

  n = 2000
  a = rand(1:10, n, n)
  b = rand(n, n)
  @time a*a
  @time b*b
  
  nothing
end

I get:


julia> test()
  6.715335 seconds (9 allocations: 30.518 MB, 0.00% gc time)
  0.120801 seconds (3 allocations: 30.518 MB, 7.88% gc time)

Thanks!

(btw, I tried using A_mul_B! and the time improvement was not 
significant...)


[julia-users] Visualizing a Julia AST (abstract syntax tree) as a tree

2016-09-21 Thread David P. Sanders
Hi,

In case it's useful for anybody, the following notebook shows how to use 
the LightGraphs and TikzGraphs packages
to visualize a Julia abstract syntax tree (Expression object) as an actual 
tree:

https://gist.github.com/dpsanders/5cc1acff2471d27bc583916e00d43387

Currently it requires the master branch of TikzGraphs.jl.

It would be great to have some kind of Julia snippet repository for this 
kind of thing that is less than a package but
provides some kind of useful functionality. 

David.
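
For a quick text-only look at the same structure, an `Expr` can also be walked directly without the graph packages; a minimal sketch:

```julia
# Print a Julia AST as an indented tree (no plotting dependencies;
# the notebook above renders the same structure with TikzGraphs).
function print_tree(ex, depth=0)
    pad = "  " ^ depth
    if isa(ex, Expr)
        println(pad, "Expr(:", ex.head, ")")
        for arg in ex.args
            print_tree(arg, depth + 1)
        end
    else
        println(pad, repr(ex))   # leaves: symbols and literals
    end
end

print_tree(:(a * x + b))
```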




[julia-users] Re: Help: Printing method in julia

2016-09-21 Thread Weicheng Zhu
OK, I see.
It seems that if I define the two show methods directly in the REPL, it 
doesn't work. But it works when I wrap them into a module:)


On Wednesday, September 21, 2016 at 3:24:10 PM UTC-5, Weicheng Zhu wrote:
>
> Hi Dr. Johnson,
> Thank you very much for your help! Now I understand how it works.
> This problem has confused me for a long time since I started learning julia.
>
> So if I want a `mytype` object to be printed in the pretty-printed lines 
> by default, I have to define the display method, right?
> I guess my definition of the display method is not quite correct, although 
> it works in REPL. 
>
>
>> import Base.show
>> type mytype
>>   x::Array{Float64,2}
>> end
>> function show(io::IO, x::mytype)
>>   show(io, x.x)
>> end
>> function show(io::IO, m::MIME"text/plain", x::mytype)
>>   show(io, m, x.x)
>> end
>> show(STDOUT, MIME("text/plain"), mytype(x))
>> function Base.display(x::mytype)
>>   println("mytype object:")
>>   show(STDOUT, MIME("text/plain"), x)
>> end
>> julia> x = rand(5,2)
>> julia> mytype(x)
>> mytype object:
>> 5×2 Array{Float64,2}:
>>  0.05127   0.908138
>>  0.527729  0.835109
>>  0.657212  0.275374
>>  0.119597  0.659259
>>  0.94996   0.36432 
>
>
>
> On Wednesday, September 21, 2016 at 2:27:18 PM UTC-5, Steven G. Johnson 
> wrote:
>>
>>
>>
>> On Wednesday, September 21, 2016 at 2:54:15 PM UTC-4, Weicheng Zhu wrote:
>>>
>>> Hi all,
>>>
>>> I have a few simple questions to ask.
>>>
>>> 1) What function is invoked when I type x in the following example?
>>>
>>
>> display(x), which calls show(STDOUT, MIME("text/plain"), x)  [or 
>> writemime in Julia ≤ 0.4]
>>
>>
>> (Under the hood, this eventually calls a function Base.showarray)
>>
>>
>> 2) When I define a type which takes a matrix as a member, how to define 
>>> the show method to print x as shown above in julia 0.5.
>>> julia> type mytype
>>>x::Array{Float64,2}
>>> end
>>> julia> mytype(x)
>>> mytype([0.923288 0.0157897; 0.439387 0.50823; … ; 0.605268 0.416877; 
>>> 0.223898 0.558542])
>>
>>
>> You want to define two show methods: show(io::IO, x::mytype), which calls 
>> show(io, x.x) and outputs everything on a single line, and show(io::IO, 
>> m::MIME"text/plain", x::mytype), which calls show(io, m, x.x) and outputs 
>> multiple pretty-printed lines.
>>
>>
>> (You can also define additional show methods, e.g. for HTML or Markdown 
>> output, for even nicer display in an environment like IJulia that supports 
>> other MIME types.)
>>
>

Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Páll Haraldsson
On Wednesday, September 21, 2016 at 4:50:26 PM UTC, Fengyang Wang wrote:
>
> but type piracy is bad practice. This method should really be in Base.
>

As always, if something should be in Base or Julia, then I think a PR is 
welcome.

[Maybe I do not fully understand this (yes, type piracy I guess equals 
accessing internal variables, which would be private (to not violate Parnas' 
Principles) in other languages).

I like how Julia avoids Hoare's self-admitted billion dollar mistake. It 
seems to violate Parnas' but since no type can subtype a concrete type, it 
may not, or at least that violation can always be avoided(?).]

-- 
Palli.



[julia-users] Re: Help: Printing method in julia

2016-09-21 Thread Weicheng Zhu
Hi Dr. Johnson,
Thank you very much for your help! Now I understand how it works.
This problem has confused me for a long time since I started learning julia.

So if I want a `mytype` object to be printed in the pretty-printed lines by 
default, I have to define the display method, right?
I guess my definition of the display method is not quite correct, although 
it works in REPL. 


> import Base.show
> type mytype
>   x::Array{Float64,2}
> end
> function show(io::IO, x::mytype)
>   show(io, x.x)
> end
> function show(io::IO, m::MIME"text/plain", x::mytype)
>   show(io, m, x.x)
> end
> show(STDOUT, MIME("text/plain"), mytype(x))
> function Base.display(x::mytype)
>   println("mytype object:")
>   show(STDOUT, MIME("text/plain"), x)
> end
> julia> x = rand(5,2)
> julia> mytype(x)
> mytype object:
> 5×2 Array{Float64,2}:
>  0.05127   0.908138
>  0.527729  0.835109
>  0.657212  0.275374
>  0.119597  0.659259
>  0.94996   0.36432 



On Wednesday, September 21, 2016 at 2:27:18 PM UTC-5, Steven G. Johnson 
wrote:
>
>
>
> On Wednesday, September 21, 2016 at 2:54:15 PM UTC-4, Weicheng Zhu wrote:
>>
>> Hi all,
>>
>> I have a few simple questions to ask.
>>
>> 1) What function is invoked when I type x in the following example?
>>
>
> display(x), which calls show(STDOUT, MIME("text/plain"), x)  [or writemime 
> in Julia ≤ 0.4]
>
>
> (Under the hood, this eventually calls a function Base.showarray)
>
>
> 2) When I define a type which takes a matrix as a member, how to define 
>> the show method to print x as shown above in julia 0.5.
>> julia> type mytype
>>x::Array{Float64,2}
>> end
>> julia> mytype(x)
>> mytype([0.923288 0.0157897; 0.439387 0.50823; … ; 0.605268 0.416877; 
>> 0.223898 0.558542])
>
>
> You want to define two show methods: show(io::IO, x::mytype), which calls 
> show(io, x.x) and outputs everything on a single line, and show(io::IO, 
> m::MIME"text/plain", x::mytype), which calls show(io, m, x.x) and outputs 
> multiple pretty-printed lines.
>
>
> (You can also define additional show methods, e.g. for HTML or Markdown 
> output, for even nicer display in an environment like IJulia that supports 
> other MIME types.)
>


[julia-users] Re: Does Julia 0.5 leak memory?

2016-09-21 Thread Luke Stagner
I just tried my code again using the 0.5 release and the memory is still 
not releasing. I thought maybe it was caused by using HDF5, but even if I 
commented out that bit of the code it still doesn't free up the memory.

-Luke

On Wednesday, September 21, 2016 at 6:53:07 AM UTC-7, Páll Haraldsson wrote:
>
> On Sunday, September 18, 2016 at 12:53:19 PM UTC, K leo wrote:
>
>> Any thoughts on what might be the culprit?
>>
>>   | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>>
>
> You say 0.5 in the title (when 0.5 wasn't out). Your later posts are after 
> 0.5 is out. Maybe it doesn't matter, I don't know, maybe the final version 
> is identical (except for versioninfo function..), I just do not know that 
> for a fact, so might as well upgrade just in case.
>
> -- 
> Palli.
>
>
>
>

Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Milan Bouchet-Valat
Le mercredi 21 septembre 2016 à 12:15 -0700, Chris Rackauckas a écrit :
> I see. So what I am getting is that, in my codes, 
> 
> 1. I will need to add @fastmath anywhere I want these optimizations
> to show up. That should be easy since I can just add it at the
> beginnings of loops where I have @inbounds which already covers every
> major inner loop I have. Easy find/replace fix. 
> 
> > 2. For my own setup, I am going to need to build from source to get
> > all the optimizations? I would've thought the point of using the Linux
> > repositories instead of the generic binaries is that they would be
> > set up to build for your system. That's just a non-expert's
> > misconception I guess? I think that should be highlighted somewhere.
No, the point of using Linux packages is to integrate easily with the
rest of the system (e.g. automated updates, installation in path
without manual tweaking), and to use your distribution's libraries to
avoid duplication.

That's just how it works for any software in a distribution. You need
to use Gentoo if you want software to be tuned at build time to your
particular system.


Regards

> > Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a
> > écrit : 
> > > The Windows one is using the pre-built binary. The Linux one uses
> > the 
> > > COPR nightly (I assume that should build with all the goodies?) 
> > The Copr RPMs are subject to the same constraint as official
> > binaries: 
> > we need them to work on most machines. So they don't enable FMA
> > (nor 
> > e.g. AVX) either. 
> > 
> > It would be nice to find a way to ship with several pre-built
> > sysimg 
> > files and using the highest instruction set supported by your CPU. 
> > 
> > 
> > Regards 
> > 
> > > > > Hi, 
> > > > >   First of all, does LLVM essentially fma or muladd
> > expressions 
> > > > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that
> > one 
> > > > > explicitly use `muladd` and `fma` on these types of
> > instructions 
> > > > > (is there a macro for making this easier)? 
> > > > > 
> > > > 
> > > > You will generally need to use muladd, unless you use
> > @fastmath. 
> > > > 
> > > >   
> > > > >   Secondly, I am wondering if my setup is not applying these 
> > > > > operations correctly. Here's my test code: 
> > > > > 
> > > > 
> > > > If you're using the prebuilt downloads (as opposed to building
> > from 
> > > > source), you will need to rebuild the sysimg (look in 
> > > > contrib/build_sysimg.jl) as we build for the lowest-common 
> > > > architecture. 
> > > > 
> > > > -Simon 
> > > > 


[julia-users] Re: Help: Printing method in julia

2016-09-21 Thread Steven G. Johnson


On Wednesday, September 21, 2016 at 2:54:15 PM UTC-4, Weicheng Zhu wrote:
>
> Hi all,
>
> I have a few simple questions to ask.
>
> 1) What function is invoked when I type x in the following example?
>

display(x), which calls show(STDOUT, MIME("text/plain"), x)  [or writemime 
in Julia ≤ 0.4]


(Under the hood, this eventually calls a function Base.showarray)


2) When I define a type which takes a matrix as a member, how to define the 
> show method to print x as shown above in julia 0.5.
> julia> type mytype
>x::Array{Float64,2}
> end
> julia> mytype(x)
> mytype([0.923288 0.0157897; 0.439387 0.50823; … ; 0.605268 0.416877; 
> 0.223898 0.558542])


You want to define two show methods: show(io::IO, x::mytype), which calls 
show(io, x.x) and outputs everything on a single line, and show(io::IO, 
m::MIME"text/plain", x::mytype), which calls show(io, m, x.x) and outputs 
multiple pretty-printed lines.


(You can also define additional show methods, e.g. for HTML or Markdown 
output, for even nicer display in an environment like IJulia that supports 
other MIME types.)
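
A minimal sketch of those two methods for the type from the question (Julia 
0.5 syntax; the "mytype containing:" header line is an illustrative choice, 
not a requirement):

```julia
type mytype
    x::Array{Float64,2}
end

# Compact one-line form, used when the value is embedded in other output:
Base.show(io::IO, t::mytype) = print(io, "mytype(", t.x, ")")

# Multi-line pretty-printed form, used by display() at the REPL:
function Base.show(io::IO, m::MIME"text/plain", t::mytype)
    println(io, "mytype containing:")
    show(io, m, t.x)
end
```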


Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
I see. So what I am getting is that, in my codes, 

1. I will need to add @fastmath anywhere I want these optimizations to show 
up. That should be easy since I can just add it at the beginnings of loops 
where I have @inbounds which already covers every major inner loop I have. 
Easy find/replace fix. 

2. For my own setup, I am going to need to build from source to get all the 
optimizations? I would've thought the point of using the Linux repositories 
instead of the generic binaries is that they would be set up to build for 
your system. That's just a non-expert's misconception I guess? I think that 
should be highlighted somewhere.

On Wednesday, September 21, 2016 at 12:11:34 PM UTC-7, Milan Bouchet-Valat 
wrote:
>
> Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a écrit : 
> > The Windows one is using the pre-built binary. The Linux one uses the 
> > COPR nightly (I assume that should build with all the goodies?) 
> The Copr RPMs are subject to the same constraint as official binaries: 
> we need them to work on most machines. So they don't enable FMA (nor 
> e.g. AVX) either. 
>
> It would be nice to find a way to ship with several pre-built sysimg 
> files and using the highest instruction set supported by your CPU. 
>
>
> Regards 
>
> > > > Hi, 
> > > >   First of all, does LLVM essentially fma or muladd expressions 
> > > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one 
> > > > explicitly use `muladd` and `fma` on these types of instructions 
> > > > (is there a macro for making this easier)? 
> > > > 
> > > 
> > > You will generally need to use muladd, unless you use @fastmath. 
> > > 
> > >   
> > > >   Secondly, I am wondering if my setup is not applying these 
> > > > operations correctly. Here's my test code: 
> > > > 
> > > 
> > > If you're using the prebuilt downloads (as opposed to building from 
> > > source), you will need to rebuild the sysimg (look in 
> > > contrib/build_sysimg.jl) as we build for the lowest-common 
> > > architecture. 
> > > 
> > > -Simon 
> > > 
>


Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Milan Bouchet-Valat
Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a écrit :
> The Windows one is using the pre-built binary. The Linux one uses the
> COPR nightly (I assume that should build with all the goodies?)
The Copr RPMs are subject to the same constraint as official binaries:
we need them to work on most machines. So they don't enable FMA (nor
e.g. AVX) either.

It would be nice to find a way to ship with several pre-built sysimg
files and using the highest instruction set supported by your CPU.


Regards

> > > Hi,
> > >   First of all, does LLVM essentially fma or muladd expressions
> > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
> > > explicitly use `muladd` and `fma` on these types of instructions
> > > (is there a macro for making this easier)?
> > > 
> > 
> > You will generally need to use muladd, unless you use @fastmath.
> > 
> >  
> > >   Secondly, I am wondering if my setup is not applying these
> > > operations correctly. Here's my test code:
> > > 
> > 
> > If you're using the prebuilt downloads (as opposed to building from
> > source), you will need to rebuild the sysimg (look in
> > contrib/build_sysimg.jl) as we build for the lowest-common
> > architecture.
> > 
> > -Simon
> > 


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Chris Rackauckas
Usually I'm plotting the output of really long differential equation 
solutions. The one I am mentioning is from a really long stochastic 
differential equation solution (publication coming soon): 19 lines with 
likely millions of points, thrown together into one figure or spanning 
multiple. I can't really explain "faster" other than, when I ran the plot 
command afterwards (on smaller test cases), PyPlot would take forever but GR 
would get the plot done much quicker, so for the longer run I went with GR 
and it worked. I am not much of a plot guy, so my method is: use Plots.jl, 
switch backends until something works, and if I can't find an easy solution 
like that, go ask Tom :). What I am saying is, if you do some experiments, 
GR will plot faster than something like Gadfly or PyPlot (Plotly gave 
issues, but that was in June so it may no longer be a problem), so my hint 
is to give the GR backend a try if you're ever in a similar case.

On Wednesday, September 21, 2016 at 11:54:11 AM UTC-7, Andreas Lobinger 
wrote:
>
> Hello colleague,
>
> On Wednesday, September 21, 2016 at 8:34:21 PM UTC+2, Chris Rackauckas 
> wrote:
>>
>> I've managed to plot quite a few large datasets. GR through Plots.jl 
>> works very well for this. I tend to still prefer the defaults of PyPlot, 
>> but GR is just so much faster that I switch the backend whenever the amount 
>> of data gets unruly (larger than like 5-10GB, and it's worked to save a 
>> raster image from data larger than 40-50 GB). Plots + GR is a good combo
>>
>
> Could you explain this at greater length, especially the 'faster'? It sounds 
> like you're plotting a few hundred million items/lines.
>


[julia-users] Help: Printing method in julia

2016-09-21 Thread Weicheng Zhu


Hi all,

I have a few simple questions to ask.

1) What function is invoked when I type x in the following example? I tried 
print(x) and show(x); neither seems to print x in this way.

julia> x = rand(5,2);

julia> x
5×2 Array{Float64,2}:
 0.923288  0.0157897
 0.439387  0.50823
 0.233286  0.132342
 0.605268  0.416877
 0.223898  0.558542


2) When I define a type that has a matrix as a member, how do I define the 
show method to print x as shown above, in Julia 0.5?

julia> type mytype
           x::Array{Float64,2}
       end

julia> mytype(x)
mytype([0.923288 0.0157897; 0.439387 0.50823; … ; 0.605268 0.416877; 
0.223898 0.558542])


Thanks for any help on this!


Best,

Weicheng


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Andreas Lobinger
Hello colleague,

On Wednesday, September 21, 2016 at 8:34:21 PM UTC+2, Chris Rackauckas 
wrote:
>
> I've managed to plot quite a few large datasets. GR through Plots.jl works 
> very well for this. I tend to still prefer the defaults of PyPlot, but GR 
> is just so much faster that I switch the backend whenever the amount of 
> data gets unruly (larger than like 5-10GB, and it's worked to save a raster 
> image from data larger than 40-50 GB). Plots + GR is a good combo
>

Could you explain this at greater length, especially the 'faster'? It sounds 
like you're plotting a few hundred million items/lines.


[julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
The Windows one is using the pre-built binary. The Linux one uses the COPR 
nightly (I assume that should build with all the goodies?)

On Wednesday, September 21, 2016 at 9:00:02 AM UTC-7, Simon Byrne wrote:
>
> On Wednesday, 21 September 2016 06:56:45 UTC+1, Chris Rackauckas wrote:
>>
>> Hi,
>>   First of all, does LLVM essentially fma or muladd expressions like 
>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
>> `muladd` and `fma` on these types of instructions (is there a macro for 
>> making this easier)?
>>
>
> You will generally need to use muladd, unless you use @fastmath.
>
>  
>
>>   Secondly, I am wondering if my setup is not applying these operations 
>> correctly. Here's my test code:
>>
>
> If you're using the prebuilt downloads (as opposed to building from 
> source), you will need to rebuild the sysimg (look in 
> contrib/build_sysimg.jl) as we build for the lowest-common architecture.
>
> -Simon
>


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Chris Rackauckas
I've managed to plot quite a few large datasets. GR through Plots.jl works 
very well for this. I tend to still prefer the defaults of PyPlot, but GR 
is just so much faster that I switch the backend whenever the amount of 
data gets unruly (larger than like 5-10GB, and it's worked to save a raster 
image from data larger than 40-50 GB). Plots + GR is a good combo.
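
For reference, switching backends in Plots.jl is a one-liner. A minimal 
sketch along the lines described above (Julia 0.5 syntax; the array size and 
filename here are arbitrary illustrations, not from the original post):

```julia
using Plots

gr()                             # switch to the GR backend: fast for huge data
x = linspace(0, 2pi, 10^6)       # a million points
plot(x, sin.(x), legend = false)
savefig("big.png")               # save a raster image rather than display live

# pyplot()                       # switch back when nicer defaults matter more
```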

On Wednesday, September 21, 2016 at 4:52:43 AM UTC-7, Igor wrote:
>
> Hello!
> did you manage to plot big data sets? You can try to use my small package 
> for this ( https://github.com/ig-or/qwtwplot.jl ); it's very 
> interesting for me how it handles big data sets.
>
> Best regards, Igor
>
>
> четверг, 16 июня 2016 г., 0:08:42 UTC+3 пользователь CrocoDuck O'Ducks 
> написал:
>>
>> Hi, thank you very much, really appreciated. GR seems pretty much what I 
>> need. I like I can use Plots.jl with it. PlotlyJS.jl is very hot, I guess I 
>> will use it when I need interactivity. I will look into OpenGL related 
>> visualization tools for more advanced plots/renders.
>>
>> I just have a quick question. I just did a quick test with GR plotting 
>> two 1 second long sine waves sampled at 192 kHz, one of frequency 100 Hz 
>> and one of frequency 10 kHz. The 100 Hz looks fine but the 10 kHz plot has 
>> blank areas (see attached pictures). I guess it is due to the density of 
>> lines... probably solved by making the plot bigger?
>>
>>

Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Randy Zwitch
OH! That makes a big difference in usability. :)

On Wednesday, September 21, 2016 at 12:50:26 PM UTC-4, Fengyang Wang wrote:
>
> There is no need to modify a copy; only the Nullable is immutable and not 
> its underlying value. Just instead of modifying ec.title, modify 
> get(ec.title). Like setfield!(get(ec.title), k, v). In short scripts, I 
> often define getindex(x::Nullable) = get(x) so that I can write ec.title[] 
> instead of get(ec.title), but type piracy is bad practice. This method 
> should really be in Base.
>
> On Wednesday, September 21, 2016 at 9:59:21 AM UTC-4, Randy Zwitch wrote:
>>
>> So get() to check if a value is there, and if there is modify a copy 
>> then overwrite?
>>
>> If that's the case, it might be worth the mental overhead to use Nullable 
>> types when mentally I'm viewing everything as a consistently mutating 
>> object until the desired result is achieved.
>>  
>> On Wednesday, September 21, 2016 at 9:49:44 AM UTC-4, Yichao Yu wrote:
>>>
>>> On Sep 21, 2016 9:42 AM, "Randy Zwitch"  
>>> wrote:
>>> >
>>> > I frequently have a design pattern of Union{Title, Void}. I was 
>>> thinking that I could redefine this as title::Nullable{Title}. However, 
>>> once I try to modify fields inside the Title type using setfield!(ec.title, 
>>> k, v), I get this error message:
>>> >
>>> > LoadError: type Nullable is immutable while loading In[19], in 
>>> expression starting on line 4
>>> >
>>> >
>>> >
>>> > My question is, why is the Nullable type immutable? My original 
>>> thought was that my Nullable definition was saying "There is either a Title 
>>> type here or nothing/missing", and maybe I know the value now or maybe I 
>>> know it later. But it seems the definition is actually "There could be a 
>>> Title type here or missing, and whatever you see first is what you will 
>>> always have"
>>> >
>>> > Is there a better way to express the former behavior other than as a 
>>> Union type? My use case is building JSON strings as specifications of 
>>> graphs for JavaScript libraries, so nearly every field of every type is 
>>> possibly missing for any given specification.
>>>
>>> Assign the whole object instead of mutating it.
>>>
>>> >
>>> > @with_kw type EChart <: AbstractEChartType
>>> > # title::Union{Title,Void} = Title()
>>> > title::Nullable{Title} = Title()
>>> > legend::Union{Legend,Void} = nothing
>>> > grid::Union{Grid,Void} = nothing
>>> > xAxis::Union{Array{Axis,1},Void} = nothing
>>> > yAxis::Union{Array{Axis,1},Void} = nothing
>>> > polar::Union{Polar,Void} = nothing
>>> > radiusAxis::Union{RadiusAxis,Void} = nothing
>>> > angleAxis::Union{AngleAxis,Void} = nothing
>>> > radar::Union{Radar,Void} = nothing
>>> > dataZoom::Union{DataZoom,Void} = nothing
>>> > visualMap::Union{VisualMap,Void} = nothing
>>> > tooltip::Union{Tooltip,Void} = nothing
>>> > toolbox::Union{Toolbox,Void} = Toolbox()
>>> > geo::Union{Geo,Void} = nothing
>>> > parallel::Union{Parallel,Void} = nothing
>>> > parallelAxis::Union{ParallelAxis,Void} = nothing
>>> > timeline::Union{Timeline,Void} = nothing
>>> > series::Union{Array{Series,1},Void} = nothing
>>> > color::Union{AbstractVector,Void} = nothing
>>> > backgroundColor::Union{String,Void} = nothing
>>> > textStyle::Union{TextStyle,Void} = nothing
>>> > animation::Union{Bool,Void} = nothing
>>> > animationDuration::Union{Int,Void} = nothing
>>> > animationEasing::Union{String,Void} = nothing
>>> > animationDelay::Union{Int,Void} = nothing
>>> > animationDurationUpdate::Union{Int,Void} = nothing
>>> > animationEasingUpdate::Union{String,Void} = nothing
>>> > animationDelayUpdate::Union{Int,Void} = nothing
>>> > end
>>>
>>

Re: [julia-users] RE: [julia-news] ANN: Julia v0.5.0 released!

2016-09-21 Thread 'Tobias Knopp' via julia-users
I also want to thank you, Tony, for the excellent release management! 0.5 is 
an awesome release, and in our research group the switch was absolutely 
smooth.

Thanks,

Tobi

Am Dienstag, 20. September 2016 21:28:04 UTC+2 schrieb Jeff Bezanson:
>
> Thanks Tony for your very hard work seeing this through to completion. 
>
> 0.5 is a big improvement, and we have some equally juicy stuff planned for 
> 0.6! 
>
>
> On Tue, Sep 20, 2016 at 12:23 PM, David Anthoff  > wrote: 
> > Great news, thanks to everyone who contributed to this great release! 
> I’ve 
> > been using the release candidates for a couple of weeks, and 0.5 is a 
> > fantastic release. 
> > 
> > 
> > 
> > Best, 
> > 
> > David 
> > 
> > 
> > 
> > From: julia...@googlegroups.com  [mailto:
> julia...@googlegroups.com ] On 
> > Behalf Of Tony Kelman 
> > Sent: Tuesday, September 20, 2016 2:09 AM 
> > To: julia-news  
> > Cc: Julia Users  
> > Subject: [julia-news] ANN: Julia v0.5.0 released! 
> > 
> > 
> > 
> > At long last, we can announce the final release of Julia 0.5.0! See the 
> > release notes at 
> https://github.com/JuliaLang/julia/blob/release-0.5/NEWS.md 
> > for more details, and expect a blog post with some highlights within the 
> > next few days. Binaries are available from the usual place, and please 
> > report all issues to either the issue tracker or email the julia-users 
> list. 
> > Don't CC julia-news, which is intended to be low-volume, if you reply to 
> > this message. 
> > 
> > 
> > 
> > Many thanks to all the contributors, package authors, users and 
> reporters of 
> > issues who helped us get here. We'll be releasing regular monthly bugfix 
> > backports from the 0.5.x line, while major feature work is ongoing on 
> master 
> > for 0.6-dev. Enjoy! 
> > 
> > 
> > We haven't made the change just yet, but to package authors: please be 
> aware 
> > that `release` on Travis CI for `language: julia` will change meaning to 
> 0.5 
> > shortly. So if you want to continue supporting Julia 0.4 in your 
> package, 
> > please update your .travis.yml file to have an "- 0.4" entry under 
> `julia:` 
> > versions. When you want to drop support for Julia 0.4, update your 
> REQUIRE 
> > file to list `julia 0.5` as a lower bound, and the next time you tag the 
> > package be sure to increment the minor or major version number via 
> > `PkgDev.tag(pkgname, :minor)`. 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "julia-news" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to julia-news+...@googlegroups.com . 
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/julia-news/387852e5-6a73-4dd0-84b5-9914f2d5c35a%40googlegroups.com.
>  
>
> > For more options, visit https://groups.google.com/d/optout. 
>
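
The Travis CI advice quoted above amounts to a .travis.yml along these lines 
(a sketch; the exact version list is illustrative):

```yaml
# .travis.yml: keep testing Julia 0.4 explicitly once `release` means 0.5
language: julia
julia:
  - 0.4
  - 0.5
  - nightly
```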


Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Fengyang Wang
There is no need to modify a copy; only the Nullable is immutable and not 
its underlying value. Just instead of modifying ec.title, modify 
get(ec.title). Like setfield!(get(ec.title), k, v). In short scripts, I 
often define getindex(x::Nullable) = get(x) so that I can write ec.title[] 
instead of get(ec.title), but type piracy is bad practice. This method 
should really be in Base.
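
A small sketch of the distinction, using a simplified Title/EChart stand-in 
for the types from the question (Julia 0.5 syntax; these definitions are 
illustrative, not the actual ECharts.jl types):

```julia
type Title
    text::String
end

type EChart
    title::Nullable{Title}
end

ec = EChart(Nullable(Title("old")))

# The Nullable wrapper is immutable, but the Title it wraps is mutable:
get(ec.title).text = "new"           # OK: mutates the wrapped Title in place
# setfield!(ec.title, :value, ...)   # error: type Nullable is immutable

# To replace or clear the value, assign a whole new Nullable:
ec.title = Nullable{Title}()         # now represents "missing"
```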

On Wednesday, September 21, 2016 at 9:59:21 AM UTC-4, Randy Zwitch wrote:
>
> So get() to check if a value is there, and if there is modify a copy then 
> overwrite?
>
> If that's the case, it might be worth the mental overhead to use Nullable 
> types when mentally I'm viewing everything as a consistently mutating 
> object until the desired result is achieved.
>  
> On Wednesday, September 21, 2016 at 9:49:44 AM UTC-4, Yichao Yu wrote:
>>
>> On Sep 21, 2016 9:42 AM, "Randy Zwitch"  wrote:
>> >
>> > I frequently have a design pattern of Union{Title, Void}. I was 
>> thinking that I could redefine this as title::Nullable{Title}. However, 
>> once I try to modify fields inside the Title type using setfield!(ec.title, 
>> k, v), I get this error message:
>> >
>> > LoadError: type Nullable is immutable while loading In[19], in 
>> expression starting on line 4
>> >
>> >
>> >
>> > My question is, why is the Nullable type immutable? My original thought 
>> was that my Nullable definition was saying "There is either a Title type 
>> here or nothing/missing", and maybe I know the value now or maybe I know it 
>> later. But it seems the definition is actually "There could be a Title type 
>> here or missing, and whatever you see first is what you will always have"
>> >
>> > Is there a better way to express the former behavior other than as a 
>> Union type? My use case is building JSON strings as specifications of 
>> graphs for JavaScript libraries, so nearly every field of every type is 
>> possibly missing for any given specification.
>>
>> Assign the whole object instead of mutating it.
>>
>> >
>> > @with_kw type EChart <: AbstractEChartType
>> > # title::Union{Title,Void} = Title()
>> > title::Nullable{Title} = Title()
>> > legend::Union{Legend,Void} = nothing
>> > grid::Union{Grid,Void} = nothing
>> > xAxis::Union{Array{Axis,1},Void} = nothing
>> > yAxis::Union{Array{Axis,1},Void} = nothing
>> > polar::Union{Polar,Void} = nothing
>> > radiusAxis::Union{RadiusAxis,Void} = nothing
>> > angleAxis::Union{AngleAxis,Void} = nothing
>> > radar::Union{Radar,Void} = nothing
>> > dataZoom::Union{DataZoom,Void} = nothing
>> > visualMap::Union{VisualMap,Void} = nothing
>> > tooltip::Union{Tooltip,Void} = nothing
>> > toolbox::Union{Toolbox,Void} = Toolbox()
>> > geo::Union{Geo,Void} = nothing
>> > parallel::Union{Parallel,Void} = nothing
>> > parallelAxis::Union{ParallelAxis,Void} = nothing
>> > timeline::Union{Timeline,Void} = nothing
>> > series::Union{Array{Series,1},Void} = nothing
>> > color::Union{AbstractVector,Void} = nothing
>> > backgroundColor::Union{String,Void} = nothing
>> > textStyle::Union{TextStyle,Void} = nothing
>> > animation::Union{Bool,Void} = nothing
>> > animationDuration::Union{Int,Void} = nothing
>> > animationEasing::Union{String,Void} = nothing
>> > animationDelay::Union{Int,Void} = nothing
>> > animationDurationUpdate::Union{Int,Void} = nothing
>> > animationEasingUpdate::Union{String,Void} = nothing
>> > animationDelayUpdate::Union{Int,Void} = nothing
>> > end
>>
>

[julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Simon Byrne
On Wednesday, 21 September 2016 06:56:45 UTC+1, Chris Rackauckas wrote:
>
> Hi,
>   First of all, does LLVM essentially fma or muladd expressions like 
> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
> `muladd` and `fma` on these types of instructions (is there a macro for 
> making this easier)?
>

You will generally need to use muladd, unless you use @fastmath.

 

>   Secondly, I am wondering if my setup is not applying these operations 
> correctly. Here's my test code:
>

If you're using the prebuilt downloads (as opposed to building from 
source), you will need to rebuild the sysimg (look in 
contrib/build_sysimg.jl) as we build for the lowest-common architecture.

-Simon
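
A quick way to check this in practice, sketched for Julia 0.5 (the function 
names here are illustrative):

```julia
# Explicit fusion: muladd(a, b, c) computes a*b + c and may emit an fma
# instruction when the target CPU supports it:
f(a1, x1, a2, x2) = muladd(a1, x1, a2 * x2)

# @fastmath permits LLVM to fuse the plain expression on its own:
g(a1, x1, a2, x2) = @fastmath a1 * x1 + a2 * x2

# Look for vfmadd* instructions in the output; they only appear if the
# sysimg was built for an FMA-capable architecture:
@code_native f(1.0, 2.0, 3.0, 4.0)
```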


[julia-users] http://pkg.julialang.org/ needs updating.. I guess 0.5 is now recommended (and specific packages?)

2016-09-21 Thread Páll Haraldsson

At:

http://pkg.julialang.org/

http://pkg.julialang.org/pulse.html

I see:

"*v0.4.6 (current release)* — v0.5-pre (unstable)"

The former is no longer true, and I guess the latter isn't either. Isn't 
"unstable" meant to refer to Julia itself? Or packages..? Both?

This is just a reminder to update..




Great to see: "Listing all 1127 registered packages". Then there are some 
more, unregistered unknowns, but of those registered:
unknowns.. but of those registered:



I see at least 5 web frameworks now (this "gluttony" is getting to be a 
problem..), and 29 hits on "web": yes, including infrastructure code, e.g. HTTP2.


New ones I didn't know about:
http://github.com/codeneomatrix/Merly.jl

http://github.com/EricForgy/Pages.jl


and one I've yet to look into (though it's the newest one):

http://github.com/wookay/Bukdu.jl


Any recommendations? Great to see that it's not all just math-related.

-- 
Palli.


[julia-users] reading 4-level nested child nodes using lightxml

2016-09-21 Thread varun7rs
I have a rather long XML file which I need to parse, and I'm not able to 
proceed past a certain point. The code throws errors when I try to access 
the child nodes of a certain element.

[The XML snippet was stripped by the list archive; it showed a document 
whose root contains a <requests> element with <slice> children nested about 
four levels deep.]
I suppressed a lot of content; I hope that's okay. The main problem I have 
here is when I access the requests element as

requests = get_elements_by_tagname(xroot, "requests")
slice_set = get_elements_by_tagname(requests, "slice")

for slice in slice_set
:
:
end

I get an error saying get_elements_by_tagname has no method matching. 
I tried find_element(requests[1], "slice"), but that didn't work either. Is 
it possible to parse this hugely complicated file, and if so, how can it be 
handled?
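
For what it's worth, in LightXML get_elements_by_tagname returns a 
Vector{XMLElement}, which would explain a "no method matching" error when 
that vector is passed back in as if it were an element. A sketch of drilling 
down one level at a time (the filename and element names are assumed from 
the post):

```julia
using LightXML

xdoc = parse_file("network.xml")    # hypothetical filename
xroot = root(xdoc)

# get_elements_by_tagname returns a Vector{XMLElement}; index into it
# before descending to the next level:
requests = get_elements_by_tagname(xroot, "requests")[1]

for slice in get_elements_by_tagname(requests, "slice")
    # ... inspect attributes/children of each <slice> here
end

free(xdoc)    # release the libxml2 document when done
```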


[julia-users] Re: Problem with 0.4.7 mac os distribution

2016-09-21 Thread Páll Haraldsson
On Tuesday, September 20, 2016 at 7:16:41 PM UTC, Tony Kelman wrote:
>
> try removing the copy of 0.4.7 you downloaded and replacing it with a 
> fresh copy.


Maybe even try out 0.5 anyway, since it is out.

[Not sure what you mean by "distro appears to download OK"; "Julia + Juno 
IDE bundles (v0.3.12)" is the only "distro" I can think of, and there seems 
to have been no distro/bundle for 0.4.x or 0.5. I got Juno separately from 
0.5 as instructed, and that worked (after a small workaround, which may have 
been an issue with my machine and/or RC4 at the time).]

-- 
Palli.




Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Randy Zwitch
So get() to check if a value is there, and if there is modify a copy then 
overwrite?

If that's the case, it might be worth the mental overhead to use Nullable 
types when mentally I'm viewing everything as a consistently mutating 
object until the desired result is achieved.
 
On Wednesday, September 21, 2016 at 9:49:44 AM UTC-4, Yichao Yu wrote:
>
> On Sep 21, 2016 9:42 AM, "Randy Zwitch"  > wrote:
> >
> > I frequently have a design pattern of Union{Title, Void}. I was thinking 
> that I could redefine this as title::Nullable{Title}. However, once I try 
> to modify fields inside the Title type using setfield!(ec.title, k, v), I 
> get this error message:
> >
> > LoadError: type Nullable is immutable while loading In[19], in 
> expression starting on line 4
> >
> >
> >
> > My question is, why is the Nullable type immutable? My original thought 
> was that my Nullable definition was saying "There is either a Title type 
> here or nothing/missing", and maybe I know the value now or maybe I know it 
> later. But it seems the definition is actually "There could be a Title type 
> here or missing, and whatever you see first is what you will always have"
> >
> > Is there a better way to express the former behavior other than as a 
> Union type? My use case is building JSON strings as specifications of 
> graphs for JavaScript libraries, so nearly every field of every type is 
> possibly missing for any given specification.
>
> Assign the whole object instead of mutating it.
>
> >
> > @with_kw type EChart <: AbstractEChartType
> > # title::Union{Title,Void} = Title()
> > title::Nullable{Title} = Title()
> > legend::Union{Legend,Void} = nothing
> > grid::Union{Grid,Void} = nothing
> > xAxis::Union{Array{Axis,1},Void} = nothing
> > yAxis::Union{Array{Axis,1},Void} = nothing
> > polar::Union{Polar,Void} = nothing
> > radiusAxis::Union{RadiusAxis,Void} = nothing
> > angleAxis::Union{AngleAxis,Void} = nothing
> > radar::Union{Radar,Void} = nothing
> > dataZoom::Union{DataZoom,Void} = nothing
> > visualMap::Union{VisualMap,Void} = nothing
> > tooltip::Union{Tooltip,Void} = nothing
> > toolbox::Union{Toolbox,Void} = Toolbox()
> > geo::Union{Geo,Void} = nothing
> > parallel::Union{Parallel,Void} = nothing
> > parallelAxis::Union{ParallelAxis,Void} = nothing
> > timeline::Union{Timeline,Void} = nothing
> > series::Union{Array{Series,1},Void} = nothing
> > color::Union{AbstractVector,Void} = nothing
> > backgroundColor::Union{String,Void} = nothing
> > textStyle::Union{TextStyle,Void} = nothing
> > animation::Union{Bool,Void} = nothing
> > animationDuration::Union{Int,Void} = nothing
> > animationEasing::Union{String,Void} = nothing
> > animationDelay::Union{Int,Void} = nothing
> > animationDurationUpdate::Union{Int,Void} = nothing
> > animationEasingUpdate::Union{String,Void} = nothing
> > animationDelayUpdate::Union{Int,Void} = nothing
> > end
>


[julia-users] Re: Does Julia 0.5 leak memory?

2016-09-21 Thread Páll Haraldsson
On Sunday, September 18, 2016 at 12:53:19 PM UTC, K leo wrote:

> Any thoughts on what might be the culprit?
>
> Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
> Official http://julialang.org/ release
>

You say 0.5 in the title, but at the time 0.5 wasn't out; your later posts 
are after the release. Maybe it doesn't matter: the final version may be 
identical (except for the versioninfo function..), but I don't know that 
for a fact, so you might as well upgrade just in case.

-- 
Palli.





Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Yichao Yu
On Sep 21, 2016 9:42 AM, "Randy Zwitch"  wrote:
>
> I frequently have a design pattern of Union{Title, Void}. I was thinking
that I could redefine this as title::Nullable{Title}. However, once I try
to modify fields inside the Title type using setfield!(ec.title, k, v), I
get this error message:
>
> LoadError: type Nullable is immutable while loading In[19], in expression
starting on line 4
>
>
>
> My question is, why is the Nullable type immutable? My original thought
was that my Nullable definition was saying "There is either a Title type
here or nothing/missing", and maybe I know the value now or maybe I know it
later. But it seems the definition is actually "There could be a Title type
here or missing, and whatever you see first is what you will always have"
>
> Is there a better way to express the former behavior other than as a
Union type? My use case is building JSON strings as specifications of
graphs for JavaScript libraries, so nearly every field of every type is
possibly missing for any given specification.

Assign the whole object instead of mutating it.

>
> @with_kw type EChart <: AbstractEChartType
> # title::Union{Title,Void} = Title()
> title::Nullable{Title} = Title()
> legend::Union{Legend,Void} = nothing
> grid::Union{Grid,Void} = nothing
> xAxis::Union{Array{Axis,1},Void} = nothing
> yAxis::Union{Array{Axis,1},Void} = nothing
> polar::Union{Polar,Void} = nothing
> radiusAxis::Union{RadiusAxis,Void} = nothing
> angleAxis::Union{AngleAxis,Void} = nothing
> radar::Union{Radar,Void} = nothing
> dataZoom::Union{DataZoom,Void} = nothing
> visualMap::Union{VisualMap,Void} = nothing
> tooltip::Union{Tooltip,Void} = nothing
> toolbox::Union{Toolbox,Void} = Toolbox()
> geo::Union{Geo,Void} = nothing
> parallel::Union{Parallel,Void} = nothing
> parallelAxis::Union{ParallelAxis,Void} = nothing
> timeline::Union{Timeline,Void} = nothing
> series::Union{Array{Series,1},Void} = nothing
> color::Union{AbstractVector,Void} = nothing
> backgroundColor::Union{String,Void} = nothing
> textStyle::Union{TextStyle,Void} = nothing
> animation::Union{Bool,Void} = nothing
> animationDuration::Union{Int,Void} = nothing
> animationEasing::Union{String,Void} = nothing
> animationDelay::Union{Int,Void} = nothing
> animationDurationUpdate::Union{Int,Void} = nothing
> animationEasingUpdate::Union{String,Void} = nothing
> animationDelayUpdate::Union{Int,Void} = nothing
> end


[julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Páll Haraldsson

On Wednesday, September 21, 2016 at 5:56:45 AM UTC, Chris Rackauckas wrote:

> Julia Version 0.5.0-rc4+0
>
 
I'm not saying it matters here, but is this version known to be identical to 
the released 0.5? Unless you know it is, bugs should in general be reported 
against the latest version.

-- 
Palli.



[julia-users] Understanding Nullable as immutable

2016-09-21 Thread Randy Zwitch
I frequently have a design pattern of Union{Title, Void}. I was thinking 
that I could redefine this as title::Nullable{Title}. However, once I try 
to modify fields inside the Title type using setfield!(ec.title, k, v), I 
get this error message:

LoadError: type Nullable is immutable
while loading In[19], in expression starting on line 4



My question is, why is the Nullable type immutable? My original thought was 
that my Nullable definition was saying "There is either a Title type here 
or nothing/missing", and maybe I know the value now or maybe I know it 
later. But it seems the definition is actually "There could be a Title type 
here or missing, and whatever you see first is what you will always have"

Is there a better way to express the former behavior other than as a Union 
type? My use case is building JSON strings as specifications of graphs for 
JavaScript libraries, so nearly every field of every type is possibly 
missing for any given specification.

@with_kw type EChart <: AbstractEChartType
# title::Union{Title,Void} = Title()
title::Nullable{Title} = Title()
legend::Union{Legend,Void} = nothing
grid::Union{Grid,Void} = nothing
xAxis::Union{Array{Axis,1},Void} = nothing
yAxis::Union{Array{Axis,1},Void} = nothing
polar::Union{Polar,Void} = nothing
radiusAxis::Union{RadiusAxis,Void} = nothing
angleAxis::Union{AngleAxis,Void} = nothing
radar::Union{Radar,Void} = nothing
dataZoom::Union{DataZoom,Void} = nothing
visualMap::Union{VisualMap,Void} = nothing
tooltip::Union{Tooltip,Void} = nothing
toolbox::Union{Toolbox,Void} = Toolbox()
geo::Union{Geo,Void} = nothing
parallel::Union{Parallel,Void} = nothing
parallelAxis::Union{ParallelAxis,Void} = nothing
timeline::Union{Timeline,Void} = nothing
series::Union{Array{Series,1},Void} = nothing
color::Union{AbstractVector,Void} = nothing
backgroundColor::Union{String,Void} = nothing
textStyle::Union{TextStyle,Void} = nothing
animation::Union{Bool,Void} = nothing
animationDuration::Union{Int,Void} = nothing
animationEasing::Union{String,Void} = nothing
animationDelay::Union{Int,Void} = nothing
animationDurationUpdate::Union{Int,Void} = nothing
animationEasingUpdate::Union{String,Void} = nothing
animationDelayUpdate::Union{Int,Void} = nothing
end


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas 
wrote:

> Hi,
>   First of all, does LLVM essentially fma or muladd expressions like
> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use
> `muladd` and `fma` on these types of instructions (is there a macro for
> making this easier)?
>

Yes, LLVM will use fma machine instructions -- but only if they lead to the
same round-off error as using separate multiply and add instructions. If
you do not care about the details of conforming to the IEEE standard, then
you can use the `@fastmath` macro that enables several optimizations,
including this one. This is described in the manual <
http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations
>.
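As a minimal sketch of the above (function names are illustrative):

```julia
f(x) = 2.0x + 3.0                 # kept as separate mul + add: fusing would
                                  # change the rounding, so LLVM won't do it
g(x) = muladd(x, 2.0, 3.0)        # explicit permission to fuse when it's fast
h(x) = @fastmath 2.0x + 3.0       # relaxes IEEE conformance; fusion allowed

@code_llvm g(1.0)                 # look for a call to llvm.fmuladd.f64
@code_native h(1.0)               # on an fma-capable CPU you would hope to
                                  # see a vfmadd* instruction here
```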


> Secondly, I am wondering if my setup is not applying these operations
> correctly. Here's my test code:
>
> f(x) = 2.0x + 3.0
> g(x) = muladd(x,2.0, 3.0)
> h(x) = fma(x,2.0, 3.0)
>
> @code_llvm f(4.0)
> @code_llvm g(4.0)
> @code_llvm h(4.0)
>
> @code_native f(4.0)
> @code_native g(4.0)
> @code_native h(4.0)
>
> *Computer 1*
>
> Julia Version 0.5.0-rc4+0
> Commit 9c76c3e* (2016-09-09 01:43 UTC)
> Platform Info:
>   System: Linux (x86_64-redhat-linux)
>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblasp.so.0
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>

This looks good: the "broadwell" architecture that LLVM targets should enable
the respective optimizations. Try with `@fastmath`.

-erik





> (the COPR nightly on CentOS7) with
>
> [crackauc@crackauc2 ~]$ lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):16
> On-line CPU(s) list:   0-15
> Thread(s) per core:1
> Core(s) per socket:8
> Socket(s): 2
> NUMA node(s):  2
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 79
> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
> Stepping:  1
> CPU MHz:   1200.000
> BogoMIPS:  6392.58
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  25600K
> NUMA node0 CPU(s): 0-7
> NUMA node1 CPU(s): 8-15
>
>
>
> I get the output
>
> define double @julia_f_72025(double) #0 {
> top:
>   %1 = fmul double %0, 2.00e+00
>   %2 = fadd double %1, 3.00e+00
>   ret double %2
> }
>
> define double @julia_g_72027(double) #0 {
> top:
>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
> double 3.00e+00)
>   ret double %1
> }
>
> define double @julia_h_72029(double) #0 {
> top:
>   %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double
> 3.00e+00)
>   ret double %1
> }
> .text
> Filename: fmatest.jl
> pushq %rbp
> movq %rsp, %rbp
> Source line: 1
> addsd %xmm0, %xmm0
> movabsq $139916162906520, %rax  # imm = 0x7F40C5303998
> addsd (%rax), %xmm0
> popq %rbp
> retq
> nopl (%rax,%rax)
> .text
> Filename: fmatest.jl
> pushq %rbp
> movq %rsp, %rbp
> Source line: 2
> addsd %xmm0, %xmm0
> movabsq $139916162906648, %rax  # imm = 0x7F40C5303A18
> addsd (%rax), %xmm0
> popq %rbp
> retq
> nopl (%rax,%rax)
> .text
> Filename: fmatest.jl
> pushq %rbp
> movq %rsp, %rbp
> movabsq $139916162906776, %rax  # imm = 0x7F40C5303A98
> Source line: 3
> movsd (%rax), %xmm1   # xmm1 = mem[0],zero
> movabsq $139916162906784, %rax  # imm = 0x7F40C5303AA0
> movsd (%rax), %xmm2   # xmm2 = mem[0],zero
> movabsq $139925776008800, %rax  # imm = 0x7F43022C8660
> popq %rbp
> jmpq *%rax
> nopl (%rax)
>
> It looks like explicit muladd or not ends up at the same native code, but
> is that native code actually doing an fma? The fma native is different, but
> from a discussion on the Gitter it seems that might be a software FMA? This
> computer is setup with the BIOS setting as LAPACK optimized or something
> like that, so is that messing with something?
>
> *Computer 2*
>
> Julia Version 0.6.0-dev.557
> Commit c7a4897 (2016-09-08 17:50 UTC)
> Platform Info:
>   System: NT (x86_64-w64-mingw32)
>   CPU: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblas64_
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
>
>
> on a 4770k i7, Windows 10, I get the output
>
> ; Function Attrs: uwtable
> define double @julia_f_66153(double) #0 {
> top:
>   %1 = fmul double %0, 2.00e+00
>   %2 = fadd double %1, 3.00e+00
>   ret double %2
> }
>
> ; Function Attrs: uwtable
> define double @julia_g_66157(double) #0 {
> top:
>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
> double 3.00e+00)
>   ret double %1
> }
>
> ; Function Attrs: uwtable
> define double @julia_h_66158(double) #0 {
> top:
>   %1 = call double 

[julia-users] Re: Maps in Julia?

2016-09-21 Thread 'Philippe Roy' via julia-users
@Yeesian Ng : Thanks! Looks like my typical work day :)

@cdm : Subscribed to julia-geo. Will be useful!

@others : Thanks! We have lots of useful information for anyone reading 
this thread. :)

On Wednesday, September 21, 2016 at 08:37:27 UTC-4, Yeesian Ng wrote:
>
> It is based on slow work-in-progress, but I have slides (and a notebook) 
> for a talk I'm giving later today on geospatial processing in julia. They 
> give some sense of what the API of plotting a map through 
> https://github.com/tbreloff/Plots.jl via 
> https://github.com/yeesian/GeoDataFrames.jl might look like in the future.
>
> On Tuesday, 20 September 2016 20:54:10 UTC-4, cdm wrote:
>>
>>
>> the julia-geo list may also be helpful:
>>
>> https://groups.google.com/forum/#!forum/julia-geo
>>
>

[julia-users] Re: Maps in Julia?

2016-09-21 Thread Yeesian Ng
It is based on slow work-in-progress, but I have slides (and a notebook) 
for a talk I'm giving later today on geospatial processing in julia. They 
give some sense of what the API of plotting a map through 
https://github.com/tbreloff/Plots.jl via 
https://github.com/yeesian/GeoDataFrames.jl might look like in the future.

On Tuesday, 20 September 2016 20:54:10 UTC-4, cdm wrote:
>
>
> the julia-geo list may also be helpful:
>
> https://groups.google.com/forum/#!forum/julia-geo
>


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Igor
Hello!
Did you manage to plot big data sets? You can try my small package 
for this ( https://github.com/ig-or/qwtwplot.jl ); I'm very interested 
in how it handles big data sets.

Best regards, Igor


On Thursday, June 16, 2016 at 0:08:42 UTC+3, CrocoDuck O'Ducks wrote:
>
> Hi, thank you very much, really appreciated. GR seems pretty much what I 
> need. I like I can use Plots.jl with it. PlotlyJS.jl is very hot, I guess I 
> will use it when I need interactivity. I will look into OpenGL related 
> visualization tools for more advanced plots/renders.
>
> I just have a quick question. I just did a quick test with GR plotting two 
> 1 second long sine waves sampled at 192 kHz, one of frequency 100 Hz and 
> one of frequency 10 kHz. The 100 Hz looks fine but the 10 kHz plot has 
> blank areas (see attached pictures). I guess it is due to the density of 
> lines... probably solved by making the plot bigger?
>
>

[julia-users] Re: How to write a binary data file? (Matlab to Julia translation)

2016-09-21 Thread Steven G. Johnson


On Wednesday, September 21, 2016 at 5:23:24 AM UTC-4, Dennis Eckmeier wrote:
>
> thanks for the help! Coming from Matlab, data types and binary files had 
> never been a concern... 
>

You had to explicitly pass 'integer*4' in Matlab, so they were a concern 
there too; it's just that you express that concern differently in Matlab 
(by passing a format parameter rather than by converting the argument).
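For example, a sketch of how Matlab's `fwrite(fid, n, 'integer*4')` maps to Julia (filename and values illustrative):

```julia
open("data.bin", "w") do io
    n = 1000
    theta = 0.5
    write(io, Int32(n))        # 4 bytes, the equivalent of 'integer*4'
    write(io, theta)           # a Float64 writes 8 bytes, like 'double'
    # use write(io, hton(Int32(n))) instead if the reader expects big-endian
end
```

Both write native (little-endian) byte order by default, which matches Matlab's default on the same machine.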


[julia-users] Re: Any way of recovering a corrupted .julia?

2016-09-21 Thread Jānis Erdmanis
I would suggest starting over but keeping the REQUIRE file: run Pkg.init(), 
put the REQUIRE file back, and then run Pkg.update().
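A sketch of those steps (Julia 0.5-era Pkg; paths assume the default ~/.julia/v0.5 layout, and it is safest to move the broken tree aside rather than delete it):

```julia
# From the shell first:
#   cp ~/.julia/v0.5/REQUIRE ~/REQUIRE.backup
#   mv ~/.julia ~/.julia.broken      # keep the old tree until recovery works
# Then in a fresh Julia session:
Pkg.init()                           # recreate a clean ~/.julia/v0.5
cp(joinpath(homedir(), "REQUIRE.backup"), joinpath(Pkg.dir(), "REQUIRE");
   remove_destination=true)          # restore the saved REQUIRE
Pkg.resolve()                        # reinstall everything REQUIRE lists
Pkg.update()
```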

On Tuesday, September 20, 2016 at 10:07:36 PM UTC+3, J Luis wrote:
>
> Hi,
>
> So I did it again. While trying to install ImageMagick and at same time 
> attempting to convince Julia that I already had one ImageMagick installed 
> in my Win machine and manually deleted some files.
> After that the chaos. Every Pkg.update() has something to complain. For 
> example
>
> julia> Pkg.update()
> INFO: Updating METADATA...
> INFO: Cloning cache of GLVisualize from https://
> github.com/JuliaGL/GLVisualize.jl.git
> ERROR: GitError(Code:ENOTFOUND, Class:Repository, Could not find 
> repository from 'C:\j\.julia\v0.5\.cache\GMT')
>  in Base.LibGit2.GitRepo(::String) at .\libgit2\repository.jl:11
>
> it looks like I have no alternative but to wipe it completely and start 
> anew.
> And I see it's still the case (an annoying fact, I confess) that 
> Pkg.rm() doesn't remove anything but merely moves the "removed" package to 
> the .trash directory.
>
> In such a case, is there any way to rescue the corrupted .julia?
>
>
>
>

[julia-users] Re: Adding publications easier

2016-09-21 Thread Magnus Röding
I realize I was very vague. I don't have a concrete suggestion, but 
github/pull request/Jekyll/pandoc, plus the fact that I'm not well 
acquainted with any of the steps, was just too much. If I remember correctly, 
I got as far as producing an error message in Jekyll and eventually 
gave up on the whole thing.

Chris: If it could boil down to basically editing a .bib in the browser or 
comparable, that would be just amazing.

I'm raising the question, I don't have answers though...

On Tuesday, September 20, 2016 at 21:48:06 UTC+2, Chris Rackauckas wrote:
>
> I think he's talking about the fact that this specifically is more than 
> Github: it also requires using pandoc and Jekyll: 
> https://github.com/JuliaLang/julialang.github.com/tree/master/publications 
> 
>
> If the repo somehow ran a build script when checking the PR so that way 
> all you had to do was edit the .bib and index.md file, that probably 
> would lower the barrier to entry (and could be done straight from the 
> browser). That would require a smart setup like what's done for building 
> docs, and probably overkill for this. It's probably easier on the 
> maintainer side just to tell people to ask for help. (And it's better to 
> keep it as a .bib instead of directly editing the Markdown/HTML so that way 
> formatting is always the same / correct).
>
> On Tuesday, September 20, 2016 at 12:14:21 PM UTC-7, Tony Kelman wrote:
>>
>> What do you propose? Github is about as simple as we can do, considering 
>> also the complexity of maintaining something from the project side. There 
>> are plenty of people around the community who are happy to walk you through 
>> the process of making a pull request, and if it's not explained in enough 
>> detail then we can add more instructions if it would help. What have you 
>> tried so far?
>
>

Re: [julia-users] Help creating an IDL wrapper using Clang.jl

2016-09-21 Thread Luke Stagner
Are you still planning on releasing your IDL wrapper? I would still very 
much like to use it.

-Luke

On Friday, August 5, 2016 at 10:19:16 AM UTC-7, Bob Nnamtrop wrote:
>
> I have written a (pretty) complete wrapper for IDL (years ago actually), 
> but have not uploaded to github. I'll try to do that this weekend (and put 
> an update here). I wrote it for my own use and have been waiting for Julia 
> to mature to promote it to the IDL community. I originally wrote it for 
> Callable IDL and later switched to IDL RPC because of some library 
> conflicts I ran into using Callable IDL. It should not be hard to make the 
> Callable part work again. Of course, if you are on windows only Callable 
> IDL is supported by IDL, unfortunately. I have also implemented a REPL 
> interface to IDL from julia, which is really nice in practice. I originally 
> implemented the REPL in Julia 0.3 which was rather difficult, but it is 
> easy in 0.4. Note also that my package does not use Clang.jl but instead 
> hand written wrappers. This is no problem for IDL since the c interface to 
> IDL is very limited (basically you can pass in variables and call IDL 
> commands from strings; that is about it). It would be awesome if IDL would 
> expose more of the c interface like they did in the python library they 
> provide in recent versions. Maybe if julia picks up traction they will do 
> this.
>
> Bob
>
> On Thu, Aug 4, 2016 at 12:14 PM, Luke Stagner  > wrote:
>
>> It's possible to call IDL from C code (Callable IDL), so I figured I 
>> could do the same from Julia.
>>
>> I have been trying to use Clang.jl to automatically create the wrapper 
>> but I am running into some issues. 
>>
>> This project is a bit above my level of expertise and I would appreciate 
>> any help
>>
>> My current progress can be found at 
>> https://github.com/lstagner/JulianInsurgence
>>
>> and the output from my wrapper attempt (wrap_idl.jl) is located here 
>>
>> I am using current Clang.jl master and Julia
>> julia> versioninfo()
>> Julia Version 0.5.0-rc0+193
>> Commit ff1b65c* (2016-08-04 04:14 UTC)
>> Platform Info:
>>   System: Linux (x86_64-unknown-linux-gnu)
>>   CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
>>   WORD_SIZE: 64
>>   BLAS: libmkl_rt
>>   LAPACK: libmkl_rt
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
>>
>>
>>
>>
>

[julia-users] Re: How to write a binary data file? (Matlab to Julia translation)

2016-09-21 Thread Dennis Eckmeier
Hi,

thanks for the help! Coming from Matlab, data types and binary files had 
never been a concern... it was about time I learned how to do this!

I was able to complete the Julia wrapper for Barnes Huts t-sne, and I will 
put it on github :)

Next up: plotting the results ;)

cheers,

Dennis

On Wednesday, September 21, 2016 at 1:06:55 AM UTC+1, Steven G. Johnson 
wrote:
>
>
>
> On Tuesday, September 20, 2016 at 3:46:47 PM UTC-4, Dennis Eckmeier wrote:
>>
>> So, I tried this, but the data are still not stored as the Matlab code 
>> (first post) would do it...
>>
>> function write_data(X, no_dims, theta, perplexity)
>>   n, d = size(X)
>>   A = write(h, hton(n), hton(d), hton(theta), hton(perplexity), 
>> hton(no_dims))
>>
>
> Note that integers are 64 bit by default on Julia, so these are writing 
> 64-bit values.  From your Matlab code, it seems like you need 32-bit 
> (4-byte) values for your integers.  Just convert them to Int32 or UInt32 
> first, e.g. hton(UInt32(n)) 
>


Re: [julia-users] Does Julia 0.5 leak memory?

2016-09-21 Thread Yaakov Borstein
For what it's worth, I have a production system that was developed on 
0.4.6, and started running it today on the 0.5 release.  It retrieves data 
from dozens of APIs as JSON, processes and converts them to DataFrames, 
reconverts to JSON, and caches the results in Redis.  Sizes range from several 
MB to 500 MB.  The routines have been running for several hours, 
allocating and deallocating large chunks of memory for the DataFrames with 
millions of rows and large JSON blobs.  No leaks whatsoever have been 
detected, humming away smoothly with slightly less maximum memory consumed 
and what appears to be faster release (gc()). So my guess is that there is 
likely something else going on with the poster's code example.


Re: [julia-users] Re: Introducing Knet8: beginning deep learning with 100 lines of Julia!

2016-09-21 Thread Deniz Yuret
No problem.  Let me know if I can help you get started.

On Wed, Sep 21, 2016 at 10:35 AM Jon Norberg 
wrote:

> Ah yes of course, sorry :-) and thanks


Re: [julia-users] Re: Introducing Knet8: beginning deep learning with 100 lines of Julia!

2016-09-21 Thread Jon Norberg
Ah yes of course, sorry :-) and thanks

Re: [julia-users] Re: Introducing Knet8: beginning deep learning with 100 lines of Julia!

2016-09-21 Thread Deniz Yuret
Can you try Pkg.update() first?  Your METADATA could be out of date.
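A minimal sketch of that sequence (Julia 0.5-era Pkg API):

```julia
Pkg.update()        # refresh METADATA so newly registered packages appear
Pkg.add("Knet")     # should now resolve instead of "unknown package Knet"
using Knet
```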

On Wed, Sep 21, 2016 at 8:22 AM Jon Norberg 
wrote:

> I get "LoadError: unknown package Knet" when using Pkg.add("Knet"). I am
> on 0.5.
>
> Very interested in this julia native ML library, thanks for sharing