Re: [julia-users] help with @generated function call please?

2016-10-15 Thread Erik Schnetter
A generated function is only useful if you perform a non-trivial
calculation based on the argument types. You don't do that here, so I
wonder whether simply using the Cartesian indexing macros by themselves
would be sufficient.

Note also that you don't need to write `$N` in your code; using `N`
directly has the same effect here.

I'm not saying that generated functions should be avoided at all costs, but
if one isn't necessary here, you might as well skip the associated
complications.
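In fact, for this particular pattern you may not need the Cartesian macros either. Here is a sketch of the simplest approach I'd try first (my own code, untested against your exact `fpoly`, which I replace with a hypothetical scalar stand-in):

```julia
# Sketch: element-wise in-place evaluation without @generated or Base.Cartesian.
# `fpoly` here is a hypothetical scalar stand-in for the real polynomial.
fpoly(x) = x + x^2

function set_poly!(a::AbstractArray)
    for I in eachindex(a)   # works for any dimension N and any array layout
        a[I] = fpoly(a[I])
    end
    return a
end

a = rand(2,2,2,2,2)
set_poly!(a)
```

If `fpoly` instead needs all N index coordinates, iterating over `CartesianRange(size(a))` and splatting the index tuple `I.I` into the call is the analogous macro-free approach.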

-erik

On Fri, Oct 14, 2016 at 11:51 AM, Florian Oswald <florian.osw...@gmail.com>
wrote:

> hi all,
>
> I want to evaluate a function at each index of an array. There is a N
> dimensional function, and I want to map it onto an N-dimensional array:
>
> fpoly(x::Array{Real,5}) = x[1] + x[2]^2 + x[3] + x[4]^2 + x[5]
>
> want to do
>
> a = rand(2,2,2,2,2);
> b = similar(a)
>
> for i1 in indices(a,1)
> for i2 in indices(a,2)
> ...
> b[i1,i2,i3,i4,i5] = fpoly(a[i1,i2,i3,i4,i5])
> end
> end...
>
> I tried:
> # actually want to do it inplace
> @generated function set_poly!{T,N}(a::Array{T,N})
> quote
> @nloops $N i a begin
> @nref $N a i = @ncall $N fpoly i->a[i]
> end
> end
> end
>
> but that fails. I don't get further than:
>
> macroexpand(:(@nloops 3 j a begin
> x = @ncall 3 fpoly i->a[j]
> end))
>
> quote  # cartesian.jl, line 62:
>     for j_3 = indices(a,3) # cartesian.jl, line 63:
>         nothing # cartesian.jl, line 64:
>         begin  # cartesian.jl, line 62:
>             for j_2 = indices(a,2) # cartesian.jl, line 63:
>                 nothing # cartesian.jl, line 64:
>                 begin  # cartesian.jl, line 62:
>                     for j_1 = indices(a,1) # cartesian.jl, line 63:
>                         nothing # cartesian.jl, line 64:
>                         begin  # REPL[145], line 2:
>                             x = fpoly(a[j],a[j],a[j])
>                         end # cartesian.jl, line 65:
>                         nothing
>                     end
>                 end # cartesian.jl, line 65:
>                 nothing
>             end
>         end # cartesian.jl, line 65:
>         nothing
>     end
> end
>
>
>
> which is a start, but how can I get the indices of `a` right on the left-hand side?
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] ERROR: Target architecture mismatch

2016-10-15 Thread Erik Schnetter
You didn't show the actual error message. Debugging is easier if (after
seeing an error) you re-run with "make -j1", so that the error message
doesn't scroll away.

-erik

On Sat, Oct 15, 2016 at 1:41 PM, ABB <austinb...@gmail.com> wrote:

> I'm getting a new error. This is with the following make.user:
>
> LLVM_VER=svn
> USEICC=1
> USEIFC=1
> USE_INTEL_MKL=1
> USE_INTEL_MKL_FFT=1
> USE_INTEL_LIBM=1
>
>
> building directly on the (KNL) compute node (in parallel: make -j 68)
>
> configure: amending tests/server/Makefile
> configure: amending tests/libtest/Makefile
> configure: amending docs/examples/Makefile
> configure: Configured to build curl/libcurl:
>
>   curl version: 7.50.1
>   Host setup:   x86_64-unknown-linux-gnu
>   Install prefix:   /home1/04179/abean/julia/usr
>   Compiler: icc
>   SSL support:  enabled (mbedTLS)
>   SSH support:  enabled (libSSH2)
>   zlib support: no  (--with-zlib)
>   GSS-API support:  no  (--with-gssapi)
>   TLS-SRP support:  no  (--enable-tls-srp)
>   resolver: default (--enable-ares / --enable-threaded-resolver)
>   IPv6 support: enabled
>   Unix sockets support: enabled
>   IDN support:  no  (--with-{libidn,winidn})
>   Build libcurl:Shared=yes, Static=yes
>   Built-in manual:  enabled
>   --libcurl option: enabled (--disable-libcurl-option)
>   Verbose errors:   enabled (--disable-verbose)
>   SSPI support: no  (--enable-sspi)
>   ca cert bundle:   /etc/pki/tls/certs/ca-bundle.crt
>   ca cert path: no
>   ca fallback:  no
>   LDAP support: no  (--enable-ldap / --with-ldap-lib /
> --with-lber-lib)
>   LDAPS support:no  (--enable-ldaps)
>   RTSP support: enabled
>   RTMP support: no  (--with-librtmp)
>   metalink support: no  (--with-libmetalink)
>   PSL support:  no  (--with-libpsl)
>   HTTP2 support:disabled (--with-nghttp2)
>   Protocols:DICT FILE FTP FTPS GOPHER HTTP HTTPS IMAP IMAPS POP3
> POP3S RTSP SCP SFTP SMTP SMTPS TELNET TFTP
>
> make: *** [julia-deps] Error 2
>
>
> Does anyone have any advice for this one?  I didn't find previous
> discussion of this problem.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread Erik Schnetter
Julia runs some of the code it generates as part of its bootstrapping
procedure. That is, traditional cross-compiling won't work. I think there's
a way around it, but it's not trivial. I would avoid this in the beginning.

-erik

On Fri, Oct 14, 2016 at 11:28 AM, ABB <austinb...@gmail.com> wrote:

> I was building on the (Haswell) front end.  From some of the other issues
> I looked at it appeared that I could specify the architecture even if I was
> not actually building on that kind of system.  But that could be totally
> wrong, so I can try it on the KNL node if that's required.
>
> When I put  "LLVM_VER := svn" and tried this morning (again on the front
> end) the error I got was:
>
> JULIA_CPU_TARGET = knl
>
>
>  lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 13 +
>
>  test/CodeGen/X86/negate-shift.ll | 16 
>
>  2 files changed, 17 insertions(+), 12 deletions(-)
>
> CMake Error at CMakeLists.txt:3 (cmake_minimum_required):
>
>   CMake 3.4.3 or higher is required.  You are running version 2.8.11
>
>
>
> -- Configuring incomplete, errors occurred!
>
> make[1]: *** [build/llvm-svn/build_Release/CMakeCache.txt] Error 1
>
> make: *** [julia-deps] Error 2
>
>
>
>
> On Friday, October 14, 2016 at 9:51:56 AM UTC-5, Erik Schnetter wrote:
>>
>> Were you building on a KNL node or on the frontend? What architecture did
>> you specify?
>>
>> -erik
>>
>> On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy <v.ch...@gmail.com>
>> wrote:
>>
>>> Since KNL is such a new platform, the default version of the LLVM
>>> compiler that Julia is based on does not support it properly.
>>> During our testing at MIT we found that we needed to switch to the
>>> current upstream of LLVM (or, if anybody reads this at a later time, LLVM 4.0).
>>> You can do that by putting
>>> LLVM_VER:=svn
>>> into your Make.user.
>>>
>>> - Valentin
>>>
>>> On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:
>>>>
>>>> Sigh... build failed.  I'm including the last part that worked and the
>>>> error message which followed:
>>>>
>>>> JULIA usr/lib/julia/inference.ji
>>>> essentials.jl
>>>> generator.jl
>>>> reflection.jl
>>>> options.jl
>>>> promotion.jl
>>>> tuple.jl
>>>> range.jl
>>>> expr.jl
>>>> error.jl
>>>> bool.jl
>>>> number.jl
>>>> int.jl
>>>>
>>>> signal (4): Illegal instruction
>>>> while loading int.jl, in expression starting on line 193
>>>> ! at ./bool.jl:16
>>>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>>>> ulia_internal.h:189
>>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>>> anonymous at ./ (unknown line)
>>>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>>>> ulia_internal.h:189
>>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
>>>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>>>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>>>> jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
>>>> include at ./boot.jl:231
>>>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>>>> ulia_internal.h:189
>>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>>>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>>>> jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
>>>> nterpreter.c:31
>>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>>>> jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
>>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
>>>> jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
>>>> jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
>>>> jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
>>>> eval at ./boot.jl:234
>>>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>>>> ulia_internal.h:189
>>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>>>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>>>> jl_interpret_toplevel

Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread Erik Schnetter
Were you building on a KNL node or on the frontend? What architecture did
you specify?

-erik

On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy <v.chur...@gmail.com>
wrote:

> Since KNL is such a new platform, the default version of the LLVM compiler
> that Julia is based on does not support it properly.
> During our testing at MIT we found that we needed to switch to the current
> upstream of LLVM (or, if anybody reads this at a later time, LLVM 4.0).
> You can do that by putting
> LLVM_VER:=svn
> into your Make.user.
>
> - Valentin
>
> On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:
>>
>> Sigh... build failed.  I'm including the last part that worked and the
>> error message which followed:
>>
>> JULIA usr/lib/julia/inference.ji
>> essentials.jl
>> generator.jl
>> reflection.jl
>> options.jl
>> promotion.jl
>> tuple.jl
>> range.jl
>> expr.jl
>> error.jl
>> bool.jl
>> number.jl
>> int.jl
>>
>> signal (4): Illegal instruction
>> while loading int.jl, in expression starting on line 193
>> ! at ./bool.jl:16
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> anonymous at ./ (unknown line)
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>> jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
>> include at ./boot.jl:231
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>> jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
>> nterpreter.c:31
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>> jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
>> jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
>> jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
>> jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
>> eval at ./boot.jl:234
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>> jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
>> nterpreter.c:31
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>> exec_program at /home1/04179/abean/julia/ui/repl.c:66
>> true_main at /home1/04179/abean/julia/ui/repl.c:119
>> main at /home1/04179/abean/julia/ui/repl.c:232
>> __libc_start_main at /usr/lib64/libc.so.6 (unknown line)
>> unknown function (ip: 0x401928)
>> Allocations: 100373 (Pool: 100371; Big: 2); GC: 0
>> /bin/sh: line 1: 15078 Illegal instruction
>> /home1/04179/abean/julia/usr/bin/julia-debug -C knl --output-ji
>> /home1/04179/abean/julia/usr/lib/julia/inference.ji --startup-file=no
>> coreimg.jl
>> make[1]: *** [/home1/04179/abean/julia/usr/lib/julia/inference.ji] Error
>> 132
>> make: *** [julia-inference] Error 2
>>
>>
>>
>> Any advice for debugging that?  I don't find any previous issues which
>> are helpful.
>>
>> Thanks -
>>
>> Austin
>>
>> On Thursday, October 13, 2016 at 1:49:24 PM UTC-5, ABB wrote:
>>>
>>> Awesome.  Thanks.  I'll try it again then.  I appreciate the help.
>>>
>>> (Austin is also my name.  I save space in my memory by going to school
>>> at, living in and being a guy with the same name.)
>>>
>>> On Thursday, October 13, 2016 at 1:40:09 PM UTC-5, Erik Schnetter wrote:
>>>>
>>>> AB
>>>>
>>>> You're speaking of Stampede, if I might guess from the "austin" prefix
>>>> in your email address. I would treat the old and the new section of the
>>>> machines as separate, since they are not binary compatible. If you are
>>>> really interested in the KNL part, then

Re: [julia-users] ERROR: Target architecture mismatch

2016-10-13 Thread Erik Schnetter
AB

You're speaking of Stampede, if I might guess from the "austin" prefix in
your email address. I would treat the old and new sections of the machine
as separate, since they are not binary compatible. If you are really
interested in the KNL part, then I'd concentrate on those nodes: use the
development mode to always log in, build, and run on the KNL nodes, and
ignore everything else. Mixing different architectures in a single Julia
environment is something I'd tackle much later, if at all.

Alternatively, you can use "haswell" as the CPU architecture (instead of
"core2" above), which should work both on the front end and on the KNL nodes.
However, this way you will lose speed on the KNL nodes, except for linear
algebra operations.
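For concreteness, here is the Make.user I would start from on the KNL nodes (a sketch; adjust to your site — the `LLVM_VER` line reflects the advice that released LLVM versions lack proper KNL support):

```make
# Hypothetical Make.user sketch for building on (and for) the KNL nodes.
JULIA_CPU_TARGET = knl
LLVM_VER = svn          # released LLVM versions lack proper KNL support

# Alternative that also runs on the Haswell front end, at the cost of
# KNL-specific vectorization:
# JULIA_CPU_TARGET = haswell
```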

-erik


On Thu, Oct 13, 2016 at 2:26 PM, ABB <austinb...@gmail.com> wrote:

> This is great - thanks for getting back to me so quickly.
>
> To follow up, I have two small questions:
>
> - To build specifically for the KNL system I should include something like
> "JULIA_CPU_TARGET = knl" in the Make.user file?
>
> - Part of the system is KNL, part of it is "Intel Xeon E5 Sandy Bridge
> and the Intel Knights Corner (KNC) coprocessor"  (the exact system is
> this one: https://portal.tacc.utexas.edu/user-guides/stampede ).  Is
> there a way to build for both of the architectures?  I think I read in
> another issue somewhere that it wasn't possible to support the Knights
> Corner because of (if I recall correctly) lack of LLVM support or something
> (maybe I am completely making that up) so if it's not possible I wouldn't
> be surprised.  (The two sections of Stampede run different versions of
> Linux too, if that makes it even more complicated.  I'd just be happy to
> get it running one way or the other.)
>
> Thanks again!
>
> AB
>
>
> On Thursday, October 13, 2016 at 1:10:48 PM UTC-5, Erik Schnetter wrote:
>>
>> AB
>>
>> Using "core2" is a fallback that will work on very old machines. In your
>> case -- if this happens to be a more modern, uniform HPC system -- you
>> might want to use a different architecture. For example, if you're building
>> on the compute nodes, and never run on the front end, then the default
>> should already work for you. Otherwise, choosing "knl" as architecture
>> should also work (and would also make it impossible to run on the front
>> end).
>>
>> -erik
>>
>>
>> On Thu, Oct 13, 2016 at 1:18 PM, ABB <austi...@gmail.com> wrote:
>>
>>> I built Julia Version 0.5.1-pre+2 on a cluster I have access to.
>>>
>>> The login node on which I executed the build has this architecture:
>>>
>>> Intel Core i7-5000 Extreme Edition (Haswell R2) / Xeon E5-x600 v3
>>> (Haswell-EP C1/M1/R2), 22nm
>>>
>>> The compute node has this architecture:
>>>
>>> Intel Xeon Phi Coprocessor (Knights Landing), 14nm
>>>
>>> (Those are each the last line of the output of "cpuid")
>>>
>>> when I try to run anything, I get the error:
>>>
>>> ERROR: Target architecture mismatch. Please delete or regenerate
>>> sys.{so,dll,dylib}.
>>>
>>> I found this old discussion:
>>>
>>> https://groups.google.com/forum/#!msg/julia-dev/Eqp0GhZWxME/3mGKX1l_L9gJ
>>>
>>> which recommends using
>>>
>>> JULIA_CPU_TARGET = core2
>>>
>>> in the Make.user file.
>>>
>>> Since that discussion is 2 years old, I am just double checking to see
>>> if that's still the best advice or if there is something else I should try
>>> first and/or instead.
>>>
>>> Thanks!
>>>
>>> AB
>>>
>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] ERROR: Target architecture mismatch

2016-10-13 Thread Erik Schnetter
AB

Using "core2" is a fallback that will work on very old machines. In your
case -- if this happens to be a more modern, uniform HPC system -- you
might want to use a different architecture. For example, if you're building
on the compute nodes, and never run on the front end, then the default
should already work for you. Otherwise, choosing "knl" as architecture
should also work (and would also make it impossible to run on the front
end).

-erik


On Thu, Oct 13, 2016 at 1:18 PM, ABB <austinb...@gmail.com> wrote:

> I built Julia Version 0.5.1-pre+2 on a cluster I have access to.
>
> The login node on which I executed the build has this architecture:
>
> Intel Core i7-5000 Extreme Edition (Haswell R2) / Xeon E5-x600 v3
> (Haswell-EP C1/M1/R2), 22nm
>
> The compute node has this architecture:
>
> Intel Xeon Phi Coprocessor (Knights Landing), 14nm
>
> (Those are each the last line of the output of "cpuid")
>
> when I try to run anything, I get the error:
>
> ERROR: Target architecture mismatch. Please delete or regenerate
> sys.{so,dll,dylib}.
>
> I found this old discussion:
>
> https://groups.google.com/forum/#!msg/julia-dev/Eqp0GhZWxME/3mGKX1l_L9gJ
>
> which recommends using
>
> JULIA_CPU_TARGET = core2
>
> in the Make.user file.
>
> Since that discussion is 2 years old, I am just double checking to see if
> that's still the best advice or if there is something else I should try
> first and/or instead.
>
> Thanks!
>
> AB
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] HDF5, how to save string data to file?

2016-10-13 Thread Erik Schnetter
You could look at the JLD package instead of using HDF5 directly.

The HDF5 package might be missing support for variable-length strings.
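As a sketch (untested here) of the JLD route, which sidesteps the variable-length string issue entirely:

```julia
# Sketch: saving and loading string data with the JLD package
# instead of calling the HDF5 package directly.
using JLD

dane = ["ala", "ma", "kota"]       # hypothetical string data
save("dane.jld", "dane", dane)     # writes the array under the name "dane"
loaded = load("dane.jld", "dane")
@assert loaded == dane
```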

-erik

On Thu, Oct 13, 2016 at 6:36 AM, <programista...@gmail.com> wrote:

> How do I save string data to a file? What's wrong here?
>
> julia> dset = d_create(g, "/dane", datatype(String), dataspace(nol))
> ERROR: `datatype` has no method matching datatype(::Type{String})
>
> julia> dset = d_create(g, "/dane", datatype(UTF8String), dataspace(nol))
> ERROR: `datatype` has no method matching datatype(::Type{UTF8String})
>
> julia> dset = d_create(g, "/dane", datatype(ASCIIString), dataspace(nol))
> ERROR: `datatype` has no method matching datatype(::Type{ASCIIString})
>
> Paul
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: What's the status of SIMD instructions from a user's perspective in v0.5?

2016-10-13 Thread Erik Schnetter
If you want to use the SIMD package, then you need to manually vectorize
the code. That is, all (or most of) the local variables you're using will have
a SIMD `Vec` type. For convenience, your input and output arrays will
likely still hold scalar values, and the `vload` and `vstore` functions
access scalar arrays, reading/writing SIMD vectors. The function you quote
(from the SIMD examples) does just this.

What vector length `N` is best depends on the particular machine. Usually,
you would look at the CPU instruction set and choose the largest SIMD
vector size that the CPU supports, but sometimes twice that size or half
that size might also work well. Note that using a larger SIMD vector size
roughly corresponds to loop unrolling, which might be beneficial if the
compiler isn't clever enough to do this automatically.

There's an additional complication if the array size is not a multiple of the
vector size. In this case, extending the array via dummy elements is often
the easiest way to go.
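A minimal sketch of that padding approach (my own example; the choice of zero as the dummy value assumes it is harmless for the operation at hand):

```julia
# Sketch: pad a vector so its length is a multiple of the SIMD width N.
function pad_to_multiple(xs::Vector{Float64}, N::Int)
    r = length(xs) % N
    r == 0 ? xs : vcat(xs, zeros(N - r))
end

xs = rand(10)
ys = pad_to_multiple(xs, 4)   # length 12; the last two entries are zero
```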

Note that SIMD vectorization is purely a performance improvement. It does
not make sense to make such changes without measuring performance before
and after. Given the low-level nature of the changes, looking at the
generated assembler code via `@code_native` is usually also insightful.

I'll be happy to help if you have a specific problem on which you're
working.

-erik


On Thu, Oct 13, 2016 at 9:51 AM, Florian Oswald <florian.osw...@gmail.com>
wrote:

> ok thanks! and so I should define my SIMD-able function like
>
> function vadd!{N,T}(xs::Vector{T}, ys::Vector{T}, ::Type{Vec{N,T}})
>     @assert length(ys) == length(xs)
>     @assert length(xs) % N == 0
>     @inbounds for i in 1:N:length(xs)
>         xv = vload(Vec{N,T}, xs, i)
>         yv = vload(Vec{N,T}, ys, i)
>         xv += yv
>         vstore(xv, xs, i)
>     end
> end
>
> i.e. using vload() and vstore() methods?
>
>
> On Thursday, 13 October 2016 15:29:50 UTC+2, Valentin Churavy wrote:
>>
>> If you want explicit simd the best way right now is the great SIMD.jl
>> package https://github.com/eschnett/SIMD.jl  it is builds on top of
>> VecElement.
>>
>> In many cases we can perform automatic vectorisation, but you have to
>> start Julia with -O3
>>
>> On Thursday, 13 October 2016 22:15:00 UTC+9, Florian Oswald wrote:
>>>
>>> i see on the docs http://docs.julialang.org/en/release-0.5/stdlib/simd-
>>> types/?highlight=SIMD that there is a vecElement that is build for SIMD
>>> support. I don't understand if as a user I should construct vecElement
>>> arrays and hope for some SIMD optimization? thanks.
>>>
>>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] MPI.jl and composite types

2016-10-01 Thread Erik Schnetter
There's `Gather` (low-level, adapted straight from the MPI standard) and
`gather` (high-level, convenient for Julia, but apparently not yet
implemented).

-erik

On Sat, Oct 1, 2016 at 3:50 PM, Joaquim Dias Garcia <
joaquimdgar...@gmail.com> wrote:

> Thanks! That's nice. I was misled by the `gather` function; I think I was
> looking at too low a level.
>
> Joaquim
>
> On 1 Oct 2016, at 16:37, Erik Schnetter <schnet...@gmail.com> wrote:
>
> In Julia, `MPI` can send objects of any type. These objects will
> automatically be serialized and deserialized. The respective functions are
>
> ```Julia
> function send(obj, dest::Integer, tag::Integer, comm::Comm)
> function recv(src::Integer, tag::Integer, comm::Comm)
> ```
> Note the lower-case function name, which indicates (following the Python
> convention) a higher-level interface to MPI.
>
> There is also a function `irecv` that checks whether a message containing
> such an object can be received, and receives it if so.
>
> -erik
>
>
>
> On Sat, Oct 1, 2016 at 3:19 PM, Joaquim Masset Lacombe Dias Garcia <
> joaquimdgar...@gmail.com> wrote:
>
>> Hi all,
>>
>> I have the following type:
>>
>> type bar
>> a::Int
>> b::Vector{Matrix{Float64}}
>> c::Vector{Float64}
>> end
>>
>> I would like to send instances of that type via MPI.
>> Can I do it?
>> Is there any serialization/deserialization procedure i could use for that?
>>
>> thanks!
>>
>
>
>
> --
> Erik Schnetter <schnet...@gmail.com> http://www.perimeterinstitute.
> ca/personal/eschnetter/
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] MPI.jl and composite types

2016-10-01 Thread Erik Schnetter
In Julia, `MPI` can send objects of any type. These objects will
automatically be serialized and deserialized. The respective functions are

```Julia
function send(obj, dest::Integer, tag::Integer, comm::Comm)
function recv(src::Integer, tag::Integer, comm::Comm)
```
Note the lower-case function name, which indicates (following the Python
convention) a higher-level interface to MPI.

There is also a function `irecv` that checks whether a message containing
such an object can be received, and receives it if so.
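A sketch of how that would look for the type in question (my own example, untested; run under e.g. `mpirun -np 2`):

```julia
# Sketch: sending a composite type via MPI.jl's high-level send/recv;
# serialization/deserialization happens automatically.
using MPI

type bar                      # Julia 0.5 syntax, as in the original post
    a::Int
    b::Vector{Matrix{Float64}}
    c::Vector{Float64}
end

MPI.Init()
comm = MPI.COMM_WORLD

if MPI.Comm_rank(comm) == 0
    MPI.send(bar(1, [rand(2,2)], [1.0, 2.0]), 1, 0, comm)
elseif MPI.Comm_rank(comm) == 1
    obj = MPI.recv(0, 0, comm)   # the deserialized object (return details
                                 # may vary between MPI.jl versions)
end

MPI.Finalize()
```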

-erik



On Sat, Oct 1, 2016 at 3:19 PM, Joaquim Masset Lacombe Dias Garcia <
joaquimdgar...@gmail.com> wrote:

> Hi all,
>
> I have the following type:
>
> type bar
> a::Int
> b::Vector{Matrix{Float64}}
> c::Vector{Float64}
> end
>
> I would like to send instances of that type via MPI.
> Can I do it?
> Is there any serialization/deserialization procedure i could use for that?
>
> thanks!
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Is there a way to export all functions defined in a module?

2016-09-29 Thread Erik Schnetter
Yes; see <https://github.com/simonster/Reexport.jl>.
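A sketch of the usage (my example, untested): the exported names are listed only once, in the module that defines them, and Reexport propagates them outward.

```julia
# Sketch: Outer re-exports everything Inner exports, via Reexport.
using Reexport

module Inner
    export f, g
    f() = 1
    g() = 2
end

module Outer
    using Reexport
    @reexport using Inner   # Outer now exports f and g as well
end
```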

-erik

On Thu, Sep 29, 2016 at 5:35 PM, K leo <cnbiz...@gmail.com> wrote:

> without retyping all the names.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-23 Thread Erik Schnetter
It should. Yes, please open an issue.

-erik

On Thu, Sep 22, 2016 at 7:46 PM, Chris Rackauckas <rackd...@gmail.com>
wrote:

> So, in the end, is `@fastmath` supposed to be adding FMA? Should I open an
> issue?
>
> On Wednesday, September 21, 2016 at 7:11:14 PM UTC-7, Yichao Yu wrote:
>>
>> On Wed, Sep 21, 2016 at 9:49 PM, Erik Schnetter <schn...@gmail.com>
>> wrote:
>> > I confirm that I can't get Julia to synthesize a `vfmadd` instruction
>> > either... Sorry for sending you on a wild goose chase.
>>
>> -march=haswell does the trick for C (both clang and gcc)
>> the necessary bit for the machine ir optimization (this is not a llvm
>> ir optimization pass) to do this is llc options -mcpu=haswell and
>> function attribute unsafe-fp-math=true.
>>
>> >
>> > -erik
>> >
>> > On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu <yyc...@gmail.com> wrote:
>> >>
>> >> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter <schn...@gmail.com>
>> >> wrote:
>> >> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas <rack...@gmail.com>
>>
>> >> > wrote:
>> >> >>
>> >> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg
>> and
>> >> >> now
>> >> >> I get results where g and h apply muladd/fma in the native code,
>> but a
>> >> >> new
>> >> >> function k which is `@fastmath` inside of f does not apply
>> muladd/fma.
>> >> >>
>> >> >>
>> >> >> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f39
>> 22c673a25910
>> >> >>
>> >> >> Should I open an issue?
>> >> >
>> >> >
>> >> > In your case, LLVM apparently thinks that `x + x + 3` is faster to
>> >> > calculate
>> >> > than `2x+3`. If you use a less round number than `2` multiplying
>> `x`,
>> >> > you
>> >> > might see a different behaviour.
>> >>
>> >> I've personally never seen llvm create fma from mul and add. We might
>> >> not have the llvm passes enabled if LLVM is capable of doing this at
>> >> all.
>> >>
>> >> >
>> >> > -erik
>> >> >
>> >> >
>> >> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't
>> rebuilding
>> >> >> for some reason, so I may need to just build from source.
>> >> >>
>> >> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik
>> Schnetter
>> >> >> wrote:
>> >> >>>
>> >> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <
>> rack...@gmail.com>
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> Hi,
>> >> >>>>   First of all, does LLVM essentially fma or muladd expressions
>> like
>> >> >>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
>> >> >>>> explicitly use
>> >> >>>> `muladd` and `fma` on these types of instructions (is there a
>> macro
>> >> >>>> for
>> >> >>>> making this easier)?
>> >> >>>
>> >> >>>
>> >> >>> Yes, LLVM will use fma machine instructions -- but only if they
>> lead
>> >> >>> to
>> >> >>> the same round-off error as using separate multiply and add
>> >> >>> instructions. If
>> >> >>> you do not care about the details of conforming to the IEEE
>> standard,
>> >> >>> then
>> >> >>> you can use the `@fastmath` macro that enables several
>> optimizations,
>> >> >>> including this one. This is described in the manual
>> >> >>>
>> >> >>> <http://docs.julialang.org/en/release-0.5/manual/performance
>> -tips/#performance-annotations>.
>> >> >>>
>> >> >>>
>> >> >>>>   Secondly, I am wondering if my setup is no applying these
>> >> >>>> operations
>> >> >>>> correctly. Here's my test code:
>> >> >>>>
>> >> >>>> f(x) = 2.0x + 3.0
>> >> >>>> g(x) = muladd(x,2.0, 3.0)
>> >> >>>> h(x) = 

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
I confirm that I can't get Julia to synthesize a `vfmadd` instruction
either... Sorry for sending you on a wild goose chase.

-erik

On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu <yyc1...@gmail.com> wrote:

> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter <schnet...@gmail.com>
> wrote:
> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas <rackd...@gmail.com>
> > wrote:
> >>
> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and
> now
> >> I get results where g and h apply muladd/fma in the native code, but a
> new
> >> function k which is `@fastmath` inside of f does not apply muladd/fma.
> >>
> >> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a259
> 10
> >>
> >> Should I open an issue?
> >
> >
> > In your case, LLVM apparently thinks that `x + x + 3` is faster to
> calculate
> > than `2x+3`. If you use a less round number than `2` multiplying `x`, you
> > might see a different behaviour.
>
> I've personally never seen llvm create fma from mul and add. We might
> not have the llvm passes enabled if LLVM is capable of doing this at
> all.
>
> >
> > -erik
> >
> >
> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding
> >> for some reason, so I may need to just build from source.
> >>
> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter
> >> wrote:
> >>>
> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <rack...@gmail.com>
> >>> wrote:
> >>>>
> >>>> Hi,
> >>>>   First of all, does LLVM essentially fma or muladd expressions like
> >>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
> explicitly use
> >>>> `muladd` and `fma` on these types of instructions (is there a macro
> for
> >>>> making this easier)?
> >>>
> >>>
> >>> Yes, LLVM will use fma machine instructions -- but only if they lead to
> >>> the same round-off error as using separate multiply and add
> instructions. If
> >>> you do not care about the details of conforming to the IEEE standard,
> then
> >>> you can use the `@fastmath` macro that enables several optimizations,
> >>> including this one. This is described in the manual
> >>> <http://docs.julialang.org/en/release-0.5/manual/
> performance-tips/#performance-annotations>.
> >>>
> >>>
> >>>>   Secondly, I am wondering if my setup is no applying these operations
> >>>> correctly. Here's my test code:
> >>>>
> >>>> f(x) = 2.0x + 3.0
> >>>> g(x) = muladd(x,2.0, 3.0)
> >>>> h(x) = fma(x,2.0, 3.0)
> >>>>
> >>>> @code_llvm f(4.0)
> >>>> @code_llvm g(4.0)
> >>>> @code_llvm h(4.0)
> >>>>
> >>>> @code_native f(4.0)
> >>>> @code_native g(4.0)
> >>>> @code_native h(4.0)
> >>>>
> >>>> Computer 1
> >>>>
> >>>> Julia Version 0.5.0-rc4+0
> >>>> Commit 9c76c3e* (2016-09-09 01:43 UTC)
> >>>> Platform Info:
> >>>>   System: Linux (x86_64-redhat-linux)
> >>>>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
> >>>>   WORD_SIZE: 64
> >>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
> >>>>   LAPACK: libopenblasp.so.0
> >>>>   LIBM: libopenlibm
> >>>>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
> >>>
> >>>
> >>> This looks good, the "broadwell" architecture that LLVM uses should
> imply
> >>> the respective optimizations. Try with `@fastmath`.
> >>>
> >>> -erik
> >>>
> >>>
> >>>
> >>>
> >>>>
> >>>> (the COPR nightly on CentOS7) with
> >>>>
> >>>> [crackauc@crackauc2 ~]$ lscpu
> >>>> Architecture:  x86_64
> >>>> CPU op-mode(s):32-bit, 64-bit
> >>>> Byte Order:Little Endian
> >>>> CPU(s):16
> >>>> On-line CPU(s) list:   0-15
> >>>> Thread(s) per core:1
> >>>> Core(s) per socket:8
> >>>> Socket(s): 2
> >>>> NUMA node(s):  2
> >>>> Vendor ID: GenuineIntel
> >>>> CPU family:  

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas <rackd...@gmail.com>
wrote:

> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now
> I get results where g and h apply muladd/fma in the native code, but a new
> function k which is `@fastmath` inside of f does not apply muladd/fma.
>
> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>
> Should I open an issue?
>

In your case, LLVM apparently thinks that `x + x + 3` is faster to
calculate than `2x+3`. If you multiply `x` by a less round number than `2`,
you might see a different behaviour.
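To check this concretely, one can compare a plain expression, an explicit `muladd`, and an `@fastmath` version using a multiplier that is not a power of two. A minimal sketch (function names and the constant `3.7` are illustrative, not from the thread):

```Julia
f(x) = 3.7x + 3.0            # plain IEEE semantics: separate fmul + fadd
g(x) = muladd(x, 3.7, 3.0)   # explicit request to contract the multiply-add
k(x) = @fastmath 3.7x + 3.0  # relaxed semantics: LLVM may emit fma on its own

# Inspect the generated code for each to see whether fma/muladd appears:
# @code_llvm g(4.0)
# @code_native k(4.0)
```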

-erik


> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding for
> some reason, so I may need to just build from source.
>
> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter wrote:
>>
>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <rack...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>   First of all, does LLVM essentially fma or muladd expressions like
>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use
>>> `muladd` and `fma` on these types of instructions (is there a macro for
>>> making this easier)?
>>>
>>
>> Yes, LLVM will use fma machine instructions -- but only if they lead to
>> the same round-off error as using separate multiply and add instructions.
>> If you do not care about the details of conforming to the IEEE standard,
>> then you can use the `@fastmath` macro that enables several optimizations,
>> including this one. This is described in the manual <
>> http://docs.julialang.org/en/release-0.5/manual/performance
>> -tips/#performance-annotations>.
>>
>>
>>   Secondly, I am wondering if my setup is not applying these operations
>>> correctly. Here's my test code:
>>>
>>> f(x) = 2.0x + 3.0
>>> g(x) = muladd(x,2.0, 3.0)
>>> h(x) = fma(x,2.0, 3.0)
>>>
>>> @code_llvm f(4.0)
>>> @code_llvm g(4.0)
>>> @code_llvm h(4.0)
>>>
>>> @code_native f(4.0)
>>> @code_native g(4.0)
>>> @code_native h(4.0)
>>>
>>> *Computer 1*
>>>
>>> Julia Version 0.5.0-rc4+0
>>> Commit 9c76c3e* (2016-09-09 01:43 UTC)
>>> Platform Info:
>>>   System: Linux (x86_64-redhat-linux)
>>>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>>   LAPACK: libopenblasp.so.0
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>>>
>>
>> This looks good, the "broadwell" architecture that LLVM uses should imply
>> the respective optimizations. Try with `@fastmath`.
>>
>> -erik
>>
>>
>>
>>
>>
>>> (the COPR nightly on CentOS7) with
>>>
>>> [crackauc@crackauc2 ~]$ lscpu
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):16
>>> On-line CPU(s) list:   0-15
>>> Thread(s) per core:1
>>> Core(s) per socket:8
>>> Socket(s): 2
>>> NUMA node(s):  2
>>> Vendor ID: GenuineIntel
>>> CPU family:6
>>> Model: 79
>>> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>> Stepping:  1
>>> CPU MHz:   1200.000
>>> BogoMIPS:  6392.58
>>> Virtualization:VT-x
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache:  256K
>>> L3 cache:  25600K
>>> NUMA node0 CPU(s): 0-7
>>> NUMA node1 CPU(s): 8-15
>>>
>>>
>>>
>>> I get the output
>>>
>>> define double @julia_f_72025(double) #0 {
>>> top:
>>>   %1 = fmul double %0, 2.00e+00
>>>   %2 = fadd double %1, 3.00e+00
>>>   ret double %2
>>> }
>>>
>>> define double @julia_g_72027(double) #0 {
>>> top:
>>>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
>>> double 3.00e+00)
>>>   ret double %1
>>> }
>>>
>>> define double @julia_h_72029(double) #0 {
>>> top:
>>>   %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double
>>> 3.00e+00)
>>>   ret double %1
>>> }
>>> .text
>>> Filename: fmatest.jl

Re: [julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Erik Schnetter
As matrix multiplication has a cost of O(N^3) while using only O(N^2)
elements, and since many integers (those smaller than 2^52 or so) can be
represented exactly as Float64 values, one approach could be to convert the
matrices to Float64, multiply them, and then convert back. For 64-bit
integers this might even be the fastest option allowed by common hardware.
For 32-bit integers, you could investigate whether using Float32 as
intermediate representation suffices.
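The conversion round-trip described above can be sketched as follows (a hedged example; it assumes every entry of the exact integer product stays within Float64's 2^53 exact-integer range, which the caller must verify):

```Julia
# Multiply integer matrices through BLAS by converting to Float64 and back.
# Only valid when all intermediate sums fit in Float64's exact integer range.
function intmul_via_float(A::Matrix{Int}, B::Matrix{Int})
    C = float(A) * float(B)   # dispatches to the optimized BLAS routine
    round.(Int, C)            # convert the (exact) float results back
end
```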

-erik

On Wed, Sep 21, 2016 at 7:18 PM, Lutfullah Tomak <tomaklu...@gmail.com>
wrote:

> Float matrix multiplication uses heavily optimized openblas but integer
> matrix multiplication is a generic one from julia and there is not much you
> can do to improve it a lot because not all cpus have simd multiplication
> and addition for integers.




-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Erik Schnetter
> ; Function Attrs: uwtable
> define double @julia_f_66153(double) #0 {
> top:
>   %1 = fmul double %0, 2.00e+00
>   %2 = fadd double %1, 3.00e+00
>   ret double %2
> }
>
> ; Function Attrs: uwtable
> define double @julia_g_66157(double) #0 {
> top:
>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00,
> double 3.00e+00)
>   ret double %1
> }
>
> ; Function Attrs: uwtable
> define double @julia_h_66158(double) #0 {
> top:
>   %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double
> 3.00e+00)
>   ret double %1
> }
> .text
> Filename: console
> pushq %rbp
> movq %rsp, %rbp
> Source line: 1
> addsd %xmm0, %xmm0
> movabsq $534749456, %rax# imm = 0x1FDFA110
> addsd (%rax), %xmm0
> popq %rbp
> retq
> nopl (%rax,%rax)
> .text
> Filename: console
> pushq %rbp
> movq %rsp, %rbp
> Source line: 2
> addsd %xmm0, %xmm0
> movabsq $534749584, %rax# imm = 0x1FDFA190
> addsd (%rax), %xmm0
> popq %rbp
> retq
> nopl (%rax,%rax)
> .text
> Filename: console
> pushq %rbp
> movq %rsp, %rbp
> movabsq $534749712, %rax# imm = 0x1FDFA210
> Source line: 3
> movsd dcabs164_(%rax), %xmm1  # xmm1 = mem[0],zero
> movabsq $534749720, %rax# imm = 0x1FDFA218
> movsd (%rax), %xmm2   # xmm2 = mem[0],zero
> movabsq $fma, %rax
> popq %rbp
> jmpq *%rax
> nop
>
> This seems to be similar to the first result.
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] accessing globals from keyword arg defaults

2016-09-18 Thread Erik Schnetter
As a work-around, you can define a function `getx() = (global x; x)`, and
use it to access the global variable x.
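Put together with the original question, the work-around looks like this (a sketch; at top level the keyword default then picks up the global value at call time):

```Julia
x = 3
getx() = (global x; x)     # helper that reads the global x

function f(; x = getx())   # default value evaluated when f is called
    x
end

f()       # uses the global: 3
f(x = 5)  # an explicit keyword still overrides the default: 5
```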

-erik

On Sun, Sep 18, 2016 at 5:31 PM, Marius Millea <mariusmil...@gmail.com>
wrote:

> I'd like to access global variables from the default values of keywords
> arguments, e.g.:
>
> x = 3
> function f(;x=x) #<- this default value of x here should refer to x in
> the global scope which is 3
>...
> end
>
> Is there any way to do this? I had guessed the following might work but it
> doesn't:
>
> function f(;x=(global x; x))
>...
> end
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-14 Thread Erik Schnetter
On Tue, Sep 13, 2016 at 9:36 PM, Anandaroop Ray <sparrowhaw...@gmail.com>
wrote:

> Cool! The colons approach makes sense to me, followed by splatting.
>
> I'm unfamiliar with the syntax here but when I try to create a tuple in
> the REPL using
>
> inds = ((:) for i in 1:3)
>
> I get
> ERROR: syntax: missing separator in tuple
>

That's a feature that will be available in Julia 0.5. You can instead write

inds = [(:) for i in 1:3]

which defines an array instead of a generator. (The "fill" syntax mentioned
below works as well, and is even shorter.)
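For concreteness, the array-comprehension form splats the same way the generator would (a sketch; `rank` and `A` are illustrative):

```Julia
rank = 3
inds = [(:) for i in 1:rank]     # Vector of Colon objects; fill(:, rank) also works
A = rand(2, 2, 2, 5)
A[inds..., 1] == A[:, :, :, 1]   # splatted colons select the same slice
```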

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Curious parsing behavior

2016-09-14 Thread Erik Schnetter
There was a talk at JuliaCon suggesting that parsing ambiguities are often
best resolved by throwing an error: "Fortress: Features and Lessons
Learned".

-erik

On Wed, Sep 14, 2016 at 12:01 PM, David P. Sanders <dpsand...@gmail.com>
wrote:

>
>
> El miércoles, 14 de septiembre de 2016, 11:12:52 (UTC-4), David Gleich
> escribió:
>>
>> Ahah! That explains it.
>>
>> Is there a better way to create floating point literals that avoid this?
>>
>
> I think using 1782.0 instead of 1782. (without the 0) will solve this?
> I seem to remember there was an issue to deprecate the style without the 0.
>
>
>>
>> David
>>
>> On Wednesday, September 14, 2016 at 9:26:42 AM UTC-4, Steven G. Johnson
>> wrote:
>>>
>>>
>>>
>>> On Wednesday, September 14, 2016 at 9:18:11 AM UTC-4, David Gleich wrote:
>>>>
>>>> Can anyone give me a quick explanation for why these statements seem to
>>>> parse differently?
>>>>
>>>> julia> 1782.^12. + 1841.^12.
>>>>
>>>
>>> .^ and .+ are (elementwise/broadcasting) operators in Julia, and there
>>> is a parsing ambiguity here because it is not clear whether the . goes with
>>> the operator or the number.
>>>
>>> See also the discussion at
>>>
>>>  https://github.com/JuliaLang/julia/issues/15731
>>>  https://github.com/JuliaLang/julia/pull/11529
>>>
>>> for possible ways that this might be made less surprising in the future.
>>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Erik Schnetter
If you have a varying rank, then you should probably use something like
`CartesianIndex` and `CartesianRange` to represent the indices, or possibly
tuples of integers. You would then use the splatting operator to create the
indexing instructions:

```Julia
indrange = CartesianRange(xyz)
dset[indrange..., i] = slicedim
```

I don't know whether the expression `indrange...` works as-is, or whether
you have to manually create a tuple of `UnitRange`s.

If you want to use colons, then you'd write

```Julia
inds = ((:) for i in 1:rank)
dset[inds..., i] = xyz
```

-erik




On Tue, Sep 13, 2016 at 5:08 PM, Anandaroop Ray <sparrowhaw...@gmail.com>
wrote:

> Many thanks for your comprehensive recommendations. I think HDF5 views are
> probably what I need to go with - will read up more and then ask.
>
> What I mean about dimension is rank, really. The shape is always the same
> for all samples. One slice for storage, i.e., one sample, could be chunked
> as dset[:,:,i] or dset[:,:,:,:,i] but always of the form, dset[:,...,:i],
> depending on input to the code at runtime.
>
> Thanks
>
> On 13 September 2016 at 14:47, Erik Schnetter <schnet...@gmail.com> wrote:
>
>> On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker <sparrowhaw...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I'm new to Julia, and have been able to accomplish a lot of what I used
>>> to do in Matlab/Fortran, in very little time since I started using Julia in
>>> the last three months. Here's my newest stumbling block.
>>>
>>> I have a process which creates nsamples within a loop. Each sample takes
>>> a long time to compute as there are expensive finite difference operations,
>>> which ultimately lead to a sample, say 1 to 10 seconds. I have to store
>>> each of the nsamples, and I know the size and dimensions of each of the
>>> nsamples (all samples have the same size and dimensions). However,
>>> depending on the run time parameters, each sample may be a 32x32 image or
>>> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
>>> hyper-rectangle. To be clear, each sample can be an arbitrary dimension
>>> hyper-rectangle, specified at run time.
>>>
>>> Obviously, since I don't want to lose computation and want to see
>>> incremental progress, I'd like to do incremental saves of these samples on
>>> disk, instead of waiting to collect all nsamples at the end. For instance,
>>> if I had to store 1000 samples of size 64x64, I thought perhaps I could
>>> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
>>> approach? If so, here's a prototype program to do so, but it depends on my
>>> knowing the number of dimensions of the slice, which is not known until
>>> runtime,
>>>
>>> using HDF5
>>>
>>> filename = "test.h5"
>>> # open file
>>> fmode ="w"
>>> # get a file object
>>> fid = h5open(filename, fmode)
>>> # matrix to write in chunks
>>> B = rand(64,64,1000)
>>> # figure out its dimensions
>>> sizeTuple = size(B)
>>> Ndims = length(sizeTuple)
>>> # set up to write in chunks of sizeArray
>>> sizeArray = ones(Int, Ndims)
>>> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
>>> array is :...:,1
>>> # create a dataset models within root
>>> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
>>> "chunk", sizeArray)
>>> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
>>> close(fid)
>>>
>>> This works, but the second last line, dset[:,:,i] requires syntax
>>> specific to writing a slice of a dimension 3 array - but I don't know the
>>> dimensions until run time. Of course I could just write to a flat binary
>>> file incrementally, but HDF5.jl could make my life so much simpler!
>>>
>>
>> HDF5 supports "extensible datasets", which were created for use cases
>> such as this one. I don't recall the exact syntax, but if I recall
>> correctly, you can specify one dimension (the first one in C, the last one
>> in Julia) to be extensible, and then you can add more data as you go. You
>> will probably need to specify a chunk size, which could be the size of the
>> increment in your case. Given file system speeds, a chunk size smaller than
>> a few megabytes probably doesn't make much sense (i.e. will slow things down).
>>
>> If you want to monitor the HDF5 file as it is being written, look at the
>> SWMR feature. This requires HDF5 1.10; unfortunately, Julia will by
>> default often still install version 1.8.

Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Erik Schnetter
On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker <sparrowhaw...@gmail.com>
wrote:

> Hi,
>
> I'm new to Julia, and have been able to accomplish a lot of what I used to
> do in Matlab/Fortran, in very little time since I started using Julia in
> the last three months. Here's my newest stumbling block.
>
> I have a process which creates nsamples within a loop. Each sample takes a
> long time to compute as there are expensive finite difference operations,
> which ultimately lead to a sample, say 1 to 10 seconds. I have to store
> each of the nsamples, and I know the size and dimensions of each of the
> nsamples (all samples have the same size and dimensions). However,
> depending on the run time parameters, each sample may be a 32x32 image or
> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
> hyper-rectangle. To be clear, each sample can be an arbitrary dimension
> hyper-rectangle, specified at run time.
>
> Obviously, since I don't want to lose computation and want to see
> incremental progress, I'd like to do incremental saves of these samples on
> disk, instead of waiting to collect all nsamples at the end. For instance,
> if I had to store 1000 samples of size 64x64, I thought perhaps I could
> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
> approach? If so, here's a prototype program to do so, but it depends on my
> knowing the number of dimensions of the slice, which is not known until
> runtime,
>
> using HDF5
>
> filename = "test.h5"
> # open file
> fmode ="w"
> # get a file object
> fid = h5open(filename, fmode)
> # matrix to write in chunks
> B = rand(64,64,1000)
> # figure out its dimensions
> sizeTuple = size(B)
> Ndims = length(sizeTuple)
> # set up to write in chunks of sizeArray
> sizeArray = ones(Int, Ndims)
> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
> array is :...:,1
> # create a dataset models within root
> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
> "chunk", sizeArray)
> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
> close(fid)
>
> This works, but the second last line, dset[:,:,i] requires syntax
> specific to writing a slice of a dimension 3 array - but I don't know the
> dimensions until run time. Of course I could just write to a flat binary
> file incrementally, but HDF5.jl could make my life so much simpler!
>

HDF5 supports "extensible datasets", which were created for use cases such
as this one. I don't recall the exact syntax, but if I recall correctly,
you can specify one dimension (the first one in C, the last one in Julia)
to be extensible, and then you can add more data as you go. You will
probably need to specify a chunk size, which could be the size of the
increment in your case. Given file system speeds, a chunk size smaller than
a few megabytes probably doesn't make much sense (i.e. will slow things down).
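A hedged sketch of the extensible-dataset approach with HDF5.jl (the API names `d_create`, `dataspace` with `max_dims`, and `set_dims!` are as I recall them from this era of the package — check the current documentation; `compute_sample` is hypothetical):

```Julia
using HDF5

nsamples = 1000
fid = h5open("samples.h5", "w")
# Extensible along the last (Julia) dimension; one 64x64 sample per chunk.
dset = d_create(fid, "models", datatype(Float64),
                dataspace((64, 64, 0), max_dims = (64, 64, -1)),
                "chunk", (64, 64, 1))
for i in 1:nsamples
    sample = compute_sample()      # hypothetical: returns a 64x64 Array
    set_dims!(dset, (64, 64, i))   # grow the dataset by one slice
    dset[:, :, i] = sample         # incremental save
end
close(fid)
```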

If you want to monitor the HDF5 file as it is being written, look at the
SWMR feature. This requires HDF5 1.10; unfortunately, Julia will by
default often still install version 1.8.

If you want to protect against crashes of your code so that you don't lose
progress, then HDF5 is probably not right for you. Once an HDF5 file is
open for writing, the on-disk state might be inconsistent, so that you can
lose all data when your code crashes. In this case, you might want to write
data into different files, one per increment. HDF5 1.10 offers "views",
which are umbrella files that stitch together datasets stored in other
files.

If you are looking for generic advice for setting up things with HDF5, then
I recommend their documentation. If you are looking for how to access these
features in Julia, or if you notice a feature that is not available in
Julia, then we'll be happy to explain or correct things.

What do you mean by "dimension only known at run time" -- do you mean what
Julia calls "size" (shape) or what Julia calls "dim" (rank)?

Do all datasets have the same size, or do they differ? If they differ, then
putting them into the same dataset might not make sense; in this case, I
would write them into different datasets.

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: idiom for standard basis vector (eg [0,1,0])

2016-09-09 Thread Erik Schnetter
Here is something short:
```Julia
bv(i,n) = [j==i for j in 1:n]
```
This returns a vector of `Bool`. In Julia, `Bool` is a numeric type that is
automatically promoted to `Int` / `Float` / etc. where necessary. If you
want, you can add an explicit type (e.g. `Float64`) in front of the array
comprehension.
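For example (the `Float64`-typed variant is the explicit-element-type version mentioned above):

```Julia
bv(i, n)  = [j == i for j in 1:n]           # Vector{Bool}
bvf(i, n) = Float64[j == i for j in 1:n]    # typed comprehension: Vector{Float64}

bv(2, 4)                  # Bool entries: false, true, false, false
bv(2, 4) + [1, 1, 1, 1]   # Bool promotes to Int: [1, 2, 1, 1]
```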

-erik


On Fri, Sep 9, 2016 at 1:16 PM, Matt Bauman <mbau...@gmail.com> wrote:

> You could implement:
>
> Base.getindex(U::UniformScaling, I::AbstractArray, j::Int) = [U[i,j] for
> i in I]
> Base.getindex(U::UniformScaling, i::Int, J::AbstractArray) = [U[i,j] for
> j in J]
> Base.getindex(U::UniformScaling, I::AbstractArray, J::AbstractArray) = [U[
> i,j] for i in I, j in J]
>
> Then:
>
> julia> I[1:4, 2]
> 4-element Array{Int64,1}:
>  0
>  1
>  0
>  0
>
>
> On Friday, September 9, 2016 at 11:58:44 AM UTC-5, Steven G. Johnson wrote:
>>
>> On Friday, September 9, 2016 at 11:56:58 AM UTC-4, Christoph Ortner wrote:
>>>
>>> I predict that - right now - somebody is writing an answer explaining
>>> why this is terrible and you need an abstract array type with lazy
>>> evaluation. ;)
>>
>>
>> Well, we already have the built-in constant I.   I[i,j] is the j-th
>> element of the i-th unit vector, computed lazily, i.e. the Kronecker delta
>> function.
>>
>> But yes, if you need unit vectors a lot you may be doing something wrong.
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Juila vs Mathematica (Wolfram language): high-level features

2016-09-01 Thread Erik Schnetter
e role of types. Sympy (in python and Julia) aim to add symbolic
>> computation capability to Julia.  They are more 'typed' than Mma and
>> SJulia. But, it seems that python sympy is hybrid in this respect and also
>> supports generic expressions.
>>
>>
>>> To provide a starting point, here is the definition of type in Julia
>>> from documentation http://docs.julialang.org/en/l
>>> atest/manual/metaprogramming/
>>>
>>> type Expr
>>>   head::Symbol
>>>   args::Array{Any,1}
>>>   typend
>>>
>>>
>>> Maybe there's a typo in docs (line 4) but it doesn't really matter. What
>>> do Julia users do, for example, to avoid boilerplate code defining lots of
>>> types? I understand how to avoid writing boilerplate definitions in WL: it
>>> may not be easy but at least I know where to begin in case I need a program
>>> that would write new definitions or update existing ones.
>>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] code_native complex for simple snippet

2016-08-30 Thread Erik Schnetter
With unsigned shift counts the generated code should be shorter, since
Julia then knows in which direction to shift.
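The difference can be seen by comparing signed and unsigned shift counts (a sketch; with a signed count, a negative value would have to shift in the other direction, so extra checks are generated):

```Julia
shl_signed(x::Int64, n::Int64)    = x << n   # n might be negative: branchy code
shl_unsigned(x::Int64, n::UInt64) = x << n   # direction known: simpler shift

# Compare the generated code:
# @code_native shl_signed(1, 3)
# @code_native shl_unsigned(1, UInt64(3))
```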

-erik

On Tue, Aug 30, 2016 at 7:24 PM, mmh <mumo...@gmail.com> wrote:

> Is there an unsafe version of >> and <<  that does not do range checks?
>
>
> On Tuesday, August 30, 2016 at 6:46:36 PM UTC-4, Yichao Yu wrote:
>>
>>
>>
>> On Tue, Aug 30, 2016 at 6:25 PM, mmh <mum...@gmail.com> wrote:
>>
>>>
>>> [screenshot: snip_20160830182427.png]
>>>
>>>
>>>
>> FWIW, your "simple" function has 10 operations that generate ~40
>> instructions when fully inlined including the range check for the shifts,
>> prologue, epilogue, struct return etc, that seems like a reasonable number
>> for me.
>>
>>>
>>>
>>>
>>> [screenshot: snip_20160830182032.png]
>>>
>>> The output for the Int32 and Int64 case
>>>
>>> The following looks particularly bad for the Int32 case
>>>
>>>  movabsq $">>", %rax
>>>  movl $6, %edx
>>>  callq   *%rax
>>>
>>> [screenshots: snip_20160830182154.png, snip_20160830182101.png]
>>>
>>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Set Data Structure

2016-08-29 Thread Erik Schnetter
On Mon, Aug 29, 2016 at 3:59 PM, Jared Crean <jcrea...@gmail.com> wrote:

> Here is an oddity:
>
> julia> s
> Set([2,3,1])
>
> julia> in(s, 2)
> false
>
> julia> in(2, s)
> true
>
> I would have thought the first use of in would be an error because asking
> if a set is contained in a number is not defined.  Is there some other
> interpretation of the operation?
>

In Julia, every number is a collection that contains just this number. Thus
you are essentially asking whether the set `s` is equal to this number,
which it is not.
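In other words (a small illustration at the REPL):

```Julia
s = Set([1, 2, 3])

in(2, s)   # true:  is 2 an element of the set?
2 in s     # true:  infix form of the same call
in(s, 2)   # false: iterates the number 2 as a one-element collection,
           #        so this effectively asks whether s == 2
```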

-erik

On Monday, August 29, 2016 at 3:27:30 PM UTC-4, Jared Crean wrote:
>>
>> Ah, yes.  That's it.
>>
>>   Thanks,
>> Jared Crean
>>
>> On Monday, August 29, 2016 at 3:11:02 PM UTC-4, Erik Schnetter wrote:
>>>
>>> Jared
>>>
>>> Are you looking for the function `in`?
>>>
>>> -erik
>>>
>>> On Mon, Aug 29, 2016 at 3:06 PM, Jared Crean <jcre...@gmail.com> wrote:
>>>
>>>> I'm looking for a data structure that allows O(1) querying if a value
>>>> is contained in the data structure, and reasonably fast construction of the
>>>> data structure given that the initial size is unknown (although this
>>>> criteria is not that strict).  I was looking at the Set in base, but I
>>>> can't find a way to test if a Set contains a particular value.  I
>>>> also don't know about the efficiency of appending to a Set.  Any
>>>> suggestions or information about Set would be appreciated.
>>>>
>>>> Jared Crean
>>>>
>>>
>>>
>>>
>>> --
>>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>>> ca/personal/eschnetter/
>>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Set Data Structure

2016-08-29 Thread Erik Schnetter
Jared

Are you looking for the function `in`?

-erik

On Mon, Aug 29, 2016 at 3:06 PM, Jared Crean <jcrea...@gmail.com> wrote:

> I'm looking for a data structure that allows O(1) querying if a value is
> contained in the data structure, and reasonably fast construction of the
> data structure given that the initial size is unknown (although this
> criteria is not that strict).  I was looking at the Set in base, but I
> can't find a way to test if a Set contains a particular value.  I
> also don't know about the efficiency of appending to a Set.  Any
> suggestions or information about Set would be appreciated.
>
> Jared Crean
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Performant methods for enclosing parameters?

2016-08-23 Thread Erik Schnetter
`getelementptr inbounds` takes as input a pointer to a C struct (or
equivalent) as well as the offset to a field in that struct, and returns
the pointer to that field. It's how LLVM accesses fields in a struct. This
usually becomes a single add instruction, and can often be folded into
other instructions. In other words, it is in this case a very cheap
instruction.

I prefer to look at the generated native code (`@code_native`) because this
tells me what is actually going on. LLVM is quite high level, and it can be
difficult to make performance predictions from it.

-erik


On Tue, Aug 23, 2016 at 8:14 PM, Chris Rackauckas <rackd...@gmail.com>
wrote:

> In my scenarios, the function k is given by the user. So this method won't
> work. If you move that definition of k outside of test() and pass it into
> your test, you'll see that the LLVM code explodes (at least it does for
> me). The issue is defining a closure on a function which was defined
> outside of the current scope.
>
>
> BTW, what exactly is "%3 = getelementptr inbounds %"##30#34", %"##30#34"*
> %0, i64 0, i32 1". How harmful is it to performance (I just see inbounds
> and an extra line and want to get rid of it, but if it's harmless then
> there are two solutions in here).
>
>
> On Tuesday, August 23, 2016 at 5:04:21 PM UTC-7, Andrew wrote:
>>
>> I'm pretty confused about what you're trying to accomplish beyond
>> standard closures. What is your ParameterHolder type for?
>>
>> I rewrote your first example wrapping everything in a function. Is this
>> doing what you want it to? The LLVM looks fine.
>> function test(α)
>> k(u::Float64,t::Float64,α) = α*u
>> G = (u,t) -> k(u,t,1.01)
>> G2 = (u,t)->k(u,t,α)
>> const β = 1.01
>> G3 = (u,t)->k(u,t,β)
>>
>> @code_llvm G(1., 2.)
>> @code_llvm G2(1., 2.)
>> @code_llvm G3(1., 2.)
>> end
>>
>> julia> test(2.)
>>
>> define double @"julia_#28_70100"(double, double) #0 {
>> top:
>>   %2 = fmul double %0, 1.01e+00
>>   ret double %2
>> }
>>
>> define double @"julia_#29_70102"(%"##29#33"*, double, double) #0 {
>> top:
>>   %3 = getelementptr inbounds %"##29#33", %"##29#33"* %0, i64 0, i32 0
>>   %4 = load double, double* %3, align 8
>>   %5 = fmul double %4, %1
>>   ret double %5
>> }
>>
>> define double @"julia_#30_70104"(%"##30#34"*, double, double) #0 {
>> top:
>>   %3 = getelementptr inbounds %"##30#34", %"##30#34"* %0, i64 0, i32 1
>>   %4 = load double, double* %3, align 8
>>   %5 = fmul double %4, %1
>>   ret double %5
>> }
>>
>>
>>
>>
>> On Tuesday, August 23, 2016 at 6:00:37 PM UTC-4, Chris Rackauckas wrote:
>>>
>>> Yes, I am looking for a closure which has the least overhead possible
>>> (this was all in v0.5). For example, for re-ordering parameters: g =
>>> (du,u,t) -> f(t,u,du), or for enclosing parameter values as above.
>>>
>>> I'll give the Val method a try and see whether the compile time is
>>> significant (it will probably be an option).
>>>
>>> Even a solution which is like translator1/2 where the inbounds check is
>>> able to be turned off would likely be performant enough to not be
>>> noticeably different.
>>>
>>> On Tuesday, August 23, 2016 at 1:27:25 PM UTC-7, Erik Schnetter wrote:
>>>>
>>>> Chris
>>>>
>>>> I don't quite understand what you mean. Are you looking for a closure /
>>>> lambda expression?
>>>>
>>>> ```Julia
>>>> function myfunc(x0, x1, alpha)
>>>> f(x) = alpha * x
>>>> ODE.solve(f, x0, x1)
>>>> end
>>>> ```
>>>>
>>>> Or is it important for your that your function `f` is optimized, i.e.
>>>> you want to re-run the code generator (expensive!) every time there's a new
>>>> value for `alpha`? For this, you can use `Val` (but please benchmark 
>>>> first):
>>>>
>>>> ```Julia
>>>> function f{alpha}(x, ::Type{Val{alpha}})
>>>> alpha * x
>>>> end
>>>>
>>>> function myfunc(x0, x1, alpha)
>>>> f1(x) = f(x, Val{alpha})
>>>> ODE.solve(f, x0, x1)
>>>> end
>>>> ```
>>>>
>>>> This will have a marginally faster evaluation of `f`, at the cost of
>>>> compiling a separate function for each value of `alpha`.
>>>>

Re: [julia-users] Performant methods for enclosing parameters?

2016-08-23 Thread Erik Schnetter
hread_ptr, i64 -2672
>   %ptls = bitcast i8* %ptls_i8 to %jl_value_t***
>   %3 = alloca [6 x %jl_value_t*], align 8
>   %.sub = getelementptr inbounds [6 x %jl_value_t*], [6 x %jl_value_t*]*
> %3, i64 0, i64 0
>   %4 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %3, i64 0,
> i64 2
>   %5 = bitcast %jl_value_t** %4 to i8*
>   call void @llvm.memset.p0i8.i32(i8* %5, i8 0, i32 32, i32 8, i1 false)
>   %6 = bitcast [6 x %jl_value_t*]* %3 to i64*
>   store i64 8, i64* %6, align 8
>   %7 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %3, i64 0,
> i64 1
>   %8 = bitcast i8* %ptls_i8 to i64*
>   %9 = load i64, i64* %8, align 8
>   %10 = bitcast %jl_value_t** %7 to i64*
>   store i64 %9, i64* %10, align 8
>   store %jl_value_t** %.sub, %jl_value_t*** %ptls, align 8
>   %11 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %3, i64 0,
> i64 5
>   %12 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %3, i64 0,
> i64 4
>   %13 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %3, i64 0,
> i64 3
>   %14 = bitcast %jl_value_t* %0 to i64*
>   %15 = load i64, i64* %14, align 8
>   %16 = getelementptr %jl_value_t, %jl_value_t* %0, i64 1
>   %17 = bitcast %jl_value_t* %16 to i64*
>   %18 = load i64, i64* %17, align 8
>   %19 = bitcast %jl_value_t** %4 to i64*
>   store i64 %15, i64* %19, align 8
>   %20 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1432, i32 16)
>   %21 = getelementptr inbounds %jl_value_t, %jl_value_t* %20, i64 -1, i32 0
>   store %jl_value_t* inttoptr (i64 139896322417392 to %jl_value_t*),
> %jl_value_t** %21, align 8
>   %22 = bitcast %jl_value_t* %20 to double*
>   store double %1, double* %22, align 8
>   store %jl_value_t* %20, %jl_value_t** %13, align 8
>   %23 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1432, i32 16)
>   %24 = getelementptr inbounds %jl_value_t, %jl_value_t* %23, i64 -1, i32 0
>   store %jl_value_t* inttoptr (i64 139896322417392 to %jl_value_t*),
> %jl_value_t** %24, align 8
>   %25 = bitcast %jl_value_t* %23 to double*
>   store double %2, double* %25, align 8
>   store %jl_value_t* %23, %jl_value_t** %12, align 8
>   %26 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1432, i32 16)
>   %27 = getelementptr inbounds %jl_value_t, %jl_value_t* %26, i64 -1, i32 0
>   store %jl_value_t* inttoptr (i64 139896322417392 to %jl_value_t*),
> %jl_value_t** %27, align 8
>   %28 = bitcast %jl_value_t* %26 to i64*
>   store i64 %18, i64* %28, align 8
>   store %jl_value_t* %26, %jl_value_t** %11, align 8
>   %29 = call %jl_value_t* @jl_apply_generic(%jl_value_t** %4, i32 4)
>   %30 = load i64, i64* %10, align 8
>   store i64 %30, i64* %8, align 8
>   ret %jl_value_t* %29
> }
>
>
>
>
>
>
> So in the end, I couldn't find a way within a function to enclose the
> parameter α and compile a function which actually treats α as a constant
> and optimizes it all the way. However, the ParameterHolder and translator
> results are getting pretty close, but I can't seem to get rid of the bounds
> checking.
>
> Does anyone else have a better solution? Or is this supposed to "act
> nicer" by default?
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] mapreduce two variable (x,y)

2016-08-22 Thread Erik Schnetter
This function could be added...

Currently, you'll have to `zip` the two collections, and unzip them in your
function:

```Julia
mapreduce(a -> begin x,y=a; 2*y^2 - 1 + x end, +, zip([1,2,3], [1,2,3]))
```
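The same zipped call can also index the tuple directly instead of destructuring; on the example data the terms are 2 + 9 + 20:

```Julia
mapreduce(a -> 2*a[2]^2 - 1 + a[1], +, zip([1,2,3], [1,2,3]))  # 31
```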

-erik


On Mon, Aug 22, 2016 at 7:31 PM, <jmarcellopere...@ufpi.edu.br> wrote:

> how to write mapreduce for two variable? Ex:
>
> mapreduce((x,y)-> 2*y.^2 - 1 + x, + ,1:3,1:3)
>
> LoadError: wrong number of arguments
> while loading In[182], in expression starting on line 1
>
> in anonymous at ./In[182]:1
>
>
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] signature of parametric "::Type" methods

2016-08-22 Thread Erik Schnetter
Your observation is correct. This remains the same with Julia 0.5.

`foo{T}` and `foo{T,N,P,S}` are different types. Due to invariance (see the
manual), the method will not match.

Whenever you define a type `foo{T,N,P,S}`, Julia automatically defines an
abstract type `foo{T}` as well, with `foo{T,N,P,S} <: foo{T}`.
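A minimal sketch of both points (0.4/0.5 syntax, using the `foo` from the question):

```Julia
immutable foo{T, N, P, S} end

foo{1,2,3,4} <: foo{1}    # true: trailing parameters are left free
foo{1,2,3,4} === foo{1}   # false: distinct types, so due to invariance
                          # ::Type{foo{T}} cannot match Type{foo{1,2,3,4}}
```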

-erik


On Mon, Aug 22, 2016 at 4:46 PM, Davide Lasagna <lasagnadav...@gmail.com>
wrote:

> Hi,
>
> Why do i need to specify all the parameters in the signature of methods
> that take `::Type` arguments?
>
> Demo:
> # define some parametric type
> julia> immutable foo{T, N, P, S}
>end
>
> # this will not work, but it seems natural to omit trailing unused
> parameters
> julia> bar{T}(::Type{foo{T}}) = T
> bar (generic function with 1 method)
>
> # yes, it does not work
> julia> bar(foo{1, 2, 3, 4})
> ERROR: MethodError: `bar` has no method matching bar(::Type{foo{1,2,3,4}})
>
> # this will do the job,
> julia> bar2{T, N, P, S}(::Type{foo{T, N, P, S}}) = T
> bar2 (generic function with 1 method)
>
> # yep
> julia> bar2(foo{1, 2, 3, 4})
> 1
>
>
> In methods that take instances of a type it is usually not necessary to
> specify trailing unused parameter, meaning that, for instance
> baz{T}(f::foo{T}) = one(T)
> will work for any values of `N`, `P` and `S`.
>
> Thanks!
>
> P.S. This is on 0.4
>
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: memory allocations when accessing tuple fields of a type

2016-08-20 Thread Erik Schnetter
My guess is that Julia cannot determine all the types of the variables and
expressions used in this first if statement. This is surprising, because it
should. I've tried locally with Julia 0.4 and Julia 0.5, using
code_warntype and code_native, and couldn't find a problem.
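For reference, the kind of check mentioned above can be sketched like this, using a trimmed version of the poster's code (`load_body!` omitted; the `Tuple{Int,Int,Int}` alias is taken from the follow-up message):

```Julia
typealias ElementSize Tuple{Int, Int, Int}

type ElementBuffer
    size::ElementSize
    alloc::ElementSize
    data::Array{Float64, 3}
end

function set_size!(bf::ElementBuffer, sz::ElementSize)
    if (bf.alloc[1] < sz[1]) || (bf.alloc[2] < sz[2]) || (bf.alloc[3] < sz[3])
        bf.alloc = map(max, bf.alloc, sz)
        bf.data = Array(Float64, bf.alloc...)
    end
    bf.size = sz
    nothing
end

bf = ElementBuffer((1,1,1), (1,1,1), zeros(1,1,1))
# All field accesses should infer concretely; Any/Union entries in the
# output would explain allocations on the `if` line.
@code_warntype set_size!(bf, (2,2,2))
```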

Which Julia version are you using?

Can you post a complete example that exactly reproduces the problem?

-erik


On Sat, Aug 20, 2016 at 7:46 AM, Uri Patish <uripat...@gmail.com> wrote:

> Whoops, the copy-paste got messed up. Here it is,
>
> typealias ElementSize Tuple{Int, Int, Int}
>
> Uri
>
> On Friday, August 19, 2016 at 3:06:10 PM UTC+3, Uri Patish wrote:
>>
>> Hi, I have to following types,
>>
>> typealias ElementSize
>>
>> typealias Element Array{Float64, 3}
>>
>> type ElementBuffer
>> size::ElementSize
>> alloc::ElementSize
>> data::Element
>> end
>>
>> The following function is called many times,
>>
>> function set_size!(bf::ElementBuffer, sz::ElementSize,
>> load_old_data::Bool = true)
>>   if (bf.alloc[1] < sz[1]) || (bf.alloc[2] < sz[2]) || (bf.alloc[3] <
>> sz[3])
>> old_data = bf.data
>> bf.alloc = map(max, bf.alloc, sz)
>> bf.data = Element(bf.alloc)
>> if load_old_data
>>   load_body!(bf, old_data)
>> end
>> end
>> bf.size = sz
>>   nothing
>> end
>>
>> I've seen there is a lot of memory allocation going on, so I've
>> benchmarked the former code using Julia's track-allocation=user option. The
>> result was surprising,
>>
>> - function set_size!(bf::ElementBuffer, sz::ElementSize,
>> load_old_data::Bool = true)
>> 1710097664   if (bf.alloc[1] < sz[1]) || (bf.alloc[2] < sz[2]) ||
>> (bf.alloc[3] < sz[3])
>> 0 old_data = bf.data
>>   2894592 bf.alloc = map(max, bf.alloc, sz)
>>  59783056 bf.data = Element(bf.alloc)
>> 0 if load_old_data
>> 0   load_body!(bf, old_data)
>> - end
>> - end
>> 0 bf.size = sz
>> 0   nothing
>> - end
>>
>> After running the code in debug, I've seen that there is an allocation
>> happening every time the alloc field is accessed. If I change the field
>> type from a tuple to an array this phenomenon disappears. I can't figure
>> out why this allocation is happening, any ideas?
>>
>> Uri
>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-19 Thread Erik Schnetter
On Fri, Aug 19, 2016 at 10:07 AM, Yichao Yu <yyc1...@gmail.com> wrote:

>
>
> On Fri, Aug 19, 2016 at 10:01 PM, Erik Schnetter <schnet...@gmail.com>
> wrote:
>
>> On Fri, Aug 19, 2016 at 9:53 AM, Yichao Yu <yyc1...@gmail.com> wrote:
>>
>>>
>>> On Fri, Aug 19, 2016 at 9:01 PM, Oliver Schulz <
>>> oliver.sch...@tu-dortmund.de> wrote:
>>>
>>>> Hi Eric,
>>>>
>>>> nice - this is exactly what I meant. I wonder if Julia Symbols
>>>> themselves could become like this in the future.
>>>>
>>>
>>> As jameson said already, isbits or not is irrelevant here and we are
>>> working towards making non isbits type inlinable.
>>>
>>> The ValueSymbol type is also not a true isbits type, you can't save it
>>> to sysimg or use it with mmap array for example (well, you can, but you
>>> won't get the answer you want)
>>>
>>
>> Okay, call it `pointerfree` then.
>>
>
> Well, the point is that it doesn't have all the necessary properties assumed
> by other parts of the system. `isbits` is a synonym of `pointerfree`.
>

Can I mmap regular symbols? What ensures that the symbols' values are
contained in the mmapped region? What ensures that mmapped symbols are
unique with respect to local symbols?

Saving in a system image is an operation for experts; there are good
solutions for ensuring symbols are stored and loaded properly, such as
serializing them.

-erik

On 17.08.2016 18:44, Erik Schnetter wrote:
>>>>
>>>>> See <https://github.com/eschnett/ValueSymbols.jl> for what I had in
>>>>> mind.
>>>>>
>>>>> -erik
>>>>>
>>>>> On Wed, Aug 17, 2016 at 10:32 AM, Yichao Yu <yyc1...@gmail.com
>>>>> <mailto:yyc1...@gmail.com>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 17, 2016 at 10:19 PM, Oliver Schulz
>>>>> <oliver.sch...@tu-dortmund.de <mailto:oliver.sch...@tu-dortmund.de
>>>>> >>
>>>>>
>>>>> wrote:
>>>>>
>>>>> Hi Yichao
>>>>>
>>>>> I get that - I was just curious why (and I guess there's likely
>>>>> a very good reason for it). My intuition would be that a
>>>>> non-GCed interned string type could be represented by a
>>>>> bitstype
>>>>> (i.e. a pointer or index to the string table). So I was
>>>>> surprised that Symbol is an object reference and wanted to
>>>>> understand this a bit better.
>>>>>
>>>>>
>>>>> It's a pointer to managed memory with tag that's all what's needed
>>>>> for it to be a !pointerfree type. There are way more things that we
>>>>> don't GC and it has nothing to do with whether it's a isbits type
>>>>> or
>>>>> not.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Oliver
>>>>>
>>>>>
>>>>> On Wednesday, August 17, 2016 at 3:35:41 PM UTC+2, Yichao Yu
>>>>> wrote:
>>>>>
>>>>>
>>>>> > PS: Yes, symbols in Julia should be bitstypes.
>>>>>
>>>>> No, it's a object reference and isn't a `isbits()` type.
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean
>>>>> <cedric...@gmail.com> wrote:
>>>>>
>>>>> Good points.
>>>>>
>>>>> Given that symbols have a name which is a string, I
>>>>> don't see how they could be a bits-type unless they
>>>>> just became numbers to index into a global array
>>>>> somewhere.. i.e. a pointer. Stack-allocating
>>>>> non-bits-type immutables is on the roadmap to 1.0,
>>>>> so that should solve your problems.
>>>>>
>>>>> Maybe you can solve the CategorizedPoint problem by
>>>>> using an @enum.
>>>>>
>>>>> On Wed, Aug 17, 2016 at 6:48 AM, 

Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-19 Thread Erik Schnetter
On Fri, Aug 19, 2016 at 9:53 AM, Yichao Yu <yyc1...@gmail.com> wrote:

>
> On Fri, Aug 19, 2016 at 9:01 PM, Oliver Schulz <
> oliver.sch...@tu-dortmund.de> wrote:
>
>> Hi Eric,
>>
>> nice - this is exactly what I meant. I wonder if Julia Symbols themselves
>> could become like this in the future.
>>
>
> As jameson said already, isbits or not is irrelevant here and we are
> working towards making non isbits type inlinable.
>
> The ValueSymbol type is also not a true isbits type, you can't save it to
> sysimg or use it with mmap array for example (well, you can, but you won't
> get the answer you want)
>

Okay, call it `pointerfree` then.

-erik


Cheers,
>>
>> Oliver
>>
>>
>> On 17.08.2016 18:44, Erik Schnetter wrote:
>>
>>> See <https://github.com/eschnett/ValueSymbols.jl> for what I had in
>>> mind.
>>>
>>> -erik
>>>
>>> On Wed, Aug 17, 2016 at 10:32 AM, Yichao Yu <yyc1...@gmail.com
>>> <mailto:yyc1...@gmail.com>> wrote:
>>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 10:19 PM, Oliver Schulz
>>> <oliver.sch...@tu-dortmund.de <mailto:oliver.sch...@tu-dortmund.de>>
>>>
>>> wrote:
>>>
>>> Hi Yichao
>>>
>>> I get that - I was just curious why (and I guess there's likely
>>> a very good reason for it). My intuition would be that a
>>> non-GCed interned string type could be represented by a bitstype
>>> (i.e. a pointer or index to the string table). So I was
>>> surprised that Symbol is an object reference and wanted to
>>> understand this a bit better.
>>>
>>>
>>> It's a pointer to managed memory with tag that's all what's needed
>>> for it to be a !pointerfree type. There are way more things that we
>>> don't GC and it has nothing to do with whether it's a isbits type or
>>> not.
>>>
>>>
>>>
>>>
>>> Cheers,
>>>
>>> Oliver
>>>
>>>
>>> On Wednesday, August 17, 2016 at 3:35:41 PM UTC+2, Yichao Yu
>>> wrote:
>>>
>>>
>>> > PS: Yes, symbols in Julia should be bitstypes.
>>>
>>> No, it's a object reference and isn't a `isbits()` type.
>>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean
>>> <cedric...@gmail.com> wrote:
>>>
>>> Good points.
>>>
>>> Given that symbols have a name which is a string, I
>>> don't see how they could be a bits-type unless they
>>> just became numbers to index into a global array
>>> somewhere.. i.e. a pointer. Stack-allocating
>>> non-bits-type immutables is on the roadmap to 1.0,
>>> so that should solve your problems.
>>>
>>> Maybe you can solve the CategorizedPoint problem by
>>> using an @enum.
>>>
>>> On Wed, Aug 17, 2016 at 6:48 AM, Oliver Schulz
>>> <oliver...@tu-dortmund.de> wrote:
>>>
>>> Hello Andreas,
>>>
>>> consider iterating over a Dict{Symbol,Int64}:
>>>
>>> |
>>> d =Dict(:foo =>42,:bar =>77)
>>>
>>> forx ind println("$(typeof(x)), $(isbits(x))")end
>>> |
>>>
>>>
>>> During iteration, a lot of stack allocated
>>> "Pair{Symbol,Int64}" objects are created. That
>>> is very bad for performance.
>>>
>>> |
>>> immutable CategorizedPoint
>>> x::Float64
>>> y::Float64
>>> category::Symbol
>>> end
>>> |
>>>
>>>
>>> Also, consider a (hypothetical) data type for
>>> categorized points (with a finite number of
>>> categories - but e

Re: [julia-users] memory allocations when accessing tuple fields of a type

2016-08-19 Thread Erik Schnetter
How is `ElementSize` defined? Creating objects (or making copies) of such
objects might require memory allocation.

-erik

On Fri, Aug 19, 2016 at 8:06 AM, Uri Patish <uripat...@gmail.com> wrote:

> Hi, I have to following types,
>
> typealias ElementSize
>
> typealias Element Array{Float64, 3}
>
> type ElementBuffer
> size::ElementSize
> alloc::ElementSize
> data::Element
> end
>
> The following function is called many times,
>
> function set_size!(bf::ElementBuffer, sz::ElementSize, load_old_data::Bool
> = true)
>   if (bf.alloc[1] < sz[1]) || (bf.alloc[2] < sz[2]) || (bf.alloc[3] <
> sz[3])
> old_data = bf.data
> bf.alloc = map(max, bf.alloc, sz)
> bf.data = Element(bf.alloc)
> if load_old_data
>   load_body!(bf, old_data)
> end
> end
> bf.size = sz
>   nothing
> end
>
> I've seen there is a lot of memory allocation going on, so I've
> benchmarked the former code using Julia's track-allocation=user option. The
> result was surprising,
>
> - function set_size!(bf::ElementBuffer, sz::ElementSize,
> load_old_data::Bool = true)
> 1710097664   if (bf.alloc[1] < sz[1]) || (bf.alloc[2] < sz[2]) ||
> (bf.alloc[3] < sz[3])
> 0 old_data = bf.data
>   2894592 bf.alloc = map(max, bf.alloc, sz)
>  59783056 bf.data = Element(bf.alloc)
> 0 if load_old_data
> 0   load_body!(bf, old_data)
> - end
> - end
> 0 bf.size = sz
> 0   nothing
> - end
>
> After running the code in debug, I've seen that there is an allocation
> happening every time the alloc field is accessed. If I change the field
> type from a tuple to an array this phenomenon disappears. I can't figure
> out why this allocation is happening, any ideas?
>
> Uri
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-19 Thread Erik Schnetter
If Julia symbols are never garbage collected, then they could be directly
implemented as a (bitstype) C pointer to a malloc'd region. There's no need
to involve Julia's memory allocator or garbage collector, or tell the
garbage collector about variables holding symbols.

-erik

On Fri, Aug 19, 2016 at 9:01 AM, Oliver Schulz <oliver.sch...@tu-dortmund.de
> wrote:

> Hi Eric,
>
> nice - this is exactly what I meant. I wonder if Julia Symbols themselves
> could become like this in the future.
>
>
> Cheers,
>
> Oliver
>
>
> On 17.08.2016 18:44, Erik Schnetter wrote:
>
>> See <https://github.com/eschnett/ValueSymbols.jl> for what I had in mind.
>>
>> -erik
>>
>> On Wed, Aug 17, 2016 at 10:32 AM, Yichao Yu <yyc1...@gmail.com
>> <mailto:yyc1...@gmail.com>> wrote:
>>
>>
>>
>> On Wed, Aug 17, 2016 at 10:19 PM, Oliver Schulz
>> <oliver.sch...@tu-dortmund.de <mailto:oliver.sch...@tu-dortmund.de>>
>> wrote:
>>
>> Hi Yichao
>>
>> I get that - I was just curious why (and I guess there's likely
>> a very good reason for it). My intuition would be that a
>> non-GCed interned string type could be represented by a bitstype
>> (i.e. a pointer or index to the string table). So I was
>> surprised that Symbol is an object reference and wanted to
>> understand this a bit better.
>>
>>
>> It's a pointer to managed memory with tag that's all what's needed
>> for it to be a !pointerfree type. There are way more things that we
>> don't GC and it has nothing to do with whether it's a isbits type or
>> not.
>>
>>
>>
>>
>> Cheers,
>>
>> Oliver
>>
>>
>> On Wednesday, August 17, 2016 at 3:35:41 PM UTC+2, Yichao Yu
>> wrote:
>>
>>
>> > PS: Yes, symbols in Julia should be bitstypes.
>>
>> No, it's a object reference and isn't a `isbits()` type.
>>
>>
>>
>> On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean
>> <cedric...@gmail.com> wrote:
>>
>> Good points.
>>
>> Given that symbols have a name which is a string, I
>> don't see how they could be a bits-type unless they
>> just became numbers to index into a global array
>> somewhere.. i.e. a pointer. Stack-allocating
>> non-bits-type immutables is on the roadmap to 1.0,
>> so that should solve your problems.
>>
>> Maybe you can solve the CategorizedPoint problem by
>> using an @enum.
>>
>> On Wed, Aug 17, 2016 at 6:48 AM, Oliver Schulz
>> <oliver...@tu-dortmund.de> wrote:
>>
>> Hello Andreas,
>>
>> consider iterating over a Dict{Symbol,Int64}:
>>
>> d = Dict(:foo => 42, :bar => 77)
>> for x in d println("$(typeof(x)), $(isbits(x))") end
>>
>>
>> During iteration, a lot of stack allocated
>> "Pair{Symbol,Int64}" objects are created. That
>> is very bad for performance.
>>
>> immutable CategorizedPoint
>> x::Float64
>> y::Float64
>> category::Symbol
>> end
>>
>>
>> Also, consider a (hypothetical) data type for
>> categorized points (with a finite number of
>> categories - but extensible, so that Symbol is a
>> better fit than using an enum):
>>
>> immutable CategorizedPoint
>> x::Float64
>> y::Float64
>> category::Symbol
>> end
>>
>> I would definitely want this to be a bits-type,
>> and not have every instance heap-allocated.
>>
>>
>> Cheers,
>>
>> Oliver
>

Re: [julia-users] Fast random access to Dict

2016-08-17 Thread Erik Schnetter
For the insertion stage, the Dict is likely ideal.

For the second stage, a Dict that is based on hashing is a good data
structure. Another data structure worth examining would be a sorted vector
of `(Int64, Float64)` pairs (sorted by the `Int64` keys). Interpolation
search <https://en.wikipedia.org/wiki/Interpolation_search> can then have a
complexity of `O(log log N)`. My personal guess is that a well-optimized
Dict (i.e. with both hash function and hash table size "optimized") will be
faster if we assume close-to-random access, as it will have fewer memory
accesses.

Julia's Dict implementation (see base/dict.jl) has constants that you could
tune (read: play with). There is also a function `Base.rehash!` that you
can call to increase the size of the hash table, which might increase
performance by avoiding hash collisions, if you have sufficient memory.
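For the sorted-vector alternative, an interpolation search can be sketched as follows (illustrative only, not tuned; assumes keys sorted ascending and roughly uniformly distributed):

```Julia
function interp_search(keys::Vector{Int64}, vals::Vector{Float64}, k::Int64)
    lo, hi = 1, length(keys)
    while lo <= hi && keys[lo] <= k <= keys[hi]
        # Guess the position from the key's relative magnitude; the +1 in the
        # denominator avoids division by zero when keys[lo] == keys[hi]
        mid = lo + floor(Int, (k - keys[lo]) / (keys[hi] - keys[lo] + 1) * (hi - lo))
        if keys[mid] == k
            return vals[mid]
        elseif keys[mid] < k
            lo = mid + 1
        else
            hi = mid - 1
        end
    end
    error("key not found")
end

interp_search([10^9, 10^10, 10^11], [1.0, 2.0, 3.0], 10^10)  # 2.0
```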

-erik


On Wed, Aug 17, 2016 at 7:58 PM, 'Greg Plowman' via julia-users <
julia-users@googlegroups.com> wrote:

> I need fast random access to a Dict{Int64,Float64}.
> My application has a first phase in which the Dict is populated, and a
> second phase where it accessed randomly (with no further additions or
> deletions).
> There are about 50,000 to 100,000 entries, with keys in the range 10^9 to
> 10^14.
>
> Firstly is a Dict the most optimal data structure? (presumably Dict is
> faster than SparseVector for random lookup?)
>
> Secondly, is there any "preconditioning" I can do after creating the Dict
> in phase 1 but before phase 2, to optimise the Dict for random access.
> (e.g. sizehint, sorting keys)
>
> I do appreciate access might already be fast and optimal and I should just
> benchmark, but I'm just looking for some theoretical
> insight before benchmarking.
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-17 Thread Erik Schnetter
Since the pointer to the symbol's value does not change over time, you can
simply store the pointer (i.e. essentially the object id) as an integer
value. There is no need to choose an offset.

The advantage of storing the pointer instead of the object id is that you
can use the pointer to re-create the symbol. This is not directly possible
with the object id.
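The round-trip described above can be sketched as follows (essentially what ValueSymbols.jl does; `unsafe_pointer_to_objref` is, as the name says, unsafe, so this is illustrative only):

```Julia
s = :foo
p = pointer_from_objref(s)        # stable address of the interned symbol
i = UInt(p)                       # store as a plain bits integer
s2 = unsafe_pointer_to_objref(convert(Ptr{Void}, i))::Symbol
s2 === s                          # true: recreates the very same symbol
```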

-erik

On Wed, Aug 17, 2016 at 1:00 PM, Jameson <vtjn...@gmail.com> wrote:

> We could change the implementation of Symbol to be an opaque offset
> instead of being an opaque value pointer, but then passing around Symbols
> in generic contexts would require boxing that offset. So six of one, half
> dozen of the other. Fixing the issue that all immutable objects can't be
> allocated inline would probably be the better fix for eliminating the
> problem.
>
>
> On Wednesday, August 17, 2016 at 12:44:21 PM UTC-4, Erik Schnetter wrote:
>>
>> See <https://github.com/eschnett/ValueSymbols.jl> for what I had in mind.
>>
>> -erik
>>
>> On Wed, Aug 17, 2016 at 10:32 AM, Yichao Yu <yyc1...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 10:19 PM, Oliver Schulz <
>>> oliver.sch...@tu-dortmund.de> wrote:
>>>
>>>> Hi Yichao
>>>>
>>>> I get that - I was just curious why (and I guess there's likely a very
>>>> good reason for it). My intuition would be that a non-GCed interned string
>>>> type could be represented by a bitstype (i.e. a pointer or index to the
>>>> string table). So I was surprised that Symbol is an object reference and
>>>> wanted to understand this a bit better.
>>>>
>>>
>>> It's a pointer to managed memory with tag that's all what's needed for
>>> it to be a !pointerfree type. There are way more things that we don't GC
>>> and it has nothing to do with whether it's a isbits type or not.
>>>
>>>
>>>>
>>>>
>>>> Cheers,
>>>>
>>>> Oliver
>>>>
>>>>
>>>> On Wednesday, August 17, 2016 at 3:35:41 PM UTC+2, Yichao Yu wrote:
>>>>>
>>>>>
>>>>> > PS: Yes, symbols in Julia should be bitstypes.
>>>>>
>>>>> No, it's a object reference and isn't a `isbits()` type.
>>>>>
>>>>>
>>>>>>
>>>>>> On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean <cedric...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Good points.
>>>>>>>
>>>>>>> Given that symbols have a name which is a string, I don't see how
>>>>>>> they could be a bits-type unless they just became numbers to index into 
>>>>>>> a
>>>>>>> global array somewhere.. i.e. a pointer. Stack-allocating 
>>>>>>> non-bits-type
>>>>>>> immutables is on the roadmap to 1.0, so that should solve your problems.
>>>>>>>
>>>>>>> Maybe you can solve the CategorizedPoint problem by using an @enum.
>>>>>>>
>>>>>>> On Wed, Aug 17, 2016 at 6:48 AM, Oliver Schulz <
>>>>>>> oliver...@tu-dortmund.de> wrote:
>>>>>>>
>>>>>>>> Hello Andreas,
>>>>>>>>
>>>>>>>> consider iterating over a Dict{Symbol,Int64}:
>>>>>>>>
>>>>>>>> d = Dict(:foo => 42, :bar => 77)
>>>>>>>> for x in d println("$(typeof(x)), $(isbits(x))") end
>>>>>>>>
>>>>>>>>
>>>>>>>> During iteration, a lot of stack allocated "Pair{Symbol,Int64}"
>>>>>>>> objects are created. That is very bad for performance.
>>>>>>>>
>>>>>>>> immutable CategorizedPoint
>>>>>>>> x::Float64
>>>>>>>> y::Float64
>>>>>>>> category::Symbol
>>>>>>>> end
>>>>>>>>
>>>>>>>>
>>>>>>>> Also, consider a (hypothetical) data type for categorized points
>>>>>>>> (with a finite number of categories - but extensible, so that Symbol 
>>>>>>>> is a
>>>>>>>> better fit than using an enum):
>>>>>>>>
>>>>>>>> immutable CategorizedPoint
>>>>>>>> x::Float64
>>>>>>>> y::Float64
>>>>>>>> category::Symbol
>>>>>>>> end
>>>>>>>>
>>>>>>>> I would definitely want this to be a bits-type, and not have every
>>>>>>>> instance heap-allocated.
>>>>>>>>
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> Oliver
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wednesday, August 17, 2016 at 11:44:13 AM UTC+2, Andreas
>>>>>>>> Lobinger wrote:
>>>>>>>>>
>>>>>>>>> Hello colleague,
>>>>>>>>>
>>>>>>>>> On Wednesday, August 17, 2016 at 11:12:51 AM UTC+2, Oliver Schulz
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Sure -  I was assuming that as Symbol (as an interned string) is
>>>>>>>>>> basically just a pointer. So comparisons are O(1), etc. What I'd 
>>>>>>>>>> like to
>>>>>>>>>> understand is, why can't it be a bitstype? Currently, it's not, so 
>>>>>>>>>> Symbol
>>>>>>>>>> cannot be used in a lightweight (i.e. stack-allocated) immutable 
>>>>>>>>>> type. I
>>>>>>>>>> assume there's a good reason for it, I just want to understand why.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Could you make an example how you would like to use Symbol in
>>>>>>>>> lightweight types?
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>>>>>> ca/personal/eschnetter/
>>>>>>
>>>>>
>>>>>
>>>
>>
>>
>> --
>> Erik Schnetter <schnet...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Determining L1 cache size?

2016-08-17 Thread Erik Schnetter
On Wed, Aug 17, 2016 at 1:31 PM, Páll Haraldsson <pall.haralds...@gmail.com>
wrote:

> On Tuesday, August 16, 2016 at 1:41:54 PM UTC, Erik Schnetter wrote:
>>
>> ```Julia
>> Pkg.add("Hwloc")
>> using Hwloc
>> topo = load_topology()
>> print(topo)
>> ```
>>
>> I just see that Hwloc has problems with Julia 0.5; I'll tag a new release.
>>
>
> Great to know about the hwloc package and its wrapper (I was going to post
> some links on dynamically finding out.. or statically for Linux) .
>
> I did notice however:
>
> "Remove non-functional Windows support
> <https://github.com/JuliaParallel/Hwloc.jl/commit/5cc4f3c3bd20237f82a6ad300b5df94c8d2798b5>"
> (just in the wrapper, the base library supports Windows, so you could add
> support.. and that commit helps knowing how).
>
> [I didn't check FreeBSD (or OS X) support, is it compatible with Linux?]
>

Yes, it works on all Unix systems I tried. The code to determine the L1
cache size is the same on Windows as on Unix. The problem is just that I
don't know how to build the hwloc library on Windows, or how to install a
pre-built one. This is not a difficult problem, I just didn't have a need
for this.

I was looking into this myself a while back (and getting cache-line size).
> I guess a default fallback of at least 4 KB, possibly 32 KB, might be ok
> (and 16-byte cache lines, probably safe and lowest/most common size) on
> case your library finds nothing, e.g. on Windows. [BLAS is compiled, I
> think with cache knowledge, maybe there's a way of knowing dynamically with
> what options? I guess not good to rely on, think it's stipped out in
> Julia-lite branch.]
>

Current (and many former) Intel CPUs use an L1 data cache size of 32
kByte, with cache lines of 64 bytes.

Depending on your application, it is safer to over-estimate the cache line
size, for example if you need to prevent accessing the same cache line from
multiple threads.
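As a hypothetical sketch of that advice: when several threads update adjacent slots of an array, spacing the slots one (generously over-estimated) cache line apart avoids false sharing. The constant and layout below are assumptions, not measured values:

```Julia
const CACHE_LINE = 128                        # bytes; deliberately over-estimated
const STRIDE = div(CACHE_LINE, sizeof(Float64))

nthreads = 4
acc = zeros(Float64, STRIDE * nthreads)
# Thread t should only touch acc[(t-1)*STRIDE + 1], so no two threads
# ever write to the same cache line.
```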

-erik

[I was also thinking ahead to AOT compiled Julia.. be aware of that also..
> when cross-compiling Julia and C++ would seem to have to be conservative,
> one reason Julia seems better than C++ (even without AOT).]
>
>
> See here for the safe lowest L1 size you want to support (I've never heard of
> lower than 4 KB data, all shared code+data gone and also where 4 KB):
>
> https://en.wikipedia.org/wiki/List_of_ARM_microarchitectures
>
>
> >is there a (cross-platform) way to get the CPU L1 cache size
>
> You assume you have a [L1] cache..? :) My/the first ARM had none.. then
> came 4 KB (shared, later split and more than L1). Yes, all computers in
> decades have a cache, except for newest fastest TOP500 supercomputer
> (scratchpad, Adapteva's CPU similar) and Tera had just many threads..
>
> --
> Palli.
>
>
>
>>
>> -erik
>>
>>
>> On Tue, Aug 16, 2016 at 5:12 AM, Oliver Schulz <oliver...@tu-dortmund.de>
>> wrote:
>>
>>> Hi,
>>>
>>> is there a (cross-platform) way to get the CPU L1 cache size from Julia?
>>> I'd like to determine the optimal block size(s) for an array algorithm.
>>>
>>>
>>> Cheers,
>>>
>>> Oliver
>>>
>>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Julia eval(text) function

2016-08-17 Thread Erik Schnetter
In Julia, you would not map a string, but map a function instead:

```Julia
function mapcol(f, p0)
    M, N = size(p0)
    y = similar(p0)
    for i in 1:N
        x = view(p0, :, i)
        y[:, i] = f(x)
    end
    y
end
```

You would call it e.g. as `mapcol(v -> v+1, p0)`.

Parsing and evaluating a string is also possible, of course, but is usually
not necessary. Look at the documentation for `parse` if you are interested.
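If the string interface is really wanted (as in the MATLAB original), the string can be parsed once into an anonymous function at global scope and then passed to `mapcol`; a sketch using 0.4/0.5-era `parse` (the input string is hypothetical):

```Julia
text = "x .+ 1"                     # hypothetical user-supplied expression
f = eval(:(x -> $(parse(text))))    # build the function once, at global scope
# mapcol(f, p0)                     # then map it over columns as above
```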

-erik


On Wed, Aug 17, 2016 at 3:34 AM, Nicholas Hausch <nhau...@gmail.com> wrote:

> Hello all. I am attempting to translate a MATLAB file that evaluates a
> string column by column. The function is simple, pasted below:
>
> function y = mapcol(text,p0,p1,p2)
> %MAPCOL
> % mapcol(STRING,A,B,C)  will execute the command
> %   contained in STRING over the columns of the matrix A,
> %   creating one column of the output matrix at a time.
> %   The additional arguments B and C are available to pass parameters.
> % EX:  mapcol('x.*p1',A,b) will do a point-wise
> %  multiplication of each column of A by the vector b.
> %
> [M,N] = size(p0);
> y = [];
> for i=1:N
>x = p0(:,i);
>y(:,i) = eval(text);
> end
>
> In trying to write the equivalent function in Julia, I ran into issues
> regarding eval() only having access to variables at the global level. Is
> this possible to translate?/ How would I go about doing so (preferably
> without too much added complexity, which I noticed in other threads
> regarding the scope of eval)? I realize that this function may not appear
> to be very useful on the surface, but for me it would be worth having as a
> tool to use in other functions.
>
> Any feedback is appreciated. Thanks.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: How to Manipulate each character in of a string using a for loop in Julia ?

2016-08-17 Thread Erik Schnetter
Alternatively, convert the string to an array of characters, which is
mutable:

```Julia
a = Vector{Char}("abcd")
for i in 1:length(a)
    a[i] += 1
end
```
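Since the result is a `Vector{Char}`, `join` turns it back into a string; a self-contained sketch:

```Julia
a = Vector{Char}("abcd")
for i in 1:length(a)
    a[i] += 1
end
join(a)  # "bcde"
```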

-erik


On Wed, Aug 17, 2016 at 3:59 AM, 'Greg Plowman' via julia-users <
julia-users@googlegroups.com> wrote:

> I think Jacob meant:
>
> for char in a
>write(io, char + 1)
> end
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-17 Thread Erik Schnetter
See <https://github.com/eschnett/ValueSymbols.jl> for what I had in mind.

-erik

On Wed, Aug 17, 2016 at 10:32 AM, Yichao Yu <yyc1...@gmail.com> wrote:

>
>
> On Wed, Aug 17, 2016 at 10:19 PM, Oliver Schulz <
> oliver.sch...@tu-dortmund.de> wrote:
>
>> Hi Yichao
>>
>> I get that - I was just curious why (and I guess there's likely a very
>> good reason for it). My intuition would be that a non-GCed interned string
>> type could be represented by a bitstype (i.e. a pointer or index to the
>> string table). So I was surprised that Symbol is an object reference and
>> wanted to understand this a bit better.
>>
>
> It's a pointer to managed memory with tag that's all what's needed for it
> to be a !pointerfree type. There are way more things that we don't GC and
> it has nothing to do with whether it's a isbits type or not.
>
>
>>
>>
>> Cheers,
>>
>> Oliver
>>
>>
>> On Wednesday, August 17, 2016 at 3:35:41 PM UTC+2, Yichao Yu wrote:
>>>
>>>
>>> > PS: Yes, symbols in Julia should be bitstypes.
>>>
>>> No, it's a object reference and isn't a `isbits()` type.
>>>
>>>
>>>>
>>>> On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean <cedric...@gmail.com>
>>>> wrote:
>>>>
>>>>> Good points.
>>>>>
>>>>> Given that symbols have a name which is a string, I don't see how they
>>>>> could be a bits-type unless they just became numbers to index into a 
>>>>> global
>>>>> array somewhere.. i.e. a pointer. Stack-allocating non-bits-type
>>>>> immutables is on the roadmap to 1.0, so that should solve your problems.
>>>>>
>>>>> Maybe you can solve the CategorizedPoint problem by using an @enum.
>>>>>
>>>>> On Wed, Aug 17, 2016 at 6:48 AM, Oliver Schulz <
>>>>> oliver...@tu-dortmund.de> wrote:
>>>>>
>>>>>> Hello Andreas,
>>>>>>
>>>>>> consider iterating over a Dict{Symbol,Int64}:
>>>>>>
>>>>>> d = Dict(:foo => 42, :bar => 77)
>>>>>> for x in d println("$(typeof(x)), $(isbits(x))") end
>>>>>>
>>>>>>
>>>>>> During iteration, a lot of heap-allocated "Pair{Symbol,Int64}"
>>>>>> objects are created. That is very bad for performance.
>>>>>>
>>>>>>
>>>>>> Also, consider a (hypothetical) data type for categorized points
>>>>>> (with a finite number of categories - but extensible, so that Symbol is a
>>>>>> better fit than using an enum):
>>>>>>
>>>>>> immutable CategorizedPoint
>>>>>> x::Float64
>>>>>> y::Float64
>>>>>> category::Symbol
>>>>>> end
>>>>>>
>>>>>> I would definitely want this to be a bits-type, and not have every
>>>>>> instance heap-allocated.
>>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Oliver
>>>>>>
>>>>>>
>>>>>> On Wednesday, August 17, 2016 at 11:44:13 AM UTC+2, Andreas Lobinger
>>>>>> wrote:
>>>>>>>
>>>>>>> Hello colleague,
>>>>>>>
>>>>>>> On Wednesday, August 17, 2016 at 11:12:51 AM UTC+2, Oliver Schulz
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Sure -  I was assuming that as Symbol (as an interned string) is
>>>>>>>> basically just a pointer. So comparisons are O(1), etc. What I'd like 
>>>>>>>> to
>>>>>>>> understand is, why can't it be a bitstype? Currently, it's not, so 
>>>>>>>> Symbol
>>>>>>>> cannot be used in a lightweight (i.e. stack-allocated) immutable type. 
>>>>>>>> I
>>>>>>>> assume there's a good reason for it, I just want to understand why.
>>>>>>>>
>>>>>>>
>>>>>>> Could you make an example how you would like to use Symbol in
>>>>>>> lightweight types?
>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>>>> ca/personal/eschnetter/
>>>>
>>>
>>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Determining L1 cache size?

2016-08-17 Thread Erik Schnetter
On Wed, Aug 17, 2016 at 5:43 AM, Oliver Schulz <oliver.sch...@tu-dortmund.de
> wrote:

> Great - thanks, Eric!
>
> Would it be possible to enable precompilation for SIMD.jl (so that another
> precompiled packages can have it as a dependency)?
>

Sure. Of course, you probably won't see any speed benefit, since most
functions are generated at run time as needed.

Do you want to submit a pull request?

> Uh, and can I ask another question regarding SIMD.jl? Is it possible to
> shift SIMD vectors by one element? Ideally, I'd
> like to shift one element out and another element in (for a filter
> algorithm).
>

Yes! This is possible since yesterday using "vector shuffles" (see the
readme), thanks to @GunnarFarneback.
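SIMD.jl's actual shuffle API is documented in its readme; as a library-independent sketch of the operation (plain tuples standing in for SIMD vectors, `shift_in` being an illustrative name, not an existing function):

```julia
# Shift one element out at the front and one in at the back. With NTuple
# arguments this is the same lane permutation a vector shuffle performs.
shift_in(v::NTuple{4,T}, x::T) where {T} = (v[2], v[3], v[4], x)

shift_in((1, 2, 3, 4), 5)  # (2, 3, 4, 5)
```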

-erik


> On Tuesday, August 16, 2016 at 8:02:30 PM UTC+2, Erik Schnetter wrote:
>>
>> Ooh! Yes, I need to register it. Thanks for the reminder!
>>
>> -erik
>>
>> On Tue, Aug 16, 2016 at 12:01 PM, Oliver Schulz <oliver...@tu-dortmund.de
>> > wrote:
>>
>>> Thanks, Eric!
>>>
>>> I should have known you had something up your sleeve - I was actually
>>> thinking about adding SIMD.jl to the mix, too. :-)
>>>
>>> Can I ask - do you plan to register SIMD? Or is it more like a prototype?
>>>
>>>
>>> Cheers,
>>>
>>> Oliver
>>>
>>>
>>> On Tuesday, August 16, 2016 at 3:41:54 PM UTC+2, Erik Schnetter wrote:
>>>>
>>>> ```Julia
>>>> Pkg.add("Hwloc")
>>>> using Hwloc
>>>> topo = load_topology()
>>>> print(topo)
>>>> ```
>>>>
>>>> I just see that Hwloc has problems with Julia 0.5; I'll tag a new
>>>> release.
>>>>
>>>> -erik
>>>>
>>>>
>>>> On Tue, Aug 16, 2016 at 5:12 AM, Oliver Schulz <
>>>> oliver...@tu-dortmund.de> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> is there a (cross-platform) way to get the CPU L1 cache size from
>>>>> Julia? I'd like to determine the optimal block size(s) for an array
>>>>> algorithm.
>>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Oliver
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>>>> ca/personal/eschnetter/
>>>>
>>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Symbols as Dict keys - efficiency / memory allocation?

2016-08-17 Thread Erik Schnetter
You could create your own immutable type that holds a `Ptr`, and then
define respective `unsafe` functions that convert from `Symbol` to `Ptr`
and back:

```Julia
# Take a symbol, convert to a bitstype:
sym = :car
ptr = Cstring(sym)
isbits(ptr)

# Convert back to a symbol
sym2 = Symbol(unsafe_wrap(String, ptr))
sym2 === sym
```

The call to the `Symbol` constructor is probably expensive, as Julia will
have to traverse its internal set of existing symbols to determine whether
this symbol is new.

I assume that "hiding" symbols in pointers in this way will break e.g.
serialization. If you define your own serialization format (where you
serialize the symbol's name instead of the raw pointer) you should be fine.
There might be other gotchas like this as well.

-erik

PS: Yes, symbols in Julia should be bitstypes.


On Wed, Aug 17, 2016 at 8:26 AM, Cedric St-Jean <cedric.stj...@gmail.com>
wrote:

> Good points.
>
> Given that symbols have a name which is a string, I don't see how they
> could be a bits-type unless they just became numbers to index into a global
> array somewhere.. i.e. a pointer. Stack-allocating non-bits-type
> immutables is on the roadmap to 1.0, so that should solve your problems.
>
> Maybe you can solve the CategorizedPoint problem by using an @enum.
>
> On Wed, Aug 17, 2016 at 6:48 AM, Oliver Schulz <
> oliver.sch...@tu-dortmund.de> wrote:
>
>> Hello Andreas,
>>
>> consider iterating over a Dict{Symbol,Int64}:
>>
>> d = Dict(:foo => 42, :bar => 77)
>> for x in d println("$(typeof(x)), $(isbits(x))") end
>>
>>
>> During iteration, a lot of heap-allocated "Pair{Symbol,Int64}" objects
>> are created. That is very bad for performance.
>>
>>
>> Also, consider a (hypothetical) data type for categorized points (with a
>> finite number of categories - but extensible, so that Symbol is a better
>> fit than using an enum):
>>
>> immutable CategorizedPoint
>> x::Float64
>> y::Float64
>> category::Symbol
>> end
>>
>> I would definitely want this to be a bits-type, and not have every
>> instance heap-allocated.
>>
>>
>> Cheers,
>>
>> Oliver
>>
>>
>> On Wednesday, August 17, 2016 at 11:44:13 AM UTC+2, Andreas Lobinger
>> wrote:
>>>
>>> Hello colleague,
>>>
>>> On Wednesday, August 17, 2016 at 11:12:51 AM UTC+2, Oliver Schulz wrote:
>>>>
>>>> Sure -  I was assuming that as Symbol (as an interned string) is
>>>> basically just a pointer. So comparisons are O(1), etc. What I'd like to
>>>> understand is, why can't it be a bitstype? Currently, it's not, so Symbol
>>>> cannot be used in a lightweight (i.e. stack-allocated) immutable type. I
>>>> assume there's a good reason for it, I just want to understand why.
>>>>
>>>
>>> Could you make an example how you would like to use Symbol in
>>> lightweight types?
>>>
>>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Determining L1 cache size?

2016-08-16 Thread Erik Schnetter
Ooh! Yes, I need to register it. Thanks for the reminder!

-erik

On Tue, Aug 16, 2016 at 12:01 PM, Oliver Schulz <
oliver.sch...@tu-dortmund.de> wrote:

> Thanks, Eric!
>
> I should have known you had something up your sleeve - I was actually
> thinking about adding SIMD.jl to the mix, too. :-)
>
> Can I ask - do you plan to register SIMD? Or is it more like a prototype?
>
>
> Cheers,
>
> Oliver
>
>
> On Tuesday, August 16, 2016 at 3:41:54 PM UTC+2, Erik Schnetter wrote:
>>
>> ```Julia
>> Pkg.add("Hwloc")
>> using Hwloc
>> topo = load_topology()
>> print(topo)
>> ```
>>
>> I just see that Hwloc has problems with Julia 0.5; I'll tag a new release.
>>
>> -erik
>>
>>
>> On Tue, Aug 16, 2016 at 5:12 AM, Oliver Schulz <oliver...@tu-dortmund.de>
>> wrote:
>>
>>> Hi,
>>>
>>> is there a (cross-platform) way to get the CPU L1 cache size from Julia?
>>> I'd like to determine the optimal block size(s) for an array algorithm.
>>>
>>>
>>> Cheers,
>>>
>>> Oliver
>>>
>>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Determining L1 cache size?

2016-08-16 Thread Erik Schnetter
```Julia
Pkg.add("Hwloc")
using Hwloc
topo = load_topology()
print(topo)
```

I just see that Hwloc has problems with Julia 0.5; I'll tag a new release.
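Hwloc.jl's real object tree has its own type and field names; as a self-contained sketch of the traversal one would do over `topo` (with `Obj`, `kind`, and `size` as hypothetical stand-ins for the real API), collecting L1 cache sizes might look like:

```julia
# Walk a topology tree and collect the sizes of all L1 cache objects.
struct Obj
    kind::Symbol          # e.g. :L1Cache, :Core, :Machine (hypothetical names)
    size::Int             # size in bytes; 0 for non-cache objects
    children::Vector{Obj}
end

l1sizes(o::Obj) =
    o.kind == :L1Cache ? [o.size] :
    reduce(vcat, [l1sizes(c) for c in o.children]; init = Int[])

# A mock machine with a single 32 KiB L1 cache:
topo = Obj(:Machine, 0, [Obj(:L1Cache, 32768, [Obj(:Core, 0, Obj[])])])
l1sizes(topo)  # [32768]
```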

-erik


On Tue, Aug 16, 2016 at 5:12 AM, Oliver Schulz <oliver.sch...@tu-dortmund.de
> wrote:

> Hi,
>
> is there a (cross-platform) way to get the CPU L1 cache size from Julia?
> I'd like to determine the optimal block size(s) for an array algorithm.
>
>
> Cheers,
>
> Oliver
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Help please: how to expand dictionary into (local ?) variables with the same name as the keys and the corresponding values

2016-08-14 Thread Erik Schnetter
 end
>>>>
>>>>
>>>> render_template() |> println
>>>>
>>>> Hello from me, ...
>>>>
>>>>
>>>> Olé
>>>> moo
>>>>
>>>>
>>>> duminică, 14 august 2016, 11:20:37 UTC+2, Adrian Salceanu a scris:
>>>>>
>>>>> Thanks
>>>>>
>>>>> Yes, I've thought about a few ways to mitigate some of these issues:
>>>>>
>>>>> 1. in the app I can setup a module (like Render) and evaluate into
>>>>> this module exclusively.
>>>>> Hence, another approach might be to have some helper methods that
>>>>> setup the variables in the module and then eval the template itself inside
>>>>> the module too (must try though). So something in the lines of:
>>>>> set(:foo, "foo")
>>>>> set(:bar, [1, 2, 3])
>>>>> parse_tpl(ejl"""$(foo) and $(bar)""")
>>>>> # all the above gets parsed in Render
>>>>>
>>>>> 2. break the parsing in 2 steps:
>>>>> a. reading the template string and parse it to generated Julia code
>>>>> (as strings) (an array of Julia code lines) - cache it
>>>>> b. (load strings from cache and) eval the code together with the vars
>>>>>
>>>>> ===
>>>>>
>>>>> Another approach (which is how it's done in one of the Ruby templating
>>>>> engine) is to generate a full function definition, whose body parses the
>>>>> template and takes the variables as params. And then eval and execute the
>>>>> function with its params. However, I'm still struggling with the
>>>>> metaprogramming API as for instance parse() chokes on multiple lines, and 
>>>>> I
>>>>> couldn't find a functional equivalent of a `quote ... end` block. But I'm
>>>>> hoping include_string() will do the trick (must test though).
>>>>>
>>>>>
>>>>> sâmbătă, 13 august 2016, 15:20:01 UTC+2, Yichao Yu a scris:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Aug 13, 2016 at 8:06 PM, Adrian Salceanu <
>>>>>> adrian@gmail.com> wrote:
>>>>>>
>>>>>>> That's pretty difficult as my goal is to use embedded Julia as the
>>>>>>> templating language. Similar to Ruby's ERB, ex:
>>>>>>> http://www.stuartellis.eu/articles/erb/
>>>>>>>
>>>>>>> So say in the template I have something like
>>>>>>>
>>>>>>> <% if foo == "bar" %>
>>>>>>> Bar
>>>>>>> <% else %>
>>>>>>> Baz
>>>>>>> <% end %>
>>>>>>>
>>>>>>> The idea is to use Julia itself to parse the code block and Julia
>>>>>>> will raise an error if foo is not defined. So I can't really look it up.
>>>>>>>
>>>>>>
>>>>>> It's ok to use the julia syntax and parser but it's a pretty bad idea
>>>>>> to use the julia runtime to actually evaluating the expression, and
>>>>>> absolutely not by making them reference to local variables.
>>>>>>
>>>>>> For a start you are not allowed to reference local variables by names
>>>>>> anyway.
>>>>>> You also shouldn't allow reference to/overwrite of other local
>>>>>> variables (i.e. the template namespace should be fully isolated and
>>>>>> independent of any scope in the template engine).
>>>>>>
>>>>>> Since you want to eval, it seems that efficiency is not an issue, in
>>>>>> which case you can create an anonymous module and eval/create globals in
>>>>>> that module. This should also be reasonably fast if you are only using 
>>>>>> the
>>>>>> template once.
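A minimal sketch of this anonymous-module approach (the `render` helper below is hypothetical, not an existing API): define each template variable as a global in a fresh module, then evaluate the template expression there.

```julia
# Evaluate a template expression in a throwaway module whose globals are the
# template variables, so the caller's local scope is never touched or consulted.
function render(ex, vars::Dict{Symbol,Any})
    m = Module(:TemplateSandbox)             # fresh, isolated namespace
    Core.eval(m, :(using Base))              # make the standard library visible
    for (name, val) in vars
        Core.eval(m, :(const $name = $val))  # define each variable as a global
    end
    return Core.eval(m, ex)
end

render(:(foo == "bar" ? "Bar" : "Baz"), Dict{Symbol,Any}(:foo => "bar"))  # "Bar"
```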
>>>>>>
>>>>>> If you want to use it multiple time and compile the template, you
>>>>>> should then scan for variable references in the expressions and process 
>>>>>> it
>>>>>> from there.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> I can either do
>>>>>>>
>>>>>>> <% if _[:foo] == "bar" %>
>>>>>>>
>>>>>>> or
>>>>>>>
>>>>>>> <% if _(:foo) == "bar" %>
>>>>>>>
>>>>>>> but it's not that nice.
>>>>>>>
>>>>>>>
>>>>>>> sâmbătă, 13 august 2016, 13:24:18 UTC+2, Yichao Yu a scris:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Aug 13, 2016 at 7:13 PM, Adrian Salceanu <
>>>>>>>> adrian@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> It's for a templating engine. The user creates the document (a
>>>>>>>>> string) which contains interpolated variables placeholders and 
>>>>>>>>> markup. When
>>>>>>>>> the template is rendered, the placeholders must be replaced with the
>>>>>>>>> corresponding values from the dict.
>>>>>>>>>
>>>>>>>>> The lines in the template are eval-ed and so Julia will look for
>>>>>>>>> the variables in the scope. So the vars should be already defined.
>>>>>>>>>
>>>>>>>>
>>>>>>>> You should explicitly look up those variables in the dict instead.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes, ultimately I can force the user to use a dict (or rather a
>>>>>>>>> function for a bit of semantic sugar) - which is preferable from a
>>>>>>>>> performance perspective, but less pretty and more error prone from the
>>>>>>>>> user's perspective.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Number of ones in bitstype

2016-08-13 Thread Erik Schnetter
You can also write
```Julia
firstbit(n::Integer) = signed(n) < 0
```

I hope that LLVM generates the same machine code for this as for Steven's
bit masks quoted below. Using bit masks is more general since it allows you to
test every bit, not just the "first" (sign) bit.
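The sign-bit reading can be spot-checked against the examples from the thread (a quick illustration, not part of the original message):

```julia
# Reinterpret as signed; the value is negative iff the top bit is set.
firstbit(n::Integer) = signed(n) < 0

firstbit(0b10011101)   # true  (leading bit is 1)
firstbit(0b00101101)   # false (leading bit is 0)
```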

-erik


On Sat, Aug 13, 2016 at 11:10 AM, Steven G. Johnson <stevenj@gmail.com>
wrote:

>
>
> On Saturday, August 13, 2016 at 9:22:14 AM UTC-4, jw3126 wrote:
>>
>> Sorry my question was confusing. What I want is a function that behaves
>> as follows on bittypes:
>>
>> *0*0101101 -> 0
>> *0*1110101 -> 0
>> *0*1011011 -> 0
>> *1*0011101 -> 1
>> *1*1010001 -> 1
>> *1*0110111 -> 1
>>
>
> Normally to compute this function you would just check whether the
> bitwise-and with 1000 is nonzero.  e.g.
>
> firstbit(n::Union{UInt8,Int8}) = (n & 0x80) != 0
> firstbit(n::Union{UInt16,Int16}) = (n & 0x8000) != 0
> firstbit(n::Union{UInt32,Int32}) = (n & 0x80000000) != 0
> firstbit(n::Union{UInt64,Int64}) = (n & 0x8000000000000000) != 0
> firstbit(n::Union{UInt128,Int128}) = (n & 0x80000000000000000000000000000000) != 0
> firstbit(n::Integer) = leading_ones(n) > 0
> firstbit(n::Integer) = leading_ones(n) > 0
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-12 Thread Erik Schnetter
On Fri, Aug 12, 2016 at 4:06 AM, Tomas Lycken <tomas.lyc...@gmail.com>
wrote:

> Julia’s parametric types are not generic enough to support this.
>
> In my experience, any time I’ve been inclined to reason like this about
> one of my use cases, someone (usually Tim Holy :P) has been able to show me
> a way that works just fine, without magic, by just using the type system in
> a smarter way than I was able to figure out myself. So, if you’ll forgive
> my delusions of grandeur, I’ll attempt to suggest a way to support the
> cases you’ve mentioned so far!
>
> The code turned out to be quite lengthy, so I put it in a gist instead:
> https://gist.github.com/tlycken/e501e1079d947d699b71941d93b7113e
>
That's similar to the approach I'm currently trying. The added difficulty
is that I don't know ahead of time how many fields I need in the immutable
type, hence I'm passing a single type parameter that is a tuple.

Also, I really want the type generating function to be type-stable. If I
relax this restriction, then I can keep my current code, use a regular (not
generated) function, and call `eval` there to create the type I needed
(properly memoized of course). However, this makes code that first declares
a variable and then uses it quite inefficient; one will always have to put
the variable declaration and its use into separate functions, declaring
variables "further outside", so that Julia's method call selection can
allow type inference to kick in.

-erik


> Basically, what I’ve done is to create a generic type that holds all the
> information you need ((lower bound, stride, end) for each dimension) in
> its type arguments, and then a generator function that lets you specify
> these the same way you do today - with symbols as placeholders to be filled
> in when instantiated.
>
> The generator function won’t be type stable, because the return type will
> depend on values of keyword arguments, but that shouldn’t be a big problem
> since it’s just upon array initialization; I suspect you will do the actual
> work in some kernel function, which then has access to the full type
> information.
>
> // T
>
> On Friday, August 12, 2016 at 7:38:07 AM UTC+2, Andy Ferris wrote:
>
> > I'm encountering the error "eval cannot be used in a generated function"
>> in Julia 0.5 for code that is working in Julia 0.4.
>>
>> I feel your pain, Erik!
>>
>> > AFAICT, it remains possible to do dynamic type generation if you (1)
>> print the code that would define the type to a file, and (2) `include` the
>> file.
>>
>> While this is true, is your example meant to be a generated function? If
>> I need to recreate an existing type by calling `create_type_dynamically`
>> again with the same parameters, it won't be free at run-time. Unless we can
>> get it to recompile, and `isdefined()` is `@pure` or (evaluated by
>> inference)... hmm...
>>
>> > The definition of a generated function is "a pure function which
>> computes a function body based solely upon properties of the input argument
>> types". Since `eval` isn't pure, that would violate the definition of what
>> `@generated` is, and therefore it isn't usable. This isn't supposed to be
>> complicated, what an `@generated` function is actually supposed to be
>> allowed to do is just extremely limited, to make sure the system doesn't
>> fall apart on edge cases.
>>
>> Jameson - I suppose most of us think not of Julia as like the C or C++
>> language standards, but as an implementation. A beautifully hackable
>> implementation, at that! I would assert that the definition of a generated
>> function is written in C++, lisp, Julia, etc. So it seemed to us users that
>> what "pure" meant was in this context was whatever we observed to work:
>> that it should always produce the same output code, and it seemed obvious
>> that the generator itself runs one or more times before the function is
>> called. Side effects that don't cause the compiler to get confused? Well,
>> why not? We can `println` in the generator, and that is rather useful for
>> debugging. In the past months several people have independently come up
>> with the same incredibly useful trick/hack, and there is now a bit of
>> sadness felt that our clever/insane code has died!
>>
>> I'm not saying this is terribly bad. Fixing #265 would be wonderful, and
>> we shouldn't get in the way of that. But Julia has always appeared to be
>> driven by community desires, and I feel these generated types are pretty
>> nifty. Constructive thoughts on any way forward would be greatly
>> appreciated. As would an actual example of this not working as expected in
>> the current implementation of Julia (though I think you may have provided
>> this for planned work on #265??).
>>
> ​
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-12 Thread Erik Schnetter
In what sense does a generated function need to be pure? I was not aware of
this. Does this mean that (1) the generating function needs to be pure (and
return only a quoted expression, not modifying any state), or that (2) the
quoted expression needs to be pure?

The documentation of `@generated` might need to be updated. Purity is
currently not mentioned there; indeed, the example there contains a
`println`. The documentation also argues that side effects are bad since
the generating function might be evaluated multiple times. I understand
this to imply that idempotent side effects are fine.

-erik


On Fri, Aug 12, 2016 at 1:38 AM, Andy Ferris <ferris.a...@gmail.com> wrote:

> > I'm encountering the error "eval cannot be used in a generated function"
> in Julia 0.5 for code that is working in Julia 0.4.
>
> I feel your pain, Erik!
>
> > AFAICT, it remains possible to do dynamic type generation if you (1)
> print the code that would define the type to a file, and (2) `include` the
> file.
>
> While this is true, is your example meant to be a generated function? If I
> need to recreate an existing type by calling `create_type_dynamically`
> again with the same parameters, it won't be free at run-time. Unless we can
> get it to recompile, and `isdefined()` is `@pure` or (evaluated by
> inference)... hmm...
>
> > The definition of a generated function is "a pure function which
> computes a function body based solely upon properties of the input argument
> types". Since `eval` isn't pure, that would violate the definition of what
> `@generated` is, and therefore it isn't usable. This isn't supposed to be
> complicated, what an `@generated` function is actually supposed to be
> allowed to do is just extremely limited, to make sure the system doesn't
> fall apart on edge cases.
>
> Jameson - I suppose most of us think not of Julia as like the C or C++
> language standards, but as an implementation. A beautifully hackable
> implementation, at that! I would assert that the definition of a generated
> function is written in C++, lisp, Julia, etc. So it seemed to us users that
> what "pure" meant was in this context was whatever we observed to work:
> that it should always produce the same output code, and it seemed obvious
> that the generator itself runs one or more times before the function is
> called. Side effects that don't cause the compiler to get confused? Well,
> why not? We can `println` in the generator, and that is rather useful for
> debugging. In the past months several people have independently come up
> with the same incredibly useful trick/hack, and there is now a bit of
> sadness felt that our clever/insane code has died!
>
> I'm not saying this is terribly bad. Fixing #265 would be wonderful, and
> we shouldn't get in the way of that. But Julia has always appeared to be
> driven by community desires, and I feel these generated types are pretty
> nifty. Constructive thoughts on any way forward would be greatly
> appreciated. As would an actual example of this not working as expected in
> the current implementation of Julia (though I think you may have provided
> this for planned work on #265??).
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Number of ones in bitstype

2016-08-11 Thread Erik Schnetter
See also "leading_zeros".
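For the bit patterns discussed in this thread, a quick illustration:

```julia
count_ones(0b1010001)              # 3: number of set bits
leading_zeros(UInt8(0b00101101))   # 2: leading zero bits in an 8-bit value
leading_zeros(0x80000000)          # 0: the first bit of this UInt32 is set
```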

-erik

On Thu, Aug 11, 2016 at 1:52 PM, Tom Breloff <t...@breloff.com> wrote:

> help?> count_ones
> search: count_ones count_zeros
>
>   count_ones(x::Integer) -> Integer
>
>   Number of ones in the binary representation of x.
>
>   julia> count_ones(7)
>   3
>
>
> On Thu, Aug 11, 2016 at 1:48 PM, jw3126 <jw3...@gmail.com> wrote:
>
>> I have a bitstype and want to know the number of ones in its bit
>> representation as well as the first bit. How to accomplish these in a
>> performant way?
>>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-11 Thread Erik Schnetter
On Thu, Aug 11, 2016 at 3:08 AM, Tomas Lycken <tomas.lyc...@gmail.com>
wrote:

> Late to the party, but what’s wrong with writing FastArray{1,10,1,10} (or
> even FastArray{10,10} if you’re OK with implicit 1-based indexing)? It
> seems that those (valid) type arguments could convey just as much
> information as FastArray(1:10, 1:10), and you could then handle any
> special-casing on the size in the constructor. That way you’d only need one
> (generic) type. Or am I missing something important here?
>
The package supports many other cases as well. Both the lower and the upper
index can each be either fixed in the type or left flexible; depending on this,
the optimal way to access array elements is determined. Also, the number of
dimensions can change, so that there is `FastArray(1:10)` as well as
`FastArray(1:10,1:10,1:10,1:10)`. Julia's parametric types are not generic
enough to support this.

One example might be a `(3,n)` array, where the first dimension is known to
have size `3`, and the second can be arbitrary. You write `typealias
array3n FastArray(3,:)`, and later create arrays of this type e.g. via
`array3n{Float64}(:,100)`. At run time, this leads to better (faster) code
when accessing array elements in my use case.
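As an illustration of the benefit (with `Fast3N` a hypothetical stand-in written in current `struct` syntax, not the FastArrays.jl API): when the first dimension is the compile-time constant 3, the index arithmetic uses a literal stride the compiler can fold.

```julia
struct Fast3N{T}
    data::Vector{T}
end

# Column-major indexing with the first dimension fixed at 3: the stride in
# `i + 3*(j - 1)` is a constant, so the compiler can fold and vectorize it.
Base.getindex(a::Fast3N, i::Int, j::Int) = a.data[i + 3*(j - 1)]

a = Fast3N(collect(1.0:12.0))   # behaves like a 3×4 array
a[2, 3]                         # 8.0
```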

-erik


> // T
>
> On Thursday, August 11, 2016 at 12:50:14 AM UTC+2, Erik Schnetter wrote:
>
> The upshot of the discussions seems to be "it won't work in 0.5 because
>> the experts say so, and there are no plans to change that". So I'm going to
>> accept that statement.
>>
>> I think I'll use the following work-around:
>> ```Julia
>> immutable Wrapper{Tag,Types}
>>     data::Types
>> end
>> ```
>> where I use `Tag` as `Val{Symbol}` to generate many distinct types, and
>> `Types` will be a tuple. This allows me to "generate" any immutable type. I
>> won't be able to access fields via the `.fieldname` syntax, but that is not
>> important to me.
>>
>> -erik
>>
>>
>>
>> On Wed, Aug 10, 2016 at 6:09 PM, Tim Holy <tim@gmail.com> wrote:
>>
>>> AFAICT, it remains possible to do dynamic type generation if you (1)
>>> print the
>>> code that would define the type to a file, and (2) `include` the file.
>>>
>>> function create_type_dynamically{T}(::Type{T})
>>>     type_name = string("MyType", T)
>>>     isdefined(Main, Symbol(type_name)) && return nothing
>>>     filename = joinpath(tempdir(), string(T))
>>>     open(filename, "w") do io
>>>         println(io, """
>>>         type $type_name
>>>             val::$T
>>>         end
>>>         """)
>>>     end
>>>     eval(include(filename))
>>>     nothing
>>> end
>>>
>>> Is this somehow less evil than doing it in a generated function?
>>>
>>> Best,
>>> --Tim
>>>
>>> On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
>>> > > Why is it impossible to generate a new type at run time? I surely
>>> can do
>>> >
>>> > this by calling `eval` at module scope.
>>> >
>>> > module scope is compile time != runtime
>>> >
>>> > > Or I could create a type via a macro.
>>> >
>>> > Again, compile time != runtime
>>> >
>>> > > Given this, I can also call `eval` in a function, if I ensure the
>>> >
>>> > function is called only once.
>>> >
>>> > > Note that I've been doing this in Julia 0.4 without any (apparent)
>>> >
>>> > problems.
>>> >
>>> > Sure, I'm just here to tell you why it won't work that way in v0.5
>>> >
>>> > > I'm not defining thousands of types in my code. I define one type,
>>> and
>>> >
>>> > use it all over the place. However, each time my code runs (for
>>> days!), it
>>> > defines a different type, chosen by a set of user parameters. I'm also
>>> not
>>> > adding constraints to type parameters -- the type parameters are just
>>> `Int`
>>> > values.
>>> >
>>> > Right, the basic tradeoff required here is that you just need to
>>> provide a
>>> > convenient way for your user to declare the type at the toplevel that
>>> will
>>> > be used for the run. For example, you can just JIT the code for the
>>> whole
>>> > run at the beginning:
>>> >
>>> > function do_run()
>>> >   return @eval begin
>>> >  lots of function definitions
>>> >  do_work()
>>> >   end
>>> > end

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
The upshot of the discussions seems to be "it won't work in 0.5 because the
experts say so, and there are no plans to change that". So I'm going to
accept that statement.

I think I'll use the following work-around:
```Julia
immutable Wrapper{Tag,Types}
    data::Types
end
```
where I use `Tag` as `Val{Symbol}` to generate many distinct types, and
`Types` will be a tuple. This allows me to "generate" any immutable type. I
won't be able to access fields via the `.fieldname` syntax, but that is not
important to me.
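A hypothetical usage sketch of this work-around (in current `struct` syntax; the tag names and field layout below are examples, not a fixed API):

```julia
struct Wrapper{Tag,Types}
    data::Types
end

# Two distinct "generated" types, distinguished only by the Tag parameter:
w = Wrapper{Val{:mytype},Tuple{Int,Float64}}((1, 2.5))
v = Wrapper{Val{:other},Tuple{Int,Float64}}((1, 2.5))

w.data[2]                 # access "fields" by position instead of by name
typeof(w) === typeof(v)   # false: the tags differ
```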

-erik



On Wed, Aug 10, 2016 at 6:09 PM, Tim Holy <tim.h...@gmail.com> wrote:

> AFAICT, it remains possible to do dynamic type generation if you (1) print
> the
> code that would define the type to a file, and (2) `include` the file.
>
> function create_type_dynamically{T}(::Type{T})
>     type_name = string("MyType", T)
>     isdefined(Main, Symbol(type_name)) && return nothing
>     filename = joinpath(tempdir(), string(T))
>     open(filename, "w") do io
>         println(io, """
>         type $type_name
>             val::$T
>         end
>         """)
>     end
>     eval(include(filename))
>     nothing
> end
>
> Is this somehow less evil than doing it in a generated function?
>
> Best,
> --Tim
>
> On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
> > > Why is it impossible to generate a new type at run time? I surely can
> do
> >
> > this by calling `eval` at module scope.
> >
> > module scope is compile time != runtime
> >
> > > Or I could create a type via a macro.
> >
> > Again, compile time != runtime
> >
> > > Given this, I can also call `eval` in a function, if I ensure the
> >
> > function is called only once.
> >
> > > Note that I've been doing this in Julia 0.4 without any (apparent)
> >
> > problems.
> >
> > Sure, I'm just here to tell you why it won't work that way in v0.5
> >
> > > I'm not defining thousands of types in my code. I define one type, and
> >
> > use it all over the place. However, each time my code runs (for days!),
> it
> > defines a different type, chosen by a set of user parameters. I'm also
> not
> > adding constraints to type parameters -- the type parameters are just
> `Int`
> > values.
> >
> > Right, the basic tradeoff required here is that you just need to provide
> a
> > convenient way for your user to declare the type at the toplevel that
> will
> > be used for the run. For example, you can just JIT the code for the whole
> > run at the beginning:
> >
> > function do_run()
> >   return @eval begin
> >  lots of function definitions
> >  do_work()
> >   end
> > end
> >
> > On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter <schnet...@gmail.com>
> wrote:
> > > On Wed, Aug 10, 2016 at 1:45 PM, Jameson <vtjn...@gmail.com> wrote:
> > >> AFAIK, defining an arbitrary new type at runtime is impossible, sorry.
> > >> In v0.4 it was allowed, because we hoped that people understood not to
> > >> try. See also https://github.com/JuliaLang/julia/issues/16806. Note that
> > >> it is insufficient to "handle" the repeat calling via caching in a Dict
> > >> or similar such mechanism. It must always compute the exact final output
> > >> from the input values alone (e.g. it must truly be const pure).
> > >
> > > The generated function first calculates the name of the type, then checks
> > > (`isdefined`) if this type is defined, and if so, returns it. Otherwise it
> > > is defined and then returned. This corresponds to looking up the type via
> > > `eval(typename)` (a symbol). I assume this is as pure as it gets.
> > >
> > > Why is it impossible to generate a new type at run time? I surely can do
> > > this by calling `eval` at module scope. Or I could create a type via a
> > > macro. Given this, I can also call `eval` in a function, if I ensure the
> > > function is called only once. Note that I've been doing this in Julia 0.4
> > > without any (apparent) problems.
> > >
> > >> Being able to define types with arbitrary constraints in the type
> > >> parameters works OK for toy demos, but it's intentionally rather
> > >> difficult since it causes performance issues at scale. Operations on
> > >> Array are likely to be much faster (including the allocation) than on

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
On Wed, Aug 10, 2016 at 1:45 PM, Jameson <vtjn...@gmail.com> wrote:

> AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In
> v0.4 it was allowed, because we hoped that people understood not to try.
> See also https://github.com/JuliaLang/julia/issues/16806. Note that it is
> insufficient to "handle" the repeat calling via caching in a Dict or
> similar such mechanism. It must always compute the exact final output from
> the input values alone (e.g. it must truly be const pure).
>

The generated function first calculates the name of the type, then checks
(`isdefined`) if this type is defined, and if so, returns it. Otherwise it
is defined and then returned. This corresponds to looking up the type via
`eval(typename)` (a symbol). I assume this is as pure as it gets.
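
For the casual reader, the define-once-and-look-up-by-name pattern described here can be sketched as follows. All names are hypothetical, and the sketch uses current top-level `eval` syntax (with `struct`) rather than a generated function, since that is what later Julia versions require:

```Julia
# Hypothetical sketch: define a type in Main at most once, then return it
# by looking its name up. This mirrors the pattern described above, but
# runs at top level instead of inside a generated function.
function get_or_define(name::Symbol, definition::Expr)
    # Only evaluate the definition if the name is not already bound.
    isdefined(Main, name) || Core.eval(Main, definition)
    return getfield(Main, name)
end

T = get_or_define(:HypotheticalPair, :(struct HypotheticalPair; x::Int; y::Int; end))
p = T(1, 2)
# Calling again is a no-op and returns the same type object.
T2 = get_or_define(:HypotheticalPair, :(struct HypotheticalPair; x::Int; y::Int; end))
```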

Why is it impossible to generate a new type at run time? I surely can do
this by calling `eval` at module scope. Or I could create a type via a
macro. Given this, I can also call `eval` in a function, if I ensure the
function is called only once. Note that I've been doing this in Julia 0.4
without any (apparent) problems.

Being able to define types with arbitrary constraints in the type
> parameters works OK for toy demos, but it's intentionally rather difficult
> since it causes performance issues at scale. Operations on Array are likely
> to be much faster (including the allocation) than on Tuple (due to the cost
> of *not* allocating) unless that Tuple is very small.
>

I'm not defining thousands of types in my code. I define one type, and use
it all over the place. However, each time my code runs (for days!), it
defines a different type, chosen by a set of user parameters. I'm also not
adding constraints to type parameters -- the type parameters are just `Int`
values.

And yes, I am using a mutable `Vector{T}` as underlying storage, that's not
the issue here. The speedup comes from knowing the size of the array ahead
of time, which allows the compiler to optimize indexing expressions. I've
benchmarked it, and examined the generated machine code. There's no doubt
that generating a type is the "right thing" to do in this case.

-erik

On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
>>
>> I want to create a type, and need more flexibility than Julia's `type`
>> definitions offer (see <https://github.com/eschnett/FastArrays.jl>).
>> Currently, I have a function that generates the type, and returns the type.
>>
>> I would like to make this a generated function (as it was in Julia 0.4).
>> The advantage is that this leads to type stability: The generated type only
>> depends on the types of the arguments passed to the function, and Julia would
>> be able to infer the type.
>>
>> In practice, this looks like
>>
>> using FastArrays
>> # A (10x10) fixed-size array
>> typealias Arr2d_10x10 FastArray(1:10, 1:10)
>> a2 = Arr2d_10x10{Float64}(:,:)
>>
>>
>> In principle I'd like to write `FastArray{1:10, 1:10}` (with curly
>> braces), but Julia doesn't offer sufficient flexibility for this. Hence I
>> use a regular function.
>>
>> To generate the type in the function I need to call `eval`. (Yes, I'm
>> aware that the function might be called multiple times, and I'm handling
>> this.)
>>
>> Do you have a suggestion for a different solution?
>>
>> -erik
>>
>>
>> On Wed, Aug 10, 2016 at 11:51 AM, Jameson <vtjn...@gmail.com> wrote:
>>
>>> It is tracking the dynamic scope of the code generator, it doesn't care
>>> about what code you emit. The generator function must not cause any
>>> side-effects and must be entirely computed from the types of the inputs and
>>> not other global state. Over time, these conditions are likely to be more
>>> accurately enforced, as needed to make various optimizations reliable
>>> and/or correct.
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:
>>>>
>>>> I'm encountering the error "eval cannot be used in a generated
>>>> function" in Julia 0.5 for code that is working in Julia 0.4. My question
>>>> is -- what exactly is now disallowed? For example, if a generated function
>>>> `f` calls another (non-generated) function `g`, can `g` then call `eval`?
>>>> Does the word "in" here refer to the code that is generated by the
>>>> generated function, or does it refer to the dynamical scope of the code
>>>> generation state of the generated function?
>>>>
>>>> To avoid the error I have to redesign my code, and I'd like to know
>>>> ahead of time 

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
I want to create a type, and need more flexibility than Julia's `type`
definitions offer (see <https://github.com/eschnett/FastArrays.jl>).
Currently, I have a function that generates the type, and returns the type.

I would like to make this a generated function (as it was in Julia 0.4).
The advantage is that this leads to type stability: The generated type only
depends on the types of the arguments passed to the function, and Julia would
be able to infer the type.

In practice, this looks like

using FastArrays
# A (10x10) fixed-size array
typealias Arr2d_10x10 FastArray(1:10, 1:10)
a2 = Arr2d_10x10{Float64}(:,:)


In principle I'd like to write `FastArray{1:10, 1:10}` (with curly braces),
but Julia doesn't offer sufficient flexibility for this. Hence I use a
regular function.

To generate the type in the function I need to call `eval`. (Yes, I'm aware
that the function might be called multiple times, and I'm handling this.)

Do you have a suggestion for a different solution?

-erik


On Wed, Aug 10, 2016 at 11:51 AM, Jameson <vtjn...@gmail.com> wrote:

> It is tracking the dynamic scope of the code generator, it doesn't care
> about what code you emit. The generator function must not cause any
> side-effects and must be entirely computed from the types of the inputs and
> not other global state. Over time, these conditions are likely to be more
> accurately enforced, as needed to make various optimizations reliable
> and/or correct.
>
>
>
> On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:
>>
>> I'm encountering the error "eval cannot be used in a generated function"
>> in Julia 0.5 for code that is working in Julia 0.4. My question is -- what
>> exactly is now disallowed? For example, if a generated function `f` calls
>> another (non-generated) function `g`, can `g` then call `eval`? Does the
>> word "in" here refer to the code that is generated by the generated
>> function, or does it refer to the dynamical scope of the code generation
>> state of the generated function?
>>
>> To avoid the error I have to redesign my code, and I'd like to know ahead
>> of time what to avoid. A Google search only turned up the C file within
>> Julia that emits the respective error message, as well as the Travis build
>> log for my package.
>>
>> -erik
>>
>> --
>> Erik Schnetter <schnet...@gmail.com> http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Adding tuples

2016-08-10 Thread Erik Schnetter
The built-in type `CartesianIndex` supports adding and subtracting, and
presumably also multiplication. It is implemented very efficiently, based
on tuples.

Otherwise, to generate efficient code, you might have to make use of
"generated functions". These are similar to macros, but they know about the
types upon which they act, and thus know the value of `N`. This is a bit
low-level, so I'd use this only if (a) there is no other package
available, and (b) you have examined Julia's performance and found it
lacking.

I would avoid overloading operators for `NTuple`, and instead use a new
immutable type, since overloading operations for Julia's tuples can have
unintended side effects.
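
A minimal sketch of both suggestions (the wrapper type name is made up, and the thread-era spelling would be `immutable` rather than `struct`):

```Julia
# CartesianIndex already supports arithmetic, implemented on tuples:
ci = CartesianIndex(1, 2) + CartesianIndex(3, 4)   # CartesianIndex(4, 6)

# A small wrapper type instead of overloading operators for plain NTuples:
struct Vec3                       # hypothetical name
    d::NTuple{3,Int}
end
Base.:+(a::Vec3, b::Vec3) = Vec3(map(+, a.d, b.d))

v = Vec3((1, 2, 3)) + Vec3((10, 20, 30))   # Vec3((11, 22, 33))
```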

-erik


On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
julia-users@googlegroups.com> wrote:

> Does anyone know an efficient way to add NTuples in Julia?
>
> I can do it using recursive functions, but for various reasons this is not
> efficient in my context. I really miss something like tuple(a[i] + b[i] for
> i in 1:N) to create the resulting tuple all in one go (here a and b would
> be tuples).
>
> The compiler doesn't do badly with recursive functions for handling tuples
> in very straightforward situations, but for example, if I want to create an
> immutable type based on a tuple the compiler doesn't seem to be able to
> handle the necessary optimisations. At least, that is what I infer from the
> timings. Consider
>
> immutable bill{N}
>d::NTuple{N, Int}
> end
>
> and I want to add two bill's together. If I have to add the tuples
> themselves using recursive functions, then I no longer seem to be able to
> do something like:
>
> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose elements
> are of type bill.
>
> I know how to handle tuples via arrays, but for efficiency reasons I
> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).
>
> Bill.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
I'm encountering the error "eval cannot be used in a generated function" in
Julia 0.5 for code that is working in Julia 0.4. My question is -- what
exactly is now disallowed? For example, if a generated function `f` calls
another (non-generated) function `g`, can `g` then call `eval`? Does the
word "in" here refer to the code that is generated by the generated
function, or does it refer to the dynamical scope of the code generation
state of the generated function?

To avoid the error I have to redesign my code, and I'd like to know ahead
of time what to avoid. A Google search only turned up the C file within
Julia that emits the respective error message, as well as the Travis build
log for my package.

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] type parametrized in vector length with vector container field

2016-08-05 Thread Erik Schnetter
Pieterjan

Depending on your needs, <https://github.com/eschnett/FlexibleArrays.jl>
might also be a solution. It supports arrays where the size is known at
compile time.

-erik

On Fri, Aug 5, 2016 at 6:20 AM, Pieterjan Robbe <pieterjanro...@gmail.com>
wrote:

> According to http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-fields-with-abstract-containers,
> this is the way to go for types with a container field:
>
> type MyVector{T<:AbstractFloat,V<:AbstractVector}
> v::V
>
> MyVector(v::AbstractVector{T}) = new(v)
> end
> MyVector(v::AbstractVector) = MyVector{eltype(v),typeof(v)}(v)
>
> This is indeed type-stable:
>
> @code_warntype MyVector(0:0.1:1)
>
> However, If I also want to parametrize my type in the vector length L like
>
> type MySecondVector{L,T<:AbstractFloat,V<:AbstractVector}
> v::V
>
> MySecondVector(v::AbstractVector{T}) = new(v)
> end
> MySecondVector(v::AbstractVector) = MySecondVector{length(v),
> eltype(v),typeof(v)}(v)
>
> This is no longer type-stable:
>
> @code_warntype MySecondVector(0:0.1:1)
>
> Can I enforce the length as a parameter so that this is known to the
> compiler or should I look into FixedSizeArrays instead?
>
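
To see why the length parameter breaks inference (a sketch in current syntax, with `MyVec` as a stand-in name): the parameter `L` depends on the runtime value `length(v)`, so the compiler cannot name a concrete return type for the outer constructor.

```Julia
# Stand-in for MySecondVector: the vector length becomes a type parameter.
struct MyVec{L,T}
    v::Vector{T}
end
MyVec(v::Vector) = MyVec{length(v), eltype(v)}(v)   # L known only at runtime

# The inferred return type is not a concrete MyVec{L,Float64}:
rt = Base.return_types(MyVec, Tuple{Vector{Float64}})[1]
```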



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Is there a way to reclaim memory?

2016-08-02 Thread Erik Schnetter
Can you make a copy of the vector, and assign this copy to the original
variable? As in:

```Julia
type Node
edges::Vector{X}
end

...
node.edges = copy(node.edges)
...
```

-erik



On Tue, Aug 2, 2016 at 1:56 PM, Seth <catch...@bromberger.com> wrote:

>
>
> On Tuesday, August 2, 2016 at 10:41:26 AM UTC-7, Seth wrote:
>>
>> So, a 62.5 million edge LightGraphs.Graph should only take about 1.25
>> gigs of memory, all-in. However, because the edges are being added to
>> vectors within the datastructure, and each vector is doing its own memory
>> allocation, we wind up with the Julia process taking ~6.5 gigs.
>>
>> Given that we don't know the degree distribution of the graph before we
>> load it (and therefore can't use sizehint! effectively), is there any way
>> to reclaim the allocated-but-unused memory from all these vectors, with the
>> accepted large performance hit that would come should we decide to add
>> another edge?
>>
>
> Things that I've tried that *don't* work:
>
> - explicitly calling sizehint! after the vectors have been created and
> populated
> - calling resize! after the vectors have been created and populated
>
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Fused-Multiply-Add function?

2016-07-24 Thread Erik Schnetter
fma is a very specific operation defined by an IEEE standard. BLAS axpy
does not implement it.

If you extend the fma definition to linear algebra, then you'd arrive at a
definition such as "no intermediate result will be rounded". The BLAS
definition of axpy does not state that, and it's unlikely that any
real-world implementation would do this.

I'd thus say that axpy is equivalent to muladd, without any requirement on
intermediate results, or the order in which things are evaluated.
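
A small illustration of why the single-rounding requirement matters, with values chosen so that the low bits differ:

```Julia
# fma rounds once; x*y + z rounds the intermediate product first.
x = 1.0 + 2.0^-27
plain = x * x - 1.0        # product rounded to 1 + 2^-26 before subtracting
fused = fma(x, x, -1.0)    # exact product, single final rounding
# plain == 2.0^-26, while fused also keeps the low-order term 2.0^-54
```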

-erik



On Sun, Jul 24, 2016 at 12:10 PM, Stefan Karpinski <ste...@karpinski.org>
wrote:

> Maybe it does. It's unclear to me whether operations like BLAS axpy
> implement fma or muladd.
>
> On Sunday, July 24, 2016, Oliver Schulz <oliver.sch...@tu-dortmund.de>
> wrote:
>
>> Uh, sorry, I don't quite get that. I thought muladd was basically that
>> same as fma - with the difference that fma has to do it without rounding
>> the intermediate result, while muladd is free to do whatever is most
>> efficient on the given hardware? But why wouldn't muladd! make sense
>> for collections then?
>>
>> On Saturday, July 23, 2016 at 8:47:57 PM UTC+2, Stefan Karpinski wrote:
>>>
>>> I would file an issue about having an fma! function. Since it's only
>>> sensible for matrices (as far as I can tell), there doesn't seem to be a
>>> corresponding need for muladd! since the fusion is at the collection level.
>>>
>>> On Sat, Jul 23, 2016 at 2:31 PM, Oliver Schulz <oliver...@tu-dortmund.de
>>> > wrote:
>>>
>>>> Oh, sure - I was actually mainly thinking about matrices. I was looking
>>>> for something like fma! in ArrayFire (without success), and then wondered
>>>> what it might be called in Base.
>>>>
>>>> On Saturday, July 23, 2016 at 8:04:31 PM UTC+2, Stefan Karpinski wrote:
>>>>>
>>>>> They don't make sense for scalars but they could be added for matrices.
>>>>>
>>>>> On Sat, Jul 23, 2016 at 1:54 PM, Oliver Schulz <
>>>>> oliver...@tu-dortmund.de> wrote:
>>>>>
>>>>>> Hi Stefan,
>>>>>>
>>>>>> sorry, yes, I had somehow overlooked fma. Mainly I was looking for an
>>>>>> in-place version though, like fma! and muladd!. Is there a reason those
>>>>>> don't exist?
>>>>>>
>>>>>> On Saturday, July 23, 2016 at 7:50:30 PM UTC+2, Stefan Karpinski
>>>>>> wrote:
>>>>>>>
>>>>>>> Yes: https://github.com/JuliaLang/julia/issues/6330. In short,
>>>>>>> there are both fma and muladd operations with different purposes:
>>>>>>>
>>>>>>> help?> fma
>>>>>>>
>>>>>>>
>>>>>>> search: fma findmax @fastmath UniformScaling
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>   fma(x, y, z)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>   Computes x*y+z without rounding the intermediate result x*y. On
>>>>>>> some systems this is significantly more expensive than x*y+z. fma is 
>>>>>>> used
>>>>>>> to improve accuracy in certain algorithms. See muladd.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> help?> muladd
>>>>>>>
>>>>>>>
>>>>>>> search: muladd
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>   muladd(x, y, z)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>   Combined multiply-add, computes x*y+z in an efficient manner. This
>>>>>>> may on some systems be equivalent to x*y+z, or to fma(x,y,z). muladd is
>>>>>>> used to improve performance. See fma.
>>>>>>>
>>>>>>> On Sat, Jul 23, 2016 at 1:40 PM, Oliver Schulz <
>>>>>>> oliver...@tu-dortmund.de> wrote:
>>>>>>>
>>>>>>>> Does Julia have a standardized FMA (Fused-Multiply-Add) function?
>>>>>>>> Like fma(A, B, factor) and fma!(dest, A, B, factor), so that that GPU
>>>>>>>> libraries, etc. can provide optimized versions?
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Mapping a array of tuples

2016-07-17 Thread Erik Schnetter
Your anonymous function expects two scalar arguments, not one tuple
argument. You need to write instead:

```Julia
map(t -> ((x,y) = t; x+y), [(1,1),(2,2)])
```

This is a function accepting one tuple `t`, which is then decomposed into
its elements `x,y` via an assignment.
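
For reference, two equivalent spellings (the second uses the argument-destructuring syntax available in current Julia):

```Julia
pairs = [(1, 1), (2, 2)]
s1 = map(t -> t[1] + t[2], pairs)      # index into each tuple
s2 = map(((x, y),) -> x + y, pairs)    # destructure the tuple argument
# both give [2, 4]
```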

-erik



On Sun, Jul 17, 2016 at 9:00 AM, Jesse Jaanila <jessejaan...@gmail.com>
wrote:

> Hi,
>
> I'm trying to map a array of tuples.
>
> julia> map((x,y) -> x+y, [(1,1),(2,2)])
> ERROR: wrong number of arguments
>  in anonymous at none:1
>  in map at abstractarray.jl:1305
>
> This doesn't work. I expect the result to be
> [2,4]
>
> Can this kind of mapping be achieved or the map function only defined for
> one dimensional domain i.e. Array{T, 1}?
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: A story on unchecked integer operations

2016-07-13 Thread Erik Schnetter
Someone asked for more details; I'm replying publicly.

The code decomposes the domain into many small "components" which are
characterized by a tuple of 4 (small) integers. We want to use these
integers as key in a C++ map (i.e. Dict). To simplify things, we calculate
a single, unique integer key from these 4 integers, very much like
linearizing an array index. As the maximum value of all 4 integers is
known, this is straightforward. Only a few components need to be handled by
a particular node, so this map is very sparsely populated and efficient.

In this case, the maximum key exceeded the capacity of a C++ int. The
solution was to use a 64-bit integer instead. (Another option would be to
use a C++ tuple, but tuples did not exist in C++ in 2002 when the code was
originally developed.)
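
A sketch of such a keying scheme (the per-dimension bound is made up): four bounded indices are linearized into one integer, exactly like a linear array index, and with 32-bit arithmetic the largest keys silently wrap.

```Julia
# Linearize four bounded indices into one key, as for a linear array index.
linearkey(i, j, k, l, n) = ((i * n + j) * n + k) * n + l

n = 2000                                   # hypothetical per-dimension bound
k64 = linearkey(1999, 1999, 1999, 1999, n)             # ~1.6e13, fits in Int64
k32 = linearkey(Int32(1999), Int32(1999), Int32(1999),
                Int32(1999), Int32(n))                 # wraps silently in Int32
# Int64(k32) != k64 -- the 32-bit key is wrong, with no error raised
```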

I realize now that Julia actually would have prevented this issue on Blue
Waters, since this is a 64-bit architecture, and Julia uses 64-bit integers
there by default. On the other hand, a simulation on a Blue Gene/Q would
have exhibited the same issue, and this is a 32-bit architecture

-erik


On Wed, Jul 13, 2016 at 4:29 PM, Erik Schnetter <schnet...@gmail.com> wrote:

> I'm hoping for a system where integer operations are checked by default.
> If this becomes expensive in a particular function / module, then one can
> there (1) perform respective checks when the function is entered, (2)
> disable further checks inside the function, and (3) (for bonus points) use
> static analysis to prove that the manual checks prevent accidental overflow.
>
> Except for the bonus part, that's how array indices are currently checked,
> and that's how we handle IEEE floating-point conformance.
>
> Anyway -- adding one item to my to-do list: Experiment with a command-line
> flag to Julia that switches on checked integer operations.
>
> -erik
>
> On Wed, Jul 13, 2016 at 4:21 PM, John Myles White <
> johnmyleswh...@gmail.com> wrote:
>
>> This seems more like a use case for static analysis that checked
>> operations to me. The problem IIUC isn't about the usage of
>> high-performance code that is unsafe, but rather that the system was
>> nominally tested, but tested in an imperfect way that didn't cover the
>> failure cases. If you were rewriting this in Rust, it's easy for me to
>> imagine that you would use checked arithmetic at the start until 5 years
>> have passed, then you would decide it's safe and turn off the checks -- all
>> because you had never really tested the obscure cases that only a static
>> analyzer is likely to catch.
>>
>>  -- John
>>
>> On Wednesday, July 13, 2016 at 1:07:59 PM UTC-7, Erik Schnetter wrote:
>>>
>>> We have this code <https://einsteintoolkit.org> that simulates black
>>> holes and other astrophysical systems. It's written in C++ (and a few other
>>> languages). I obviously intend to rewrite it in Julia, but that's not the
>>> point here.
>>>
>>> One of the core functions allows evaluating (interpolating) the value of
>>> a function at any point in the domain. That code was originally written in
>>> 2002, and has been used and optimized and tested extensively. So you'd
>>> think it's reasonably bug-free...
>>>
>>> Today, a colleague ran this code on Blue Waters, using 32,000 nodes, and
>>> with some other parameters set to higher resolutions than before. Given the
>>> subject of the email, you can guess what happened.
>>>
>>> Luckily, a debugging routine was active, and caught an inconsistency (an
>>> inconsistent domain decomposition), alerting us to the problem.
>>>
>>> Would Julia have prevented this? I know that everybody wants speed --
>>> and if you are using 32,000 nodes, you want a lot of speed -- but the idea
>>> of bugs that only appear when you are pushing the limits makes me
>>> uncomfortable. So, no -- Julia's unchecked integer arithmetic would not
>>> have caught this bug either.
>>>
>>> Score: Julia vs. C++, both zero.
>>>
>>> -erik
>>>
>>> --
>>> Erik Schnetter <schn...@gmail.com>
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>
>
>
> --
> Erik Schnetter <schnet...@gmail.com>
> http://www.perimeterinstitute.ca/personal/eschnetter/
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: A story on unchecked integer operations

2016-07-13 Thread Erik Schnetter
I'm hoping for a system where integer operations are checked by default. If
this becomes expensive in a particular function / module, then one can
there (1) perform respective checks when the function is entered, (2)
disable further checks inside the function, and (3) (for bonus points) use
static analysis to prove that the manual checks prevent accidental overflow.
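
Today this is available opt-in through the `Base.checked_*` functions, which throw an `OverflowError` instead of wrapping:

```Julia
a = typemax(Int)
wrapped = a + 1                # default arithmetic silently wraps around
overflowed = try
    Base.checked_add(a, 1)     # the checked version raises OverflowError
    false
catch err
    err isa OverflowError
end
```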

Except for the bonus part, that's how array indices are currently checked,
and that's how we handle IEEE floating-point conformance.

Anyway -- adding one item to my to-do list: Experiment with a command-line
flag to Julia that switches on checked integer operations.

-erik

On Wed, Jul 13, 2016 at 4:21 PM, John Myles White <johnmyleswh...@gmail.com>
wrote:

> This seems more like a use case for static analysis that checked
> operations to me. The problem IIUC isn't about the usage of
> high-performance code that is unsafe, but rather that the system was
> nominally tested, but tested in an imperfect way that didn't cover the
> failure cases. If you were rewriting this in Rust, it's easy for me to
> imagine that you would use checked arithmetic at the start until 5 years
> have passed, then you would decide it's safe and turn off the checks -- all
> because you had never really tested the obscure cases that only a static
> analyzer is likely to catch.
>
>  -- John
>
> On Wednesday, July 13, 2016 at 1:07:59 PM UTC-7, Erik Schnetter wrote:
>>
>> We have this code <https://einsteintoolkit.org> that simulates black
>> holes and other astrophysical systems. It's written in C++ (and a few other
>> languages). I obviously intend to rewrite it in Julia, but that's not the
>> point here.
>>
>> One of the core functions allows evaluating (interpolating) the value of
>> a function at any point in the domain. That code was originally written in
>> 2002, and has been used and optimized and tested extensively. So you'd
>> think it's reasonably bug-free...
>>
>> Today, a colleague ran this code on Blue Waters, using 32,000 nodes, and
>> with some other parameters set to higher resolutions than before. Given the
>> subject of the email, you can guess what happened.
>>
>> Luckily, a debugging routine was active, and caught an inconsistency (an
>> inconsistent domain decomposition), alerting us to the problem.
>>
>> Would Julia have prevented this? I know that everybody wants speed -- and
>> if you are using 32,000 nodes, you want a lot of speed -- but the idea of
>> bugs that only appear when you are pushing the limits makes me
>> uncomfortable. So, no -- Julia's unchecked integer arithmetic would not
>> have caught this bug either.
>>
>> Score: Julia vs. C++, both zero.
>>
>> -erik
>>
>> --
>> Erik Schnetter <schn...@gmail.com>
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] A story on unchecked integer operations

2016-07-13 Thread Erik Schnetter
We have this code <https://einsteintoolkit.org> that simulates black holes
and other astrophysical systems. It's written in C++ (and a few other
languages). I obviously intend to rewrite it in Julia, but that's not the
point here.

One of the core functions allows evaluating (interpolating) the value of a
function at any point in the domain. That code was originally written in
2002, and has been used and optimized and tested extensively. So you'd
think it's reasonably bug-free...

Today, a colleague ran this code on Blue Waters, using 32,000 nodes, and
with some other parameters set to higher resolutions than before. Given the
subject of the email, you can guess what happened.

Luckily, a debugging routine was active, and caught an inconsistency (an
inconsistent domain decomposition), alerting us to the problem.

Would Julia have prevented this? I know that everybody wants speed -- and
if you are using 32,000 nodes, you want a lot of speed -- but the idea of
bugs that only appear when you are pushing the limits makes me
uncomfortable. So, no -- Julia's unchecked integer arithmetic would not
have caught this bug either.

Score: Julia vs. C++, both zero.

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] How do I plot into an ITerm2 window?

2016-07-05 Thread Erik Schnetter
How do I plot directly into an ITerm2 window?

ITerm2 (an OS X equivalent of xterm) supports displaying graphics right in
the terminal window. I've heard that the "TerminalExtensions" package helps
support this. My question is now -- how do I use this? Is there e.g. a
backend for the "Plots" package?

I have found some email discussions on the web, but these contain just
cryptic hints. Is there maybe an example or a tutorial?

I know about UnicodePlots, which is already quite amazing, but now I'm
looking for something even cooler.

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Does it make sense to loop over types?

2016-06-10 Thread Erik Schnetter
In your case, there's no need to use a macro. There's also no need to use
`Val`; you can just pass `k` directly.

If you want to loop over functions, then you can put the functions into an
array, and loop over it:

```Julia
function mywork(x, y, z)
  for k in 1:10
procedure(k, x, y, z)
  end
end
```

or

```Julia
function mywork(x, y, z)
  for p in [proc1, proc2, proc3, ...]
p(x, y, z)
  end
end
```
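
A concrete, runnable version of the second pattern, with stand-in functions:

```Julia
# Stand-ins for the user's procedures:
proc1(x) = x + 1
proc2(x) = 2x
proc3(x) = x^2

function runall(x)
    results = Int[]
    for p in (proc1, proc2, proc3)   # a tuple of functions works as well
        push!(results, p(x))
    end
    return results
end
```

For example, `runall(3)` applies each function in turn to `3`.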

-erik


On Fri, Jun 10, 2016 at 11:39 PM, Po Choi <vegetableb...@gmail.com> wrote:

> Thanks for pointing out the mistakes.
>
> I want to loop over the data types because I want to loop over functions.
> I need to do something like this:
> function mywork(x, y, z)
>   procedure1(x, y, z)
>   procedure2(x, y, z)
>   procedure3(x, y, z)
>   #...
>   procedure8(x, y, z)
>   procedure9(x, y, z)
>   procedure10(x, y, z)
> end
>
> I feel it may be silly to write out all the procedurek(x, y, z)
> Then I am thinking about the following:
> function mywork(x, y, z)
>   for k in 1:10
> procedure(::Val{k}, x, y, z)
>   end
> end
>
> I have not seen anyone doing this before. So I am not sure I am doing
> something right.
> That's why I am asking this question.
>
> I remember people like to use macro to define functions for different
> types.
> In my case, should I use macro? Or am I fine to code as above?
>
>
> On Friday, June 10, 2016 at 7:34:09 PM UTC-7, Erik Schnetter wrote:
>>
>> Your code won't work. `Val{k}` is a type, which is an object in Julia.
>> You are passing this object to `foo`. Thus in your declaration of `foo`,
>> you need to list the type of `Val`, not just `Val` -- types have types as
>> well:
>> ```
>> foo(::Type{Val{1}}) = 1
>> ```
>>
>> Of course you know that using `Val` in this way is nonsensical in a real
>> program. I understand that you know this, as you're purposefully
>> experimenting with Julia, but I'd still like to point it out for the casual
>> reader of this example.
>>
>> Whether you encounter "performance issues" or not depends on what
>> performance you need. If you compare this code to simple arithmetic
>> operations (adding numbers), then it's slower. If you compare it to sending
>> data across the network or accessing the disk, then it's faster.
>>
>> I assume that calling `foo` in the loop requires a hash table lookup at
>> run time, and likely some memory allocation.
>>
>> -erik
>>
>>
>> On Fri, Jun 10, 2016 at 9:40 PM, Po Choi <vegeta...@gmail.com> wrote:
>>
>>> Oops! I accidentally hit the post button! So the post is not complete!
>>>
>>> It is an example:
>>> foo(::Val{1}) = 1
>>> foo(::Val{2}) = 2
>>> foo(::Val{2}) = 3
>>>
>>> function bar()
>>>   s = 0
>>>   for t in Datatype[Val{k} for k in 1:3]
>>> s += foo(t)
>>>   end
>>> end
>>>
>>> Will there be any performance issue if I loop over types?
>>> I am still trying to understand how the multiple-dispatch works.
>>> Sometimes I am confused!
>>>
>>>
>>> On Friday, June 10, 2016 at 6:37:31 PM UTC-7, Po Choi wrote:
>>>>
>>>>
>>>>
>>>> foo(::Val{1}) = 1
>>>> foo(::Val{2}) = 2
>>>> foo(::Val{2}) = 3
>>>>
>>>> function bar()
>>>>
>>>> for t in Datatype[Val{k} for k in 1:3]
>>>>
>>>>   end
>>>> end
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com>
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] why the numerical result is different (RK45 julia and matlab)?

2016-06-10 Thread Erik Schnetter
When you write `(...)^1/8`, you probably mean `(...)^(1/8)` instead.
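
In Julia, `^` binds tighter than `/`, so the two spellings parse differently:

```Julia
a = 2^1/8      # parsed as (2^1)/8, i.e. 0.25
b = 2^(1/8)    # the eighth root of 2, roughly 1.0905
```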

-erik

On Fri, Jun 10, 2016 at 10:29 PM, <jmarcellopere...@ufpi.edu.br> wrote:

> this is a test of a differential equation solved with Runge-Kutta 45:
> f(x,y) = (-5*x - y/5)^1/8 + 10
>
>
> <https://lh3.googleusercontent.com/-D6U4d5pN0pw/V1t3MaV3JpI/ACU/-E_ceVhrTxkT3SAgmLUy5nHDhRJTIikSgCLcB/s1600/rk45.png>
>
> why the numerical result is different? I used :
>
> function Rk_JL()
>  f(x,y)= (-5*x - y/5)^1/8 + 10
>  tspan = 0:0.001:n
>  y0 = [0.0, 1.0]
>  return ODE.ode45(f, y0,tspan);end
>
>
> and
>
>
> function [X1,Y1] = RK_M()
>  f = @(x,y) (-5*x - y/5)^1/8 + 10;
>  tspan = 0:0.001:n;
>  y0 = 1
>  [X1,Y1]= ode45(f,tspan,1);end
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Does it make sense to loop over types?

2016-06-10 Thread Erik Schnetter
Your code won't work. `Val{k}` is a type, which is an object in Julia. You
are passing this object to `foo`. Thus in your declaration of `foo`, you
need to list the type of `Val`, not just `Val` -- types have types as well:
```
foo(::Type{Val{1}}) = 1
```

Of course you know that using `Val` in this way is nonsensical in a real
program. I understand that you know this, as you're purposefully
experimenting with Julia, but I'd still like to point it out for the casual
reader of this example.

Whether you encounter "performance issues" or not depends on what
performance you need. If you compare this code to simple arithmetic
operations (adding numbers), then it's slower. If you compare it to sending
data across the network or accessing the disk, then it's faster.

I assume that calling `foo` in the loop requires a hash table lookup at run
time, and likely some memory allocation.
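A runnable sketch of the point above (my addition, not from the thread): since `Val{k}` is itself a type, dispatching on it requires `::Type{Val{N}}` in the method signature.

```julia
# Dispatching on the *type object* Val{N} requires ::Type{Val{N}}.
foo(::Type{Val{1}}) = 1
foo(::Type{Val{2}}) = 2
foo(::Type{Val{3}}) = 3

function bar()
    s = 0
    for t in DataType[Val{k} for k in 1:3]
        s += foo(t)   # dynamic dispatch: t's value is only known at run time
    end
    s
end

@assert bar() == 6
```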

-erik


On Fri, Jun 10, 2016 at 9:40 PM, Po Choi <vegetableb...@gmail.com> wrote:

> Oops! I accidentally hit the post button, so the post is not complete!
>
> It is an example:
> foo(::Val{1}) = 1
> foo(::Val{2}) = 2
> foo(::Val{2}) = 3
>
> function bar()
>   s = 0
>   for t in Datatype[Val{k} for k in 1:3]
> s += foo(t)
>   end
> end
>
> Will there be any performance issue if I loop over types?
> I am still trying to understand how the multiple-dispatch works. Sometimes
> I am confused!
>
>
> On Friday, June 10, 2016 at 6:37:31 PM UTC-7, Po Choi wrote:
>>
>>
>>
>> foo(::Val{1}) = 1
>> foo(::Val{2}) = 2
>> foo(::Val{2}) = 3
>>
>> function bar()
>>
>> for t in Datatype[Val{k} for k in 1:3]
>>
>>   end
>> end
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] "Passing by reference" (small) headache

2016-06-08 Thread Erik Schnetter
The statement `z=z+5` first creates a new vector holding `z+5`, then
assigns this vector to `z`, which is then not a reference to the argument
any more. To modify it, you can write `z[:] += 5` (or `z[:] = z+5`, or
`z[:] = z[:] + 5`). Note that these expressions will still create a new
vector with the value of `z+5`, but this new vector will then be copied into
the existing vector.
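A sketch of the fix described above (my addition; `z .+= 5` is the modern broadcast spelling of the thread's 0.4-era `z[:] += 5`):

```julia
# Mutating the caller's vector in place instead of rebinding the local name z.
function f!(z)
    for i in 1:length(z)
        z[i] = 25
    end
    z .+= 5       # writes into the existing array; z = z + 5 would rebind z
    return z
end

z = zeros(3)
f!(z)
@assert all(z .== 30.0)
```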

-erik

On Wed, Jun 8, 2016 at 1:44 PM, Michele Giugliano <mgiugli...@gmail.com>
wrote:

> Within a function, assigning to an element of a vector passed to it
> modifies the vector outside the function as well (i.e. the argument is
> passed by reference, not by value).
>
>
> How do I obtain the same effect when using the vector operator "sum to a
> scalar" ? In short, I would like the effect of the two functions f!(z) and
> g!(z), defined below, to be the same.
>
>
> function f!(z)
>
> for i=1:length(z)
>
> z[i] = 25;
>
> end
>
> z = z + 5;
>
> end
>
>
> function g!(z)
>
> for i=1:length(z)
>
> z[i] = 30;
>
> end
>
> end
>
>
> Where is my (conceptual) mistake? I would be immensely grateful for your
> help and guidance.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Initializing a new array generates unexpected results

2016-06-07 Thread Erik Schnetter
PS: `Array{Int64}()` is an array with 0 dimensions, "each" of which has
size 0 (so to say), leading to an overall length of 0^0 = 1 elements...

-erik

On Tue, Jun 7, 2016 at 11:01 PM, Erik Schnetter <schnet...@gmail.com> wrote:

> The syntax for constructing arrays is `Array{Type}(size1, size2, ...)`.
> You are not passing any values for `size`, thus you are constructing an
> array that has no indices (zero dimensions). The way to access it is `x[]`,
> i.e. an indexing expression without any indices. All zero-dimensional
> arrays have one element -- there is no way for it to have zero elements; a
> zero-dimensional array is very similar to a scalar value.
>
> Zero-dimensional arrays are confusing if you don't expect them; I assume
> many people expect to obtain a one-dimensional array with zero elements
> instead. (You can write `Vector{Int64}()` for this, or `Array{Int64}(0)`,
> or `Array{Int64,1}()`, or `Int64[]`.)
>
> To add to the confusion, Julia supports "linear indexing", which allows
> you to index arrays with fewer or more (!) indices than declared. If you
> use fewer indices, then the last index can be larger than its declared
> bound; if there are more, then the additional indices all need to have the
> value 1:
> ```Julia
> x = Vector{Int}(10)   # one dimension, ten elements
> x[1,1]   # this accesses the first element
> x[1,1,1,1,1]   # so does this
> ```
> This allows you to write `x[1]` in your case, although `x` actually
> doesn't have any indices.
>
> Supporting zero-dimensional arrays is an important corner case.
>
> Linear indexing is also important, but might use a different syntax in the
> future to avoid confusion.
>
> -erik
>
>
>
> On Tue, Jun 7, 2016 at 9:47 PM, David Parks <davidpark...@gmail.com>
> wrote:
>
>> Hi Mauro, thanks for the response!
>> I get the logic behind an uninitialized array, but shouldn't such an
>> array not return an iterable with 1 element of garbage? Why would it be
>> initialized to having 1 random element and not 0?
>>
>> I would expect an empty constructor to do something reasonable. Perhaps
>> it would be logical for these two statements to be equivalent (see bold
>> lines below). I don't see the value of having a null constructor that
>> generates an array that's iterable with exactly 1 element of garbage
>> contents. Seems like a recipe for bugs. My bugs specifically, which is
>> exactly what brought me to this topic. :)
>>
>> *julia> x = Array{Int64}()*
>> 0-dimensional Array{Int64,0}:
>> 2198389808
>>
>> julia> x[1]
>> 2198389808
>>
>> julia> x[2]
>> ERROR: BoundsError: attempt to access 0-dimensional Array{Int64,0}:
>> 2198389808
>>   at index [2]
>>  in getindex at array.jl:282
>>
>> *julia> x=Array{Int64}(0)*
>> 0-element Array{Int64,1}
>>
>> julia> x[1]
>> ERROR: BoundsError: attempt to access 0-element Array{Int64,1}
>>   at index [1]
>>  in getindex at array.jl:282
>>
>
>
> --
> Erik Schnetter <schnet...@gmail.com>
> http://www.perimeterinstitute.ca/personal/eschnetter/
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Initializing a new array generates unexpected results

2016-06-07 Thread Erik Schnetter
The syntax for constructing arrays is `Array{Type}(size1, size2, ...)`. You
are not passing any values for `size`, thus you are constructing an array
that has no indices (zero dimensions). The way to access it is `x[]`, i.e.
an indexing expression without any indices. All zero-dimensional arrays
have one element -- there is no way for it to have zero elements; a
zero-dimensional array is very similar to a scalar value.

Zero-dimensional arrays are confusing if you don't expect them; I assume
many people expect to obtain a one-dimensional array with zero elements
instead. (You can write `Vector{Int64}()` for this, or `Array{Int64}(0)`,
or `Array{Int64,1}()`, or `Int64[]`.)
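An illustration of the distinction above (my addition, using the Julia 1.x spelling where uninitialized construction takes `undef`):

```julia
# A zero-dimensional array has one element and is indexed with no indices.
x = Array{Int64,0}(undef)
x[] = 42
@assert ndims(x) == 0 && length(x) == 1 && x[] == 42

# A one-dimensional array with zero elements is a different thing entirely.
v = Int64[]
@assert ndims(v) == 1 && length(v) == 0
```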

To add to the confusion, Julia supports "linear indexing", which allows you
to index arrays with fewer or more (!) indices than declared. If you use
fewer indices, then the last index can be larger than its declared bound;
if there are more, then the additional indices all need to have the value 1:
```Julia
x = Vector{Int}(10)   # one dimension, ten elements
x[1,1]   # this accesses the first element
x[1,1,1,1,1]   # so does this
```
This allows you to write `x[1]` in your case, although `x` actually doesn't
have any indices.

Supporting zero-dimensional arrays is an important corner case.

Linear indexing is also important, but might use a different syntax in the
future to avoid confusion.

-erik



On Tue, Jun 7, 2016 at 9:47 PM, David Parks <davidpark...@gmail.com> wrote:

> Hi Mauro, thanks for the response!
> I get the logic behind an uninitialized array, but shouldn't such an array
> not return an iterable with 1 element of garbage? Why would it be
> initialized to having 1 random element and not 0?
>
> I would expect an empty constructor to do something reasonable. Perhaps it
> would be logical for these two statements to be equivalent (see bold lines
> below). I don't see the value of having a null constructor that generates
> an array that's iterable with exactly 1 element of garbage contents. Seems
> like a recipe for bugs. My bugs specifically, which is exactly what brought
> me to this topic. :)
>
> *julia> x = Array{Int64}()*
> 0-dimensional Array{Int64,0}:
> 2198389808
>
> julia> x[1]
> 2198389808
>
> julia> x[2]
> ERROR: BoundsError: attempt to access 0-dimensional Array{Int64,0}:
> 2198389808
>   at index [2]
>  in getindex at array.jl:282
>
> *julia> x=Array{Int64}(0)*
> 0-element Array{Int64,1}
>
> julia> x[1]
> ERROR: BoundsError: attempt to access 0-element Array{Int64,1}
>   at index [1]
>  in getindex at array.jl:282
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Calling Fortran code?

2016-06-07 Thread Erik Schnetter
Charles

Can you show the complete code, including the definitions of `c` and `IER`?

Can you also show how you "call" these variables, and what the error
message is?

The more detail you give, the easier it is to help.

-erik


On Mon, Jun 6, 2016 at 8:16 PM, Charles Ll <charles.lel...@gmail.com> wrote:

> Dear all,
>
> Thank you very much for you help! I've been able to write a form of gcvspl
> that seems to run, with calling:
>
> ccall( (:gcvspl_, "./libgcvspl.so"), Void,
> (Ptr{Float64},Ptr{Float64},Ptr{Cint},Ptr{Float64},Ptr{Float64},Ptr{Cint},Ptr{Cint},Ptr{Cint},Ptr{Cint},Ptr{Float64},Ptr{Float64},Ptr{Cint},Ptr{Float64},Ptr{Cint}),x,y,,wx,wy,VAL,c,,WK,IER)
>
>
> However, when I try to call the c or IER variables, the Julia kernel
> dies... :s
>
> Do you have any idea why? What am I missing?
>
> Furthermore, dear Erik, what do you mean by using 'Ref' for one-element
> arrays?
>
> Thanks in advance!
>
> Best,
> Charles.
>
> Le lundi 6 juin 2016 22:12:32 UTC+10, Erik Schnetter a écrit :
>>
>> I would use `Cint` as type, not `Int64`, as in `Ptr{Cint}`. Are you sure
>> that your Fortran code uses 64-bit integers? Technically, this violates the
>> Fortran standard. A Fortran integer is the same as a C int, except if you
>> use special flags while building the Fortran library (which you might be
>> doing).
>>
>> You can use `Ref` instead of one-element arrays.
>>
>> Your error message has nothing to do with arrays vs. pointer, but rather
>> the mismatch between `Float64` and `Int64`.
>>
>> -erik
>>
>>
>> On Mon, Jun 6, 2016 at 6:07 AM, Páll Haraldsson <pall.ha...@gmail.com>
>> wrote:
>>
>>> On Monday, June 6, 2016 at 3:22:47 AM UTC, Charles Ll wrote:
>>>>
>>>> Dear all,
>>>>
>>>> I am trying to call some Fortran code in Julia, but I have a hard time
>>>> doing so... I have read the docs, looked at the wrapping of ARPACK and
>>>> other libraries... But I did not find any way to make it work.
>>>>
>>>> I am trying to wrap a spline function library (gcvspl.f,
>>>> https://github.com/charlesll/Spectra.jl/tree/master/Dependencies),
>>>> which I want to use in my project, Spectra.jl.
>>>>
>>>> I already have a wrapper in Python, but this was easier to wrap with
>>>> using f2py. In Julia, I understand that I have to do it properly. The
>>>> function I am trying to call is:
>>>>
>>>
>>> I think, you may already have gotten the correct answer. At first I was
>>> expecting fcall, not just ccall keyword in Julia, to call Fortran.. It's
>>> not strictly needed, but in fact, there is an issue somewhere still open
>>> about fcall (keyword, or was if for a function?), and it may have been for
>>> your situation..
>>>
>>>
>>> [It might not be too helpful to know, you can call Python with
>>> PyCall.jl, so if you've already wrapped in Python..]
>>>
>>> --
>>> Palli.
>>>
>>>
>>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com>
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Calling Fortran code?

2016-06-06 Thread Erik Schnetter
I would use `Cint` as type, not `Int64`, as in `Ptr{Cint}`. Are you sure
that your Fortran code uses 64-bit integers? Technically, this violates the
Fortran standard. A Fortran integer is the same as a C int, except if you
use special flags while building the Fortran library (which you might be
doing).

You can use `Ref` instead of one-element arrays.

Your error message has nothing to do with arrays vs. pointer, but rather
the mismatch between `Float64` and `Int64`.
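A hedged sketch of the `Ref` advice (my addition): `Ref{Cint}` replaces a one-element array for a Fortran INTEGER passed by reference. The subroutine name and signature below are illustrative only, following the thread's `gcvspl` example, so the `ccall` itself is left commented out.

```julia
# Scalars passed by reference to Fortran: use Ref instead of one-element arrays.
ier = Ref{Cint}(0)       # instead of ier = Cint[0]
n   = Ref{Cint}(10)
x   = zeros(Cdouble, 10)
# Illustrative signature only -- not the real gcvspl_ argument list:
# ccall((:gcvspl_, "libgcvspl.so"), Cvoid,
#       (Ptr{Cdouble}, Ref{Cint}, Ref{Cint}), x, n, ier)
@assert ier[] == 0       # read a Ref's value with []
ier[] = 3                # ...and write it the same way
@assert ier[] == 3
```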

-erik


On Mon, Jun 6, 2016 at 6:07 AM, Páll Haraldsson <pall.haralds...@gmail.com>
wrote:

> On Monday, June 6, 2016 at 3:22:47 AM UTC, Charles Ll wrote:
>>
>> Dear all,
>>
>> I am trying to call some Fortran code in Julia, but I have a hard time
>> doing so... I have read the docs, looked at the wrapping of ARPACK and
>> other libraries... But I did not find any way to make it work.
>>
>> I am trying to wrap a spline function library (gcvspl.f,
>> https://github.com/charlesll/Spectra.jl/tree/master/Dependencies), which
>> I want to use in my project, Spectra.jl.
>>
>> I already have a wrapper in Python, but this was easier to wrap with
>> using f2py. In Julia, I understand that I have to do it properly. The
>> function I am trying to call is:
>>
>
> I think, you may already have gotten the correct answer. At first I was
> expecting fcall, not just ccall keyword in Julia, to call Fortran.. It's
> not strictly needed, but in fact, there is an issue somewhere still open
> about fcall (keyword, or was it for a function?), and it may have been for
> your situation..
>
>
> [It might not be too helpful to know, you can call Python with PyCall.jl,
> so if you've already wrapped in Python..]
>
> --
> Palli.
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Uniform syntax

2016-06-05 Thread Erik Schnetter
>> >> >>
>> >> >> Regarding you example: it is unclear what you are trying to do, but
>> >> >> unless you use the macro keyword, you get an ordinary function, not
>> a
>> >> >> macro.
>> >> >>
>> >> >> On Sun, Jun 05 2016, Ford O. wrote:
>> >> >>
>> >> >> > What makes macro different from function?
>> >> >> >
>> >> >> > Why is it not possible to do:
>> >> >> > foo(e::Expr) = :()
>> >> >> > @foo x = x + 5
>> >> >> >
>> >> >> > On Friday, June 3, 2016 at 10:05:46 AM UTC+2, Ford O. wrote:
>> >> >> >
>> >> >> >  I think this deserves an own topic.
>> >> >> >
>> >> >> >  You should post here syntax that looks like duplicate or you
>> think
>> >> it
>> >> >> could use already existing keyword. (mark with # identical or #
>> >> >> replacement)
>> >> >> >  Rule of thumb - the less special syntax the better.
>> >> >> >
>> >> >> >  # identical
>> >> >> >  # replace ' , ' with ' ; ' in arrays ?
>> >> >> >  [1, 2, 3, 4]
>> >> >> >  [1; 2; 3; 4]
>> >> >> >
>> >> >> >  # identical
>> >> >> >  # replace ' = ' with ' in ' in for loops ?
>> >> >> >  for i = 1:10
>> >> >> >  for i in 1:10
>> >> >> >
>> >> >> >  # replacement
>> >> >> >  # replace ' -> ' with ' = ' in anonymous functions ?
>> >> >> >  (a, b, c) -> a + b + c
>> >> >> >  (a, b, c) = a + b + c
>> >> >>
>> >> >>
>> >>
>> >>
>>
>>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] How to launch a Julia cluster under Torque scheduler

2016-06-03 Thread Erik Schnetter
David

One way to go would be to use the MPI package, and to use mpirun to start
the Julia processes. In this way, all processes are started automatically,
and you do not need to run addprocs. The example "06-cman-transport.jl" in
the MPI package shows how to use this setup.

-erik

On Fri, Jun 3, 2016 at 2:53 PM, David Parks <davidpark...@gmail.com> wrote:

> I've been reading various discussions on here about launching a Julia
> cluster.
> It's hard to tell how current they are, and the documentation is a bit
> lacking on how to launch clusters under environments such as Torque. I'm
> using Julia 0.4.5 (e.g. no cookies)
>
> The primary approach that is documented is for the master Julia process to
> `addprocs` and ssh to the workers in the cluster.
>
> But this seem to conflict with the torque model, where torque launches a
> script under each assigned host. When those scripts exit the worker process
> are considered finished (as far as I understand).
>
> So it would make most sense to let torque launch the Julia workers, and
> have those workers connect back to a designated master.
>
> Is that straight forward to do with 0.4.5?
>
> I see --worker, and I thought --bind-to would allow me to specify the
> master host, but I don't see a way to tell the master that it should listen
> for workers. This is about where the documentation ends.
>
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Arithmetic with TypePar

2016-06-02 Thread Erik Schnetter
In these cases I do the following:

(1) Add additional type parameters, as in

```Julia
type FooImp{D,D1}
baz::Array{D}
bar::Array{D1}
end
```

(2) Add a function that calculates the type:

```Julia
Foo(D) = FooImpl{D,D+1}
```

In many cases I then have to write `Foo(D)` instead of `Foo{D}` (round
parentheses instead of curly braces), but apart from this, things work out
nicely. I'm sure there is some `@pure` annotation or similar for `Foo` to
help type inference.
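A runnable sketch of steps (1) and (2) above (my addition; `struct` is the modern spelling of the thread's 0.4-era `type`, and the `Float64` element type is an assumption for concreteness):

```julia
# Extra type parameter D1 carries the computed dimension; Foo(D) computes it.
struct FooImpl{D,D1}
    baz::Array{Float64,D}
    bar::Array{Float64,D1}
end
Foo(D) = FooImpl{D,D+1}

f = Foo(1)(rand(3), rand(3, 2))   # D = 1, so bar is two-dimensional
@assert ndims(f.baz) == 1 && ndims(f.bar) == 2
```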

-erik

PS: Yes, it would be nice if this calculation (`D1 = D+1`) could be moved
elsewhere, e.g. into the type definition. I don't see why this isn't
possible; I think it's mostly a complicated thing to implement, and people
haven't demonstrated the necessity for this yet.



On Thu, Jun 2, 2016 at 2:07 AM, Robert DJ <math.rob...@gmail.com> wrote:

> The problem with not specifying D is type inference. I actually have more
> entries in Foo and would like
>
> type Foo{D}
> baz::Array{D}
> bar::Array{D+1}
> end
>
>
> I want to use Foo in calculations and for D = 1 I am doing something like
>
> baz + bar[:,1]
> baz + bar[:,2]
>
>
> For D = 2:
>
> baz + bar[:,:,1]
> baz + bar[:,:,2]
>
>
> I *could* instead (for D = 2) move the third dimension to extra columns...
>
>
> On Wednesday, June 1, 2016 at 8:56:28 PM UTC+2, Robert DJ wrote:
>>
>> I have a custom type with a TypePar denoting a dimension and would like
>> to define the following:
>>
>> type Foo{D}
>> bar::Array{D+1}
>> end
>>
>> However, this does not work. As D is only 1 or 2 it would OK with
>>
>> type Foo{1}
>> bar::Matrix
>> end
>>
>> type Foo{2}
>> bar::Array{3}
>> end
>>
>> but unfortunately this isn't working, either.
>>
>> Can this problem be solved?
>>
>> Thanks!
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Context manager and Julia (with...)

2016-06-01 Thread Erik Schnetter
Julia has a similar feature that is built on closures. See the "do
notation" in the manual. For example:

```Julia
open("file", "r") do f
@show readline(f)
end
```
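The do-block is sugar for passing a closure as the first argument, and `open` closes the handle even if the body throws, much like Python's `with`. A self-contained illustration (my addition, writing a temporary file first so the example runs):

```julia
# Create a temporary file, then read it back through the do-block form of open.
path, io = mktemp()
write(io, "hello\n")
close(io)

line = open(path, "r") do f
    readline(f)       # the file is closed automatically when the block exits
end
@assert line == "hello"
```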

-erik


On Wed, Jun 1, 2016 at 10:18 AM, Femto Trader <femto.tra...@gmail.com>
wrote:

> Hello,
>
> Python have a nice concept called "context manager" (see "with" statement)
> http://book.pythontips.com/en/latest/context_managers.html
> With this concept it's quite easy to open a file without having to close
> it.
> Yattag, for example, use this concept to create XML files.
> You just need to use context managers to create opening tags... closing
> tags
> are automatically created.
> So is there a similar concept in Julia.
> If yes, where can I find some doc about it ?
> If not, is there any plan to provide such feature ?
>
> Kind regards
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Erik Schnetter
You can use `@simd` with the innermost loop. (It really should be extended
to work with loop nests as well, but that hasn't been done yet.) Here is
how `@simd` would look in your case:

```Julia
@inbounds for l = 1:1000, k = 1:20, j = 1:20
@simd for i = 1:70
bar = bar + (array1[i, l] - array3[1, i, j, k])^2 + (array2[i, l] -
array3[2, i, j, k])^2
end
end
```

That is, you split off the innermost loop, and attach the `@simd` there.

-erik


On Wed, Jun 1, 2016 at 7:51 AM, Lutfullah Tomak <tomaklu...@gmail.com>
wrote:

> OK, I revise what I said about @simd: according to the documentation, it
> only works for one-dimensional ranges, and it does not allow this type of
> for-loop, throwing an error.
>
> Here are my timings:
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   1.215454 seconds (84.00 M allocations: 1.252 GB, 4.16% gc time)
>
>
> julia> foo()
>   1.308979 seconds (84.00 M allocations: 1.252 GB, 3.92% gc time)
>
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.114811 seconds
>
>
> julia> foo()
>   0.150542 seconds
>
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>
>end
>
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.004927 seconds
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Why these 2 functions generate very different llvm?

2016-05-30 Thread Erik Schnetter
Yichao

What is the known bug here: That Julia doesn't handle this more
efficiently, or that the variable `x` can change its type from Int to
Float64?

In this example, are there two variables `x` (the argument and the
explicitly declared local variable), or should the declaration be an error?
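A sketch of the workaround implied by the question in the quoted thread (my addition): introducing a fresh name avoids rebinding `x` from Int to Float64, which is what makes `t1` type-unstable.

```julia
# A fresh local name keeps x at a single concrete type for its whole lifetime.
function t3(n::Int, x0::Int, a::Float64)
    x = Float64(x0)      # x is Float64 throughout; x0 stays an Int
    for i in 1:n
        x += a
    end
    x
end

@assert t3(10, 1, 1.0) == 11.0
```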

-erik

On Mon, May 30, 2016 at 7:47 PM, Yichao Yu <yyc1...@gmail.com> wrote:

> On Mon, May 30, 2016 at 7:23 PM, David P. Sanders <dpsand...@gmail.com>
> wrote:
> >
> >
> > El lunes, 30 de mayo de 2016, 19:11:47 (UTC-4), FANG Colin escribió:
> >>
> >> function t1(n::Int, x::Int, a::Float64)
> >>x::Float64 = x
> >>for i in 1:n
> >> x += a
> >>end
> >> x
> >> end
> >> @time t1(10^6, 1, 1.0)
> >>
> >> 0.005445 seconds (1.00 M allocations: 15.259 MB)
> >
> >
> > In t1, x changes type during the function, from Int to Float64, so the
> > function is type *un*stable, as shown by @code_warntype,
> > and as suggested by the huge number of allocations.
> >
> > In t2, x is always a Float64, and the function is type stable.
> >
> >>
> >>
> >>
> >>
> >>
> >> function t2(n::Int, y::Int, a::Float64)
> >>x::Float64 = y
> >>for i in 1:n
> >> x += a
> >>end
> >> x
> >> end
> >> @time t2(10^6, 1, 1.0)
> >>
> >> 0.001044 seconds (6 allocations: 192 bytes)
> >>
> >>
> >>
> >>
> >> The @code_warntype of the 2 functions are very similar. However, the
> llvm
> >> code generated from t2 is a lot simpler.
> >
> >
> > The @code_warntype of the two functions is very *different*. (This is
> easier
> > to see in the REPL than in the notebook, if
> > that is the problem.)
> >
> >>
> >>
> >> Does it suggest that if we want to change the type of an argument, we'd
> >> better create a new variable?
>
> This is a known bug. Fortunately it's easy to catch with code_warntype.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] External Fortran Library (ifort vs gfortran)

2016-05-06 Thread Erik Schnetter
Derek

The ccall looks correct. (I didn't count the arguments or check their
types, though.)

A wild guess: ifort has an option -i8 that uses 8-byte integers. If you're
using it, you'd need to use Int64 instead. Alternatively, there could be an
"implicit integer*8" statement somewhere.

-erik

On Fri, May 6, 2016 at 7:43 PM, Derek Tucker <de...@tetonedge.net> wrote:

> Erik,
>
> I wondered that, the library is not mine, but the code is here, work in
> progress
>
> The ccall is here (
> https://github.com/jdtuck/spatial_pp/blob/master/thomas_pp.jl#L556)
>
> The library is here (
> https://github.com/jdtuck/spatial_pp/blob/master/deps/src/nscluster/Simplex-Thomasf.f
> )
>
> From what I understand everything is a pointer from ccall for Fortran,
> correct?
>
> Thanks
>
> Derek
>
> On Friday, May 6, 2016 at 5:03:48 PM UTC-6, Erik Schnetter wrote:
>>
>> Derek
>>
>> How are you interfacing it with Julia?
>>
>> It could be there is an error in the way you are interfacing it. This
>> error could be undetected with gfortran, but be visible with ifort. The
>> error message sounds as if you are writing to a memory location that should
>> not be written to (e.g. to a constant or a string). It is easy to get
>> confused with what is a pointer and what not when using Julia's `ccall` for
>> Fortran code, and this can lead to that kind of error.
>>
>> If you point to your code, people might be able to give better advice.
>>
>> -erik
>>
>>
>> On Fri, May 6, 2016 at 6:03 PM, Derek Tucker <de...@tetonedge.net> wrote:
>>
>>> I have an external library that I am interfacing with julia, when I
>>> compile it with gfortran it runs without a problem. When I compile it with
>>> ifort i get this error
>>>
>>> ERROR: ReadOnlyMemoryError()
>>>
>>> Anybody has any ideas why. I have test the library with ifort using a
>>> test program and it works fine.
>>>
>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com>
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] External Fortran Library (ifort vs gfortran)

2016-05-06 Thread Erik Schnetter
Derek

How are you interfacing it with Julia?

It could be there is an error in the way you are interfacing it. This error
could be undetected with gfortran, but be visible with ifort. The error
message sounds as if you are writing to a memory location that should not
be written to (e.g. to a constant or a string). It is easy to get confused
with what is a pointer and what not when using Julia's `ccall` for Fortran
code, and this can lead to that kind of error.

If you point to your code, people might be able to give better advice.

-erik


On Fri, May 6, 2016 at 6:03 PM, Derek Tucker <de...@tetonedge.net> wrote:

> I have an external library that I am interfacing with julia, when I
> compile it with gfortran it runs without a problem. When I compile it with
> ifort i get this error
>
> ERROR: ReadOnlyMemoryError()
>
> Anybody has any ideas why. I have test the library with ifort using a test
> program and it works fine.
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Any downside of using a symbol as an interned string?

2016-04-22 Thread Erik Schnetter
On Fri, Apr 22, 2016 at 8:46 AM, Tamas Papp <tkp...@gmail.com> wrote:
> find_string(table::Dict, string) = get!(table, string, string)
>
> table = Dict()
>
> a1 = find_string(table, "a")
> a2 = find_string(table, "a")# called w/ different string, will return 
> first one
> a1 === a2   # true, same string
>
> Maybe you could also use a Set, but I don't know how to get the element
> that matches, not just test for membership with `in`.

Maybe `Set` should have a `getindex` function?

-erik

> Best,
>
> Tamas
>
> On Fri, Apr 22 2016, Lyndon White wrote:
>
>> @Tamas thanks
>> Can you clarify a Dict of strings to what? Ints?
>> Loose a lot of interpretablity that way.
>>
>>
>> On Friday, 22 April 2016 19:08:11 UTC+8, Tamas Papp wrote:
>>>
>>> I would be more concerned about style than speed -- symbols as strings
>>> is an ancient Lisp technique in NLP, but IMO a Dict of strings would be
>>> better style.
>>>
>>>
>>>
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] using Atomic

2016-04-17 Thread Erik Schnetter
Ben

The threading and atomic API is not exported from Base. That means
that you need to use names such as `x = Threads.Atomic{Int}()` to
create an atomic variable.

-erik


On Sun, Apr 17, 2016 at 5:45 PM, ben e <beneacr...@gmail.com> wrote:
> Hello,
> I'm still relatively new to Julia so this is probably a simple question, but
> I've had trouble finding the answer.
> I'm interested in using shared memory to do ipc. Shared arrays make a single
> producer / single reader impl exceptionally easy in Julia.
> I was planning to use Atomic and some atomic operations to implement a lock
> free multiple producer set up, but I can't seem to access them (import,
> using). For another structure, Mutex would also be useful, but same issue.
> I've seen that Threads are no longer supported in v0.4, but it looks like
> all these modules are included through the sysimg.jl in base, so I'm
> confused as to why I can't hit anything in that namespace.
> Are there risks I am unaware of if I just include the sources atomics.jl and
> locks.jl directly in my projects?
> Thank you
> Ben



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] scoping wat

2016-04-12 Thread Erik Schnetter
On Tue, Apr 12, 2016 at 9:05 AM, Cedric St-Jean <cedric.stj...@gmail.com> wrote:
>
> On Tuesday, April 12, 2016 at 8:52:23 AM UTC-4, Didier Verna wrote:
>>
>> Mauro <maur...@runbox.com> wrote:
>>
>> > Maybe you can be a bit more explicit in what you mean, examples would
>> > help too.
>>
>>   Sorry for the fuzziness. This is an example of what I mean:
>>
>> let
>>   x = 10
>> end
>
>
> Your let is broken, it's a no-op. let always creates local bindings
>
> xx = 20
> let xx = 30
> @show xx
> end
> @show xx
>
>> xx=30
>> xx=20

The manual is unclear where it introduces let bindings. It says e.g.
at one point that let uses soft scope
(http://docs.julialang.org/en/release-0.4/manual/variables-and-scoping/),
and at another point that "let statements allocate new variable
bindings". It doesn't make a distinction between the bindings
introduced on the same line as the `let` (or in lines terminated by a
colon), and other lines (or after the first semicolon). Compare

let a=4 end

let; a=4 end

let a=4, b=4 end

let a=4; b=4 end

let a=4,
b=4
end

let a=4;
b=4
end

let
a=4
b=4
end

The manual could be more explicit about this.

-erik


> I agree that the hard/soft scoping rules are messy. In practice, it doesn't
> seem to cause a lot of issues
>
>
>>
>>
>> here, the same expression "x = 10" will either assign a new value to x
>> if such a /global/ exists, or create a new /local/ binding. This is
>> already bad enough[1]. Then, begin/end behaves differently. Then, "if"
>> doesn't introduce a new block, but some others constructs do, and some
>> don't. And then, some like "for" do or don't, it depends.
>>
>> Now even worse:
>> x = 10
>> function foo()
>>   println(x)
>>   x = 5
>>   println(x)
>> end
>>
>> will break on the first println (one could expect to get 10, the global
>> value for x) because since there is an assignment /later on/, a new
>> /local/ binding will be created for x (which BTW is the exact opposite
>> of what let does!), but this binding isn't available to the first
>> println. And also, since a variable cannot switch from global to local
>> (or vice-versa) in the same block, the intuition (well, mine anyway ;-))
>> that after the assignment, the scoping changes, is wrong.
>>
>> And then, as you mention, nested functions will behave yet
>> differently. All of this looks really bad.
>>
>> So IMHO, the real problem is that there are two distinct concepts:
>> assigning a new value to an existing binding, and creating a new
>> binding, possibly with an initial assignment. These are separate things
>> and mixing the two in such ways is wrong, and also quite surprising,
>> knowing the lispy side of this language.
>>
>>
>>
>> Footnotes:
>> [1]  consider that a simple typo in your code may lead to silently
>> creating a new variable, which will never be used.
>>
>> --
>> ELS'16 registration open! http://www.european-lisp-symposium.org
>>
>> Lisp, Jazz, Aïkido: http://www.didierverna.info



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] is accessing tuple elements more efficient than accessing elements of a 1d array?

2016-04-12 Thread Erik Schnetter
On Tue, Apr 12, 2016 at 3:43 AM, DNF <oyv...@gmail.com> wrote:
> But what if you do have a very small number of elements? Let's say I am
> working with 2D or 3D points and wonder whether to represent them as
> vectors, tuples or a custom type, like
>
> immutable Point2D{T<:Real}
> x::T
> y::T
> end
> Base.(:+){T}(p1::Point2D{T}, p2::Point2D{T}) = Point2D(p1.x+p2.x, p1.y+p2.y)

This is essentially equivalent (performance-wise) to using a tuple.

The main difference between small arrays and small tuples, if they
hold bitstypes (such as integers or floating-point numbers), seems to
me to be that arrays require heap allocation while tuples do not. That
can lead to a significant performance difference, in particular if
they are used as elements of a large array, or in a loop that is
executed often.
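
A sketch of the point-type approach in current syntax (`struct` replaces the 0.4-era `immutable`); an immutable struct of bits types behaves like a tuple and needs no heap allocation:

```julia
struct Point2D{T<:Real}   # `immutable Point2D{T<:Real}` in 0.4-era syntax
    x::T
    y::T
end
Base.:+(p::Point2D, q::Point2D) = Point2D(p.x + q.x, p.y + q.y)

p = Point2D(1.0, 2.0) + Point2D(3.0, 4.0)
println(p)
@assert isbits(p)   # plain bits: can live on the stack or inline in arrays
```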

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: [ANN] FixedSizeDictionaries

2016-04-11 Thread Erik Schnetter
On Mon, Apr 11, 2016 at 11:07 AM, Simon Danisch <sdani...@gmail.com> wrote:
> @Christoph: arbitrarily, of course :D Joke aside, it should also use zip for
> the (keys, values) constructor (like Base does).
> @tshort:
> It can have arbitrary isbits & immutables as keys.
> E.g, this works (should probably get some macro sugar):
> function test{Keys <: Tuple{Val{1}, Val{:c},
> Val{MyKeyType()}}}(x::FixedKeyValueDict{Keys})
>
> Also, it doesn't need to look up if a Dict is already defined, which
> simplifies the implementation.
>
> In general, I think it's a bit more future proof.
> I expect that generating types via eval will never really be idiomatic, while
> @generated seems to be fine for this kind of optimization.

"Generated types" don't yet exist in Julia, "generated functions" do.
I find generated types quite useful, and often necessary in
conjunction with generated functions. One can hide the `@eval` inside a
generated function, so that the expense is paid once, at generation time. It
should also be possible to extend the `@generated` macro to work for
types as well.

-erik

> With getfield overloading and better constant propagation, the
> implementation might be freed of any @generated and macro magic at some
> point, which I think is desirable.
>
> In favor of NamedTuples: it'll be equally fast or faster in most cases,
> and getfield/setfield currently works better with types.
> Also, it already has the nicer macro sugar ;)
>
>
>
> Am Montag, 11. April 2016 14:08:59 UTC+2 schrieb Simon Danisch:
>>
>> Here is yet another package: FixedSizeDictionaries.jl
>>
>> From the README:
>> Library which implements a FixedSize variant of Dictionaries. These can be
>> stack-allocated and have O(1) indexing performance without bounds checks. It
>> implements most parts of the Base.Dict interface.
>> This package is useful when you want anonymous composite types. You
>> should be a bit careful about generating a lot of FixedSizeDicts, since it
>> will compile a unique set of functions for every field of a Dict.
>>
>> I'll be using it to speed up various places where I'm currently using a
>> dictionary but where the number of keys is set at compile time.
>> Also, Shashi and I contemplated implementing Extensible Records, which
>> seems like a good fit for representing all kinds of graphics/geometry types.
>> FixedSizeDictionaries could be an integral part of this!
>>
>> Best,
>> Simon
>>
>> PS:
>> I do feel like it's time for a package announcement list...



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Find the indice of the element with the nearest value of a float in an array

2016-04-10 Thread Erik Schnetter
Fred

Yes, the fastest way is a loop. (It might be possible to extend e.g.
`minimum` by a "predicate" argument, but that would require a change
to the base library.)

You would write the loop slightly differently, though:
```Julia
x0 = 8.22  # the target value
mini = 0
minx = 0.0
mindx = typemax(Float64)
for (i, x) in enumerate(array)
    dx = abs(x - x0)
    if dx < mindx
        mini, minx, mindx = i, x, dx
    end
end
mini, minx, mindx
```

This will return `mini = 0` for empty inputs.
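
For comparison, the search can also be written with `findmin` over the broadcasted distances (current dot syntax); this allocates a temporary array, so the explicit loop above remains the allocation-free option:

```julia
x = collect(1:0.1:10)
x0 = 8.22
mindx, mini = findmin(abs.(x .- x0))   # (minimal distance, its index)
println(mini, " -> ", x[mini])          # 73 -> 8.2
```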

-erik



On Sun, Apr 10, 2016 at 7:40 AM, Fred <fred.softwa...@gmail.com> wrote:
> Hi,
>
> I am looking for the most efficient (fastest) way to find the indice of the
> element with the nearest value of a float in an array.
>
> x = [1:0.1:10]
>
> julia> x
> 91-element Array{Float64,1}:
>   1.0
>   1.1
>   1.2
>   1.3
>   1.4
>   ⋮
>   9.4
>   9.5
>   9.6
>   9.7
>   9.8
>   9.9
>  10.0
>
> It is very easy to find the indice of an exact value of x, for example 8.2
>
> julia> find(x .== 8.2)
> 1-element Array{Int64,1}:
>  73
>
> But if I want the indice of the closest value of 8.22
>
> julia> minimum(abs(x-8.22))
> 0.02135
>
> julia> find(x .== minimum(abs(x-8.22)))
> 0-element Array{Int64,1}
>
>
> Of course it is easy to do that with a loop but is it the fastest solution ?
>
> min_i = 0
> min_x = 1.0
>
>  for i=[1:length(x)]
>e = abs(collect(x)[i] - 8.22)
>if e < min_x
>min_x = e
>min_i = i
>end
>  end
>
> println(min_x, " -> ", min_i)
> 0.02135 -> 73
>
>
> Thanks for your comments !
>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Fast multiprecision integers?

2016-04-09 Thread Erik Schnetter
Laurent

You can use a construct such as

```Julia
immutable FastInt
    isbig::Bool
    small::Int
    big::BigInt
    FastInt(x::Int) = new(false, x)
    FastInt(x::BigInt) = new(true, 0, x)
end
getbig(x::FastInt) = x.isbig ? x.big : BigInt(x.small)
function add(x::FastInt, y::FastInt)
    if !x.isbig && !y.isbig
        # TODO: add overflow checking here
        return FastInt(x.small + y.small)
    end
    FastInt(getbig(x) + getbig(y))
end
```

This will be reasonably efficient (apart from memory usage), and since
unused `BigInt` objects are never initialized, no memory will be
allocated for them.

You could also fold the `isbig` field into `small` by e.g. choosing to
use `typemin(Int)` as a value that indicates that the `BigInt` field
should be used.
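
The overflow check for the small/small fast path can use `Base.Checked.add_with_overflow`, which returns the wrapped result together with an overflow flag:

```julia
using Base.Checked: add_with_overflow

r, ov = add_with_overflow(typemax(Int), 1)
println(ov)               # true: would overflow, so fall back to BigInt
r2, ov2 = add_with_overflow(1, 2)
println(r2, " ", ov2)     # 3 false: the fast path applies
```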

-erik



On Sat, Apr 9, 2016 at 8:07 AM, Laurent Bartholdi
<laurent.bartho...@gmail.com> wrote:
> Thanks for the explanations. Am I correct in understanding that objects are
> garbage-collected when they are only referenced through raw pointers, and
> this causes the problem?
> This kills my naive idea of using pointers to BigInts as a 64-bit datatype
> compatible with ints, and is suggested by the following experiment:
>
> julia> l = [pointer_from_objref(BigInt(i)) for i=1:10]
> 10-element Array{Ptr{Void},1}:
>  Ptr{Void} @0x000118c77150
>  Ptr{Void} @0x000118c77170
>  Ptr{Void} @0x000118c77190
>  Ptr{Void} @0x000118c771b0
>  Ptr{Void} @0x000118c771d0
>  Ptr{Void} @0x000118c771f0
>  Ptr{Void} @0x000118c77210
>  Ptr{Void} @0x000118c77230
>  Ptr{Void} @0x000118c77250
>  Ptr{Void} @0x000118c77270
>
>
> julia> unsafe_pointer_to_objref(l[7])
> 7
>
>
> julia> map(unsafe_pointer_to_objref,l)
> Segmentation fault: 11
>
>
> On Friday, 8 April 2016 17:47:34 UTC+2, Stefan Karpinski wrote:
>>
>> This could help if both parts of the union were plain old data like Int64
>> and Float64, but in the case of Int and BigInt, one of them is a more
>> complex type, which would currently force the whole thing to be boxed and
>> live in the heap anyway. What's needed here is the ability to stack-allocate
>> objects that refer to the heap. We cannot do that now, but it is an
>> important planned CG improvement.
>>
>> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones <scott.pa...@gmail.com>
>> wrote:
>>>
>>> I like very much the idea of a discriminated union - that would help
>>> enormously with some of the stuff I work on.
>>>
>>>
>>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>>>>
>>>> Laurent
>>>>
>>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>>>>
>>>> I think what you want is an extension of `Nullable`: A nullable type
>>>> represents an object of a particular type that might or might not be
>>>> there; there's hope that this would be done efficiently, e.g. via an
>>>> additional bit. What you want is an efficient representation of a
>>>> discriminated union -- a type that can represent either of two types,
>>>> but without the overhead from boxing the types as is currently done in
>>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
>>>> would make sense to have a feature request for this.
>>>>
>>>> This might look like this:
>>>> ```Julia
>>>> immutable FastInt
>>>> val::DiscriminatedUnion{Int, BigInt}
>>>> end
>>>> function (+)(a::FastInt, b::FastInt)
>>>> if typeindex(a.val) == 1 && typeindex(b.val) == 1
>>>> res, ov = add_with_overflow(a.val[1], b.val[1])
>>>> ov && return FastInt(BigInt(a.val[1]) + b.val[1])
>>>> return FastInt(res)
>>>> end
>>>> # TODO: handle mixed case
>>>> FastInt(a.val[2] + b.val[2])
>>>> end
>>>> ```
>>>>
>>>> This would be the same idea as yours, but the `reinterpret` occurs
>>>> within Julia, in a type-safe and type-stable manner, in a way such
>>>> that the compiler can optimize better.
>>>>
>>>> You could try defining a type that contains two fields, both an `Int`
>>>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
>>>> is never used in a more efficient manner that doesn't require
>>>> allocating an object.
>>>>
>>>> -erik
>>>>
>>>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
>>>> <laurent@gmail.com> wrote:
>>>> > Dear all,
>>>> &

Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Erik Schnetter
Laurent

I'm afraid you can't use `reinterpret` with a type such as `BigInt`.

I think what you want is an extension of `Nullable`: A nullable type
represents an object of a particular type that might or might not be
there; there's hope that this would be done efficiently, e.g. via an
additional bit. What you want is an efficient representation of a
discriminated union -- a type that can represent either of two types,
but without the overhead from boxing the types as is currently done in
a `Union`. Unfortunately, Julia currently doesn't provide this, but it
would make sense to have a feature request for this.

This might look like this:
```Julia
immutable FastInt
val::DiscriminatedUnion{Int, BigInt}
end
function (+)(a::FastInt, b::FastInt)
if typeindex(a.val) == 1 && typeindex(b.val) == 1
res, ov = add_with_overflow(a.val[1], b.val[1])
ov && return FastInt(BigInt(a.val[1]) + b.val[1])
return FastInt(res)
end
# TODO: handle mixed case
FastInt(a.val[2] + b.val[2])
end
```

This would be the same idea as yours, but the `reinterpret` occurs
within Julia, in a type-safe and type-stable manner, in a way such
that the compiler can optimize better.

You could try defining a type that contains two fields, both an `Int`
and a `BigInt` -- maybe `BigInt` will handle the case of a value that
is never used in a more efficient manner that doesn't require
allocating an object.

-erik

On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
<laurent.bartho...@gmail.com> wrote:
> Dear all,
> How hard would it be to code arbitrary-precision integers in Julia with at
> worst 2x performance loss over native Ints?
>
> This is what I have in mind: have a bitstype structure, say 64 bits, which
> is either the address of a BigInt (if even), or an Int64 (if odd). Addition
> would be something like:
>
> function +(a::FastInt,b::FastInt)
> if a&1 && b&1
> (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
> obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
> elseif a&1
> reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
> elseif b&1
> reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
> else
> reinterpret(FastInt,reinterpret(BigInt,a) + reinterpret(BigInt,b))
> end
> end
>
> (code not meant to be run, just a skeleton)
>
> This would be very useful in the development of computer algebra systems, in
> which BigInts are too slow and eat up too much memory, but one is ready to
> pay a small price for guard against arithmetic overflows.
>
> If this is too complicated, then perhaps at least a type of integers that
> would raise an error in case of over/underflows? Those could be caught in
> throw/catch enclosures, so the user could restart the operation with
> BigInts.
>
> TIA, Laurent



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] overflow behavior

2016-04-05 Thread Erik Schnetter
On Tue, Apr 5, 2016 at 11:05 AM, Didier Verna <did...@didierverna.net> wrote:
> Erik Schnetter <schnet...@gmail.com> wrote:
>
>> The literal `1` has type `Int`.  The promotion rules for `Int8` and
>> `Int` state that, before the addition, `Int8` is converted to `Int`.
>> (On your system, it seems that `Int` is `Int64`.)
>
>   OK, so indeed, there's modular arithmetic for the non-native
>   representations as well. It's just that literals are neither
>   overloaded nor implicitly converted. I see, thanks.

Some mathematical constants (e.g. pi) automatically adapt to the
environment where they are used, defaulting to Float64. Literals,
however, do not -- they have a specific type.
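
For instance (in current Julia, where such constants have type `Irrational`):

```julia
x32 = Float32(1.5) + pi   # pi adapts to the Float32 operand
x64 = 1.5 + pi            # defaults to Float64
println(typeof(x32), " ", typeof(x64))   # Float32 Float64
```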

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] elementwise operators

2016-04-05 Thread Erik Schnetter
On Tue, Apr 5, 2016 at 11:23 AM, Didier Verna <did...@didierverna.net> wrote:
>
>   I understand that some operators have elementwise versions, when
>   prefixed with a dot. I think the manual is missing some dots, like in
>   this paragraph, right ?
>
> "The operator < is intended for array objects; the operation A .< B is
> valid only if A and B have the same dimensions. The operator returns an
> array with boolean entries and with the same dimensions as A and B. Such
> operators are called elementwise; Julia offers a suite of elementwise
> operators: *, +, etc."
>
>   Also, is there a true mechanism for elementwise'ing operators ? Or are
>   those just a collection of built-ins following the same naming
>   convention ?

The true mechanism is the `map` function. You could define

```Julia
(.<)(arrays...) = map(<, arrays...)
```
(but the actual implementation might look different).
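
A quick check of the mechanism; in current Julia the dotted syntax lowers to `broadcast`, which generalizes `map` (a sketch):

```julia
a = [1, 2, 3]
b = [3, 2, 1]
r1 = map(<, a, b)    # elementwise comparison via map
r2 = a .< b          # lowers to broadcast(<, a, b)
println(r1)
```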

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: overflow behavior

2016-04-05 Thread Erik Schnetter
On Tue, Apr 5, 2016 at 10:50 AM, Páll Haraldsson
<pall.haralds...@gmail.com> wrote:
> On Tuesday, April 5, 2016 at 2:45:38 PM UTC, Didier Verna wrote:
>>
>>
>>   Hello,
>>
>> the manual says: "In Julia, exceeding the maximum representable value of
>> a given type results in a wraparound behavior:", but that seems to be
>> the case only for the native type and above:
>>
>> julia> typeof(typemax(Int64)  + 1) -> Int64
>> julia> typeof(typemax(Int128) + 1) -> Int128
>>
>> but:
>>
>> julia> typeof(typemax(Int8)   + 1) -> Int64
>> julia> typeof(typemax(Int16)  + 1) -> Int64
>> julia> typeof(typemax(Int32)  + 1) -> Int64
>
>
> Right, the manual could be updated, I guess. The last one gives you Int32
> on 32-bit platforms. This is all intentional. Then there is Int128 and BigInt
> (no modular arithmetic there).

`Int128` does use modular (wrap-around) semantics.

-erik

>> These last 3 results seem inconsistent (and the manual incorrect)
>> because for example, typeof(typemax(Int8)) -> Int8 and not Int64, so no,
>> Julia doesn't do modular arithmetics all the time.
>>
>> --
>> Resistance is futile. You will be jazzimilated.
>>
>> Lisp, Jazz, Aïkido: http://www.didierverna.info


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] overflow behavior

2016-04-05 Thread Erik Schnetter
On Tue, Apr 5, 2016 at 10:45 AM, Didier Verna <did...@didierverna.net> wrote:
>
>   Hello,
>
> the manual says: "In Julia, exceeding the maximum representable value of
> a given type results in a wraparound behavior:", but that seems to be
> the case only for the native type and above:
>
> julia> typeof(typemax(Int64)  + 1) -> Int64
> julia> typeof(typemax(Int128) + 1) -> Int128
>
> but:
>
> julia> typeof(typemax(Int8)   + 1) -> Int64
> julia> typeof(typemax(Int16)  + 1) -> Int64
> julia> typeof(typemax(Int32)  + 1) -> Int64
>
> These last 3 results seem inconsistent (and the manual incorrect)
> because for example, typeof(typemax(Int8)) -> Int8 and not Int64, so no,
> Julia doesn't do modular arithmetics all the time.

The literal `1` has type `Int`. The promotion rules for `Int8` and
`Int` state that, before the addition, `Int8` is converted to `Int`.
(On your system, it seems that `Int` is `Int64`.)

You can avoid this by writing `Int8(1)` instead of `1`; this should
recover the wrap-around behaviour.

Note that this wrap-around exists only for some simple operations (+ -
* abs); e.g. for division, there is an error instead.
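
A demonstration of both behaviours:

```julia
println(typeof(typemax(Int8) + 1))   # the Int8 operand is promoted to Int
println(typemax(Int8) + Int8(1))     # -128: wrap-around within Int8
try
    typemin(Int8) ÷ Int8(-1)         # overflowing division throws instead
catch e
    println(typeof(e))               # DivideError
end
```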

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] prefix / infix syntax bridge

2016-04-05 Thread Erik Schnetter
The syntax without parentheses works only for some operators; it does
not work e.g. for `&`, since `&` is also a prefix operator.

There are also some contexts where parsing is ambiguous without the
parentheses, e.g. expressions such as `[+ + +]`. This could be an
array literal with three functions `[(+) (+) (+)]`, or could be an
addition taking functions as argument `[(+) + (+)]` or `[(+)((+),
(+))]`.

-erik


On Tue, Apr 5, 2016 at 9:13 AM, Didier Verna <did...@didierverna.net> wrote:
>
>   Hi,
>
> the user manual says this (VARIABLES section):
>
> "Operators like + are also valid identifiers, but are parsed
> specially. In some contexts, operators can be used just like variables;
> for example (+) refers to the addition function, and (+) = f will
> reassign it."
>
> This looks like Haskell's bridge between the infix and prefix
> syntax. However, it seems that contrary to what the manual says, you
> don't need the parenthesis, i.e. I can do just x = f. So is there a real
> reason for using (+) ?
>
> Also another question: is it possible, as in Haskell, to define new
> infix operators, or are you restricted[1] to the built-in ones ?
>
> Thanks.
>
>
> Footnotes:
> [1]  by "restricted" here, I don't mean to say that there are too few of
> them; I'm aware the unicode ones are available.
>
> --
> Resistance is futile. You will be jazzimilated.
>
> Lisp, Jazz, Aïkido: http://www.didierverna.info



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] how should I really define const to avoid redefining warning?

2016-04-05 Thread Erik Schnetter
I usually put code into a module, and use `using` to load the module.

After updating a module, I use `workspace()` to remove all previous
definitions in the REPL:

`workspace(); using FunHPC`
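
A sketch of the pattern (module and constant names hypothetical). Note that `workspace()` was removed in later Julia versions; there one restarts the session or uses Revise.jl instead, but the module wrapping itself is unchanged:

```julia
module MyConsts
export ACONST
const ACONST = 1   # redefinitions stay inside the module's namespace
end

using .MyConsts
println(ACONST)   # 1
```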

-erik


On Tue, Apr 5, 2016 at 9:13 AM, K leo <cnbiz...@gmail.com> wrote:
> I have some const defined in various files in the following way:
>
> const AConst = 1
>
> Every time after I modify something in the files (constants remain unchanged)
> and include a file in Julia's REPL, I get a bunch of warnings about
> redefining constants.  Repeating the process many times can make Julia
> respond slowly.
>
> What can I do to avoid these warnings?



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] List comprehension for 1 <= i < j <= 4

2016-04-03 Thread Erik Schnetter
Nothing has been set in stone, but one proposed syntax is

```Julia
l = collect((i, j) for j in range(i+1, 5) for i in range(1, 5))
```

i.e. using parentheses instead of brackets, exchanging the order of
`i` and `j`, and using an explicit call to `collect` to convert the
generator into an array. Without that call, you get a generator, which
is more efficient since it doesn't have to allocate memory -- compare
`range` and `xrange` in Python 2. In most circumstances, you would be
able to use the generator directly instead of having to convert it to
an array, and then you don't need the call to `collect`.

-erik



On Sun, Apr 3, 2016 at 1:25 PM, Jonatan Pallesen <jonatan@gmail.com> wrote:
> Performance is not the main factor. More importantly I don't think the vcat
> solution looks very neat. I'm curious about what the proposed generator
> syntax is.



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] List comprehension for 1 <= i < j <= 4

2016-04-03 Thread Erik Schnetter
On Sun, Apr 3, 2016 at 4:12 AM, Jonatan Pallesen <jonatan@gmail.com> wrote:
> I want to make a list comprehension for 1 <= i < j <= 4. It can be done with
> a for loop like this:
>
> l = []
> for i in 1:4, j in i+1:4
> push!(l, (i,j))
> end
>
> In python a list comprehension can be made like this
>
> l = [(i, j) for i in range(1, 5) for j in range(i+1, 5)]
>
> But this Julia code gives an error, saying i is not defined:
>
> l = [(i,j) for i in 1:4, j in i+1:4]
>
> Is there another approach? I feel like 4 lines to do this is excessive

Since the elements are tuples of integers, you probably want to
initialize the output array with a concrete element type, e.g.
`Tuple{Int,Int}[]`. This will improve performance.

In Python, a comprehension yields a one-dimensional array, hence
ragged loop boundaries are possible. In Julia, this syntax gives you a
two-dimensional array, hence the loop ranges cannot depend on each
other. The reason things were designed that way is that, in Python, a
list (a one-dimensional array) is the "natural" data structure,
whereas in Julia, a multi-dimensional array is "natural".

Currently, the for loop you showed is the best way to do this. There
is a discussion about generators and appropriate syntax on the
development mailing list, but this isn't ready for testing yet.

The `vcat` solution suggested below is slower since it creates several
intermediate arrays and then concatenates them; this might or might
not be important to you.
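
The typed-accumulator version of the original loop (element type `Tuple{Int,Int}`, since the pushed elements are tuples):

```julia
l = Tuple{Int,Int}[]          # concrete element type, no Any boxing
for i in 1:4, j in i+1:4
    push!(l, (i, j))
end
println(length(l))   # 6
```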

-erik

-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Erik Schnetter
Julia's type hierarchies use single inheritance only. Thus inheritance
mechanisms that work fine in C++ can't always be made to work in
Julia. There are discussions on introducing concepts (traits), but
that's still in a prototype stage. (I'm using traits
<https://github.com/mauro3/Traits.jl> in a project of mine; this works
fine, only the error messages are sometimes confusing.)

If you have `Name1{S <: Name2} <: S`, and you need to stick with
single inheritance, then you essentially define both

`Name1{S} <: Name1`
`Name1{S} <: S`

(here `Name1` is the abstract type that Julia introduces for every
parameterized type; the unparameterized `Name1` is different from `Name1{S}`).

Since Julia uses single inheritance, you also need to have either
`Name1 <: S` or `S <: Name1`. Both would be strange, since they would
introduce a relation between `Name1` and `S`, but you haven't
specified `S`.

-erik


On Sat, Apr 2, 2016 at 3:36 PM, Scott Lundberg <scottmlundb...@gmail.com> wrote:
> Thanks Erik, though I am not quite clear how it would break things.
>
> As for the real goal:
>
> Name2 is also templated such as Name2{A} and Name2{B}. There are lots of
> methods that dispatch depending on Name2{A} or Name2{B}.
>
> I want to add a subtype of Name2 (which I call Name1) that still correctly
> dispatches for those methods. Another way I would like to do it is:
>
> abstract Name1{Name2{T <: AvsB} <: Name2{T}
>
> but of course that also does not work.
>
> Thanks!
>
> On Saturday, April 2, 2016 at 11:55:47 AM UTC-7, Erik Schnetter wrote:
>>
>> If you define an abstract type `Name1{S}`, then Julia automatically
>> defines another abstract type `Name1` with `Name1{S} <: Name1`.
>>
>> This would break if you tried to insert `S` into the type hierarchy as
>> well.
>>
>> -erik
>>
>> On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken <tomas@gmail.com> wrote:
>> > If it's abstract, what is the purpose of the type parameter? In other
>> > words,
>> > what is the ultimate goal you're reaching for?
>> >
>> > //T
>> >
>> > On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote:
>> >>
>> >> You can define:
>> >>
>> >> abstract Name1{S <: Name2} <: Name3{S}
>> >>
>> >> but I want to define:
>> >>
>> >> abstract Name1{S <: Name2} <: S
>> >>
>> >> However the second line does not work. Is there some other way to
>> >> accomplish the same thing? Thanks!
>>
>>
>>
>> --
>> Erik Schnetter <schn...@gmail.com>
>> http://www.perimeterinstitute.ca/personal/eschnetter/



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Erik Schnetter
If you define an abstract type `Name1{S}`, then Julia automatically
defines another abstract type `Name1` with `Name1{S} <: Name1`.

This would break if you tried to insert `S` into the type hierarchy as well.

-erik

On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken <tomas.lyc...@gmail.com> wrote:
> If it's abstract, what is the purpose of the type parameter? In other words,
> what is the ultimate goal you're reaching for?
>
> //T
>
> On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote:
>>
>> You can define:
>>
>> abstract Name1{S <: Name2} <: Name3{S}
>>
>> but I want to define:
>>
>> abstract Name1{S <: Name2} <: S
>>
>> However the second line does not work. Is there some other way to
>> accomplish the same thing? Thanks!



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Fast coroutines

2016-04-02 Thread Erik Schnetter
Can you put a number on the task-creation and task-switching overhead?

For example, if everything runs on a single core, task switching could
(theoretically) happen within 100 ns on a modern CPU. Whether that is
the case depends on the tradeoffs and choices made during design and
implementation, hence the question.

-erik

On Sat, Apr 2, 2016 at 8:28 AM, Yichao Yu <yyc1...@gmail.com> wrote:
> On Fri, Apr 1, 2016 at 3:45 PM, Carl <eschv...@gmail.com> wrote:
>> Hello,
>>
>> Julia is great.  I'm new fan and trying to figure out how to write simple
>> coroutines that are fast.  So far I have this:
>>
>>
>> function task_iter(vec)
>> @task begin
>> for v in vec
>> produce(v)
>> end
>> end
>> end
>>
>> function task_sum(vec)
>> s = 0.0
>> for val in task_iter(vec)
>> s += val
>> end
>> return s
>> end
>>
>> function normal_sum(vec)
>> s = 0.0
>> for val in vec
>> s += val
>> end
>> return s
>> end
>>
>> values = rand(10^6)
>> task_sum(values)
>> normal_sum(values)
>>
>> @time task_sum(values)
>> @time normal_sum(values)
>>
>>
>>
>>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>>   0.006656 seconds (5 allocations: 176 bytes)
>>
>> I was hoping to be able to get the speeds to match (as close as possible).
>> I've read the performance tips and I can't find anything I'm doing wrong.  I
>> also tried out 0.5 thinking that maybe it would be faster with supporting
>> fast anonymous functions but it was slower (1.5 seconds).
>
> Tasks are expensive and are basically designed for IO.
> ~1000x slowdown for this simple stuff is expected.
>
>>
>>
>> Carl
>>



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] string version of expressions not parseable

2016-04-01 Thread Erik Schnetter
Vishesh

I would use `serialize` and `deserialize` to ensure that you get
exactly the same expression back that you wrote out initially. There
are several caveats that would be very difficult to get correct in a
string representation, e.g. if objects are aliased (if the same object
appears multiple times in the expression).

-erik


On Fri, Apr 1, 2016 at 7:36 PM,  <vish...@stanford.edu> wrote:
> basically, if I do:
>
>> e = Expr(:quote, :(:x)) # (quoting a symbol)
>> show(e)
>
:($(Expr(:quote, :(:x))))
>
>> print(e)
>
> $(Expr(:quote, :(:x)))
>
>> string(e)
>
> "\$(Expr(:quote, :(:x)))"
>
>> parse(string(e))
>
:($(Expr(:$, :(Expr(:quote, $(Expr(:quote, :(:x))))))))
>
>
> that's not the same expression as represented by the string. Then when I go
> to eval it (which should still work, semantically speaking)
>
> ERROR: unsupported or misplaced expression $
>
>
> Is there a way around this? I'd like to make an expression a string, print
> it to a file, then slurp it up later and eval it.
>
> At the very least, there should be a consistent way to stringify and
> reparse/eval expression objects, even if they look ugly like
> :($(Expr(:quote, :(:x)))). I'd hate to have to reimplement the whole
> printing of every expression ever in order to make this work (there doesn't
> seem to be a way to extend a method, like print or show, to only work on
> expressions with a particular head?)
>
>
>
> Vishesh



-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/

