Re: [julia-users] the state of GUI toolkits?

2015-04-30 Thread Andreas Lobinger
Hello colleagues,

On Tuesday, April 28, 2015 at 1:11:17 PM UTC+2, Tim Holy wrote:

 Also, on 0.3 Gtk loads just fine for me. Not sure why it's not working on 
 PkgEvaluator. 


That it works for you and (presumably) others involved in Gtk.jl development I 
already learned from the discussion in the local issue. But that makes it even 
harder to find the real problem: PackageEvaluator.jl, for example, is reporting an 
issue with Gtk itself.

I'm pinning my hopes on the debugger.


[julia-users] Import statements

2015-04-30 Thread Bill Hart
We used to have the following in our code

import Base: convert, promote_rule, show, string, parseint, serialize,
             deserialize, base, bin, dec, oct, hex, gcd, gcdx, lcm, div, size,
             zero, one, sign, hash, abs, deepcopy, rem, mod, isequal

But in the latest update something seems to have changed. We only seem to
get it to work if we explicitly import Base.convert, Base.promote_rule,
Base.show, etc.

The documentation now says:

The import keyword supports all the same syntax as using, but only operates
on a single name at a time.

However, I don't understand what that means. It sits right after an example
where `using Base:` is used, as per the `import Base:` example I give above.
I don't know if it is telling me that I can or can't use that syntax for
import.

Could someone clarify what the changes are?
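
For concreteness, here is a minimal sketch of the two forms I mean (`Foo` is a
made-up type, just for illustration):

```julia
# Colon form: several names from Base in one statement.
import Base: convert, show

# Dot form: one name per statement.
import Base.promote_rule

type Foo end                      # 0.3-era syntax for a concrete type

convert(::Type{Int}, ::Foo) = 0   # extending an imported Base function works
show(io::IO, ::Foo) = print(io, "Foo()")
```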

Bill.


Re: [julia-users] performance of functions

2015-04-30 Thread Tim Holy
Check the SO post again; there are now many suggested workarounds, some of 
which are not a big hit to readability.

And no, this won't be fixed in 0.4.

--Tim

On Wednesday, April 29, 2015 08:57:46 PM Sebastian Good wrote:
 I ran into this issue today
 (http://stackoverflow.com/questions/26173635/performance-penalty-using-anonymous-function-in-julia) whereby functions -- whether anonymous or not --
 generate lots of garbage when called indirectly. That is when using type
 signatures like
 clever_function(f::Function).
 
 Is making this code more efficient in scope for the 0.4 release? The
 workaround -- hand-coding types then using type dispatch -- is quite
 effective, but definitely a big hit to readability.
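
The hand-coded-types workaround reads roughly like this (a sketch with invented
names, not the actual code from the SO post):

```julia
# Hypothetical functor types standing in for anonymous functions.
immutable Square end
immutable Cube end

evaluate(::Square, x) = x * x
evaluate(::Cube, x)   = x * x * x

# Parametrizing on F makes the compiler specialize per functor type,
# avoiding the garbage generated by an untyped f::Function call.
function total{F}(f::F, xs)
    s = zero(eltype(xs))
    for x in xs
        s += evaluate(f, x)
    end
    s
end

total(Square(), [1, 2, 3])   # 14
total(Cube(), [1, 2, 3])     # 36
```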



Re: [julia-users] Re: Limiting time of multicore run and related cleanup

2015-04-30 Thread Amit Murthy
`interrupt(workers())` is the equivalent of sending a SIGINT to the
workers. The tasks which are consuming 100% CPU are interrupted and they
terminate with an InterruptException.

All processes are still in a running state after this.
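
As a sketch of what that termination looks like from inside a task (the helper
below is hypothetical, not part of Base):

```julia
# A task wrapping its work like this terminates cleanly when
# interrupt(workers()) delivers an InterruptException to it.
function guarded_work(f)
    try
        f()
    catch err
        isa(err, InterruptException) || rethrow(err)
        :interrupted      # the task ends; the worker process keeps running
    end
end

guarded_work(() -> throw(InterruptException()))  # returns :interrupted
guarded_work(() -> 42)                           # returns 42
```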

On Thu, Apr 30, 2015 at 10:02 AM, Pavel pavel.paramo...@gmail.com wrote:

 The task-option is interesting. Let's say there are 8 CPU cores. Julia's
 nprocs() returns 9 when started with `julia -p 8`; that is to be expected.
 All 8 cores are 100% loaded during the pmap call. Would
 `interrupt(workers())` leave one running?

 On Wednesday, April 29, 2015 at 8:48:15 PM UTC-7, Amit Murthy wrote:

 Your solution seems reasonable enough.

 Another solution : You could schedule a task in your julia code which
 will interrupt the workers after a timeout
 @schedule begin
     sleep(600)
     if pmap_not_complete
         interrupt(workers())
     end
 end

 Start this task before executing the pmap call.

 Note that this will work only for additional processes created on the
 local machine. For SSH workers, `interrupt` is a message sent to the remote
 workers, which will be unable to process it if the main thread is
 computation bound.



 On Thu, Apr 30, 2015 at 9:08 AM, Pavel pavel.p...@gmail.com wrote:

 Here is my current bash-script (same timeout-way due to the lack of
 alternative suggestions):

 timeout 600 julia -p $(nproc) juliacode.jl > results.log 2>&1
 killall -9 -v julia > cleanup.log 2>&1

 Does that seem reasonable? Perhaps Linux experts may think of some
 scenarios where this would not be sufficient as far as the
 runaway/non-responding process cleanup?



 On Thursday, April 2, 2015 at 12:15:33 PM UTC-7, Pavel wrote:

 What would be a good way to limit the total runtime of a multicore
 process managed by pmap?

 I have pmap processing a collection of optimization runs (with fminbox)
 and most of the time everything runs smoothly. On occasion however 1-2 out
 of e.g. 8 CPUs take too long to complete one optimization, and
 fminbox/conj. grad. does not have a way to limit run time as recently
 discussed:

 http://julia-programming-language.2336112.n4.nabble.com/fminbox-getting-quot-stuck-quot-td12163.html

 To deal with this in a crude way, at the moment I call Julia from a
 shell (bash) script with timeout:

 timeout 600 julia -p 8 juliacode.jl

 When doing this, is there anything to help find and stop
 zombie-processes (if any) after timeout forces a multicore pmap run to
 terminate? Anything within Julia related to how the processes are spawned?
 Any alternatives to shell timeout? I know NLopt has a time limit option but
 that is not implemented within Julia (but in the underlying C-library).





Re: [julia-users] Re: Limiting time of multicore run and related cleanup

2015-04-30 Thread Amit Murthy
 `interrupt` will work for local workers as well as SSH ones. I had
mentioned otherwise above.

On Thu, Apr 30, 2015 at 12:08 PM, Amit Murthy amit.mur...@gmail.com wrote:

 `interrupt(workers())` is the equivalent of sending a SIGINT to the
 workers. The tasks which are consuming 100% CPU are interrupted and they
 terminate with an InterruptException.

 All processes are still in a running state after this.

 On Thu, Apr 30, 2015 at 10:02 AM, Pavel pavel.paramo...@gmail.com wrote:

 The task-option is interesting. Let's say there are 8 CPU cores. Julia's
 nprocs() returns 9 when started with `julia -p 8`; that is to be expected.
 All 8 cores are 100% loaded during the pmap call. Would
 `interrupt(workers())` leave one running?

 On Wednesday, April 29, 2015 at 8:48:15 PM UTC-7, Amit Murthy wrote:

 Your solution seems reasonable enough.

 Another solution : You could schedule a task in your julia code which
 will interrupt the workers after a timeout
 @schedule begin
     sleep(600)
     if pmap_not_complete
         interrupt(workers())
     end
 end

 Start this task before executing the pmap call.

 Note that this will work only for additional processes created on the
 local machine. For SSH workers, `interrupt` is a message sent to the remote
 workers, which will be unable to process it if the main thread is
 computation bound.



 On Thu, Apr 30, 2015 at 9:08 AM, Pavel pavel.p...@gmail.com wrote:

 Here is my current bash-script (same timeout-way due to the lack of
 alternative suggestions):

 timeout 600 julia -p $(nproc) juliacode.jl > results.log 2>&1
 killall -9 -v julia > cleanup.log 2>&1

 Does that seem reasonable? Perhaps Linux experts may think of some
 scenarios where this would not be sufficient as far as the
 runaway/non-responding process cleanup?



 On Thursday, April 2, 2015 at 12:15:33 PM UTC-7, Pavel wrote:

 What would be a good way to limit the total runtime of a multicore
 process managed by pmap?

 I have pmap processing a collection of optimization runs (with
 fminbox) and most of the time everything runs smoothly. On occasion 
 however
 1-2 out of e.g. 8 CPUs take too long to complete one optimization, and
 fminbox/conj. grad. does not have a way to limit run time as recently
 discussed:

 http://julia-programming-language.2336112.n4.nabble.com/fminbox-getting-quot-stuck-quot-td12163.html

 To deal with this in a crude way, at the moment I call Julia from a
 shell (bash) script with timeout:

 timeout 600 julia -p 8 juliacode.jl

 When doing this, is there anything to help find and stop
 zombie-processes (if any) after timeout forces a multicore pmap run to
 terminate? Anything within Julia related to how the processes are spawned?
 Any alternatives to shell timeout? I know NLopt has a time limit option 
 but
 that is not implemented within Julia (but in the underlying C-library).






Re: [julia-users] the state of GUI toolkits?

2015-04-30 Thread Andreas Lobinger
Hello colleague,

On Tuesday, April 28, 2015 at 11:11:27 AM UTC+2, Tim Holy wrote:

 Here's one vote for Gtk. Currently it might need some love to fix up for 
 recent 
 julia changes---presumably you (or someone) could fix it up in a couple of 
 hours. 


I already spent some time looking into Gtk.jl, but as an outsider I'm somehow 
missing the storyline of this package. I, for example, never understood why 
calling GtkWindow has to be done via a macro, or how the Gtk event loop is 
spliced with the Julia event loop. These all might be good ideas, but for 
sure it's more than just a C-API adaptation layer.

So for a real contribution I need to do more reverse engineering first.




Re: [julia-users] Building sysimg error with multi-core start

2015-04-30 Thread René Donner
Hi, 

I had to fiddle with the precompilation myself, initially hitting similar 
issues.

Are you starting julia simply with `julia`, or do you specify the path to the 
system image using the -J parameter? In case you use -J you need to use the 
exeflags parameter of addprocs to specify this for the workers as well; 
otherwise they will load the default sys.ji.
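
A sketch of what I mean (the paths here are hypothetical):

```julia
# Assumed path to a custom image produced by build_sysimg -- adjust to
# wherever yours actually lives.
sysimg = "/home/juser/sys.ji"

# Start julia itself with:  julia -J /home/juser/sys.ji
# and then give the workers the same flag, or they load the default image:
addprocs(4, exeflags = `-J $sysimg`)
```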

The locations for sys.{ji,o} depend on whether you ran make install, are 
using julia right after make, or installed a julia binary. Perhaps run 
`find . -name 'sys.*'`, note where the default sys.ji is and where the one you 
get from build_sysimg gets saved, and make sure they are where you expect them 
to be; in my case I happened to generate them in (to me) unexpected places.

I tried to simplify the build process in 
https://github.com/rened/SystemImageBuilder.jl. By default, it will precompile 
all installed packages except for ones with known problems or dependencies on 
such non-precompilable packages. Inclusion / exclusion of packages can be 
configured. Work in progress, feedback would be much appreciated! (disclaimer: 
in the worst case be prepared to reinstall julia / rerun make clean; make)

Rene




On 30.04.2015 at 06:49, Pavel pavel.paramo...@gmail.com wrote:

 I am building a custom Julia image (v0.3) using
 https://github.com/JuliaLang/JuliaBox/blob/master/docker/build_sysimg.jl
 by calling
 build_sysimg(joinpath(dirname(Sys.dlpath("libjulia")), "sys"), "native", 
 "/home/juser/jimg.jl", force=true)
 
 A number of modules are listed in jimg.jl as `using Package` for 
 pre-compilation.
 
 The image builds without errors and starts fine, and package load time is 
 much shorter after pre-compilation as expected. However, when Julia is 
 started with more than one CPU core, e.g. `julia -p 2`, the following error 
 appears at startup:
 
 ERROR: `convert` has no method matching convert(::Type{Dict{K,V}}, 
 ::Parameters)
  in create_worker at multi.jl:1067
  in start_cluster_workers at multi.jl:1028
  in addprocs at multi.jl:1237
  in process_options at ./client.jl:236
  in _start at ./client.jl:354
 
 Any suggestions? Thanks.
 



[julia-users] Performance of Distributed Arrays

2015-04-30 Thread Ángel de Vicente
Hello all,

I'm trying to understand the sort of performance that we can get in 
parallel with Julia. DistributedArrays look very tempting, but my first try 
gives me a hopeless performance. As a test code, I got it from the slides 
(pages 75-80) at 
http://www.csd.uwo.ca/~moreno/cs2101a_moreno/Parallel_computing_with_Julia.pdf

The code, which just defines two functions is available at: 
https://bitbucket.org/snippets/angelv/5kb4 
and also attached to this message for convenience 

When I run it in my 8-core laptop in serial or in parallel (see below for 
output of the different runs), I see no better performance in parallel 
(though I see a huge increase in the allocated memory in parallel). With 
version 0.4-dev drand is not defined (well, actually the whole Distributed 
Arrays section is gone from the documentation for 0.4-dev).

Any pointers on what can be done to improve this appreciated (this has to 
be the simplest possible parallel program, with no communication at all, so 
we should be able to get near perfect scalability here).

Thanks a lot,
Ángel de Vicente

==

angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -q
julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
0.3.7
elapsed time: 2.376680715 seconds (80 bytes allocated)

==

angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -p 4 -q
julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
0.3.7
elapsed time: 2.510426469 seconds (20011756 bytes allocated)
4-element Array{Any,1}:
 nothing
 nothing
 nothing
 nothing



angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ /home/angelv/JULIA-DEV/julia/julia -q
julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
0.4.0-dev+4572
elapsed time: 2.33095253 seconds (283 kB allocated)

=

angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ /home/angelv/JULIA-DEV/julia/julia -q -p 4
julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
0.4.0-dev+4572
ERROR: UndefVarError: drand not defined





simulation.jl
Description: Binary data


Re: [julia-users] Re: Scope of variables in julia

2015-04-30 Thread Zheng Wendell
@David, your code is the same as case 4.

@Tom, no, in case 3, all *z*'s have local scope (of the *for* loop), even the
*z* in function *g()*. Ref: the issue cited by @mauro

On Thu, Apr 30, 2015 at 1:15 AM, Tom Breloff t...@breloff.com wrote:

 This solves the problem because z is now local to g.  I don't think it
 gets at the heart of the issue though, which is the strangeness of z
 seeming to have global scope (outside of the for loop) when inside the g()
 definition, but the other z's have scope local to the for loop.  Is that
 intended? Or a bug? I'm not sure.


 On Wednesday, April 29, 2015 at 5:26:32 PM UTC-4, David Gold wrote:

 How about:

 julia> for i=1:2
            if i>=2; println(z); end
            z="Hi"
            g(z)= println(z)
            g(z)
        end
 Hi
 Hi
 Hi


 Does this just fall under case 4, or does it change your analysis?

 On Wednesday, April 29, 2015 at 11:20:17 AM UTC-4, Sisyphuss wrote:

 Please see these four versions:
 Version 1:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
 end
 No error

 Version 2:
 for i=1:2
     z="Hi"
     g()= println(z)
     g()
 end
 No error

 Version 3:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
     g()= println(z)
     g()
 end
 ERROR: z not defined

 Version 4:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
     g(x)= println(x)
     g(z)
 end
 No error

 My guess is: Version 1 treats `z` in the same way as a local variable
 (let's call it the *local way*). Version 2 treats `z` in the same way as a
 global variable although it's in a local scope (let's call it the *global
 way*). Version 3 treats it simultaneously in the local/global way, thus
 introducing an error. Version 4 is a workaround and also a better
 programming habit.

 If my guess is right, I further conclude that the main dilemma of Julia
 is that it depends on the scope (local/global scope) to decide the
 treatment of variables (local/global way); however, when scopes are nested,
 the problem appears.




 On Wednesday, April 29, 2015 at 4:53:12 PM UTC+2, Sisyphuss wrote:

 Here's a variant version of your code:
 ```
 for i=1:10
     if i>=2; println(z); end
     z=2
     g()=(global z; 2z)
     println(z)
 end
 ```
 If `z` is defined global, there will not be any error. I would have
 liked to use `nonlocal`, but there isn't this keyword in Julia.
 In my personal opinion, the magic in your original code is that when
 the compiler sees the definition of `g()`, it will try to do some *amazing*
 things on the compilation of `z`.


 On Wednesday, April 29, 2015 at 4:37:24 PM UTC+2, Pooya wrote:

 That's exactly my question: Why should defining a function inside the
 loop mess with the variables in the loop?

 On Wednesday, April 29, 2015 at 10:33:03 AM UTC-4, Sisyphuss wrote:

 Another *miracle* here is that if you delete g()=2z, there will be
 no error!


 On Wednesday, April 29, 2015 at 3:53:23 PM UTC+2, Pooya wrote:

 Can someone explain why this is the desired behavior? z is defined
 until the end of first iteration in the for loop, but not in the 
 beginning
 of the next:

 julia> for i=1:10
            if i>=2; println(z); end
            z=2
            g()=2z
            println(z)
        end
 2
 ERROR: z not defined
  in anonymous at no file:2




[julia-users] initializing arrays of tuples in julia 0.4

2015-04-30 Thread Jim Garrison
In Julia 0.3, I can create an empty array of tuples as follows:

julia> (Int,Int)[]
0-element Array{(Int64,Int64),1}

If I want, I can even initialize it with a comprehension.

julia> (Int,Int)[(z, 2z) for z in 1:3]
3-element Array{(Int64,Int64),1}:
 (1,2)
 (2,4)
 (3,6)

On 0.4 (since the merging of #10380), both of these fail.  How can I get my 
code to work again?  Also, is there any planned deprecation path that would 
allow this code to continue working, albeit with a deprecation warning?


Re: [julia-users] initializing arrays of tuples in julia 0.4

2015-04-30 Thread Mauro
This works:

julia> Tuple{Int,Int}[(z, 2z) for z in 1:3]
3-element Array{Tuple{Int64,Int64},1}:
 (1,2)
 (2,4)
 (3,6)

because now the type of (2,3) is Tuple{Int,Int} and not (Int,Int) (which
is just a tuple of datatypes).
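
In other words (a quick check, 0.4 syntax):

```julia
# A tuple of values has a Tuple{...} type ...
@assert isa((2, 3), Tuple{Int,Int})
# ... while a literal (Int, Int) is itself a tuple whose elements are types.
@assert isa((Int, Int), Tuple{DataType,DataType})

# So the typed-comprehension spelling on 0.4 is:
v = Tuple{Int,Int}[(z, 2z) for z in 1:3]
@assert v == [(1,2), (2,4), (3,6)]
```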

On Thu, 2015-04-30 at 11:53, Jim Garrison j...@garrison.cc wrote:
 In Julia 0.3, I can create an empty array of tuples as follows:

 julia> (Int,Int)[]
 0-element Array{(Int64,Int64),1}

 If I want, I can even initialize it with a comprehension.

 julia> (Int,Int)[(z, 2z) for z in 1:3]
 3-element Array{(Int64,Int64),1}:
  (1,2)
  (2,4)
  (3,6)

 On 0.4 (since the merging of #10380), both of these fail.  How can I get my 
 code to work again?  Also, is there any planned deprecation path that would 
 allow this code to continue working, albeit with a deprecation warning?



Re: [julia-users] Possible bug in @spawnat or fetch?

2015-04-30 Thread Sam Kaplan
Thanks Amit!  In the GitHub issue that you posted I'll add a link back 
to this thread.

On Wednesday, April 29, 2015 at 10:40:54 PM UTC-5, Amit Murthy wrote:

 Simpler case.

 julia> function test2()
            @async x = 1
            x = 2
        end

 test2 (generic function with 1 method)


 julia> test2()

 ERROR: UndefVarError: x not defined

  in test2 at none:2


 Issue created : https://github.com/JuliaLang/julia/issues/11062


 On Thu, Apr 30, 2015 at 8:55 AM, Amit Murthy amit@gmail.com wrote:

 Yes, this looks like a bug. In fact the below causes an error:

 function test2()
     ref = @spawnat workers()[1] begin
         x = 1
     end
     x = 2
 end

 Can you open an issue on github?


 On Thu, Apr 30, 2015 at 7:07 AM, Sam Kaplan sam.t@gmail.com wrote:

 Hello,

 I have the following code example:
 addprocs(1)

 function test1()
 ref = @spawnat workers()[1] begin
 x = 1
 end
 y = fetch(ref)
 @show y
 end

 function test2()
 ref = @spawnat workers()[1] begin
 x = 1
 end
 x = fetch(ref)
 @show x
 end

 function main()
 test1()
 test2()
 end

 main()

 giving the following output:
 y = 1
 ERROR: x not defined
  in test2 at /tmp/test.jl:12
  in main at /tmp/test.jl:21
  in include at /usr/bin/../lib64/julia/sys.so
  in include_from_node1 at ./loading.jl:128
  in process_options at /usr/bin/../lib64/julia/sys.so
  in _start at /usr/bin/../lib64/julia/sys.so
 while loading /tmp/test.jl, in expression starting on line 24


 Is this a valid error in the code or a bug in Julia?  The error seems to 
 be caused when the variable that is local to the `@spawnat` block has its 
 name mirrored by the variable being assigned to by the `fetch` call.

 For reference, I am running version 0.3.6:
                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" for help.
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.3.6
   _/ |\__'_|_|_|\__'_|  |
  |__/                   |  x86_64-redhat-linux


 Thanks!

 Sam





[julia-users] Re: initializing arrays of tuples in julia 0.4

2015-04-30 Thread Alex
You can use Compat.jl :

julia> Compat.@compat Tuple{Int,Int}[(z, 2z) for z in 1:3]
3-element Array{(Int64,Int64),1}:
 (1,2)
 (2,4)
 (3,6)

(this is for 0.3; without @compat I get Array{(Any...,),1})

Best,

Alex.

On Thursday, 30 April 2015 11:53:32 UTC+2, Jim Garrison wrote:

 In Julia 0.3, I can create an empty array of tuples as follows:

 julia> (Int,Int)[]
 0-element Array{(Int64,Int64),1}

 If I want, I can even initialize it with a comprehension.

 julia> (Int,Int)[(z, 2z) for z in 1:3]
 3-element Array{(Int64,Int64),1}:
  (1,2)
  (2,4)
  (3,6)

 On 0.4 (since the merging of #10380), both of these fail.  How can I get 
 my code to work again?  Also, is there any planned deprecation path that 
 would allow this code to continue working, albeit with a deprecation 
 warning?



[julia-users] Build error: could not open "flisp.boot".

2015-04-30 Thread Ismael VC


Hello everyone!

I’m trying to build Julia at PythonAnywhere https://www.pythonanywhere.com, 
and the build fails because of:

CC src/flisp/flisp.o
CC src/flisp/builtins.o
CC src/flisp/string.o
CC src/flisp/equalhash.o
CC src/flisp/table.o
CC src/flisp/iostream.o
CC src/flisp/julia_extensions.o
CC src/flisp/flmain.o
LINK src/flisp/libflisp.a
LINK src/flisp/flisp
FLISP src/julia_flisp.boot
fatal error:
(io-error file: could not open "flisp.boot")
make[2]: *** [julia_flisp.boot] Error 1
make[2]: *** Waiting for unfinished jobs
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2
10:07 ~/julia (release-0.3)$

*Note:* I did `make cleanall` before trying again.

These are the VM specs:

10:08 ~/julia (release-0.3)$ cat /etc/issue
Ubuntu 14.04.2 LTS \n \l

10:15 ~/julia (release-0.3)$ uname -a
Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28 
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

I found an issue that had the same problem at some point, but it was in 
BSD, so I believe it won’t be relevant, yet it’s here:

   - 
   https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ 

Thanks in advance, cheers!


Re: [julia-users] Re: Scope of variables in julia

2015-04-30 Thread Zheng Wendell
1) Response about the scope
The following example in the issue cited by @mauro is an advanced issue
about the scope, and somewhat related to the case here:
```
foo1=1
let
foo2=1
f2() = (foo1=6; foo2=6)
@assert f2()==6
@assert foo1==1 # hard scope
@assert foo2==6 # soft scope
end
```
Here `foo1` is global, and is shadowed inside `f2()` (hard scope); `foo2` is
local (to the let block), and is updated by `f2()` (soft scope).

2) Response about the original code and the variant Version 3
I think it is a bug, and one associated with compiler optimization.
Could someone file an issue on GitHub?
Since it is deeply related to the design (the origin of the high
performance) of Julia, I don't think it will be easily solved.



On Thu, Apr 30, 2015 at 1:05 PM, Zheng Wendell zhengwend...@gmail.com
wrote:

 @David, your code is the same as case 4.

 @Tom, no, in case 3, all *z*'s have local scope (of the *for* loop), even the
 *z* in function *g()*. Ref: the issue cited by @mauro

 On Thu, Apr 30, 2015 at 1:15 AM, Tom Breloff t...@breloff.com wrote:

 This solves the problem because z is now local to g.  I don't think it
 gets at the heart of the issue though, which is the strangeness of z
 seeming to have global scope (outside of the for loop) when inside the g()
 definition, but the other z's have scope local to the for loop.  Is that
 intended? Or a bug? I'm not sure.


 On Wednesday, April 29, 2015 at 5:26:32 PM UTC-4, David Gold wrote:

 How about:

 julia> for i=1:2
            if i>=2; println(z); end
            z="Hi"
            g(z)= println(z)
            g(z)
        end
 Hi
 Hi
 Hi


 Does this just fall under case 4, or does it change your analysis?

 On Wednesday, April 29, 2015 at 11:20:17 AM UTC-4, Sisyphuss wrote:

 Please see these four versions:
 Version 1:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
 end
 No error

 Version 2:
 for i=1:2
     z="Hi"
     g()= println(z)
     g()
 end
 No error

 Version 3:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
     g()= println(z)
     g()
 end
 ERROR: z not defined

 Version 4:
 for i=1:2
     if i>=2; println(z); end
     z="Hi"
     g(x)= println(x)
     g(z)
 end
 No error

 My guess is: Version 1 treats `z` in the same way as a local variable
 (let's call it the *local way*). Version 2 treats `z` in the same way as a
 global variable although it's in a local scope (let's call it the *global
 way*). Version 3 treats it simultaneously in the local/global way, thus
 introducing an error. Version 4 is a workaround and also a better
 programming habit.

 If my guess is right, I further conclude that the main dilemma of Julia
 is that it depends on the scope (local/global scope) to decide the
 treatment of variables (local/global way); however, when scopes are nested,
 the problem appears.




 On Wednesday, April 29, 2015 at 4:53:12 PM UTC+2, Sisyphuss wrote:

 Here's a variant version of your code:
 ```
 for i=1:10
     if i>=2; println(z); end
     z=2
     g()=(global z; 2z)
     println(z)
 end
 ```
 If `z` is defined global, there will not be any error. I would have
 liked to use `nonlocal`, but there isn't this keyword in Julia.
 In my personal opinion, the magic in your original code is that when
 the compiler sees the definition of `g()`, it will try to do some *amazing*
 things on the compilation of `z`.


 On Wednesday, April 29, 2015 at 4:37:24 PM UTC+2, Pooya wrote:

 That's exactly my question: Why should defining a function inside the
 loop mess with the variables in the loop?

 On Wednesday, April 29, 2015 at 10:33:03 AM UTC-4, Sisyphuss wrote:

 Another *miracle* here is that if you delete g()=2z, there will
 be no error!


 On Wednesday, April 29, 2015 at 3:53:23 PM UTC+2, Pooya wrote:

 Can someone explain why this is the desired behavior? z is defined
 until the end of first iteration in the for loop, but not in the 
 beginning
 of the next:

 julia> for i=1:10
            if i>=2; println(z); end
            z=2
            g()=2z
            println(z)
        end
 2
 ERROR: z not defined
  in anonymous at no file:2





[julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-30 Thread Ángel de Vicente
Hi,

On Wednesday, April 29, 2015 at 5:03:05 PM UTC+1, Viral Shah wrote:

 You may see some better performance with julia 0.4-dev. The other thing to 
 do that is easy is to start julia with the -O option that enables some more 
 optimizations in the JIT, that may or may not help.


thanks for the tip. I downloaded and compiled version 0.4, but for this 
code it performs not significantly differently from 0.3.7, either with or 
without the -O option. (By the way, I thought that the option would be as 
per traditional compilers, -O2, -O3, etc., but apparently the only accepted 
option is plain -O?)

Cheers,
Ángel de Vicente


[julia-users] Re: Performance of Distributed Arrays

2015-04-30 Thread Jake Bolewski
DistributedArray performance is pretty bad.  The reason for removing them 
from base was to spur their development.  All I can say at this time is 
that we are actively working on making their performance better.

For every parallel program you have implicit serial overhead (this is 
especially true with multiprocessing).  The fraction of serial work to 
parallel work determines your potential parallel speedup.  The parallel 
work / serial overhead in this case is really bad, so I don't think your 
observation is really surprising.  If this is on a shared-memory machine I 
would try using SharedArrays, as the serial communication overhead will be 
lower and the potential parallel speedup much higher.  DistributedArrays 
only really make sense if they are in fact distributed over multiple 
machines. 
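
A minimal SharedArray sketch (0.3/0.4-era constructor; assumes the workers live
on the same machine as the master):

```julia
addprocs(2)                      # local workers share memory with the master

S = SharedArray(Float64, 1000)   # backed by shared memory, no copying
@sync @parallel for i in 1:length(S)
    S[i] = i * i                 # each worker fills its own chunk in place
end
```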

On Thursday, April 30, 2015 at 9:18:32 AM UTC-4, Alex wrote:

 Hi,

 I can't say anything regarding the performance of Distributed Arrays. 
 However, note that they have been relocated from Base to a separate 
 package: https://github.com/JuliaParallel/DistributedArrays.jl which 
 should work with 0.4-dev.

 Best,

 Alex.


 On Thursday, 30 April 2015 12:10:47 UTC+2, Ángel de Vicente wrote:

 Hello all,

 I'm trying to understand the sort of performance that we can get in 
 Parallel with Julia. DistributedArrays look very tempting, but my first try 
 gives me a hopeless performance. As a test code, I got it from the slides 
 (pages 75-80) at 

 http://www.csd.uwo.ca/~moreno/cs2101a_moreno/Parallel_computing_with_Julia.pdf

 The code, which just defines two functions is available at: 
 https://bitbucket.org/snippets/angelv/5kb4 
 and also attached to this message for convenience 

 When I run it in my 8-core laptop in serial or in parallel (see below for 
 output of the different runs), I see no better performance in parallel 
 (though I see a huge increase in the allocated memory in parallel). With 
 version 0.4-dev drand is not defined (well, actually the whole Distributed 
 Arrays section is gone from the documentation for 0.4-dev).

 Any pointers on what can be done to improve this appreciated (this has to 
 be the simplest possible parallel program, with no communication at all, so 
 we should be able to get near perfect scalability here).

 Thanks a lot,
 Ángel de Vicente

 ==

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
 0.3.7
 elapsed time: 2.376680715 seconds (80 bytes allocated)

 ==

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -p 4 -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
 0.3.7
 elapsed time: 2.510426469 seconds (20011756 bytes allocated)
 4-element Array{Any,1}:
  nothing
  nothing
  nothing
  nothing

 

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ /home/angelv/JULIA-DEV/julia/julia -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
 0.4.0-dev+4572
 elapsed time: 2.33095253 seconds (283 kB allocated)

 =

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ /home/angelv/JULIA-DEV/julia/julia -q -p 4
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
 0.4.0-dev+4572
 ERROR: UndefVarError: drand not defined





Re: [julia-users] performance of functions

2015-04-30 Thread Tim Holy
Didn't realize it needed updating, so thanks for the bug report. I poked 
around a bit, and I agree it's not entirely straightforward. I'll try to get 
to it soon.

--Tim

On Thursday, April 30, 2015 10:21:20 AM Sebastian Good wrote:
 @anon is a nice piece of functionality but translating it to work
 post-tupocalypse turns out to be more than I can currently grok! Tuples of
 types aren’t types anymore so the mechanics of the @generated functions
 require some changing. Wish I could help; any hints? On April 30, 2015 at
 5:30:57 AM, Tim Holy (tim.h...@gmail.com) wrote:
 
 Check the SO post again; there are now many suggested workarounds, some of
 which are not a big hit to readability.
 
 And no, this won't be fixed in 0.4.
 
 --Tim
 
 On Wednesday, April 29, 2015 08:57:46 PM Sebastian Good wrote:
  I ran into this issue today
  (http://stackoverflow.com/questions/26173635/performance-penalty-using-anonymous-function-in-julia) whereby functions -- whether anonymous or not
  -- generate lots of garbage when called indirectly. That is when using
  type signatures like
  clever_function(f::Function).
  
  Is making this code more efficient in scope for the 0.4 release? The
  workaround -- hand-coding types then using type dispatch -- is quite
  effective, but definitely a big hit to readability.



[julia-users] how to dispatch on an instance being a datatype

2015-04-30 Thread Tamas Papp
This is a toy problem that came up in the context of something larger,
reduced to be simple so that I can ask about it more easily.

Suppose I want to implement the function

is_instanceof_datatype(x) = is(typeof(x),DataType)

using dispatch: a default method that is

is_instanceof_datatype(x) = false

and some other method which only gets called when x is an instance of a
DataType:

is_instanceof_datatype{ ... }(x::T) = true # how to dispatch

but I don't know how to do the latter, hence the 

The context is that I want to write a method that, for instances of
DataTypes, returns the slots in a given order, but for other values it
does something else, and I don't know how to do this.

Best,

Tamas
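
Not part of the original message, but a minimal sketch of one way to do both things at once, assuming `fieldnames` is the intended way to get the slots (declaration order):

```julia
# Sketch: dispatch on the argument itself being a DataType, then use
# fieldnames to recover the slots in declaration order.
is_instanceof_datatype(x::DataType) = true
is_instanceof_datatype(x) = false

slots_in_order(T::DataType) = fieldnames(T)  # e.g. Complex gives :re, :im
slots_in_order(x) = ()                       # fallback for ordinary values
```

With this, `slots_in_order(Complex{Float64})` returns the field names while `slots_in_order(5)` hits the fallback.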


[julia-users] Re: Performance of Distributed Arrays

2015-04-30 Thread Jake Bolewski
Also, you want to map(fetch, refs) not pmap. 

With that I get better speedup (still not great, but at least >2x with 8 
processors)

julia> N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
elapsed time: 1.822478028 seconds (233 kB allocated)

julia> N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
elapsed time: 0.617520182 seconds (573 kB allocated)
8-element Array{Any,1}:
 nothing
 nothing
 nothing
 nothing
 nothing
 nothing
 nothing
 nothing
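
A hedged sketch (Julia 0.3/0.4-era syntax, not code from the thread) of what the SharedArray variant Jake suggests could look like; the update step is a stand-in, not the actual contents of simulation.jl:

```julia
# Hypothetical sketch: run per-column updates over a SharedArray so worker
# processes mutate shared memory in place instead of serializing chunks.
A = SharedArray(Float64, 3, N)       # visible to every local worker process
@sync @parallel for j = 1:N
    for t = 1:T
        A[1, j] += 0.01 * A[2, j]    # stand-in for the real update step
    end
end
```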

On Thursday, April 30, 2015 at 10:50:23 AM UTC-4, Jake Bolewski wrote:

 DistributedArray performance is pretty bad.  The reason for removing them 
 from base was to spur their development.  All I can say at this time is 
 that we are actively working on making their performance better.

 For every parallel program you have implicit serial overhead (this is 
 especially true with multiprocessing).  The fraction of serial work to 
 parallel work determines your potential parallel speedup.  The parallel 
 work / serial overhead in this case is really bad, so I don't think your 
 observation is really surprising.  If this is on a shared memory machine I 
 would try using SharedArrays as the serial communication overhead will be 
 lower, and the potential parallel speedup much higher.  DistributedArrays 
 only really make sense if they are in fact distributed over multiple 
 machines. 

 On Thursday, April 30, 2015 at 9:18:32 AM UTC-4, Alex wrote:

 Hi,

 I can't say anything regarding the performance of Distributed Arrays. 
 However, note that they have been relocated from Base to a separate 
 package: https://github.com/JuliaParallel/DistributedArrays.jl which 
 should work with 0.4-dev.

 Best,

 Alex.


 On Thursday, 30 April 2015 12:10:47 UTC+2, Ángel de Vicente wrote:

 Hello all,

 I'm trying to understand the sort of performance that we can get in 
 Parallel with Julia. DistributedArrays look very tempting, but my first try 
 gives me a hopeless performance. As a test code, I got it from the slides 
 (pages 75-80) at 

 http://www.csd.uwo.ca/~moreno/cs2101a_moreno/Parallel_computing_with_Julia.pdf

 The code, which just defines two functions is available at: 
 https://bitbucket.org/snippets/angelv/5kb4 
 and also attached to this message for convenience 

 When I run it in my 8-core laptop in serial or in parallel (see below 
 for output of the different runs), I see no better performance in parallel 
 (though I see a huge increase in the allocated memory in parallel). With 
 version 0.4-dev drand is not defined (well, actually the whole Distributed 
 Arrays section is gone from the documentation for 0.4-dev).

 Any pointers on what can be done to improve this appreciated (this has 
 to be the simplest possible parallel program, with no communication at all, 
 so we should be able to get near perfect scalability here).

 Thanks a lot,
 Ángel de Vicente

 ==

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
 0.3.7
 elapsed time: 2.376680715 seconds (80 bytes allocated)

 ==

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ julia -p 4 -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
 0.3.7
 elapsed time: 2.510426469 seconds (20011756 bytes allocated)
 4-element Array{Any,1}:
  nothing
  nothing
  nothing
  nothing

 

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ 
 /home/angelv/JULIA-DEV/julia/julia -q
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;A=rand(3,N);@time SimulationSerial(A,N,T)
 0.4.0-dev+4572
 elapsed time: 2.33095253 seconds (283 kB allocated)

 =

 angelv@pilas:~/mhdsolver-julia/Misc/Julia_Parallel$ 
 /home/angelv/JULIA-DEV/julia/julia -q -p 4
 julia> println(VERSION);require("simulation.jl");N=100;T=1000;dA=drand(3,N);@time SimulationParallel(dA,N,T)
 0.4.0-dev+4572
 ERROR: UndefVarError: drand not defined





[julia-users] Re: how to dispatch on an instance being a datatype

2015-04-30 Thread Tom Breloff
Is this what you're looking for?


julia> yyy(x::DataType) = true
yyy (generic function with 1 method)


julia> yyy(x) = false
yyy (generic function with 2 methods)


julia> yyy(Int)
true


julia> yyy(5)
false




On Thursday, April 30, 2015 at 11:04:02 AM UTC-4, Tamas Papp wrote:

 This is a toy problem that came up in the context of something larger, 
 reduced to be simple so that I can ask about it more easily. 

 Suppose I want to implement the function 

 is_instanceof_datatype(x) = is(typeof(x),DataType) 

 using dispatch: a default method that is 

 is_instanceof_datatype(x) = false 

 and some other method which only gets called when x is an instance of a 
 DataType: 

 is_instanceof_datatype{ ... }(x::T) = true # how to dispatch 

 but I don't know how to do the latter, hence the  

 The context is that I want to write a method that, for instances of 
 DataTypes, returns the slots in a given order, but for other values it 
 does something else, and I don't know how to do this. 

 Best, 

 Tamas 



Re: [julia-users] Re: Scope of variables in julia

2015-04-30 Thread Sisyphuss
I filed an issue: https://github.com/JuliaLang/julia/issues/11065




Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-30 Thread Angel de Vicente
Viral Shah vi...@mayin.org writes:
 You may see some better performance with julia 0.4-dev. The other
 thing to do that is easy is to start julia with the -O option that
 enables some more optimizations in the JIT, that may or may not help.

Thanks for the tip. The -O option works only in 0.4, right?

-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


[julia-users] PyPlot plots not showing in IJulia

2015-04-30 Thread axsk
Using IJulia i get the following result:

In [1]:

using PyPlot

x = linspace(0,2*pi,1000)

y = sin(3*x + 4*cos(2*x));

plot(x, y, color="red", linewidth=2.0, linestyle="--")

Out[1]:

1-element Array{Any,1}:
 PyObject <matplotlib.lines.Line2D object at 0x7056650>


Unfortunately, no plot image is shown.
I am using the current Julia nightly, the master versions of PyPlot and 
PyCall, Python 2.7.3 and IPython 3.1.0 on Linux x86_64.

In case anyone is on these newest versions: Does it work for you?

Do you have any ideas what I could try to get it working?
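
One thing worth trying (an editor's guess, not a confirmed fix): force the current figure to be displayed explicitly, which sidesteps cases where the automatic inline display hook isn't firing:

```julia
using PyPlot
x = linspace(0, 2*pi, 1000)
plot(x, sin(3*x + 4*cos(2*x)))
display(gcf())   # explicitly push the current matplotlib figure to IJulia
```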


Re: [julia-users] Build error: could not open \"flisp.boot\".

2015-04-30 Thread Isaiah Norton
I am guessing there is some other error further back. Try `make -j1` so it
fails immediately.

On Thu, Apr 30, 2015 at 6:22 AM, Ismael VC ismael.vc1...@gmail.com wrote:

 Hello everyone!

 I’m trying to build Julia at PythonAnywhere
 https://www.pythonanywhere.com, and the build fails because of:

 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 CC src/flisp/flmain.o
 LINK src/flisp/libflisp.a
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error "file: could not open \"flisp.boot\"")
 make[2]: *** [julia_flisp.boot] Error 1
 make[2]: *** Waiting for unfinished jobs
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2
 10:07 ~/julia (release-0.3)$

 *Note:* I did make cleanall before trying again.

  These are the VM specs:

 10:08 ~/julia (release-0.3)$ cat /etc/issue
 Ubuntu 14.04.2 LTS \n \l

 10:15 ~/julia (release-0.3)$ uname -a
 Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28 
 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 I found an issue that had the same problem at some point, but it was in
 BSD, so I believe it won’t be relevant, yet it’s here:

-
https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ

 Thanks in advance, cheers!
 ​



Re: [julia-users] performance of functions

2015-04-30 Thread Sebastian Good
@anon is a nice piece of functionality but translating it to work 
post-tupocalypse turns out to be more than I can currently grok! Tuples of 
types aren’t types anymore so the mechanics of the @generated functions require 
some changing. Wish I could help; any hints?
On April 30, 2015 at 5:30:57 AM, Tim Holy (tim.h...@gmail.com) wrote:

Check the SO post again; there are now many suggested workarounds, some of  
which are not a big hit to readability.  

And no, this won't be fixed in 0.4.  

--Tim  

On Wednesday, April 29, 2015 08:57:46 PM Sebastian Good wrote:  
 I ran into this issue today  
 (http://stackoverflow.com/questions/26173635/performance-penalty-using-anony  
 mous-function-in-julia) whereby functions -- whether anonymous or not --  
 generate lots of garbage when called indirectly. That is when using type  
 signatures like  
 clever_function(f::Function).  
  
 Is making this code more efficient in scope for the 0.4 release? The  
 workaround -- hand-coding types then using type dispatch -- is quite  
 effective, but definitely a big hit to readability.  
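
For reference, a rough sketch (0.3/0.4-era syntax; the names here are invented, not taken from `@anon` or the SO answers) of the hand-coded "call type" workaround being discussed:

```julia
# Encode each function in its own type, so dispatch at the call site is
# concrete and avoids a Function-typed argument that forces dynamic dispatch.
immutable Square end                 # `struct Square end` on later Julia
evaluate(::Square, x) = x * x        # one method per "function object"

# The caller is parameterized on the op's *type*, so it specializes fully:
apply_all{F}(op::F, xs::Vector{Float64}) = Float64[evaluate(op, x) for x in xs]

apply_all(Square(), [1.0, 2.0, 3.0])   # -> [1.0, 4.0, 9.0]
```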



Re: [julia-users] performance puzzle: linux vs osx

2015-04-30 Thread Isaiah Norton

 40 days old master


The recent tuple changes (#10380) were merged after that and represent a
very substantial change, so comparing top-of-trunk to a 40-day old version
isn't very useful. I would suggest to try a comparably recent (ideally
identical) build under the VM before drawing any conclusions.

Also, this could very well be a test case for a performance regression from
#10380.

On Wed, Apr 29, 2015 at 4:50 PM, Spencer Lyon spencerly...@gmail.com
wrote:

 I ran into strange performance issues in an algorithm I have been working
 on.

 I have a test case as well as some timing and profiler results at this
 gist: https://gist.github.com/spencerlyon2/d21d6368a2ccbf6f1e7b


 I summarize the issues here. Consider the following code (note I am
 defining myexp because of this issue:
 https://github.com/JuliaLang/julia/issues/11048. It turns out that on OS
 X, calling apple's libm gives a substantial speed up -- e.g. I'm doing
 everything I can to give OS X a chance to win here)

 the code:

 @osx? (
     begin
         myexp(x::Float64) = ccall((:exp, :libm), Float64, (Float64,), x)
         # myexp(x::Float64) = exp(x)
     end
     : begin
         myexp(x::Float64) = exp(x)
     end
 )

 function test_func(data::Matrix, points::Matrix)
 # extract input dimensions
 n, d = size(data)
 n_points = size(points, 1)

 # transpose data and points to access columns at a time
 data = data'
 points = points'

 # Define constants
 hbar = n^(-1.0/(d+4.0))
 hbar2 = hbar^2
 constant = 1.0/(n*hbar^(d) * (2π)^(d/2))

 # allocate space
 density = Array(Float64, n_points)
 Di_min = Array(Float64, n_points)

 # apply formula (2)
 for i=1:n_points  # loop over all points
 dens_i = 0.0
 min_di2 = Inf
 for j=1:n_points  # loop over all other points
 d_i2_j = 0.0
 for k=1:d  # loop over d
 @inbounds d_i2_j += ((points[k, i] - data[k, j])^2)
 end
 dens_i += myexp(-0.5*d_i2_j/hbar2)
  if i != j && d_i2_j < min_di2
 min_di2 = d_i2_j
 end
 end
 density[i] = constant * dens_i
 Di_min[i] = sqrt(min_di2)
 end

 return density, Di_min
 end



 To test the performance of this code on linux and OS X, I started up a
 docker image with a recent (40 days old master) julia from my OS X machine
 and compared the timing against running it on OS X directly (with 1 days
 old julia). I found that for `data, points = randn(9500, 2)` the linux
 version takes about 2.6 seconds to run `test_func`, whereas on OS X it
 takes about 9.3.

 I can't explain this large (almost 4x) performance hit that I get from
 running the code on the native OS vs the virtual machine.

 More details (profiler results, timing stats, self-contained runnable
 example) in the gist:
 https://gist.github.com/spencerlyon2/d21d6368a2ccbf6f1e7b






Re: [julia-users] type stable leading_zeros etc.

2015-04-30 Thread Sebastian Good
And I guess as a matter of practicality, a vectorized leading_zeros instruction 
should leave its results in the same-sized registers as it started, or it would 
only be possible on Int64s, though I don’t know if LLVM is doing that just yet.
On April 30, 2015 at 2:36:53 PM, Sebastian Good 
(sebast...@palladiumconsulting.com) wrote:

Existing compiler intrinsics work this way (__lzcnt, __lzcnt64, __lzcnt16), It 
came up for me in the following line of code in StreamingStats

    ρ(s::Uint32) = uint32(uint32(leading_zeros(s)) + 0x0001)

The outer uint32 is no longer necessary in v0.4 because the addition no longer 
expands 32-bit operands to a 64-bit result. The inner one is still necessary 
because leading_zeros does. I imagine there are many little functions like this 
that should probably act the same way.

I ran into it in my own code for converting IBM/370 floating point to IEEE

    local norml::UInt32 = leading_zeros(fr)
    fr <<= norml
    ex = (ex << 2) - 130 - norml

Where I had to convert norml to a UInt32 to preserve type stability in the bit 
shifting operation below, where I’m working with 32 bit numbers. Leaving this 
convert out causes the usual massive slowdown in speed when converting tens of 
millions of numbers.

Arguments I can make for making them have the same type — recognizing this is 
quite subjective!

- If you’re doing something with leading_zeros, you’re aware you’re working 
directly in an integer register in binary code; you’re trying to do something 
clever and you’ll want type stability
- No binary number could have more leading zeros than it itself could represent 
- The existing intrinsics are written this way
- Because I ran into it twice and wished it were that way both times! :-D



On April 30, 2015 at 2:16:26 PM, Stefan Karpinski (ste...@karpinski.org) wrote:

I'm not sure why the result of leading_zeros should be of the same type as the 
argument. What's the use case?
 

Re: [julia-users] Re: Scope of variables in julia

2015-04-30 Thread Mauro
On Thu, 2015-04-30 at 18:37, Tom Breloff t...@breloff.com wrote:
 I actually wonder if the bug is that Versions 1 and 4 *should* produce an 
 error, but they secretly work.  In your version 1:

 for i=1:2
     if i >= 2; println(z); end
     z = "Hi"
 end

 z should be local to each iteration of the loop, so I think the second pass 
 should produce an undefined error. See from the manual:

 for loops and comprehensions have a special additional behavior: any new 
 variables introduced in their body scopes are freshly allocated for each 
 loop iteration. Therefore these constructs are similar to while loops with 
 let blocks inside:


 Am I missing something?

Jeff says no: https://github.com/JuliaLang/julia/issues/11065

This works, but shouldn't:

for i=1:9
if i==1
j=1
else
j +=1
end
@show j
end

and this is what it should be equivalent to (and throws):

i= 1
while i < 10
let j
if i==1
j=1
else
j +=1
end
@show j
end
i+=1
end


 On Thursday, April 30, 2015 at 9:40:29 AM UTC-4, Sisyphuss wrote:

 I filed an issue: https://github.com/JuliaLang/julia/issues/11065






Re: [julia-users] Re: Scope of variables in julia

2015-04-30 Thread Tom Breloff
I actually wonder if the bug is that Versions 1 and 4 *should* produce an 
error, but they secretly work.  In your version 1:

for i=1:2
    if i >= 2; println(z); end
    z = "Hi"
end

z should be local to each iteration of the loop, so I think the second pass 
should produce an undefined error. See from the manual:

for loops and comprehensions have a special additional behavior: any new 
 variables introduced in their body scopes are freshly allocated for each 
 loop iteration. Therefore these constructs are similar to while loops with 
 let blocks inside:


Am I missing something?

On Thursday, April 30, 2015 at 9:40:29 AM UTC-4, Sisyphuss wrote:

 I filed an issue: https://github.com/JuliaLang/julia/issues/11065




Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-30 Thread Patrick O'Leary
On Thursday, April 30, 2015 at 11:29:15 AM UTC-5, Ángel de Vicente wrote:

 Angel de Vicente angel.vicente.garr...@gmail.com writes: 

  Viral Shah vi...@mayin.org writes: 
  You may see some better performance with julia 0.4-dev. The other 
  thing to do that is easy is to start julia with the -O option that 
  enables some more optimizations in the JIT, that may or may not help. 
  
  Thanks for the tip. The -O option works only in 0.4, right? 

 sorry about the repeated messages (was configuring access to julia-users 
 from a non-gmail account, thought it was not working and resent it, 
 while apparently I just had to wait a few hours for the message to get 
 through) 


Don't worry about it; I didn't realize that another version had been posted 
when I approved this message. But that email address should be cleared to 
post now. Sorry!


Re: [julia-users] Build error: could not open \"flisp.boot\".

2015-04-30 Thread Ismael VC


If anyone is interested in checking out the console session, just send me a 
message to ismael.vc1...@gmail.com, thanks!

El jueves, 30 de abril de 2015, 11:59:13 (UTC-5), Ismael VC escribió:

Thank you very much Isaiah, I've just done `make clean && make` again and I 
 still get the same error:

 ```
 Making install in SYM
 CC src/jltypes.o
 CC src/gf.o
 CC src/support/hashing.o
 CC src/support/timefuncs.o
 CC src/support/ptrhash.o
 CC src/support/operators.o
 CC src/support/utf8.o
 CC src/support/ios.o
 CC src/support/htable.o
 CC src/support/bitvector.o
 CC src/support/int2str.o
 CC src/support/libsupportinit.o
 CC src/support/arraylist.o
 CC src/support/strtod.o
 LINK src/support/libsupport.a
 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 LINK src/flisp/libflisp.a
 CC src/flisp/flmain.o
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
 (io-error "file: could not open \"flisp.boot\"")
 make[2]: *** [julia_flisp.boot] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 16:18 ~/julia (release-0.3)$  ls -l src/flisp/flisp.boot 
 -rw-rw-r-- 1 MexLavu registered_users 35369 Apr 30 06:58 
 src/flisp/flisp.boot 
 ```

 PythonAnywhere lets us to share a console session, if you or anyone else 
  is interested in this error, I just need your email address to send you the 
 link.


 El jueves, 30 de abril de 2015, 9:22:45 (UTC-5), Isaiah escribió:

 I am guessing there is some other error further back. Try `make -j1` so 
 it fails immediately.

 On Thu, Apr 30, 2015 at 6:22 AM, Ismael VC ismael...@gmail.com wrote:

 Hello everyone!

 I’m trying to build Julia at PythonAnywhere 
 https://www.pythonanywhere.com, and the build fails because of:

 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 CC src/flisp/flmain.o
 LINK src/flisp/libflisp.a
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error "file: could not open \"flisp.boot\"")
 make[2]: *** [julia_flisp.boot] Error 1
 make[2]: *** Waiting for unfinished jobs
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2
 10:07 ~/julia (release-0.3)$

 *Note:* I did make cleanall before trying again.

  These are the VM specs:

 10:08 ~/julia (release-0.3)$ cat /etc/issue
 Ubuntu 14.04.2 LTS \n \l

 10:15 ~/julia (release-0.3)$ uname -a
 Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 
 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 I found an issue that had the same problem at some point, but it was in 
 BSD, so I believe it won’t be relevant, yet it’s here:

- 
https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ 

 Thanks in advance, cheers!
 ​


  ​


Re: [julia-users] Build error: could not open \"flisp.boot\".

2015-04-30 Thread Ismael VC
Thank you very much Isaiah, I've just done `make clean && make` again and I 
still get the same error:

```
Making install in SYM
CC src/jltypes.o
CC src/gf.o
CC src/support/hashing.o
CC src/support/timefuncs.o
CC src/support/ptrhash.o
CC src/support/operators.o
CC src/support/utf8.o
CC src/support/ios.o
CC src/support/htable.o
CC src/support/bitvector.o
CC src/support/int2str.o
CC src/support/libsupportinit.o
CC src/support/arraylist.o
CC src/support/strtod.o
LINK src/support/libsupport.a
CC src/flisp/flisp.o
CC src/flisp/builtins.o
CC src/flisp/string.o
CC src/flisp/equalhash.o
CC src/flisp/table.o
CC src/flisp/iostream.o
CC src/flisp/julia_extensions.o
LINK src/flisp/libflisp.a
CC src/flisp/flmain.o
LINK src/flisp/flisp
FLISP src/julia_flisp.boot
fatal error:
(io-error "file: could not open \"flisp.boot\"")
make[2]: *** [julia_flisp.boot] Error 1
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2

16:18 ~/julia (release-0.3)$  ls -l src/flisp/flisp.boot 
-rw-rw-r-- 1 MexLavu registered_users 35369 Apr 30 06:58 
src/flisp/flisp.boot 
```

PythonAnywhere lets us to share a console session, if you or anyone else is 
interested in this error, I just need your email address to send you the 
link.


El jueves, 30 de abril de 2015, 9:22:45 (UTC-5), Isaiah escribió:

 I am guessing there is some other error further back. Try `make -j1` so it 
 fails immediately.

 On Thu, Apr 30, 2015 at 6:22 AM, Ismael VC ismael...@gmail.com 
 javascript: wrote:

 Hello everyone!

 I’m trying to build Julia at PythonAnywhere 
 https://www.pythonanywhere.com, and the build fails because of:

 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 CC src/flisp/flmain.o
 LINK src/flisp/libflisp.a
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error "file: could not open \"flisp.boot\"")
 make[2]: *** [julia_flisp.boot] Error 1
 make[2]: *** Waiting for unfinished jobs
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2
 10:07 ~/julia (release-0.3)$

 *Note:* I did make cleanall before trying again.

  These are the VM specs:

 10:08 ~/julia (release-0.3)$ cat /etc/issue
 Ubuntu 14.04.2 LTS \n \l

 10:15 ~/julia (release-0.3)$ uname -a
 Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 
 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 I found an issue that had the same problem at some point, but it was in 
 BSD, so I believe it won’t be relevant, yet it’s here:

- 
https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ 

 Thanks in advance, cheers!
 ​




Re: [julia-users] type stable leading_zeros etc.

2015-04-30 Thread Stefan Karpinski
I'm not sure why the result of leading_zeros should be of the same type as
the argument. What's the use case?

On Thu, Apr 30, 2015 at 11:23 AM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:

 Recent 0.4 changes made expressions like Int32(1) + Int32(2) type stable,
 i.e. returning Int32 instead of Int64 as they did previously (on a 64-bit
 system, anyway). However leading_zeros seems to always return an Int64. I
 wonder if it makes sense to convert the result of leading_zeros to the
 same type as its argument, at least for the base Integer bittypes?

 FWIW, this came up while updating StreamStats.jl for v0.4, where a few
 functions had to take care to convert intermediate results back to Int32.



[julia-users] Re: PyPlot plots not showing in IJulia

2015-04-30 Thread Nils Gudat
Copy-pasting your code produces the plot for me as expected, have you tried 
Pkg.update()?


Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Tom Breloff
Can anyone point me in the right direction of the files/functions in the 
core library where dispatch is handled?  I'd like to explore a little so I 
can make comments that account for the relative ease at implementing some 
of the changes suggested.  

I agree that it would be really nice, in some cases, to auto-merge function 
definitions between namespaces (database connects are very simple OO 
example).   However, if 2 different modules define foo(x::Float64, y::Int), 
then there should be an error if they're both exported (or if not an error, 
then at least force qualified access??)   Now in my mind, the tricky part 
comes when a package writer defines:

module MyModule
export foo
type MyType end
foo(x::MyType) = ...
foo(x) = ...
end


... and then writes other parts of the package depending on foo(5) to do 
something very specific.  This may work perfectly until the user calls 
"using SomeOtherModule" which in turn exported foo(x::Int).  If there's an 
auto-merge between the modules, then foo(x::MyModule.MyType), foo(x::Any), 
and foo(Int) all exist and are valid calls.

*If* the auto-merge changes the calling behavior for foo *within module 
MyModule*, then we have a big problem.  I.e. we have something like:

internalmethod() = foo(5)


that is defined within MyModule... If internalmethod() now maps to 
SomeOtherModule.foo(x::Int)... all the related internal code within 
MyModule will likely break.  However, if internalmethod() still maps to 
MyModule.foo(), then I think we're safe.


So I guess the question: can we auto-merge the foo methods *in user space 
only *(i.e. in the REPL where someone called "using MyModule, 
SomeOtherModule"), and keep calls within modules un-merged (unless of 
course you call "using SomeOtherPackage" within MyModule... after which 
it's my responsibility to know about and call the correct foo).

Are there other pitfalls to auto-merging in user-space-only?  I can't 
comment on how hard this is to implement, but I don't foresee how it breaks 
dispatch or any of the other powerful concepts.
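
To make the collision concrete, here is a small sketch (module and method names invented for illustration) of the case being debated:

```julia
# Two unrelated modules export `foo` methods with overlapping signatures.
module ModA
export foo
foo(x::Int) = "A's foo"
end

module ModB
export foo
foo(x::Int) = "B's foo"
end

using .ModA, .ModB    # plain `using ModA, ModB` on 0.3/0.4

ModA.foo(1)           # "A's foo" -- qualified access always works
ModB.foo(1)           # "B's foo"
# foo(1)              # unqualified `foo` is ambiguous and raises an error
```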



On Thursday, April 30, 2015 at 12:19:07 PM UTC-4, Michael Francis wrote:

 My goal is not to remove namespaces, quite the opposite, for types a 
 namespace is an elegant solution to resolving the ambiguity between 
 different types of the same name. What I do object to is that functions 
 (which are defined against user defined types) are relegated to being 
 second class citizens in the Julia world unless you are developing in Base. 
 For people in Base the world is great, it all just works. For everybody 
 else you either shoe horn your behavior into one of the Base methods by 
 extending it, or you are forced into qualifying names when you don't need 
 to. 

 1) Didn't that horse already bolt with Base? If Base were subdivided into 
 strict namespaces of functionality then I see this argument, but that isn't 
 the case and everybody would be complaining that they need to type 
 strings.find(string) 
 2) To me that is what multiple dispatch is all about. I am calling a 
 function and I want the run time to decide which implementation to use, if 
 I wanted to have to qualify all calls to a method I'd be far better off in 
 an OO language. 


 On Thursday, April 30, 2015 at 2:15:51 AM UTC-4, Tamas Papp wrote:


 On Thu, Apr 30 2015, Stefan Karpinski ste...@karpinski.org wrote: 

  Function merging has these problems: 
  
 1. It complects name resolution with dispatch – they are no longer 
 orthogonal. 
 2. It makes all bindings from `using` semantically ambiguous – you 
 have 
 no idea what a name means without actually doing a call. 

 IMO orthogonality of name resolution and dispatch should be preserved -- 
 it is a nice property of the language and makes reasoning about code 
 much easier. Many languages have this property, and it has stood the 
 test of time, also in combination with multiple dispatch (Common 
 Lisp). Giving it up would be a huge price to pay for some DWIM feature. 

 Best, 

 Tamas 



Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Mauro
 Can anyone point me in the right direction of the files/functions in the
 core library where dispatch is handled?  I'd like to explore a little so I
 can make comments that account for the relative ease at implementing some
 of the changes suggested.


 Start here:
 https://github.com/JuliaLang/julia/blob/cd455af0e26370a8899c1d7b3d194aacd8c87e9e/src/gf.c#L1655

and this
https://www.youtube.com/watch?v=osdeT-tWjzk

 On Thu, Apr 30, 2015 at 1:11 PM, Tom Breloff t...@breloff.com wrote:

 Can anyone point me in the right direction of the files/functions in the
 core library where dispatch is handled?  I'd like to explore a little so I
 can make comments that account for the relative ease at implementing some
 of the changes suggested.

 I agree that it would be really nice, in some cases, to auto-merge
 function definitions between namespaces (database connects are very simple
 OO example).   However, if 2 different modules define foo(x::Float64,
 y::Int), then there should be an error if they're both exported (or if not
 an error, then at least force qualified access??)   Now in my mind, the
 tricky part comes when a package writer defines:

 module MyModule
 export foo
 type MyType end
 foo(x::MyType) = ...
 foo(x) = ...
 end


 ... and then writes other parts of the package depending on foo(5) to do
 something very specific.  This may work perfectly until the user calls
 "using SomeOtherModule" which in turn exported foo(x::Int).  If there's an
 auto-merge between the modules, then foo(x::MyModule.MyType), foo(x::Any),
 and foo(Int) all exist and are valid calls.

 *If* the auto-merge changes the calling behavior for foo *within module
 MyModule*, then we have a big problem.  I.e. we have something like:

 internalmethod() = foo(5)


 that is defined within MyModule... If internalmethod() now maps to
 SomeOtherModule.foo(x::Int)... all the related internal code within
 MyModule will likely break.  However, if internalmethod() still maps to
 MyModule.foo(), then I think we're safe.


 So I guess the question: can we auto-merge the foo methods *in user space
 only *(i.e. in the REPL where someone called "using MyModule,
 SomeOtherModule"), and keep calls within modules un-merged (unless of
 course you call "using SomeOtherPackage" within MyModule... after which
 it's my responsibility to know about and call the correct foo).

 Are there other pitfalls to auto-merging in user-space-only?  I can't
 comment on how hard this is to implement, but I don't foresee how it breaks
 dispatch or any of the other powerful concepts.



 On Thursday, April 30, 2015 at 12:19:07 PM UTC-4, Michael Francis wrote:

 My goal is not to remove namespaces, quite the opposite, for types a
 namespace is an elegant solution to resolving the ambiguity between
 different types of the same name. What I do object to is that functions
 (which are defined against user defined types) are relegated to being
 second class citizens in the Julia world unless you are developing in Base.
 For people in Base the world is great, it all just works. For everybody
 else you either shoe horn your behavior into one of the Base methods by
 extending it, or you are forced into qualifying names when you don't need
 to.

 1) Didn't that horse already bolt with Base? If Base were subdivided into
 strict namespaces of functionality then I see this argument, but that isn't
 the case and everybody would be complaining that they need to type
 strings.find(string)
 2) To me that is what multiple dispatch is all about. I am calling a
 function and I want the run time to decide which implementation to use, if
 I wanted to have to qualify all calls to a method I'd be far better off in
 an OO language.


 On Thursday, April 30, 2015 at 2:15:51 AM UTC-4, Tamas Papp wrote:


 On Thu, Apr 30 2015, Stefan Karpinski ste...@karpinski.org wrote:

  Function merging has these problems:
 
 1. It complects name resolution with dispatch – they are no longer
 orthogonal.
 2. It makes all bindings from `using` semantically ambiguous – you
 have
 no idea what a name means without actually doing a call.

 IMO orthogonality of name resolution and dispatch should be preserved --
 it is a nice property of the language and makes reasoning about code
 much easier. Many languages have this property, and it has stood the
 test of time, also in combination with multiple dispatch (Common
 Lisp). Giving it up would be a huge price to pay for some DWIM feature.

 Best,

 Tamas





Re: [julia-users] type stable leading_zeros etc.

2015-04-30 Thread Sebastian Good
Existing compiler intrinsics work this way (__lzcnt, __lzcnt64, __lzcnt16). It 
came up for me in the following line of code in StreamingStats

    ρ(s::Uint32) = uint32(uint32(leading_zeros(s)) + 0x0001)

The outer uint32 is no longer necessary in v0.4 because the addition no longer 
expands 32-bit operands to a 64-bit result. The inner one is still necessary 
because leading_zeros does. I imagine there are many little functions like this 
that should probably act the same way.

I ran into in my own code for converting IBM/370 floating points to IEEE

    local norml::UInt32 = leading_zeros(fr)
    fr <<= norml
    ex = (ex << 2) - 130 - norml

Where I had to convert  norml to a UInt32 to preserve type stability in the bit 
shifting operation below, where I’m working with 32 bit numbers. Leaving this 
convert out causes the usual massive slowdown in speed when converting tens of 
millions of numbers.

Arguments I can make for making them have the same type — recognizing this is 
quite subjective!

- If you’re doing something with leading_zeros, you’re aware you’re working 
directly in an integer register in binary code; you’re trying to do something 
clever and you’ll want type stability
- No binary number could have more leading zeros than it itself could represent 
- The existing intrinsics are written this way
- Because I ran into it twice and wished it were that way both times! :-D
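
The width mismatch being described can be seen directly at the REPL; a small 
sketch (written with the capitalized `UInt32`/`UInt32(...)` conversion syntax 
of later Julia versions):

```julia
s = 0x00000001                     # a UInt32
typeof(leading_zeros(s))           # Int (Int64 on a 64-bit machine), not UInt32

# Keeping the whole expression 32-bit therefore needs an explicit conversion:
ρ(s::UInt32) = UInt32(leading_zeros(s)) + 0x00000001
typeof(ρ(s))                       # UInt32
```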



On April 30, 2015 at 2:16:26 PM, Stefan Karpinski (ste...@karpinski.org) wrote:

I'm not sure why the result of leading_zeros should be of the same type as the 
argument. What's the use case?
 

[julia-users] Re: the state of GUI toolkits?

2015-04-30 Thread Max Suster
I too would love to have a Qt5.jl package.  Having such a robust and 
cross-platform GUI interface would make many projects more attractive to 
(non-expert) outsiders coming into Julia. I have been meaning to find time for 
this, but wrapping the whole of Qt5 alone is quite a project. . . Perhaps, if 
several people contributed, however, this might be more realistic in the 
shorter term?


[julia-users] Out-of-memory errors

2015-04-30 Thread Páll Haraldsson

In 0.3.5.

julia> @time sum(rand(10^8))
ERROR: MemoryError()
 in rand at random.jl:123

julia> gc()

julia> @time sum(rand(10^8))
elapsed time: 4.127246913 seconds (80168 bytes allocated, 0.83% gc time)
4.999858681707974e7

julia> gc()

julia> @time sum(rand(10^9)) # ten times more, which I understandably 
can't get around unless I close other programs
ERROR: MemoryError()
 in rand at random.jl:123

julia> @time sum(rand(10^8))
elapsed time: 3.418166239 seconds (80168 bytes allocated, 1.56% gc time)
5.000421281918045e7


$ ps aux |grep julia
palli19981  4.5 21.4 1095364 866536 pts/4  S+   16:57   0:15 julia

$ ps aux |grep julia
palli19981  1.6 20.2 1160900 817812 pts/4  S+   16:57   0:19 julia


Main question:

I know often there isn't much to do if memory has really run out (yes, you 
could catch the exception and do something - what? in say safety critical 
situations). Going by the next try after the gc() it seems the memory had 
not actually run out. I thought if memory is low because of garbage, then 
first a gc() would be forced and only if there really is not enough memory 
then you get the error.

[I guess there is a tiny possibility that the memory available to Julia got 
bigger inbetween.]

I'm looking into this, I guess the above depends on memory being allocated 
in one chunk, maybe not something to be relied on?


[I expect the memory available to julia process (VSZ, not RSS) to just grow 
and never shrink right?]

[The last @time takes less time while doing more gc work. Probably 
coincidence/load from other processes explains it; otherwise it would be odd?]


-- 
Palli.

P.S.

This works:

julia> edit(rand, (Real,))

but strangely not:

julia> edit(rand, (Float64,))
ERROR: could not find function definition
 in functionlocs at reflection.jl:171
 in edit at interactiveutil.jl:56
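
On the catch-the-exception question above, one possibility is a single 
forced collection and retry; a hedged sketch (whether this actually helps is 
platform-dependent, and the error type is `MemoryError` on 0.3 but 
`OutOfMemoryError` in later versions, which is what this sketch assumes; 
`gc()` is spelled `GC.gc()` in current Julia):

```julia
function alloc_with_retry(n::Integer)
    try
        return rand(n)
    catch err
        isa(err, OutOfMemoryError) || rethrow(err)
        gc()              # force a collection, then try once more
        return rand(n)    # may still throw if memory is truly exhausted
    end
end
```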



Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Matt Bauman
On Thursday, April 30, 2015 at 1:11:27 PM UTC-4, Tom Breloff wrote:

 I agree that it would be really nice, in some cases, to auto-merge 
 function definitions between namespaces (database connects are very simple 
 OO example).   However, if 2 different modules define foo(x::Float64, 
 y::Int), then there should be an error if they're both exported (or if not 
 an error, then at least force qualified access??)   Now in my mind, the 
 tricky part comes when a package writer defines:

 module MyModule
 export foo
 type MyType end
 foo(x::MyType) = ...
 foo(x) = ...
 end


I think this is a very interesting discussion, but it all seems to come 
back to a human communication issue.  Each package author must *somehow* 
communicate to both users and other package authors that they mean the same 
thing when they define a function that's intended to be used 
interchangeably.  We can either do this explicitly (e.g., by joining or 
forming an organization like JuliaStats/StatsBase.jl, JuliaDB/DBI.jl, 
JuliaIO/FileIO.jl, etc.), or we can try write code in Julia to help mediate 
this discussion.

The heuristics you're proposing sound interesting (and may even work, 
especially when combined with delaying ambiguity warnings and making them 
errors at an ambiguous call), but I have a hunch that it will take a lot of 
work to implement.  And I'm not sure that it really makes things better. 
 Bad actors can still define their interfaces to prevent others from using 
the same names with multiple dispatch (e.g., by only defining 
`connect(::String)`). Doing the sort of automatic filetype dispatch (like 
FileIO is working towards) still needs *one* place where `load("data.jld")` 
is interpreted and re-dispatched to `load(::FileType{:jld})` that the 
HDF5/JLD package can define its dispatch on.  Finally, one currently 
unsolved area is plotting.  None of the `plot` methods defined in any of 
the various packages are combined into the same function, nor could they 
feasibly do so without massive coordination between the package authors 
(for no real functional gain).  This proposal doesn't really solve that, 
either.  It'll be just as impossible to do `using Gadfly, Winston` and have 
the `plot` function just work.

I hope this doesn't read as overly negative.  I think it's great that folks 
are pushing the edges here and proposing new ideas.  But I'm afraid that 
this won't replace the collaboration needed to get these sorts of 
interfaces working well and interchangeably.


Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-30 Thread Angel de Vicente


Angel de Vicente angel.vicente.garr...@gmail.com writes:

 Viral Shah vi...@mayin.org writes:
 You may see some better performance with julia 0.4-dev. The other
 thing to do that is easy is to start julia with the -O option that
 enables some more optimizations in the JIT, that may or may not help.

 thanks for the tip. the -O option works only in 0.4, right?

sorry about the repeated messages (was configuring access to julia-users
from a non-gmail account, thought it was not working and resent it,
while apparently I just had to wait a few hours for the message to get
through)
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


[julia-users] Re: the state of GUI toolkits?

2015-04-30 Thread Tom Breloff
I would consider contributing, since I 1) would like to use it, and 2) want 
to learn more about integrating with C++.  My problem is that I've never 
seen or used Qt5 before, only Qt4.  So I'd need someone else to take the 
lead.

On Thursday, April 30, 2015 at 12:34:13 PM UTC-4, Max Suster wrote:

 I too would love to have a Qt5.jl package.  Having such a robust and 
 cross-platform GUI interface would make many projects more attractive to 
 (non-expert) outsiders coming into Julia. I have been meaning to find time 
 for this, but wrapping the whole of Qt5 alone is quite a project. . . 
 Perhaps, if several people contributed, however, this might be more 
 realistic in the shorter term? 



Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Isaiah Norton

 Can anyone point me in the right direction of the files/functions in the
 core library where dispatch is handled?  I'd like to explore a little so I
 can make comments that account for the relative ease at implementing some
 of the changes suggested.


Start here:
https://github.com/JuliaLang/julia/blob/cd455af0e26370a8899c1d7b3d194aacd8c87e9e/src/gf.c#L1655

On Thu, Apr 30, 2015 at 1:11 PM, Tom Breloff t...@breloff.com wrote:

 Can anyone point me in the right direction of the files/functions in the
 core library where dispatch is handled?  I'd like to explore a little so I
 can make comments that account for the relative ease at implementing some
 of the changes suggested.

 I agree that it would be really nice, in some cases, to auto-merge
 function definitions between namespaces (database connects are very simple
 OO example).   However, if 2 different modules define foo(x::Float64,
 y::Int), then there should be an error if they're both exported (or if not
 an error, then at least force qualified access??)   Now in my mind, the
 tricky part comes when a package writer defines:

 module MyModule
 export foo
 type MyType end
 foo(x::MyType) = ...
 foo(x) = ...
 end


 ... and then writes other parts of the package depending on foo(5) to do
 something very specific.  This may work perfectly until the user calls
 using SomeOtherModule which in turn exported foo(x::Int).  If there's an
 auto-merge between the modules, then foo(x::MyModule.MyType), foo(x::Any),
 and foo(Int) all exist and are valid calls.

 *If* the auto-merge changes the calling behavior for foo *within module
 MyModule*, then we have a big problem.  I.e. we have something like:

 internalmethod() = foo(5)


 that is defined within MyModule... If internalmethod() now maps to
 SomeOtherModule.foo(x::Int)... all the related internal code within
 MyModule will likely break.  However, if internalmethod() still maps to
 MyModule.foo(), then I think we're safe.


 So I guess the question: can we auto-merge the foo methods *in user space
 only *(i.e. in the REPL where someone called using MyModule,
 SomeOtherModule), and keep calls within modules un-merged (unless of
 course you call using SomeOtherPackage within MyModule... after which
 it's my responsibility to know about and call the correct foo).

 Are there other pitfalls to auto-merging in user-space-only?  I can't
 comment on how hard this is to implement, but I don't foresee how it breaks
 dispatch or any of the other powerful concepts.



 On Thursday, April 30, 2015 at 12:19:07 PM UTC-4, Michael Francis wrote:

 My goal is not to remove namespaces, quite the opposite, for types a
 namespace is an elegant solution to resolving the ambiguity between
 different types of the same name. What I do object to is that functions
 (which are defined against user defined types) are relegated to being
 second class citizens in the Julia world unless you are developing in Base.
 For people in Base the world is great, it all just works. For everybody
 else you either shoe horn your behavior into one of the Base methods by
 extending it, or you are forced into qualifying names when you don't need
 to.

 1) Didn't that horse already bolt with Base. If Base were subdivided into
 strict namespaces of functionality then I see this argument, but that isn't
 the case and everybody would be complaining that they need to type
 strings.find(string)
 2) To me that is what multiple dispatch is all about. I am calling a
 function and I want the run time to decide which implementation to use, if
 I wanted to have to qualify all calls to a method I'd be far better off in
 an OO language.


 On Thursday, April 30, 2015 at 2:15:51 AM UTC-4, Tamas Papp wrote:


 On Thu, Apr 30 2015, Stefan Karpinski ste...@karpinski.org wrote:

  Function merging has these problems:
 
 1. It complects name resolution with dispatch – they are no longer
 orthogonal.
 2. It makes all bindings from `using` semantically ambiguous – you
 have
 no idea what a name means without actually doing a call.

 IMO orthogonality of name resolution and dispatch should be preserved --
 it is a nice property of the language and makes reasoning about code
 much easier. Many languages have this property, and it has stood the
 test of time, also in combination with multiple dispatch (Common
 Lisp). Giving it up would be a huge price to pay for some DWIM feature.

 Best,

 Tamas




Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Tom Breloff
Bookmarked and watching.  Thanks  :)

On Thursday, April 30, 2015 at 1:39:54 PM UTC-4, Mauro wrote:

  Can anyone point me in the right direction of the files/functions in 
 the 
  core library where dispatch is handled?  I'd like to explore a little 
 so I 
  can make comments that account for the relative ease at implementing 
 some 
  of the changes suggested. 
  
  
  Start here: 
  
 https://github.com/JuliaLang/julia/blob/cd455af0e26370a8899c1d7b3d194aacd8c87e9e/src/gf.c#L1655
  

 and this 
 https://www.youtube.com/watch?v=osdeT-tWjzk 

  On Thu, Apr 30, 2015 at 1:11 PM, Tom Breloff t...@breloff.com wrote: 
  
  Can anyone point me in the right direction of the files/functions in 
 the 
  core library where dispatch is handled?  I'd like to explore a little 
 so I 
  can make comments that account for the relative ease at implementing 
 some 
  of the changes suggested. 
  
  I agree that it would be really nice, in some cases, to auto-merge 
  function definitions between namespaces (database connects are very 
 simple 
  OO example).   However, if 2 different modules define foo(x::Float64, 
  y::Int), then there should be an error if they're both exported (or if 
 not 
  an error, then at least force qualified access??)   Now in my mind, the 
  tricky part comes when a package writer defines: 
  
  module MyModule 
  export foo 
  type MyType end 
  foo(x::MyType) = ... 
  foo(x) = ... 
  end 
  
  
  ... and then writes other parts of the package depending on foo(5) to 
 do 
  something very specific.  This may work perfectly until the user calls 
  using SomeOtherModule which in turn exported foo(x::Int).  If there's 
 an 
  auto-merge between the modules, then foo(x::MyModule.MyType), 
 foo(x::Any), 
  and foo(Int) all exist and are valid calls. 
  
  *If* the auto-merge changes the calling behavior for foo *within module 
  MyModule*, then we have a big problem.  I.e. we have something like: 
  
  internalmethod() = foo(5) 
  
  
  that is defined within MyModule... If internalmethod() now maps to 
  SomeOtherModule.foo(x::Int)... all the related internal code within 
  MyModule will likely break.  However, if internalmethod() still maps to 
  MyModule.foo(), then I think we're safe. 
  
  
  So I guess the question: can we auto-merge the foo methods *in user 
 space 
  only *(i.e. in the REPL where someone called using MyModule, 
  SomeOtherModule), and keep calls within modules un-merged (unless of 
  course you call using SomeOtherPackage within MyModule... after which 
  it's my responsibility to know about and call the correct foo). 
  
  Are there other pitfalls to auto-merging in user-space-only?  I can't 
  comment on how hard this is to implement, but I don't foresee how it 
 breaks 
  dispatch or any of the other powerful concepts. 
  
  
  
  On Thursday, April 30, 2015 at 12:19:07 PM UTC-4, Michael Francis 
 wrote: 
  
  My goal is not to remove namespaces, quite the opposite, for types a 
  namespace is an elegant solution to resolving the ambiguity between 
  different types of the same name. What I do object to is that 
 functions 
  (which are defined against user defined types) are relegated to being 
  second class citizens in the Julia world unless you are developing in 
 Base. 
  For people in Base the world is great, it all just works. For 
 everybody 
  else you either shoe horn your behavior into one of the Base methods 
 by 
  extending it, or you are forced into qualifying names when you don't 
 need 
  to. 
  
  1) Didn't that horse already bolt with Base. If Base were subdivided 
 into 
  strict namespaces of functionality then I see this argument, but that 
 isn't 
  the case and everybody would be complaining that they need to type 
  strings.find(string) 
  2) To me that is what multiple dispatch is all about. I am calling a 
  function and I want the run time to decide which implementation to 
 use, if 
  I wanted to have to qualify all calls to a method I'd be far better 
 off in 
  an OO language. 
  
  
  On Thursday, April 30, 2015 at 2:15:51 AM UTC-4, Tamas Papp wrote: 
  
  
  On Thu, Apr 30 2015, Stefan Karpinski ste...@karpinski.org wrote: 
  
   Function merging has these problems: 
   
  1. It complects name resolution with dispatch – they are no 
 longer 
  orthogonal. 
  2. It makes all bindings from `using` semantically ambiguous – 
 you 
  have 
  no idea what a name means without actually doing a call. 
  
  IMO orthogonality of name resolution and dispatch should be preserved 
 -- 
  it is a nice property of the language and makes reasoning about code 
  much easier. Many languages have this property, and it has stood the 
  test of time, also in combination with multiple dispatch (Common 
  Lisp). Giving it up would be a huge price to pay for some DWIM 
 feature. 
  
  Best, 
  
  Tamas 
  
  



[julia-users] Re: icc/icpc options

2015-04-30 Thread Oleg Mikulchenko
Thank you, actually I was expected that answer. 


Re: [julia-users] Building sysimg error with multi-core start

2015-04-30 Thread Pavel
Hi Rene,

Good point about the worker startup flags, will keep that in mind. In my 
case however that does not seem to be the problem as I am overwriting the 
default image (doing that in a Docker container so it is fine to mess 
things up and play).

Narrowed down the problem to one specific package pre-compilation, 
proceeding with the GitHub issue now as the case does not seem to be 
trivial:
https://github.com/JuliaLang/julia/issues/11063

Thanks for replying.
Pavel


On Thursday, April 30, 2015 at 2:20:47 AM UTC-7, René Donner wrote:

 Hi, 

 I had to fiddle with the precompilation myself, initially hitting similar 
 issues. 

 Are you starting julia simply with julia or do you specify the path to 
 the system image using the -J parameter? In case you use -J you need to 
 use the exeflags parameter of addprocs to specify this for the workers as 
 well, otherwise they will load the default sys.jl. 

 The locations for sys.{ji,o} depend on whether you ran make install or 
 are using julia right after make or installed a julia binary. Perhaps run 
 find . -name sys.*, note where the default sys.ji is and where the one 
 you get from build_sysimg gets saved, make sure they are where you expect 
 them to be, in my case I happened to generate them in (to me) unexpected 
 places. 

 I tried to simplify the build process in 
 https://github.com/rened/SystemImageBuilder.jl. By default, it will 
 precompile all installed packages except for ones with known problems or 
 dependencies on such non-precompilable packages. Inclusion / exclusion of 
 packages can be configured. Work in progress, feedback would be much 
 appreciated! (disclaimer: in the worst case be prepared to reinstall julia 
 / rerun make clean; make) 

 Rene 




 On 30.04.2015 at 06:49, Pavel pavel.p...@gmail.com wrote: 


  I am building a custom Julia image (v0.3) using 
  https://github.com/JuliaLang/JuliaBox/blob/master/docker/build_sysimg.jl 
  by calling 
  build_sysimg(joinpath(dirname(Sys.dlpath("libjulia")), "sys"), 
 "native", "/home/juser/jimg.jl", force=true) 
  
  A number of modules are listed in jimg.jl as `using Package` for 
 pre-compilation. 
  
  The image builds without errors and starts fine, and package load time 
 is much shorter after pre-compilation as expected. However, when Julia is 
 started with more than one CPU core, e.g. `julia -p 2`, the following error 
 appears at startup: 
  
  ERROR: `convert` has no method matching convert(::Type{Dict{K,V}}, 
 ::Parameters) 
   in create_worker at multi.jl:1067 
   in start_cluster_workers at multi.jl:1028 
   in addprocs at multi.jl:1237 
   in process_options at ./client.jl:236 
   in _start at ./client.jl:354 
  
  Any suggestions? Thanks. 
  



Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Tom Breloff
I think it's a good thing to spell out precise scenarios where using 
multiple modules at the same time is good and unambiguous, and when you can 
get in trouble.  If anything, it gives developers an idea of the edge cases 
that need to be handled and can help in thinking about design changes 
and/or workarounds, and may help to more clearly define best practices.  I 
think the crux of the issue is this:

*Name clash between similar/competing modules, which likely don't know or 
care about the other, and which may or may not define functionality common 
to Base*

Good examples mentioned are databases and plotting packages.   FileSystemDB 
and CloudDB might both want to export a connect(s::String) method, just 
like Winston and Gadfly might want to export a plot(v::Vector{Float64}) 
method.  I don't think this is a bad thing, but for it to work with using 
FileSystemDB, CloudDB we have a couple requirements:

   - Within both FileSystemDB and CloudDB, they *must *call their 
   respective connect methods internally.  If this doesn't hold then every 
   package writer must know about every other package in existence now and in 
   the future to ensure nothing breaks.  This requirement could be relaxed a 
   little if the package writer had some control over what/how its internal 
   methods could be overwritten.  (Comparing to C++, a class can have 
   protected methods which can effectively be redefined by another class, but 
   also private methods which cannot.  It's up to the class writer to decide 
   which parts can be changed without breaking internals.)  In effect, a 
   module could have private methods that are never monkey-patched, and 
   public methods that could be.  Some languages do this with naming 
   conventions (underscores, etc).  The decision would then rest with the 
   package developer as to whether it would break their code to allow a 
   different module to override their methods.  Here's an example where 
   there's one function that you *really* don't want someone else to 
   overwrite, and another that doesn't really matter.  (Idea: could possibly 
   achieve this by automatically converting calls to _cleanup() within this 
   module into MyModule._cleanup() during parsing?)
   module MyModule
   
   const somethingImportant = loadMassiveDatabaseIntoMemory()
   _cleanup() = cleanup(somethingImportant)
   
   type MyType end
   string(x::MyType) = "MyType{}"
   
   ...
   
   end
   - In the scope where the using call occurred, ambiguous calls should 
   require explicit calling
  - The pain of this could certainly be lessened with syntax like 
  using Gadfly as G, Winston
  which could use Winston's plot method by default... forcing you to 
  call G.plot() otherwise.  This might be close to how it works now, 
anyways.
  or potentially harder to implement properly (but maybe under the hood 
  just re-sorts the module priorities?):
  import Gadfly, Winston
  
  with Gadfly
plot(x)
  end
  
  with Winston
   plot(y)
  end
  Both are very reasonable syntax from my point of view.  At some point 
  the user has to tell us what they want, right?  You can't use multiple 
  packages defining the exact same method and expect something to just 
work
   


Note: I think monkey patching is ok in some circumstances, but usually 
dangerous in packages (i.e. redefining
Base.sin(x::Real) = fly(Vegas)
in a package will lead to problems that the package maintainer just 
couldn't foresee.)  Monkey patching by an end user is a much different 
story, as they usually have a better idea on how all the components 
interact.  This kind of thing should end up in a best practices guide, 
though... not forced by the language.  

Maybe we just need better tooling to identify package interop... i.e. a 
test system that will do using X, Y, Z, ...  in a systematic way before 
testing a package, thus letting tests happen with random subsets of other 
packages polluting the Main module to identify how fragile that package may 
be in the wild (and also whether using that package leads to breakages 
elsewhere).

Thoughts?  Does any of this exist already and I just don't know about it?
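
For concreteness, the FileSystemDB/CloudDB clash described above can be 
reproduced in a few lines (the module bodies are hypothetical; the behavior 
shown is that of current Julia, where a name exported by two loaded modules 
cannot be used unqualified):

```julia
module FileSystemDB
    export connect
    connect(s::AbstractString) = "filesystem: $s"   # illustrative stub
end

module CloudDB
    export connect
    connect(s::AbstractString) = "cloud: $s"        # illustrative stub
end

using .FileSystemDB, .CloudDB

# The unqualified name `connect` is now ambiguous and errors on use;
# the qualified forms still work:
FileSystemDB.connect("/tmp/db")
CloudDB.connect("bucket://db")
```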


On Thursday, April 30, 2015 at 2:18:03 PM UTC-4, Matt Bauman wrote:

 On Thursday, April 30, 2015 at 1:11:27 PM UTC-4, Tom Breloff wrote:

 I agree that it would be really nice, in some cases, to auto-merge 
 function definitions between namespaces (database connects are very simple 
 OO example).   However, if 2 different modules define foo(x::Float64, 
 y::Int), then there should be an error if they're both exported (or if not 
 an error, then at least force qualified access??)   Now in my mind, the 
 tricky part comes when a package writer defines:

 module MyModule
 export foo
 type MyType end
 foo(x::MyType) = ...
 foo(x) = ...
 end


 I think this is a very interesting discussion, but it all seems to come 
 back to a human communication issue.  Each 

[julia-users] function similar to matlab tabulate

2015-04-30 Thread Alexandros Fakos
Hi, 

Is there a way to get a table of frequencies of the unique values in an 
array in Julia?
Something like matlab's tabulate

Thanks a lot,
Alex


[julia-users] broadcast comparison operators

2015-04-30 Thread Alexandros Fakos
Hi, 

Why the following commands give different results?

julia> broadcast(.==,[1.0],[0.0,1.0])
2-element Array{Float64,1}:
 0.0
 1.0

julia> repmat([1.0],2,1).==[0.0,1.0]
2x1 BitArray{2}:
 false
  true



How can I use broadcast for array comparisons (with a bit array as output)?

Thanks,
Alex
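
For what it's worth, a sketch of the usual workarounds: the dotted comparison 
broadcasts on its own and yields a `BitArray`, and `broadcast!` can fill a 
preallocated `BitArray` (exact element types returned by `broadcast` itself 
have varied across Julia versions):

```julia
x = [1.0]
y = [0.0, 1.0]

x .== y                      # dotted comparison broadcasts; BitArray result

out = falses(2)              # preallocated BitArray
broadcast!(==, out, x, y)    # fills `out` in place, keeping the bit storage
```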


[julia-users] Re: function similar to matlab tabulate

2015-04-30 Thread Alexandros Fakos
Thanks a lot. countmap returns a dictionary but I would prefer an array. 
How can I do that?

Thank you

On Thursday, April 30, 2015 at 4:41:43 PM UTC-4, Johan Sigfrids wrote:

 countmap in the StatsBase.jl package does this.

 On Thursday, April 30, 2015 at 11:11:37 PM UTC+3, Alexandros Fakos wrote:

 Hi, 

 Is there a way to get a table of frequencies of the unique values in an 
 array in Julia?
 Something like matlab's tabulate

 Thanks a lot,
 Alex
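
One way to turn the `Dict` returned by `countmap` into a two-column 
value/frequency array, sketched under the assumption that StatsBase is 
installed:

```julia
using StatsBase

data = [1, 2, 2, 3, 3, 3]
cm = countmap(data)                  # Dict mapping value => count

vals = sort(collect(keys(cm)))       # unique values, in order
freq = [cm[v] for v in vals]         # matching frequencies
tab  = [vals freq]                   # 3×2 array: value, count
```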



Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Scott Jones


On Wednesday, April 29, 2015 at 11:36:15 PM UTC-4, MA Laforge wrote:

 *Scott and Michael:*
 I am pretty certain I understand what you are saying, but I find your 
 examples/descriptions a bit confusing.  I think (hope) I know why Stefan is 
 confused.

 *Stefan:*
 I think Scott has a valid point but I disagree that exported functions 
 from a module must reference types defined in that module.  It is my 
 strong belief that Scott is merely focusing on the symptom instead of the 
 cause.


I've never advocated that!  What I've said, a number of times now, is that 
*if* the compiler determines that a method in a module is using a type 
defined in that module, which should mean
that it is unambiguous, and it is exported, that if somebody does using 
xxx, it should be merged.
I actually now think that it really needs to be explicit... it is not very 
clear in Julia what the intent of the programmer was.

You have me confused with Michael here... I disagree with that part of what 
he was saying, and agree totally with Stefan that it would be way too 
limiting.
I also think that the approach of just postponing the check for ambiguity 
to run-time would be a very bad thing... I think it's hard enough now to 
know
if your module is correct, it seems a lot of stuff isn't caught until 
run-time.

I also think that your description of the problem is very good, but I don't 
think your solution does enough to make things better...

I would propose adding some new syntax to indicate if a function name in a 
module is meant to extend a particular function, instead of that happening 
implicitly because
it was previously imported, as well as syntax to indicate that a function 
name is meant to be a new concept, that others can extend, and can 
unambiguously be used
in the global namespace (i.e. it uses one (or more) of its own type(s) as 
part of its signature...
module baz
function foo(args) export extends Base # i.e. extends Base.foo, is 
exported
function silly(args) export extends Database # i.e. extends Database.silly, 
is exported
function bar(abc::MyType) export generic # Creates a new concept bar, 
which is exported, and can be extended by other modules explicity with the 
extends syntax
function ugh(args) extends Base   # extends Base.ugh in the 
module, but is not visible outside the module (have to use baz.ugh to call)
function myfunc(args)# makes a local 
definition, will hide other definitions from Base or used in the parent 
module, but can be use as baz.myfunc
function myownfunc(args) private  # makes a local 
definition, as above, but is not visible at all in parent module

I'm not sure if this is possible, but, what if you wanted to extend a 
particular function, but *only* in the context of your module?
Currently, the extension is made public, even if you didn't export the 
function, which seems a bit wrong to me...
To me, it seems that whether a function is meant to be a new generic, or 
extend something else, is orthogonal to whether you want it to be able to 
be used directly
in code that does using.  It would also be good to be able to make 
methods (even ones that are generic within the module) (or types, for that 
matter) private from the rest of the world...
so that outside the module, they cannot be used...
I see a big problem with Julia, in that it seems that one cannot prevent 
users from directly accessing your implementation, as opposed to be limited 
to just the abstract interface
you made.   This was a wonderful thing in CLU... I suppose they no longer 
teach CLU at MIT? :-(
[it would also be wonderful to be able to specify which exceptions a method 
can throw, and have that strict... if a method specifies its exceptions, 
then any unhandled exceptions
in the method get converted to a special Unhandled Exception exception...]

Scott


 

 Fortunately, I am certain that this is not the crux of the problem...  And 
 I agree completely with Stefan: Limiting exports to this particular case is 
 extremely restrictive (and unnecessary).  I also agree that, this 
 restriction *would* keep developers from developing very useful monkey 
 patches (among other things).  So let's look at the problem differently...

 *Problem 1: A module owns its verbs.*
 See discussion above discussion (
 https://groups.google.com/d/msg/julia-users/sk8Gxq7ws3w/ASFlqZmVwYsJ) if 
 you are not familiar with the idea.

 *Problem 2: Julia has 2 symbol types (for objects/methods/...)*
 Well, this is not really a problem... It is terrific!  Julia's 
 multi-dispatch engine allows us to overload methods in a way I have never 
 seen before!

 In fact, the multi-dispatch system allows programmers to DO AWAY WITH 
 namespaces altogether for the new type of symbol (at least to a first 
 order).

 *So what are the symbol types?*
 1) Conventional Type: Used in most other imperative languages
 2) Multi-Dispatch Type: Symbols of methods whose signature can 

Re: [julia-users] Re: function similar to matlab tabulate

2015-04-30 Thread Milan Bouchet-Valat
On Thursday, April 30, 2015 at 13:41 -0700, Johan Sigfrids wrote:
 countmap in the StatsBase.jl package does this.
There's also https://github.com/nalimilan/Tables.jl/

 On Thursday, April 30, 2015 at 11:11:37 PM UTC+3, Alexandros Fakos
 wrote:
 Hi, 
 
 
 Is there a way to get a table of frequencies of the unique
 values in an array in Julia?
 Something like matlab's tabulate
 
 
 Thanks a lot,
 Alex



Re: [julia-users] Re: the state of GUI toolkits?

2015-04-30 Thread Isaiah Norton

 The part that is clearly daunting is the interface for event handling,
 namely signals and slots.


There is a WIP clang plugin designed to replace moc. Worth keeping an eye
on.
https://github.com/woboq/moc-ng

On Thu, Apr 30, 2015 at 4:59 PM, Max Suster mxsst...@gmail.com wrote:

 Good to hear interest. I will also have to look at what might be a good
 strategy for wrapping Qt5 with Cxx. The core functionality of Qt5 (shared
 by Qt4) would be an obvious place to start.  The part that is clearly
 daunting is the interface for event handling, namely signals and slots. Not
 only do we have to deal with/replace the underlying XML support, but also the
 syntax has changed a lot between Qt4 and Qt5.


Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Stefan Karpinski
On Thu, Apr 30, 2015 at 12:15 AM, MA Laforge ma.laforge...@gmail.com
wrote:

 Stefan,

 I am sorry, but my experience leads me to disagree with your statement
 that Julia is unable to dispatch a function dynamically (@ runtime).


I didn't say this – all function calls are dispatched dynamically.


 ...So your statement confuses me a little...


That's probably because you missed the statement right before that code
where I said assuming hypothetical code with function merging – in other
words that example is not how things work currently, but how it has been
proposed that they could work – a behavior which I'm arguing against. If
you run that code on 0.3 it will always call Bar.f; if you run it in
0.4-dev, it will give an error (as I believe it should).


[julia-users] Re: .jl to .exe

2015-04-30 Thread Páll Haraldsson
[People answered literally.]

As answered, cross-compiling: no. But is it really a problem? You would just 
install Julia on Windows (and Mac OS X, etc.) and compile from there. I 
assume it's possible in this indirect way to cross-compile in any direction.

I assume the script works on all platforms Julia supports (and gcc is 
available for). Windows is mentioned in the script, so I assume it supports 
Windows etc.

You would need a Windows machine - but not really, just a Windows license: I 
assume you could use Wine, so you wouldn't need the VM (or real machine) you 
would otherwise need.

For Mac OS X, you might need actual hardware and software - i.e., pay..

On Thursday, April 23, 2015 at 3:15:46 PM UTC, pauld11718 wrote:

 Will it be possible to cross-compile?
 Do all the coding on linux(64 bit) and generate the exe for windows(32 
 bit)?



Re: [julia-users] readtable produce wrong column name

2015-04-30 Thread Jacob Quinn
DataFrame column names must be valid Julia identifiers, so readtable does
the conversion when reading data in.

-Jacob

On Thu, Apr 30, 2015 at 12:43 PM, Li Zhang fff...@gmail.com wrote:

 hi all,

 I first use writetable to write a dataframe to a csv file. Some of the
 column names are "(somename)" or "name & other".

 The output csv file shows the exact name headers, but when I use readtable to
 read the csv file back into a dataframe, the column names become _somename_ and
 name_other.

 I am not sure if I missed some optional parameter that would do the right
 thing, or if this is a bug?

 any thought?
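A rough sketch of the kind of cleanup involved (hypothetical code, not the actual DataFrames implementation; `make_identifier` is a made-up name, and the `replace` pair syntax is current Julia) is to collapse every run of non-identifier characters into a single underscore, which reproduces the renames reported above:

```julia
# Hypothetical sketch (not the real DataFrames code): collapse each run of
# characters that are invalid in a Julia identifier into a single underscore.
function make_identifier(name::AbstractString)
    replace(name, r"[^A-Za-z0-9_]+" => "_")
end

make_identifier("(somename)")    # "_somename_"
make_identifier("name & other")  # "name_other"
```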



[julia-users] Interact.jl widgets does not show up when IJulia and Ipython updated to the latest version

2015-04-30 Thread Li Zhang
Anyone having the same problem? Or should an issue be filed?


Re: [julia-users] Build error: could not open "flisp.boot".

2015-04-30 Thread Isaiah Norton
The underlying issue is that uv_exepath [1] fails, leading to flisp being
unable to find where it is running [2], which leads to the error message
you observed.

From some cursory googling, my guess is that this is an issue with the
PythonAnywhere docker and/or apparmor configuration. Clearly it is possible
to run Julia under docker (see JuliaBox), so this will likely need to be
resolved by PythonAnywhere.

(related:
http://stackoverflow.com/questions/1023306/finding-current-executables-path-without-proc-self-exe
)

[1]
https://github.com/libuv/libuv/blob/1f711e4d6d2137776a113b29a791d115fb4a0c63/src/unix/linux-core.c#L402-L419
[2]
https://github.com/JuliaLang/julia/blob/dea3d0e42029af05f58a0069491238462382e591/src/flisp/flisp.c#L2424-L2426



On Thu, Apr 30, 2015 at 1:08 PM, Ismael VC ismael.vc1...@gmail.com wrote:

 If anyone is interested in checking out the console session, just send me
 a message to ismael.vc1...@gmail.com, thanks!

 On Thursday, April 30, 2015 at 11:59:13 (UTC-5), Ismael VC wrote:

 Thank you very much Isaiah, I just did `make clean && make` again and I
 still get the same error:

 ```
 Making install in SYM
 CC src/jltypes.o
 CC src/gf.o
 CC src/support/hashing.o
 CC src/support/timefuncs.o
 CC src/support/ptrhash.o
 CC src/support/operators.o
 CC src/support/utf8.o
 CC src/support/ios.o
 CC src/support/htable.o
 CC src/support/bitvector.o
 CC src/support/int2str.o
 CC src/support/libsupportinit.o
 CC src/support/arraylist.o
 CC src/support/strtod.o
 LINK src/support/libsupport.a
 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 LINK src/flisp/libflisp.a
 CC src/flisp/flmain.o
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
 (io-error file: could not open "flisp.boot")
 make[2]: *** [julia_flisp.boot] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 16:18 ~/julia (release-0.3)$  ls -l src/flisp/flisp.boot
 -rw-rw-r-- 1 MexLavu registered_users 35369 Apr 30 06:58
 src/flisp/flisp.boot
 ```

 PythonAnywhere lets us share a console session; if you or anyone else
 is interested in this error, I just need your email address to send you the
 link.


 On Thursday, April 30, 2015 at 9:22:45 (UTC-5), Isaiah wrote:

 I am guessing there is some other error further back. Try `make -j1` so
 it fails immediately.

 On Thu, Apr 30, 2015 at 6:22 AM, Ismael VC ismael...@gmail.com wrote:

 Hello everyone!

 I’m trying to build Julia at PythonAnywhere
 https://www.pythonanywhere.com, and the build fails because of:

 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 CC src/flisp/flmain.o
 LINK src/flisp/libflisp.a
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error file: could not open "flisp.boot")
 make[2]: *** [julia_flisp.boot] Error 1
 make[2]: *** Waiting for unfinished jobs
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2
 10:07 ~/julia (release-0.3)$

 *Note:* I did make cleanall before trying again.

  These are the VM specs:

 10:08 ~/julia (release-0.3)$ cat /etc/issue
 Ubuntu 14.04.2 LTS \n \l

 10:15 ~/julia (release-0.3)$ uname -a
 Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 
 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 I found an issue that had the same problem at some point, but it was in
 BSD, so I believe it won’t be relevant, yet it’s here:

-
https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ

 Thanks in advance, cheers!
 ​


  ​



[julia-users] Re: readtable produce wrong column name

2015-04-30 Thread Li Zhang
but in my original dataframe, Julia doesn't complain when I add a column 
using `df[symbol("(somename)")] = dataarray`.

On Thursday, April 30, 2015 at 2:43:19 PM UTC-4, Li Zhang wrote:

 hi all,

 I first use writetable to write a dataframe to a csv file. Some of the 
 column names are "(somename)" or "name & other".

 The output csv file shows the exact name headers, but when I use readtable to 
 read the csv file back into a dataframe, the column names become _somename_ and 
 name_other.

 I am not sure if I missed some optional parameter that would do the right 
 thing, or if this is a bug?

 any thought?



[julia-users] Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Páll Haraldsson

Hi,

[As a best language is subjective, I'll put that aside for a moment.]

Part I.

The goal for Julia, as I understand it, is to be at least within a factor of 
two of C - already mostly matching it - and in the long term to beat that (and 
C++). [What other goals are there? How about 0.4 now or even 1.0..?]

While that is the goal as a language, you can write slow code in any 
language and Julia makes that easier. :) [If I recall, Bezanson mentioned 
it (the global problem) as a feature, any change there?]


I've been following this forum for months and newbies hit the same issues. 
But almost always without fail, Julia can be sped up (easily, as Tim Holy 
says). I'm thinking about the exceptions to that - are there any left? And 
about the first code slowness (see Part II).

Just recently the last two flaws of Julia that I could see were fixed: 
Decimal floating point is in (I'll look into the 100x slowness, that is 
probably to be expected of any language, still I think may be a 
misunderstanding and/or I can do much better). And I understand the tuple 
slowness has been fixed (that was really the only core language defect). 
The former wasn't a performance problem (mostly a non-existence problem and 
correctness one (where needed)..).


Still we see threads like this one recent one:

https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
It seems changing the order of nested loops also helps

Obviously Julia can't beat assembly but really C/Fortran is already close 
enough (within a small factor). The above row vs. column major (caching 
effects in general) can kill performance in all languages. Putting that 
newbie mistake aside, is there any reason Julia can't be within a small 
factor of assembly (or C) in all cases already?


Part II.

Except for caching issues, I still want the most newbie code or 
intentionally brain-damaged code to run faster than at least 
Python/scripting/interpreted languages.

Potential problems (that I think are solved or at least not problems in 
theory):

1. I know Any kills performance. Still, isn't that the default in Python 
(and Ruby, Perl?)? Is there a good reason Julia can't be faster than at 
least all the so-called scripting languages in all cases (excluding small 
startup overhead, see below)?

2. The global issue, not sure if that slows other languages down, say 
Python. Even if it doesn't, should Julia be slower than Python because of 
global?

3. Garbage collection. I do not see that as a problem, incorrect? Mostly 
performance variability ([3D] games - subject for another post, as I'm 
not sure that is even a problem in theory..). Should reference counting 
(Python) be faster? On the contrary, I think RC and even manual memory 
management could be slower.

4. Concurrency, see nr. 3. GC may or may not have an issue with it. It can 
be a problem, what about in Julia? There are concurrent GC algorithms 
and/or real-time (just not in Julia). Other than GC is there any big 
(potential) problem for concurrent/parallel? I know about the threads work 
and new GC in 0.4.

5. Subarrays (array slicing?). Not really what I consider a problem, 
compared to say C (and Python?). I know 0.4 did optimize it, but what 
languages do similar stuff? Functional ones?

6. In theory, pure functional languages should be faster. Are they in 
practice in many or any case? Julia has immutable state if needed, but 
maybe not as powerful? This seems a double-edged sword. I think Julia 
designers intentionally chose mutable state to conserve memory. Pros and 
cons? Mostly Pros for Julia?

7. Startup time. Python is faster and for say web use, or compared to PHP 
could be an issue, but would be solved by not doing CGI-style web. How 
good/fast is Julia/the libraries right now for say web use? At least for 
long running programs (intended target of Julia) startup time is not an 
issue.

8. MPI, do not know enough about it and parallel in general, seems you are 
doing a good job. I at least think there is no inherent limitation. At 
least Python is not in any way better for parallel/concurrent?

9. Autoparallel. Julia doesn't try to be, but could (be an addon?). Is 
anyone doing really good and could outperform manual Julia?

10. Any other I'm missing?
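To make point 2 concrete, here is a minimal sketch (my own toy example, not from any of the threads above): the loop body is identical, but inside a function the compiler can infer that the accumulator is a `Float64`, while a loop updating an untyped global cannot be specialized.

```julia
# Slow pattern: accumulate into a global whose type the compiler can't pin down.
s = 0.0
for i in 1:1000
    global s        # required at the top level in current Julia
    s += i
end

# Fast pattern: the same loop inside a function, where `t` is concretely typed.
function sumto(n)
    t = 0.0
    for i in 1:n
        t += i
    end
    return t
end

sumto(1000)  # 500500.0
```

Timing the two at the REPL typically shows the function version faster by a large factor, which is the kind of "global problem" slowdown discussed here.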


Wouldn't any of the above or any you can think of be considered performance 
bugs? I know for libraries you are very aggressive. I'm thinking about 
Julia as a core language mostly, but maybe you are already fastest 
for most math stuff (if implemented at all)?


I know to get the best speed, 0.4 is needed. Still, (for the above) what 
are the problems for 0.3? Have most of the fixed speed issues been 
backported? Is Compat.jl needed (or have anything to do with speed?) I 
think slicing and threads stuff (and global?) may be the only exceptions.

Rust and some other languages also claim no abstraction penalty and maybe 
also other desirable things (not for speed) that Julia doesn't have. Good 
reason it/they might be faster or a good 

[julia-users] Re: function similar to matlab tabulate

2015-04-30 Thread Johan Sigfrids
countmap in the StatsBase.jl package does this.

On Thursday, April 30, 2015 at 11:11:37 PM UTC+3, Alexandros Fakos wrote:

 Hi, 

 Is there a way to get a table of frequencies of the unique values in an 
 array in Julia?
 Something like matlab's tabulate

 Thanks a lot,
 Alex
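For reference, a minimal usage sketch of the suggestion above (assuming the StatsBase.jl package is installed):

```julia
using StatsBase  # provides countmap; assumes the package is installed

a = ["a", "a", "b", "c", "c", "c", "d"]
tab = countmap(a)   # Dict mapping each unique value to its frequency
tab["c"]            # 3
```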



Re: [julia-users] Defining a function in different modules

2015-04-30 Thread Stefan Karpinski
On Wed, Apr 29, 2015 at 9:08 PM, Scott Jones scott.paul.jo...@gmail.com
wrote:

 Your restrictions are making it very hard to develop easy to use APIs that
 make sense for the people using them…

 That’s why so many people have been bringing this issue up…


Not a single person who maintains a major Julia package has complained
about this. Which doesn't mean that there can't possibly be an issue here,
but it seems to strongly suggest that this is one of those concerns that
initially appears dire, when coming from a particular programming
background, but which dissipates once one acclimatizes to the multiple
dispatch mindset – in particular the idea that one generic function =
one verb concept.


[julia-users] icc/icpc options

2015-04-30 Thread Oleg Mikulchenko
Hi,

Is there a right way to set up Intel compiler options in Make.inc for 
building Julia with icc and MKL?

E.g. I would like to use AVX2 options. I have tried to add them in some 
places in Make.inc, but didn't see the options when compiling with icpc/icc.




Re: [julia-users] Re: icc/icpc options

2015-04-30 Thread Isaiah Norton
Note that we turn off AVX2 in the default configuration because LLVM 3.3
codegen with AVX2 on was badly broken on Haswell:
https://github.com/JuliaLang/julia/blob/dea3d0e42029af05f58a0069491238462382e591/src/codegen.cpp#L5418-L5421

On Thu, Apr 30, 2015 at 3:20 PM, Oleg Mikulchenko olegmi...@gmail.com
wrote:

 Thank you, actually I was expected that answer.



Re: [julia-users] Build error: could not open "flisp.boot".

2015-04-30 Thread Ismael VC
Thank you very much Isaiah, I will report this to PythonAnywhere, have a
nice day!

On Thu, Apr 30, 2015 at 2:07 PM, Isaiah Norton isaiah.nor...@gmail.com
wrote:

 The underlying issue is that uv_exepath [1] fails, leading to flisp being
 unable to find where it is running [2], which leads to the error message
 you observed.

  From some cursory googling, my guess is that this is an issue with the
 PythonAnywhere docker and/or apparmor configuration. Clearly it is possible
 to run Julia under docker (see JuliaBox), so this will likely need to be
 resolved by PythonAnywhere.

 (related:
 http://stackoverflow.com/questions/1023306/finding-current-executables-path-without-proc-self-exe
 )

 [1]
 https://github.com/libuv/libuv/blob/1f711e4d6d2137776a113b29a791d115fb4a0c63/src/unix/linux-core.c#L402-L419
 [2]
 https://github.com/JuliaLang/julia/blob/dea3d0e42029af05f58a0069491238462382e591/src/flisp/flisp.c#L2424-L2426



 On Thu, Apr 30, 2015 at 1:08 PM, Ismael VC ismael.vc1...@gmail.com
 wrote:

 If anyone is interested in checking out the console session, just send me
 a message to ismael.vc1...@gmail.com, thanks!

  On Thursday, April 30, 2015 at 11:59:13 (UTC-5), Ismael VC wrote:

  Thank you very much Isaiah, I just did `make clean && make` again and I
 still get the same error:

 ```
 Making install in SYM
 CC src/jltypes.o
 CC src/gf.o
 CC src/support/hashing.o
 CC src/support/timefuncs.o
 CC src/support/ptrhash.o
 CC src/support/operators.o
 CC src/support/utf8.o
 CC src/support/ios.o
 CC src/support/htable.o
 CC src/support/bitvector.o
 CC src/support/int2str.o
 CC src/support/libsupportinit.o
 CC src/support/arraylist.o
 CC src/support/strtod.o
 LINK src/support/libsupport.a
 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 LINK src/flisp/libflisp.a
 CC src/flisp/flmain.o
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error file: could not open "flisp.boot")
 make[2]: *** [julia_flisp.boot] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 16:18 ~/julia (release-0.3)$  ls -l src/flisp/flisp.boot
 -rw-rw-r-- 1 MexLavu registered_users 35369 Apr 30 06:58
 src/flisp/flisp.boot
 ```

  PythonAnywhere lets us share a console session; if you or anyone else
  is interested in this error, I just need your email address to send you the
 link.


  On Thursday, April 30, 2015 at 9:22:45 (UTC-5), Isaiah wrote:

 I am guessing there is some other error further back. Try `make -j1` so
 it fails immediately.

 On Thu, Apr 30, 2015 at 6:22 AM, Ismael VC ismael...@gmail.com wrote:

 Hello everyone!

 I’m trying to build Julia at PythonAnywhere
 https://www.pythonanywhere.com, and the build fails because of:

 CC src/flisp/flisp.o
 CC src/flisp/builtins.o
 CC src/flisp/string.o
 CC src/flisp/equalhash.o
 CC src/flisp/table.o
 CC src/flisp/iostream.o
 CC src/flisp/julia_extensions.o
 CC src/flisp/flmain.o
 LINK src/flisp/libflisp.a
 LINK src/flisp/flisp
 FLISP src/julia_flisp.boot
 fatal error:
  (io-error file: could not open "flisp.boot")
 make[2]: *** [julia_flisp.boot] Error 1
 make[2]: *** Waiting for unfinished jobs
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2
 10:07 ~/julia (release-0.3)$

 *Note:* I did make cleanall before trying again.

  These are the VM specs:

 10:08 ~/julia (release-0.3)$ cat /etc/issue
 Ubuntu 14.04.2 LTS \n \l

 10:15 ~/julia (release-0.3)$ uname -a
 Linux giles-liveconsole2 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 
 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 I found an issue that had the same problem at some point, but it was
 in BSD, so I believe it won’t be relevant, yet it’s here:

-

 https://groups.google.com/forum/#!msg/julia-dev/Z9J9NK9Ge5w/CNwXK3q2BgQJ

 Thanks in advance, cheers!
 ​


  ​





Re: [julia-users] function similar to matlab tabulate

2015-04-30 Thread René Donner
Hi,

I am not aware of a built-in function, but this should do the trick:

  a = ["a","a","b","c","c","c","d"]
  d = Dict()

  for x in a
      d[x] = get!(d, x, 0) + 1
  end

  for k in sort(collect(keys(d)))
      println("Value $k with count $(d[k])")
  end

Cheers,

Rene


On 30.04.2015 at 20:35, Alexandros Fakos alexandrosfa...@gmail.com wrote:

 Hi, 
 
 Is there a way to get a table of frequencies of the unique values in an array 
 in Julia?
 Something like matlab's tabulate
 
 Thanks a lot,
 Alex



[julia-users] Re: the state of GUI toolkits?

2015-04-30 Thread Max Suster
Good to hear interest. I will also have to look at what might be a good 
strategy for wrapping Qt5 with Cxx. The core functionality of Qt5 (shared by 
Qt4) would be an obvious place to start.  The part that is clearly daunting is 
the interface for event handling, namely signals and slots. Not only do we have to 
deal with/replace the underlying XML support, but also the syntax has changed a 
lot between Qt4 and Qt5.

Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread Stefan Karpinski
On Thu, Apr 30, 2015 at 12:19 PM, Michael Francis mdcfran...@gmail.com
wrote:

 My goal is not to remove namespaces, quite the opposite, for types a
 namespace is an elegant solution to resolving the ambiguity between
 different types of the same name. What I do object to is that functions
 (which are defined against user defined types) are relegated to being
 second class citizens in the Julia world unless you are developing in Base.
 For people in Base the world is great, it all just works. For everybody
 else you either shoe horn your behavior into one of the Base methods by
 extending it, or you are forced into qualifying names when you don't need
 to.


There's nothing privileged about Base except that `using Base` is
automatically done in other (non-bare) modules. If you want to extend a
function from Base, you have to do `Base.foo(args...) = whatever`. The same
applies to functions from other modules. The Distributions and StatsBase
packages are good examples of this being done in the Julia ecosystem. What
is wrong with having shared concepts defined in a shared module?


 1) Didn't that horse already bolt with Base? If Base were subdivided into
 strict namespaces of functionality then I see this argument, but that isn't
 the case and everybody would be complaining that they need to type
 strings.find(string)


I don't understand how any horses have bolted – Base is not particularly
special, it just provides a default set of names that are available to use.
It in no way precludes defining your own meanings for any names at all or
sharing them among a set of modules.

2) To me that is what multiple dispatch is all about. I am calling a
 function and I want the run time to decide which implementation to use, if
 I wanted to have to qualify all calls to a method I'd be far better off in
 an OO language.


This is exactly what happens, but you have to first be clear about which
function you mean. The whole point of namespaces is that different modules
can have different meanings for the same name. If two modules don't have
the same meaning for `foo` then you have to clarify which sense of `foo`
you intended. Once you've picked a meaning of `foo`, if it is a generic
function, then the appropriate method will be used when you call it on a
set of arguments.


Re: [julia-users] icc/icpc options

2015-04-30 Thread Stefan Karpinski
Just so you're aware, changing C compilers would have no impact on
performance of your Julia code – Julia code is always generated by LLVM, no
matter what C compiler you use for Julia's small C code base. Changing BLAS
to MKL is supported and will have an effect on BLAS operations.

On Thu, Apr 30, 2015 at 3:03 PM, Oleg Mikulchenko olegmi...@gmail.com
wrote:

 Hi,

 Is there the right way to setup Intel compiler's options in Make.inc for
 building Julia with icc and MKL?

 E.g. I would like to use AVX2 options. I has tried to add them to some
 pieces in Make.inc, but didn't see options at compiling by icpc/icc/





[julia-users] Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Ivar Nesje
That was lots of questions. I'll answer one.

I know to get the best speed, 0.4 is needed. Still, (for the above) what are 
the problems for 0.3? Have most of the fixed speed issues been backported? Is 
Compat.jl needed (or have anything to do with speed?) I think slicing and 
threads stuff (and global?) may be the only exceptions.

We maintain 0.3 for obvious bug fixes, and try to be really careful to not 
introduce new ones. Most of the performance enhancements are deep changes to 
the system that might introduce bugs, so they can't be backported easily.

Compat.jl is a tool you can use to support 0.4 and 0.3 with the same code base, 
without deprecation warnings. It has nothing to do with speed. 


[julia-users] Macro to generate function signature

2015-04-30 Thread David Gold
I'm working (in 0.3.7) on a function that takes two functions f and g as 
inputs, merges all non-conflicting methods of f and g into one function h, 
and returns h. I'm trying to use macros to generate the signatures of each 
method for h:

macro argsgen(typle, n::Int64)
y = eval(:($typle))
xn = symbol("x$n")
return :( $xn :: $(y[n]) )
end

macro repper(typle, n::Int64)
ex = @argsgen($typle, 1)
for i in 2:n
ex = string(ex, ", ", @argsgen($typle, $n))
end

return parse(ex)
end

So, if f has a method with signature (x::Int64), then I can feed the 
1-tuple '(Int64,)' to @argsgen, which generates the proper signature: 

julia ex1 = macroexpand( :(@argsgen((Int64,), 1) ))
:(x1::Int64)

and I can thereby define a method for h:


julia h(@argsgen((Int64,), 1)) = x1 + 1
foobar (generic function with 1 method)

julia h(1)
2

The problem is when I try to do this for a signature of more than one 
argument, say (Int64, Int64). Above I've tried implementing a macro that 
generates an expression of repeated '@argsgen' calls with the appropriate 
second argument:

ex = macroexpand( :( @repper (Int64, Int64) 2 ) )
:((x1::Int64,x2::Int64))

which doesn't quite work when I attempt to use it for a signature:

julia h(@repper (Int64, Int64) 2) = x1 + x2
ERROR: syntax: (x1::Int64,x2::Int64) is not a valid function argument name

How can I get @repper to return an expression that isn't a tuple, but also 
has two @argsgen calls separated by a comma? I've tried a couple of other 
ways of doing this, but each expression wants to be returned as a tuple. 
And I can't find out how to make the tuple splat in such a way that works 
as a signature for h. Any thoughts? (Also, I imagine the tupocalypse will 
break this, but I can deal with that later.) 

Thank you all!
David
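One possible workaround (a sketch of mine, not a tested answer to the question; written for current Julia, where `symbol` is spelled `Symbol`) is to skip string parsing entirely and build the argument list as `Expr` objects, splatting them into a `:call` expression:

```julia
# Build x1::T1, x2::T2, ... as Expr objects instead of parsing strings.
argexprs(types) = [Expr(:(::), Symbol("x$i"), types[i]) for i in 1:length(types)]

# Programmatically define h(x1::Int64, x2::Int64) = x1 + x2:
sig  = Expr(:call, :h, argexprs((Int64, Int64))...)
defn = Expr(:(=), sig, :(x1 + x2))
eval(defn)

h(1, 2)  # 3
```

Because the pieces stay as expressions the whole way through, there is no tuple to splat and no string round-trip, so there is nothing for the parser to reject as an invalid argument name.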


Re: [julia-users] Defining a function in different modules

2015-04-30 Thread Jeff Bezanson
On Thu, Apr 30, 2015 at 5:26 PM, Scott Jones scott.paul.jo...@gmail.com wrote:
 Maybe because it seems that a lot of the major packages have been put into
 Base, so it isn't a problem, as MA Laforge pointed out, leading to Base
 being incredibly large,

That's absurd. There are 500 packages. We added Dates and...what else?
We would like Base to be a bit smaller
(https://github.com/JuliaLang/julia/issues/5155), but incredibly
large is a bit of an overstatement. It's *nothing* compared to
matlab's default namespace for example.

 with stuff that means Julia's MIT license doesn't mean all that much,
 because it includes GPL code by default...

So the license of the entire compiler, runtime, and 90%+ of the
standard library doesn't mean much? Ouch.
In any case Viral started adding a flag to exclude GPL libs last week.
The changes for that are tiny.

I'm still confused about MongoDB vs. TokuMX. In your last post about
them you mentioned using them as drop-in replacements for each other.
But before that you said they are competitors, and won't necessarily
implement the same interface. If they have incompatible interfaces,
how can they be drop-in replacements? I don't get it.


Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Harry B
Sorry my comment wasn't well thought out and a bit off topic. On 
exceptions/errors my issue is this 
https://github.com/JuliaLang/julia/issues/7026
On profiling, I was comparing to Go, but again off topic and I take my 
comment back. I don't have any intelligent remarks to add (yet!) :)
Thank you for the all the work you are doing. 

On Thursday, April 30, 2015 at 7:00:01 PM UTC-7, Tim Holy wrote:

 Harry, I'm curious about 2 of your 3 last points: 

 On Thursday, April 30, 2015 05:50:15 PM Harry B wrote: 
  (exceptions?, debugging, profiling tools) 

 We have exceptions. What aspect are you referring to? 
 Debugger: yes, that's missing, and it's a huge gap. 
 Profiling tools: in my view we're doing OK (better than Matlab, in my 
 opinion), 
 but what do you see as missing? 

 --Tim 

  
  Thanks 
  -- 
  Harry 
  
  On Thursday, April 30, 2015 at 3:43:36 PM UTC-7, Páll Haraldsson wrote: 
  It seemed to me tuples were slow because of Any being used. I understand 
 tuples 
   have been fixed, I'm not sure how. 
   
  I do not remember the post/all the details. Yes, tuples were slow/er 
 than 
   Python. Maybe it was Dict, isn't that kind of a tuple? Now we have 
 Pair in 
   0.4. I do not have 0.4, maybe I should bite the bullet and install.. 
 I'm 
   not doing anything production related and trying things out and using 
   0.3[.5] to avoid stability problems.. Then I can't judge the speed.. 
   
   Another potential issue I saw with tuples (maybe that is not a problem 
 in 
   general, and I do not know that languages do this) is that they can 
 take a 
   lot of memory (to copy around). I was thinking, maybe they should do 
   similar to databases, only use a fixed amount of memory (a page) 
 with a 
   pointer to overflow data.. 
   
    2015-04-30 22:13 GMT+00:00 Ali Rezaee arv@gmail.com:
   They were interesting questions. 
   I would also like to know why poorly written Julia code 
   sometimes performs worse than similar python code, especially when 
 tuples 
   are involved. Did you say it was fixed? 
   
   On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson 
 wrote: 
   Hi, 
   
   [As a best language is subjective, I'll put that aside for a 
 moment.] 
   
   Part I. 
   
   The goal, as I understand, for Julia is at least within a factor of 
 two 
   of C and already matching it mostly and long term beating that (and 
   C++). 
   [What other goals are there? How about 0.4 now or even 1.0..?] 
   
   While that is the goal as a language, you can write slow code in any 
   language and Julia makes that easier. :) [If I recall, Bezanson 
   mentioned 
   it (the global problem) as a feature, any change there?] 
   
   
   I've been following this forum for months and newbies hit the same 
    issues. But almost always without fail, Julia can be sped up 
 (easily as 
   Tim Holy says). I'm thinking about the exceptions to that - are 
 there 
   any 
   left? And about the first code slowness (see Part II). 
   
    Just recently the last two flaws of Julia that I could see were 
 fixed: 
   Decimal floating point is in (I'll look into the 100x slowness, that 
 is 
   probably to be expected of any language, still I think may be a 
   misunderstanding and/or I can do much better). And I understand the 
   tuple 
   slowness has been fixed (that was really the only core language 
   defect). 
   The former wasn't a performance problem (mostly a non existence 
 problem 
   and 
   correctness one (where needed)..). 
   
   
   Still we see threads like this one recent one: 
   
   https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw 
   It seems changing the order of nested loops also helps 
   
   Obviously Julia can't beat assembly but really C/Fortran is already 
   close enough (within a small factor). The above row vs. column major 
   (caching effects in general) can kill performance in all languages. 
   Putting 
   that newbie mistake aside, is there any reason Julia can be within a 
   small 
   factor of assembly (or C) in all cases already? 
   
   
   Part II. 
   
   Except for caching issues, I still want the most newbie code or 
   intentionally brain-damaged code to run faster than at least 
   Python/scripting/interpreted languages. 
   
   Potential problems (that I think are solved or at least not problems 
 in 
   theory): 
   
   1. I know Any kills performance. Still, isn't that the default in 
 Python 
   (and Ruby, Perl?)? Is there a good reason Julia can't be faster than 
 at 
   least all the so-called scripting languages in all cases (excluding 
   small 
   startup overhead, see below)? 
   
   2. The global issue, not sure if that slows other languages down, 
 say 
   Python. Even if it doesn't, should Julia be slower than Python 
 because 
   of 
   global? 
   
   3. Garbage collection. I do not see that as a problem, incorrect? 
 Mostly 
   performance variability ([3D] games - subject for another post, as 
 I'm 
   not sure 

Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Jeff Bezanson
It is true that we have not yet done enough to optimize the worst and
worse performance cases. The bright side of that is that we have room
to improve; it's not that we've run out of ideas and techniques.

Tim is right that the complexity of our dispatch system makes julia
potentially slower than python. But in dispatch-heavy code I've seen
cases where we are faster or slower; it depends.

Python's string and dictionary operations, in particular, are really
fast. This is not surprising considering what the language was
designed for, and that they have a big library of well-tuned C code
for these things.

I still maintain that it is misleading to describe an *asymptotic*
slowdown as "800x slower". If you name a constant factor, it sounds
like you're talking about a constant-factor slowdown. But the number
is arbitrary, because it depends on data size. In theory, of course,
an asymptotic slowdown is *much worse* than a constant-factor
slowdown. However, in the systems world constant factors are often more
important, and are often what we talk about.

You say a lot of the algorithms are O(n) instead of O(1). Are there
any examples other than length()?

I disagree that UTF-8 has no space savings over UTF-32 when using the
full range of unicode. The reason is that strings often have only a
small percentage of non-BMP characters, with lots of spaces and
newlines etc. You don't want your whole file to use 4x the space just
to use one emoji.
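To make the byte-count vs. character-count distinction concrete, a small sketch (written in current syntax; the 2015 string types differ in names but not in the O(n) behavior of length() on UTF-8 data):

```julia
# UTF-8 stores characters in 1-4 bytes, so byte count and character count
# diverge: sizeof() reports the byte buffer size (O(1)), while length()
# must decode the whole string to count characters (O(n)).
s = "caf\u00e9"           # "café": the 'é' occupies 2 bytes in UTF-8
nbytes = sizeof(s)        # 5 bytes
nchars = length(s)        # 4 characters
```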


On Fri, May 1, 2015 at 12:42 AM, Harry B harrysun...@gmail.com wrote:
 Sorry my comment wasn't well thought out and a bit off topic. On
 exceptions/errors my issue is this
 https://github.com/JuliaLang/julia/issues/7026
 On profiling, I was comparing to Go, but again off topic and I take my
 comment back. I don't have any intelligent remarks to add (yet!) :)
 Thank you for all the work you are doing.

 On Thursday, April 30, 2015 at 7:00:01 PM UTC-7, Tim Holy wrote:

 Harry, I'm curious about 2 of your 3 last points:

 On Thursday, April 30, 2015 05:50:15 PM Harry B wrote:
  (exceptions?, debugging, profiling tools)

 We have exceptions. What aspect are you referring to?
 Debugger: yes, that's missing, and it's a huge gap.
 Profiling tools: in my view we're doing OK (better than Matlab, in my
 opinion),
 but what do you see as missing?

 --Tim

 
  Thanks
  --
  Harry
 

[julia-users] Re: Macro to generate function signature

2015-04-30 Thread Peter Brady
If you can include the function name in the macro argument list, this should 
work:

julia> macro repper(fname, args...)
           ex = Expr(:call, fname)
           for (i, arg) in enumerate(args)
               push!(ex.args, Expr(:(::), symbol("x$i"), arg))
           end
           ex
       end

julia> (@repper h Int64 Int64) = x1+x2
h (generic function with 1 method)

julia> h(1, 2)
3

julia> h(1, 4)
5



On Thursday, April 30, 2015 at 3:19:38 PM UTC-6, David Gold wrote:

 I'm working (in 3.7) on a function that takes two functions f and g as 
 inputs, merges all non-conflicting methods of f and g into one function h, 
 and returns h. I'm trying to use macros to generate the signatures of each 
 method for h:

 macro argsgen(typle, n::Int64)
 y = eval(:($typle))
 xn = symbol("x$n")
 return :( $xn :: $(y[n]) )
 end

 macro repper(typle, n::Int64)
 ex = "@argsgen($typle, 1)"
 for i in 2:n
 ex = string(ex, ", ", "@argsgen($typle, $n)")
 end

 return parse(ex)
 end

 So, if f has a method with signature (x::Int64), then I can feed the 
 1-tuple '(Int64,)' to @argsgen, which generates the proper signature: 

 julia> ex1 = macroexpand( :(@argsgen((Int64,), 1) ))
 :(x1::Int64)

 and I can thereby define a method for h:


 julia> h(@argsgen((Int64,), 1)) = x1 + 1
 foobar (generic function with 1 method)

 julia> h(1)
 2

 The problem is when I try to do this for a signature of more than one 
 argument, say (Int64, Int64). Above I've tried implementing a macro that 
 generates an expression of repeated '@argsgen' calls with the appropriate 
 second argument:

 ex = macroexpand( :( @repper (Int64, Int64) 2 ) )
 :((x1::Int64,x2::Int64))

 which doesn't quite work when I attempt to use it for a signature:

 julia> h(@repper (Int64, Int64) 2) = x1 + x2
 ERROR: syntax: (x1::Int64,x2::Int64) is not a valid function argument 
 name

 How can I get @repper to return an expression that isn't a tuple, but also 
 has two @argsgen calls separated by a comma? I've tried a couple of other 
 ways of doing this, but each expression wants to be returned as a tuple. 
 And I can't find out how to make the tuple splat in such a way that works 
 as a signature for h. Any thoughts? (Also, I imagine the tupocalypse will 
 break this, but I can deal with that later.) 

 Thank you all!
 David



Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Páll Haraldsson
I wouldn't expect a difference in Julia for code like that (I didn't check).
But I guess what we are often seeing is someone comparing tuned Python
code to newbie Julia code. I still want Julia faster than that code..
(assuming the same algorithm; note the row- vs. column-major caveat).

My main point: *should* Python ever win?

2015-04-30 21:36 GMT+00:00 Sisyphuss zhengwend...@gmail.com:

 This post interests me. I'll write something here to follow this post.

 The performance gap between normal code in Python and badly-written code
 in Julia is something I'd like to understand too.
 As far as I know, the Python interpreter does some mysterious optimizations.
 For example, `(x**2)**2` is 100x faster than `x**4`.




 On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:


 Hi,

 [As a best language is subjective, I'll put that aside for a moment.]

 Part I.

 The goal, as I understand, for Julia is at least within a factor of two
 of C and already matching it mostly and long term beating that (and C++).
 [What other goals are there? How about 0.4 now or even 1.0..?]

 While that is the goal as a language, you can write slow code in any
 language and Julia makes that easier. :) [If I recall, Bezanson mentioned
 it (the global problem) as a feature, any change there?]


 I've been following this forum for months and newbies hit the same
 issues. But almost always, without fail, Julia can be sped up (easily, as
 Tim Holy says). I'm thinking about the exceptions to that - are there any
 left? And about the first code slowness (see Part II).

 Just recently the last two flaws of Julia that I could see were fixed:
 Decimal floating point is in (I'll look into the 100x slowness; that is
 probably to be expected of any language, though I think there may be a
 misunderstanding and/or I can do much better). And I understand the tuple
 slowness has been fixed (that was really the only core language defect).
 The former wasn't a performance problem (mostly a non-existence problem and
 a correctness one (where needed)..).


 Still we see threads like this one recent one:

 https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
 It seems changing the order of nested loops also helps

 Obviously Julia can't beat assembly but really C/Fortran is already close
 enough (within a small factor). The above row vs. column major (caching
 effects in general) can kill performance in all languages. Putting that
 newbie mistake aside, is there any reason Julia can be within a small
 factor of assembly (or C) in all cases already?


 Part II.

 Except for caching issues, I still want the most newbie code or
 intentionally brain-damaged code to run faster than at least
 Python/scripting/interpreted languages.

 Potential problems (that I think are solved or at least not problems in
 theory):

 1. I know Any kills performance. Still, isn't that the default in Python
 (and Ruby, Perl?)? Is there a good reason Julia can't be faster than at
 least all the so-called scripting languages in all cases (excluding small
 startup overhead, see below)?

 2. The global issue, not sure if that slows other languages down, say
 Python. Even if it doesn't, should Julia be slower than Python because of
 global?

 3. Garbage collection. I do not see that as a problem, incorrect? Mostly
 performance variability ([3D] games - subject for another post, as I'm
 not sure that is even a problem in theory..). Should reference counting
 (Python) be faster? On the contrary, I think RC and even manual memory
 management could be slower.

 4. Concurrency, see nr. 3. GC may or may not have an issue with it. It
 can be a problem, what about in Julia? There are concurrent GC algorithms
 and/or real-time (just not in Julia). Other than GC is there any big
 (potential) problem for concurrent/parallel? I know about the threads work
 and new GC in 0.4.

 5. Subarrays (array slicing?). Not really what I consider a problem,
 compared to say C (and Python?). I know 0.4 did optimize it, but what
 languages do similar stuff? Functional ones?

 6. In theory, pure functional languages should be faster. Are they in
 practice in many or any case? Julia has non-mutable state if needed but
 maybe not as powerful? This seems a double-edged sword. I think Julia
 designers intentionally chose mutable state to conserve memory. Pros and
 cons? Mostly Pros for Julia?

 7. Startup time. Python is faster and for say web use, or compared to PHP
 could be an issue, but would be solved by not doing CGI-style web. How
 good/fast is Julia/the libraries right now for say web use? At least for
 long running programs (intended target of Julia) startup time is not an
 issue.

 8. MPI, do not know enough about it and parallel in general, seems you
 are doing a good job. I at least think there is no inherent limitation. At
 least Python is not in any way better for parallel/concurrent?

 9. Autoparallel. Julia doesn't try to be, but could (be an addon?). Is
 anyone doing really good and 

[julia-users] Re: Macro to generate function signature

2015-04-30 Thread Peter Brady
Looks like I deleted my post.

If you can include the function name in the macro argument list, this should 
work:

julia> macro repper(fname, args...)
           ex = Expr(:call, fname)
           for (i, arg) in enumerate(args)
               push!(ex.args, Expr(:(::), symbol("x$i"), arg))
           end
           ex
       end

julia> (@repper h Int64 Int64) = x1+x2
h (generic function with 1 method)

julia> h(1, 2)
3

julia> h(1, 4)
5







Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Páll Haraldsson
It seemed to me tuples were slow because Any was used. I understand tuples
have been fixed; I'm not sure how.

I do not remember the post/all the details. Yes, tuples were slow/er than
Python. Maybe it was Dict; isn't that kind of a tuple? Now we have Pair in
0.4. I do not have 0.4; maybe I should bite the bullet and install.. I'm
not doing anything production related, just trying things out, and using
0.3[.5] to avoid stability problems.. So I can't judge the speed..

Another potential issue I saw with tuples (maybe that is not a problem in
general, and I do not know that languages do this) is that they can take a
lot of memory (to copy around). I was thinking, maybe they should do
similar to databases, only use a fixed amount of memory (a page) with a
pointer to overflow data..

2015-04-30 22:13 GMT+00:00 Ali Rezaee arv.ka...@gmail.com:

 They were interesting questions.
 I would also like to know why poorly written Julia code sometimes performs
 worse than similar python code, especially when tuples are involved. Did
 you say it was fixed?

 On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:



[julia-users] Re: [ANN] DecFP.jl - decimal floating-point math

2015-04-30 Thread Páll Haraldsson
[Thanks for this addition..! I've been putting this off as I didn't really 
need it personally; I just wrapped PyDecimal as an exercise..]

As far as I know, numbers in JSON are defined to be binary floating point 
(as that is all JavaScript has), but implementations are allowed to interpret 
them otherwise, or something.. Not sure how that really works in practice.

What I would think is the first priority is to change ODBC.jl. I noticed it 
uses binary floating point for the DECIMAL/NUMERIC type.. There was no good 
alternative before, and as that type isn't really meant to be binary - and 
there is a separate SQL binary type if that is what you want - I'm not sure 
there is any harm in changing; only positives (to at least some decimal 
type; I do not think arbitrary precision is needed or preferred, but I'm not 
sure).

The sooner ODBC.jl is changed, the fewer will notice the change. Maybe this 
applies to other DB wrappers too (I didn't check; there are others).

Would a switch be needed in any such cases, defaulting to the new way? 
If the default were binary, people would probably never change it or know 
about it.. And I guess likewise if the default were decimal.. they probably 
would not care about compatibility with the old/incorrect way..

In general, is there a better way to handle broken compatibility? I know 
about Compat.jl; it seems to work at the syntax level and can't work 
for/wasn't made for this.

-- 
Palli.

On Wednesday, April 29, 2015 at 12:08:48 PM UTC, Scott Jones wrote:

 One place where I'd like to see this used is with the JSON parser... the 
 current JSON parser can't represent all numbers; it's a potentially lossy 
 transformation from JSON numbers to binary floats... Although JavaScript 
 only has the equivalent of Float64 for all numbers, the good JSON parsers 
 that I've seen have the option of using something like Python's Decimal or 
 Java's BigDecimal.   This would get us part of the way there... you really 
 do need to use BigInt and an arbitrary-precision decimal floating-point 
 type to correctly handle all numbers without losing information (this is 
 why I'd also like to see a wrapper for the decNumber package: it supports 
 all 6 fixed-width formats and an arbitrary-precision format, it supports 
 many more platforms such as IBM's Power, and might even be usable for ARM 
 builds).

 Great stuff!

 On Tuesday, April 28, 2015 at 9:26:17 PM UTC-4, Steven G. Johnson wrote:

 The DecFP package

   https://github.com/stevengj/DecFP.jl

 provides 32-bit, 64-bit, and 128-bit binary-encoded decimal 
 floating-point types following the IEEE 754-2008, implemented as a wrapper 
 around the (BSD-licensed) Intel Decimal Floating-Point Math Library 
 https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library.
   
 Decimal floating-point types are useful in situations where you need to 
 exactly represent decimal values, typically human inputs.

 As software floating point, this is about 100x slower than hardware 
 binary floating-point math.  On the other hand, it is significantly 
 (10-100x) faster than arbitrary-precision decimal arithmetic, and is a 
 memory-efficient bitstype.

 The basic arithmetic functions, conversions from other numeric types, and 
 numerous special functions are supported.



[julia-users] Re: [ANN] DecFP.jl - decimal floating-point math

2015-04-30 Thread Páll Haraldsson
In general, you should be able to use the DecFP types in any context where 
you would have used binary floating-point types: arrays, complex 
arithmetic, and linear algebra should all work, for the most part.

Way better than what I was going for - I thought only +, -, *, / was needed 
and maybe only wanted.. plus convert but not automatic:

Mixed operations involving decimal and binary floating-point or integer 
types are supported (the result is promoted to decimal floating-point).

Is that really advisable? Is that something you do, or the C library? In C 
you have automatic promotion between native types, but I guess you can't get 
that otherwise. The library you wrap provides converts (good); you make them 
automatic (bad?).

At first blush, as Julia is generic by default, this seems what you would 
want, but is it? When I thought about this, it seemed just dangerous. If 
you force people to use manual conversion you might get away with wrapping 
fewer functions?
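For reference, the promotion machinery being discussed can be sketched with a toy type (MyDec here is a made-up stand-in, not DecFP's actual type): declaring a promote_rule plus relying on the default convert/constructor is enough for Julia's generic Number fallbacks to handle mixed arithmetic.

```julia
# Hypothetical wrapper type standing in for a decimal float (not DecFP).
struct MyDec <: Real
    x::Float64
end

# Declare the promotion target: mixed MyDec/Int operations become MyDec.
Base.promote_rule(::Type{MyDec}, ::Type{Int}) = MyDec

# Only the same-type method is defined; the generic fallback
# +(x::Number, y::Number) = +(promote(x, y)...) supplies the mixed case.
Base.:+(a::MyDec, b::MyDec) = MyDec(a.x + b.x)

result = MyDec(1.5) + 2   # promoted to MyDec(1.5) + MyDec(2.0) == MyDec(3.5)
```

This is why a wrapper need not define every mixed-type method explicitly; whether automatic promotion is *wise* for decimals is the question raised above.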

Most basic arithmetic functions are supported, and many special functions (
sqrt, log, trigonometric functions, etc.).

I'm not sure if the library you wrap provides these natively - or if it 
converts to binary floating point, runs the function, then converts back? 
Whether it does that or you do it, is it not better to let the user decide on 
both converts? And faster if you eliminate some..

I just scanned the code of one file:

for c in (:π, :e, :γ, :catalan, :φ)

Not sure if these are the only Julia irrational constants.. or only the ones 
you care about. Anyway, I wasn't expecting to see them in any decimal 
context. I would not be rich with π dollars in my account. :)

-- 
Palli.

On Wednesday, April 29, 2015 at 1:26:17 AM UTC, Steven G. Johnson wrote:




[julia-users] Re: [ANN] DecFP.jl - decimal floating-point math

2015-04-30 Thread Páll Haraldsson

On Thursday, April 30, 2015 at 11:56:28 PM UTC, Páll Haraldsson wrote:

 In general, you should be able to use the DecFP types in any context 
 where you would have used binary floating-point types: arrays, complex 
 arithmetic, and linear algebra should all work, for the most part.

 Way better than what I was going for - I thought only +, -, *, / was 
 needed and maybe only wanted.. plus convert but not automatic:

 s/better/more/
 

 Mixed operations involving decimal and binary floating-point or integer 
 types are supported (the result is promoted to decimal floating-point).

  

 Is that really advised? Is that what you do or the C library? In C you 
 have automatic promotion between native types, but I guess you can't 
 otherwise. The library you wrap provides converts (good), you make them 
 automatic (bad?).


Maybe I should have stopped about here. The above is the dangerous part; I 
will clarify below:

At first blush, as Julia is generic by default, this seems what you would 
 want, but is it? When I thought about this, it seemed just dangerous. If 
 you force people to use manual conversion you might get away with wrapping 
 fewer functions?

 Most basic arithmetic functions are supported, and many special functions 
 (sqrt, log, trigonometric functions, etc.).


The functions aren't dangerous per se. But aren't they, and the irrational 
constants, a sign that you want binary floating point? Are you converting 
back and forth for type stability reasons? If you are doing that because 
type instability is slow (and yes, slower than either binary or decimal), 
wouldn't a manual convert give you a hint and force you to convert to binary 
floating point - faster, and what you actually want?

I've also been thinking about the numeric tower. Is this for sure ok:

abstract DecimalFloatingPoint <: FloatingPoint

What are the implications? Are you saying Decimal is similar to Binary (on 
the same inheritance level)? Maybe I'm confused, as all non-leaf types 
are abstract.

Should it be:

abstract DecimalFloatingPoint <: Real

What are then the implications?
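The two placements being asked about can be sketched side by side (illustrative names only; note that FloatingPoint was later renamed AbstractFloat, and the `abstract type ... end` form is the current syntax for the 0.3-era `abstract X <: Y`):

```julia
# Placement 1: sibling of the binary float types, under the float abstraction.
abstract type DecimalAsFloat <: AbstractFloat end
# Placement 2: directly under Real, bypassing the float abstraction.
abstract type DecimalAsReal <: Real end

# Under placement 1, decimal types inherit every generic method written
# against AbstractFloat; under placement 2 they only get Real's.
@assert DecimalAsFloat <: AbstractFloat && DecimalAsFloat <: Real
@assert DecimalAsReal <: Real && !(DecimalAsReal <: AbstractFloat)
```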

I'm also thinking about other decimal types, e.g. arbitrary precision. They 
need no NaN (is NaN the same for binary and decimal..?). I do not think 
BigFloat has NaN, and it can still be under FloatingPoint, so again, maybe 
I'm just confused.

-- 
Palli.





Re: [julia-users] Re: function similar to matlab tabulate

2015-04-30 Thread Tim Holy
See collect(). But in Julia, it's often the case that you don't need to do 
things that way. For example,

for (key,value) in mydict
# do something with key and value
end

might be faster because it does not end up allocating any memory.
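Applied to the countmap case above, a sketch (the Dict is built by hand here so the example is self-contained; StatsBase's countmap returns an equivalent Dict):

```julia
# Build a countmap-style Dict, then compare collect() with direct iteration.
function frequency_demo()
    counts = Dict{String,Int}()
    for w in ["a", "b", "a"]
        counts[w] = get(counts, w, 0) + 1
    end
    pairs_array = collect(counts)   # allocates an array of the entries
    total = 0
    for (key, value) in counts      # iterates without the extra array
        total += value
    end
    return length(pairs_array), total
end

frequency_demo()   # (2, 3)
```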

--Tim

On Thursday, April 30, 2015 02:05:37 PM Alexandros Fakos wrote:
 Thanks a lot. countmap returns a dictionary but I would prefer an array.
 How can I do that?
 
 Thank you
 
 On Thursday, April 30, 2015 at 4:41:43 PM UTC-4, Johan Sigfrids wrote:
  countmap in the StatsBase.jl package does this.
  
  On Thursday, April 30, 2015 at 11:11:37 PM UTC+3, Alexandros Fakos wrote:
  Hi,
  
  Is there a way to get a table of frequencies of the unique values in an
  array in Julia?
  Something like matlab's tabulate
  
  Thanks a lot,
  Alex



[julia-users] Re: How to optimize the collect function?

2015-04-30 Thread Harry B

FWIW, there was a 10x difference between Julia-0.4.0-dev-8fc5b4e605 (Apr 5 
version) and Julia-0.4.0-dev-dea3d0e420.app (today). So make sure which 
nightly you are comparing.
(this is a Mac OS X 10.9.5, early-2013 MacBook Pro)

Julia-0.4.0-dev-8fc5b4e605.app  = 120 ms
Julia-0.4.0-dev-dea3d0e420.app = 17 ms
Python 2.7.8 shows 8ms similar to your numbers

On Thursday, April 30, 2015 at 4:53:44 PM UTC-7, Ali Rezaee wrote:

 I tried on a Linux desktop, and I still get similar timings. Does anyone 
 else get different timings?

 On Friday, May 1, 2015 at 12:43:24 AM UTC+2, Ali Rezaee wrote:

 Dear all,

 I have ported all of my project from Python to Julia. However, there is 
 one part of my code that I have not been able to optimize much. It is 
 between 3 and 9 times slower than similar Python code.
 Do you have any suggestions / explanations?

 Julia code:
 function collect_zip(a::Array=["a","b","c","d"], b::Array=[0.2,0.1,0.1,0.6])
   c = collect(zip(a,b))
 end

 collect_zip()
 @time for i in 1:1; j= collect_zip();end # elapsed time: 0.089672328 
 seconds (9 MB allocated)

 Python code:
 def listzip(a = ["a","b","c","d"], b = [0.2,0.1,0.1,0.6]):
     c = list(zip(a,b))
 a = timeit.timeit("listzip()", "from __main__ import listzip", number=1)
 print(a) # 0.0105497862674

 P.S: I need to convert the zip to an array because after that I need to 
 sort them.

 Thanks a lot,



Re: [julia-users] Build error: could not open "flisp.boot".

2015-04-30 Thread cdm
https://groups.google.com/d/msg/julia-users/jpGtvFUakqY/Nkf_bX6a2TAJ

On Thursday, April 30, 2015 at 5:45:59 PM UTC-7, cdm wrote:


 if i recall correctly, koding.com allows for a shared console
 (but i certainly could be mis-recollecting ...), but Julia definitely
 runs on their system ...

 best,

 cdm



Re: [julia-users] Build error: could not open "flisp.boot".

2015-04-30 Thread cdm

if i recall correctly, koding.com allows for a shared console
(but i certainly could be mis-recollecting ...), but Julia definitely
runs on their system ...

best,

cdm


Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Scott Jones


On Thursday, April 30, 2015 at 6:34:23 PM UTC-4, Páll Haraldsson wrote:

 Interesting.. does that mean Unicode then that is esp. faster or something 
 else?

 800x faster is way worse than I thought and no good reason for it..


That particular case is because CPython (which is the standard C 
implementation of Python, what most people mean when they use Python), has 
optimized the case of

var += string

which is appending to a variable.

Although strings *are* immutable in Python, as in Julia, Python detects 
that you are replacing a string with the string concatenated with another, 
and if
nobody else has a reference to the string in that variable, it can simply 
update the string in place, and otherwise, it makes a new string big enough 
for the result,
and sets the variable to that new string.
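Julia's immutable strings have no such in-place trick, so a loop of repeated concatenation is O(n²); the usual workaround is to accumulate in an IOBuffer (a sketch using current-Julia names; on 0.3/0.4 the last step was `takebuf_string`):

```julia
# O(n^2): every *= allocates a fresh copy of the whole string so far
function concat_naive(n)
    s = ""
    for _ in 1:n
        s *= "x"
    end
    return s
end

# O(n): append into a growable buffer, materialize the string once
function concat_buffer(n)
    buf = IOBuffer()
    for _ in 1:n
        print(buf, "x")
    end
    return String(take!(buf))
end

@assert concat_naive(1000) == concat_buffer(1000)
```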
 

 I'm really intrigued what is this slow, can't be the simple things like 
 say just string concatenation?!

 You can get similar speed using PyCall.jl :)


I'm not so sure... I don't really think so - because you still have to move 
the string from Julia (which uses either ASCII or UTF-8 for strings by 
default, you have to specifically
convert them to get UTF-16 or UTF-32...) to Python, and then back... and 
Julia's string conversions are rather slow... O(n^2) in most cases...
(I'm working on improving that; I hope I can get my changes accepted into 
Julia's Base)

 For some obscure function like Levenshtein distance I might expect this (or 
 not implemented yet in Julia) as Python would use tuned C code or in any 
 function where you need to do non-trivial work per function-call.


 I failed to add regex to the list as an example as I was pretty sure it 
 was as fast (or faster, because of macros) as Perl as it is using the same 
 library.

 Similarly for all Unicode/UTF-8 stuff I was not expecting slowness. I know 
 the work on that in Python2/3 and expected Julia could/did similar.


No, a lot of the algorithms are O(n) instead of O(1), because of the 
decision to use UTF-8...
I'd like to convince the core team to change Julia to do what Python 3 does.
UTF-8 is pretty bad to use for internal string representation (where it 
shines is as an interchange format).
UTF-8 can take up to 50% more storage than UTF-16 if you are just dealing 
with BMP characters.
If you have some field that needs to hold a certain number of Unicode 
characters, for the full range of Unicode,
you need to allocate 4 bytes for every character, so no savings compared to 
UTF-16 or UTF-32.
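The storage trade-off is easy to check for BMP text outside Latin-1 (byte counts below are for the UTF-8 encoding Julia uses by default):

```julia
s = "日本語"              # three BMP (CJK) characters

@assert length(s) == 3   # character count
@assert sizeof(s) == 9   # UTF-8 spends 3 bytes per CJK character
# the same text in UTF-16 would need 3 × 2 = 6 bytes — the ~50% overhead above
```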

Python 3 internally stores strings as either: 7-bit (ASCII), 8-bit (ANSI 
Latin1, only characters < 0x100 present), 16-bit (UCS-2, i.e. there are no 
non-BMP characters present),
or 32-bit (UTF-32).  You might wonder why there is a special distinction 
between 7-bit ASCII and 8-bit ANSI Latin 1... they are both Unicode 
subsets, but 7-bit ASCII
can also be used directly without conversion as UTF-8.
All internal formats are directly addressable (unlike Julia's UTF8String 
and UTF16String), and the conversions between the 4 internal types are very 
fast, simple widening (or a no-op, as in the case of ASCII -> ANSI), when going from 
smaller to larger.
smaller to larger.

Julia also has a big problem with always wanting to have a terminating \0 
byte or word, which means that you can't take a substring or slice of 
another string without
making a copy to be able to add that terminating \0 (so lots of extra 
memory allocation and garbage collection for common algorithms).

I hope that makes things a bit clearer!

Scott


Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Tim Holy
Harry, I'm curious about 2 of your 3 last points:

On Thursday, April 30, 2015 05:50:15 PM Harry B wrote:
 (exceptions?, debugging, profiling tools)

We have exceptions. What aspect are you referring to?
Debugger: yes, that's missing, and it's a huge gap.
Profiling tools: in my view we're doing OK (better than Matlab, in my opinion), 
but what do you see as missing? 

--Tim

 
 Thanks
 --
 Harry
 
 On Thursday, April 30, 2015 at 3:43:36 PM UTC-7, Páll Haraldsson wrote:
   It seemed to me tuples were slow because of Any being used. I understand tuples
   have been fixed, I'm not sure how.
  
   I do not remember the post/all the details. Yes, tuples were slow/er than
  Python. Maybe it was Dict, isn't that kind of a tuple? Now we have Pair in
  0.4. I do not have 0.4, maybe I should bite the bullet and install.. I'm
  not doing anything production related and trying things out and using
  0.3[.5] to avoid stability problems.. Then I can't judge the speed..
  
  Another potential issue I saw with tuples (maybe that is not a problem in
  general, and I do not know that languages do this) is that they can take a
  lot of memory (to copy around). I was thinking, maybe they should do
  similar to databases, only use a fixed amount of memory (a page) with a
  pointer to overflow data..
  
   2015-04-30 22:13 GMT+00:00 Ali Rezaee arv@gmail.com:
  They were interesting questions.
  I would also like to know why poorly written Julia code
  sometimes performs worse than similar python code, especially when tuples
  are involved. Did you say it was fixed?
  
  On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:
  Hi,
  
  [As a best language is subjective, I'll put that aside for a moment.]
  
  Part I.
  
  The goal, as I understand, for Julia is at least within a factor of two
  of C and already matching it mostly and long term beating that (and
  C++).
  [What other goals are there? How about 0.4 now or even 1.0..?]
  
  While that is the goal as a language, you can write slow code in any
  language and Julia makes that easier. :) [If I recall, Bezanson
  mentioned
  it (the global problem) as a feature, any change there?]
  
  
  I've been following this forum for months and newbies hit the same
   issues. But almost always without fail, Julia can be sped up (easily as
  Tim Holy says). I'm thinking about the exceptions to that - are there
  any
  left? And about the first code slowness (see Part II).
  
   Just recently the last two flaws of Julia that I could see were fixed:
  Decimal floating point is in (I'll look into the 100x slowness, that is
  probably to be expected of any language, still I think may be a
  misunderstanding and/or I can do much better). And I understand the
  tuple
  slowness has been fixed (that was really the only core language
  defect).
  The former wasn't a performance problem (mostly a non existence problem
  and
  correctness one (where needed)..).
  
  
  Still we see threads like this one recent one:
  
  https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
  It seems changing the order of nested loops also helps
  
  Obviously Julia can't beat assembly but really C/Fortran is already
  close enough (within a small factor). The above row vs. column major
  (caching effects in general) can kill performance in all languages.
  Putting
  that newbie mistake aside, is there any reason Julia can be within a
  small
  factor of assembly (or C) in all cases already?
  
  
  Part II.
  
  Except for caching issues, I still want the most newbie code or
  intentionally brain-damaged code to run faster than at least
  Python/scripting/interpreted languages.
  
  Potential problems (that I think are solved or at least not problems in
  theory):
  
  1. I know Any kills performance. Still, isn't that the default in Python
  (and Ruby, Perl?)? Is there a good reason Julia can't be faster than at
  least all the so-called scripting languages in all cases (excluding
  small
  startup overhead, see below)?
  
  2. The global issue, not sure if that slows other languages down, say
  Python. Even if it doesn't, should Julia be slower than Python because
  of
  global?
  
  3. Garbage collection. I do not see that as a problem, incorrect? Mostly
  performance variability ([3D] games - subject for another post, as I'm
  not sure that is even a problem in theory..). Should reference counting
  (Python) be faster? On the contrary, I think RC and even manual memory
  management could be slower.
  
  4. Concurrency, see nr. 3. GC may or may not have an issue with it. It
  can be a problem, what about in Julia? There are concurrent GC
  algorithms
  and/or real-time (just not in Julia). Other than GC is there any big
  (potential) problem for concurrent/parallel? I know about the threads
  work
  and new GC in 0.4.
  
  5. Subarrays (array slicing?). Not really what I consider a problem,
  compared to say C (and Python?). I know 0.4 did optimize it, but what
  

Re: [julia-users] Re: Performance of Distributed Arrays

2015-04-30 Thread Jake Bolewski
Yes, performance will be largely the same on 0.4.

If you have to do any performance-sensitive code at scale, MPI is really the 
only option I can recommend now.  I don't know what you are trying to do but 
the MPI.jl library is a bit incomplete so it would be great if you used it 
and could contribute back in some way.  All the basic operations should be 
covered.
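For reference, a minimal MPI.jl program looks roughly like this (names follow the MPI.jl README; launched with something like `mpirun -np 4 julia hello.jl`):

```julia
using MPI  # assumes the MPI.jl package and a working MPI installation

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)       # this process's id, 0-based
nprocs = MPI.Comm_size(comm)     # total number of ranks
println("Hello from rank $rank of $nprocs")
MPI.Barrier(comm)                # synchronize before exiting
MPI.Finalize()
```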

-Jake

On Thursday, April 30, 2015 at 12:29:15 PM UTC-4, Ángel de Vicente wrote:

 Hi Jake, 

  Jake Bolewski jakebo...@gmail.com writes: 
  DistributedArray performance is pretty bad.  The reason for removing 
  them from base was to spur their development.  All I can say at this 
  time is that we are actively working on making their performance 
  better. 

 OK, thanks. Should I try with the DistributedArray package in 0.4-dev or 
 for the moment the performance will be similar? 


  For every parallel program you have implicit serial overhead (this is 
  especially true with multiprocessing).  The fraction of serial work to 
  parallel work determines your potential parallel speedup.  The 
  parallel work / serial overhead in this case is really bad, so I don't 
  think your observation is really surprising.  If this is on a shared 
  memory machine I would try using SharedArray's as the serial 
  communication overhead will be lower, and the potential parallel 
  speedup much higher.  DistributedArrays only really make sense if they 
  are in fact distributed over multiple machines. 

 I will try SharedArray's, but the goal is to be able to run the code 
 (not this one :-)) over distributed machines. For the moment my only 
 hope is MPI.jl then? 

 Thanks, 
 -- 
 Ángel de Vicente 
 http://www.iac.es/galeria/angelv/   
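The SharedArray route Jake suggests, on a single shared-memory machine, looks roughly like this (0.4-era names; in later versions `@parallel` became `@distributed` and the constructor became `SharedArray{Float64}(n)`):

```julia
addprocs(2)                          # local worker processes

S = SharedArray(Float64, 100)        # one array mapped into every worker
@sync @parallel for i in 1:length(S)
    S[i] = i^2                       # each worker writes its own chunk
end

@assert S[10] == 100.0
```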



Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Harry B
a newbie comment: if it can be made a bit easier to write code that 
uses all the cores (I am comparing to Go with its channels), it probably 
doesn't need to be faster than Python. 

From an outsider's perspective, @everywhere is inconvenient. pmap etc 
doesn't cover nearly as many cases as Go channels. Maybe it is a 
documentation problem.

I wouldn't think it would be good to try to extract every last bit of speed 
when you are at 0.4.. there are so many things to clean up/build in the 
language and standard library (exceptions?, debugging, profiling tools)

Thanks
--
Harry

On Thursday, April 30, 2015 at 3:43:36 PM UTC-7, Páll Haraldsson wrote:

 It seemed to me tuples were slow because of Any being used. I understand tuples 
 have been fixed, I'm not sure how.

 I do not remember the post/all the details. Yes, tuples were slow/er than 
 Python. Maybe it was Dict, isn't that kind of a tuple? Now we have Pair in 
 0.4. I do not have 0.4, maybe I should bite the bullet and install.. I'm 
 not doing anything production related and trying things out and using 
 0.3[.5] to avoid stability problems.. Then I can't judge the speed..

 Another potential issue I saw with tuples (maybe that is not a problem in 
 general, and I do not know that languages do this) is that they can take a 
 lot of memory (to copy around). I was thinking, maybe they should do 
 similar to databases, only use a fixed amount of memory (a page) with a 
 pointer to overflow data..

 2015-04-30 22:13 GMT+00:00 Ali Rezaee arv@gmail.com:

 They were interesting questions.
 I would also like to know why poorly written Julia code 
 sometimes performs worse than similar python code, especially when tuples 
 are involved. Did you say it was fixed?

 On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:


 Hi,

 [As a best language is subjective, I'll put that aside for a moment.]

 Part I.

 The goal, as I understand, for Julia is at least within a factor of two 
 of C and already matching it mostly and long term beating that (and C++). 
 [What other goals are there? How about 0.4 now or even 1.0..?]

 While that is the goal as a language, you can write slow code in any 
 language and Julia makes that easier. :) [If I recall, Bezanson mentioned 
 it (the global problem) as a feature, any change there?]


 I've been following this forum for months and newbies hit the same 
 issues. But almost always without fail, Julia can be sped up (easily as 
 Tim Holy says). I'm thinking about the exceptions to that - are there any 
 left? And about the first code slowness (see Part II).

 Just recently the last two flaws of Julia that I could see were fixed: 
 Decimal floating point is in (I'll look into the 100x slowness, that is 
 probably to be expected of any language, still I think may be a 
 misunderstanding and/or I can do much better). And I understand the tuple 
 slowness has been fixed (that was really the only core language defect). 
 The former wasn't a performance problem (mostly a non existence problem and 
 correctness one (where needed)..).


 Still we see threads like this one recent one:

 https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
 It seems changing the order of nested loops also helps

 Obviously Julia can't beat assembly but really C/Fortran is already 
 close enough (within a small factor). The above row vs. column major 
 (caching effects in general) can kill performance in all languages. Putting 
 that newbie mistake aside, is there any reason Julia can be within a small 
 factor of assembly (or C) in all cases already?


 Part II.

 Except for caching issues, I still want the most newbie code or 
 intentionally brain-damaged code to run faster than at least 
 Python/scripting/interpreted languages.

 Potential problems (that I think are solved or at least not problems in 
 theory):

 1. I know Any kills performance. Still, isn't that the default in Python 
 (and Ruby, Perl?)? Is there a good reason Julia can't be faster than at 
 least all the so-called scripting languages in all cases (excluding small 
 startup overhead, see below)?

 2. The global issue, not sure if that slows other languages down, say 
 Python. Even if it doesn't, should Julia be slower than Python because of 
 global?

 3. Garbage collection. I do not see that as a problem, incorrect? Mostly 
 performance variability ([3D] games - subject for another post, as I'm 
 not sure that is even a problem in theory..). Should reference counting 
 (Python) be faster? On the contrary, I think RC and even manual memory 
 management could be slower.

 4. Concurrency, see nr. 3. GC may or may not have an issue with it. It 
 can be a problem, what about in Julia? There are concurrent GC algorithms 
 and/or real-time (just not in Julia). Other than GC is there any big 
 (potential) problem for concurrent/parallel? I know about the threads work 
 and new GC in 0.4.

 5. Subarrays (array slicing?). Not really what I consider 

[julia-users] Re: How to optimize the collect function?

2015-04-30 Thread Qian Long
I use Linux and got similar results as Jiahao:

Julia 0.3.7: 43ms
Julia 0.4.0-dev+4572: 2ms
python 2.7.6: 7ms

Recently, Julia has greatly improved the performance of the Tuple type. Did you 
update your Julia 0.4 to the newest nightly?

On Friday, May 1, 2015 at 8:53:44 AM UTC+9, Ali Rezaee wrote:

 I tried on a Linux desktop, and I still get similar timings. Does anyone 
 else get different timings?

 On Friday, May 1, 2015 at 12:43:24 AM UTC+2, Ali Rezaee wrote:

 Dear all,

 I have ported all of my project from Python to Julia. However, there is 
 one part of my code that I have not been able to optimize much. It is 
 between 3 and 9 times slower than similar Python code.
 Do you have any suggestions / explanations?

 Julia code:
  function collect_zip(a::Array=["a","b","c","d"], b::Array=[0.2,0.1,0.1,0.6])
   c = collect(zip(a,b))
 end

 collect_zip()
 @time for i in 1:1; j= collect_zip();end # elapsed time: 0.089672328 
 seconds (9 MB allocated)

 Python code:
  def listzip(a = ["a","b","c","d"], b = [0.2,0.1,0.1,0.6]):
      c = list(zip(a,b))
  a = timeit.timeit("listzip()", "from __main__ import listzip", number=1)
 print(a) # 0.0105497862674

 P.S: I need to convert the zip to an array because after that I need to 
 sort them.

 Thanks a lot,



[julia-users] Re: How to optimize the collect function?

2015-04-30 Thread Scott T
Oh cool, this is the one thing that I've actually contributed to base 
Julia! In this pull request https://github.com/JuliaLang/julia/pull/10961 we 
found and fixed a type instability in the collect function. Julia v0.3 or 
v0.4 up until a few days ago will still suffer from this. You could write 
your own version of the function using the information in that pull request 
if you want the speed increase now (i.e. make a function called something 
like my_collect that reads like this 
https://github.com/JuliaLang/julia/blob/63dac6dd531c87be9fa9e504f1afc050a838635b/base/array.jl#L255),
 
or update to a newer version of 0.4-dev. My timings are 5 ms in Python 
2.7.8 and 2.7 ms in the latest Julia 0.4. 

Cheers,

Scott T
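In the meantime, the collect call can be sidestepped entirely with a comprehension, which was already type-stable (a workaround sketch, not the patched Base code):

```julia
a = ["a", "b", "c", "d"]
b = [0.2, 0.1, 0.1, 0.6]

# build the tuples directly instead of collect(zip(a, b))
c = [(a[i], b[i]) for i in 1:length(a)]
sort!(c, by = last)               # sortable by weight, as the OP needs
@assert first(c[end]) == "d" && last(c[end]) == 0.6
```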

On Thursday, 30 April 2015 23:43:24 UTC+1, Ali Rezaee wrote:

 Dear all,

 I have ported all of my project from Python to Julia. However, there is one 
 part of my code that I have not been able to optimize much. It is between 3 
 and 9 times slower than similar Python code.
 Do you have any suggestions / explanations?

 Julia code:
 function collect_zip(a::Array=["a","b","c","d"], b::Array=[0.2,0.1,0.1,0.6])
   c = collect(zip(a,b))
 end

 collect_zip()
 @time for i in 1:1; j= collect_zip();end # elapsed time: 0.089672328 
 seconds (9 MB allocated)

 Python code:
 def listzip(a = ["a","b","c","d"], b = [0.2,0.1,0.1,0.6]):
     c = list(zip(a,b))
 a = timeit.timeit("listzip()", "from __main__ import listzip", number=1)
 print(a) # 0.0105497862674

 P.S: I need to convert the zip to an array because after that I need to 
 sort them.

 Thanks a lot,



[julia-users] Re: [ANN] DecFP.jl - decimal floating-point math

2015-04-30 Thread Scott Jones


On Thursday, April 30, 2015 at 7:20:22 PM UTC-4, Páll Haraldsson wrote:

 [Thanks for this addition..! I've been putting this off as I didn't really 
 need this personally. Just Wrapped PyDecimal as an exercise..]

 As far as I know, numbers in JSON are defined to be binary floating 
 point.. (as that is what JavaScript only has) but allowed to be taken 
 otherwise or something.. Not sure how that really works in practice.


Not at all!!!  They are definitely not defined to be binary floating point. 
 In fact, the only way to exactly represent an arbitrary JSON number (in 
addition to just keeping it as a string), is to store it as an arbitrary 
precision *decimal* floating point number.   There are a number of parsers 
that do just that.   The JSON spec was written carefully to say nothing 
about the limits of scale and precision of numbers... it does say that 
Infinity and NaNs are not allowed, so it is actually not able to correctly 
store an arbitrary IEEE-754 floating point # (either binary OR decimal).

Please read the relevant JSON specs, either RFC 7159 or ECMA 404.  RFC 7159 
(which obsoleted RFC 4627) talks very specifically about numbers - it does 
say that for *portability*,
you might have troubles with numbers that are outside the range 
representable by an IEEE-754 64-bit binary floating point value (i.e. 
double).  However, both the RFC and the ECMA
standard are very explicit that JSON numbers are base-10 numbers.

 What I would think is the first priority is to change ODBC.jl. I noticed it 
 uses binary floating point for the DECIMAL/NUMERIC type.. There was no good 
 alternative and as it isn't really meant to be binary and there is a 
 separate SQL binary type if that is what you want I'm not sure there is any 
 harm in changing only positive (to at least some type, do not think 
 arbitrary precision is needed or preferred, but not sure).

 The sooner ODBC.jl is changed fewer will notice a change. Maybe this 
 applies to other DB wrappers (didn't check, there are others).


Binary floating point is simply not acceptable for many database 
applications...
A good example comes from Dr. Mike Cowlishaw (IBM fellow, inventor of 
the Rexx language, author of the decNumber package, and a large contributor 
to the IEEE 754-2008 decimal floating point standard), who showed how a 
telephone company could lose a large amount of money because of decimal to 
binary conversion errors...

This is a very good presentation: 
https://www.sinenomine.net/sites/default/files/Cowlishaw-DecimalArithmetic-Hillgang2008_0.pdf
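The billing example is easy to reproduce: accumulate a 5-cent charge a million times and binary floating point drifts off, while Dec64 stays exact (using the `d64` macro from DecFP.jl, assumed installed):

```julia
using DecFP

function totals(n)
    b = 0.0          # binary Float64 accumulator
    d = d64"0.00"    # decimal Dec64 accumulator
    for _ in 1:n
        b += 0.05
        d += d64"0.05"
    end
    return b, d
end

b, d = totals(1_000_000)
@assert b != 50000.0            # binary rounding error has accumulated
@assert d == d64"50000.00"      # decimal total is exact
</imports>
```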

 Would a switch in any of such cases be needed, with default to the new way? 
 If default to binary people would probably never change/know about it.. And 
 I guess not either if default was decimal.. or probably not care about 
 compatible with old/incorrect way..


ODBC.jl should simply switch to using a decimal floating point for those 
types of fields... (and JSON should be changed so that it can use decimal 
floating point if desired).

Scott


[julia-users] Re: [ANN] DecFP.jl - decimal floating-point math

2015-04-30 Thread Scott Jones


On Thursday, April 30, 2015 at 8:01:05 PM UTC-4, Páll Haraldsson wrote:

 Mixed operations involving decimal and binary floating-point or integer 
 types are supported (the result is promoted to decimal floating-point).

 Is that really advised?

 I meant for binary only. Mixing with integer is ok.

 Another idea. Mixing ok, but promoting to binary. Subsequent calculations 
 will not be slow :) Con, why start with decimal in the first place?


Very bad idea... the reason people use decimal arithmetic is for 
correctness (different sort of correctness than what the numerical 
computing people want...)

Both very much have their place...
 

 Another idea, can convertions be disabled by default (get runtime 
 errors/exceptions) and you could enable them if you want globally? Not sure 
 if that works or has to do with macros.. Runtime penalty?

 -- 
 Palli.

 On Thursday, April 30, 2015 at 11:56:28 PM UTC, Páll Haraldsson wrote:

 In general, you should be able to use the DecFP types in any context 
 where you would have used binary floating-point types: arrays, complex 
 arithmetic, and linear algebra should all work, for the most part.

 Way better than what I was going for - I thought only +, -, *, / was 
 needed and maybe only wanted.. plus convert but not automatic:

 Mixed operations involving decimal and binary floating-point or integer 
 types are supported (the result is promoted to decimal floating-point).

 Is that really advised? Is that what you do or the C library? In C you 
 have automatic promotion between native types, but I guess you can't 
 otherwise. The library you wrap provides converts (good), you make them 
 automatic (bad?).

 At first blush, as Julia is generic by default, this seems what you would 
 want, but is it? When I thought about this, it seemed just dangerous. If 
 you force people to use manual conversion you might get away with wrapping 
 fewer functions?

 Most basic arithmetic functions are supported, and many special 
 functions (sqrt, log, trigonometric functions, etc.).

 I'm not sure if the library you wrap provides this and if it does, 
 doesn't convert to binary floating point, runs the function, then converts 
 back? Whether it does it or you do it, is it not better to let the user 
 decide on both converts? And faster if you eliminate some..

 I just scanned the code or one file:

 for c in (:π, :e, :γ, :catalan, :φ)

 Not sure if these are the only Julia constants.. Or only you care about. 
 Anyway, wasn't expecting to see them in any decimal context. I would not be 
 rich with π dollars in my account. :)

 -- 
 Palli.

 On Wednesday, April 29, 2015 at 1:26:17 AM UTC, Steven G. Johnson wrote:

 The DecFP package

   https://github.com/stevengj/DecFP.jl

  provides 32-bit, 64-bit, and 128-bit binary-encoded decimal 
  floating-point types following the IEEE 754-2008 standard, implemented as a wrapper 
 around the (BSD-licensed) Intel Decimal Floating-Point Math Library 
 https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library.
   
 Decimal floating-point types are useful in situations where you need to 
 exactly represent decimal values, typically human inputs.

 As software floating point, this is about 100x slower than hardware 
 binary floating-point math.  On the other hand, it is significantly 
 (10-100x) faster than arbitrary-precision decimal arithmetic, and is a 
 memory-efficient bitstype.

 The basic arithmetic functions, conversions from other numeric types, 
 and numerous special functions are supported.



[julia-users] Re: How to optimize the collect function?

2015-04-30 Thread Ali Rezaee
That is amazing. I just updated to the latest Julia and my timing is 1.6 ms 
now.

Thank you for your help.

On Friday, May 1, 2015 at 12:43:24 AM UTC+2, Ali Rezaee wrote:

 Dear all,

 I have ported all of my project from Python to Julia. However, there is one 
 part of my code that I have not been able to optimize much. It is between 3 
 and 9 times slower than similar Python code.
 Do you have any suggestions / explanations?

 Julia code:
 function collect_zip(a::Array=["a","b","c","d"], b::Array=[0.2,0.1,0.1,0.6])
   c = collect(zip(a,b))
 end

 collect_zip()
 @time for i in 1:1; j= collect_zip();end # elapsed time: 0.089672328 
 seconds (9 MB allocated)

 Python code:
 def listzip(a = ["a","b","c","d"], b = [0.2,0.1,0.1,0.6]):
     c = list(zip(a,b))
 a = timeit.timeit("listzip()", "from __main__ import listzip", number=1)
 print(a) # 0.0105497862674

 P.S: I need to convert the zip to an array because after that I need to 
 sort them.

 Thanks a lot,



Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Tim Holy
Strings have long been a performance sore-spot in julia, so we're glad Scott 
is hammering on that topic.

For interpreted code (including Julia with Any types), it's very possible 
that Python is and will remain faster. For one thing, Python is single-
dispatch, which means that when the interpreter has to go look up the function 
corresponding to your next expression, typically the list of applicable 
methods is quite short. In contrast, julia sometimes has to sort through huge 
method tables to determine the appropriate one to dispatch to. Multiple 
dispatch adds a lot of power to the language, and there's no performance cost 
for code that has been compiled, but it does make interpreted code slower.

Best,
--Tim

On Thursday, April 30, 2015 10:34:20 PM Páll Haraldsson wrote:
 Interesting.. does that mean Unicode then that is esp. faster or something
 else?
 
 800x faster is way worse than I thought and no good reason for it..
 
 I'm really intrigued what is this slow, can't be the simple things like say
 just string concatenation?!
 
 You can get similar speed using PyCall.jl :)
 
 For some obscure function like Levenshtein distance I might expect this (or
 not implemented yet in Julia) as Python would use tuned C code or in any
 function where you need to do non-trivial work per function-call.
 
 
 I failed to add regex to the list as an example as I was pretty sure it was
 as fast (or faster, because of macros) as Perl as it is using the same
 library.
 
 Similarly for all Unicode/UTF-8 stuff I was not expecting slowness. I know
 the work on that in Python2/3 and expected Julia could/did similar.
 
 2015-04-30 22:10 GMT+00:00 Scott Jones scott.paul.jo...@gmail.com:
  Yes... Python will win on string processing... esp. with Python 3... I
  quickly ran into things that were > 800x faster in Python...
  (I hope to help change that though!)
  
  Scott
  
  On Thursday, April 30, 2015 at 6:01:45 PM UTC-4, Páll Haraldsson wrote:
  I wouldn't expect a difference in Julia for code like that (didn't
  check). But I guess what we are often seeing is someone comparing a tuned
  Python code to newbie Julia code. I still want it faster than that code..
  (assuming same algorithm, note row vs. column major caveat).
  
  The main point of mine, *should* Python at any time win?
  
  2015-04-30 21:36 GMT+00:00 Sisyphuss zhengw...@gmail.com:
  This post interests me. I'll write something here to follow this post.
  
  The performance gap between normal code in Python and badly-written code
  in Julia is something I'd like to know too.
   As far as I know, the Python interpreter does some mysterious optimizations.
  For example `(x**2)**2` is 100x faster than `x**4`.
  
  On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:
  Hi,
  
  [As a best language is subjective, I'll put that aside for a moment.]
  
  Part I.
  
  The goal, as I understand, for Julia is at least within a factor of two
  of C and already matching it mostly and long term beating that (and
  C++).
  [What other goals are there? How about 0.4 now or even 1.0..?]
  
  While that is the goal as a language, you can write slow code in any
  language and Julia makes that easier. :) [If I recall, Bezanson
  mentioned
  it (the global problem) as a feature, any change there?]
  
  
  I've been following this forum for months and newbies hit the same
   issues. But almost always without fail, Julia can be sped up (easily
  as
  Tim Holy says). I'm thinking about the exceptions to that - are there
  any
  left? And about the first code slowness (see Part II).
  
 Just recently the last two flaws of Julia that I could see were fixed:
 Decimal floating point is in (I'll look into the 100x slowness; that is
 probably to be expected of any language, though I think it may be a
 misunderstanding and/or I can do much better). And I understand the tuple
 slowness has been fixed (that was really the only core language defect).
 The former wasn't a performance problem (mostly a nonexistence problem and
 a correctness one (where needed)).
  
  
 Still we see threads like this recent one:

 https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
 It seems changing the order of nested loops also helps.
  
 Obviously Julia can't beat assembly, but really C/Fortran is already
 close enough (within a small factor). The above row- vs. column-major
 issue (caching effects in general) can kill performance in all languages.
 Putting that newbie mistake aside, is there any reason Julia can't already
 be within a small factor of assembly (or C) in all cases?
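(For readers who haven't hit the row- vs. column-major issue: a small loop-order illustration, sketched in Python with nested lists. The effect is far larger in C or Julia, where interpreter overhead doesn't dominate; the names and sizes here are arbitrary.)

```python
import timeit

# Loop-order illustration: row_major's inner loop walks each contiguous
# inner list; col_major's inner loop jumps between inner lists instead.
n = 300
a = [[1.0] * n for _ in range(n)]

def row_major():
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += a[i][j]
    return s

def col_major():
    s = 0.0
    for j in range(n):
        for i in range(n):
            s += a[i][j]
    return s

# Both orders compute the same sum; only the access pattern differs.
assert row_major() == col_major() == float(n * n)
print("row:", timeit.timeit(row_major, number=5),
      "col:", timeit.timeit(col_major, number=5))
```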
  
  
  Part II.
  
 Except for caching issues, I still want the most newbie code, or even
 intentionally brain-damaged code, to run faster than at least
 Python/scripting/interpreted languages.
  
  Potential problems (that I think are solved or at least not problems in
  theory):
  
  1. I know Any kills performance. Still, isn't that the 

Re: [julia-users] Re: Performance variability - can we expect Julia to be the fastest (best) language?

2015-04-30 Thread Scott Jones

 On Apr 30, 2015, at 9:58 PM, Tim Holy tim.h...@gmail.com wrote:
 
 Strings have long been a performance sore-spot in julia, so we're glad Scott 
 is hammering on that topic.

Thanks, Tim!  I was beginning to think I’d be banned from all Julia forums, for
being a thorn in the side of the Julia developers…
(I do want to say again… if I didn’t think what all of you had created was
incredibly great, I wouldn’t be so interested in making it even greater, in
the particular areas I know a little about…
Also, the issues I’ve found are not because the developers aren’t brilliant
[I’ve been super impressed, and I don’t impress that easily!], but rather,
either they’re outside of their area of expertise [as the numerical computing
stuff is outside mine], or they are incredibly busy making great strides in
the areas that they are more interested in…)

 For interpreted code (including Julia with Any types), it's very possible 
 that Python is and will remain faster. For one thing, Python is single-
 dispatch, which means that when the interpreter has to go look up the 
 function 
 corresponding to your next expression, typically the list of applicable 
 methods is quite short. In contrast, julia sometimes has to sort through huge 
 method tables to determine the appropriate one to dispatch to. Multiple 
 dispatch adds a lot of power to the language, and there's no performance cost 
 for code that has been compiled, but it does make interpreted code slower.
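(Tim's point can be sketched in Python. This is only an illustration of the lookup-cost difference, not Julia's actual dispatch algorithm; the naive scan below relies on registration order, whereas Julia sorts methods by specificity.)

```python
# Single dispatch: one hash lookup on the type of one argument.
single_table = {int: lambda x: x + 1, float: lambda x: x + 0.5}

def single_dispatch(x):
    return single_table[type(x)](x)

# Multiple dispatch (naive sketch): match the full signature tuple
# against every registered method until one fits.
multi_table = [
    ((int, int), lambda a, b: "int,int"),
    ((float, float), lambda a, b: "float,float"),
    ((object, object), lambda a, b: "fallback"),
]

def multi_dispatch(a, b):
    for (ta, tb), fn in multi_table:
        if isinstance(a, ta) and isinstance(b, tb):
            return fn(a, b)

print(single_dispatch(3))        # 4
print(multi_dispatch(1, 2))      # int,int
print(multi_dispatch(1.0, "s"))  # fallback
```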

Good point…

Scott

Re: [julia-users] Re: Defining a function in different modules

2015-04-30 Thread elextr



 Are there other pitfalls to auto-merging in user-space-only?  



Tom, 
 
The problem with auto-merging in user code (as I see it):

module a
export f
f(x) = 4
f(x::Int) = 5
end

user code:

using a
f(5) # gives 5 fine
f(6.0)+1 # gives 5 fine
f(5.0)  # gives 4, fine

now another module:

module b  # totally separate from a; a and b share nothing except the verb f
export f
f(x::Float64) = "four"
end

user code:

using a,b # assuming methods of f() are auto-merged in user space

# existing code

f(5) # still gives 5, ok
f(6.0)+1 # error: adding a string and an int, but at least this breaks noisily
f(5.0) # gives "four": existing code is silently changed :(

Consider also how much more complex the problem becomes if f() is generic:
how many f{T}(x::T) methods exist and have to be merged? Hard to know; it
depends on usage.
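(The same hazard, restated in Python terms as merged lookup tables. The tuple-keyed tables and the `call` helper are hypothetical, meant only to make the silent-change failure mode concrete.)

```python
# Two "modules" each register methods for the same verb f, keyed by
# argument-type tuples; merging the tables mimics auto-merging methods.
table_a = {(int,): lambda x: 5, (object,): lambda x: 4}
table_b = {(float,): lambda x: "four"}

merged = {**table_a, **table_b}  # the "auto-merge" of the two method tables

def call(table, x):
    # Exact type match first, else fall back to the catch-all method.
    fn = table.get((type(x),)) or table[(object,)]
    return fn(x)

assert call(table_a, 5) == 5          # unaffected by the merge
assert call(table_a, 5.0) == 4        # before: falls back to f(x) = 4
assert call(merged, 5.0) == "four"    # after: silently returns "four"
```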
 


Re: [julia-users] Defining a function in different modules

2015-04-30 Thread Scott Jones
Maybe because, as MA Laforge pointed out, a lot of the major packages have
been put into Base, so it isn't a problem there. But that leads to Base being
incredibly large, with stuff that means Julia's MIT license doesn't mean all
that much, because it includes GPL code by default...

Scott

On Thursday, April 30, 2015 at 5:03:52 PM UTC-4, Stefan Karpinski wrote:

 On Wed, Apr 29, 2015 at 9:08 PM, Scott Jones scott.pa...@gmail.com wrote:

 Your restrictions are making it very hard to develop easy to use APIs 
 that make sense for the people using them…

 That’s why so many people have been bringing this issue up…


 Not a single person who maintains a major Julia package has complained 
 about this. Which doesn't mean that there can't possibly be an issue here, 
 but it seems to strongly suggest that this is one of those concerns that 
 initially appears dire, when coming from a particular programming 
 background, but which dissipates once one acclimatizes to the multiple 
 dispatch mindset – in particular the idea that one generic function = 
 one verb concept.



[julia-users] Re: .jl to .exe

2015-04-30 Thread Páll Haraldsson


 A Windows license - I assume you could use Wine.


I wasn't clear for those who do not know Wine. It's a way to avoid needing a
Windows license: it lets you run Windows software on Linux (and Mac OS X, I
think). It can't handle all software, but I think it should handle Julia
(didn't check), and then it should work for this. If you have, say, C code
alongside your Julia code, I wouldn't be sure. Still, in many cases that code
would not be API dependent and would work, and maybe even if it is.
 

 For Mac OS X, you might need actual hardware and software - e.g. pay..

 On Thursday, April 23, 2015 at 3:15:46 PM UTC, pauld11718 wrote:

 Will it be possible to cross-compile?
 Do all the coding on linux(64 bit) and generate the exe for windows(32 
 bit)?


