[julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
Good to know, this is quite a core issue.  I see performant lambdas as a 
must.

Next would be to have C++-like options for how local vars are handled.  We 
should be able to negate some options with intelligent compilation.

Those Python vs. Julia benchmarks are about to get a whole lot faster.


On Friday, January 22, 2016 at 11:55:25 AM UTC-5, Cedric St-Jean wrote:
>
> In this code, whether you use @anon or not, Julia will create 100 object
> instances to store the z values.
>
> The speed difference between the two will soon be gone.
>
> Cédric

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Tim Holy
On Friday, January 22, 2016 12:03:02 PM Cedric St-Jean wrote:
> It looks like my understanding of FastAnonymous was flawed. Why doesn't it
> create the type at macro time, and just instantiate it at runtime, yielding
> 1 type / @anon ? Is there any complication that prevents that?

Yes:

for z in (1, 1.0)
    f = @anon c->c+z
    @show fieldtype(typeof(f), :z)
end

yields this output:

fieldtype(typeof(f),:z) = Int64
fieldtype(typeof(f),:z) = Float64

The fields of the constructed "function" need a concrete type if you're going 
to get good performance.

Best,
--Tim



Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Cedric St-Jean
What about creating a parametric type, with one parameter per closed-over 
variable?


[julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
I guess what I'm looking for is the equivalent of Ruby's Process#spawn

In REPL: 

>> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
83210
>> <-- the process is running in the background and control has been returned to the REPL




Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Cedric St-Jean
Tim,

Every time you call @anon, it creates a brand-new type (and an instance of 
> that type) that julia has never seen before.


It looks like my understanding of FastAnonymous was flawed. Why doesn't it 
create the type at macro expansion time and just instantiate it at runtime, 
yielding one type per @anon site? Is there any complication that prevents that?

Cédric

On Friday, January 22, 2016 at 2:15:20 PM UTC-5, Tim Holy wrote:
>
> On Friday, January 22, 2016 10:21:31 AM Bryan Rivera wrote:
> > For 1000 elements:
> >
> > 0.00019s vs 0.035s respectively
> >
> > Thanks!
>
> Glad it helped.
>
> > Is the reason the faster code has more allocations bc it is
> > inserting vars into the single function?  (Opposed to the slower
> > code already having its vars filled in.)
>
> Every time you call @anon, it creates a brand-new type (and an instance of
> that type) that julia has never seen before. That requires JITting any code
> that gets invoked on this instance. So the usual advice, "run once to JIT,
> then do your timing" doesn't work in this case: it JITs every time.
>
> --Tim
>
> > On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
> > > Just use
> > >
> > > z = 1
> > > function2 = @anon c -> c + z
> > > for z = 1:100
> > >     function2.z = z
> > >     # do whatever with function2, including making a copy
> > > end
> > >
> > > --Tim
> > >
> > > On Friday, January 22, 2016 08:55:25 AM Cedric St-Jean wrote:
> > > > (non-mutating) Closures and FastAnonymous work essentially the same way.
> > > > They store the data that is closed over (more or less) and a function
> > > > pointer. The thing is that there's only one data structure in Julia for
> > > > all regular anonymous functions, whereas FastAnonymous creates one per
> > > > @anon site. Because the FastAnonymous-created datatype is specific to
> > > > that function definition, the standard Julia machinery takes over and
> > > > produces efficient code. It's just as good as if the function had been
> > > > defined normally with `function foo(...) ... end`
> > > >
> > > > for z = 1:100
> > > >     function2 = @anon c -> (c + z)
> > > >     dict[z] = function2
> > > > end
> > > >
> > > > So we end up creating multiple functions for each z value.
> > > >
> > > > In this code, whether you use @anon or not, Julia will create 100
> > > > object instances to store the z values.
> > > >
> > > > The speed difference between the two will soon be gone.
> > > >
> > > > Cédric
> > > >
> > > > On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote:
> > > > > I have to do some investigating here.  I thought we could do
> > > > > something like that but wasn't quite sure how it would look.
> > > > >
> > > > > Check this out:
> > > > >
> > > > > This code using FastAnonymous optimizes to the very same code below
> > > > > it, where functions have been manually injected:
> > > > >
> > > > > using FastAnonymous
> > > > >
> > > > > function function1(a, b, function2)
> > > > >     if(a > b)
> > > > >         c = a + b
> > > > >         return function2(c)
> > > > >     else
> > > > >         # do anything
> > > > >         # but return nothing
> > > > >     end
> > > > > end
> > > > >
> > > > > z = 10
> > > > > function2 = @anon c -> (c + z)
> > > > >
> > > > > a = 1
> > > > > b = 2
> > > > > @code_llvm function1(a, b, function2)
> > > > > @code_native function1(a, b, function2)
> > > > >
> > > > > Manually unrolled equivalent:
> > > > >
> > > > > function function1(a, b, z)
> > > > >     if(a > b)
> > > > >         c = a + b
> > > > >         return function2(c, z)
> > > > >     else
> > > > >         # do anything
> > > > >         # but return nothing
> > > > >     end
> > > > > end
> > > > >
> > > > > function function2(c, z)
> > > > >     return c + z
> > > > > end
> > > > >
> > > > > a = 1
> > > > > b = 2
> > > > > z = 10
> > > > >
> > > > > @code_llvm function1(a, b, z)
> > > > > @code_native function1(a, b, z)
> > > > >
> > > > > However, this is a bit too simplistic.  My program actually does this:
> > > > >
> > > > > # Test to see if multiple functions are created.  They are.
> > > > > # We would only need to create a single function if we used julia
> > > > > # anon, but its time inefficient.
> > > > >
> > > > > dict = Dict{Int, Any}()
> > > > > for z = 1:100
> > > > >     function2 = @anon c -> (c + z)
> > > > >     dict[z] = function2
> > > > > end

[julia-users] Macros, functions and module

2016-01-22 Thread amiksvi
Hi,

I have this quite weird situation:


module Mod

export @generate_macro_f, @generate_function_g, h

macro generate_f()
    return esc(quote
        macro f(name)
            return quote
                print($name)
            end
        end
    end)
end

macro generate_g()
    return esc(quote
        function g()
            @f("hello")
            print(" world\n")
        end
    end)
end

function h()
    @generate_f()
    @generate_g()
end

end

using Mod

h()
g()

I want the function g to be generated using only one call to a function, 
here h (or possibly a macro...), outside the module. This code returns an 
error saying that @f is not defined, and I don't understand why, since I 
escaped the quote block of generate_f. This example is of course 
oversimplified, but @f needs to be called inside g.

Any thought?

Many thanks,


[julia-users] Re: Macros, functions and module

2016-01-22 Thread Bryan Rivera
I don't know what your exact use case is, but you shouldn't be using macros 
like that.

Does `@inline g()` suit your purposes?

If not, I recommend making your example a bit more pertinent to what you 
want to accomplish.



Re: [julia-users] Macros, functions and module

2016-01-22 Thread amiksvi
Thanks to both of you. I actually solved my problem in the meantime by 
calling @eval, as Yiacho suggested. Working code, for the record:


module Mod

export g, h

macro generate_f()
    return esc(quote
        macro f(name)
            return quote
                print($name)
            end
        end
    end)
end

macro generate_g()
    return esc(quote
        function g()
            @f("hello")
            print(" world\n")
        end
    end)
end

function h()
    @eval @generate_f()
    @eval @generate_g()
end

end

using Mod

h()
g()
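A minimal illustration of why the @eval indirection helps (macro names here are hypothetical): macro calls in a function body are expanded when the function is parsed, while @eval defers both expansion and evaluation until the function actually runs.

```julia
macro def_f()
    # Evaluating this expansion defines a new macro @f.
    esc(:(macro f() :(1) end))
end

function make()
    # Writing `@def_f(); @f()` here directly would fail: both macros are
    # expanded while `make` itself is parsed, before @f exists.
    @eval @def_f()   # deferred until make() runs: defines @f
    @eval @f()       # @f now exists, so this expands normally
end

make()
```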





Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Tim Holy
On Friday, January 22, 2016 12:24:30 PM Cedric St-Jean wrote:
> What about creating a parametric type, with one parameter/closed-over
> variable?

Currently there's no caching of old FA types. Pull requests to FA are welcome 
;-), though I wonder if it's worth it given Jeff's work.

All FA does is create a type and a call method, with the main trickery being 
to scan the AST of the user-supplied function for variables that are not 
specified as arguments. You can certainly do the same thing manually.

http://stackoverflow.com/a/34354808/1939814

--Tim
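The manual version Tim alludes to is just a parametric type plus a call method. A minimal sketch in Julia 0.4 syntax (the name `Adder` is made up; from 0.5 on, the call overload is written `(f::Adder)(c) = ...` instead of `Base.call`):

```julia
# One parametric type covers every field type: Adder{Int64}, Adder{Float64}, ...
# Each concrete parameterization gets its own specialized, fast code.
immutable Adder{T}
    z::T
end

# Make instances callable (Julia 0.4 spelling).
Base.call(f::Adder, c) = c + f.z

f = Adder(10)     # an Adder{Int64}
f(1)              # 11
g = Adder(0.5)    # an Adder{Float64}: same type family, different parameter
```

Because the field type is carried in the type parameter, this sidesteps the one-type-per-@anon-call issue while keeping the fields concrete.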


[julia-users] How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
Hi, 

I'm hammering at a web app and I'm trying to set up functionality to monitor 
the file system for changes and restart/reload the server automatically so 
the changes are picked up (I'm using Mux, which uses HttpServer). 

The approach I have in mind is: 

1. have a startup script which is run from the command line, something 
like: 
$ julia -L startup.jl

2. the startup script launches the web app, which starts the web server. My 
intention was to run 
$ julia -L app.jl 
as a command inside startup.jl, detached, and have the startup.jl script 
get back control, with app.jl running detached in the background. 

3. once startup.jl gets back control, it begins monitoring the file system 
and when changes are detected, kills the app and relaunches it. 

That was the theory. Now, I might be missing something but I can't find a 
way to detach the command I'm running and get control back to the startup 
script. And I tried a lot of things! 

===

I'm providing a simpler example using "ping", which also runs indefinitely, 
similar to the web server. 

julia> run(detach(`ping "www.google.com"`)) # the command is detached and 
continues to run after the julia REPL is closed, but at this point the REPL 
does not get control, there's no cursor available in the REPL
PING www.google.com (173.194.45.82): 56 data bytes
64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
... more output ...
64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
^CERROR: InterruptException:   
  < here I press Ctrl+C and only now the REPL gets back 
the cursor, with the command still running in the background

===

Also, related to this, passing "&" into the command to detach does not work 
as expected; the "&" is interpreted as an argument of the command. I'm not 
sure whether this would help return control to the startup.jl script anyway. 

julia> run(detach(`ping "www.google.com" &`));
usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
            [-g sweepminsize] [-h sweepincrsize] [-i wait] [-k trafficclass]
            [-l preload] [-M mask | time] [-m ttl] [-p pattern]
            [-S src_addr] [-s packetsize] [-t timeout] [-W waittime] [-z tos]
            host
       ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i wait]
            [-k trafficclass] [-l preload] [-M mask | time] [-m ttl]
            [-p pattern] [-S src_addr]
            [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
            [-z tos] mcast-group
ERROR: failed process: Process(`ping www.google.com &`, ProcessExited(64)) [64]
 in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib

===

Thanks
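For the monitoring step in the plan above, a rough sketch of the intended startup.jl loop. Two assumptions are made here: the 0.4-era `spawn` for a non-blocking launch, and a blocking `watch_file` as provided by Base in that era.

```julia
# Launch the app detached, via spawn so control returns to this
# script immediately instead of blocking like run() does.
app = spawn(detach(`julia -L app.jl`))
while true
    watch_file("app.jl")   # block until the watched file changes
    kill(app)              # stop the old server...
    app = spawn(detach(`julia -L app.jl`))   # ...and relaunch it
end
```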


[julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
Oh! The Ruby analogy made me think of actually spawning the detached 
command! Which produced the desired effect! 

julia> @spawn run(detach(`ping www.google.com`))



vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris:
>
> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>
> In REPL: 
>
> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
> 83210
> >> <-- the process is running in the background 
> and control has been returned to the REPL
>
>
> vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris:
>>
>> Hi, 
>>
>> I'm hammering at a web app and I'm trying to setup functionality to 
>> monitor the file system for changes and restart/reload the server 
>> automatically so the changes are picked up (I'm using Mux which uses 
>> HttpServer). 
>>
>> The approach I have in mind is: 
>>
>> 1. have a startup script which is run from the command line, something 
>> like: 
>> $ julia -L startup.jl
>>
>> 2. the startup script launches the web app, which starts the web server. 
>> My intention was to run 
>> $ julia -L app.jl 
>> as a command inside startup.jl, detached, and have the startup.jl script 
>> get back control, with app.jl running detached in the background. 
>>
>> 3. once startup.jl gets back control, it begins monitoring the file 
>> system and when changes are detected, kills the app and relaunches it. 
>>
>> That was the theory. Now, I might be missing something but I can't find a 
>> way to detach the command I'm running and get control back to the startup 
>> script. And I tried a lot of things! 
>>
>> ===
>>
>> I'm providing simpler example using "ping", which also run indefinitely, 
>> similar to the web server. 
>>
>> julia> run(detach(`ping "www.google.com"`)) # the command is detached 
>> and continues to run after the julia REPL is closed, but at this point the 
>> REPL does not get control, there's no cursor available in the REPL
>> PING www.google.com (173.194.45.82): 56 data bytes
>> 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
>> 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
>> ... more output ...
>> 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
>> 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
>> ^CERROR: InterruptException: 
>> < here I press Ctrl+C and only now the REPL gets 
>> back the cursor, with the command still running in the background
>>
>> ===
>>
>> Also, related to this, passing "&" into the command to detach does not 
>> work as expected, the "&" is interpreted as argument of the command. Not 
>> sure if this would help anyway to return control to the startup.jl script? 
>>
>> julia> run(detach(`ping "www.google.com" &`));
>> usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
>> [-g sweepminsize] [-h sweepincrsize] [-i wait] [-k 
>> trafficclass]
>> [-l preload] [-M mask | time] [-m ttl] [-p pattern]
>> [-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z 
>> tos]
>> host
>>ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i wait]
>> [-k trafficclass] [-l preload] [-M mask | time] [-m ttl] [-p 
>> pattern] [-S src_addr]
>> [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
>> [-z tos] mcast-group
>> ERROR: failed process: Process(`ping www.google.com &`, 
>> ProcessExited(64)) [64]
>>  in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib
>>
>> ===
>>
>> Thanks
>>
>

Re: [julia-users] Macros, functions and module

2016-01-22 Thread amiksvi
Ok, this could seem to make no sense, and indeed it doesn't...
Of course it solved the error in this precise case, but the point of 
generating functions through macros inside the module was precisely that 
I have to define a type outside the module which is then used inside one 
of the module's functions. When the module is parsed, this type is not 
yet defined... So, obviously, when I use @eval in this case, it miserably 
fails.
I'll try to come up with a minimal example that describes a bit more of 
what I want to do.


Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Stefan Karpinski
@spawn runs a command on a (random) worker process. If you want to do
"background" work in the current process, you can use @async:

julia> t = @async (sleep(5); rand())
Task (runnable) @0x000112d746a0

julia> wait(t)
0.14543742643271207
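A small self-contained sketch of the @async pattern (note: in current Julia, `fetch(t)` retrieves a task's result, while `wait(t)` only blocks until it finishes; the `wait(t)` return value shown above is 0.4-era behavior):

```julia
# Background work in the current process: @async schedules a Task,
# and fetch blocks until it finishes and returns its value.
t = @async begin
    sleep(0.1)   # stand-in for slow background work
    21 * 2
end
result = fetch(t)
println(result)  # prints 42
```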


On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu 
wrote:

> Oh! The ruby analogy made me think about actually spawning the detached
> command! Which produced the desired effect!
>
> julia> @spawn run(detach(`ping www.google.com`))
>
>
>
>> On Friday, January 22, 2016 at 22:29:27 UTC+1, Adrian Salceanu wrote:
>>
>> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>>
>> In REPL:
>>
>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
>> 83210
>> >> <-- the process is running in the background
>> and control has been returned to the REPL
>>
>>
>> On Friday, January 22, 2016 at 22:06:01 UTC+1, Adrian Salceanu wrote:
>>>
>>> Hi,
>>>
>>> I'm hammering at a web app and I'm trying to setup functionality to
>>> monitor the file system for changes and restart/reload the server
>>> automatically so the changes are picked up (I'm using Mux which uses
>>> HttpServer).
>>>
>>> The approach I have in mind is:
>>>
>>> 1. have a startup script which is run from the command line, something
>>> like:
>>> $ julia -L startup.jl
>>>
>>> 2. the startup script launches the web app, which starts the web server.
>>> My intention was to run
>>> $ julia -L app.jl
>>> as a command inside startup.jl, detached, and have the startup.jl script
>>> get back control, with app.jl running detached in the background.
>>>
>>> 3. once startup.jl gets back control, it begins monitoring the file
>>> system and when changes are detected, kills the app and relaunches it.
>>>
>>> That was the theory. Now, I might be missing something but I can't find
>>> a way to detach the command I'm running and get control back to the startup
>>> script. And I tried a lot of things!
>>>
>>> ===
>>>
>>> I'm providing a simpler example using "ping", which also runs indefinitely,
>>> similar to the web server.
>>>
>>> julia> run(detach(`ping "www.google.com"`)) # the command is detached
>>> and continues to run after the julia REPL is closed, but at this point the
>>> REPL does not get control, there's no cursor available in the REPL
>>> PING www.google.com (173.194.45.82): 56 data bytes
>>> 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
>>> 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
>>> ... more output ...
>>> 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
>>> 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
>>> ^CERROR: InterruptException:
>>> < here I press Ctrl+C and only now the REPL gets
>>> back the cursor, with the command still running in the background
>>>
>>> ===
>>>
>>> Also, related to this, passing "&" into the command to detach does not
>>> work as expected, the "&" is interpreted as argument of the command. Not
>>> sure if this would help anyway to return control to the startup.jl script?
>>>
>>> julia> run(detach(`ping "www.google.com" &`));
>>> usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
>>> [-g sweepminsize] [-h sweepincrsize] [-i wait] [-k
>>> trafficclass]
>>> [-l preload] [-M mask | time] [-m ttl] [-p pattern]
>>> [-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z
>>> tos]
>>> host
>>>ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i wait]
>>> [-k trafficclass] [-l preload] [-M mask | time] [-m ttl] [-p
>>> pattern] [-S src_addr]
>>> [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
>>> [-z tos] mcast-group
>>> ERROR: failed process: Process(`ping www.google.com &`,
>>> ProcessExited(64)) [64]
>>>  in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib
>>>
>>> ===
>>>
>>> Thanks
>>>
>>


Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Erik Schnetter
If you want to be able to terminate your local Julia process and have
the server continue to run in the background, then you might want to
check out . This does the
equivalent of run/detach, but in such a way that the detached process
runs as a daemon.

Otherwise, if you don't need this functionality, then @async is the way to go.
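In later Julia versions the spawn-and-get-control-back pattern is built in: `run(cmd; wait=false)` returns a `Process` handle immediately, which a watcher script can later `kill` and relaunch. A minimal sketch (`sleep` stands in for the web app):

```julia
# Launch a long-running command without blocking the caller.
p = run(`sleep 30`; wait=false)
println(process_running(p))   # true: the child runs in the background

# ... later, e.g. after a file-system change is detected ...
kill(p)                       # stop the old instance
wait(p)                       # reap it
println(process_exited(p))    # true

# relaunch the app
p = run(`sleep 30`; wait=false)
kill(p); wait(p)
```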

-erik

On Fri, Jan 22, 2016 at 4:40 PM, Stefan Karpinski  wrote:
> @spawn runs a command on a (random) worker process. If you want to do
> "background" work in the current process, you can use @async:
>
> julia> t = @async (sleep(5); rand())
> Task (runnable) @0x000112d746a0
>
> julia> wait(t)
> 0.14543742643271207
>
>
> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu 
> wrote:
>>
>> Oh! The ruby analogy made me think about actually spawning the detached
>> command! Which produced the desired effect!
>>
>> julia> @spawn run(detach(`ping www.google.com`))
>>
>>
>>
>>> On Friday, January 22, 2016 at 22:29:27 UTC+1, Adrian Salceanu wrote:
>>>
>>> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>>>
>>> In REPL:
>>>
>>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
>>> 83210
>>> >> <-- the process is running in the background
>>> >> and control has been returned to the REPL
>>>
>>>
>
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Tim Holy
On Friday, January 22, 2016 11:28:49 AM Bryan Rivera wrote:
> I tried to mitigate that by not timing the code where @anon is called.  Is
> that still the case given the two snippets?

Actually, it looks like you didn't call function1 in both cases. You might 
want to re-test.

You're also using a lot of global variables. Better to pass function2 and 
dictZ as arguments (and make sure dictZ has a concrete type, as you have 
done).
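A hypothetical before/after sketch of that advice (names are illustrative and use current Julia syntax): reading a non-constant global forces dynamic dispatch on every access, while passing the same Dict as an argument lets the compiler specialize on its concrete type:

```julia
dictZ = Dict{Int,Int}(z => z for z in 1:1000)

# Slow pattern: `dictZ` is a non-constant global, so its type could
# change at any time and every access is dynamically dispatched.
function sum_global()
    s = 0
    for z in 1:1000
        s += dictZ[z]
    end
    s
end

# Fast pattern: the concrete Dict type is known from the argument,
# so the loop compiles to specialized code.
function sum_arg(d::Dict{Int,Int})
    s = 0
    for z in 1:1000
        s += d[z]
    end
    s
end

@assert sum_global() == sum_arg(dictZ) == 500500
```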

Best,
--Tim


Re: [julia-users] Macros, functions and module

2016-01-22 Thread Yichao Yu
On Fri, Jan 22, 2016 at 3:43 PM,   wrote:
> Hi,
>
> I have this quite weird situation:
>
>
> module Mod
>
> export @generate_macro_f, @generate_function_g, h
>
> macro generate_f()
> return esc(quote
> macro f(name)
> return quote
> print($name)
> end
> end
> end)
> end
>
> macro generate_g()
> return esc(quote
> function g()
> @f("hello")
> print(" world\n")
> end
> end)
> end
>
> function h()
> @generate_f()
> @generate_g()
> end
>
> end
>
> using Mod
>
> h()
> g()
>
> I want the function g to be generated using only one call to a function,
> here h (or possible a macro...), outside the module. Such a code would
> return an error saying that @f is not defined, and I don't understand why
> since I escaped the quote block of generate_f. This example is of course
> oversimplified but @f needs to be called inside g.

It's a little bit unclear what you actually want to do, but a few
clarifications below.

1. Macro expansion happens right after parsing. Therefore, @generate_g
is expanded (and fails) right after h() is defined, before the
module definition finishes.
2. Unless you call `eval` with the correct module inside `h()`, you won't
be defining any toplevel values in the module that calls `h()`. In
other words, the function you defined is a local variable, and you are
not defining it in the module where you want to call it. Also, I don't
think a macro definition is allowed in local scope, and you should get
an error about that when you call `h()`.
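A sketch of the eval-in-the-right-module point (module and function names are hypothetical, current Julia syntax): the helper receives the caller's module and evaluates the definition there, so `g` really becomes a toplevel function of that module:

```julia
module Mod
export define_g
# Evaluate a definition of g() inside the module passed in, so the
# caller gets a genuine toplevel function rather than a local.
function define_g(m::Module)
    Core.eval(m, :(g() = "hello world"))
end
end

using .Mod
define_g(@__MODULE__)   # defines g() in the current module
println(g())            # prints "hello world"
```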

>
> Any thought?
>
> Many thanks,


Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
dude..

dictZ = Dict{Int, Int}()

for z = 1:1000
dictZ[z] = z
end

function testNoFunCopy()
  for z = 1:1000
  function2.z = dictZ[z]
  # do whatever with function2, including making a copy
  end
end

@code_llvm testNoFunCopy()
@code_native testNoFunCopy()

@time testNoFunCopy()

# Test to see if multiple functions are created.  They are.
# We would only need to create a single function if we used Julia's anon, but 
# it's time-inefficient.

dict = Dict{Int, Any}()

for z = 1:1000
function2 = @anon c -> (c + z)
dict[z] =  function2
end

a = 1
b = 2

function testWithFunCopy()
  for z = 1:1000
function1(a,b, dict[z])
  end
end


@code_llvm testWithFunCopy()
@code_native testWithFunCopy()

@time testWithFunCopy()


For 1000 elements:   

0.00019s vs 0.035s respectively

Thanks!

Is the reason the faster code has more allocations because it is 
inserting vars into the single function?  (As opposed to the slower 
code already having its vars filled in.)


On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
>
> Just use 
>
> z = 1 
> function2 = @anon c -> c + z 
> for z = 1:100 
> function2.z = z 
> # do whatever with function2, including making a copy 
> end 
>
> --Tim 
>
> On Friday, January 22, 2016 08:55:25 AM Cedric St-Jean wrote: 
> > (non-mutating) Closures and FastAnonymous work essentially the same way. 
> > They store the data that is closed over (more or less) and a function 
> > pointer. The thing is that there's only one data structure in Julia for 
> all 
> > regular anonymous functions, whereas FastAnonymous creates one per @anon 
> > site. Because the FastAnonymous-created datatype is specific to that 
> > function definition, the standard Julia machinery takes over and 
> produces 
> > efficient code. It's just as good as if the function had been defined 
> > normally with `function foo(...) ... end` 
> > 
> > 
> > for z = 1:100 
> > function2 = @anon c -> (c + z) 
> > 
> > dict[z] =  function2 
> > end 
> > 
> > 
> > So we end up creating multiple functions for each z value. 
> > 
> > 
> > In this code, whether you use @anon or not, Julia will create 100 object 
> > instances to store the z values. 
> > 
> > The speed difference between the two will soon be gone. 
> >  
> > 
> > Cédric 
> > 
> > On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote: 
> > > I have to do some investigating here.  I thought we could do something 
> > > like that but wasn't quite sure how it would look. 
> > > 
> > > Check this out: 
> > > 
> > > This code using FastAnonymous optimizes to the very same code below it 
> > > where functions have been manually injected: 
> > > 
> > > using FastAnonymous 
> > > 
> > > 
> > > function function1(a, b, function2) 
> > > 
> > >   if(a > b) 
> > >   
> > > c = a + b 
> > > return function2(c) 
> > >   
> > >   else 
> > >   
> > > # do anything 
> > > # but return nothing 
> > >   
> > >   end 
> > > 
> > > end 
> > > 
> > > 
> > > z = 10 
> > > function2 = @anon c -> (c + z) 
> > > 
> > > 
> > > a = 1 
> > > b = 2 
> > > @code_llvm function1(a, b, function2) 
> > > @code_native function1(a, b, function2) 
> > > 
> > > Manually unrolled equivalent: 
> > > 
> > > function function1(a, b, z) 
> > > 
> > >   if(a > b) 
> > >   
> > > c = a + b 
> > > return function2(c, z) 
> > >   
> > >   else 
> > >   
> > > # do anything 
> > > # but return nothing 
> > >   
> > >   end 
> > > 
> > > end 
> > > 
> > > 
> > > function function2(c, z) 
> > > 
> > >   return c + z 
> > > 
> > > end 
> > > 
> > > 
> > > a = 1 
> > > b = 2 
> > > z = 10 
> > > 
> > > 
> > > @code_llvm function1(a, b, z) 
> > > 
> > > @code_native function1(a, b, z) 
> > > 
> > > However, this is a bit too simplistic.  My program actually does this: 
> > > 
> > > # Test to see if multiple functions are created.  They are. 
> > > # We would only need to create a single function if we used julia 
> anon, 
> > > but its time inefficient. 
> > > 
> > > dict = Dict{Int, Any}() 
> > > for z = 1:100 
> > > 
> > > function2 = @anon c -> (c + z) 
> > > 
> > > dict[z] =  function2 
> > > 
> > > end 
> > > 
> > > 
> > > a = 1 
> > > b = 2 
> > > 
> > > function test() 
> > > 
> > >   function1(a,b, dict[100]) 
> > >   function1(a,b, dict[50]) 
> > > 
> > > end 
> > > 
> > > @code_llvm test() 
> > > @code_native test() 
> > > 
> > > 
> > > 
> > > So we end up creating multiple functions for each z value.  We could 
> use 
> > > Julia's anon funs, which would only create a single function, however 
> > > these 
> > > lambdas are less performant than FastAnon. 
> > > 
> > > So it's a space vs. time tradeoff: I want the speed of FastAnon, without 
> the 
> > > spatial overhead of storing multiple functions. 
> > > 
> > > Can we be greedy?  :) 
> > > 
> > > On Thursday, January 21, 2016 at 9:56:51 PM UTC-5, Cedric St-Jean 
> wrote: 
> > >> Something like this? 
> > >> 
> > >> 

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
It doesn't look like I can edit my posts anymore.  Is that normal?

I have to clean up the copied OP.


Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Tim Holy
On Friday, January 22, 2016 10:21:31 AM Bryan Rivera wrote:
> For 1000 elements:
> 
> 0.00019s vs 0.035s respectively
> 
> Thanks!

Glad it helped.

> Is the reason the faster code has more allocations bc it is
> inserting vars into the single function?  (Opposed to the slower
> code already having its vars filled in.)

Every time you call @anon, it creates a brand-new type (and an instance of 
that type) that julia has never seen before. That requires JITting any code 
that gets invoked on this instance. So the usual advice, "run once to JIT, 
then do your timing" doesn't work in this case: it JITs every time.
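A trivial illustration of that timing caveat (not from the thread): with an ordinary function the warm-up trick works, because the first call pays the compilation cost once:

```julia
f(x) = x + 1
f(1)           # warm-up: first call triggers JIT compilation
@time f(1)     # second call measures only execution time
# With @anon this trick doesn't help: every @anon call makes a
# brand-new type, so each fresh instance's first use compiles again.
```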

--Tim

> 
> On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
> > Just use
> > 
> > z = 1
> > function2 = @anon c -> c + z
> > for z = 1:100
> > 
> > function2.z = z
> > # do whatever with function2, including making a copy
> > 
> > end
> > 
> > --Tim
> > 

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
I tried to mitigate that by not timing the code where @anon is called.  Is 
that still the case given the two snippets?


Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Stefan Karpinski
Make sure you time it twice – the faster version may generate more code.

On Fri, Jan 22, 2016 at 1:21 PM, Bryan Rivera 
wrote:

> dude..
>
> dictZ = Dict{Int, Int}()
>
> for z = 1:1000
> dictZ[z] = z
> end
>
> function testNoFunCopy()
>   for z = 1:1000
>   function2.z = dictZ[z]
>   # do whatever with function2, including making a copy
>   end
> end
>
> @code_llvm testNoFunCopy()
> @code_native testNoFunCopy()
>
> @time testNoFunCopy()
>
> # Test to see if multiple functions are created.  They are.
> # We would only need to create a single function if we used julia anon,
> but its time inefficient.
>
> dict = Dict{Int, Any}()
>
> for z = 1:1000
> function2 = @anon c -> (c + z)
> dict[z] =  function2
> end
>
> a = 1
> b = 2
>
> function testWithFunCopy()
>   for z = 1:1000
> function1(a,b, dict[z])
>   end
> end
>
>
> @code_llvm testWithFunCopy()
> @code_native testWithFunCopy()
>
> @time testWithFunCopy()
>
>
> For 1000 elements:
>
> 0.00019s vs 0.035s respectively
>
> Thanks!
>
> Is the reason the faster code has more allocations bc it is
> inserting vars into the single function?  (Opposed to the slower
> code already having its vars filled in.)
>
>
> On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
>>
>> Just use
>>
>> z = 1
>> function2 = @anon c -> c + z
>> for z = 1:100
>> function2.z = z
>> # do whatever with function2, including making a copy
>> end
>>
>> --Tim
>>

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
I tried to mitigate that by not timing the code where @anon is called.  Is 
that still the case given the two snippets?

On Friday, January 22, 2016 at 2:15:20 PM UTC-5, Tim Holy wrote:
>
> On Friday, January 22, 2016 10:21:31 AM Bryan Rivera wrote: 
> > For 1000 elements: 
> > 
> > 0.00019s vs 0.035s respectively 
> > 
> > Thanks! 
>
> Glad it helped. 
>
> > Is the reason the faster code has more allocations bc it is 
> > inserting vars into the single function?  (Opposed to the slower 
> > code already having its vars filled in.) 
>
> Every time you call @anon, it creates a brand-new type (and an instance of 
> that type) that julia has never seen before. That requires JITting any 
> code 
> that gets invoked on this instance. So the usual advice, "run once to JIT, 
> then do your timing" doesn't work in this case: it JITs every time. 
>
> --Tim 
>
> > 
> > On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote: 
> > > Just use 
> > > 
> > > z = 1 
> > > function2 = @anon c -> c + z 
> > > for z = 1:100 
> > > 
> > > function2.z = z 
> > > # do whatever with function2, including making a copy 
> > > 
> > > end 
> > > 
> > > --Tim 
> > > 

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
Yea those were averages of 5 runs each, minus the first for JIT.

On Friday, January 22, 2016 at 1:24:00 PM UTC-5, Stefan Karpinski wrote:
>
> Make sure you time it twice – the faster version may generate more code.
>
> On Fri, Jan 22, 2016 at 1:21 PM, Bryan Rivera  > wrote:
>
>> dude..
>>
>> dictZ = Dict{Int, Int}()
>>
>> for z = 1:1000
>> dictZ[z] = z
>> end
>>
>> function testNoFunCopy()
>>   for z = 1:1000
>>   function2.z = dictZ[z]
>>   # do whatever with function2, including making a copy
>>   end
>> end
>>
>> @code_llvm testNoFunCopy()
>> @code_native testNoFunCopy()
>>
>> @time testNoFunCopy()
>>
>> # Test to see if multiple functions are created.  They are.
>> # We would only need to create a single function if we used julia anon, 
>> but its time inefficient.
>>
>> dict = Dict{Int, Any}()
>>
>> for z = 1:1000
>> function2 = @anon c -> (c + z)
>> dict[z] =  function2
>> end
>>
>> a = 1
>> b = 2
>>
>> function testWithFunCopy()
>>   for z = 1:1000
>> function1(a,b, dict[z])
>>   end
>> end
>>
>>
>> @code_llvm testWithFunCopy()
>> @code_native testWithFunCopy()
>>
>> @time testWithFunCopy()
>>
>>
>> For 1000 elements:   
>>
>> 0.00019s vs 0.035s respectively
>>
>> Thanks!
>>
>> Is the reason the faster code has more allocations bc it is 
>> inserting vars into the single function?  (Opposed to the slower 
>> code already having its vars filled in.)
>>
>>
>> On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
>>>
>>> Just use 
>>>
>>> z = 1 
>>> function2 = @anon c -> c + z 
>>> for z = 1:100 
>>> function2.z = z 
>>> # do whatever with function2, including making a copy 
>>> end 
>>>
>>> --Tim 
>>>
>>> On Friday, January 22, 2016 08:55:25 AM Cedric St-Jean wrote: 
>>> > (non-mutating) Closures and FastAnonymous work essentially the same 
>>> way. 
>>> > They store the data that is closed over (more or less) and a function 
>>> > pointer. The thing is that there's only one data structure in Julia 
>>> for all 
>>> > regular anonymous functions, whereas FastAnonymous creates one per 
>>> @anon 
>>> > site. Because the FastAnonymous-created datatype is specific to that 
>>> > function definition, the standard Julia machinery takes over and 
>>> produces 
>>> > efficient code. It's just as good as if the function had been defined 
>>> > normally with `function foo(...) ... end` 
>>> > 
>>> > 
>>> > for z = 1:100 
>>> > function2 = @anon c -> (c + z) 
>>> > 
>>> > dict[z] =  function2 
>>> > end 
>>> > 
>>> > 
>>> > So we end up creating multiple functions for each z value. 
>>> > 
>>> > 
>>> > In this code, whether you use @anon or not, Julia will create 100 
>>> object 
>>> > instances to store the z values. 
>>> > 
>>> > The speed difference between the two will soon be gone. 
>>> >  
>>> > 
>>> > Cédric 
>>> > 
>>> > On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote: 
>>> > > I have to do some investigating here.  I thought we could do 
>>> something 
>>> > > like that but wasn't quite sure how it would look. 
>>> > > 
>>> > > Check this out: 
>>> > > 
>>> > > This code using FastAnonymous optimizes to the very same code below 
>>> it 
>>> > > where functions have been manually injected: 
>>> > > 
>>> > > using FastAnonymous 
>>> > > 
>>> > > 
>>> > > function function1(a, b, function2) 
>>> > > 
>>> > >   if(a > b) 
>>> > >   
>>> > > c = a + b 
>>> > > return function2(c) 
>>> > >   
>>> > >   else 
>>> > >   
>>> > > # do anything 
>>> > > # but return nothing 
>>> > >   
>>> > >   end 
>>> > > 
>>> > > end 
>>> > > 
>>> > > 
>>> > > z = 10 
>>> > > function2 = @anon c -> (c + z) 
>>> > > 
>>> > > 
>>> > > a = 1 
>>> > > b = 2 
>>> > > @code_llvm function1(a, b, function2) 
>>> > > @code_native function1(a, b, function2) 
>>> > > 
>>> > > Manually unrolled equivalent: 
>>> > > 
>>> > > function function1(a, b, z) 
>>> > > 
>>> > >   if(a > b) 
>>> > >   
>>> > > c = a + b 
>>> > > return function2(c, z) 
>>> > >   
>>> > >   else 
>>> > >   
>>> > > # do anything 
>>> > > # but return nothing 
>>> > >   
>>> > >   end 
>>> > > 
>>> > > end 
>>> > > 
>>> > > 
>>> > > function function2(c, z) 
>>> > > 
>>> > >   return c + z 
>>> > > 
>>> > > end 
>>> > > 
>>> > > 
>>> > > a = 1 
>>> > > b = 2 
>>> > > z = 10 
>>> > > 
>>> > > 
>>> > > @code_llvm function1(a, b, z) 
>>> > > 
>>> > > @code_native function1(a, b, z) 
>>> > > 
>>> > > However, this is a bit too simplistic.  My program actually does 
>>> this: 
>>> > > 
>>> > > # Test to see if multiple functions are created.  They are. 
>>> > > # We would only need to create a single function if we used julia 
>>> anon, 
>>> > > but its time inefficient. 
>>> > > 
>>> > > dict = Dict{Int, Any}() 
>>> > > for z = 1:100 
>>> > > 
>>> > > function2 = @anon c -> (c + z) 
>>> > > 
>>> > > dict[z] =  function2 
>>> > > 
>>> > > end 
>>> > > 
>>> > > 
>>> > > a = 1 
>>> > > 

[julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Viral Shah
Only the first post is moderated, to prevent spammers from getting through. 

-viral

On Friday, January 22, 2016 at 7:32:50 AM UTC+5:30, Bryan Rivera wrote:
>
> Guys, it's killing me having to wait hours until my posts are approved.
>
> (Ability to edit would be nice as well.. Although it looks like I can edit 
> all of my posts save for the original.)
>
> What must be done to overcome this limit?
>


Re: [julia-users] how to i get number of arguments of a function?

2016-01-22 Thread Jon Norberg
Is it also possible to get a list of names of the variables used in a 
function? 

e.g.  for 

function f(x,y)
k=0.1
return x*y+k
end

I'd like to get a list ["k","x","y"]

My first thought was to make a method f() that returns this list, but if 
it's possible to do this otherwise, and more generally, that would be very 
useful 

On Thursday, January 21, 2016 at 6:47:38 PM UTC+1, ami...@gmail.com wrote:
>
> Great, thanks a lot.
> In fact, I also need to evaluate the number of arguments of an anonymous 
> function:
>
> julia> function factory(y)
>  return x -> x + y
>end
> factory (generic function with 1 method)
>
> julia> type Foo
>  f::Function
>end
>
> julia> foo = Foo(factory(2))
> Foo((anonymous function))
>
> julia> methods(foo.f)
> ERROR: ArgumentError: argument is not a generic function
>  in methods at reflection.jl:180
>
>
> Any way to do that..?
>
>

Re: [julia-users] how to i get number of arguments of a function?

2016-01-22 Thread Mauro
> Is it also possible to get a list of names of the variables used in a
> function?
>
> e.g.  for
>
> function f(x,y)
> k=0.1
> return x*y+k
> end
>
> I'd like to get a list ["k","x","y"]
>
> My first thought was to make a method f() that returns this list, but if
> its possible to do this otherwise and more generally that would be very
> useful

You'd have to inspect the AST.  Looks like there is indeed a list in there:

julia> f(x,y) = (a=5;x+y+a)
f (generic function with 1 method)

julia> methods(f).defs.func.code
AST(:($(Expr(:lambda, Any[:x,:y], 
Any[Any[Any[:x,:Any,0],Any[:y,:Any,0],Any[:a,:Any,18]],Any[],0,Any[]], :(begin  
# none, line 1:
a = 5
return x + y + a
end)

You can get at it with:

julia> ast = Base.uncompressed_ast(methods(f).defs.func.code)
:($(Expr(:lambda, Any[:x,:y], 
Any[Any[Any[:x,:Any,0],Any[:y,:Any,0],Any[:a,:Any,18]],Any[],0,Any[]], :(begin  
# none, line 1:
a = 5
return x + y + a
end

julia> ast.args[2][1]
3-element Array{Any,1}:
 Any[:x,:Any,0]
 Any[:y,:Any,0]
 Any[:a,:Any,18]
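
Tying that back to the original question, a hedged sketch (Julia 0.4-era 
reflection internals, as above; assumes `f` has a single method) that 
collects the names as strings:

```julia
# Each entry in ast.args[2][1] looks like Any[:x,:Any,0]; the first
# element is the variable's symbol, so string() it to get the name.
ast = Base.uncompressed_ast(methods(f).defs.func.code)
varnames = [string(entry[1]) for entry in ast.args[2][1]]
# for the f above this collects "x", "y" and "a"
```

Note this internal layout is not a stable API and differs across Julia 
versions.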


Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
Thanks

Per my previous comment, unfortunately @async / @spawn cause the app / 
server to exit immediately. 

Let me give @persist a try. 

Cheers! 

vineri, 22 ianuarie 2016, 22:45:10 UTC+1, Erik Schnetter a scris:
>
> If you want to be able to terminate your local Julia process, and have 
> the server continue to run in the background, then you might want to 
> check out . This does the 
> equivalent of run/detach, but in such a way that the detached process 
> runs as daemon. 
>
> Otherwise, if you don't need this functionality, then @async is the way to 
> go. 
>
> -erik 
>
> On Fri, Jan 22, 2016 at 4:40 PM, Stefan Karpinski  > wrote: 
> > @spawn runs a command on a (random) worker process. If you want to do 
> > "background" work in the current process, you can use @async: 
> > 
> > julia> t = @async (sleep(5); rand()) 
> > Task (runnable) @0x000112d746a0 
> > 
> > julia> wait(t) 
> > 0.14543742643271207 
> > 
> > 
> > On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu  > 
> > wrote: 
> >> 
> >> Oh! The ruby analogy made me think about actually spawning the detached 
> >> command! Which produced the desired effect! 
> >> 
> >> julia> @spawn run(detach(`ping www.google.com`)) 
> >> 
> >> 
> >> 
> >> vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris: 
> >>> 
> >>> I guess what I'm looking for is the equivalent of Ruby's Process#spawn 
> >>> 
> >>> In REPL: 
> >>> 
> >>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null') 
> >>> 83210 
> >>> >> <-- the process is running in the 
> background 
> >>> >> and control has been returned to the REPL 
> >>> 
> >>> 
> >>> vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris: 
>  
>  Hi, 
>  
>  I'm hammering at a web app and I'm trying to setup functionality to 
>  monitor the file system for changes and restart/reload the server 
>  automatically so the changes are picked up (I'm using Mux which uses 
>  HttpServer). 
>  
>  The approach I have in mind is: 
>  
>  1. have a startup script which is run from the command line, 
> something 
>  like: 
>  $ julia -L startup.jl 
>  
>  2. the startup script launches the web app, which starts the web 
> server. 
>  My intention was to run 
>  $ julia -L app.jl 
>  as a command inside startup.jl, detached, and have the startup.jl 
> script 
>  get back control, with app.jl running detached in the background. 
>  
>  3. once startup.jl gets back control, it begins monitoring the file 
>  system and when changes are detected, kills the app and relaunches 
> it. 
>  
>  That was the theory. Now, I might be missing something but I can't 
> find 
>  a way to detach the command I'm running and get control back to the 
> startup 
>  script. And I tried a lot of things! 
>  
>  === 
>  
>  I'm providing simpler example using "ping", which also run 
> indefinitely, 
>  similar to the web server. 
>  
>  julia> run(detach(`ping "www.google.com"`)) # the command is 
> detached 
>  and continues to run after the julia REPL is closed, but at this 
> point the 
>  REPL does not get control, there's no cursor available in the REPL 
>  PING www.google.com (173.194.45.82): 56 data bytes 
>  64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms 
>  64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms 
>  ... more output ... 
>  64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms 
>  64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms 
>  ^CERROR: InterruptException: 
>  < here I press Ctrl+C and only now the REPL gets back the cursor, 
> with 
>  the command still running in the background 
>  
>  === 
>  
>  Also, related to this, passing "&" into the command to detach does 
> not 
>  work as expected, the "&" is interpreted as argument of the command. 
> Not 
>  sure if this would help anyway to return control to the startup.jl 
> script? 
>  
>  julia> run(detach(`ping "www.google.com" &`)); 
>  usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize] 
>  [-g sweepminsize] [-h sweepincrsize] [-i wait] [−k 
>  trafficclass] 
>  [-l preload] [-M mask | time] [-m ttl] [-p pattern] 
>  [-S src_addr] [-s packetsize] [-t timeout][-W waittime] 
> [-z 
>  tos] 
>  host 
> ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i 
>  wait] 
>  [−k trafficclass] [-l preload] [-M mask | time] [-m ttl] 
> [-p 
>  pattern] [-S src_addr] 
>  [-s packetsize] [-T ttl] [-t timeout] [-W waittime] 
>  [-z tos] mcast-group 
>  ERROR: failed process: Process(`ping www.google.com &`, 
>  

Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
The problem seems to be that HttpServer cannot run under @async - it exits 
immediately. 

===

using HttpServer

http = HttpHandler() do req::Request, res::Response
Response( ismatch(r"^/hello/", req.resource) ? exit(2) : 404 )
end

server = Server( http )
run( server, 8001 )  # <--- this works but blocks
@async run( server, 8001 ) # <--- this exits immediately

===

It's not necessarily a problem that HttpServer blocks. But what drives me 
nuts is: if I run 
$ julia app.jl & 
in the shell, it works perfectly. The process is placed in the background, 
the server happily listens to the assigned port, etc. 

Why can't I run the same command from within another julia process and get 
the same effect? 


vineri, 22 ianuarie 2016, 22:40:56 UTC+1, Stefan Karpinski a scris:
>
> @spawn runs a command on a (random) worker process. If you want to do 
> "background" work in the current process, you can use @async:
>
> julia> t = @async (sleep(5); rand())
> Task (runnable) @0x000112d746a0
>
> julia> wait(t)
> 0.14543742643271207
>
>
> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu  > wrote:
>
>> Oh! The ruby analogy made me think about actually spawning the detached 
>> command! Which produced the desired effect! 
>>
>> julia> @spawn run(detach(`ping www.google.com`))
>>
>>
>>
>> vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris:
>>>
>>> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>>>
>>> In REPL: 
>>>
>>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
>>> 83210
>>> >> <-- the process is running in the background 
>>> and control has been returned to the REPL
>>>
>>>
>>> vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris:

 Hi, 

 I'm hammering at a web app and I'm trying to setup functionality to 
 monitor the file system for changes and restart/reload the server 
 automatically so the changes are picked up (I'm using Mux which uses 
 HttpServer). 

 The approach I have in mind is: 

 1. have a startup script which is run from the command line, something 
 like: 
 $ julia -L startup.jl

 2. the startup script launches the web app, which starts the web 
 server. My intention was to run 
 $ julia -L app.jl 
 as a command inside startup.jl, detached, and have the startup.jl 
 script get back control, with app.jl running detached in the background. 

 3. once startup.jl gets back control, it begins monitoring the file 
 system and when changes are detected, kills the app and relaunches it. 

 That was the theory. Now, I might be missing something but I can't find 
 a way to detach the command I'm running and get control back to the 
 startup 
 script. And I tried a lot of things! 

 ===

 I'm providing simpler example using "ping", which also run 
 indefinitely, similar to the web server. 

 julia> run(detach(`ping "www.google.com"`)) # the command is detached 
 and continues to run after the julia REPL is closed, but at this point the 
 REPL does not get control, there's no cursor available in the REPL
 PING www.google.com (173.194.45.82): 56 data bytes
 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
 ... more output ...
 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
 ^CERROR: InterruptException:   
   < here I press Ctrl+C and only now the REPL gets 
 back the cursor, with the command still running in the background

 ===

 Also, related to this, passing "&" into the command to detach does not 
 work as expected, the "&" is interpreted as argument of the command. Not 
 sure if this would help anyway to return control to the startup.jl script? 

 julia> run(detach(`ping "www.google.com" &`));
 usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
 [-g sweepminsize] [-h sweepincrsize] [-i wait] [−k 
 trafficclass]
 [-l preload] [-M mask | time] [-m ttl] [-p pattern]
 [-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z 
 tos]
 host
ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i 
 wait]
 [−k trafficclass] [-l preload] [-M mask | time] [-m ttl] 
 [-p pattern] [-S src_addr]
 [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
 [-z tos] mcast-group
 ERROR: failed process: Process(`ping www.google.com &`, 
 ProcessExited(64)) [64]
  in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib

 ===

 Thanks

>>>
>

[julia-users] Help with some code where we would normally use inheritance

2016-01-22 Thread Bryan Rivera
Hopefully you can help me out with this.

I left out function implementations that we shouldn't need to see.  I 
think this part of the code seems OK:


abstract Intervalizer{Time <: Integer, Value <: Union{AbstractFloat, Integer}}


type StreamingIntervalizer{Time, Value} <: Intervalizer{Time, Value}

  intervalState::IntervalState
  currentValue::Value
  defaultValue::Value

  function StreamingIntervalizer(interval, startTime)

this = new(IntervalState{Time}(interval, startTime + interval))

this.defaultValue = zero(Value) # Create a zero of the ::Value.
this.currentValue = this.defaultValue # Set current value to default.

return this
  end
end


function intervalize{T, V}(i::Intervalizer{T,V}, time::T, value::V, 
onCompletedInterval)

if(isNextInterval(i.intervalState, time))

  onCompletedInterval(getCurrentIntervalTime(i.intervalState), i.currentValue)

  # Set next interval.
  shiftInterval!(i)

  # Reset Value.
  resetValue!(i)

end

# Increment current value.
incrementCurrentValue!(value)
end

I need to inject onCompletedInterval rather than return either something or 
nothing because I don't want to do another if block later to check whether 
a value exists.  I think this is correct, but might be part of the problem.

This is where the issue comes in:

type HistoricalIntervalizer{Time, Value} <: Intervalizer{Time, Value}

  si::StreamingIntervalizer{Time, Value}
  array::Array{Value}

  function HistoricalIntervalizer(interval, startTime)

array = zeros(Value, 24 * 60)

this = new(StreamingIntervalizer(interval, startTime), array)

return this
  end
end


using FastAnonymous

# !! We are stuck having to specify the type here, because saveToArray requires an initial array.
array = zeros(Float64, 24 * 60)

# Define our anon function once.
saveToArray = @anon (time, value) -> begin
  array[time + 1] = value
end

function intervalize{T, V}(hi::HistoricalIntervalizer{T,V}, time::T, value::V)
  saveToArray.array = hi.array
  intervalize(hi.si, time, value, saveToArray)
end
 

With this we have to fix the type of the array so we can define the anon 
function.

We could slightly modify this approach by moving the anon function 
declaration into the inner constructor so the type is not a problem, but 
then we would be creating multiple anon functions.

Any ideas?
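
One direction worth noting (a hedged sketch, not an implementation from this 
thread): make the callback a type parameter of the container, the same trick 
FastAnonymous itself relies on, so the array's element type stays generic 
while the callback's concrete type is still known to the compiler.

```julia
# Hedged sketch: F captures the callback's concrete type, so intervalize
# can specialize without fixing the array's element type up front.
# Field names here are assumptions for illustration.
type HistoricalIntervalizer2{Time, Value, F}
    si::StreamingIntervalizer{Time, Value}
    array::Vector{Value}
    onCompleted::F          # e.g. an @anon object, or any callable
end

function intervalize{T, V}(hi::HistoricalIntervalizer2{T, V}, time::T, value::V)
    intervalize(hi.si, time, value, hi.onCompleted)
end
```

Whether this fits depends on how the callback is constructed at the call 
site; it avoids both the fixed Float64 array and per-instance anon functions.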



Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
Oh, @async has worked actually! 

It correctly ran the command, but the startup script itself was finishing 
and exiting immediately after. 

Thank you very much Stefan and Erik! 
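
The resulting pattern, as a hedged sketch (assuming the `server` object 
from earlier in the thread, and HttpServer's blocking `run`):

```julia
# Launch the server without blocking, then block the script so it
# does not exit while the server runs in the background task.
server_task = @async run(server, 8001)
# ... file-system monitoring / restart logic would go here ...
wait(server_task)   # keep the startup script alive
```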


vineri, 22 ianuarie 2016, 23:26:23 UTC+1, Adrian Salceanu a scris:
>
> Thanks! 
>
> @async works perfectly with your example. And also works great with 
> running the "ping" command. The problem is web app / Mux / HttpServer exit 
> immediately if run @async. Same with @spawn, the app exits immediately. 
>
>
> vineri, 22 ianuarie 2016, 22:40:56 UTC+1, Stefan Karpinski a scris:
>>
>> @spawn runs a command on a (random) worker process. If you want to do 
>> "background" work in the current process, you can use @async:
>>
>> julia> t = @async (sleep(5); rand())
>> Task (runnable) @0x000112d746a0
>>
>> julia> wait(t)
>> 0.14543742643271207
>>
>>
>> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu  
>> wrote:
>>
>>> Oh! The ruby analogy made me think about actually spawning the detached 
>>> command! Which produced the desired effect! 
>>>
>>> julia> @spawn run(detach(`ping www.google.com`))
>>>
>>>
>>>
>>> vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris:

 I guess what I'm looking for is the equivalent of Ruby's Process#spawn

 In REPL: 

 >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
 83210
 >> <-- the process is running in the background 
 and control has been returned to the REPL


 vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris:
>
> Hi, 
>
> I'm hammering at a web app and I'm trying to setup functionality to 
> monitor the file system for changes and restart/reload the server 
> automatically so the changes are picked up (I'm using Mux which uses 
> HttpServer). 
>
> The approach I have in mind is: 
>
> 1. have a startup script which is run from the command line, something 
> like: 
> $ julia -L startup.jl
>
> 2. the startup script launches the web app, which starts the web 
> server. My intention was to run 
> $ julia -L app.jl 
> as a command inside startup.jl, detached, and have the startup.jl 
> script get back control, with app.jl running detached in the background. 
>
> 3. once startup.jl gets back control, it begins monitoring the file 
> system and when changes are detected, kills the app and relaunches it. 
>
> That was the theory. Now, I might be missing something but I can't 
> find a way to detach the command I'm running and get control back to the 
> startup script. And I tried a lot of things! 
>
> ===
>
> I'm providing simpler example using "ping", which also run 
> indefinitely, similar to the web server. 
>
> julia> run(detach(`ping "www.google.com"`)) # the command is detached 
> and continues to run after the julia REPL is closed, but at this point 
> the 
> REPL does not get control, there's no cursor available in the REPL
> PING www.google.com (173.194.45.82): 56 data bytes
> 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
> 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
> ... more output ...
> 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
> 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
> ^CERROR: InterruptException:   
>   < here I press Ctrl+C and only now the REPL 
> gets 
> back the cursor, with the command still running in the background
>
> ===
>
> Also, related to this, passing "&" into the command to detach does not 
> work as expected, the "&" is interpreted as argument of the command. Not 
> sure if this would help anyway to return control to the startup.jl 
> script? 
>
> julia> run(detach(`ping "www.google.com" &`));
> usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
> [-g sweepminsize] [-h sweepincrsize] [-i wait] [−k 
> trafficclass]
> [-l preload] [-M mask | time] [-m ttl] [-p pattern]
> [-S src_addr] [-s packetsize] [-t timeout][-W waittime] 
> [-z tos]
> host
>ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i 
> wait]
> [−k trafficclass] [-l preload] [-M mask | time] [-m ttl] 
> [-p pattern] [-S src_addr]
> [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
> [-z tos] mcast-group
> ERROR: failed process: Process(`ping www.google.com &`, 
> ProcessExited(64)) [64]
>  in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib
>
> ===
>
> Thanks
>

>>

Re: [julia-users] Macros, functions and module

2016-01-22 Thread Yichao Yu
On Fri, Jan 22, 2016 at 4:34 PM,   wrote:
> Ok this could seem to make no sense, and this actually does not...

It will not work since @eval evals in the module that calls the
macro; you need to use the eval function and supply the right
module. `current_module()` might work but it really depends on what you
want to do and what API you are designing.

> Of course it solved the error in this precise case but the point of
> generating functions through macros inside the module was precisely because
> I have to define a type outside the module that should be used inside one of
> the functions of the module. As the module is parsed, this type is not yet
> defined... So, obviously, when I use @eval in this case, it miserably fails.
> I'll try to come up with a minimal example which describe a bit more what I
> want to do.


Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
Thanks! 

@async works perfectly with your example. And also works great with running 
the "ping" command. The problem is web app / Mux / HttpServer exit 
immediately if run @async. Same with @spawn, the app exits immediately. 


vineri, 22 ianuarie 2016, 22:40:56 UTC+1, Stefan Karpinski a scris:
>
> @spawn runs a command on a (random) worker process. If you want to do 
> "background" work in the current process, you can use @async:
>
> julia> t = @async (sleep(5); rand())
> Task (runnable) @0x000112d746a0
>
> julia> wait(t)
> 0.14543742643271207
>
>
> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu  > wrote:
>
>> Oh! The ruby analogy made me think about actually spawning the detached 
>> command! Which produced the desired effect! 
>>
>> julia> @spawn run(detach(`ping www.google.com`))
>>
>>
>>
>> vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris:
>>>
>>> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>>>
>>> In REPL: 
>>>
>>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
>>> 83210
>>> >> <-- the process is running in the background 
>>> and control has been returned to the REPL
>>>
>>>
>>> vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris:

 Hi, 

 I'm hammering at a web app and I'm trying to setup functionality to 
 monitor the file system for changes and restart/reload the server 
 automatically so the changes are picked up (I'm using Mux which uses 
 HttpServer). 

 The approach I have in mind is: 

 1. have a startup script which is run from the command line, something 
 like: 
 $ julia -L startup.jl

 2. the startup script launches the web app, which starts the web 
 server. My intention was to run 
 $ julia -L app.jl 
 as a command inside startup.jl, detached, and have the startup.jl 
 script get back control, with app.jl running detached in the background. 

 3. once startup.jl gets back control, it begins monitoring the file 
 system and when changes are detected, kills the app and relaunches it. 

 That was the theory. Now, I might be missing something but I can't find 
 a way to detach the command I'm running and get control back to the 
 startup 
 script. And I tried a lot of things! 

 ===

 I'm providing simpler example using "ping", which also run 
 indefinitely, similar to the web server. 

 julia> run(detach(`ping "www.google.com"`)) # the command is detached 
 and continues to run after the julia REPL is closed, but at this point the 
 REPL does not get control, there's no cursor available in the REPL
 PING www.google.com (173.194.45.82): 56 data bytes
 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
 ... more output ...
 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
 ^CERROR: InterruptException:   
   < here I press Ctrl+C and only now the REPL gets 
 back the cursor, with the command still running in the background

 ===

 Also, related to this, passing "&" into the command to detach does not 
 work as expected, the "&" is interpreted as argument of the command. Not 
 sure if this would help anyway to return control to the startup.jl script? 

 julia> run(detach(`ping "www.google.com" &`));
 usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
 [-g sweepminsize] [-h sweepincrsize] [-i wait] [−k 
 trafficclass]
 [-l preload] [-M mask | time] [-m ttl] [-p pattern]
 [-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z 
 tos]
 host
ping [-AaDdfLnoQqRrv] [-b boundif] [-c count] [-I iface] [-i 
 wait]
 [−k trafficclass] [-l preload] [-M mask | time] [-m ttl] 
 [-p pattern] [-S src_addr]
 [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
 [-z tos] mcast-group
 ERROR: failed process: Process(`ping www.google.com &`, 
 ProcessExited(64)) [64]
  in run at /usr/local/Cellar/julia/0.4.2/lib/julia/sys.dylib

 ===

 Thanks

>>>
>

[julia-users] Re: Help with some code where we would normally use inheritance

2016-01-22 Thread Bryan Rivera
I think I almost got it - I just need to figure out how to inject 
onCompletedInterval with the proper vars in the macro.

macro gen_intervalize(onCompletedInterval)

  return quote
  if(isNextInterval(i.intervalState, time))

onCompletedInterval(getCurrentIntervalTime(i.intervalState), i.currentValue)

# Set next interval.
shiftInterval!(i)

# Reset Value.
resetValue!(i)
  end

  # Increment current value.
  incrementCurrentValue!(value)
   end
end


function intervalize{T, V}(hi::HistoricalIntervalizer{T,V}, time::T, value::V)
  @gen_intervalize quote
minuteOfDay = (time / (60  * 1000))
hi.array[minuteOfDay + 1] = value
  end
end

That's powerful.  



[julia-users] help with using Julia + Cilk

2016-01-22 Thread Kevin Lin
Hi all,

I have been working on a project that involves a Julia front-end calling a 
C back-end (it's a neural network simulator, in case you're wondering). 
 Since the C side is essentially a bunch of nested for-loops, I've been 
experimenting with using Cilk+ (via GCC5) to parallelize things.  Loading 
and running the C code from within a Julia session works just fine, both 
from the command-line and from Jupyter.  But using Cilk somehow messes with 
PyPlot.

Specifically, here is a minimal example that produces the problem I see in 
Jupyter (relevant code below):

In [1]: include("hello.jl")

hello 5
hello 0
hello 2
hello 1
hello 6
hello 3
hello 4
hello 7
hello 8
hello 9


In [2]: using PyPlot

Warning: error initializing module PyPlot:
ErrorException("Failed to pyimport("matplotlib"): PyPlot will not work until 
you have a functioning matplotlib module.

For automated Matplotlib installation, try configuring PyCall to use the Conda 
Python distribution within Julia.  Relaunch Julia and run:
  ENV["PYTHON"]=""
  Pkg.build("PyCall")
  using PyPlot

pyimport exception was: PyError (:PyImport_ImportModule) 
AttributeError('_ARRAY_API not found',)
  File "/usr/local/lib/python2.7/site-packages/matplotlib/__init__.py", line 180, in <module>
    from matplotlib.cbook import is_string_like
  File "/usr/local/lib/python2.7/site-packages/matplotlib/cbook.py", line 34, in <module>
    import numpy.ma as ma
  File "/usr/local/lib/python2.7/site-packages/numpy/ma/__init__.py", line 46, in <module>
    from . import core
  File "/usr/local/lib/python2.7/site-packages/numpy/ma/core.py", line 30, in <module>
    import numpy.core.umath as umath
")

The code:

hello.jl:
module hello
ccall((:hello, "hello.dylib"), Void, (Cint,), 10)
end # module


hello.c:
/* compile with: gcc -fcilkplus -dynamiclib -o hello.dylib hello.c -lcilkrts */
/* The archive stripped the header names; these are the ones the calls below need. */
#include <cilk/cilk.h>
#include <stdio.h>
#include <unistd.h>

void hello(int n) {
    int i;
    cilk_for (i = 1; i <= n; i++) {
        sleep(1);   /* sleep() takes an unsigned int */
        printf("hello %d\n", i);
    }
}

I'm on a Mac (10.11) running Julia 0.4.3, PyPlot 2.1.1, and gcc 5.3.0 
(macports). My installation of PyPlot has been working just fine except for 
this particular issue.

Also, if I change the "cilk_for" in hello.c to "for" (but still link 
libcilkrts) then PyPlot would load and work just fine.

Any idea what's going on? I'm not sure this is a problem with Julia or 
PyPlot per se, but don't know enough about the ffi, the cilk runtime, or 
PyPlot/PyCall internals to figure this out.

Thanks for any pointers!

KL



[julia-users] Re: Using MNE with Julia using PyCall

2016-01-22 Thread Rob
Thanks, this solved my problem.

On Tuesday, 19 January 2016 00:59:33 UTC+1, Steven G. Johnson wrote:
>
>
>
> On Monday, January 18, 2016 at 6:03:15 PM UTC-5, Rob wrote:
>>
>> I am trying to use the python MNE library in Julia.
>>
>> When I call the python function it returns a `Dict{Any,Any}` instead of a 
>> type `info`, when I pass this variable back to another python function I 
>> get the error 
>>
>
> You can use the lower-level "pycall" function to have more control over 
> the return type and any conversions.
>
> e.g. you can do
>
>  pycall(somepythonfunction, PyObject, args)
>
> to return a "raw" Python object with no conversions. 
>
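Concretely, the return-type-pinning pattern looks like this (a sketch using the standard `json` module instead of MNE, and the 0.4-era PyCall `obj[:attr]` attribute syntax):

```julia
using PyCall
json = pyimport("json")

# Default behavior: PyCall converts the Python dict to a Julia Dict{Any,Any}
d = json[:loads]("""{"a": 1}""")

# Pinned: the result stays an opaque PyObject, so it can be passed back
# to other Python functions (e.g. MNE's) without any round-trip conversion
po = pycall(json[:loads], PyObject, """{"a": 1}""")
```

The same call shape applies to any MNE function that returns an `info` object.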


[julia-users] help with using Julia + Cilk

2016-01-22 Thread Lutfullah Tomak
It is unrelated to Julia because you are missing a python library that is 
needed by PyPlot. From the error, I think you're missing numpy.

[julia-users] Interrupting script gives constant error

2016-01-22 Thread Lutfullah Tomak
I have a julia script that runs IJulia
as

#!/bin/sh
julia -e "using IJulia; notebook()"

Interrupting this script gives recurring error reading

jl_uv_writecb() ERROR: bad file descripter EBADF

and cannot be stopped. I needed to cancel because IJulia freezes, giving 
lots of deprecation warnings because of the recent change in the read* and 
write* functions. I'm on 0.5-dev+2238 Commit 8e036b4, Debian Sid, armv7-a.
How can I stop those errors, and how can I stop the deprecation warnings?

Re: [julia-users] Optimizing Function Injection in Julia

2016-01-22 Thread Kevin Squire
I have noticed that I've had to approve more than one post by Bryan.

Bryan, have you been using the same email address when you post here?

Kevin

On Friday, January 22, 2016, Viral Shah  wrote:

> Only the first post is moderated, to prevent spammers from getting
> through.
>
> -viral
>
> On Friday, January 22, 2016 at 7:32:50 AM UTC+5:30, Bryan Rivera wrote:
>>
>> Guys, it's killing me having to wait hours until my posts are approved.
>>
>> (Ability to edit would be nice as well.. Although it looks like I can
>> edit all of my posts save for the original.)
>>
>> What must be done to overcome this limit?
>>
>


Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-22 Thread Adrian Salceanu
No no, it's perfectly fine - it was my fault. What I hadn't realized is 
that if I start the server async, then my script finishes immediately, 
which also terminates the server. It's my responsibility to keep the 
whole app alive now. 

It works like a charm! 
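A minimal sketch of that fix, assuming the 0.4-era HttpServer API used in this thread: start the server on a background task, then block the main task yourself so the process stays alive.

```julia
using HttpServer

http = HttpHandler() do req::Request, res::Response
    Response("hello")
end
server = Server(http)

@async run(server, 8001)   # returns immediately; the server runs on its own task
while true                 # keep the main task (and thus the process) alive
    sleep(60)              # Ctrl+C to stop
end
```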


sâmbătă, 23 ianuarie 2016, 00:06:13 UTC+1, Stefan Karpinski a scris:
>
> The shell works with processes; Julia has tasks, which are not the same 
> thing...
>
> On Fri, Jan 22, 2016 at 5:49 PM, Adrian Salceanu  > wrote:
>
>> The problem seems to that HttpServer can not run @async - it exits 
>> immediately. 
>>
>> ===
>>
>> using HttpServer
>>
>> http = HttpHandler() do req::Request, res::Response
>> Response( ismatch(r"^/hello/", req.resource) ? exit(2) : 404 )
>> end
>>
>> server = Server( http )
>> run( server, 8001 )  # <--- this works but blocks
>> @async run( server, 8001 ) # <--- this exits immediately
>>
>> ===
>>
>> It's not necessarily a problem that HttpServer blocks. But what drives me 
>> nuts is: if I run 
>> $ julia app.jl & 
>> in the shell, it works perfectly. The process is placed in the 
>> background, the server happily listens to the assigned port, etc. 
>>
>> Why can't I run the same command from within another julia process and 
>> get the same effect? 
>>
>>
>> vineri, 22 ianuarie 2016, 22:40:56 UTC+1, Stefan Karpinski a scris:
>>>
>>> @spawn runs a command on a (random) worker process. If you want to do 
>>> "background" work in the current process, you can use @async:
>>>
>>> julia> t = @async (sleep(5); rand())
>>> Task (runnable) @0x000112d746a0
>>>
>>> julia> wait(t)
>>> 0.14543742643271207
>>>
>>>
>>> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu  
>>> wrote:
>>>
 Oh! The ruby analogy made me think about actually spawning the detached 
 command! Which produced the desired effect! 

 julia> @spawn run(detach(`ping www.google.com`))



 vineri, 22 ianuarie 2016, 22:29:27 UTC+1, Adrian Salceanu a scris:
>
> I guess what I'm looking for is the equivalent of Ruby's Process#spawn
>
> In REPL: 
>
> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
> 83210
> >> <-- the process is running in the 
> background and control has been returned to the REPL
>
>
> vineri, 22 ianuarie 2016, 22:06:01 UTC+1, Adrian Salceanu a scris:
>>
>> Hi, 
>>
>> I'm hammering at a web app and I'm trying to setup functionality to 
>> monitor the file system for changes and restart/reload the server 
>> automatically so the changes are picked up (I'm using Mux which uses 
>> HttpServer). 
>>
>> The approach I have in mind is: 
>>
>> 1. have a startup script which is run from the command line, 
>> something like: 
>> $ julia -L startup.jl
>>
>> 2. the startup script launches the web app, which starts the web 
>> server. My intention was to run 
>> $ julia -L app.jl 
>> as a command inside startup.jl, detached, and have the startup.jl 
>> script get back control, with app.jl running detached in the background. 
>>
>> 3. once startup.jl gets back control, it begins monitoring the file 
>> system and when changes are detected, kills the app and relaunches it. 
>>
>> That was the theory. Now, I might be missing something but I can't 
>> find a way to detach the command I'm running and get control back to the 
>> startup script. And I tried a lot of things! 
>>
>> ===
>>
>> I'm providing simpler example using "ping", which also run 
>> indefinitely, similar to the web server. 
>>
>> julia> run(detach(`ping "www.google.com"`)) # the command is 
>> detached and continues to run after the julia REPL is closed, but at 
>> this 
>> point the REPL does not get control, there's no cursor available in the 
>> REPL
>> PING www.google.com (173.194.45.82): 56 data bytes
>> 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
>> 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
>> ... more output ...
>> 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
>> 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
>> ^CERROR: InterruptException: 
>> < here I press Ctrl+C and only now the REPL 
>> gets back the cursor, with the command still running in the background
>>
>> ===
>>
>> Also, related to this, passing "&" into the command to detach does 
>> not work as expected, the "&" is interpreted as argument of the command. 
>> Not sure if this would help anyway to return control to the startup.jl 
>> script? 
>>
>> julia> run(detach(`ping "www.google.com" &`));
>> usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
>> [-g sweepminsize] 

[julia-users] randperm run time is slow

2016-01-22 Thread Brian Lucena
I am running a simulation that requires me to generate random permutations 
of size 300 million.  I would ideally like to run this generation in a loop 
10K or 100K times. I am surprised that randperm is not faster in Julia.  It 
seems to be considerably slower than the equivalent in R (and R is clearly 
not known for speed)  :) 

In Julia:

julia> @time asd = randperm(300000000)

 43.437829 seconds (6 allocations: 2.235 GB, 0.01% gc time)

300000000-element Array{Int64,1}:

In R


> start = Sys.time()

> asd = sample(300000000)

> Sys.time()-start

Time difference of 23.27244 secs

Julia seems to be twice as slow!

Any thoughts on why that is?  Does randperm use the Knuth shuffle, or some 
other algorithm?

Thanks,
Brian



[julia-users] optim stopping without reaching convergence

2016-01-22 Thread grandemundo82
There is really something weird happening with Optim; I'm really puzzled by 
what could be going wrong.

So when I run:

optr = optimize(objfun,sps0,method = :nelder_mead,iterations=2000)

The algorithm stops after precisely 10 function evaluations. Output:

 

Results of Optimization Algorithm
 * Algorithm: Nelder-Mead
 * Starting Point: [0.66,0.5,-0.3,0.05,5.0,2.0,3.0,2.0,3.0]
 * Minimum: [0.7602,0.6001, ...]
 * Value of Function at Minimum: 2438.299855
 * Iterations: *2000*
 * Convergence: false
   * |x - x'| < NaN: false
   * |f(x) - f(x')| / |f(x)| < 1.0e-08: false
   * |g(x)| < NaN: false
   * Exceeded Maximum Number of Iterations: true
 * Objective Function Calls: *2010*
 * Gradient Call: 0




As you can see, Objective Function Calls is 2010 and Iterations is 2000, while I am 
sure these are just function evaluations (1000-2000).


Does anyone have any clue what could be causing this?

Should I reset the iteration number somewhere? How do I do that?


Many thanks!!!



[julia-users] Cannot pull with rebase when I try to use Pkg.update()

2016-01-22 Thread Diogo Falcão
I am trying to use Julia on Windows for the first time, and when I try to 
run Pkg.update(), I got this error:

error: Cannot pull with rebase: You have unstaged changes.


When I run "git status" in the METADATA folder, I got:


On branch metadata-v2
Your branch is behind 'origin/metadata-v2' by 524 commits, and can be 
fast-forwarded.
  (use "git pull" to update your local branch)
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

modified:   FixedEffectModels/versions/0.1.0/requires
modified:   FixedEffectModels/versions/0.1.1/requires
modified:   FixedEffectModels/versions/0.2.0/requires
modified:   FixedEffectModels/versions/0.2.1/requires
modified:   FixedEffectModels/versions/0.2.2/requires

no changes added to commit (use "git add" and/or "git commit -a")


How can I solve this? Thanks
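One common fix, sketched below (an assumption, not from this thread): `Pkg.update()` runs `git pull --rebase` inside the METADATA folder, so discarding or stashing the unstaged changes there lets the rebase proceed. The throwaway repo only demonstrates the git side of the recipe; the real path would be something like `~/.julia/v0.4/METADATA`.

```shell
set -e
repo=$(mktemp -d)                # stand-in for the METADATA clone
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo v1 > requires
git add requires && git commit -qm init
echo v2 > requires               # an unstaged change, like the modified files above
git checkout -- requires         # discard it (use `git stash` instead to keep it)
git status --porcelain           # prints nothing: the tree is clean, rebase can run
```

Once the working tree is clean, re-running `Pkg.update()` from Julia should fast-forward METADATA normally.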

-- 

This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the 
addressee, please, you must not use, copy, disclose, change, take any 
action based on this message or any information herein. Personal opinions 
of the sender do not necessarily reflect the view of Neurotech, which is 
only divulged by authorized persons. Please consider the environment before 
printing this email.


Re: [julia-users] randperm run time is slow

2016-01-22 Thread Rafael Fourquet
> Let's capture this as a Julia performance issue on github,
> if we can't figure out an easy way to speed this up right away.

I think I remember having identified a potentially sub-optimal
implementation of this function a few weeks back (perhaps no more than
what Tim suggested) and had planned to investigate further (when time
permits...)
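For reference, here is a sketch of the inside-out Fisher-Yates (Knuth) shuffle, the standard O(n) way to generate a uniform random permutation; this is the kind of algorithm one would expect randperm to implement (an assumption about the question above, not a statement about Base's actual code):

```julia
function myrandperm(n::Integer)
    a = zeros(Int, n)
    for i = 1:n
        j = rand(1:i)   # uniform position in 1..i
        a[i] = a[j]     # move the old occupant of slot j...
        a[j] = i        # ...and drop the new element i into slot j
    end
    return a
end

p = myrandperm(10)
sort(p) == collect(1:10)   # true: p is a permutation of 1:10
```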


Re: [julia-users] Interrupting script gives constant error

2016-01-22 Thread Lutfullah Tomak
Thanks for your answer.
I later googled jl_uv_writecb. Basically, it happens because of deprecation 
warnings, and it also happens in the REPL if a deprecated method is interrupted. 
The deprecation warning for readall would make IJulia unusable even if I used 
jupyter.

[julia-users] Re: Interrupting script gives constant error

2016-01-22 Thread Bryan Rivera
You are on the potentially unstable dev branch, 0.5.

Try 0.4 instead.  

You should probably file this as an issue on Julia's GitHub.

On Friday, January 22, 2016 at 7:32:55 AM UTC-5, Lutfullah Tomak wrote:
>
> I have a julia script that runs IJulia
> as
>
> #!/bin/sh
> julia -e "using IJulia; notebook()"
>
> Interrupting this script gives recurring error reading
>
> jl_uv_writecb() ERROR: bad file descripter EBADF
>
> and cannot be stopped. I needed to cancel because IJulia freezes, giving 
> lots of deprecation warnings because of the recent change in the read* and 
> write* functions. I'm on 0.5-dev+2238 Commit 8e036b4, Debian Sid, armv7-a.
> How can I stop those errors, and how can I stop the deprecation warnings?
>
>

Re: [julia-users] Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
Thanks for the responses guys.

@Mauro Hopefully I'm approved soon :)

@Viral Makes sense, looks like it works.

@Kevin This is the only email I've used to post here.  It might be another 
Bryan.




[julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Bryan Rivera
I have to do some investigating here.  I thought we could do something like 
that but wasn't quite sure how it would look.

Check this out:

This code using FastAnonymous optimizes to the very same code below it 
where functions have been manually injected:

using FastAnonymous


function function1(a, b, function2)
  if(a > b)
c = a + b
return function2(c)
  else
# do anything
# but return nothing
  end
end


z = 10
function2 = @anon c -> (c + z)


a = 1
b = 2
@code_llvm function1(a, b, function2)
@code_native function1(a, b, function2)

Manually unrolled equivalent:

function function1(a, b, z)
  if(a > b)
c = a + b
return function2(c, z)
  else
# do anything
# but return nothing
  end
end


function function2(c, z)
  return c + z
end


a = 1
b = 2
z = 10


@code_llvm function1(a, b, z)

@code_native function1(a, b, z)

However, this is a bit too simplistic.  My program actually does this:

# Test to see if multiple functions are created.  They are.
# We would only need to create a single function if we used Julia's anon, but 
# it's time-inefficient.

dict = Dict{Int, Any}()
for z = 1:100
function2 = @anon c -> (c + z)

dict[z] =  function2
end


a = 1
b = 2

function test()
  function1(a,b, dict[100])
  function1(a,b, dict[50])
end

@code_llvm test()
@code_native test()



So we end up creating a separate function type for each z value.  We could use 
Julia's anonymous functions, which would only create a single function, but those 
lambdas are less performant than FastAnonymous.

So it's a space vs. time tradeoff: I want the speed of FastAnonymous without the 
spatial overhead of storing multiple functions.

Can we be greedy?  :)


On Thursday, January 21, 2016 at 9:56:51 PM UTC-5, Cedric St-Jean wrote:
>
> Something like this?
>
> function function1(a, b, f) # Variable needed in callback fun injected.
> if(a > b)
>   c = a + b
>   res = f(c) # Callback function has been injected.
>   return res + 1
> else
>   # do anything
>   # but return nothing
> end
> end
>
> type SomeCallBack
> z::Int
> end
> Base.call(callback::SomeCallBack, c) = c + callback.z
>
> function1(2, 1, SomeCallBack(10))
>
> Because of JIT, this is 100% equivalent to your "callback function has 
> been injected" example, performance-wise. My feeling is that .call 
> overloading is not to be abused in Julia, so I would favor using a regular 
> function call with a descriptive name instead of call overloading, but the 
> same performance guarantees apply. Does that answer your question?
>
> On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Bryan Rivera wrote:
>>
>> I think what I wrote above might be too complicated, as it is an attempt 
>> to solve this problem.
>>
>> In essence this is what I want: 
>>
>>
>>
>> function function1(a, b, onGreaterThanCallback)
>>   if(a > b)
>> c = a + b
>> res = onGreaterThanCallback(c, z)
>> return res + 1
>>   else
>> # do anything
>> # but return nothing
>>   end
>> end
>>
>>
>> global onGreaterThanCallback = (c) -> c + z
>>
>> function1(a, b, onGreaterThanCallback)
>>
>>
>> Problems:
>>
>> The global variable.
>>
>> The anonymous function which has performance impact (vs other 
>> approaches).  We could use Tim Holy's @anon, but then the value of `z` is 
>> fixed at function definition, which we don't always want.
>>
>> I think that the ideal optimization would look like this:
>>
>>   function function1(a, b, z) # Variable needed in callback fun 
>> injected.
>> if(a > b)
>>   c = a + b
>>   res = c + z # Callback function has been injected.
>>   return res + 1
>> else
>>   # do anything
>>   # but return nothing
>> end
>>   end
>>
>>
>>   function1(a, b, z)
>>
>> In OO languages we would be using an abstract class or its equivalent. 
>>  But I've thought about it, and read the discussions on interfaces, and 
>> don't see those solutions optimizing the code out like I did above.
>>
>> Any ideas?
>>
>
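For the record, the dispatch-based answer to the "abstract class" question quoted above can be sketched in 0.4-era syntax (the names `Callback`, `AddZ`, and `invoke_cb` are illustrative, not from the thread):

```julia
abstract Callback                 # Julia 0.4 syntax (`abstract type ... end` in 1.x)

immutable AddZ <: Callback        # one concrete subtype per behavior
    z::Int
end
invoke_cb(cb::AddZ, c) = c + cb.z

function function1(a, b, cb::Callback)
    if a > b
        c = a + b
        return invoke_cb(cb, c) + 1
    end                           # returns nothing otherwise
end

function1(2, 1, AddZ(10))         # 14: dispatch on the concrete type inlines the body
```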

Re: [julia-users] Issue with return values from subtypes

2016-01-22 Thread Scott Jones


On Friday, January 22, 2016 at 1:03:05 AM UTC-5, Mauro wrote:
>
> you mean why the second errors: 
>
> julia> Irrational 
> Irrational{sym} 
>
> julia> Irrational{sym} 
> ERROR: UndefVarError: sym not defined 
>
> ?  Irrational is all that is needed. Would this help: 
>

I guess my point is, why does subtypes display something that is not a 
valid type?
It seems to be more of a display problem, in that if you use the type 
returned from subtypes directly, it is fine.
Same thing with the returned Rational{T<:Integer}, where if you try to use 
that, you get an error about the <:
What would be the correct text forms that would create the same types as 
returned by subtypes(Real)?

Thanks, Scott
 

> julia> Irrational{TypeVar(:T)} 
> Irrational{T} 
>
> On Thu, 2016-01-21 at 19:21, Scott Jones  > wrote: 
> > I ran across something strange today, with some of the test code, that 
> used 
> > something like: 
> > for x in [subtypes(Real) ; subtypes(Complex)] 
> > ... 
> > end 
> > 
> > The issue is that subtypes(Real) returns: 
> > 
> > *julia> **subtypes(Real)* 
> > 
> > *4-element Array{Any,1}:* 
> > 
> > * AbstractFloat   * 
> > 
> > * Integer * 
> > 
> > * Irrational{sym} * 
> > 
> > * Rational{T<:Integer}* 
> > 
> > If instead of having subtypes(Real), I try to put the types in directly, 
> it 
> > won't accept either Irrational{sym} or Rational{T<:Integer}. 
> > Is there a correct way to represent those?  Should Irrational{sym} be 
> > something like Irrational{::Symbol}? 
> > Is this a bug in subtypes? 
> > 
> > Thanks, Scott 
>


[julia-users] Re: Interrupting script gives constant error

2016-01-22 Thread Lutfullah Tomak
Since I am on an ARM machine, I use the latest master branch for the ARM ABI. The 
recent update to the read* and write* methods has not been picked up in IJulia. I 
think I will just update the deprecated methods in IJulia myself.

Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Tim Holy
Just use

z = 1
function2 = @anon c -> c + z
for z = 1:100
function2.z = z
# do whatever with function2, including making a copy
end

--Tim

On Friday, January 22, 2016 08:55:25 AM Cedric St-Jean wrote:
> (non-mutating) Closures and FastAnonymous work essentially the same way.
> They store the data that is closed over (more or less) and a function
> pointer. The thing is that there's only one data structure in Julia for all
> regular anonymous functions, whereas FastAnonymous creates one per @anon
> site. Because the FastAnonymous-created datatype is specific to that
> function definition, the standard Julia machinery takes over and produces
> efficient code. It's just as good as if the function had been defined
> normally with `function foo(...) ... end`
> 
> 
> for z = 1:100
> function2 = @anon c -> (c + z)
> 
> dict[z] =  function2
> end
> 
> 
> So we end up creating multiple functions for each z value.
> 
> 
> In this code, whether you use @anon or not, Julia will create 100 object
> instances to store the z values.
> 
> The speed difference between the two will soon be gone.
> 
> 
> Cédric
> 
> On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote:
> > I have to do some investigating here.  I thought we could do something
> > like that but wasn't quite sure how it would look.
> > 
> > Check this out:
> > 
> > This code using FastAnonymous optimizes to the very same code below it
> > where functions have been manually injected:
> > 
> > using FastAnonymous
> > 
> > 
> > function function1(a, b, function2)
> > 
> >   if(a > b)
> >   
> > c = a + b
> > return function2(c)
> >   
> >   else
> >   
> > # do anything
> > # but return nothing
> >   
> >   end
> > 
> > end
> > 
> > 
> > z = 10
> > function2 = @anon c -> (c + z)
> > 
> > 
> > a = 1
> > b = 2
> > @code_llvm function1(a, b, function2)
> > @code_native function1(a, b, function2)
> > 
> > Manually unrolled equivalent:
> > 
> > function function1(a, b, z)
> > 
> >   if(a > b)
> >   
> > c = a + b
> > return function2(c, z)
> >   
> >   else
> >   
> > # do anything
> > # but return nothing
> >   
> >   end
> > 
> > end
> > 
> > 
> > function function2(c, z)
> > 
> >   return c + z
> > 
> > end
> > 
> > 
> > a = 1
> > b = 2
> > z = 10
> > 
> > 
> > @code_llvm function1(a, b, z)
> > 
> > @code_native function1(a, b, z)
> > 
> > However, this is a bit too simplistic.  My program actually does this:
> > 
> > # Test to see if multiple functions are created.  They are.
> > # We would only need to create a single function if we used Julia's anon,
> > # but it's time-inefficient.
> > 
> > dict = Dict{Int, Any}()
> > for z = 1:100
> > 
> > function2 = @anon c -> (c + z)
> > 
> > dict[z] =  function2
> > 
> > end
> > 
> > 
> > a = 1
> > b = 2
> > 
> > function test()
> > 
> >   function1(a,b, dict[100])
> >   function1(a,b, dict[50])
> > 
> > end
> > 
> > @code_llvm test()
> > @code_native test()
> > 
> > 
> > 
> > So we end up creating a separate function type for each z value.  We could
> > use Julia's anonymous functions, which would only create a single function,
> > but those lambdas are less performant than FastAnonymous.
> > 
> > So it's a space vs. time tradeoff: I want the speed of FastAnonymous without
> > the spatial overhead of storing multiple functions.
> > 
> > Can we be greedy?  :)
> > 
> > On Thursday, January 21, 2016 at 9:56:51 PM UTC-5, Cedric St-Jean wrote:
> >> Something like this?
> >> 
> >> function function1(a, b, f) # Variable needed in callback fun injected.
> >> 
> >> if(a > b)
> >> 
> >>   c = a + b
> >>   res = f(c) # Callback function has been injected.
> >>   return res + 1
> >> 
> >> else
> >> 
> >>   # do anything
> >>   # but return nothing
> >> 
> >> end
> >> 
> >> end
> >> 
> >> type SomeCallBack
> >> 
> >> z::Int
> >> 
> >> end
> >> Base.call(callback::SomeCallBack, c) = c + callback.z
> >> 
> >> function1(2, 1, SomeCallBack(10))
> >> 
> >> Because of JIT, this is 100% equivalent to your "callback function has
> >> been injected" example, performance-wise. My feeling is that .call
> >> overloading is not to be abused in Julia, so I would favor using a
> >> regular
> >> function call with a descriptive name instead of call overloading, but
> >> the
> >> same performance guarantees apply. Does that answer your question?
> >> 
> >> On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Bryan Rivera wrote:
> >>> I think what I wrote above might be too complicated, as it is an attempt
> >>> to solve this problem.
> >>> 
> >>> In essence this is what I want:
> >>> 
> >>> 

> >>> function function1(a, b, onGreaterThanCallback)
> >>> 
> >>>   if(a > b)
> >>>   
> >>> c = a + b
> >>> res = onGreaterThanCallback(c, z)
> >>> return res + 1
> >>>   
> >>>   else
> >>>   
> >>> # do anything

[julia-users] Re: help with using Julia + Cilk

2016-01-22 Thread Kevin Lin
Thanks for the quick response.  A missing Python library does not seem to be the 
problem, because (as I said) PyPlot etc. have all been working just fine 
except when a C module using Cilk (such as the hello code above) is called 
*before* the "using PyPlot".  Indeed, if I do

using PyPlot
include("hello.jl")
plot(...)

i.e., reverse the order of PyPlot and hello in my earlier example, then it 
works fine on one of my machines but not on the others.  This behavior 
seems a bit odd.

Still, it is quite possibly a problem with my installation.  Can anyone 
reproduce this behavior?

Thanks,
Kevin

On Friday, January 22, 2016 at 5:18:25 AM UTC-7, Lutfullah Tomak wrote:
>
> It is unrelated to Julia because you are missing a python library that is 
> needed by PyPlot. From the error, I think you're missing numpy.



[julia-users] Re: Optimizing Function Injection in Julia

2016-01-22 Thread Cedric St-Jean
(non-mutating) Closures and FastAnonymous work essentially the same way. 
They store the data that is closed over (more or less) and a function 
pointer. The thing is that there's only one data structure in Julia for all 
regular anonymous functions, whereas FastAnonymous creates one per @anon 
site. Because the FastAnonymous-created datatype is specific to that 
function definition, the standard Julia machinery takes over and produces 
efficient code. It's just as good as if the function had been defined 
normally with `function foo(...) ... end`
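The per-call-site type that FastAnonymous generates can be written out by hand (a sketch in the 0.4-era call-overloading style used in this thread; `PlusZ` is an illustrative name):

```julia
immutable PlusZ{T}        # one concrete type for this "call site",
    z::T                  # parameterized on the captured value's type
end
Base.call(f::PlusZ, c) = c + f.z   # 0.4-style call overloading

f = PlusZ(10)
f(5)                      # 15: the field is concretely typed, so this compiles tightly
```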


for z = 1:100
function2 = @anon c -> (c + z)

dict[z] =  function2
end


So we end up creating multiple functions for each z value.
>
>  
In this code, whether you use @anon or not, Julia will create 100 object 
instances to store the z values. 

The speed difference between the two will soon be gone. 


Cédric

On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote:
>
> I have to do some investigating here.  I thought we could do something 
> like that but wasn't quite sure how it would look.
>
> Check this out:
>
> This code using FastAnonymous optimizes to the very same code below it 
> where functions have been manually injected:
>
> using FastAnonymous
>
>
> function function1(a, b, function2)
>   if(a > b)
> c = a + b
> return function2(c)
>   else
> # do anything
> # but return nothing
>   end
> end
>
>
> z = 10
> function2 = @anon c -> (c + z)
>
>
> a = 1
> b = 2
> @code_llvm function1(a, b, function2)
> @code_native function1(a, b, function2)
>
> Manually unrolled equivalent:
>
> function function1(a, b, z)
>   if(a > b)
> c = a + b
> return function2(c, z)
>   else
> # do anything
> # but return nothing
>   end
> end
>
>
> function function2(c, z)
>   return c + z
> end
>
>
> a = 1
> b = 2
> z = 10
>
>
> @code_llvm function1(a, b, z)
>
> @code_native function1(a, b, z)
>
> However, this is a bit too simplistic.  My program actually does this:
>
> # Test to see if multiple functions are created.  They are.
> # We would only need to create a single function if we used Julia's anon, 
> # but it's time-inefficient.
>
> dict = Dict{Int, Any}()
> for z = 1:100
> function2 = @anon c -> (c + z)
>
> dict[z] =  function2
> end
>
>
> a = 1
> b = 2
>
> function test()
>   function1(a,b, dict[100])
>   function1(a,b, dict[50])
> end
>
> @code_llvm test()
> @code_native test()
>
>
>
> So we end up creating a separate function type for each z value.  We could use 
> Julia's anonymous functions, which would only create a single function, but 
> those lambdas are less performant than FastAnonymous.
>
> So it's a space vs. time tradeoff: I want the speed of FastAnonymous without the 
> spatial overhead of storing multiple functions.
>
> Can we be greedy?  :)
>
>
> On Thursday, January 21, 2016 at 9:56:51 PM UTC-5, Cedric St-Jean wrote:
>>
>> Something like this?
>>
>> function function1(a, b, f) # Variable needed in callback fun injected.
>> if(a > b)
>>   c = a + b
>>   res = f(c) # Callback function has been injected.
>>   return res + 1
>> else
>>   # do anything
>>   # but return nothing
>> end
>> end
>>
>> type SomeCallBack
>> z::Int
>> end
>> Base.call(callback::SomeCallBack, c) = c + callback.z
>>
>> function1(2, 1, SomeCallBack(10))
>>
>> Because of JIT, this is 100% equivalent to your "callback function has 
>> been injected" example, performance-wise. My feeling is that .call 
>> overloading is not to be abused in Julia, so I would favor using a regular 
>> function call with a descriptive name instead of call overloading, but the 
>> same performance guarantees apply. Does that answer your question?
>>
>> On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Bryan Rivera wrote:
>>>
>>> I think what I wrote above might be too complicated, as it is an attempt 
>>> to solve this problem.
>>>
>>> In essence this is what I want: 
>>>
>>>
>>>
>>> function function1(a, b, onGreaterThanCallback)
>>>   if(a > b)
>>> c = a + b
>>> res = onGreaterThanCallback(c, z)
>>> return res + 1
>>>   else
>>> # do anything
>>> # but return nothing
>>>   end
>>> end
>>>
>>>
>>> global onGreaterThanCallback = (c) -> c + z
>>>
>>> function1(a, b, onGreaterThanCallback)
>>>
>>>
>>> Problems:
>>>
>>> The global variable.
>>>
>>> The anonymous function which has performance impact (vs other 
>>> approaches).  We could use Tim Holy's @anon, but then the value of `z` is 
>>> fixed at function definition, which we don't always want.
>>>
>>> I think that the ideal optimization would look like this:
>>>
>>>   function function1(a, b, z) # Variable needed in callback fun 
>>> injected.
>>> if(a > b)
>>>   c = a + b
>>>   res = c + z # Callback function has been injected.
>>>   return res + 1
>>> else
>>>   # do anything
>>>   # but return nothing
>>> end
>>>   end
>>>
>>>
>>>   

Re: [julia-users] Julia Dict key in keys() but not in sort(collect(keys()))

2016-01-22 Thread Stefan Karpinski
The only way this could legitimately happen is if badnum is NaN – otherwise
something fishy is going on. If you can provide a reproducing example, that
would be helpful. This could also happen if you've changed hashing or
equality for numbers.
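The NaN case mentioned above can be illustrated like this (a sketch: Dict lookup keys on `isequal`/`hash`, while Array membership uses `==`, and `NaN == NaN` is false, so the two membership tests can disagree):

```julia
d = Dict{Float64,Float64}()
d[NaN] = 1.0

haskey(d, NaN)            # true: Dict lookup compares keys with isequal
NaN in keys(d)            # true: delegates to haskey
NaN in collect(keys(d))   # false: Array membership uses ==, and NaN == NaN is false
```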

On Fri, Jan 22, 2016 at 2:42 AM, Mauro  wrote:

> I cannot reproduce:
>
> julia> badnum=0.9122066068007542
> 0.9122066068007542
>
> julia> g =  Dict{Float64,Float64}()
> Dict{Float64,Float64} with 0 entries
>
> julia> g[badnum] = 1
> 1
>
> julia> badnum∈keys(g)
> true
>
> julia> badnum∈sort(collect(keys(g)))
> true
>
> Maybe you can post a complete run-able example (always best)?
>
> On Thu, 2016-01-21 at 23:34, Michael Lindon 
> wrote:
> > I have a problem I don't quite understand. Here is an example:
> >
> > julia> badnum∈keys(g)
> > false
> > julia> badnum∈sort(collect(keys(g)))
> > true
> > julia> badnum
> > 0.9122066068007542
> >
> > Does the collecting the keys do some type conversion which introduces
> some
> > rounding? I have defined g as Dict{Float64,Float64}()
>


Re: [julia-users] immutability, value types and reference types?

2016-01-22 Thread Stefan Karpinski
What are you trying to accomplish?

On Thu, Jan 21, 2016 at 7:00 PM,  wrote:

>
>
> On Thursday, January 21, 2016 at 8:48:19 AM UTC-8, Stefan Karpinski wrote:
>>
>> Semantically all objects are reference types in Julia. It just happens
>> that if they're immutable the system is free to copy them or not since
>> there's no way to tell the difference.
>>
>
> So how do you make Haskell-like immutable reference objects, i.e. trees of
> immutable objects?
> When can you tell if an immutable object is inlined in its container or is
> a reference, or are you saying they are always by reference and never
> inlined in the containing object?
>
>
>>


Re: [julia-users] Ambiguous methods warnings for DataFrames and Images

2016-01-22 Thread Stefan Karpinski
I'm pretty sure I convinced Jeff that we should change ambiguities, at
least between modules, into runtime errors instead of definition-time
warnings. So you can expect this to change in the future. Of course, that
doesn't help right now.

On Thu, Jan 21, 2016 at 6:06 PM, Tim Holy  wrote:

> There is a solution, but no one (including me) has yet gotten around to it:
>
> https://github.com/JuliaStats/DataArrays.jl/issues/168
>
> The more general solution is to turn ambiguity warnings into automatically-
> (and silently-)generated "stub" functions that throw an error when called.
> However, I think that's waiting on an overhaul of the type system.
>
> Best,
> --Tim
>
> On Thursday, January 21, 2016 01:44:49 PM Mauro wrote:
> > It's a known wart: https://github.com/JuliaLang/julia/issues/6190
> >
> > But as far as I recall that issue thread, no solution has been found yet.
> >
> > On Thu, 2016-01-21 at 13:32, cormull...@mac.com wrote:
> > > Just wondering if there's a solution in the future for this trivial but
> mildly irritating problem:
> > > julia> using DataFrames, Images
> > > WARNING: New definition
> > >
> > > .+(Images.AbstractImageDirect, AbstractArray) at
> > > /Users/me/.julia/v0.4/Images/src/algorithms.jl:22
> > > is ambiguous with:
> > > .+(AbstractArray, Union{DataArrays.PooledDataArray,
> > > DataArrays.DataArray}, AbstractArray...) at
> > > /Users/me/.julia/v0.4/DataArrays/src/broadcast.jl:297.
> > > To fix, define
> > >
> > > .+(Images.AbstractImageDirect, Union{DataArrays.PooledDataArray,
> > > DataArrays.DataArray})
> > > before the new definition.
> > >
> > > and so on for 150 lines. It's not a major problem, of course (I just
> > > ignore it). But I'm curious as to what the fix will be.
>
>


Re: [julia-users] Interrupting script gives constant error

2016-01-22 Thread Miguel Bazdresch
I tend to just run Jupyter by itself from a command line, not from Julia.
Interrupting Julia's notebook() doesn't work reliably.

-- mb

On Fri, Jan 22, 2016 at 7:32 AM, Lutfullah Tomak 
wrote:

> I have a julia script that runs IJulia
> as
>
> #!/bin/sh
> julia -e "using IJulia; notebook()"
>
> Interrupting this script gives a recurring error reading
>
> jl_uv_writecb() ERROR: bad file descriptor EBADF
>
> and cannot be stopped. I needed to cancel because IJulia freezes, giving
> lots of deprecation warnings because of the recent change in the read* and
> write* functions. I'm on 0.5-dev+2238 Commit 8e036b4, Debian Sid armv7-a.
> How can I stop these errors, and how can I stop the deprecation warnings?


[julia-users] Re: Interrupting script gives constant error

2016-01-22 Thread Lutfullah Tomak
After all, changing readbytes to read in ~IJulia/src/stdio.jl just worked.
Before this, IJulia printed many deprecation warnings about readbytes for a
single cell evaluation, and the browser froze while showing those warnings.

[julia-users] Scheduling external programs in slurm with Julia and ClusterManagers

2016-01-22 Thread William Patterson
Hi, I am new to Julia and this is my first post in this group.

I am writing a script in Julia that schedules external programs with Slurm 
on a Rocks Cluster. When it came to actually scheduling the jobs I ran into 
difficulties getting external programs to run inside a function.

The code below is a snippet from my program that is causing issues 
independently:
using ClusterManagers

@everywhere function test_run_job()
    return readall(`echo test`)
end

function schedule_jobs()
    job_num = 10

    addprocs(SlurmManager(job_num))

    output = []
    pids = []
    for i in workers()
        out, pid = fetch(@spawnat i (test_run_job(), getpid()))
        push!(output, out)
        push!(pids, pid)
    end
    println(output)

    for i in workers()
        rmprocs(i)
    end
end

When I run the code above as-is, I get the error below:
ERROR: LoadError: On worker 2: 10
function test_run_job not defined on process 2
 in error at ./error.jl:21
 in anonymous at serialize.jl:526
 in anonymous at multi.jl:1364
 in anonymous at multi.jl:910
 in run_work_thunk at multi.jl:651
 in run_work_thunk at multi.jl:660
 in anonymous at task.jl:58
 in remotecall_fetch at multi.jl:737
 in call_on_owner at multi.jl:783
 in fetch at multi.jl:801
 in schedule_jobs at the/path/to/my/script/myscript.jl:15
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:304
 in process_options at ./client.jl:280
 in _start at ./client.jl:378

Can someone please explain what I could be doing wrong?
Thank you!


[julia-users] newbie

2016-01-22 Thread PattiMichelle Sheaffer
I've been using IDL, matlab, and octave for a while (mostly IDL), but Julia 
looks really interesting.  I have been interested in wavelets (one of the 
reasons for matlab) and think that if Julia's wavelet package got 
non-separable wavelets (e.g., quincunx-based) it would really stand out in 
that department.  For image analysis, separability (directionality x/y) and 
its impact on images is a problem everyone just sort of lives with.  Not 
everyone is familiar with nonseparable wavelets; I don't find them in most 
of the standard intro texts.  Thoughts?

Thanks for letting me join,
Patricia


[julia-users] Running multiple scripts in parallel Julia sessions

2016-01-22 Thread Ritchie Lee
Let's say I have 10 Julia scripts, scripts = ["script1.jl", "script2.jl", 
..., "script10.jl"], and I would like to run them in parallel in separate 
Julia sessions, but 4 at a time (since I only have 4 cores on my machine). 
 Is there any way to do this programmatically?

I tried doing this:

addprocs(4)
pmap(include, scripts)

or

addprocs(4)
@parallel for s in scripts
include(s)
end

However, this seems to reuse the same session, so all the global consts in 
the script file are colliding.  I would like to make sure that the 
namespaces are completely separate.

Thanks!
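One way to get fully separate namespaces (a sketch, not from the thread) is to launch each script as its own operating-system process and use the four workers only as a throttle:

```julia
addprocs(4)  # 4 workers => at most 4 scripts in flight at a time

scripts = ["script$(i).jl" for i in 1:10]

# Each `run` spawns a fresh `julia` process, so global consts defined in
# one script cannot collide with another's.
pmap(s -> run(`julia $s`), scripts)
```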


Re: [julia-users] randperm run time is slow

2016-01-22 Thread Viral Shah
Generating the random numbers, whether one by one or all together, is a small part of the execution time:

julia> @time rand(3*10^8);
  1.281946 seconds (10 allocations: 2.235 GB, 0.23% gc time)

julia> r(n) = for i=1:n; rand(); end;
julia> @time r(3*10^8)
  0.632389 seconds (6 allocations: 192 bytes)

The one-by-one version is perhaps faster because it doesn't have the array 
allocation overhead.


-viral

On Saturday, January 23, 2016 at 6:53:38 AM UTC+5:30, Tim Holy wrote:
>
> Try 
> @edit randperm(10) 
> and see for yourself. 
>
> My bet is that you could speed it up by generating all the random numbers 
> you'll need in one go, rather than generating them one-by-one. Want to 
> give it 
> a shot? 
>
> --Tim 
>
> On Friday, January 22, 2016 02:54:51 PM Brian Lucena wrote: 
> > I am running a simulation that requires me to generate random 
> permutations 
> > of size 300 million.  I would ideally like to run this generation in a 
> loop 
> > 10K or 100K times. I am surprised that randperm is not faster in Julia 
> than 
> > it is.  It seems to be considerably slower than the equivalent in R (and 
> R 
> > is clearly not known for speed)  :) 
> > 
> > In Julia: 
> > 
> > *julia> **@time asd=randperm(3*10^8)* 
> > 
> >  43.437829 seconds (6 allocations: 2.235 GB, 0.01% gc time) 
> > 
> > *300000000-element Array{Int64,1}:* 
> > 
> > 
> > In R 
> > 
> > > start = Sys.time() 
> > > 
> > > asd = sample(3e8) 
> > > 
> > > Sys.time()-start 
> > 
> > Time difference of 23.27244 secs 
> > 
> > Julia seems to be twice as slow! 
> > 
> > Any thoughts on why that is?  Does randperm use the "knuth shuffle" or 
> does 
> > it use some other algorithm. 
> > 
> > Thanks, 
> > B 
>


Re: [julia-users] randperm run time is slow

2016-01-22 Thread Viral Shah
If you look at the code as @timholy suggested, you will see the Knuth shuffle 
implementation. Let's capture this as a Julia performance issue on GitHub if 
we can't figure out an easy way to speed it up right away.
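Tim's suggestion of drawing all the random numbers in one bulk call and then doing the Knuth (Fisher-Yates) swaps might be sketched like this (untested; the float-to-index mapping carries a tiny bias for huge n):

```julia
function batched_randperm(n::Integer)
    a = collect(1:n)
    r = rand(n - 1)                  # one bulk RNG call instead of n-1 calls
    for i = n:-1:2
        j = 1 + floor(Int, r[n - i + 1] * i)  # uniform index in 1:i
        a[i], a[j] = a[j], a[i]      # Knuth/Fisher-Yates swap
    end
    return a
end
```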

-viral

On Saturday, January 23, 2016 at 9:47:21 AM UTC+5:30, Viral Shah wrote:
>
> Generating one by one or together is a small part of the execution time:
>
> julia> @time rand(3*10^8);
>   1.281946 seconds (10 allocations: 2.235 GB, 0.23% gc time)
>
> julia> r(n) = for i=1:n; rand(); end;
> julia> @time r(3*10^8)
>   0.632389 seconds (6 allocations: 192 bytes)
>
> The one by one is perhaps faster because it doesn't have the array 
> allocation overhead. So, it 
>
>
> -viral
>
> On Saturday, January 23, 2016 at 6:53:38 AM UTC+5:30, Tim Holy wrote:
>>
>> Try 
>> @edit randperm(10) 
>> and see for yourself. 
>>
>> My bet is that you could speed it up by generating all the random numbers 
>> you'll need in one go, rather than generating them one-by-one. Want to 
>> give it 
>> a shot? 
>>
>> --Tim 
>>
>> On Friday, January 22, 2016 02:54:51 PM Brian Lucena wrote: 
>> > I am running a simulation that requires me to generate random 
>> permutations 
>> > of size 300 million.  I would ideally like to run this generation in a 
>> loop 
>> > 10K or 100K times. I am surprised that randperm is not faster in Julia 
>> than 
>> > it is.  It seems to be considerably slower than the equivalent in R 
>> (and R 
>> > is clearly not known for speed)  :) 
>> > 
>> > In Julia: 
>> > 
>> > *julia> **@time asd=randperm(3*10^8)* 
>> > 
>> >  43.437829 seconds (6 allocations: 2.235 GB, 0.01% gc time) 
>> > 
>> > *300000000-element Array{Int64,1}:* 
>> > 
>> > 
>> > In R 
>> > 
>> > > start = Sys.time() 
>> > > 
>> > > asd = sample(3e8) 
>> > > 
>> > > Sys.time()-start 
>> > 
>> > Time difference of 23.27244 secs 
>> > 
>> > Julia seems to be twice as slow! 
>> > 
>> > Any thoughts on why that is?  Does randperm use the "knuth shuffle" or 
>> does 
>> > it use some other algorithm. 
>> > 
>> > Thanks, 
>> > B 
>>
>

[julia-users] Scheduling external programs in slurm with Julia and ClusterManagers

2016-01-22 Thread Lutfullah Tomak
This code

@everywhere function test_run_job()
return readall(`echo test`)
end

should go after addprocs; otherwise it is only defined on the workers that 
exist at the time it is defined. Moving it there works locally.
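In other words, the corrected ordering would look roughly like this (a sketch; the worker count is illustrative):

```julia
using ClusterManagers

addprocs(SlurmManager(10))           # start the workers first...

@everywhere function test_run_job()  # ...so this reaches every worker
    return readall(`echo test`)
end
```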


[julia-users] optim stopping without reaching convergence

2016-01-22 Thread Kristoffer Carlsson
Look at the source 
https://github.com/JuliaOpt/Optim.jl/blob/master/src/nelder_mead.jl

The number of iterations is not necessarily the same as the number of 
objective function evaluations.

[julia-users] Tuples vs. vectors for indexing into an array

2016-01-22 Thread Cedric St-Jean
I manipulate a lot of images, and I have to deal with image coordinates 
(aka indexes) all the time. I've been storing them as vectors so far, but 
the inefficiency of creating a full-blown array object to store 2 integers 
is gross (and it might add up, though it's hard to profile). OTOH, tuples 
are very awkward, in particular because

ind1 = (4, 5)
ind2 = (20, 67)
ind2 .- ind1 # not defined
ind1 .+ 1 # not defined either

Is there any reason why I should _not_ define those operations? In general, 
I don't want to implement a method I didn't create over a type I didn't 
create, but this seems harmless enough...? Is there any consideration that 
I'm missing?

Cédric


Re: [julia-users] randperm run time is slow

2016-01-22 Thread Tim Holy
Try
@edit randperm(10)
and see for yourself.

My bet is that you could speed it up by generating all the random numbers 
you'll need in one go, rather than generating them one-by-one. Want to give it 
a shot?

--Tim

On Friday, January 22, 2016 02:54:51 PM Brian Lucena wrote:
> I am running a simulation that requires me to generate random permutations
> of size 300 million.  I would ideally like to run this generation in a loop
> 10K or 100K times. I am surprised that randperm is not faster in Julia than
> it is.  It seems to be considerably slower than the equivalent in R (and R
> is clearly not known for speed)  :)
> 
> In Julia:
> 
> *julia> **@time asd=randperm(3*10^8)*
> 
>  43.437829 seconds (6 allocations: 2.235 GB, 0.01% gc time)
> 
> *300000000-element Array{Int64,1}:*
> 
> 
> In R
> 
> > start = Sys.time()
> > 
> > asd = sample(3e8)
> > 
> > Sys.time()-start
> 
> Time difference of 23.27244 secs
> 
> Julia seems to be twice as slow!
> 
> Any thoughts on why that is?  Does randperm use the "knuth shuffle" or does
> it use some other algorithm.
> 
> Thanks,
> B


Re: [julia-users] Tuples vs. vectors for indexing into an array

2016-01-22 Thread Tim Holy
Check out CartesianIndex. It's a little annoying to type long names, but it 
supports all those operations, and julia already has all the indexing 
operations defined for it.
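For example (a sketch against the Julia 0.4 API; exact printed forms may differ):

```julia
i1 = CartesianIndex((4, 5))
i2 = CartesianIndex((20, 67))

i2 - i1                      # elementwise: CartesianIndex((16, 62))
i1 + CartesianIndex((1, 1))  # shifts both coordinates by 1

A = rand(30, 70)
A[i2]                        # indexes A[20, 67] directly
```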

--Tim

On Friday, January 22, 2016 05:20:59 PM Cedric St-Jean wrote:
> I manipulate a lot of images, and I have to deal with image coordinates
> (aka indexes) all the time. I've been storing them as vectors so far, but
> the inefficiency of creating a full-blown array object to store 2 integers
> is gross (and it might add up, though it's hard to profile). OTOH, tuples
> are very awkward, in particular because
> 
> ind1 = (4, 5)
> ind2 = (20, 67)
> ind2 .- ind1 # not defined
> ind1 .+ 1 # not defined either
> 
> Is there any reason why I should _not_ define those operations? In general,
> I don't want to implement a method I didn't create over a type I didn't
> create, but this seems harmless enough...? Is there any consideration that
> I'm missing?
> 
> Cédric



Re: [julia-users] Tuples vs. vectors for indexing into an array

2016-01-22 Thread Cedric St-Jean
I'll give it a try. It's missing scalar addition, though.

CartesianIndex((4,5)) + 1   # error

Thanks!

On Friday, January 22, 2016 at 8:26:53 PM UTC-5, Tim Holy wrote:
>
> Check out CartesianIndex. It's a little annoying to type long names, but 
> it 
> supports all those operations, and julia already has all the indexing 
> operations defined for it. 
>
> --Tim 
>
> On Friday, January 22, 2016 05:20:59 PM Cedric St-Jean wrote: 
> > I manipulate a lot of images, and I have to deal with image coordinates 
> > (aka indexes) all the time. I've been storing them as vectors so far, 
> but 
> > the inefficiency of creating a full-blown array object to store 2 
> integers 
> > is gross (and it might add up, though it's hard to profile). OTOH, 
> tuples 
> > are very awkward, in particular because 
> > 
> > ind1 = (4, 5) 
> > ind2 = (20, 67) 
> > ind2 .- ind1 # not defined 
> > ind1 .+ 1 # not defined either 
> > 
> > Is there any reason why I should _not_ define those operations? In 
> general, 
> > I don't want to implement a method I didn't create over a type I didn't 
> > create, but this seems harmless enough...? Is there any consideration 
> that 
> > I'm missing? 
> > 
> > Cédric 
>
>


[julia-users] Re: randperm run time is slow

2016-01-22 Thread cdm

is asd typed differently earlier in your code ... ?

see:

https://groups.google.com/forum/#!searchin/julia-users/randperm|sort:date/julia-users/16EO_-jkz8Y/DyPE-rG76GYJ