[julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread DNF
For your particular example, it looks like what you want is (and I am just 
guessing what mag_sqr means):
dist = abs2.(x .- y.')
The performance should be similar to a hand-written loop on version 0.5.

You can read about it here: 
http://docs.julialang.org/en/release-0.5/manual/functions/#dot-syntax-for-vectorizing-functions
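
As a quick, self-contained sketch of that line (the variable names and lengths are mine, and I'm still guessing at what mag_sqr means):

```julia
# x stays a column vector; y.' is its row-vector transpose (0.5 syntax).
# Broadcasting the subtraction produces a matrix, and the dot call fuses
# abs2 into the same loop, so no intermediate array for (x .- y.') is kept.
x = [1.0, 2.0, 3.0]
y = [0.5, 1.5]
dist = abs2.(x .- y.')   # 3×2 Array{Float64,2}
# dist[i, j] == abs2(x[i] - y[j])
```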


On Monday, September 12, 2016 at 9:29:15 PM UTC+2, Neal Becker wrote:
>
> Some time ago I asked this question 
>
> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
>  
>
> As a more interesting example, here is some real python code I use: 
> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:]) 
>
> where demod_out, const.map are each vectors, mag_sqr performs element-wise 
> euclidean distance, and the result is a 2D array whose 1st axis matches 
> the 
> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of const.map. 
>
>
> From the answers I've seen, julia doesn't really have an equivalent 
> functionality.  The idea here is, without allocating a new array, 
> manipulate 
> the strides to cause broadcasting. 
>
> AFAICT, the best for Julia would be just forget the vectorized code, and 
> explicitly write out loops to perform the computation.  OK, I guess, but 
> maybe not as readable. 
>
> Is there any news on this front? 
>
>

[julia-users] Re: Complex parallel computing implementation

2016-09-12 Thread Adrian Salceanu
Thanks - I hadn't thought of using Dagger (and don't have previous 
experience with it either). I've read the docs now, but I'm not sure how 
using it would help; maybe I'm missing something?

My problem is really about running deeply nested function calls across 
multiple modules, in parallel. Making the code available to the workers 
involves a lot of backtracking, module by module and function by function, 
prepending @everywhere, well... everywhere. Cherry-picking function calls 
in a codebase of a few thousand lines to make it load across processes is 
a task for computers, not people. Is this the idiomatic Julia way? It 
really feels like I'm misusing the language. Am I supposed to end up with 
tens of calls to @everywhere? It doesn't feel right... 
 

On Monday, September 12, 2016 at 11:53:06 PM UTC+2, dnm wrote:
>
> Have you tried Dagger.jl  to 
> set up a DAG of computations you need performed?
>
> On Monday, September 12, 2016 at 5:04:26 PM UTC-4, Adrian Salceanu wrote:
>>
>> This is a random example of an error - not really sure how to debug this, 
>> seems to crash within the Postgres library. The dump is long but not really 
>> helpful. 
>>
>> 12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 5
>> 12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 3
>> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 2
>> fatal error on fatal error on 3: ERROR: attempt to send to unknown socket
>> 4: ERROR: attempt to send to unknown socket
>> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
>> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 5
>> fatal error on 3: ERROR: attempt to send to unknown socket
>> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
>> fatal error on 2: ERROR: attempt to send to unknown socket
>> ERROR: LoadError: On worker 2:
>> LoadError: On worker 2:
>> LoadError: LoadError: LoadError: LoadError: LoadError: 
>> ProcessExitedException()
>>  in yieldto at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in wait at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 
>> 3 times)
>>  in take! at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 
>> 2 times)
>>  in remotecall_fetch at multi.jl:745
>>  in remotecall_fetch at multi.jl:750
>>  in anonymous at multi.jl:1396
>>
>> ...and 1 other exceptions.
>>
>>  in include_string at loading.jl:282
>>  in include_from_node1 at 
>> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in include_string at loading.jl:282
>>  in include_from_node1 at 
>> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in eval at 
>> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl:1
>>  in include_string at loading.jl:282
>>  in include_from_node1 at 
>> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in include_string at loading.jl:282
>>  in include_from_node1 at 
>> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in anonymous at multi.jl:1394
>>  in run_work_thunk at multi.jl:661
>>  in remotecall_fetch at multi.jl:734
>>  in remotecall_fetch at multi.jl:750
>>  in anonymous at multi.jl:1396
>> while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/../deps/build.jl, 
>> in expression starting on line 9
>> while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/PostgreSQL.jl, in 
>> expression starting on line 8
>> while loading 
>> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/database_adapters/PostgreSQLDatabaseAdapter.jl,
>>  
>> in expression starting on line 3
>> while loading 
>> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl, in 
>> expression starting on line 7
>> while loading 
>> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/commands.jl, in 
>> expression starting on line 11
>>  in remotecall_fetch at multi.jl:735
>>  in remotecall_fetch at multi.jl:750
>>  in anonymous at multi.jl:1396
>>
>> ...and 1 other exceptions.
>>
>>  in sync_end at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in anonymous at multi.jl:1405
>>  in include_string at loading.jl:282
>>  in include_from_node1 at 
>> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>>  in anonymous at multi.jl:1394
>>  in anonymous at multi.jl:923
>>  in run_work_thunk at multi.jl:661
>>  [inlined code] from multi.jl:923
>>  in anonymous at task.jl:63
>> while loading 
>> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Genie.jl, in expression 
>> starting on line 1
>>  in remotecall_fetch at multi.jl:747
>>  in remotecall_fetch at 

Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-12 Thread Tamas Papp
fill behaves this way not because of a specific design choice based on a
compelling use case, but because of consistency with other language
features: fill does not copy, and arrays are passed by reference in
Julia; consequently, you have the behavior described below.

IMO it is best to learn about this and internalize the fact that arrays
and structures are passed by reference. The alternative would be some
DWIM-style solution where fill tries to figure out whether to copy its
first argument or not, which would be a mess.
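
A minimal illustration of the difference (using a length of 3 instead of 1000):

```julia
v = fill([], 3)          # three references to one and the same array
v[1] === v[3]            # true
push!(v[1], 1)           # mutates the single shared array
# v now shows Any[1] at every index

w = [[] for i in 1:3]    # three independent (but initially equal) arrays
w[1] === w[3]            # false
push!(w[1], 1)           # only w[1] changes
```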

On Tue, Sep 13 2016, Michele Zaffalon wrote:

> I have been bitten by this myself. Is there a use case for having an array
> filled with references to the same object? Why would one want this
> behaviour?
>
> On Tue, Sep 13, 2016 at 4:45 AM, Yichao Yu  wrote:
>
>>
>>
>> On Mon, Sep 12, 2016 at 10:33 PM, Zhilong Liu 
>> wrote:
>>
>>> Hello all,
>>>
>>> I am pretty new to Julia, and I am trying to perform push and pop inside
>>> an array of 1D array elements. For example, I created the following array
>>> with 1000 empty arrays.
>>>
>>> julia> vring = fill([], 1000)
>>>
>>
>>
>> This creates an array with 1000 identical objects. If you want to make them
>> different (but initially equal) objects, you can use `[[] for i in 1:1000]`
>>
>>>
>>> Then, when I push an element to vring[2],
>>>
>>>
>>> julia> push!(vring[2],1)
>>>
>>>
>>> I got the following result. Every array element inside vring gets the
>>> value 1. But I only want the 1 to be pushed to the 2nd array element
>>> inside vring. Anybody knows how to do that efficiently?
>>>
>>>
>>> julia> vring
>>>
>>> 1000x1 Array{Array{Any,1},2}:
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  ⋮
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>  Any[1]
>>>
>>>
>>>
>>> Thanks!
>>>
>>> Zhilong Liu
>>>
>>>
>>>
>>>
>>



Re: [julia-users] Julia on Research Computing Podcast RCE

2016-09-12 Thread Richard Hoffpauir
Thanks for posting here.  I didn't know about this podcast.  I listened to a 
few episodes today... it's great!  I hope to hear an episode about Julia soon.

[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Patrick Belliveau
Works for me too. Thanks Uwe! I'll put in a feature request to have it 
added to the menu. Juno's getting really good.

Patrick

On Monday, September 12, 2016 at 2:46:31 PM UTC-7, Patrick Belliveau wrote:
>
> Hi all,
>   In his JuliaCon 2016 talk 
>  on Juno's new graphical 
> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
> that displays currently defined variable values from an interactive Julia 
> session. My impression from the video is that this feature should be 
> available in the latest version of Juno but I can't get it to show up. As 
> far as I can tell, the feature is not included in my version of Juno. Am I 
> missing something or has this functionality not been released yet? I'm on 
> linux, running 
>
> Julia 0.5.0-rc4+0
> atom 1.9.9
> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
> ink 0.5.1
> julia-client 0.5.2
> language-julia 0.6
> uber-juno 0.1.1
>
> Thanks, Patrick
>
> P.S. I've just started using Juno and in general I'm really liking it, 
> especially the debugging gui features. Great work Juno team!
>


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-12 Thread Michele Zaffalon
I have been bitten by this myself. Is there a use case for having an array
filled with references to the same object? Why would one want this
behaviour?

On Tue, Sep 13, 2016 at 4:45 AM, Yichao Yu  wrote:

>
>
> On Mon, Sep 12, 2016 at 10:33 PM, Zhilong Liu 
> wrote:
>
>> Hello all,
>>
>> I am pretty new to Julia, and I am trying to perform push and pop inside
>> an array of 1D array elements. For example, I created the following array
>> with 1000 empty arrays.
>>
>> julia> vring = fill([], 1000)
>>
>
>
> This creates an array with 1000 identical objects. If you want to make them
> different (but initially equal) objects, you can use `[[] for i in 1:1000]`
>
>>
>> Then, when I push an element to vring[2],
>>
>>
>> julia> push!(vring[2],1)
>>
>>
>> I got the following result. Every array element inside vring gets the
>> value 1. But I only want the 1 to be pushed to the 2nd array element
>> inside vring. Anybody knows how to do that efficiently?
>>
>>
>> julia> vring
>>
>> 1000x1 Array{Array{Any,1},2}:
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  ⋮
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>  Any[1]
>>
>>
>>
>> Thanks!
>>
>> Zhilong Liu
>>
>>
>>
>>
>


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 10:33 PM, Zhilong Liu 
wrote:

> Hello all,
>
> I am pretty new to Julia, and I am trying to perform push and pop inside
> an array of 1D array elements. For example, I created the following array
> with 1000 empty arrays.
>
> julia> vring = fill([], 1000)
>


This creates an array with 1000 identical objects. If you want to make them
different (but initially equal) objects, you can use `[[] for i in 1:1000]`

>
> Then, when I push an element to vring[2],
>
>
> julia> push!(vring[2],1)
>
>
> I got the following result. Every array element inside vring gets the
> value 1. But I only want the 1 to be pushed to the 2nd array element
> inside vring. Anybody knows how to do that efficiently?
>
>
> julia> vring
>
>  1000x1 Array{Array{Any,1},2}:
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   ⋮
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>   Any[1]
>
>
>
> Thanks!
>
> Zhilong Liu
>
>
>
>


[julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-12 Thread Zhilong Liu
Hello all,

I am pretty new to Julia, and I am trying to use push! and pop! on the 
elements of an array of 1D arrays. For example, I created the following 
array with 1000 empty arrays.

julia> vring = fill([], 1000)


Then, when I push an element to vring[2], 


julia> push!(vring[2],1)


I got the following result: every array element inside vring gets the value 
1, but I only want the 1 to be pushed to the 2nd array element inside vring. 
Does anybody know how to do that efficiently?


julia> vring

1000x1 Array{Array{Any,1},2}:
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 ⋮
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]
 Any[1]



Thanks!

Zhilong Liu





[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Chris Rackauckas
I can confirm that works. Wow, never knew that was there. It should be 
added to the menu. Maybe it's still considered experimental.

On Monday, September 12, 2016 at 4:27:52 PM UTC-7, Uwe Fechner wrote:
>
> It works for me:
> Try to open the command palette (Cmd-Shift-P on mac, I guess Ctrl-Shift-P 
> on linux and windows), and type 'julia open workspace'. It opens a window 
> showing all variables and functions in scope.
>
> On Monday, September 12, 2016 at 11:46:31 PM UTC+2, Patrick Belliveau 
> wrote:
>>
>> Hi all,
>>   In his JuliaCon 2016 talk 
>>  on Juno's new graphical 
>> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
>> that displays currently defined variable values from an interactive Julia 
>> session. My impression from the video is that this feature should be 
>> available in the latest version of Juno but I can't get it to show up. As 
>> far as I can tell, the feature is not included in my version of Juno. Am I 
>> missing something or has this functionality not been released yet? I'm on 
>> linux, running 
>>
>> Julia 0.5.0-rc4+0
>> atom 1.9.9
>> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
>> ink 0.5.1
>> julia-client 0.5.2
>> language-julia 0.6
>> uber-juno 0.1.1
>>
>> Thanks, Patrick
>>
>> P.S. I've just started using Juno and in general I'm really liking it, 
>> especially the debugging gui features. Great work Juno team!
>>
>

[julia-users] JuliaDiffEq Logo Poll

2016-09-12 Thread Chris Rackauckas
Sometime last week I threw up a logo idea, and a ton of other really cool 
ideas followed. Now that we have so many awesome choices due to a previous 
thread, it's hard to pick. Help us choose the JuliaDiffEq logo by going to 
this issue and voting for your favorite(s). Or if you're interested, add 
your own entry. 

[If you're new to Github, this is a good time to make an account and 
"contribute to the Julia community"! :)]

P.S. Is there a less hacky way to do polls on Github?


[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Uwe Fechner
It works for me:
Try to open the command palette (Cmd-Shift-P on mac, I guess Ctrl-Shift-P 
on linux and windows), and type 'julia open workspace'. It opens a window 
showing all variables and functions in scope.

On Monday, September 12, 2016 at 11:46:31 PM UTC+2, Patrick Belliveau wrote:
>
> Hi all,
>   In his JuliaCon 2016 talk 
>  on Juno's new graphical 
> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
> that displays currently defined variable values from an interactive Julia 
> session. My impression from the video is that this feature should be 
> available in the latest version of Juno but I can't get it to show up. As 
> far as I can tell, the feature is not included in my version of Juno. Am I 
> missing something or has this functionality not been released yet? I'm on 
> linux, running 
>
> Julia 0.5.0-rc4+0
> atom 1.9.9
> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
> ink 0.5.1
> julia-client 0.5.2
> language-julia 0.6
> uber-juno 0.1.1
>
> Thanks, Patrick
>
> P.S. I've just started using Juno and in general I'm really liking it, 
> especially the debugging gui features. Great work Juno team!
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Chris Stook


On Monday, September 12, 2016 at 7:22:39 PM UTC-4, Chris Stook wrote:
>
> Last post was incomplete.
>
> abstract AbstractFoo
>
> macro commonfields()
>   # esc keeps macro hygiene from renaming the field names
>   return esc(:(
>     bar;
>     foo
>   ))
> end
>
> type Foo <: AbstractFoo
>   @commonfields()
> end
>
> type Foobar <: AbstractFoo
>   @commonfields()
>   barbaz
>   bazbaz
> end
>
> Chris
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Chris Stook
Last post was incomplete.

abstract AbstractFoo

macro commonfields()
  # esc keeps macro hygiene from renaming the field names
  return esc(:(
    bar;
    foo
  ))
end

type Foo <: AbstractFoo
  @commonfields()
end

type Foobar <: AbstractFoo
  @commonfields()
  barbaz
  bazbaz
end

Chris


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Chris Stook
I use a macro to avoid retyping common fields.  

abstract AbstractFoo

macro commonfields()
  return :(


type Foo <: AbstractFoo
  bar
  baz
end

type Foobar <: AbstractFoo
  bar
  baz
  barbaz
  bazbaz
end


Re: [julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread Bob Nnamtrop
I use a simple function for this:

function newdim(A::AbstractArray, d::Integer)
@assert 0 < d <= ndims(A)+1
dim = size(A)
reshape(A, dim[1:d-1]..., 1, dim[d:end]...)
end

But having syntax for a newaxis would be great. See also:

https://github.com/JuliaLang/julia/issues/5405
https://github.com/JuliaLang/julia/issues/4774 (search for newaxis)
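
For example, under the definition above (sizes shown for a length-3 vector):

```julia
a = rand(3)
size(newdim(a, 1))   # (1, 3): new leading axis, like a[np.newaxis, :]
size(newdim(a, 2))   # (3, 1): new trailing axis, like a[:, np.newaxis]
```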

Bob

On Mon, Sep 12, 2016 at 5:00 PM, Tim Holy  wrote:

> julia> a = rand(3)
> 3-element Array{Float64,1}:
> 0.47428
> 0.505429
> 0.198919
>
> julia> reshape(a, (3,1))
> 3×1 Array{Float64,2}:
> 0.47428
> 0.505429
> 0.198919
>
> julia> reshape(a, (1,3))
> 1×3 Array{Float64,2}:
> 0.47428  0.505429  0.198919
>
> Is that what you want? (Note that for both of them, the result is
> 2-dimensional.)
>
> --Tim
>
> On Monday, September 12, 2016 6:47:04 PM CDT Neal Becker wrote:
> > I haven't studied it, but I guess that newaxis increases the
> > dimensionality, while specifying 0 for the stride. Can reshape do that?
> >
> > Tim Holy wrote:
> > > I'm not certain I understand what `np.newaxis` does, but doesn't
> > > `reshape` do the same thing? (newaxis does look like a convenient way
> > > to specify shape, though.)
> > >
> > > Best,
> > > --Tim
> > >
> > > On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
> > >> Some time ago I asked this question
> > >> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
> > >>
> > >> As a more interesting example, here is some real python code I use:
> > >> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
> > >>
> > >> where demod_out, const.map are each vectors, mag_sqr performs
> > >> element-wise euclidean distance, and the result is a 2D array whose 1st
> > >> axis matches the 1st axis of demod_out, and the 2nd axis matches the 2nd
> > >> axis of const.map.
> > >>
> > >> From the answers I've seen, julia doesn't really have an equivalent
> > >> functionality. The idea here is, without allocating a new array,
> > >> manipulate the strides to cause broadcasting.
> > >>
> > >> AFAICT, the best for Julia would be just forget the vectorized code,
> > >> and explicitly write out loops to perform the computation. OK, I guess,
> > >> but maybe not as readable.
> > >>
> > >> Is there any news on this front?
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Tom Breloff
I think #2 is the right solution, but I also wish there was a nicer syntax
to do it.  Here's how I'd probably tackle it... if I get around to it soon
I'll post the implementation:

@abstract type AbstractFoo
bar::Int
end

@extend AbstractFoo type Foo
end

@extend AbstractFoo type Foobar
baz::Float64
end

# then:
Foo <: AbstractFoo
Foobar <: AbstractFoo

and both Foo and Foobar have a field bar.

I suspect this will be dirt-simple to implement for non-parametrics, but
might be a little tricky otherwise.  It's just a matter of injecting fields
(and parameters) into the proper spot of the type definition.


On Mon, Sep 12, 2016 at 6:00 PM, Chris Rackauckas 
wrote:

> Ahh, that makes a lot of sense as well. I can see how that would make
> everything a lot harder to optimize. Thanks for the explanation!
>
> On Monday, September 12, 2016 at 2:44:22 PM UTC-7, Stefan Karpinski wrote:
>>
>> The biggest practical issue is that if you can subtype a concrete type
>> then you can't store values inline in an array, even if the values are
>> immutable – since a subtype can be bigger than the supertype. This leads to
>> having things like "final" classes, etc. Fundamentally, this is really an
>> issue of failing to separate the concrete type – which is complete and can
>> be instantiated – from the abstract type, which is incomplete and can be
>> subtyped.
>>
>> On Mon, Sep 12, 2016 at 3:17 PM, Chris Rackauckas 
>> wrote:
>>
>>> https://en.wikipedia.org/wiki/Composition_over_inheritance
>>>
>>> http://programmers.stackexchange.com/questions/134097/why-
>>> should-i-prefer-composition-over-inheritance
>>>
>>> https://www.thoughtworks.com/insights/blog/composition-vs-in
>>> heritance-how-choose
>>>
>>> That's just the start. Over time, people realized inheritance can be
>>> quite fragile, so many style guidelines simply forbid you from doing it.
>>>
>>> On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:

 Looking at this example, it seems mighty tempting to have the ability
 to subtype a concrete type. Are the exact problems with that documented
 somewhere? I am aware of the following section in the docs:

 "One particularly distinctive feature of Julia’s type system is that
 concrete types may not subtype each other: all concrete types are final and
 may only have abstract types as their supertypes. While this might at first
 seem unduly restrictive, it has many beneficial consequences with
 surprisingly few drawbacks. It turns out that being able to inherit
 behavior is much more important than being able to inherit structure, and
 inheriting both causes significant difficulties in traditional
 object-oriented languages."

 I'm just wondering what the "significant difficulties" are, not
 advocating changing this behaviour.

 On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski 
 wrote:

> I would probably go with approach #2 myself and only refer to the .bar
> and .baz fields in all of the generic AbstractFoo methods.
>
> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard <
> mkborr...@gmail.com> wrote:
>
>> Hi,
>>
>> I am defining a set of types to hold scientific data, and trying to
>> get the best out of Julia's type system. The types in my example are
>> 'nested' in the sense that each type will hold progressively more
>> information and thus allow the user to do progressively more. Like this:
>>
>> type Foo
>>   bar
>>   baz
>> end
>>
>> type Foobar
>>   bar  # this
>>   baz  # and this are identical with Foo
>>   barbaz
>>   bazbaz
>> end
>>
>>
>
>>


Re: [julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread Tim Holy
julia> a = rand(3)
3-element Array{Float64,1}:
0.47428
0.505429
0.198919

julia> reshape(a, (3,1))
3×1 Array{Float64,2}:
0.47428
0.505429
0.198919

julia> reshape(a, (1,3))
1×3 Array{Float64,2}:
0.47428  0.505429  0.198919

Is that what you want? (Note that for both of them, the result is 
2-dimensional.)
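
Putting reshape together with the 0.5 dot syntax gives a sketch of the original numpy snippet (demod_out and cmap are stand-in vectors of my own; mag_sqr is assumed to be the squared magnitude, i.e. abs2):

```julia
demod_out = rand(4)
cmap = rand(3)   # stand-in for const.map
# reshape adds the singleton axes that np.newaxis would, and broadcast
# does the rest; the dot call fuses abs2 into the same loop
dist = abs2.(reshape(demod_out, (4, 1)) .- reshape(cmap, (1, 3)))
# size(dist) == (4, 3) and dist[i, j] == abs2(demod_out[i] - cmap[j])
```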

--Tim

On Monday, September 12, 2016 6:47:04 PM CDT Neal Becker wrote:
> I haven't studied it, but I guess that newaxis increases the dimensionality,
> while specifying 0 for the stride.  Can reshape do that?
> 
> Tim Holy wrote:
> > I'm not certain I understand what `np.newaxis` does, but doesn't `reshape`
> > do the same thing? (newaxis does look like a convenient way to specify
> > shape, though.)
> > 
> > Best,
> > --Tim
> > 
> > On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
> >> Some time ago I asked this question
> >> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
> >> 
> >> As a more interesting example, here is some real python code I use:
> >> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
> >> 
> >> where demod_out, const.map are each vectors, mag_sqr performs
> >> element-wise euclidean distance, and the result is a 2D array whose 1st
> >> axis matches the 1st axis of demod_out, and the 2nd axis matches the 2nd
> >> axis of const.map.
> >> 
> >> 
> >> From the answers I've seen, julia doesn't really have an equivalent
> >> functionality.  The idea here is, without allocating a new array,
> >> manipulate the strides to cause broadcasting.
> >> 
> >> AFAICT, the best for Julia would be just forget the vectorized code, and
> >> explicitly write out loops to perform the computation.  OK, I guess, but
> >> maybe not as readable.
> >> 
> >> Is there any news on this front?




[julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread Neal Becker
I haven't studied it, but I guess that newaxis increases the dimensionality, 
while specifying 0 for the stride.  Can reshape do that?

Tim Holy wrote:

> I'm not certain I understand what `np.newaxis` does, but doesn't `reshape`
> do the same thing? (newaxis does look like a convenient way to specify
> shape, though.)
> 
> Best,
> --Tim
> 
> On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
>> Some time ago I asked this question
>> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
>> 
>> As a more interesting example, here is some real python code I use:
>> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
>> 
>> where demod_out, const.map are each vectors, mag_sqr performs
>> element-wise euclidean distance, and the result is a 2D array whose 1st
>> axis matches the 1st axis of demod_out, and the 2nd axis matches the 2nd
>> axis of const.map.
>> 
>> 
>> From the answers I've seen, julia doesn't really have an equivalent
>> functionality.  The idea here is, without allocating a new array,
>> manipulate the strides to cause broadcasting.
>> 
>> AFAICT, the best for Julia would be just forget the vectorized code, and
>> explicitly write out loops to perform the computation.  OK, I guess, but
>> maybe not as readable.
>> 
>> Is there any news on this front?




[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Chris Rackauckas
I don't think it's available yet. This might be something you want to 
file a feature request for by opening an issue.

On Monday, September 12, 2016 at 2:46:31 PM UTC-7, Patrick Belliveau wrote:
>
> Hi all,
>   In his JuliaCon 2016 talk 
>  on Juno's new graphical 
> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
> that displays currently defined variable values from an interactive Julia 
> session. My impression from the video is that this feature should be 
> available in the latest version of Juno but I can't get it to show up. As 
> far as I can tell, the feature is not included in my version of Juno. Am I 
> missing something or has this functionality not been released yet? I'm on 
> linux, running 
>
> Julia 0.5.0-rc4+0
> atom 1.9.9
> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
> ink 0.5.1
> julia-client 0.5.2
> language-julia 0.6
> uber-juno 0.1.1
>
> Thanks, Patrick
>
> P.S. I've just started using Juno and in general I'm really liking it, 
> especially the debugging gui features. Great work Juno team!
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Chris Rackauckas
Ahh, that makes a lot of sense as well. I can see how that would make 
everything a lot harder to optimize. Thanks for the explanation!

On Monday, September 12, 2016 at 2:44:22 PM UTC-7, Stefan Karpinski wrote:
>
> The biggest practical issue is that if you can subtype a concrete type 
> then you can't store values inline in an array, even if the values are 
> immutable – since a subtype can be bigger than the supertype. This leads to 
> having things like "final" classes, etc. Fundamentally, this is really an 
> issue of failing to separate the concrete type – which is complete and can 
> be instantiated – from the abstract type, which is incomplete and can be 
> subtyped.
>
> On Mon, Sep 12, 2016 at 3:17 PM, Chris Rackauckas  > wrote:
>
>> https://en.wikipedia.org/wiki/Composition_over_inheritance
>>
>>
>> http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-composition-over-inheritance
>>
>>
>> https://www.thoughtworks.com/insights/blog/composition-vs-inheritance-how-choose
>>
>> That's just the start. Over time, people realized inheritance can be quite 
>> fragile, so many style guidelines simply forbid you from doing it.
>>
>> On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:
>>>
>>> Looking at this example, it seems mighty tempting to have the ability to 
>>> subtype a concrete type. Are the exact problems with that documented 
>>> somewhere? I am aware of the following section in the docs:
>>>
>>> "One particularly distinctive feature of Julia’s type system is that 
>>> concrete types may not subtype each other: all concrete types are final and 
>>> may only have abstract types as their supertypes. While this might at first 
>>> seem unduly restrictive, it has many beneficial consequences with 
>>> surprisingly few drawbacks. It turns out that being able to inherit 
>>> behavior is much more important than being able to inherit structure, and 
>>> inheriting both causes significant difficulties in traditional 
>>> object-oriented languages."
>>>
>>> I'm just wondering what the "significant difficulties" are, not 
>>> advocating changing this behaviour.
>>>
>>> On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski  
>>> wrote:
>>>
 I would probably go with approach #2 myself and only refer to the .bar 
 and .baz fields in all of the generic AbstractFoo methods.

 On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard <
 mkborr...@gmail.com> wrote:

> Hi,
>
> I am defining a set of types to hold scientific data, and trying to 
> get the best out of Julia's type system. The types in my example are 
> 'nested' in the sense that each type will hold progressively more 
> information and thus allow the user to do progressively more. Like this:
>
> type Foo
>   bar
>   baz
> end
>
> type Foobar
>   bar  # this
>   baz  # and this are identical with Foo
>   barbaz
>   bazbaz
> end
>
>

>

[julia-users] Re: can someone help me read julia's memory footprint on this cluster? [SGE]

2016-09-12 Thread Chris Rackauckas
For SGE, a lot of systems let you ssh into the node and use htop. That will 
show you a lot of information about the node, and can help you find out 
which process is using what amount of memory (note, this check only works 
in real time, so your computation has to run long enough).
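
A rough sequence for doing that on an SGE cluster (node042 is a hypothetical host name; your site's policy may not allow direct ssh to compute nodes):

```shell
# see which node each array task is running on (-g t expands array tasks)
qstat -u $USER -g t
# log into one of the listed nodes and watch per-process memory live
ssh node042
htop    # sort by RES to see each worker's resident memory
```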

On Monday, September 12, 2016 at 1:24:20 PM UTC-7, Florian Oswald wrote:
>
> hi all,
>
> i get the following output from the SGE command `qstat -j jobnumber` of a 
> julia job that uses 30 workers. I am confused by the mem column. am I using 
> more memory than what I asked for? I asked for max 4G on each processor.
>
>
> job-array tasks:1-30:1
>
> usage1: cpu=00:00:08, mem=20.61903 GBs, io=0.08026, 
> vmem=2.684G, maxvmem=2.684G
>
> usage2: cpu=00:00:13, mem=35.36547 GBs, io=0.14832, 
> vmem=2.754G, maxvmem=2.754G
>
> usage3: cpu=00:00:16, mem=47.97179 GBs, io=0.10563, 
> vmem=3.084G, maxvmem=3.084G
>
> usage4: cpu=00:00:17, mem=52.39960 GBs, io=0.14685, 
> vmem=3.146G, maxvmem=3.146G
>
> usage5: cpu=00:00:13, mem=38.00948 GBs, io=0.06336, 
> vmem=3.208G, maxvmem=3.208G
>
> usage6: cpu=00:00:14, mem=41.84277 GBs, io=0.08085, 
> vmem=3.208G, maxvmem=3.208G
>
> usage7: cpu=00:00:16, mem=49.34722 GBs, io=0.10563, 
> vmem=3.208G, maxvmem=3.208G
>
> usage8: cpu=00:00:18, mem=56.29933 GBs, io=0.14685, 
> vmem=3.208G, maxvmem=3.208G
>
> usage9: cpu=00:00:21, mem=61.30837 GBs, io=0.13234, 
> vmem=3.145G, maxvmem=3.145G
>
> usage   10: cpu=00:00:18, mem=53.78262 GBs, io=0.10650, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   11: cpu=00:00:19, mem=58.20804 GBs, io=0.16047, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   12: cpu=00:00:19, mem=58.90526 GBs, io=0.15296, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   13: cpu=00:00:13, mem=37.73257 GBs, io=0.06336, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   14: cpu=00:00:15, mem=43.44044 GBs, io=0.08085, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   15: cpu=00:00:19, mem=58.27114 GBs, io=0.13234, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   16: cpu=00:00:17, mem=51.33971 GBs, io=0.10650, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   17: cpu=00:00:19, mem=56.00911 GBs, io=0.16047, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   18: cpu=00:00:19, mem=57.45101 GBs, io=0.15301, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   19: cpu=00:00:19, mem=56.42524 GBs, io=0.13240, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   20: cpu=00:00:18, mem=52.25189 GBs, io=0.10650, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   21: cpu=00:00:20, mem=60.76601 GBs, io=0.12465, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   22: cpu=00:00:22, mem=65.11690 GBs, io=0.14843, 
> vmem=3.207G, maxvmem=3.207G
>
> usage   23: cpu=00:00:18, mem=52.75353 GBs, io=0.11566, 
> vmem=3.146G, maxvmem=3.146G
>
> usage   24: cpu=00:00:15, mem=44.21442 GBs, io=0.04204, 
> vmem=3.207G, maxvmem=3.207G
>
> usage   25: cpu=00:00:20, mem=58.85802 GBs, io=0.14714, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   26: cpu=00:00:18, mem=53.52543 GBs, io=0.12236, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   27: cpu=00:00:20, mem=59.24938 GBs, io=0.13234, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   28: cpu=00:00:20, mem=59.86234 GBs, io=0.12465, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   29: cpu=00:00:18, mem=53.94314 GBs, io=0.11566, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   30: cpu=00:00:16, mem=48.74432 GBs, io=0.10222, 
> vmem=3.208G, maxvmem=3.208G
>


[julia-users] Re: Tutorial Julia language brazilian portuguese

2016-09-12 Thread jmarcellopereira
Hello Felipe. It's a good idea. If you create it, I'll share it with the 
people at UnB.

On Monday, September 12, 2016 at 17:54:50 UTC-3, Phelipe Wesley wrote:
>
> What do you think about creating a Julia-Brasil group on Slack?
>


[julia-users] Re: Complex parallel computing implementation

2016-09-12 Thread dnm
Have you tried Dagger.jl to set up a DAG of the computations you need 
performed?

On Monday, September 12, 2016 at 5:04:26 PM UTC-4, Adrian Salceanu wrote:
>
> This is a random example of an error - not really sure how to debug this, 
> seems to crash within the Postgres library. The dump is long but not really 
> helpful. 
>
> 12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 5
> 12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 3
> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 2
> fatal error on fatal error on 3: ERROR: attempt to send to unknown socket
> 4: ERROR: attempt to send to unknown socket
> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 5
> fatal error on 3: ERROR: attempt to send to unknown socket
> 12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
> fatal error on 2: ERROR: attempt to send to unknown socket
> ERROR: LoadError: On worker 2:
> LoadError: On worker 2:
> LoadError: LoadError: LoadError: LoadError: LoadError: 
> ProcessExitedException()
>  in yieldto at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in wait at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 3 
> times)
>  in take! at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 
> 2 times)
>  in remotecall_fetch at multi.jl:745
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
>
> ...and 1 other exceptions.
>
>  in include_string at loading.jl:282
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in include_string at loading.jl:282
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in eval at 
> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl:1
>  in include_string at loading.jl:282
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in include_string at loading.jl:282
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in anonymous at multi.jl:1394
>  in run_work_thunk at multi.jl:661
>  in remotecall_fetch at multi.jl:734
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
> while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/../deps/build.jl, 
> in expression starting on line 9
> while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/PostgreSQL.jl, in 
> expression starting on line 8
> while loading 
> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/database_adapters/PostgreSQLDatabaseAdapter.jl,
>  
> in expression starting on line 3
> while loading 
> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl, in 
> expression starting on line 7
> while loading 
> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/commands.jl, in 
> expression starting on line 11
>  in remotecall_fetch at multi.jl:735
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
>
> ...and 1 other exceptions.
>
>  in sync_end at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in anonymous at multi.jl:1405
>  in include_string at loading.jl:282
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in anonymous at multi.jl:1394
>  in anonymous at multi.jl:923
>  in run_work_thunk at multi.jl:661
>  [inlined code] from multi.jl:923
>  in anonymous at task.jl:63
> while loading 
> /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Genie.jl, in expression 
> starting on line 1
>  in remotecall_fetch at multi.jl:747
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
>
> ...and 4 other exceptions.
>
>  in sync_end at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in anonymous at multi.jl:1405
>  in include at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in include_from_node1 at 
> /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in process_options at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
>  in _start at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
> while loading /Users/adrian/Dropbox/Projects/jinnie/genie.jl, in 
> expression starting on line 16
>
> luni, 12 septembrie 2016, 23:01:51 UTC+2, Adrian Salceanu a scris:
>>
>> I was wondering if anybody can point me towards a tutorial or a large 
>> code base using parallel computing. Everything that is discussed so far in 
>> the docs and books is super simple - take a function, run it in parallel, 
>> the end. 
>>
>> To explain, I'm working 

[julia-users] Juno workspace variable display.

2016-09-12 Thread Patrick Belliveau
Hi all,
  In his JuliaCon 2016 talk on Juno's new graphical 
debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
that displays currently defined variable values from an interactive Julia 
session. My impression from the video is that this feature should be 
available in the latest version of Juno but I can't get it to show up. As 
far as I can tell, the feature is not included in my version of Juno. Am I 
missing something or has this functionality not been released yet? I'm on 
linux, running 

Julia 0.5.0-rc4+0
atom 1.9.9
master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
ink 0.5.1
julia-client 0.5.2
language-julia 0.6
uber-juno 0.1.1

Thanks, Patrick

P.S. I've just started using Juno and in general I'm really liking it, 
especially the debugging gui features. Great work Juno team!


Re: [julia-users] code design question – best ideomatic way to define nested types?

2016-09-12 Thread Stefan Karpinski
The biggest practical issue is that if you can subtype a concrete type then
you can't store values inline in an array, even if the values are immutable
– since a subtype can be bigger than the supertype. This leads to having
things like "final" classes, etc. Fundamentally, this is really an issue of
failing to separate the concrete type – which is complete and can be
instantiated – from the abstract type, which is incomplete and can be
subtyped.
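
A minimal sketch of that approach #2, using the Foo example from the thread 
(written in current syntax for runnability; on the 0.4/0.5 of this thread, 
`abstract type X end` was spelled `abstract X` and `mutable struct` was 
spelled `type`; `describe` is just an illustrative method name):

```julia
# Sketch: both concrete types stay final, but share an abstract parent;
# generic methods only touch the fields common to all subtypes.
abstract type AbstractFoo end

mutable struct Foo <: AbstractFoo
    bar
    baz
end

mutable struct Foobar <: AbstractFoo
    bar   # same leading fields as Foo
    baz
    barbaz
    bazbaz
end

# Generic over the abstract type: uses only .bar and .baz.
describe(x::AbstractFoo) = string(x.bar, "/", x.baz)

describe(Foo(1, 2))            # "1/2"
describe(Foobar(1, 2, 3, 4))   # "1/2"
```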

On Mon, Sep 12, 2016 at 3:17 PM, Chris Rackauckas 
wrote:

> https://en.wikipedia.org/wiki/Composition_over_inheritance
>
> http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-
> composition-over-inheritance
>
> https://www.thoughtworks.com/insights/blog/composition-vs-
> inheritance-how-choose
>
> That's just the start. Overtime, people realized inheritance can be quite
> fragile, so many style guidelines simply forbid you from doing it.
>
> On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:
>>
>> Looking at this example, it seems mighty tempting to have the ability to
>> subtype a concrete type. Are the exact problems with that documented
>> somewhere? I am aware of the following section in the docs:
>>
>> "One particularly distinctive feature of Julia’s type system is that
>> concrete types may not subtype each other: all concrete types are final and
>> may only have abstract types as their supertypes. While this might at first
>> seem unduly restrictive, it has many beneficial consequences with
>> surprisingly few drawbacks. It turns out that being able to inherit
>> behavior is much more important than being able to inherit structure, and
>> inheriting both causes significant difficulties in traditional
>> object-oriented languages."
>>
>> I'm just wondering what the "significant difficulties" are, not
>> advocating changing this behaviour.
>>
>> On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski 
>> wrote:
>>
>>> I would probably go with approach #2 myself and only refer to the .bar
>>> and .baz fields in all of the generic AbstractFoo methods.
>>>
>>> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard <
>>> mkborr...@gmail.com> wrote:
>>>
 Hi,

 I am defining a set of types to hold scientific data, and trying to get
 the best out of Julia's type system. The types in my example are 'nested'
 in the sense that each type will hold progressively more information and
 thus allow the user to do progressively more. Like this:

 type Foo
   bar
   baz
 end

 type Foobar
   bar  # this
   baz  # and this are identical with Foo
   barbaz
   bazbaz
 end


>>>


[julia-users] Re: Complex parallel computing implementation

2016-09-12 Thread Adrian Salceanu
This is a random example of an error - not really sure how to debug this, 
seems to crash within the Postgres library. The dump is long but not really 
helpful. 

12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 5
12-Sep 21:39:38:WARNING:root:Module __anon__ not defined on process 3
12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 2
fatal error on fatal error on 3: ERROR: attempt to send to unknown socket
4: ERROR: attempt to send to unknown socket
12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 5
fatal error on 3: ERROR: attempt to send to unknown socket
12-Sep 21:39:39:WARNING:root:Module __anon__ not defined on process 3
fatal error on 2: ERROR: attempt to send to unknown socket
ERROR: LoadError: On worker 2:
LoadError: On worker 2:
LoadError: LoadError: LoadError: LoadError: LoadError: 
ProcessExitedException()
 in yieldto at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in wait at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 3 
times)
 in take! at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib (repeats 2 
times)
 in remotecall_fetch at multi.jl:745
 in remotecall_fetch at multi.jl:750
 in anonymous at multi.jl:1396

...and 1 other exceptions.

 in include_string at loading.jl:282
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in include_string at loading.jl:282
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in eval at 
/Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl:1
 in include_string at loading.jl:282
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in include_string at loading.jl:282
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in anonymous at multi.jl:1394
 in run_work_thunk at multi.jl:661
 in remotecall_fetch at multi.jl:734
 in remotecall_fetch at multi.jl:750
 in anonymous at multi.jl:1396
while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/../deps/build.jl, in 
expression starting on line 9
while loading /Users/adrian/.julia/v0.4/PostgreSQL/src/PostgreSQL.jl, in 
expression starting on line 8
while loading 
/Users/adrian/Dropbox/Projects/jinnie/lib/Genie/database_adapters/PostgreSQLDatabaseAdapter.jl,
 
in expression starting on line 3
while loading 
/Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Database.jl, in 
expression starting on line 7
while loading 
/Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/commands.jl, in 
expression starting on line 11
 in remotecall_fetch at multi.jl:735
 in remotecall_fetch at multi.jl:750
 in anonymous at multi.jl:1396

...and 1 other exceptions.

 in sync_end at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in anonymous at multi.jl:1405
 in include_string at loading.jl:282
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in require at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in eval at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in anonymous at multi.jl:1394
 in anonymous at multi.jl:923
 in run_work_thunk at multi.jl:661
 [inlined code] from multi.jl:923
 in anonymous at task.jl:63
while loading /Users/adrian/Dropbox/Projects/jinnie/lib/Genie/src/Genie.jl, 
in expression starting on line 1
 in remotecall_fetch at multi.jl:747
 in remotecall_fetch at multi.jl:750
 in anonymous at multi.jl:1396

...and 4 other exceptions.

 in sync_end at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in anonymous at multi.jl:1405
 in include at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in include_from_node1 at 
/usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in process_options at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
 in _start at /usr/local/Cellar/julia/0.4.6_1/lib/julia/sys.dylib
while loading /Users/adrian/Dropbox/Projects/jinnie/genie.jl, in expression 
starting on line 16

luni, 12 septembrie 2016, 23:01:51 UTC+2, Adrian Salceanu a scris:
>
> I was wondering if anybody can point me towards a tutorial or a large code 
> base using parallel computing. Everything that is discussed so far in the 
> docs and books is super simple - take a function, run it in parallel, the 
> end. 
>
> To explain, I'm working on a full stack MVC web framework - so think many 
> functions and a few types grouped in maybe 20 modules. Plus more or less 20 
> other external modules. The workflow that I'm after is: 
>
> 1. bootstrap - load the necessary components to start up the framework 
> (parsing command line args, loading configuration, setting up include 
> paths, etc)
> 2. start an instance of HttpServer and listen to a 

[julia-users] Complex parallel computing implementation

2016-09-12 Thread Adrian Salceanu
I was wondering if anybody can point me towards a tutorial or a large code 
base using parallel computing. Everything that is discussed so far in the 
docs and books is super simple - take a function, run it in parallel, the 
end. 

To explain, I'm working on a full stack MVC web framework - so think many 
functions and a few types grouped in maybe 20 modules. Plus more or less 20 
other external modules. The workflow that I'm after is: 

1. bootstrap - load the necessary components to start up the framework 
(parsing command line args, loading configuration, setting up include 
paths, etc)
2. start an instance of HttpServer and listen to a port
3. when the server receives a request it invokes a function of the Router 
module which is the entry point into the MVC stack
4. once the Router gets the request, it's pushed up the MVC stack and at 
the end a HttpServer.Response instance is returned

That being said, 
a. my strategy is simple: for each request, spawn the function call at #3 
to a worker
b. imagine that what's at #4 represents 80% of the app, with a multitude of 
functions being invoked across a multitude of modules (ORM, controller, 
Models, Loggers, Databases, Caching, Sessions, Authentication, etc). 

Everything works great in a single process, but getting the stack to run on 
multiple workers is a nightmare. I got it to the point where at #3 I can 
invoke a simple function call (something like returning a string or a date) 
and run it on multiple workers. But when I try to invoke the Router and 
snowball the framework, I end up in an avalanche of unknown references. 

The codebase is now littered with calls to @everywhere that add a lot of 
noise, but up to this point I still wasn't able to make it work. The errors 
come from deep within the Julia stack: the workers crash saying that a 
module or function can't be found, but there's no stack trace to point me 
towards the location, so it's really trial and error, commenting and 
uncommenting "include" and "using" statements to see what gives; I also get 
errors from within external modules, etc. 

I guess what I'm trying to say is that my experience with parallel 
computing applied to a large codebase is very frustrating. And I was 
wondering if anybody has done a parallel implementation in a larger 
codebase and has any tips on how to attack it. 


P.S.
I might be spoiled by Elixir, but ideally I'd like to be able to spawn 
processes and functions whenever I need, and have the compiler take care of 
making the code available across workers. 
Another thing that would be very useful, also inspired by Elixir, is the 
supervisor design pattern, to look over and restart the workers when they 
crash. 
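
For reference, the simple version of step (a) above looks like this (a 
sketch; `handle_request` stands in for the Router entry point and is not 
from the actual framework):

```julia
# Sketch: make the handler available on every worker, then run one call
# per request on a worker. A real framework needs every transitive module
# loaded everywhere too, which is exactly the pain described above.
using Distributed   # on 0.4/0.5 these primitives lived in Base
addprocs(2)

@everywhere function handle_request(path)
    # stands in for Router.route(request) -> Response
    string("response for ", path, " from worker ", myid())
end

# Step 3: dispatch an incoming request to a worker and wait for the result.
resp = remotecall_fetch(handle_request, workers()[1], "/users")
```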


Re: [julia-users] equivalent of numpy newaxis?

2016-09-12 Thread Tim Holy
I'm not certain I understand what `np.newaxis` does, but doesn't `reshape` do 
the same thing? (newaxis does look like a convenient way to specify shape, 
though.)

Best,
--Tim

On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
> Some time ago I asked this question
> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of
> -numpy-newaxis
> 
> As a more interesting example, here is some real python code I use:
> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
> 
> where demod_out, const.map are each vectors, mag_sqr performs element-wise
> euclidean distance, and the result is a 2D array whose 1st axis matches the
> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of const.map.
> 
> 
> From the answers I've seen, julia doesn't really have an equivalent
> functionality.  The idea here is, without allocating a new array, manipulate
> the strides to cause broadcasting.
> 
> AFAICT, the best for Julia would be just forget the vectorized code, and
> explicitly write out loops to perform the computation.  OK, I guess, but
> maybe not as readable.
> 
> Is there any news on this front?




[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-12 Thread Tony Kelman
Intel compilers on Windows are MSVC-style, which our build system is not really 
set up to handle. There is experimental partial support (search for "MSVC 
support tracking issue" if you're interested), but it would really require 
rewriting the build system to use CMake to work smoothly.

You can build Julia from source with MinGW, but direct the build system to use 
an MKL library instead of building OpenBLAS from source. At one point MKL on 
Windows didn't export gfortran-style naming conventions for BLAS and LAPACK 
functions, but recent versions do, last I checked. You might even be able to 
switch from an existing Julia binary by rebuilding the system image.
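
Concretely, the "direct the build system to use an MKL library" part goes 
through Make.user; a sketch (the MKLROOT path is illustrative, and the 
variable name follows Julia's build documentation of that era):

```make
# Make.user at the top of the Julia source tree
USE_INTEL_MKL = 1
# point the build at your MKL install, e.g.:
# MKLROOT = /opt/intel/mkl
```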

[julia-users] Re: Tutorial Julia language brazilian portuguese

2016-09-12 Thread Phelipe Wesley
O que acham de criarmos um grupo Julia-Brasil no slack?


[julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread Matt Bauman
It's pretty close. In Julia 0.5, we have all the parts that are required to 
make this a possibility.  We have index types that specify both how many 
indices in the source array should be consumed (CartesianIndex{N} spans N 
dimensions) and types that determine what the dimensionality of the output 
should be (the dimensionality of the result is the sum of the 
dimensionalities of the indices).

That means that we now know exactly what the Julia equivalent of 
`np.newaxis` should be: [CartesianIndex{0}()].  This is really cool, and 
something I never considered before.

We're not quite there yet; views don't fully support CartesianIndices (fix 
in progress), and indexing by default creates a copy.
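
A sketch of what that looks like with plain indexing (which copies; the 
`newaxis` name is just a local alias here, not something Base exports):

```julia
# Sketch: a 1-element vector of 0-d CartesianIndex consumes no source
# dimensions but contributes one output dimension of length 1.
const newaxis = [CartesianIndex{0}()]

demod_out = [1.0, 2.0, 3.0]
constmap  = [0.5, 2.5]

col  = demod_out[:, newaxis]   # 3x1 matrix
row  = constmap[newaxis, :]    # 1x2 matrix
dist = abs2.(col .- row)       # broadcasts to a 3x2 distance table
```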

On Monday, September 12, 2016 at 2:29:15 PM UTC-5, Neal Becker wrote:
>
> Some time ago I asked this question 
>
> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
>  
>
> As a more interesting example, here is some real python code I use: 
> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:]) 
>
> where demod_out, const.map are each vectors, mag_sqr performs element-wise 
> euclidean distance, and the result is a 2D array whose 1st axis matches 
> the 
> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of const.map. 
>
>
> From the answers I've seen, julia doesn't really have an equivalent 
> functionality.  The idea here is, without allocating a new array, 
> manipulate 
> the strides to cause broadcasting. 
>
> AFAICT, the best for Julia would be just forget the vectorized code, and 
> explicitly write out loops to perform the computation.  OK, I guess, but 
> maybe not as readable. 
>
> Is there any news on this front? 
>
>

[julia-users] can someone help me read julia's memory footprint on this cluster? [SGE]

2016-09-12 Thread Florian Oswald
hi all,

i get the following output from the SGE command `qstat -j jobnumber` of a 
julia job that uses 30 workers. I am confused by the mem column. am I using 
more memory than what I asked for? I asked for max 4G on each processor.


job-array tasks:1-30:1

usage1: cpu=00:00:08, mem=20.61903 GBs, io=0.08026, 
vmem=2.684G, maxvmem=2.684G

usage2: cpu=00:00:13, mem=35.36547 GBs, io=0.14832, 
vmem=2.754G, maxvmem=2.754G

usage3: cpu=00:00:16, mem=47.97179 GBs, io=0.10563, 
vmem=3.084G, maxvmem=3.084G

usage4: cpu=00:00:17, mem=52.39960 GBs, io=0.14685, 
vmem=3.146G, maxvmem=3.146G

usage5: cpu=00:00:13, mem=38.00948 GBs, io=0.06336, 
vmem=3.208G, maxvmem=3.208G

usage6: cpu=00:00:14, mem=41.84277 GBs, io=0.08085, 
vmem=3.208G, maxvmem=3.208G

usage7: cpu=00:00:16, mem=49.34722 GBs, io=0.10563, 
vmem=3.208G, maxvmem=3.208G

usage8: cpu=00:00:18, mem=56.29933 GBs, io=0.14685, 
vmem=3.208G, maxvmem=3.208G

usage9: cpu=00:00:21, mem=61.30837 GBs, io=0.13234, 
vmem=3.145G, maxvmem=3.145G

usage   10: cpu=00:00:18, mem=53.78262 GBs, io=0.10650, 
vmem=3.209G, maxvmem=3.209G

usage   11: cpu=00:00:19, mem=58.20804 GBs, io=0.16047, 
vmem=3.208G, maxvmem=3.208G

usage   12: cpu=00:00:19, mem=58.90526 GBs, io=0.15296, 
vmem=3.209G, maxvmem=3.209G

usage   13: cpu=00:00:13, mem=37.73257 GBs, io=0.06336, 
vmem=3.208G, maxvmem=3.208G

usage   14: cpu=00:00:15, mem=43.44044 GBs, io=0.08085, 
vmem=3.208G, maxvmem=3.208G

usage   15: cpu=00:00:19, mem=58.27114 GBs, io=0.13234, 
vmem=3.208G, maxvmem=3.208G

usage   16: cpu=00:00:17, mem=51.33971 GBs, io=0.10650, 
vmem=3.208G, maxvmem=3.208G

usage   17: cpu=00:00:19, mem=56.00911 GBs, io=0.16047, 
vmem=3.208G, maxvmem=3.208G

usage   18: cpu=00:00:19, mem=57.45101 GBs, io=0.15301, 
vmem=3.209G, maxvmem=3.209G

usage   19: cpu=00:00:19, mem=56.42524 GBs, io=0.13240, 
vmem=3.208G, maxvmem=3.208G

usage   20: cpu=00:00:18, mem=52.25189 GBs, io=0.10650, 
vmem=3.209G, maxvmem=3.209G

usage   21: cpu=00:00:20, mem=60.76601 GBs, io=0.12465, 
vmem=3.208G, maxvmem=3.208G

usage   22: cpu=00:00:22, mem=65.11690 GBs, io=0.14843, 
vmem=3.207G, maxvmem=3.207G

usage   23: cpu=00:00:18, mem=52.75353 GBs, io=0.11566, 
vmem=3.146G, maxvmem=3.146G

usage   24: cpu=00:00:15, mem=44.21442 GBs, io=0.04204, 
vmem=3.207G, maxvmem=3.207G

usage   25: cpu=00:00:20, mem=58.85802 GBs, io=0.14714, 
vmem=3.209G, maxvmem=3.209G

usage   26: cpu=00:00:18, mem=53.52543 GBs, io=0.12236, 
vmem=3.208G, maxvmem=3.208G

usage   27: cpu=00:00:20, mem=59.24938 GBs, io=0.13234, 
vmem=3.208G, maxvmem=3.208G

usage   28: cpu=00:00:20, mem=59.86234 GBs, io=0.12465, 
vmem=3.208G, maxvmem=3.208G

usage   29: cpu=00:00:18, mem=53.94314 GBs, io=0.11566, 
vmem=3.209G, maxvmem=3.209G

usage   30: cpu=00:00:16, mem=48.74432 GBs, io=0.10222, 
vmem=3.208G, maxvmem=3.208G


[julia-users] equivalent of numpy newaxis?

2016-09-12 Thread Neal Becker
Some time ago I asked this question
http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis

As a more interesting example, here is some real python code I use:
dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])

where demod_out, const.map are each vectors, mag_sqr performs element-wise 
euclidean distance, and the result is a 2D array whose 1st axis matches the 
1st axis of demod_out, and the 2nd axis matches the 2nd axis of const.map.


From the answers I've seen, julia doesn't really have an equivalent 
functionality.  The idea here is, without allocating a new array, manipulate 
the strides to cause broadcasting.

AFAICT, the best for Julia would be just forget the vectorized code, and 
explicitly write out loops to perform the computation.  OK, I guess, but 
maybe not as readable.

Is there any news on this front?



Re: [julia-users] code design question – best ideomatic way to define nested types?

2016-09-12 Thread Chris Rackauckas
https://en.wikipedia.org/wiki/Composition_over_inheritance

http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-composition-over-inheritance

https://www.thoughtworks.com/insights/blog/composition-vs-inheritance-how-choose

That's just the start. Over time, people realized inheritance can be quite 
fragile, so many style guidelines simply forbid it.
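
For the Foo/Foobar case from this thread, the composition alternative looks 
roughly like this (a sketch in current syntax; the `getbar` accessor is 
illustrative, not from the original question):

```julia
# Sketch: Foobar *holds* a Foo instead of subtyping it, and delegates
# the shared fields -- composition over inheritance.
mutable struct Foo
    bar
    baz
end

mutable struct Foobar
    foo::Foo   # the shared structure, by composition
    barbaz
    bazbaz
end

# Delegation: forward the shared accessor to the wrapped Foo.
getbar(x::Foo) = x.bar
getbar(x::Foobar) = getbar(x.foo)

fb = Foobar(Foo(1, 2), 3, 4)
getbar(fb)   # 1
```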

On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:
>
> Looking at this example, it seems mighty tempting to have the ability to 
> subtype a concrete type. Are the exact problems with that documented 
> somewhere? I am aware of the following section in the docs:
>
> "One particularly distinctive feature of Julia’s type system is that 
> concrete types may not subtype each other: all concrete types are final and 
> may only have abstract types as their supertypes. While this might at first 
> seem unduly restrictive, it has many beneficial consequences with 
> surprisingly few drawbacks. It turns out that being able to inherit 
> behavior is much more important than being able to inherit structure, and 
> inheriting both causes significant difficulties in traditional 
> object-oriented languages."
>
> I'm just wondering what the "significant difficulties" are, not advocating 
> changing this behaviour.
>
> On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski  > wrote:
>
>> I would probably go with approach #2 myself and only refer to the .bar 
>> and .baz fields in all of the generic AbstractFoo methods.
>>
>> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard > > wrote:
>>
>>> Hi,
>>>
>>> I am defining a set of types to hold scientific data, and trying to get 
>>> the best out of Julia's type system. The types in my example are 'nested' 
>>> in the sense that each type will hold progressively more information and 
>>> thus allow the user to do progressively more. Like this:
>>>
>>> type Foo
>>>   bar
>>>   baz
>>> end
>>>
>>> type Foobar
>>>   bar  # this
>>>   baz  # and this are identical with Foo
>>>   barbaz
>>>   bazbaz
>>> end
>>>
>>>
>>

[julia-users] Re: Suggestion regarding valuable Youtube videos related to Julia learning

2016-09-12 Thread Chris Rackauckas
I think we should set up something for video tutorials, like the 
JuliaBlogger community except for Youtube (is there such a thing as Youtube 
aggregation?). I plan on doing some tutorials on "Plotting with Plots.jl in 
Juno", "Solving ODEs with DifferentialEquations.jl", "Using Julia's Pkg 
with Github to Navigate Changing Packages", "Using CUDA.jl", etc., and 
other topics which involve a mix of visuals, switching programs/windows, 
and some math all at once (i.e. hard to capture in full with a blog post or 
straight text). 

I was just waiting on a few more features which help with the visuals: Juno 
plot pane, Juno time estimates from the progressbar, DifferentialEquations 
dense output, etc. Those are all pretty much together now, so I'll probably 
be doing this to set up for an October workshop. 

I am pretty sure once a few start doing it, others will join. Videos have a 
far lower barrier to entry than a detailed blog post, so it would be 
helpful to new users.

On Monday, September 12, 2016 at 10:17:26 AM UTC-7, Colin Beckingham wrote:
>
> The various Youtube videos recorded at Julia conferences look very good. 
> It's great to have explanations given by the experts at the top of the 
> Julia tree, no names mentioned, you know who you are. Thanks for this 
> resource.
> From the consumer side, the packages are kinda long. I imagine that many, 
> with the exception of the cohort that wants to see every second live, would 
> benefit from shorter edited versions that present concisely what the 
> speaker wants to say. Not a problem, you say, we just need someone to find 
> the time to sit down and pull out the *obiter dicta* to leave the 
> kernels. But where do we find this time? It can be challenging to decide 
> where to cut.
> It is a well-known practice in the political arena to give presentations 
> that make the editing process easy so that the press gets the right 
> message, the principal points in a short bite with easily identifiable 
> chunks to illustrate the points made in further detail, each in its own 
> right a standalone element.
> The speakers are excellent and experienced presenters. They know that they 
> must tailor the presentation to the audience; my suggestion is that when 
> they look out on the hundreds of people in their immediate room, they keep 
> in mind the tens of thousands who will tune in later.
> Does the video manager have any details on the number of learners 
> accessing these videos and the amount of time they remain glued to the 
> screen?
>


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Sep 12, 2016 2:52 PM, "Páll Haraldsson" 
wrote:
>
> On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
>>
>> Anyone care to make suggestions on this code, how to make it faster, or
more
>> idiomatic Julia?
>
>
>
> It may not matter, but this function:
>
> function coef_from_func(func, delta, size)
>center = float(size-1)/2
>return [func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Any,1} while this could be better:
>
> function coef_from_func(func, delta, size)
>center = float(size-1)/2
>return Float64[func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
>

Not applicable on 0.5

>
> I'm not sure this is more idiomatic, this would be an exception to not
having to specify types.. for speed (both works..)
>
> center = float(size-1)/2
>
> could however just as well be:
>
> center = (size-1)/2 # / implies float result, just as in Python 3 (not 2),
> and I like that choice.
>
> --
> Palli.


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Páll Haraldsson
On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:

> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> idiomatic Julia?
>
 

It may not matter, but this function:

function coef_from_func(func, delta, size) 
   center = float(size-1)/2 
   return [func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Any,1} while this could be better:

function coef_from_func(func, delta, size)
   center = float(size-1)/2 
   return Float64[func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
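
As a quick, self-contained check of the difference, here is a hedged sketch using `sin` as a stand-in for the user's `func` (the typed `Float64[...]` comprehension pins the element type regardless of what inference can prove about `func`):

```julia
function coef_from_func(func, delta, size)
    center = (size - 1) / 2               # / already yields a float
    # Float64 prefix guarantees eltype Float64 even if inference fails
    return Float64[func((i - center) * delta) for i in 0:size-1]
end

taps = coef_from_func(sin, 0.5, 4)        # 4-element Vector{Float64}
```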


I'm not sure this is more idiomatic, this would be an exception to not 
having to specify types.. for speed (both works..)

center = float(size-1)/2

could however just as well be:

center = (size-1)/2 # / implies float result, just as in Python 3 (not 2),
and I like that choice.

-- 
Palli.


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-12 Thread Zhong Pan
Thanks! I am still a bit confused about this part:

To build Julia for Windows, this page says I need to use MinGW compiler 
either under MSYS2 or 
Cygwin: https://github.com/JuliaLang/julia/blob/master/README.windows.md
However, to build Julia with MKL BLAS and LAPACK libraries, the first link 
you referenced says I need to use Intel Compiler 15 or above. The example 
given is for a POSIX system.

So, can I directly use the Windows version of Intel Compiler to compile 
Julia with the options turned on to use MKL, and forget about MinGW 
completely?

Thanks for leading me to VML.jl. I assume this module is independent of MKL 
BLAS and LAPACK, so even if I didn't link Julia with MKL, as long as I have 
MKL binary installed, this would work?



On Monday, September 12, 2016 at 12:35:35 AM UTC-5, Zhong Pan wrote:
>
> Anybody knows how to build Julia with Intel MKL on Windows? 
>
> I found the article below but it is for Linux.
>
> https://software.intel.com/en-us/articles/julia-with-intel-mkl-for-improved-performance
>
> Thanks!
>
>

Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Bart Janssens
Looking at this example, it seems mighty tempting to have the ability to
subtype a concrete type. Are the exact problems with that documented
somewhere? I am aware of the following section in the docs:

"One particularly distinctive feature of Julia’s type system is that
concrete types may not subtype each other: all concrete types are final and
may only have abstract types as their supertypes. While this might at first
seem unduly restrictive, it has many beneficial consequences with
surprisingly few drawbacks. It turns out that being able to inherit
behavior is much more important than being able to inherit structure, and
inheriting both causes significant difficulties in traditional
object-oriented languages."

I'm just wondering what the "significant difficulties" are, not advocating
changing this behaviour.

On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski 
wrote:

> I would probably go with approach #2 myself and only refer to the .bar and
> .baz fields in all of the generic AbstractFoo methods.
>
> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard <
> mkborrega...@gmail.com> wrote:
>
>> Hi,
>>
>> I am defining a set of types to hold scientific data, and trying to get
>> the best out of Julia's type system. The types in my example are 'nested'
>> in the sense that each type will hold progressively more information and
>> thus allow the user to do progressively more. Like this:
>>
>> type Foo
>>   bar
>>   baz
>> end
>>
>> type Foobar
>>   bar  # this
>>   baz  # and this are identical with Foo
>>   barbaz
>>   bazbaz
>> end
>>
>>
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Michael Borregaard
Thanks for the prompt response, I will go with that then :-) 

I actually thought later that I may avoid some of the clutter in approach 2 
by adding another level of indirection:

abstract AbstractFoo

type FooData
  bar
  baz
  #... several other fields
end

type FoobarData
  barbaz
  bazbaz
  #... several other fields
end

type Foo <: AbstractFoo
  foodata::FooData

  Foo(bar, baz) = new(FooData(bar, baz))
end

type Foobar <: AbstractFoo
  foodata::FooData
  foobardata::FoobarData

  Foobar(bar, baz, barbaz, bazbaz) = new(FooData(bar, baz), 
FoobarData(barbaz, bazbaz))
end

However I cannot say whether this is generally useful outside my use case.



[julia-users] Want to contribute to Julia

2016-09-12 Thread cormullion
If you're still in learning_julia mode, you could help out by checking the 
Julia wikibook (https://en.wikibooks.org/wiki/Introducing_Julia) for 0.5 
compatibility. I've been through it once to update some of the more obvious 
changes and deprecations —  but "you gotta catch em all", as they say!

Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread Chris Rackauckas
I would say start with the package ecosystem. Almost nothing in Julia Base 
is really first-class or special, so almost anything can contribute to 
Julia via packages. For example, things like Traits   
and VML bindings 
 basically add to Julia's core 
features or improve the performance, and packages like CUDA.jl 
 allow you to use GPUs. So I think the 
easiest way to get started is to contribute to (or make) packages which are 
in your expertise / that you're interested. Usually there's a lot less code 
so it's easier to get started, and you can usually find some of the 
developers in a Gitter channel to chat with and have them help you along.

Looking at the package sphere, you can find ways to contribute to anything. 
If you're interested in web development, you may want to check out Genie.jl. 
It's a web framework built in Julia that is looking really nice (one 
example), though he still has quite the TODO 
list. If you're interested in numerical differential equations, I could 
help you find a project to get started.  Etc.

Or Tamas Papp's suggestion of the intro issues also gives a lot of good 
problems to work on. Just find what's suitable to you.

On Monday, September 12, 2016 at 7:56:47 AM UTC-7, rishu...@gmail.com wrote:
>
> Thanks for the help. Can you suggest me what should I learn to work in 
> Julia?
>


[julia-users] Suggestion regarding valuable Youtube videos related to Julia learning

2016-09-12 Thread Colin Beckingham
The various Youtube videos recorded at Julia conferences look very good. 
It's great to have explanations given by the experts at the top of the 
Julia tree, no names mentioned, you know who you are. Thanks for this 
resource.
From the consumer side, the packages are kinda long. I imagine that many, 
with the exception of the cohort that wants to see every second live, would 
benefit from shorter edited versions that present concisely what the 
speaker wants to say. Not a problem, you say, we just need someone to find 
the time to sit down and pull out the *obiter dicta* to leave the kernels. 
But where do we find this time? It can be challenging to decide where to 
cut.
It is a well-known practice in the political arena to give presentations 
that make the editing process easy so that the press gets the right 
message, the principal points in a short bite with easily identifiable 
chunks to illustrate the points made in further detail, each in its own 
right a standalone element.
The speakers are excellent and experienced presenters. They know that they 
must tailor the presentation to the audience; my suggestion is that when 
they look out on the hundreds of people in their immediate room, they keep 
in mind the tens of thousands who will tune in later.
Does the video manager have any details on the number of learners accessing 
these videos and the amount of time they remain glued to the screen?


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-12 Thread Chris Rackauckas
You just do what it says here: https://github.com/JuliaLang/julia. Then you 
can replace a lot of the functions using VML.jl 


On Sunday, September 11, 2016 at 10:35:35 PM UTC-7, Zhong Pan wrote:
>
> Anybody knows how to build Julia with Intel MKL on Windows? 
>
> I found the article below but it is for Linux.
>
> https://software.intel.com/en-us/articles/julia-with-intel-mkl-for-improved-performance
>
> Thanks!
>
>

Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread Tim Holy
There are some great resources at http://julialang.org/learning/

Best,
--Tim

On Monday, September 12, 2016 7:56:46 AM CDT 
rishucod...@gmail.com wrote:
> Thanks for the help. Can you suggest me what should I learn to 
work in
> Julia?




Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Stefan Karpinski
I would probably go with approach #2 myself and only refer to the .bar and
.baz fields in all of the generic AbstractFoo methods.

On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard  wrote:

> Hi,
>
> I am defining a set of types to hold scientific data, and trying to get
> the best out of Julia's type system. The types in my example are 'nested'
> in the sense that each type will hold progressively more information and
> thus allow the user to do progressively more. Like this:
>
> type Foo
>   bar
>   baz
> end
>
> type Foobar
>   bar  # this
>   baz  # and this are identical with Foo
>   barbaz
>   bazbaz
> end
>
>
> Thus, you can do anything with a Foobar object that you can with a Foo
> object, but not the other way around. The real example is much more
> complex, of course, with levels of nestedness and more fields of complex
> types.
> There are several ways I could design this:
>
>1. I could make all objects be of type Foobar, but make the barbaz and
>bazbaz fields Nullable. I don't think that is ideal, as I would like to use
>dispatch to do things with object Foobar that I cannot do with Foo, instead
>of constantly checking for isnull on specific fields.
>2. I could keep the above design, then define an abstract type
>AbstractFoo and make both Foo and Foobar inherit from this. Then
>AbstractFoo can be used to define functions for everything that can be done
>with the fields that are in Foo objects. The downside is that the types
>become really big and clunky, and especially that my constructors become
>big and tricky to write.
>3. I could use composition to let Foobar contain a Foo object. But
>then I will have to manually dispatch every method defined for Foo to the
>Foo field of Foobar objects. To make it clear what I mean, here are the
>three designs:
>
> # Example 1:
>
> type Foobar{T<:Any}
>   bar
>   baz
>   barbaz::Nullable{T}
>   bazbaz::Nullable{T}
> end
>
>
> # Example 2:
>
> abstract AbstractFoo
>
>
> type Foo <: AbstractFoo
>   bar
>   baz
> end
>
> type Foobar <: AbstractFoo
>   bar
>   baz
>   barbaz
>   bazbaz
> end
>
>
> # Example 3:
>
> type Foo
>   bar
>   baz
> end
>
> type Foobar
>   foo::Foo
>   barbaz
>   bazbaz
> end
>
> I do realize that the easy answer to this is "this depends on your use
> case, there are pros and cons for each method". However, I believe there
> must be a general idiomatic solution, as this issue arises from the design
> of the type system: because you cannot inherit from concrete types in
> Julia, and abstract types (which you can inherit from) cannot have fields.
> In C++, where you can inherit from concrete types, this would have an
> idiomatic solution as:
>
>
> class Foo{
>   protected:
> int bar, baz;
> };
>
> class Foobar: public Foo{
>   int barbaz, bazbaz;
> };
>
>
> I have been struggling for days with different redesigns of my code and I
> really cannot wrap my head around it. I appreciate the help!
>
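
A minimal sketch of approach #2 in 0.5 syntax, following Stefan's suggestion of generic methods that touch only the shared fields (`total` and `extra` are hypothetical method names, not from the original post):

```julia
abstract AbstractFoo

type Foo <: AbstractFoo
    bar
    baz
end

type Foobar <: AbstractFoo
    bar
    baz
    barbaz
    bazbaz
end

# Generic behavior dispatches on the abstract supertype and refers
# only to the fields both concrete types share.
total(f::AbstractFoo) = f.bar + f.baz

# Foobar-only behavior gets its own, more specific method.
extra(fb::Foobar) = fb.barbaz + fb.bazbaz
```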


Re: [julia-users] Julia for Data Science book recently released

2016-09-12 Thread Stefan Karpinski
Great! You'll probably want to make a PR to add this book here:
http://julialang.org/learning/ (repo here:
https://github.com/JuliaLang/julialang.github.com)

On Mon, Sep 12, 2016 at 7:57 AM, Steve Hoberman 
wrote:

> *Julia for Data Science* by Zacharias Voulgaris, PhD, will show you how
> to use the Julia language to solve business critical data science
> challenges. After covering the importance of Julia to the data science
> community and several essential data science principles, we start with the
> basics including how to install Julia and its powerful libraries. Many
> examples are provided as we illustrate how to leverage each Julia command,
> dataset, and function. Learn more about the book and where to obtain a copy
> here: https://technicspub.com/analytics/.
>


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Stefan Karpinski
All of the globals setup in bench1 are non-const, which means the top-level
benchmarking code is pretty slow, but if N is small, this won't matter
much. If N is large, it's worth either wrapping the setup in a function
body or making all these variables const.

On Mon, Sep 12, 2016 at 8:37 AM, Neal Becker  wrote:

> Steven G. Johnson wrote:
>
> >
> >
> >
> > On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
> >>
> >> function(p::pnseq)(n,T=Int64)
> >>
> >>>
> > Note that the real problem with this function declaration is that the
> type
> > T is known only at runtime, not at compile-time. It would be better
> to
> > do
> >
> >  function (p::pnseq){T}(n, ::Type{T}=Int64)
>
> Thanks!  This change made a big difference. Now PnSeq is only using a small
> amount of time, as I expected it should.  I prefer this syntax to the
> alternative you suggest below as it seems more logical to me.
>
> >
> > since making the type a parameter like this exposes it as a compile-time
> > constant.  Although it would be even more natural to not have to pass the
> > type explicitly at all, but rather to get it from the type of n, e.g.
> >
> >
> >  function (p::pnseq){T<:Integer}(n::T)
> >
> > I have no idea whether this particular thing is performance-critical,
> > however.   I also see lots and lots of functions that allocate arrays, as
> > opposed to scalar functions that are composed and called on a single
> > array, which makes me think that you are thinking in terms of numpy-style
> > vectorized code, which doesn't take full advantage of Julia.
>
>
> >
> > It would be much easier to give performance tips if you could boil it
> > down to a single self-contained function that you want to make faster,
> > rather than requiring us to read through four or five different
> submodules
> > and
> > lots of little one-line functions and types.  (There's nothing wrong with
> > having lots of functions and types in Julia, it is just that this forces
> > us to comprehend a lot more code in order to make useful suggestions.)
>
> Nyquist and CoefFromFunc are normally only used at startup, so they are
> unimportant to optimize.
>
> The real work is PnSeq, Constellation, and the FIRFilters (which I didn't
> write - they are in DSP.jl).  I agree that the style is to operate on and
> return a large vector.
>
> I guess what you're suggesting is that PnSeq should return a single scalar,
> Constellation should map scalar->scalar.  But FIRFilter I think needs to be
> a vector->vector, so I will take advantage of simd?
>
>
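
The fix Stefan describes for the non-const globals can be sketched minimally; the names here are illustrative, not taken from Neal's actual benchmark:

```julia
# A non-const global has an unknowable type, so any code referencing it
# is slow. Either mark it const, or move the setup into a function body.
const delta = 0.125           # fix 1: type of `delta` is now fixed

function bench1(n)            # fix 2: the work lives inside a function
    acc = 0.0
    for i in 1:n
        acc += i * delta
    end
    return acc
end
```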


Re: [julia-users] Re: Julia Low Pass Filter much slower than identical code in Python ??

2016-09-12 Thread Stefan Karpinski
JIT = Just In Time, i.e. the first time you use the code.

On Mon, Sep 12, 2016 at 6:52 AM, MLicer  wrote:

> Indeed it does! I thought JIT compilation takes place prior to execution
> of the script. Thanks so much, this makes sense now!
>
> Output:
> first call:   0.804573 seconds (1.18 M allocations: 53.183 MB, 1.43% gc
> time)
> repeated call:  0.000472 seconds (217 allocations: 402.938 KB)
>
> Thanks again,
>
> Cheers!
>
>
> On Monday, September 12, 2016 at 12:48:30 PM UTC+2, randm...@gmail.com
> wrote:
>>
>> The Julia code takes 0.000535 seconds for me on the second run -- during
>> the first run, Julia has to compile the method you're timing. Have a look
>> at the performance tips
>> 
>> for a more in depth explanation.
>>
>> Am Montag, 12. September 2016 11:53:01 UTC+2 schrieb MLicer:
>>>
>>> Dear all,
>>>
>>> i've written a low-pass filter in Julia and Python and the code in Julia
>>> seems to be much slower (*0.800 sec in Julia vs 0.000 sec in Python*).
>>> I *must* be coding inefficiently, can anyone comment on the two codes
>>> below?
>>>
>>> *Julia:*
>>>
>>>
>>> 
>>> using PyPlot, DSP
>>>
>>> # generate data:
>>> x = linspace(0,30,1e4)
>>> sin_noise(arr) = sin(arr) + rand(length(arr))
>>>
>>> # create filter:
>>> designmethod = Butterworth(5)
>>> ff = digitalfilter(Lowpass(0.02),designmethod)
>>> @time yl = filtfilt(ff, sin_noise(x))
>>>
>>> Python:
>>>
>>> from scipy import signal
>>> import numpy as np
>>> import cProfile, pstats
>>>
>>> def sin_noise(arr):
>>> return np.sin(arr) + np.random.rand(len(arr))
>>>
>>> def filterSignal(b,a,x):
>>> return signal.filtfilt(b, a, x, axis=-1)
>>>
>>> def main():
>>> # generate data:
>>> x = np.linspace(0,30,1e4)
>>> y = sin_noise(x)
>>> b, a = signal.butter(5, 0.02, "lowpass", analog=False)
>>> ff = filterSignal(b,a,y)
>>>
>>> cProfile.runctx('filterSignal(b,a,y)',globals(),{'b':b,'a':a,'y':y},
>>> filename='profileStatistics')
>>>
>>> p = pstats.Stats('profileStatistics')
>>> printFirstN = 5
>>> p.sort_stats('cumtime').print_stats(printFirstN)
>>>
>>> if __name__=="__main__":
>>> main()
>>>
>>>
>>> Thanks very much for any replies!
>>>
>>
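
The compile-once, measure-later pattern from this thread, as a minimal sketch (`noisy_sin` is just a stand-in for the filtering call):

```julia
# The first call to a function compiles a specialized method;
# subsequent calls measure only execution time.
noisy_sin(x) = sin(x) + x^2

noisy_sin(1.0)         # warm-up call: triggers JIT compilation
@time noisy_sin(2.0)   # timing now reflects run time, not compilation
```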


Re: [julia-users] Julia on Research Computing Podcast RCE

2016-09-12 Thread Stefan Karpinski
We've started a conversation with Brock to make sure that happens :)

On Mon, Sep 12, 2016 at 5:03 AM, Mauro  wrote:

> RCE is an excellent podcast; it would be cool to hear some core-devs
> talking there.
>
> On Sun, 2016-09-11 at 02:29, Brock Palen  wrote:
> > I am one half of the HPC/Research Computing podcast
> http://www.rce-cast.com/
> >
> > We would like to feature Julia on the show.  This takes a developer or
> two
> > and is a friendly interview, takes about an hour over the phone or skype
> > and aims to educate our community.
> >
> > Feel free to contact me off list.
> >
> > Thank you !
> >
> > Brock Palen
>


[julia-users] sending code to a Julia REPL running in Emacs/ansi-term

2016-09-12 Thread Tamas Papp
I have been using Emacs/ESS for Julia, but realized Gallium needs a more
capable terminal. Julia runs fine inside ansi-term, but I would like to
have ESS's convenient functionality of sending a line/region/definition
to the REPL.

Is this feasible somehow, with some clever hack of julia-mode and/or ESS?

Best,

Tamas


Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread rishucoding


On Monday, September 12, 2016 at 8:26:47 PM UTC+5:30, rishu...@gmail.com 
wrote:
>
> Thanks for the help. Can you suggest me what should I learn to work in 
> Julia? I suppose python is good.
>


Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread rishucoding
Thanks for the help. Can you suggest what I should learn to work in 
Julia?


Re: [julia-users] Does anyone know how jl_calls may have to do with SIGUSR1?

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 10:40 AM, K leo  wrote:

> Thanks for the reply.
> I registered a signal handler before calling these and the code now does
> not terminate and appears to run fine.  Will my handler cause problems to
> the jl_calls?
>


Your handler doesn't cause any trouble; it will just never be called, since
Julia catches the signal before your handler does.



> The documentation mentions SIGUSR2 with the profiler, BTW.
>

It uses both on some systems.


>
> On Monday, September 12, 2016 at 10:06:04 PM UTC+8, Yichao Yu wrote:
>>
>>
>>
>> On Mon, Sep 12, 2016 at 10:03 AM, K leo  wrote:
>>
>>> I put the following lines in my C++ code. These are only executed once
>>> near the beginning of the code
>>> and run fine. There are no other julia related statements in the code.
>>> But with the presence of these statements in the code, whenever the code
>>> does some communications requests on the Internet
>>> later on, the code gets a user defined signal 1 and terminates. If I
>>> comment out these lines, then the code
>>> does not get the signal and runs fine. Anyone can shed some light on
>>> what might be going on?
>>>
>>
>> This registers signal handler for SIGUSR1 (which is also used for
>> profiling) and the origin of the signal is probably from elsewhere (e.g.
>> the library you are using).
>>
>>
>>>
>>> jl_init(NULL);
>>>
>>> jl_load(filename);
>>> mod = (jl_value_t*)jl_eval_string(moduleName);
>>> JuliaFunc = jl_get_function((jl_module_t*)mod,"TestFunc");
>>>
>>>
>>>
>>


Re: [julia-users] Does anyone know how jl_calls may have to do with SIGUSR1?

2016-09-12 Thread K leo
Thanks for the reply.
I registered a signal handler before calling these and the code now does 
not terminate and appears to run fine.  Will my handler cause problems to 
the jl_calls?
The documentation mentions SIGUSR2 with the profiler, BTW.

On Monday, September 12, 2016 at 10:06:04 PM UTC+8, Yichao Yu wrote:
>
>
>
> On Mon, Sep 12, 2016 at 10:03 AM, K leo  
> wrote:
>
>> I put the following lines in my C++ code. These are only executed once 
>> near the beginning of the code 
>> and run fine. There are no other julia related statements in the code. 
>> But with the presence of these statements in the code, whenever the code 
>> does some communications requests on the Internet 
>> later on, the code gets a user defined signal 1 and terminates. If I 
>> comment out these lines, then the code 
>> does not get the signal and runs fine. Anyone can shed some light on what 
>> might be going on?
>>
>
> This registers signal handler for SIGUSR1 (which is also used for 
> profiling) and the origin of the signal is probably from elsewhere (e.g. 
> the library you are using).
>  
>
>>
>> jl_init(NULL);
>>
>> jl_load(filename);
>> mod = (jl_value_t*)jl_eval_string(moduleName);
>> JuliaFunc = jl_get_function((jl_module_t*)mod,"TestFunc");
>>
>>
>>
>

Re: [julia-users] default type parameter?

2016-09-12 Thread Mauro

On Mon, 2016-09-12 at 16:07, Yichao Yu  wrote:
> On Mon, Sep 12, 2016 at 9:52 AM, Neal Becker  wrote:
>
>> Taking the following example:
>>
>> type Point{T<:Real}
>>  x::T
>>  y::T
>> end
>>
>> I can construct a Point taking the type "T" from the argument types.
>> Or I can explicitly specify the type
>>
>> Point{Int32}(2,2)
>>
>> But I'd like to be able to specify a default type:
>>
>> type Point{T<:Real=Int32}
>>
>> for example, so that an unqualified
>> Point(2,2)
>>
>> would produce Point{Int32}, while still allowing explicit
>> Point{Int64}(2,2)
>>
>
> Overload `Point` to do exactly that.
>
> `Point(x, y) = Point{Int32}(x, y)`
>
> Note that having types as first class object (different from c++) we can't
> have default type parameters at type level since we need the `Point` in
> `Point` and `Point{Int64}` to mean exactly the same thing.

This actually doesn't work because the default constructor is more
specific for (Int,Int) arguments:

Providing this constructor works:

Point{T<:Real}(x::T,y::T) = Point{Int32}(x,y)

Note, this is quite fiddly as these don't work:

Point(x,y) = Point{Int32}(x,y)
Point{T}(x::T,y::T) = Point{Int32}(x,y)

as the default constructor has the signature Point{T<:Real}(x::T,y::T)
which is more specific than the above two non-working functions.

Depending on your needs, you can also add the above catch-all:
Point(x,y) = Point{Int32}(x,y)

then this works too:

julia> Point(4,5.)
Point{Int32}(4,5)
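
Putting Mauro's pieces together, a hedged end-to-end sketch in 0.5 syntax:

```julia
type Point{T<:Real}
    x::T
    y::T
end

# Specific outer constructor: matches the default constructor's
# signature exactly, so it takes over for same-typed arguments.
Point{T<:Real}(x::T, y::T) = Point{Int32}(x, y)

# Optional catch-all for mixed argument types, per Mauro's last step.
Point(x, y) = Point{Int32}(x, y)

# Point(2, 2) now yields a Point{Int32};
# an explicit Point{Int64}(2, 2) still works as before.
```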


[julia-users] code design question – best idiomatic way to define nested types?

2016-09-12 Thread Michael Borregaard
Hi,

I am defining a set of types to hold scientific data, and trying to get the 
best out of Julia's type system. The types in my example are 'nested' in 
the sense that each type will hold progressively more information and thus 
allow the user to do progressively more. Like this:

type Foo
  bar
  baz
end

type Foobar
  bar  # this
  baz  # and this are identical with Foo
  barbaz
  bazbaz
end


Thus, you can do anything with a Foobar object that you can with a Foo 
object, but not the other way around. The real example is much more 
complex, of course, with levels of nestedness and more fields of complex 
types.
There are several ways I could design this:

   1. I could make all objects be of type Foobar, but make the barbaz and 
   bazbaz fields Nullable. I don't think that is ideal, as I would like to use 
   dispatch to do things with object Foobar that I cannot do with Foo, instead 
   of constantly checking for isnull on specific fields.
   2. I could keep the above design, then define an abstract type 
   AbstractFoo and make both Foo and Foobar inherit from this. Then 
   AbstractFoo can be used to define functions for everything that can be done 
   with the fields that are in Foo objects. The downside is that the types 
   become really big and clunky, and especially that my constructors become 
   big and tricky to write.
   3. I could use composition to let Foobar contain a Foo object. But then 
   I will have to manually dispatch every method defined for Foo to the Foo 
   field of Foobar objects.To make it clear what I mean, here are the three 
   designs:

# Example 1:

type Foobar{T<:Any}
  bar
  baz
  barbaz::Nullable{T}
  bazbaz::Nullable{T}
end


# Example 2:

abstract AbstractFoo


type Foo <: AbstractFoo
  bar
  baz
end

type Foobar <: AbstractFoo
  bar
  baz
  barbaz
  bazbaz
end


# Example 3:

type Foo
  bar
  baz
end

type Foobar
  foo::Foo
  barbaz
  bazbaz
end

I do realize that the easy answer to this is "this depends on your use 
case, there are pros and cons for each method". However, I believe there 
must be a general idiomatic solution, as this issue arises from the design 
of the type system: because you cannot inherit from concrete types in 
Julia, and abstract types (which you can inherit from) cannot have fields. 
In C++, where you can inherit from concrete types, this would have an 
idiomatic solution as:


class Foo {
  protected:
    int bar, baz;
};

class Foobar : public Foo {
  int barbaz, bazbaz;
};


I have been struggling for days with different redesigns of my code and I 
really cannot wrap my head around it. I appreciate the help!
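
For design 3, the "manually dispatch every method" cost looks like this in a minimal 0.5-syntax sketch (`describe` is a hypothetical API function, not from the original post):

```julia
type Foo
    bar
    baz
end

type Foobar
    foo::Foo      # composition: Foobar wraps a Foo
    barbaz
    bazbaz
end

describe(f::Foo) = (f.bar, f.baz)

# Every method defined for Foo needs one forwarding definition like
# this to work on Foobar as well:
describe(fb::Foobar) = describe(fb.foo)
```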


Re: [julia-users] default type parameter?

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 9:52 AM, Neal Becker  wrote:

> Taking the following example:
>
> type Point{T<:Real}
>  x::T
>  y::T
> end
>
> I can construct a Point taking the type "T" from the argument types.
> Or I can explicitly specify the type
>
> Point{Int32}(2,2)
>
> But I'd like to be able to specify a default type:
>
> type Point{T<:Real=Int32}
>
> for example, so that an unqualified
> Point(2,2)
>
> would produce Point{Int32}, while still allowing explicit
> Point{Int64}(2,2)
>

Overload `Point` to do exactly that.

`Point(x, y) = Point{Int32}(x, y)`

Note that having types as first class object (different from c++) we can't
have default type parameters at type level since we need the `Point` in
`Point` and `Point{Int64}` to mean exactly the same thing.


>
> I don't see a way to do that
>
>


Re: [julia-users] Does anyone know how jl_calls may have to do with SIGUSR1?

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 10:03 AM, K leo  wrote:

> I put the following lines in my C++ code. These are only executed once
> near the beginning of the code
> and run fine. There are no other julia related statements in the code.
> But with the presence of these statements in the code, whenever the code
> does some communications requests on the Internet
> later on, the code gets a user defined signal 1 and terminates. If I
> comment out these lines, then the code
> does not get the signal and runs fine. Anyone can shed some light on what
> might be going on?
>

This registers signal handler for SIGUSR1 (which is also used for
profiling) and the origin of the signal is probably from elsewhere (e.g.
the library you are using).


>
> jl_init(NULL);
>
> jl_load(filename);
> mod = (jl_value_t*)jl_eval_string(moduleName);
> JuliaFunc = jl_get_function((jl_module_t*)mod,"TestFunc");
>
>
>


[julia-users] Does anyone know how jl_calls may have to do with SIGUSR1?

2016-09-12 Thread K leo
I put the following lines in my C++ code. These are only executed once near 
the beginning of the code 
and run fine. There are no other julia related statements in the code. 
But with the presence of these statements in the code, whenever the code 
does some communications requests on the Internet 
later on, the code gets a user defined signal 1 and terminates. If I 
comment out these lines, then the code 
does not get the signal and runs fine. Anyone can shed some light on what 
might be going on?

jl_init(NULL);

jl_load(filename);
mod = (jl_value_t*)jl_eval_string(moduleName);
JuliaFunc = jl_get_function((jl_module_t*)mod,"TestFunc");




[julia-users] default type parameter?

2016-09-12 Thread Neal Becker
Taking the following example:

type Point{T<:Real}
 x::T
 y::T
end

I can construct a Point taking the type "T" from the argument types.
Or I can explicitly specify the type

Point{Int32}(2,2)

But I'd like to be able to specify a default type:

type Point{T<:Real=Int32}

for example, so that an unqualified
Point(2,2)

would produce Point{Int32}, while still allowing explicit
Point{Int64}(2,2)

I don't see a way to do that



[julia-users] Re: How to deal with methods redefinition warnings in 0.5?

2016-09-12 Thread K leo
After calling workspace(), there are still a lot of warnings regarding 
methods in packages.

On Monday, September 12, 2016 at 6:57:47 PM UTC+8, felip...@gmail.com wrote:
>
> Try calling workspace() before repeating include.



[julia-users] Julia for Data Science book recently released

2016-09-12 Thread Steve Hoberman


*Julia for Data Science* by Zacharias Voulgaris, PhD, will show you how to 
use the Julia language to solve business-critical data science challenges. 
After covering the importance of Julia to the data science community and 
several essential data science principles, we start with the basics 
including how to install Julia and its powerful libraries. Many examples 
are provided as we illustrate how to leverage each Julia command, dataset, 
and function. Learn more about the book and where to obtain a copy here: 
https://technicspub.com/analytics/.


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Neal Becker
Steven G. Johnson wrote:

> 
> 
> 
> On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>>
>> function(p::pnseq)(n,T=Int64)
>>
>>>
> Note that the real problem with this function declaration is that the type
> T is known only at runtime, not at compile-time. It would be better to
> do
> 
>  function (p::pnseq){T}(n, ::Type{T}=Int64)

Thanks!  This change made a big difference. Now PnSeq is only using a small 
amount of time, as I expected it should.  I prefer this syntax to the 
alternative you suggest below as it seems more logical to me.

> 
> since making the type a parameter like this exposes it as a compile-time
> constant.  Although it would be even more natural to not have to pass the
> type explicitly at all, but rather to get it from the type of n, e.g.
> 
> 
>  function (p::pnseq){T<:Integer}(n::T)
> 
> I have no idea whether this particular thing is performance-critical,
> however.   I also see lots and lots of functions that allocate arrays, as
> opposed to scalar functions that are composed and called on a single
> array, which makes me think that you are thinking in terms of numpy-style
> vectorized code, which doesn't take full advantage of Julia.


> 
> It would be much easier to give performance tips if you could boil it
> down to a single self-contained function that you want to make faster,
> rather than requiring us to read through four or five different submodules
> and
> lots of little one-line functions and types.  (There's nothing wrong with
> having lots of functions and types in Julia, it is just that this forces
> us to comprehend a lot more code in order to make useful suggestions.)

Nyquist and CoefFromFunc are normally only used at startup, so they are 
unimportant to optimize.

The real work is PnSeq, Constellation, and the FIRFilters (which I didn't 
write - they are in DSP.jl).  I agree that the style is to operate on and 
return a large vector.

I guess what you're suggesting is that PnSeq should return a single scalar, 
Constellation should map scalar->scalar.  But FIRFilter, I think, needs to 
stay vector->vector, so it will take advantage of SIMD?



[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread DNF
The code is not particularly long, but it seems like almost every type has 
its own module, which makes it harder to get an overview.

A few of the composite types are defined with abstract types as fields, 
such as FNNyquistPulse. That is not 
optimal: 
http://docs.julialang.org/en/latest/manual/performance-tips/#avoid-fields-with-abstract-type
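The pitfall can be sketched as follows (hypothetical `SlowPulse`/`FastPulse` types for illustration, not the ones in the repo):

```julia
# Abstract field type: the compiler cannot specialize on accesses to
# `pulse`, so every use of it goes through dynamic dispatch.
type SlowPulse
    pulse::AbstractVector{Float64}
end

# Parametric field type: V is concrete for each instantiation, so field
# access and arithmetic compile to fast, specialized code.
type FastPulse{V<:AbstractVector{Float64}}
    pulse::V
end

energy(p) = sum(abs2, p.pulse)

slow = SlowPulse(rand(1000))
fast = FastPulse(rand(1000))   # V inferred as Vector{Float64}
```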

On Monday, September 12, 2016 at 2:16:52 PM UTC+2, Steven G. Johnson wrote:
>
> It would be much easier to give performance tips if you could boil it 
> down to a single self-contained function that you want to make faster, 
> rather than requiring us to read through four or five different submodules 
> and lots of little one-line functions and types.  (There's nothing wrong 
> with having lots of functions and types in Julia, it is just that this 
> forces us to comprehend a lot more code in order to make useful 
> suggestions.)
>


Re: [julia-users] Re: Problem with Plots/Compat on RC4

2016-09-12 Thread Tom Breloff
Josef is really responsive with GR issues... you should open issues when
you find anything: https://github.com/jheinen/GR.jl/issues   Sometimes it's
hard to know if it's a problem with the Plots backend code or with GR
itself, but we'll both see the issues.  GR is pretty mature, but I agree
it's not appropriate for every workflow, and there are still some
unfinished todo items.

I think the original problem might crop up when you try to run Julia from
multiple processes and you have competing compilation, though I'm sure
there's other ways to corrupt your lib directory.

On Mon, Sep 12, 2016 at 7:47 AM, Scott Thomas 
wrote:

> I have not had much luck with the GR backend myself - it looks nice, but
> it has no antialiasing on linux yet and I can't close the plot window once
> it's open. I think it just needs time to mature some more.
>
>
> On Mon, 12 Sep 2016 at 12:31 DNF  wrote:
>
>> Thanks. After deleting .julia/lib and restarting julia I get some
>> warnings before the prompt shows up:
>>
>> WARNING: Method definition cgrad(Any, Any) in module PlotUtils at ~
>> /.julia/PlotUtils/src/color_gradients.jl:82 overwritten at ~/.julia/
>> PlotUtils/src/color_gradients.jl:99.
>> WARNING: Method definition #cgrad(Array{Any, 1}, PlotUtils.#cgrad, Any,
>> Any) in module PlotUtils overwritten.
>> WARNING: could not import Base.lastidx into LegacyStrings
>>
>>
>> Plots with PyPlot backend  now seems to work. But, I am still having some
>> problems with Plots. When I tried using Plots.jl with GR backend, first
>> time it worked. If I close the GTKTerm window that GR draws into, just
>> nothing happens, and occasionally I get a segfault.
>>
>>
>> On Monday, September 12, 2016 at 1:14:30 PM UTC+2, Scott T wrote:
>>>
>>> Oh and just to be a little clearer, that's ~/.julia/lib (the .julia
>>> folder in your home directory) and not the .../julia/lib folder in
>>> Applications.
>>>
>>> On Monday, 12 September 2016 12:12:08 UTC+1, Scott T wrote:

 Try removing .julia/lib (the precompile cache). I had this same issue
 and this appears to have fixed it.

>>>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Steven G. Johnson


On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>
> function(p::pnseq)(n,T=Int64)
>
>>
Note that the real problem with this function declaration is that the type 
T is known only at runtime, not at compile-time. It would be better to 
do

 function (p::pnseq){T}(n, ::Type{T}=Int64)

since making the type a parameter like this exposes it as a compile-time 
constant.  Although it would be even more natural to not have to pass the 
type explicitly at all, but rather to get it from the type of n, e.g.


 function (p::pnseq){T<:Integer}(n::T)

I have no idea whether this particular thing is performance-critical, 
however.   I also see lots and lots of functions that allocate arrays, as 
opposed to scalar functions that are composed and called on a single array, 
which makes me think that you are thinking in terms of numpy-style 
vectorized code, which doesn't take full advantage of Julia.

It would be much easier to give performance tips if you could boil it down 
to a single self-contained function that you want to make faster, rather 
than requiring us to read through four or five different submodules and 
lots of little one-line functions and types.  (There's nothing wrong with 
having lots of functions and types in Julia, it is just that this forces us 
to comprehend a lot more code in order to make useful suggestions.)
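The runtime-versus-compile-time distinction above can be sketched with a pair of illustrative functions (hypothetical names, Julia 0.5 syntax):

```julia
# Runtime type argument: T is an ordinary value, so the compiler cannot
# infer the element type of the result (type-unstable).
make_buf_slow(n, T=Int64) = Array{T}(n)

# Static parameter: T is a compile-time constant, passed via ::Type{T}.
make_buf_fast{T}(n, ::Type{T}=Int64) = Array{T}(n)
```

Running `@code_warntype` on each would show an `Any` return type for the first version and a concrete `Array{Int64,1}` for the second.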


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Neal Becker
Patrick Kofod Mogensen wrote:

> This surprised me as well, where did you find this syntax?
> 
> On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>>
>> I haven't looked very closely at your code, but a brief look reveals that
>> you are defining your functions in a very unusual way. Two examples:
>>
>> function (f::FIRFilter)(x)
>>     return filt(f, x)
>> end
>>
>> function(p::pnseq)(n,T=Int64)
>>     out = Array{T}(n)
>>     for i in eachindex(out)
>>         if p.count < p.width
>>             p.cache = rand(Int64)
>>             p.count = 64
>>         end
>>         out[i] = p.cache & p.mask
>>         p.cache >>= p.width
>>         p.count -= p.width
>>     end
>>     return out
>> end
>>
>> I have never seen this way of defining them before, and I am pretty
>> surprised that it's not a syntax error. Long-form function signatures
>> should be of the form
>> function myfunc{T<:SomeType}(myarg1::T, myarg2)
>> where the type parameter section (in curly bracket) is optional.
>>
>> As I said, I'm surprised it's not a syntax error, but maybe it gets
>> parsed as an anonymous function (just guessing here). If so, and if you
>> are using version 0.4, you can get slow performance.
>>
>> You can read here about the right way to define functions:
>> http://docs.julialang.org/en/stable/manual/functions/
>>
>> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>>
>>> As a first (nontrivial) try at julia, I put together some simple DSP
>>> code,
>>> which represents a
>>> pn generator (random fixed-width integer generator)
>>> constellation mapping
>>> interpolating FIR filter (from DSP.jl)
>>> decimating FIR filter (from DSP.jl)
>>> mean-square error measure
>>>
>>> Source code is here:
>>> https://github.com/nbecker/julia-test
>>>
>>> Profile result is here:
>>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197
>>>
>>> If I understand how to read this profile (not sure I do) it looks like
>>> 1/2
>>> the time is spent in PnSeq.jl, which seems surprising.
>>>
>>> PnSeq.jl calls rand() to get an Int64, caching the result and then
>>> providing
>>> N bits at a time to fill an Array.  It's supposed to be a fast way to
>>> get an
>>> Array of small-width random integers.
>>>
>>> Most of the number crunching should be in the FIR filter functions,
>>> which I
>>> would have expected to use the most time.
>>>
>>> Anyone care to make suggestions on this code, how to make it faster, or
>>> more
>>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've
>>> been using python/numpy/c++ for all my work for years).
>>>
>>>
>>>

I guess I should have said I'm using
Version 0.5.0-rc3+49 (2016-09-08 05:47 UTC)

http://docs.julialang.org/en/latest/manual/methods/#function-like-objects

Coming from my python/c++ background, when an object has only one 
reasonable way to use it, I've made a practice of overloading the function 
call operator for that purpose.  It seems julia-0.5 allows this (and I'm 
happy).
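The function-like-objects feature referenced above can be sketched in a few lines (hypothetical `Scaler` type for illustration):

```julia
# Call overloading ("function-like objects") in Julia 0.5:
immutable Scaler
    factor::Float64
end

# Overload the call operator, so instances behave like functions.
(s::Scaler)(x) = s.factor * x

double = Scaler(2.0)
double(21)   # 42.0
```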



Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Steven G. Johnson


On Monday, September 12, 2016 at 8:07:55 AM UTC-4, Yichao Yu wrote:
>
>
>
> On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> This surprised me as well, where did you find this syntax?
>>
>
> Call overload. 
>

(i.e. it's the new syntax for call overloading in Julia 0.5, what would 
have been Base.call in Julia 0.4.) 


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
patrick.mogen...@gmail.com> wrote:

> This surprised me as well, where did you find this syntax?
>

Call overload.


>
>
> On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>>
>> I haven't looked very closely at your code, but a brief look reveals that
>> you are defining your functions in a very unusual way. Two examples:
>>
>> function (f::FIRFilter)(x)
>>     return filt(f, x)
>> end
>>
>> function(p::pnseq)(n,T=Int64)
>>     out = Array{T}(n)
>>     for i in eachindex(out)
>>         if p.count < p.width
>>             p.cache = rand(Int64)
>>             p.count = 64
>>         end
>>         out[i] = p.cache & p.mask
>>         p.cache >>= p.width
>>         p.count -= p.width
>>     end
>>     return out
>> end
>>
>> I have never seen this way of defining them before, and I am pretty
>> surprised that it's not a syntax error. Long-form function signatures
>> should be of the form
>> function myfunc{T<:SomeType}(myarg1::T, myarg2)
>> where the type parameter section (in curly bracket) is optional.
>>
>> As I said, I'm surprised it's not a syntax error, but maybe it gets
>> parsed as an anonymous function (just guessing here). If so, and if you are
>> using version 0.4, you can get slow performance.
>>
>> You can read here about the right way to define functions:
>> http://docs.julialang.org/en/stable/manual/functions/
>>
>> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>>
>>> As a first (nontrivial) try at julia, I put together some simple DSP
>>> code,
>>> which represents a
>>> pn generator (random fixed-width integer generator)
>>> constellation mapping
>>> interpolating FIR filter (from DSP.jl)
>>> decimating FIR filter (from DSP.jl)
>>> mean-square error measure
>>>
>>> Source code is here:
>>> https://github.com/nbecker/julia-test
>>>
>>> Profile result is here:
>>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197
>>>
>>> If I understand how to read this profile (not sure I do) it looks like
>>> 1/2
>>> the time is spent in PnSeq.jl, which seems surprising.
>>>
>>> PnSeq.jl calls rand() to get an Int64, caching the result and then
>>> providing
>>> N bits at a time to fill an Array.  It's supposed to be a fast way to
>>> get an
>>> Array of small-width random integers.
>>>
>>> Most of the number crunching should be in the FIR filter functions,
>>> which I
>>> would have expected to use the most time.
>>>
>>> Anyone care to make suggestions on this code, how to make it faster, or
>>> more
>>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've
>>> been
>>> using python/numpy/c++ for all my work for years).
>>>
>>>
>>>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Patrick Kofod Mogensen
This surprised me as well, where did you find this syntax?

On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>
> I haven't looked very closely at your code, but a brief look reveals that 
> you are defining your functions in a very unusual way. Two examples:
>
> function (f::FIRFilter)(x)
>     return filt(f, x)
> end
>
> function(p::pnseq)(n,T=Int64)
>     out = Array{T}(n)
>     for i in eachindex(out)
>         if p.count < p.width
>             p.cache = rand(Int64)
>             p.count = 64
>         end
>         out[i] = p.cache & p.mask
>         p.cache >>= p.width
>         p.count -= p.width
>     end
>     return out
> end
>
> I have never seen this way of defining them before, and I am pretty 
> surprised that it's not a syntax error. Long-form function signatures 
> should be of the form
> function myfunc{T<:SomeType}(myarg1::T, myarg2)
> where the type parameter section (in curly bracket) is optional.
>
> As I said, I'm surprised it's not a syntax error, but maybe it gets parsed 
> as an anonymous function (just guessing here). If so, and if you are using 
> version 0.4, you can get slow performance.
>
> You can read here about the right way to define functions: 
> http://docs.julialang.org/en/stable/manual/functions/
>
> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>
>> As a first (nontrivial) try at julia, I put together some simple DSP 
>> code, 
>> which represents a 
>> pn generator (random fixed-width integer generator) 
>> constellation mapping 
>> interpolating FIR filter (from DSP.jl) 
>> decimating FIR filter (from DSP.jl) 
>> mean-square error measure 
>>
>> Source code is here: 
>> https://github.com/nbecker/julia-test 
>>
>> Profile result is here: 
>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197 
>>
>> If I understand how to read this profile (not sure I do) it looks like 
>> 1/2 
>> the time is spent in PnSeq.jl, which seems surprising.   
>>
>> PnSeq.jl calls rand() to get an Int64, caching the result and then 
>> providing 
>> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
>> an 
>> Array of small-width random integers. 
>>
>> Most of the number crunching should be in the FIR filter functions, which 
>> I 
>> would have expected to use the most time. 
>>
>> Anyone care to make suggestions on this code, how to make it faster, or 
>> more 
>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've been 
>> using python/numpy/c++ for all my work for years). 
>>
>>
>>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread DNF
I haven't looked very closely at your code, but a brief look reveals that 
you are defining your functions in a very unusual way. Two examples:

function (f::FIRFilter)(x)
    return filt(f, x)
end

function(p::pnseq)(n,T=Int64)
    out = Array{T}(n)
    for i in eachindex(out)
        if p.count < p.width
            p.cache = rand(Int64)
            p.count = 64
        end
        out[i] = p.cache & p.mask
        p.cache >>= p.width
        p.count -= p.width
    end
    return out
end

I have never seen this way of defining them before, and I am pretty 
surprised that it's not a syntax error. Long-form function signatures 
should be of the form
function myfunc{T<:SomeType}(myarg1::T, myarg2)
where the type parameter section (in curly bracket) is optional.

As I said, I'm surprised it's not a syntax error, but maybe it gets parsed 
as an anonymous function (just guessing here). If so, and if you are using 
version 0.4, you can get slow performance.

You can read here about the right way to define functions: 
http://docs.julialang.org/en/stable/manual/functions/
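For comparison, a conventional long-form definition with a type-parameter section looks like this (generic illustration with a hypothetical `scaled_sum` function, Julia 0.5 syntax):

```julia
# Ordinary named function; T is constrained in the curly-bracket section.
function scaled_sum{T<:Real}(xs::Vector{T}, factor)
    s = zero(T)          # accumulator of the element type
    for x in xs
        s += x
    end
    return factor * s
end

scaled_sum([1, 2, 3], 2)   # 12
```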

On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>
> As a first (nontrivial) try at julia, I put together some simple DSP code, 
> which represents a 
> pn generator (random fixed-width integer generator) 
> constellation mapping 
> interpolating FIR filter (from DSP.jl) 
> decimating FIR filter (from DSP.jl) 
> mean-square error measure 
>
> Source code is here: 
> https://github.com/nbecker/julia-test 
>
> Profile result is here: 
> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197 
>
> If I understand how to read this profile (not sure I do) it looks like 1/2 
> the time is spent in PnSeq.jl, which seems surprising.   
>
> PnSeq.jl calls rand() to get an Int64, caching the result and then 
> providing 
> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
> an 
> Array of small-width random integers. 
>
> Most of the number crunching should be in the FIR filter functions, which 
> I 
> would have expected to use the most time. 
>
> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've been 
> using python/numpy/c++ for all my work for years). 
>
>
>

Re: [julia-users] Re: Problem with Plots/Compat on RC4

2016-09-12 Thread Scott Thomas
I have not had much luck with the GR backend myself - it looks nice, but it
has no antialiasing on linux yet and I can't close the plot window once
it's open. I think it just needs time to mature some more.


On Mon, 12 Sep 2016 at 12:31 DNF  wrote:

> Thanks. After deleting .julia/lib and restarting julia I get some warnings
> before the prompt shows up:
>
> WARNING: Method definition cgrad(Any, Any) in module PlotUtils at ~
> /.julia/PlotUtils/src/color_gradients.jl:82 overwritten at ~/.julia/
> PlotUtils/src/color_gradients.jl:99.
> WARNING: Method definition #cgrad(Array{Any, 1}, PlotUtils.#cgrad, Any,
> Any) in module PlotUtils overwritten.
> WARNING: could not import Base.lastidx into LegacyStrings
>
>
> Plots with PyPlot backend  now seems to work. But, I am still having some
> problems with Plots. When I tried using Plots.jl with GR backend, first
> time it worked. If I close the GTKTerm window that GR draws into, just
> nothing happens, and occasionally I get a segfault.
>
>
> On Monday, September 12, 2016 at 1:14:30 PM UTC+2, Scott T wrote:
>>
>> Oh and just to be a little clearer, that's ~/.julia/lib (the .julia
>> folder in your home directory) and not the .../julia/lib folder in
>> Applications.
>>
>> On Monday, 12 September 2016 12:12:08 UTC+1, Scott T wrote:
>>>
>>> Try removing .julia/lib (the precompile cache). I had this same issue
>>> and this appears to have fixed it.
>>>
>>


[julia-users] 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Neal Becker
As a first (nontrivial) try at julia, I put together some simple DSP code, 
which represents a
pn generator (random fixed-width integer generator)
constellation mapping
interpolating FIR filter (from DSP.jl)
decimating FIR filter (from DSP.jl)
mean-square error measure

Source code is here:
https://github.com/nbecker/julia-test

Profile result is here:
https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197

If I understand how to read this profile (not sure I do) it looks like 1/2 
the time is spent in PnSeq.jl, which seems surprising.  

PnSeq.jl calls rand() to get an Int64, caching the result and then providing 
N bits at a time to fill an Array.  It's supposed to be a fast way to get an 
Array of small-width random integers.

Most of the number crunching should be in the FIR filter functions, which I 
would have expected to use the most time.

Anyone care to make suggestions on this code, how to make it faster, or more 
idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've been 
using python/numpy/c++ for all my work for years).




[julia-users] Re: Problem with Plots/Compat on RC4

2016-09-12 Thread DNF
Thanks. After deleting .julia/lib and restarting julia I get some warnings 
before the prompt shows up:

WARNING: Method definition cgrad(Any, Any) in module PlotUtils at ~/.julia/
PlotUtils/src/color_gradients.jl:82 overwritten at ~/.julia/PlotUtils/src/
color_gradients.jl:99.
WARNING: Method definition #cgrad(Array{Any, 1}, PlotUtils.#cgrad, Any, 
Any) in module PlotUtils overwritten. 
WARNING: could not import Base.lastidx into LegacyStrings


Plots with PyPlot backend  now seems to work. But, I am still having some 
problems with Plots. When I tried using Plots.jl with GR backend, first 
time it worked. If I close the GTKTerm window that GR draws into, just 
nothing happens, and occasionally I get a segfault.


On Monday, September 12, 2016 at 1:14:30 PM UTC+2, Scott T wrote:
>
> Oh and just to be a little clearer, that's ~/.julia/lib (the .julia folder 
> in your home directory) and not the .../julia/lib folder in Applications.
>
> On Monday, 12 September 2016 12:12:08 UTC+1, Scott T wrote:
>>
>> Try removing .julia/lib (the precompile cache). I had this same issue and 
>> this appears to have fixed it. 
>>
>

[julia-users] Re: Problem with Plots/Compat on RC4

2016-09-12 Thread Scott T
Oh and just to be a little clearer, that's ~/.julia/lib (the .julia folder 
in your home directory) and not the .../julia/lib folder in Applications.

On Monday, 12 September 2016 12:12:08 UTC+1, Scott T wrote:
>
> Try removing .julia/lib (the precompile cache). I had this same issue and 
> this appears to have fixed it. 
>
> On Monday, 12 September 2016 12:06:48 UTC+1, DNF wrote:
>>
>> After updating to RC4 (binary download) plotting completely stopped 
>> working for me.
>>
>> julia> plot(rand(5)) 
>> [Plots.jl] Initializing backend: gr 
>> INFO: Precompiling module GR. 
>> WARNING: Module Compat with uuid 169833921923513 is missing from the 
>> cache. 
>> This may mean module Compat does not support precompilation but is 
>> imported by a module that does. 
>> ERROR: LoadError: Declaring __precompile__(false) is not allowed in 
>> files that are being precompiled. 
>>  in require(::Symbol) at ./loading.jl:385 
>>  in require(::Symbol) at /Applications/Julia-0.5.app/Contents/Resources/
>> julia/lib/julia/sys.dylib:? 
>>  in include_from_node1(::String) at ./loading.jl:488 
>>  in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/
>> Resources/julia/lib/julia/sys.dylib:? 
>>  in macro expansion; at ./none:2 [inlined] 
>>  in anonymous at ./:? 
>>  in eval(::Module, ::Any) at ./boot.jl:234 
>>  in eval(::Module, ::Any) at /Applications/Julia-0.5.app/Contents/
>> Resources/julia/lib/julia/sys.dylib:? 
>>  in process_options(::Base.JLOptions) at ./client.jl:239 
>>  in _start() at ./client.jl:318 
>>  in _start() at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/
>> julia/sys.dylib:? 
>> while loading ~/.julia/GR/src/GR.jl, in expression starting on line 7 
>> WARNING: Couldn't initialize gr.  (might need to install it?) 
>>
>> --
>>
>>
>> I have un- and reinstalled Plots.jl, GR.jl, PyPlot.jl and Compat.jl to no 
>> avail.
>>
>>
>> I would file an issue about it, but firstly, I don't know whether it 
>> should go on Julia, Plots.jl or Compat.jl. And, secondly, I find it weird 
>> that no-one else has experienced this pretty showstopping bug and filed an 
>> issue already, so I suspect it might be a problem only with my installation.
>>
>>
>> Can anyone help me out, I'm totally stuck right now.
>>
>

[julia-users] Re: Problem with Plots/Compat on RC4

2016-09-12 Thread Scott T
Try removing .julia/lib (the precompile cache). I had this same issue and 
this appears to have fixed it. 

On Monday, 12 September 2016 12:06:48 UTC+1, DNF wrote:
>
> After updating to RC4 (binary download) plotting completely stopped 
> working for me.
>
> julia> plot(rand(5)) 
> [Plots.jl] Initializing backend: gr 
> INFO: Precompiling module GR. 
> WARNING: Module Compat with uuid 169833921923513 is missing from the cache
> . 
> This may mean module Compat does not support precompilation but is 
> imported by a module that does. 
> ERROR: LoadError: Declaring __precompile__(false) is not allowed in files 
> that are being precompiled. 
>  in require(::Symbol) at ./loading.jl:385 
>  in require(::Symbol) at /Applications/Julia-0.5.app/Contents/Resources/
> julia/lib/julia/sys.dylib:? 
>  in include_from_node1(::String) at ./loading.jl:488 
>  in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/
> Resources/julia/lib/julia/sys.dylib:? 
>  in macro expansion; at ./none:2 [inlined] 
>  in anonymous at ./:? 
>  in eval(::Module, ::Any) at ./boot.jl:234 
>  in eval(::Module, ::Any) at /Applications/Julia-0.5.app/Contents/
> Resources/julia/lib/julia/sys.dylib:? 
>  in process_options(::Base.JLOptions) at ./client.jl:239 
>  in _start() at ./client.jl:318 
>  in _start() at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/
> julia/sys.dylib:? 
> while loading ~/.julia/GR/src/GR.jl, in expression starting on line 7 
> WARNING: Couldn't initialize gr.  (might need to install it?) 
>
> --
>
>
> I have un- and reinstalled Plots.jl, GR.jl, PyPlot.jl and Compat.jl to no 
> avail.
>
>
> I would file an issue about it, but firstly, I don't know whether it 
> should go on Julia, Plots.jl or Compat.jl. And, secondly, I find it weird 
> that no-one else has experienced this pretty showstopping bug and filed an 
> issue already, so I suspect it might be a problem only with my installation.
>
>
> Can anyone help me out, I'm totally stuck right now.
>


[julia-users] Problem with Plots/Compat on RC4

2016-09-12 Thread DNF
After updating to RC4 (binary download) plotting completely stopped working 
for me.

julia> plot(rand(5)) 
[Plots.jl] Initializing backend: gr 
INFO: Precompiling module GR. 
WARNING: Module Compat with uuid 169833921923513 is missing from the cache. 
This may mean module Compat does not support precompilation but is imported 
by a module that does. 
ERROR: LoadError: Declaring __precompile__(false) is not allowed in files 
that are being precompiled. 
 in require(::Symbol) at ./loading.jl:385 
 in require(::Symbol) at /Applications/Julia-0.5.app/Contents/Resources/
julia/lib/julia/sys.dylib:? 
 in include_from_node1(::String) at ./loading.jl:488 
 in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/
Resources/julia/lib/julia/sys.dylib:? 
 in macro expansion; at ./none:2 [inlined] 
 in anonymous at ./:? 
 in eval(::Module, ::Any) at ./boot.jl:234 
 in eval(::Module, ::Any) at /Applications/Julia-0.5.app/Contents/Resources/
julia/lib/julia/sys.dylib:? 
 in process_options(::Base.JLOptions) at ./client.jl:239 
 in _start() at ./client.jl:318 
 in _start() at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/
julia/sys.dylib:? 
while loading ~/.julia/GR/src/GR.jl, in expression starting on line 7 
WARNING: Couldn't initialize gr.  (might need to install it?) 
--


I have un- and reinstalled Plots.jl, GR.jl, PyPlot.jl and Compat.jl to no 
avail.


I would file an issue about it, but firstly, I don't know whether it should 
go on Julia, Plots.jl or Compat.jl. And, secondly, I find it weird that 
no-one else has experienced this pretty showstopping bug and filed an issue 
already, so I suspect it might be a problem only with my installation.


Can anyone help me out, I'm totally stuck right now.


[julia-users] How to deal with methods redefinition warnings in 0.5?

2016-09-12 Thread felipenoris
Try calling workspace() before repeating include.
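The suggested workflow, as a minimal sketch (the file name is hypothetical):

```julia
# Reset Main to a fresh module, discarding previous definitions, then
# re-include -- this avoids method-redefinition warnings in 0.5.
workspace()
include("myscript.jl")
```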

[julia-users] Re: Julia Low Pass Filter much slower than identical code in Python ??

2016-09-12 Thread MLicer
Indeed it does! I thought JIT compilation takes place prior to execution of 
the script. Thanks so much, this makes sense now!

Output:
first call:   0.804573 seconds (1.18 M allocations: 53.183 MB, 1.43% gc 
time)
repeated call:  0.000472 seconds (217 allocations: 402.938 KB)

Thanks again,

Cheers!

On Monday, September 12, 2016 at 12:48:30 PM UTC+2, randm...@gmail.com 
wrote:
>
> The Julia code takes 0.000535 seconds for me on the second run -- during 
> the first run, Julia has to compile the method you're timing. Have a look 
> at the performance tips for a more in-depth explanation.
>  
> Am Montag, 12. September 2016 11:53:01 UTC+2 schrieb MLicer:
>>
>> Dear all,
>>
>> i've written a low-pass filter in Julia and Python and the code in Julia 
>> seems to be much slower (*0.800 sec in Julia vs 0.000 sec in Python*). I 
>> *must* be coding inefficiently, can anyone comment on the two codes 
>> below?
>>
>> *Julia:*
>>
>>
>> 
>> using PyPlot, DSP
>>
>> # generate data:
>> x = linspace(0,30,1e4)
>> sin_noise(arr) = sin(arr) + rand(length(arr))
>>
>> # create filter:
>> designmethod = Butterworth(5)
>> ff = digitalfilter(Lowpass(0.02),designmethod)
>> @time yl = filtfilt(ff, sin_noise(x))
>>
>> Python:
>>
>> from scipy import signal
>> import numpy as np
>> import cProfile, pstats
>>
>> def sin_noise(arr):
>>     return np.sin(arr) + np.random.rand(len(arr))
>>
>> def filterSignal(b,a,x):
>>     return signal.filtfilt(b, a, x, axis=-1)
>>
>> def main():
>>     # generate data:
>>     x = np.linspace(0,30,1e4)
>>     y = sin_noise(x)
>>     b, a = signal.butter(5, 0.02, "lowpass", analog=False)
>>     ff = filterSignal(b,a,y)
>>
>>     cProfile.runctx('filterSignal(b,a,y)',globals(),{'b':b,'a':a,'y':y},
>>                     filename='profileStatistics')
>>
>>     p = pstats.Stats('profileStatistics')
>>     printFirstN = 5
>>     p.sort_stats('cumtime').print_stats(printFirstN)
>>
>> if __name__=="__main__":
>>     main()
>>
>>
>> Thanks very much for any replies!
>>
>

[julia-users] Re: Julia Low Pass Filter much slower than identical code in Python ??

2016-09-12 Thread randmstring
The Julia code takes 0.000535 seconds for me on the second run -- during 
the first run, Julia has to compile the method you're timing. Have a look 
at the performance tips for a more in-depth explanation.
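The warm-up pattern can be sketched as follows, reusing the original post's filter setup (Julia 0.5 syntax):

```julia
using DSP

# Same setup as the original post: noisy sine plus a Butterworth low-pass.
x = linspace(0, 30, 10^4)
y = sin(x) + rand(length(x))
ff = digitalfilter(Lowpass(0.02), Butterworth(5))

@time filtfilt(ff, y)   # first call: includes JIT compilation
@time filtfilt(ff, y)   # second call: measures just the filtering
```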
 
Am Montag, 12. September 2016 11:53:01 UTC+2 schrieb MLicer:
>
> Dear all,
>
> i've written a low-pass filter in Julia and Python and the code in Julia 
> seems to be much slower (*0.800 sec in Julia vs 0.000 sec in Python*). I 
> *must* be coding inefficiently, can anyone comment on the two codes 
> below?
>
> *Julia:*
>
>
> 
> using PyPlot, DSP
>
> # generate data:
> x = linspace(0,30,1e4)
> sin_noise(arr) = sin(arr) + rand(length(arr))
>
> # create filter:
> designmethod = Butterworth(5)
> ff = digitalfilter(Lowpass(0.02),designmethod)
> @time yl = filtfilt(ff, sin_noise(x))
>
> Python:
>
> from scipy import signal
> import numpy as np
> import cProfile, pstats
>
> def sin_noise(arr):
>     return np.sin(arr) + np.random.rand(len(arr))
>
> def filterSignal(b,a,x):
>     return signal.filtfilt(b, a, x, axis=-1)
>
> def main():
>     # generate data:
>     x = np.linspace(0,30,1e4)
>     y = sin_noise(x)
>     b, a = signal.butter(5, 0.02, "lowpass", analog=False)
>     ff = filterSignal(b,a,y)
>
>     cProfile.runctx('filterSignal(b,a,y)',globals(),{'b':b,'a':a,'y':y},
>                     filename='profileStatistics')
>
>     p = pstats.Stats('profileStatistics')
>     printFirstN = 5
>     p.sort_stats('cumtime').print_stats(printFirstN)
>
> if __name__=="__main__":
>     main()
>
>
> Thanks very much for any replies!
>
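The zero-phase behaviour that makes `filtfilt` attractive here comes from running the filter forward and then backward over the signal, which cancels the phase delay of a single pass. A minimal, dependency-free sketch of that idea (this is not scipy's or DSP.jl's actual implementation, and it uses a hypothetical single-pole low-pass in place of the Butterworth design from the thread):

```python
import math
import random

def lowpass(x, alpha):
    # single-pole IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]
    y, prev = [], 0.0
    for v in x:
        prev = alpha * v + (1 - alpha) * prev
        y.append(prev)
    return y

def filtfilt_like(x, alpha):
    # forward pass, then a second pass over the reversed output,
    # reversed back at the end -- the forward-backward trick behind filtfilt
    forward = lowpass(x, alpha)
    return lowpass(forward[::-1], alpha)[::-1]

# noisy sine comparable to the sin_noise signal in the thread
random.seed(0)
x = [math.sin(2 * math.pi * 0.01 * n) + random.uniform(-0.5, 0.5)
     for n in range(1000)]
y = filtfilt_like(x, alpha=0.1)
```

The output has the same length as the input and is much smoother; for production use the scipy/DSP.jl versions from the thread are the right tools, since they handle filter design and edge padding properly.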


Re: [julia-users] Julia on Research Computing Podcast RCE

2016-09-12 Thread Mauro
RCE is an excellent podcast; it would be cool to hear some core devs
talking there.

On Sun, 2016-09-11 at 02:29, Brock Palen  wrote:
> I am one half of the HPC/Research Computing podcast http://www.rce-cast.com/
>
> We would like to feature Julia on the show.  This takes a developer or two
> and is a friendly interview, takes about an hour over the phone or skype
> and aims to educate our community.
>
> Feel free to contact me off list.
>
> Thank you !
>
> Brock Palen


[julia-users] Re: PyPlot x(y)tick label fontsize

2016-09-12 Thread MLicer
Excellent! Thanks so much!

On Friday, September 9, 2016 at 5:45:26 PM UTC+2, Ralph Smith wrote:
>
> ax[:tick_params]("both",labelsize=24)
>
> See http://matplotlib.org/api/pyplot_api.html
> for related functions and arguments.
>


Re: [julia-users] Re: There is very little overhead calling Julia from C++

2016-09-12 Thread Bart Janssens
The jl_call test is 100 times smaller, so the timing there has to be
multiplied by 100, resulting in 2.5 s for jl_call vs 0.1 s for ccall. I
reduced the size to avoid having the test suite take too long.

Op ma 12 sep. 2016 04:56 schreef K leo :

> Sorry, how to tell from these numbers that using jl_call from C++ is about
> 25 times slower than using ccall from Julia?
>
> On Monday, September 12, 2016 at 5:33:02 AM UTC+8, Bart Janssens wrote:
>
>> On Fri, Sep 9, 2016 at 1:40 AM Steven G. Johnson 
>> wrote:
>>
>>> Except that in your example code, you aren't calling the Julia code
>>> through a raw C function pointer.   You are calling it through jl_call0,
>>> which *does* have a fair amount of overhead (which you aren't seeing
>>> because the function execution is expensive enough to hide the call
>>> overhead).
>>>
>>>
>> To confirm this, I added it to my worst-case benchmark in CxxWrap.jl.
>> Using jl_call from C++ is about 25 times slower than using ccall from
>> Julia. The function here just divides a number by 2, so it needs boxing and
>> unboxing. Test code here:
>>
>> https://github.com/barche/CxxWrap.jl/blob/master/deps/src/examples/functions.cpp#L103-L111
>>
>> Timings:
>> Pure Julia test:
>>   0.061723 seconds (4 allocations: 160 bytes)
>> ccall test:
>>   0.092434 seconds (4 allocations: 160 bytes)
>> CxxWrap.jl test:
>>   0.139052 seconds (4 allocations: 160 bytes)
>> Pure C++:
>>   0.057972 seconds (4 allocations: 160 bytes)
>> jl_call inside C++ loop (array is 100 times smaller than other tests):
>>   0.025484 seconds (1.00 M allocations: 15.259 MB, 4.84% gc time)
>>
>> That said, if the test you did is representative of your real-world
>> problem, it should be fine.
>>
>> Cheers,
>>
>> Bart
>>
>>
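The scaling in Bart's first paragraph can be checked directly from the timings quoted above: the jl_call loop ran over an array 100 times smaller, so its time is multiplied by 100 before comparing against the full-size ccall test. A quick arithmetic sketch:

```python
# timings quoted in the message above (seconds)
jl_call_small = 0.025484   # jl_call inside C++ loop, array 100x smaller
ccall_time = 0.092434      # ccall test, full-size array

# scale the jl_call timing up to the full-size workload
jl_call_scaled = jl_call_small * 100   # roughly 2.5 s

ratio = jl_call_scaled / ccall_time
print(f"jl_call is ~{ratio:.0f}x slower than ccall")
```

This lands in the high twenties, consistent with the "about 25 times slower" figure in the thread.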