[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-14 Thread Steven G. Johnson


On Wednesday, September 14, 2016 at 7:12:31 AM UTC-4, Neal Becker wrote:
>
> I'd expect it to be called with a large array as input, so hopefully it's 
> performant enough as is, leaving the type of the argument determined at 
> runtime: 


My issue is more with how it is used.   The whole constellation type seems 
oriented towards the "vectorized" style of constantly constructing new 
arrays  [c.C[x+1] for x in xsyms]  (aside: it seems like OOP overkill to 
define a whole type etcetera just to implement this simple one-line 
function), whereas you can usually get better performance by combining this 
kind of trivial array construction with the subsequent computation on the 
data.


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-14 Thread Neal Becker
Steven G. Johnson wrote:

...
> 
> Note that you have the same problem in several places, e.g. in
> Constellation.jl.
> 
> (I don't really understand what that file is doing, but it seems to be
> constructing lots of little arrays that would be better off constructed
> implicitly as part of other data-processing operations.)
> 
> There are lots of micro-optimizations I can spot in Constellation.jl, e.g.
> let k = 2*pi/N; [cis(k*x) for x in 0:N-1]; end is almost 2x faster than
> [exp(im*2*pi/N*x) for x in 0:N-1] on my machine, but as usual one would
> expect that the biggest benefits would arise by re-arranging your code to
> avoid multiple passes over multiple arrays and instead do a single pass
> over one (or zero) arrays.

Constellation is just a mapping from an Array{Integer} -> 
Array{Complex{Float64}}.

I think I would prefer it to accept an Array{any Integer type}.  It's 
normally constructed only once, so I don't care about performance of the 
construction.

I'd expect it to be called with a large array as input, so hopefully it's 
performant enough as is, leaving the type of the argument determined at 
runtime:

Basically, it is just this:

type constellation{flt}
C::Array{Complex{flt},1}
end

function (c::constellation)(xsyms)
return [c.C[x+1] for x in xsyms]
end

I guess it would be better to write

function (c::constellation){T}(xsyms::Array{T})
...

?
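For what it's worth, here is a minimal, self-contained sketch of that callable-type pattern in present-day syntax (`struct` replaced 0.5's `type`; the names mirror the snippet above, capitalized per convention):

```julia
# Minimal sketch of the constellation lookup as a callable type.
# The call overload accepts any integer-indexable collection of symbols.
struct Constellation{F<:AbstractFloat}
    C::Vector{Complex{F}}
end

# Map symbol indices (0-based) to constellation points.
(c::Constellation)(xsyms) = [c.C[x + 1] for x in xsyms]

qpsk = Constellation([1.0 + 0im, 0 + 1im, -1.0 + 0im, 0 - 1im])
pts = qpsk([0, 1, 2, 3])   # maps symbol indices to complex points
```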



Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Stefan Karpinski
On Tue, Sep 13, 2016 at 1:23 PM, Neal Becker  wrote:

>
> Thanks for the ideas.  Here, though, the generated values need to be
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
> So the output array itself would be Array{Int64} for example, but the
> values
> in the array are [0 ... 7].  Do you know a better way to do this?


Is this the kind of thing you're looking for?

julia> @time rand(0x0:0x7, 10^5);
  0.001795 seconds (10 allocations: 97.953 KB)


Produces a 10^5-element array of random UInt8 values between 0 and 7.
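Spelled out (nothing here beyond what the timing above shows; the element type of the result follows the element type of the range):

```julia
# rand over a range samples uniformly from it without first
# materializing the range as an array.
a = rand(0x0:0x7, 10^5)   # Vector{UInt8}, values 0..7
b = rand(0:7, 10^5)       # Vector{Int}, same value range
```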


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
A range is a type that essentially acts as an array, but really is only 
three numbers: start, stop, and step. I.e. a=0:2^N would make a 
range where a[1] = 0, a[i] = i-1, and a[end] = 2^N. I haven't looked at the 
whole code, but if you're using rand([0...2^N]), then each time you do that 
it has to make the array, whereas rand(1:2^N) or things like that using 
ranges won't. So if you find yourself making arrays like [0...2^N], they 
should probably be ranges.
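To illustrate the point (a sketch, not from the thread's code):

```julia
# A range stores just start, step, and stop, yet indexes like an array.
a = 0:2^3                 # 0, 1, ..., 8
@assert a[1] == 0 && a[end] == 8

big = 0:10^9              # a billion elements, constant storage
@assert big[500] == 499   # a[i] = i - 1 for a unit-step range from 0

# collect materializes a range into an Array only when you really need one:
@assert collect(0:4) == [0, 1, 2, 3, 4]
```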

On Tuesday, September 13, 2016 at 10:43:39 AM UTC-7, Neal Becker wrote:
>
> I'm not following you here. IIUC a range is a single scalar value?  Are 
> you 
> suggesting I want an Array{range}? 
>
> Chris Rackauckas wrote: 
>
> > Do you need to use an array? That sounds better suited for a range. 
> > 
> > On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote: 
> >> 
> >> Steven G. Johnson wrote: 
> >> 
> >> > 
> >> > 
> >> > 
> >> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> >> 
> >> >> PnSeq.jl calls rand() to get an Int64, caching the result and then 
> >> >> providing 
> >> >> N bits at a time to fill an Array.  It's supposed to be a fast way 
> to 
> >> get 
> >> >> an 
> >> >> Array of small-width random integers. 
> >> >> 
> >> > 
> >> > rand(T, n) already does this for small integer types T.  (In fact, it 
> >> > generates 128 random bits at a time.)  See base/random.jl 
> >> > 
> >> <https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
> >> > for how it does it. 
> >> > 
> >> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> >> > pnseq(16)(10^6, UInt16). 
> >> 
> >> Thanks for the ideas.  Here, though, the generated values need to be 
> >> Uniform([0...2^N]), where N could be any number.  For example 
> [0...2^3]. 
> >> So the output array itself would be Array{Int64} for example, but the 
> >> values 
> >> in the array are [0 ... 7].  Do you know a better way to do this? 
> >> 
> >> > 
> >> > (In a performance-critical situation where you are calling this lots 
> of 
> >> > times to generate random arrays, I would pre-allocate the array A and 
> >> call 
> >> > rand!(A) instead to fill it with random numbers in-place.) 
> >> 
> >> 
> >> 
>
>
>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Neal Becker
I'm not following you here. IIUC a range is a single scalar value?  Are you 
suggesting I want an Array{range}?

Chris Rackauckas wrote:

> Do you need to use an array? That sounds better suited for a range.
> 
> On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote:
>>
>> Steven G. Johnson wrote:
>>
>> > 
>> > 
>> > 
>> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>> >> 
>> >> PnSeq.jl calls rand() to get an Int64, caching the result and then
>> >> providing
>> >> N bits at a time to fill an Array.  It's supposed to be a fast way to
>> get
>> >> an
>> >> Array of small-width random integers.
>> >> 
>> > 
>> > rand(T, n) already does this for small integer types T.  (In fact, it
>> > generates 128 random bits at a time.)  See base/random.jl
>> > 
>> <https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>> > for how it does it.
>> > 
>> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than
>> > pnseq(16)(10^6, UInt16).
>>
>> Thanks for the ideas.  Here, though, the generated values need to be
>> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
>> So the output array itself would be Array{Int64} for example, but the
>> values
>> in the array are [0 ... 7].  Do you know a better way to do this?
>>
>> > 
>> > (In a performance-critical situation where you are calling this lots of
>> > times to generate random arrays, I would pre-allocate the array A and
>> call
>> > rand!(A) instead to fill it with random numbers in-place.)
>>
>>
>>




[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
Do you need to use an array? That sounds better suited for a range.

On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote:
>
> Steven G. Johnson wrote: 
>
> > 
> > 
> > 
> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> 
> >> PnSeq.jl calls rand() to get an Int64, caching the result and then 
> >> providing 
> >> N bits at a time to fill an Array.  It's supposed to be a fast way to 
> get 
> >> an 
> >> Array of small-width random integers. 
> >> 
> > 
> > rand(T, n) already does this for small integer types T.  (In fact, it 
> > generates 128 random bits at a time.)  See base/random.jl 
> > 
> <https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
> > for how it does it. 
> > 
> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> > pnseq(16)(10^6, UInt16). 
>
> Thanks for the ideas.  Here, though, the generated values need to be 
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3]. 
> So the output array itself would be Array{Int64} for example, but the 
> values 
> in the array are [0 ... 7].  Do you know a better way to do this? 
>
> > 
> > (In a performance-critical situation where you are calling this lots of 
> > times to generate random arrays, I would pre-allocate the array A and 
> call 
> > rand!(A) instead to fill it with random numbers in-place.) 
>
>
>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Neal Becker
Steven G. Johnson wrote:

> 
> 
> 
> On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>>
>> PnSeq.jl calls rand() to get an Int64, caching the result and then
>> providing
>> N bits at a time to fill an Array.  It's supposed to be a fast way to get
>> an
>> Array of small-width random integers.
>>
> 
> rand(T, n) already does this for small integer types T.  (In fact, it
> generates 128 random bits at a time.)  See base/random.jl
> <https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
> for how it does it.
> 
> In a quick test, rand(UInt16, 10^6) was more than 6x faster than
> pnseq(16)(10^6, UInt16).

Thanks for the ideas.  Here, though, the generated values need to be
Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
So the output array itself would be Array{Int64} for example, but the values 
in the array are [0 ... 7].  Do you know a better way to do this?

> 
> (In a performance-critical situation where you are calling this lots of
> times to generate random arrays, I would pre-allocate the array A and call
> rand!(A) instead to fill it with random numbers in-place.)




[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Steven G. Johnson


On Monday, September 12, 2016 at 8:16:52 AM UTC-4, Steven G. Johnson wrote:
>
>
>
> On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>>
>> function(p::pnseq)(n,T=Int64)
>>
>>>
> Note that the real problem with this function declaration is that the type 
> T is known only at runtime, not at compile-time. It would be better to 
> do
>
>  function (p::pnseq){T}(n, ::Type{T}=Int64)
>

Note that you have the same problem in several places, e.g. in 
Constellation.jl.

(I don't really understand what that file is doing, but it seems to be 
constructing lots of little arrays that would be better off constructed 
implicitly as part of other data-processing operations.) 

There are lots of micro-optimizations I can spot in Constellation.jl, e.g. 
let k = 2*pi/N; [cis(k*x) for x in 0:N-1]; end is almost 2x faster than 
[exp(im*2*pi/N*x) for x in 0:N-1] on my machine, but as usual one would 
expect that the biggest benefits would arise by re-arranging your code to 
avoid multiple passes over multiple arrays and instead do a single pass 
over one (or zero) arrays.
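Spelled out, the two comprehensions compared above are (equivalent up to floating-point rounding):

```julia
N = 8
slow = [exp(im*2*pi/N*x) for x in 0:N-1]   # recomputes 2*pi/N and a complex exp each time
fast = let k = 2*pi/N                      # hoist the constant;
    [cis(k*x) for x in 0:N-1]              # cis(x) == exp(im*x) for a real argument
end
```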

(I also have no idea how well-optimized the DSP.jl FIRFilters routines are; 
maybe Tim Holy knows, since he's been optimizing filters for Images.jl.)


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Steven G. Johnson


On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>
> PnSeq.jl calls rand() to get an Int64, caching the result and then 
> providing 
> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
> an 
> Array of small-width random integers. 
>

rand(T, n) already does this for small integer types T.  (In fact, it 
generates 128 random bits at a time.)  See base/random.jl 

<https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
for how it does it.

In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
pnseq(16)(10^6, UInt16).

(In a performance-critical situation where you are calling this lots of 
times to generate random arrays, I would pre-allocate the array A and call 
rand!(A) instead to fill it with random numbers in-place.)
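A sketch of that pre-allocation pattern (in Julia ≥ 0.7 `rand!` lives in the Random stdlib and uninitialized arrays use the `undef` constructor; on 0.5 both were in Base):

```julia
using Random

A = Vector{UInt16}(undef, 10^6)   # allocate once, outside the hot loop
for _ in 1:3
    rand!(A)                      # refill in place; no per-call allocation
end
```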


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Páll Haraldsson


On Monday, September 12, 2016 at 7:01:05 PM UTC, Yichao Yu wrote:
>
> On Sep 12, 2016 2:52 PM, "Páll Haraldsson"  > wrote:
> >
> > On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
> >>
> >> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> >> idiomatic Julia?
> >
> >  
> >
> > It may not matter, but this function:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return [func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Any,1} while this could be better:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return Float64[func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
> >
>
> Not applicable on 0.5
>

Good to know (and confirmed). I guess that means 0.4 is slower (but gives 
correct results) with the former, and not with the latter, though then you are 
less generic. It seems Compat.jl would not get you out of that dilemma.



Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Sep 12, 2016 2:52 PM, "Páll Haraldsson" 
wrote:
>
> On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
>>
>> Anyone care to make suggestions on this code, how to make it faster, or
more
>> idiomatic Julia?
>
>
>
> It may not matter, but this function:
>
> function coef_from_func(func, delta, size)
>center = float(size-1)/2
>return [func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Any,1} while this could be better:
>
> function coef_from_func(func, delta, size)
>center = float(size-1)/2
>return Float64[func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
>

Not applicable on 0.5

>
> I'm not sure this is more idiomatic; this would be an exception to not
having to specify types, for speed (both work).
>
> center = float(size-1)/2
>
> could however just as well be:
>
> center = (size-1)/2 # / implies float result, just as in Python 3 (not
2), and I like that choice.
>
> --
> Palli.


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Páll Haraldsson
On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:

> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> idiomatic Julia?
>
 

It may not matter, but this function:

function coef_from_func(func, delta, size) 
   center = float(size-1)/2 
   return [func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Any,1} while this could be better:

function coef_from_func(func, delta, size)
   center = float(size-1)/2 
   return Float64[func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Float64,1} (if not, maybe helpful to know elsewhere).


I'm not sure this is more idiomatic; this would be an exception to not 
having to specify types, for speed (both work).

center = float(size-1)/2

could however just as well be:

center = (size-1)/2 # / implies float result, just as in Python 3 (not 2), 
and I like that choice.

-- 
Palli.


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Stefan Karpinski
All of the globals set up in bench1 are non-const, which means the top-level
benchmarking code is pretty slow, but if N is small, this won't matter
much. If N is large, it's worth either wrapping the setup in a function
body or making all these variables const.
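Both options look roughly like this (illustrative names, not the actual bench1 code):

```julia
const NSYM = 10^6        # (a) const global: its type is a compile-time fact

function run_bench(n)    # (b) or wrap setup and loop in a function body
    acc = 0.0
    for _ in 1:n
        acc += rand()
    end
    return acc / n
end

m = run_bench(NSYM)      # mean of n uniforms, somewhere near 0.5
```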

On Mon, Sep 12, 2016 at 8:37 AM, Neal Becker  wrote:

> Steven G. Johnson wrote:
>
> >
> >
> >
> > On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
> >>
> >> function(p::pnseq)(n,T=Int64)
> >>
> >>>
> > Note that the real problem with this function declaration is that the
> type
> > T is known only at runtime, not at compile-time. It would be better
> to
> > do
> >
> >  function (p::pnseq){T}(n, ::Type{T}=Int64)
>
> Thanks!  This change made a big difference. Now PnSeq is only using a small
> amount of time, as I expected it should.  I prefer this syntax to the
> alternative you suggest below as it seems more logical to me.
>
> >
> > since making the type a parameter like this exposes it as a compile-time
> > constant.  Although it would be even more natural to not have to pass the
> > type explicitly at all, but rather to get it from the type of n, e.g.
> >
> >
> >  function (p::pnseq){T<:Integer}(n::T)
> >
> > I have no idea whether this particular thing is performance-critical,
> > however.   I also see lots and lots of functions that allocate arrays, as
> > opposed to scalar functions that are composed and called on a single
> > array, which makes me think that you are thinking in terms of numpy-style
> > vectorized code, which doesn't take full advantage of Julia.
>
>
> >
> > It would be much easier to give performance tips if you could boil it
> > down to a single self-contained function that you want to make faster,
> > rather than requiring us to read through four or five different
> submodules
> > and
> > lots of little one-line functions and types.  (There's nothing wrong with
> > having lots of functions and types in Julia, it is just that this forces
> > us to comprehend a lot more code in order to make useful suggestions.)
>
> Nyquist and CoefFromFunc are normally only used at startup, so they are
> unimportant to optimize.
>
> The real work is PnSeq, Constellation, and the FIRFilters (which I didn't
> write - they are in DSP.jl).  I agree that the style is to operate on and
> return a large vector.
>
> I guess what you're suggesting is that PnSeq should return a single scalar,
> Constellation should map scalar->scalar.  But FIRFilter I think needs to be
> a vector->vector, so it can take advantage of SIMD?
>
>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Neal Becker
Steven G. Johnson wrote:

> 
> 
> 
> On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>>
>> function(p::pnseq)(n,T=Int64)
>>
>>>
> Note that the real problem with this function declaration is that the type
> T is known only at runtime, not at compile-time. It would be better to
> do
> 
>  function (p::pnseq){T}(n, ::Type{T}=Int64)

Thanks!  This change made a big difference. Now PnSeq is only using a small 
amount of time, as I expected it should.  I prefer this syntax to the 
alternative you suggest below as it seems more logical to me.

> 
> since making the type a parameter like this exposes it as a compile-time
> constant.  Although it would be even more natural to not have to pass the
> type explicitly at all, but rather to get it from the type of n, e.g.
> 
> 
>  function (p::pnseq){T<:Integer}(n::T)
> 
> I have no idea whether this particular thing is performance-critical,
> however.   I also see lots and lots of functions that allocate arrays, as
> opposed to scalar functions that are composed and called on a single
> array, which makes me think that you are thinking in terms of numpy-style
> vectorized code, which doesn't take full advantage of Julia.


> 
> It would be much easier to give performance tips if you could boil it
> down to a single self-contained function that you want to make faster,
> rather than requiring us to read through four or five different submodules
> and
> lots of little one-line functions and types.  (There's nothing wrong with
> having lots of functions and types in Julia, it is just that this forces
> us to comprehend a lot more code in order to make useful suggestions.)

Nyquist and CoefFromFunc are normally only used at startup, so they are 
unimportant to optimize.

The real work is PnSeq, Constellation, and the FIRFilters (which I didn't 
write - they are in DSP.jl).  I agree that the style is to operate on and 
return a large vector.

I guess what you're suggesting is that PnSeq should return a single scalar, 
Constellation should map scalar->scalar.  But FIRFilter I think needs to be 
a vector->vector, so it can take advantage of SIMD?



[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread DNF
The code is not particularly long, but it seems like almost every type has 
its own module, which makes it harder to get an overview.

A few of the composite types are defined with abstract types as fields, 
such as FNNyquistPulse. That is not 
optimal: 
http://docs.julialang.org/en/latest/manual/performance-tips/#avoid-fields-with-abstract-type
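A minimal illustration of that tip (hypothetical types, not the ones in the repository; `struct` is the post-0.5 spelling of `type`/`immutable`):

```julia
struct SlowHolder          # abstract field type (Function is abstract):
    pulse::Function        # accesses can't be concretely inferred
end

struct FastHolder{F}       # parameterized field: concrete once constructed
    pulse::F
end

f = FastHolder(sin)        # FastHolder{typeof(sin)}; f.pulse calls can inline
```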

On Monday, September 12, 2016 at 2:16:52 PM UTC+2, Steven G. Johnson wrote:
>
> It would be much easier to give performance tips if you could boil it 
> down to a single self-contained function that you want to make faster, 
> rather than requiring us to read through four or five different submodules 
> and lots of little one-line functions and types.  (There's nothing wrong 
> with having lots of functions and types in Julia, it is just that this 
> forces us to comprehend a lot more code in order to make useful 
> suggestions.)
>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Steven G. Johnson


On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>
> function(p::pnseq)(n,T=Int64)
>
>>
Note that the real problem with this function declaration is that the type 
T is known only at runtime, not at compile-time. It would be better to 
do

 function (p::pnseq){T}(n, ::Type{T}=Int64)

since making the type a parameter like this exposes it as a compile-time 
constant.  Although it would be even more natural to not have to pass the 
type explicitly at all, but rather to get it from the type of n, e.g.


 function (p::pnseq){T<:Integer}(n::T)

I have no idea whether this particular thing is performance-critical, 
however.   I also see lots and lots of functions that allocate arrays, as 
opposed to scalar functions that are composed and called on a single array, 
which makes me think that you are thinking in terms of numpy-style 
vectorized code, which doesn't take full advantage of Julia.

It would be much easier to give performance tips if you could boil it down 
to a single self-contained function that you want to make faster, rather 
than requiring us to read through four or five different submodules and 
lots of little one-line functions and types.  (There's nothing wrong with 
having lots of functions and types in Julia, it is just that this forces us 
to comprehend a lot more code in order to make useful suggestions.)
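For reference, the `::Type{T}` device in present-day syntax (a hypothetical stand-in type, not the thread's pnseq; 0.5 wrote the parameter as `{T}` before the argument list, while 1.x uses `where`):

```julia
struct PnDemo end   # hypothetical stand-in for pnseq

# Passing ::Type{T} makes T a compile-time constant in the method body.
function (p::PnDemo)(n, ::Type{T}=Int64) where {T}
    return zeros(T, n)      # body elided; only the signature pattern matters
end

v = PnDemo()(4, UInt8)
```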


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Neal Becker
Patrick Kofod Mogensen wrote:

> This surprised me as well, where did you find this syntax?
> 
> On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>>
>> I haven't looked very closely at your code, but a brief look reveals that
>> you are defining your functions in a very unusual way. Two examples:
>>
>> function (f::FIRFilter)(x)
>> return filt(f, x)
>> end
>>
>> function(p::pnseq)(n,T=Int64)
>> out = Array{T}(n)
>> for i in eachindex(out)
>> if p.count < p.width
>> p.cache = rand(Int64)
>> p.count = 64
>> end
>> out[i] = p.cache & p.mask
>> p.cache >>= p.width
>> p.count -= p.width
>> end
>> return out
>> end
>>
>> I have never seen this way of defining them before, and I am pretty
>> surprised that it's not a syntax error. Long-form function signatures
>> should be of the form
>> function myfunc{T<:SomeType}(myarg1::T, myarg2)
>> where the type parameter section (in curly bracket) is optional.
>>
>> As I said, I'm surprised it's not a syntax error, but maybe it gets
>> parsed as an anonymous function (just guessing here). If so, and if you
>> are using version 0.4, you can get slow performance.
>>
>> You can read here about the right way to define functions:
>> http://docs.julialang.org/en/stable/manual/functions/
>>
>> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>>
>>> As a first (nontrivial) try at julia, I put together some simple DSP
>>> code,
>>> which represents a
>>> pn generator (random fixed-width integer generator)
>>> constellation mapping
>>> interpolating FIR filter (from DSP.jl)
>>> decimating FIR filter (from DSP.jl)
>>> mean-square error measure
>>>
>>> Source code is here:
>>> https://github.com/nbecker/julia-test
>>>
>>> Profile result is here:
>>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197
>>>
>>> If I understand how to read this profile (not sure I do) it looks like
>>> 1/2
>>> the time is spent in PnSeq.jl, which seems surprising.
>>>
>>> PnSeq.jl calls rand() to get an Int64, caching the result and then
>>> providing
>>> N bits at a time to fill an Array.  It's supposed to be a fast way to
>>> get an
>>> Array of small-width random integers.
>>>
>>> Most of the number crunching should be in the FIR filter functions,
>>> which I
>>> would have expected to use the most time.
>>>
>>> Anyone care to make suggestions on this code, how to make it faster, or
>>> more
>>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've
>>> been using python/numpy/c++ for all my work for years).
>>>
>>>
>>>

I guess I should have said I'm using
Version 0.5.0-rc3+49 (2016-09-08 05:47 UTC)

http://docs.julialang.org/en/latest/manual/methods/#function-like-objects

Coming from my python/c++ background, I've made a practice of overloading the 
function call operator when an object has only one reasonable way to use it.  
It seems julia-0.5 allows this (and I'm happy).
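A minimal sketch of that function-call overload pattern (hypothetical type, not the thread's filter code; `struct` is the post-0.5 spelling):

```julia
struct Scaler
    gain::Float64
end

(s::Scaler)(x) = s.gain * x    # the object is now callable

double = Scaler(2.0)
y = double(21.0)               # behaves like an ordinary function
```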



Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Steven G. Johnson


On Monday, September 12, 2016 at 8:07:55 AM UTC-4, Yichao Yu wrote:
>
>
>
> On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> This surprised me as well, where did you find this syntax?
>>
>
> Call overload. 
>

(i.e. it's the new syntax for call overloading in Julia 0.5, what would 
have been Base.call in Julia 0.4.) 


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
patrick.mogen...@gmail.com> wrote:

> This surprised me as well, where did you find this syntax?
>

Call overload.


>
>
> On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>>
>> I haven't looked very closely at your code, but a brief look reveals that
>> you are defining your functions in a very unusual way. Two examples:
>>
>> function (f::FIRFilter)(x)
>> return filt(f, x)
>> end
>>
>> function(p::pnseq)(n,T=Int64)
>> out = Array{T}(n)
>> for i in eachindex(out)
>> if p.count < p.width
>> p.cache = rand(Int64)
>> p.count = 64
>> end
>> out[i] = p.cache & p.mask
>> p.cache >>= p.width
>> p.count -= p.width
>> end
>> return out
>> end
>>
>> I have never seen this way of defining them before, and I am pretty
>> surprised that it's not a syntax error. Long-form function signatures
>> should be of the form
>> function myfunc{T<:SomeType}(myarg1::T, myarg2)
>> where the type parameter section (in curly bracket) is optional.
>>
>> As I said, I'm surprised it's not a syntax error, but maybe it gets
>> parsed as an anonymous function (just guessing here). If so, and if you are
>> using version 0.4, you can get slow performance.
>>
>> You can read here about the right way to define functions:
>> http://docs.julialang.org/en/stable/manual/functions/
>>
>> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>>
>>> As a first (nontrivial) try at julia, I put together some simple DSP
>>> code,
>>> which represents a
>>> pn generator (random fixed-width integer generator)
>>> constellation mapping
>>> interpolating FIR filter (from DSP.jl)
>>> decimating FIR filter (from DSP.jl)
>>> mean-square error measure
>>>
>>> Source code is here:
>>> https://github.com/nbecker/julia-test
>>>
>>> Profile result is here:
>>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197
>>>
>>> If I understand how to read this profile (not sure I do) it looks like
>>> 1/2
>>> the time is spent in PnSeq.jl, which seems surprising.
>>>
>>> PnSeq.jl calls rand() to get an Int64, caching the result and then
>>> providing
>>> N bits at a time to fill an Array.  It's supposed to be a fast way to
>>> get an
>>> Array of small-width random integers.
>>>
>>> Most of the number crunching should be in the FIR filter functions,
>>> which I
>>> would have expected to use the most time.
>>>
>>> Anyone care to make suggestions on this code, how to make it faster, or
>>> more
>>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've
>>> been
>>> using python/numpy/c++ for all my work for years).
>>>
>>>
>>>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Patrick Kofod Mogensen
This surprised me as well, where did you find this syntax?

On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>
> I haven't looked very closely at your code, but a brief look reveals that 
> you are defining your functions in a very unusual way. Two examples:
>
> function (f::FIRFilter)(x)
> return filt(f, x)
> end
>
> function(p::pnseq)(n,T=Int64)
> out = Array{T}(n)
> for i in eachindex(out)
> if p.count < p.width
> p.cache = rand(Int64)
> p.count = 64
> end
> out[i] = p.cache & p.mask
> p.cache >>= p.width
> p.count -= p.width
> end
> return out
> end
>
> I have never seen this way of defining them before, and I am pretty 
> surprised that it's not a syntax error. Long-form function signatures 
> should be of the form
> function myfunc{T<:SomeType}(myarg1::T, myarg2)
> where the type parameter section (in curly bracket) is optional.
>
> As I said, I'm surprised it's not a syntax error, but maybe it gets parsed 
> as an anonymous function (just guessing here). If so, and if you are using 
> version 0.4, you can get slow performance.
>
> You can read here about the right way to define functions: 
> http://docs.julialang.org/en/stable/manual/functions/
>
> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>
>> As a first (nontrivial) try at julia, I put together some simple DSP 
>> code, 
>> which represents a 
>> pn generator (random fixed-width integer generator) 
>> constellation mapping 
>> interpolating FIR filter (from DSP.jl) 
>> decimating FIR filter (from DSP.jl) 
>> mean-square error measure 
>>
>> Source code is here: 
>> https://github.com/nbecker/julia-test 
>>
>> Profile result is here: 
>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197 
>>
>> If I understand how to read this profile (not sure I do) it looks like 
>> 1/2 
>> the time is spent in PnSeq.jl, which seems surprising.   
>>
>> PnSeq.jl calls rand() to get an Int64, caching the result and then 
>> providing 
>> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
>> an 
>> Array of small-width random integers. 
>>
>> Most of the number crunching should be in the FIR filter functions, which 
>> I 
>> would have expected to use the most time. 
>>
>> Anyone care to make suggestions on this code, how to make it faster, or 
>> more 
>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've been 
>> using python/numpy/c++ for all my work for years). 
>>
>>
>>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread DNF
I haven't looked very closely at your code, but a brief look reveals that 
you are defining your functions in a very unusual way. Two examples:

function (f::FIRFilter)(x)
return filt(f, x)
end

function(p::pnseq)(n,T=Int64)
out = Array{T}(n)
for i in eachindex(out)
if p.count < p.width
p.cache = rand(Int64)
p.count = 64
end
out[i] = p.cache & p.mask
p.cache >>= p.width
p.count -= p.width
end
return out
end

I have never seen this way of defining them before, and I am pretty 
surprised that it's not a syntax error. Long-form function signatures 
should be of the form
function myfunc{T<:SomeType}(myarg1::T, myarg2)
where the type parameter section (in curly bracket) is optional.

As I said, I'm surprised it's not a syntax error, but maybe it gets parsed 
as an anonymous function (just guessing here). If so, and if you are using 
version 0.4, you can get slow performance.

You can read here about the right way to define functions: 
http://docs.julialang.org/en/stable/manual/functions/

On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>
> As a first (nontrivial) try at julia, I put together some simple DSP code, 
> which represents a 
> pn generator (random fixed-width integer generator) 
> constellation mapping 
> interpolating FIR filter (from DSP.jl) 
> decimating FIR filter (from DSP.jl) 
> mean-square error measure 
>
> Source code is here: 
> https://github.com/nbecker/julia-test 
>
> Profile result is here: 
> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197 
>
> If I understand how to read this profile (not sure I do) it looks like 1/2 
> the time is spent in PnSeq.jl, which seems surprising.   
>
> PnSeq.jl calls rand() to get an Int64, caching the result and then 
> providing 
> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
> an 
> Array of small-width random integers. 
>
> Most of the number crunching should be in the FIR filter functions, which 
> I 
> would have expected to use the most time. 
>
> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've been 
> using python/numpy/c++ for all my work for years). 
>
>
>