Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Stefan Karpinski
On Tue, Sep 13, 2016 at 1:23 PM, Neal Becker wrote:

>
> Thanks for the ideas.  Here, though, the generated values need to be
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
> So the output array itself would be Array{Int64} for example, but the
> values in the array are [0 ... 7].  Do you know a better way to do this?


Is this the kind of thing you're looking for?

julia> @time rand(0x0:0x7, 10^5);
  0.001795 seconds (10 allocations: 97.953 KB)


Produces a 10^5-element array of random UInt8 values between 0 and 7.
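The same approach extends to an arbitrary bit width N by building the range from N. A minimal sketch (the name `rand_bits` and its arguments are illustrative, not from the thread):

```julia
# Sketch: uniform random integers in [0, 2^N - 1] for an arbitrary width N.
# rand over a UnitRange samples uniformly, and with Int bounds the result
# is an Array{Int64} as Neal wanted.
function rand_bits(N::Integer, len::Integer)
    return rand(0:(1 << N) - 1, len)
end

a = rand_bits(3, 10^5)   # 10^5 values, each in 0:7
```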


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Páll Haraldsson


On Monday, September 12, 2016 at 7:01:05 PM UTC, Yichao Yu wrote:
>
> On Sep 12, 2016 2:52 PM, "Páll Haraldsson" wrote:
> >
> > On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
> >>
> >> Anyone care to make suggestions on this code, how to make it faster, or
> >> more idiomatic Julia?
> >
> >  
> >
> > It may not matter, but this function:
> >
> > function coef_from_func(func, delta, size)
> >     center = float(size-1)/2
> >     return [func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Any,1} while this could be better:
> >
> > function coef_from_func(func, delta, size)
> >     center = float(size-1)/2
> >     return Float64[func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
> >
>
> Not applicable on 0.5
>

Good to know (and confirmed). I guess that means 0.4 is slower (but gives 
correct results) with the former; not with the latter, but then you are less 
generic. It seems Compat.jl would not get you out of that dilemma.
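The difference the two posters are discussing can be checked directly. A quick sketch (on 0.5 and later, return-type inference gives `Float64` for the untyped comprehension too; on 0.4 the `Float64[...]` prefix was needed):

```julia
# The untyped-comprehension variant from the thread. On Julia 0.5+ the
# compiler infers the element type from func's return value.
function coef_from_func(func, delta, size)
    center = float(size - 1) / 2
    return [func((i - center) * delta) for i in 0:size-1]
end

coefs = coef_from_func(sin, 0.1, 8)
eltype(coefs)   # Float64 on 0.5+, Any on 0.4 in badly-inferred contexts
</imports>
```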



Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Sep 12, 2016 2:52 PM, "Páll Haraldsson" 
wrote:
>
> On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
>>
>> Anyone care to make suggestions on this code, how to make it faster, or
>> more idiomatic Julia?
>
>
>
> It may not matter, but this function:
>
> function coef_from_func(func, delta, size)
>     center = float(size-1)/2
>     return [func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Any,1} while this could be better:
>
> function coef_from_func(func, delta, size)
>     center = float(size-1)/2
>     return Float64[func((i - center)*delta) for i in 0:size-1]
> end
>
> returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
>

Not applicable on 0.5

>
> I'm not sure this is more idiomatic; it would be an exception to not
having to specify types, for the sake of speed (both work).
>
> center = float(size-1)/2
>
> could however just as well be:
>
> center = (size-1)/2 # / implies a float result, just as in Python 3 (not
2), and I like that choice.
>
> --
> Palli.


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Stefan Karpinski
All of the globals set up in bench1 are non-const, which means the top-level
benchmarking code is pretty slow, but if N is small, this won't matter
much. If N is large, it's worth either wrapping the setup in a function
body or making all these variables const.
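Both fixes Stefan mentions can be sketched like this (the names `N`, `delta`, and `bench1` are illustrative, not the actual benchmark code):

```julia
# (a) Declare benchmark globals const so the compiler knows their types:
const N = 10^6
const delta = 0.1

# (b) And/or wrap the work in a function so every variable is local.
# Reading the const global `delta` here is fast because its type is fixed.
function bench1(n)
    acc = 0.0
    for i in 1:n
        acc += i * delta
    end
    return acc
end

bench1(N)
```

A non-const global forces the compiler to box every access, which is why top-level loops over such variables are slow.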

On Mon, Sep 12, 2016 at 8:37 AM, Neal Becker wrote:

> Steven G. Johnson wrote:
>
> >
> >
> >
> > On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
> >>
> >> function(p::pnseq)(n,T=Int64)
> >>
> >>>
> > Note that the real problem with this function declaration is that the
> > type T is known only at runtime, not at compile-time. It would be
> > better to do
> >
> >  function (p::pnseq){T}(n, ::Type{T}=Int64)
>
> Thanks!  This change made a big difference. Now PnSeq is only using a small
> amount of time, as I expected it should.  I prefer this syntax to the
> alternative you suggest below as it seems more logical to me.
>
> >
> > since making the type a parameter like this exposes it as a compile-time
> > constant.  Although it would be even more natural to not have to pass the
> > type explicitly at all, but rather to get it from the type of n, e.g.
> >
> >
> >  function (p::pnseq){T<:Integer}(n::T)
> >
> > I have no idea whether this particular thing is performance-critical,
> > however.   I also see lots and lots of functions that allocate arrays, as
> > opposed to scalar functions that are composed and called on a single
> > array, which makes me think that you are thinking in terms of numpy-style
> > vectorized code, which doesn't take full advantage of Julia.
>
>
> >
> > It would be much easier to give performance tips if you could boil it
> > down to a single self-contained function that you want to make faster,
> > rather than requiring us to read through four or five different
> > submodules and lots of little one-line functions and types.  (There's
> > nothing wrong with
> > having lots of functions and types in Julia, it is just that this forces
> > us to comprehend a lot more code in order to make useful suggestions.)
>
> Nyquist and CoefFromFunc are normally only used at startup, so they are
> unimportant to optimize.
>
> The real work is PnSeq, Constellation, and the FIRFilters (which I didn't
> write - they are in DSP.jl).  I agree that the style is to operate on and
> return a large vector.
>
> I guess what you're suggesting is that PnSeq should return a single scalar,
> and Constellation should map scalar->scalar.  But FIRFilter I think needs
> to be vector->vector, so it will take advantage of SIMD?
>
>


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Steven G. Johnson


On Monday, September 12, 2016 at 8:07:55 AM UTC-4, Yichao Yu wrote:
>
>
>
> On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> This surprised me as well, where did you find this syntax?
>>
>
> Call overload. 
>

(i.e. it's the new syntax for call overloading in Julia 0.5, what would 
have been Base.call in Julia 0.4.) 
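A minimal sketch of that call-overloading syntax (the type `Scale` is illustrative; note `struct` is the 0.6+ keyword, where the 0.4/0.5 era used `immutable`/`type`):

```julia
# A type whose instances can be called like functions.
struct Scale
    factor::Float64
end

# This method definition is what would have been a Base.call overload on 0.4:
(s::Scale)(x) = s.factor * x

double = Scale(2.0)
double(21.0)   # invokes the overloaded call method
```

This is the same mechanism behind `function (p::pnseq)(n, T=Int64)` in Neal's code: each `pnseq` instance becomes callable.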


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Yichao Yu
On Mon, Sep 12, 2016 at 8:03 AM, Patrick Kofod Mogensen <
patrick.mogen...@gmail.com> wrote:

> This surprised me as well, where did you find this syntax?
>

Call overload.


>
>
> On Monday, September 12, 2016 at 1:59:33 PM UTC+2, DNF wrote:
>>
>> I haven't looked very closely at your code, but a brief look reveals that
>> you are defining your functions in a very unusual way. Two examples:
>>
>> function (f::FIRFilter)(x)
>>     return filt(f, x)
>> end
>>
>> function(p::pnseq)(n,T=Int64)
>>     out = Array{T}(n)
>>     for i in eachindex(out)
>>         if p.count < p.width
>>             p.cache = rand(Int64)
>>             p.count = 64
>>         end
>>         out[i] = p.cache & p.mask
>>         p.cache >>= p.width
>>         p.count -= p.width
>>     end
>>     return out
>> end
>>
>> I have never seen this way of defining them before, and I am pretty
>> surprised that it's not a syntax error. Long-form function signatures
>> should be of the form
>> function myfunc{T<:SomeType}(myarg1::T, myarg2)
>> where the type parameter section (in curly bracket) is optional.
>>
>> As I said, I'm surprised it's not a syntax error, but maybe it gets
>> parsed as an anonymous function (just guessing here). If so, and if you are
>> using version 0.4, you may get slow performance.
>>
>> You can read here about the right way to define functions:
>> http://docs.julialang.org/en/stable/manual/functions/
>>
>> On Monday, September 12, 2016 at 1:32:48 PM UTC+2, Neal Becker wrote:
>>>
>>> As a first (nontrivial) try at julia, I put together some simple DSP
>>> code,
>>> which represents a
>>> pn generator (random fixed-width integer generator)
>>> constellation mapping
>>> interpolating FIR filter (from DSP.jl)
>>> decimating FIR filter (from DSP.jl)
>>> mean-square error measure
>>>
>>> Source code is here:
>>> https://github.com/nbecker/julia-test
>>>
>>> Profile result is here:
>>> https://gist.github.com/anonymous/af2459fc831ddbeb6e3be25e5c8d5197
>>>
>>> If I understand how to read this profile (not sure I do) it looks like
>>> 1/2
>>> the time is spent in PnSeq.jl, which seems surprising.
>>>
>>> PnSeq.jl calls rand() to get a Int64, caching the result and then
>>> providing
>>> N bits at a time to fill an Array.  It's supposed to be a fast way to
>>> get an
>>> Array of small-width random integers.
>>>
>>> Most of the number crunching should be in the FIR filter functions,
>>> which I
>>> would have expected to use the most time.
>>>
>>> Anyone care to make suggestions on this code, how to make it faster, or
>>> more
>>> idiomatic Julia?  I'm not proficient with Julia or with Matlab (I've
>>> been
>>> using python/numpy/c++ for all my work for years).
>>>
>>>
>>>
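The bit-caching scheme Neal describes can be sketched as a standalone type (names like `BitCache` and `next!` are illustrative, not the actual PnSeq.jl code; this version draws from `rand(UInt64)` rather than `rand(Int64)` to sidestep sign-extension on the right shift):

```julia
# Draw one 64-bit random word, then peel off `width` bits per output value
# until the cache runs dry -- one rand() call per 64/width outputs.
mutable struct BitCache
    width::Int     # bits per output value
    cache::UInt64  # remaining random bits
    count::Int     # how many valid bits are left in cache
end
BitCache(width) = BitCache(width, UInt64(0), 0)

function next!(c::BitCache)
    if c.count < c.width
        c.cache = rand(UInt64)   # refill the 64-bit cache
        c.count = 64
    end
    v = c.cache & ((UInt64(1) << c.width) - 1)   # low `width` bits
    c.cache >>= c.width
    c.count -= c.width
    return Int(v)
end

c = BitCache(3)
vals = [next!(c) for _ in 1:10]   # ten values in 0:7
```

Following Steven's advice above, `next!` returns a scalar, so callers can compose it with other scalar stages instead of allocating an intermediate array.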