[julia-users] Re: Julia idiom for something like CLOS's call-next-method

2014-07-17 Thread Markus Roberts
 
Thank you both... invoke(bar,(Fu,),f) has exactly the semantics I was 
looking for (esp. being able to specify which type constraint I'm 
loosening in the case where there are multiple args participating in the 
dispatch).
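For reference, a minimal sketch of those semantics with hypothetical types Fu and Bar (note that current Julia spells the signature as a Tuple type, invoke(f, Tuple{Fu}, x), where 0.3-era code wrote invoke(f, (Fu,), x)):

```julia
# invoke(f, Tuple{T}, x) calls the method of f that would match if x were
# constrained to T, skipping the more specific method -- the "call-next-method"
# behavior discussed in this thread.
abstract type Fu end

struct Bar <: Fu
    n::Int
end

bar(x::Fu) = "generic"
bar(x::Bar) = "Bar first, then " * invoke(bar, Tuple{Fu}, x)

bar(Bar(1))  # "Bar first, then generic"
```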

As for Keno's suggestion that I refactor into a primary/secondary method 
pattern, I can see how that would be the right solution in some cases, but 
doesn't feel like a good fit in my present case; my subtype's around 
method is only there to maintain additional invariants imposed by the 
subtype, and having an additional function that does the actual work 
without maintaining the invariant feels...icky. 

Thanks again for the prompt replies,
-- MarkusQ



Re: [julia-users] Re: Julia idiom for something like CLOS's call-next-method

2014-07-17 Thread Stefan Karpinski
I've occasionally found cases where invoke seems to be the only
non-annoying way to accomplish some dispatch, but it's remarkably rare.
Still, it does seem to happen.


On Wed, Jul 16, 2014 at 11:55 PM, Markus Roberts mar...@reality.com wrote:


 Thank you both...invoke(bar,(Fu,),f) has exactly the semantics I was
 looking for (esp. being able to specify which type constraint I'm
 loosening in the case where there are multiple args participating in the
 dispatch).

 As for Keno's suggestion that I refactor into a primary/secondary method
 pattern, I can see how that would be the right solution in some cases, but
 doesn't feel like a good fit in my present case; my subtype's around
 method is only there to maintain additional invariants imposed by the
 subtype, and having an additional function that does the actual work
 without maintaining the invariant feels...icky.

 Thanks again for the prompt replies,
 -- MarkusQ




Re: [julia-users] Re: build-in function to find inverse of a matrix

2014-07-17 Thread Tobias Knopp
Indeed, it is one of the first things one learns in numerics lectures that 
one has to avoid explicitly calculating an inverse matrix.
Still, I think there are various small problems in geometry where I don't 
see an issue with inverting a 2x2 or 3x3 matrix. It depends, as so often, a 
lot on the context. When considering not-so-well-posed problems it is quite 
essential to take regularization into account; a simple x = A\b would not 
produce satisfying results in those cases.
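As a tiny illustration of that last point (a made-up, nearly singular system; the weight λ is an arbitrary choice for the sketch), Tikhonov regularization replaces A\b with a damped normal-equations solve:

```julia
using LinearAlgebra

# Hypothetical ill-conditioned system: the two rows are almost identical.
A = [1.0 1.0;
     1.0 1.0 + 1e-10]
b = [2.0, 2.0]

# Tikhonov / ridge regularization: solve (A'A + λI) x = A'b instead of A x = b.
λ = 1e-6
x_reg = (A'A + λ*I) \ (A'b)   # stays close to the sensible answer [1, 1]
```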

Am Donnerstag, 17. Juli 2014 06:25:27 UTC+2 schrieb Stefan Karpinski:

 It's a bit of numerical computing lore that inv is bad – both for 
 performance and for numerical accuracy. It turns out it may not be so bad 
 http://arxiv.org/pdf/1201.6035v1.pdf after all, but everyone is still 
 kind of wary of it and there are often better ways to solve problems where 
 inv would be the naive way to do it.

 On Wed, Jul 16, 2014 at 3:59 PM, Alan Chan szelo...@gmail.com 
 javascript: wrote:

 any reason of avoiding inv?




Re: [julia-users] Re: build-in function to find inverse of a matrix

2014-07-17 Thread Andreas Lobinger
Hello colleague,

On Thursday, July 17, 2014 6:25:27 AM UTC+2, Stefan Karpinski wrote:

 It's a bit of numerical computing lore that inv is bad – both for 
 performance and for numerical accuracy. It turns out it may not be so bad 
 http://arxiv.org/pdf/1201.6035v1.pdf after all, but everyone is still 
 kind of wary of it and there are often better ways to solve problems where 
 inv would be the naive way to do it.


I scanned the paper, and it follows the fashion of proving numerical 
assumptions by running Matlab experiments... so I cannot take it 
seriously.

The paper has a point that, with random matrix input, the method used to 
solve Ax = b is (given recent FP implementations) not so influential, but 
it misses the point that with random input you are not really interested 
in the output...

For real-world problems in evaluating ODEs or PDEs, where you are especially 
interested in the result and in ill-conditioned systems, because the impact 
on the real world would have a price tag (at least, or fatal consequences), 
I still prefer the conservative solution (a factorization), which, by the 
way (to cite the paper): 
Computing the inverse requires more arithmetic operations than computing an 
LU factorization.
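A hedged sketch of that conservative route on a made-up 2x2 system (`lu` in current Julia; the 0.3-era releases discussed here spelled it `lufact`): factor once, then reuse the factorization for each right-hand side instead of ever forming inv(A):

```julia
using LinearAlgebra

A = [4.0 1.0;
     1.0 3.0]
b = [1.0, 2.0]

F = lu(A)   # LU factorization with pivoting, computed once
x = F \ b   # forward/back substitution; no explicit inverse is formed

A * x ≈ b   # true
```

The same `F` can be reused for any number of right-hand sides, which is where the factorization approach pays off over inv(A).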




Re: [julia-users] Re: build-in function to find inverse of a matrix

2014-07-17 Thread Tomas Lycken


If the inverse of a matrix is your final result, rather than some 
intermediate calculation, it might still be possible to use \ (and, at 
least in some cases, advantageous): 

julia> A = rand(10,10)
julia> @time inv(A)
elapsed time: 0.002185216 seconds (21336 bytes allocated)
julia> @time inv(A)
elapsed time: 0.000202789 seconds (7536 bytes allocated)
julia> @time A\eye(10)
elapsed time: 0.000117989 seconds (3280 bytes allocated)

julia> @time A\eye(10)
elapsed time: 0.000117989 seconds (3280 bytes allocated)

I show the timings twice to illustrate some weird behavior: the second time 
inv(A) is called, it’s 10x faster and allocates only a third of the memory 
(and this is consistent from the third time on). These measurements are 
after warm-up (using different matrices of the same size, i.e. 
inv(rand(10,10)) and rand(10,10)\eye(10), to eliminate any caching or other 
magic), so it’s not JIT-ting. Anyone got any ideas here?

Also, the results aren’t exactly equal: inv(A) == A\eye(10) returns false. 
However, that’s just floating-point inaccuracy - all(map(isapprox, 
inv(A), A\eye(10))) returns true. I can’t vouch for which result is closer 
to the mathematically exact matrix inverse - it probably depends on the 
input.
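The same check in current Julia syntax (`eye` was later removed in favor of the `I` object, so this is not verbatim 0.3 code; the diagonally dominant test matrix is a made-up choice to keep the comparison well-conditioned):

```julia
using LinearAlgebra

A  = rand(10, 10) + 10I               # diagonally dominant, so well-conditioned
B1 = inv(A)
B2 = A \ Matrix{Float64}(I, 10, 10)   # the modern spelling of A \ eye(10)

B1 == B2           # almost always false: different algorithms, different rounding
isapprox(B1, B2)   # true to floating-point tolerance
```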

// T

On Thursday, July 17, 2014 10:11:10 AM UTC+2, Tobias Knopp wrote:

Indeed, it is one of the first things one learns in numerics lectures that 
 one has to avoid explicitly calculating an inverse matrix.
 Still, I think there are various small problems in geometry where I don't 
 see an issue with inverting a 2x2 or 3x3 matrix. It depends, as so often, a 
 lot on the context. When considering not-so-well-posed problems it is quite 
 essential to take regularization into account; a simple x = A\b would not 
 produce satisfying results in those cases.

 Am Donnerstag, 17. Juli 2014 06:25:27 UTC+2 schrieb Stefan Karpinski:

 It's a bit of numerical computing lore that inv is bad – both for 
 performance and for numerical accuracy. It turns out it may not be so bad 
 http://arxiv.org/pdf/1201.6035v1.pdf after all, but everyone is still 
 kind of wary of it and there are often better ways to solve problems where 
 inv would be the naive way to do it.

 On Wed, Jul 16, 2014 at 3:59 PM, Alan Chan szelo...@gmail.com wrote:

 any reason of avoiding inv?


  ​


[julia-users] Correct (and fast) way to calculate the inverse zip on a list of tuples?

2014-07-17 Thread Tomas Lycken


I have an array of 2-tuples of floats, created as

julia> mytuples = (Float64,Float64)[(v.x, v.y) for v in vs] # slightly more 
complicated in actual code
136-element Array{(Float64,Float64),1}:
 (4.0926,-2.55505)
 (4.170826,-2.586752)

...

Now, I’d like to split this into two arrays of floats. I was under the 
impression that zip could do this for me - according to the docs, zip is 
its own inverse http://docs.julialang.org/en/latest/stdlib/base/#Base.zip, 
and the array of tuples does look like something I could get from zipping 
two arrays. So I tried something similar to the example there:

julia> [zip(mytuples...)...]
2-element Array{(Float64,Float64,Float64, ... and so on, 136 times...),1}:

so I guess that only works on actual Zip objects, and not on arrays (that 
could have been) generated by the zip function inside []. (Also, since this 
uses splatting with ... on large lists, it might not be a good idea in the 
first place…? 
https://github.com/JuliaLang/julia/issues/6098#issuecomment-37203821)

What’s the best way to accomplish what I want, i.e. transforming the mytuples 
variable above into two Vector{Float64}s (possibly inside a tuple or array 
or something)?
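For anyone reading this later: in later Julia versions the shortest spelling is probably broadcasting first and last over the array (dot-broadcast syntax postdates this thread, so it would not have worked on 0.3; the two-element input below is a made-up stand-in for the 136-element array):

```julia
mytuples = [(4.0926, -2.55505), (4.170826, -2.586752)]

xs = first.(mytuples)   # Vector{Float64} of all first elements
ys = last.(mytuples)    # Vector{Float64} of all second elements

(xs, ys)  # ([4.0926, 4.170826], [-2.55505, -2.586752])
```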

// T
​


[julia-users] Correct (and fast) way to inverse zip a large collection?

2014-07-17 Thread Tomas Lycken
I have an array of 2-tuples of floats, created as

```julia
julia> mytuples = (Float64,Float64)[(v.x, v.y) for v in vs] # slightly more 
complicated in actual code
136-element Array{(Float64,Float64),1}:
 (4.0926,-2.55505)   
 (4.170826,-2.586752)
...
```

Now, I'd like to split this into two arrays of floats. I was under the 
impression that `zip` could do this for me - according to the docs, [`zip` 
is its own 
inverse](http://docs.julialang.org/en/latest/stdlib/base/#Base.zip), and 
the array of tuples does look like something I could get from `zip`ping two 
arrays. So I tried something similar to the example there:

```
julia> [zip(mytuples...)...]
2-element Array{(Float64,Float64,Float64, ... and so on, 136 times...),1}:
```

so I guess that only works on actual `Zip` objects, and not on arrays (that 
could have been) generated by the `zip` function inside `[]`. (Also, since 
this uses splatting with `...` on large lists, it [might not be a good idea 
in the first 
place...?](https://github.com/JuliaLang/julia/issues/6098#issuecomment-37203821))

What's the best way to accomplish what I want, i.e. transforming the 
`mytuples` variable above into two `Vector{Float64}`s (possibly inside a 
tuple or array or something)?

// T


[julia-users] Re: PSA: Julia 0.3-RC1 packages working well again

2014-07-17 Thread Ivar Nesje
You can actually start running it immediately for any package that claims 
to be 0.2 compatible: if they really are compatible, they should not be 
using any functionality that is deprecated in 0.3.

The best option will probably be to run PkgEvaluator with deprecations 
removed (a few weeks before the deprecations are actually removed), so that 
it opens issues for the packages that will cause trouble.

Ivar

kl. 00:32:25 UTC+2 torsdag 17. juli 2014 skrev Tony Fong følgende:

It's a little late now, but I have updated Lint.jl to warn on extending a 
 deprecated function. It parses deprecated.jl and keeps a list; if a new 
 function definition matches the signature of a deprecated function, it 
 gives a lint error. The definition of a match is more relaxed than the 
 typical invariant style w.r.t. parametric types such as Array{Real,1}, so 
 it should catch reasonably specialized forms of a function.

 Hopefully it'd be more useful the next time we have API migration. 

 On Thursday, July 17, 2014 2:12:04 AM UTC+7, Iain Dunning wrote:

 Hi all,

If you updated to Julia 0.3-RC1 in the past couple of days, you may have 
 noticed issues with many packages. This was due to functions that were 
 deprecated in the previous release cycle finally being removed. Most 
 packages affected by this have fixed these minor issues, so if you run 
 Pkg.update() you should be OK. Please do give 0.3-RC1 a good try so that as 
 many bugs as possible are flushed out before 0.3 is released.

 Cheers,
 Iain



Re: [julia-users] Re: ways to improve performance of a non-integer power?

2014-07-17 Thread Job van der Zwan
On Wednesday, 16 July 2014 22:47:58 UTC+2, Stefan Karpinski wrote:

 On Wed, Jul 16, 2014 at 12:39 PM, Florian Oswald florian...@gmail.com 
 javascript: wrote:

 do you think this log issue may be worth a mention in the performance 
 tips section of the manual? I would have never guessed that there could be 
 a fast/slow issue with such basic functions. Little do I know!


 The trouble with mentioning this is that it's an arbitrarily deep rathole. 
 Whole books could (and have) been written on this kind of optimization. But 
 perhaps a passing mention might be good.


A bit of a general tangent on the topic of documentation in digital 
media, but I've often wondered: why don't we use the fact that we can 
collapse/expand text blocks more often in documentation settings? (It 
probably wouldn't be easily done with RTD-generated Julia documentation, 
but I'm thinking more generally here.) Arbitrarily deep ratholes are less 
of a problem then, as you could hide them behind expandable text, or even 
expandable text within expandable text.


Re: [julia-users] Re: Jupyter project

2014-07-17 Thread Job van der Zwan
On Wednesday, 16 July 2014 23:25:19 UTC+2, Luke Stagner wrote:

 There is also the alchemical symbol for oil 
 http://www.fileformat.info/info/unicode/char/1f746/index.htm ༜ 


That can be made to fit the narrative with a bit of work. Julia: removing 
the mental friction from technical computing! Julia: it fuels the inner 
language nerd!


[julia-users] Populating a DArray from file...

2014-07-17 Thread Einar Otnes
Dear experts,

I need to populate a DArray by reading data from file. My computer doesn't 
have enough memory to store the whole array so I cannot just use 
distribute() to generate a DArray from an Array. Since DArray doesn't seem 
to have setindex! defined, I don't know how to move forward. Do I need to 
generate an init function for the DArray constructor that reads the file in 
parallel or are there other options?

Thanks,

Einar 


[julia-users] Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Andrei Zh
I continue investigating matrix multiplication performance. Today I found 
that multiplication by array of zeros(..) is several times faster than 
multiplication by array of ones(..) or random numbers: 

julia> A = rand(200, 100)
...

julia> @time for i=1:1000 A * rand(100, 200) end
elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc time)

julia> @time for i=1:1000 A * ones(100, 200) end
elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc time)

julia> @time for i=1:1000 A * zeros(100, 200) end
elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc time)

So, A * zeros(..) is about 6x faster than the other kinds of multiplication. 
Note also that it spends ~7x more of its time in GC.

In NumPy no such difference is seen:

In [106]: %timeit dot(A, rand(100, 200))
100 loops, best of 3: 2.77 ms per loop

In [107]: %timeit dot(A, ones((100, 200)))
100 loops, best of 3: 2.59 ms per loop

In [108]: %timeit dot(A, zeros((100, 200)))
100 loops, best of 3: 2.57 ms per loop


So I'm curious: how is multiplying by a zeros matrix different from the 
other kinds of multiplication?




[julia-users] Re: Correct (and fast) way to calculate the inverse zip on a list of tuples?

2014-07-17 Thread David Gonzales
it is possible to use two indices in a comprehension:
mat = [x[y] for x in mytuples,y=1:2]
now mat[:,1] is your first vector, and mat[:,2] is the second.

On Thursday, July 17, 2014 12:37:20 PM UTC+3, Tomas Lycken wrote:

 I have an array of 2-tuples of floats, created as

 julia> mytuples = (Float64,Float64)[(v.x, v.y) for v in vs] # slightly more 
 complicated in actual code
 136-element Array{(Float64,Float64),1}:
  (4.0926,-2.55505)   
  (4.170826,-2.586752)

 ...

 Now, I’d like to split this into two arrays of floats. I was under the 
 impression that zip could do this for me - according to the docs, zip is 
 its own inverse 
 http://docs.julialang.org/en/latest/stdlib/base/#Base.zip, and the 
 array of tuples does look like something I could get from zipping two 
 arrays. So I tried something similar to the example there:

 julia> [zip(mytuples...)...]
 2-element Array{(Float64,Float64,Float64, ... and so on, 136 times...),1}:

 so I guess that only works on actual Zip objects, and not on arrays (that 
 could have been) generated by the zip function inside []. (Also, since 
 this uses splatting with ... on large lists, it might not be a good idea 
 in the first place…? 
 https://github.com/JuliaLang/julia/issues/6098#issuecomment-37203821)

 What’s the best way to accomplish what I want, i.e. transforming the 
 mytuple variable above into two Vector{Float64}s (possibly inside a tuple 
 or array or something)?

 // T
 ​



[julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Tomas Lycken
Funny... I can't reproduce this on my machine. What happens if you 
explicitly `gc()` between the various scenarios (i.e. time multiplication 
by random, call `gc()`, then time multiplication by zeros)? What happens if 
you time multiplication by random a few times in a row? A few zeros in a 
row?

// T

On Thursday, July 17, 2014 1:54:54 PM UTC+2, Andrei Zh wrote:

 I continue investigating matrix multiplication performance. Today I found 
 that multiplication by array of zeros(..) is several times faster than 
 multiplication by array of ones(..) or random numbers: 

 julia A = rand(200, 100)
 ...

 julia @time for i=1:1000 A * rand(100, 200) end 
  elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc 
 time)

  julia @time for i=1:1000 A * ones(100, 200) end 
  elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc 
 time)

  julia @time for i=1:1000 A * zeros(100, 200) end 
  elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc 
 time)

 So, A * zeros() is about 6 faster than other kinds of multiplication. Note 
 also that it uses ~7x more GC time. 

 On NumPy no such difference is seen:

 In [106]: %timeit dot(A, rand(100, 200))
 100 loops, best of 3: 2.77 ms per loop

 In [107]: %timeit dot(A, ones((100, 200)))
 100 loops, best of 3: 2.59 ms per loop

 In [108]: %timeit dot(A, zeros((100, 200)))
 100 loops, best of 3: 2.57 ms per loop


 So I'm curious, how multiplying by zeros matrix is different from other 
 multiplication types? 




[julia-users] Re: ways to improve performance of a non-integer power?

2014-07-17 Thread Simon Byrne

On Wednesday, 16 July 2014 20:39:39 UTC+1, Florian Oswald wrote:

 myexp(parameter * mylog(x) )

 and it does make a sizeable difference. I'll try your version right now. 


Keep in mind that this is going to be less accurate than using an x^y 
function, as you can be off by approximately |y*log(x)| ulps. I'm guessing 
that if you're this concerned with performance then you probably won't be 
too concerned about losing a few significant digits, but it is worth 
keeping in mind.

If you look at the openlibm source, you can see that this is basically the 
approach it uses, albeit using some strategic double-double arithmetic to 
keep enough extra significant digits around.

Interestingly, since |y*log(x)| can be at most ~710 (otherwise the resulting 
exp would overflow or underflow), if you could work in 80-bit extended 
precision, the extra 11 bits in the significand should be sufficient for 
the final Float64 result to be accurate to within an ulp.
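One way to see the effect Simon describes (x and y below are made up, chosen so that y*log(x) is large without overflowing):

```julia
x, y = 1.0001, 5.0e6          # hypothetical values; y*log(x) ≈ 500
exact  = x^y                  # the library pow, carefully rounded
approx = exp(y * log(x))      # the fast shortcut

# The shortcut is off on the order of |y*log(x)| ulps -- tiny in relative
# terms, but measurably worse than the correctly rounded result.
ulps = abs(exact - approx) / eps(exact)
```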

-Simon




[julia-users] Append! resulting in #undef elements. Intended behavior?

2014-07-17 Thread Jan Strube


Hi List!

I'm a particle physicist just getting started with Julia. I've used some 
Python in the past, but now I'm using the analysis of some lab data as an 
excuse to learn Julia.
So far it looks like I'm going to stick with it for a while.

I've been trying to play with basic image analysis, and I've come across 
the following behavior: append! complains that it doesn't find a suitable 
method for my call signature, but it still appends an #undef. Is this 
intended?

Please see below for a session.



   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type help() to list help topics
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.0-rc1 (2014-07-14 02:04 UTC)
 _/ |\__'_|_|_|\__'_|  |  
|__/   |  x86_64-apple-darwin13.3.0

julia> x = Array[]
0-element Array{Array{T,N},1}

julia> append!(x, Array[1])
ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, ::Int64)
 in getindex at array.jl:121

julia> x
0-element Array{Array{T,N},1}

julia> append!(x, Array[[1]])
1-element Array{Array{T,N},1}:
 [1]

julia> append!(x, [1])
ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, ::Int64)
 in copy! at abstractarray.jl:197
 in append! at array.jl:475

julia> x
2-element Array{Array{T,N},1}:
 [1]
 #undef


Re: [julia-users] Re: ways to improve performance of a non-integer power?

2014-07-17 Thread Florian Oswald
hi simon,
very interesting to know indeed! I'll keep that in mind. thanks!


On 17 July 2014 13:10, Simon Byrne simonby...@gmail.com wrote:


 On Wednesday, 16 July 2014 20:39:39 UTC+1, Florian Oswald wrote:

 myexp(parameter * mylog(x) )

 and it does make a sizeable difference. I'll try your version right now.


 Keep in mind that this is going to be less accurate than using an x^y
 function, as you can be approximately |y*log(x)| ulps out. I'm guessing if
 you're this concerned with performance then you probably won't be too
 concerned about losing a few significant digits, but it is worth keeping in
 mind.

 If you look at the openlibm source, you can see that this is basically the
 approach it uses, albeit using some strategic double-double arithmetic to
 keep enough extra significant digits around.

 Interestingly, since |y*log(x)| can be at most 710 (otherwise the
 resulting exp would overflow or underflow), if you could work in 80-bit
 extended precision, the extra 11 bits in the significand should be
 sufficient so that the final Float64 result should be accurate to within an
 ulp.

 -Simon





[julia-users] Re: Correct (and fast) way to calculate the inverse zip on a list of tuples?

2014-07-17 Thread Tomas Lycken
That's neat! Thanks!

However, it only got me *almost* there - my next step was to use these 
arrays as function arguments, so I want to use `...` splatting. However, 
`foo([x[y] for x in mytuples,y=1:2]...)` will splat the entire matrix 
element-wise, instead of row-wise.

Is there a way to get it into two `Array{Float64,1}` objects, rather than 
an `Array{Float64,2}`, and still only use it in one expression?

(Yes, I realize I'm starting to make things more complicated than they need 
to be, but this is an opportunity to learn new ways of doing things - not 
just an attempt to get things done :P)

// T

On Thursday, July 17, 2014 2:05:38 PM UTC+2, David Gonzales wrote:

 it is possible to use two indices in a comprehension:
 mat = [x[y] for x in mytuples,y=1:2]
 now mat[:,1] is your first vector, and mat[:,2] is the second.

 On Thursday, July 17, 2014 12:37:20 PM UTC+3, Tomas Lycken wrote:

 I have an array of 2-tuples of floats, created as

 julia> mytuples = (Float64,Float64)[(v.x, v.y) for v in vs] # slightly more 
 complicated in actual code
 136-element Array{(Float64,Float64),1}:
  (4.0926,-2.55505)   
  (4.170826,-2.586752)

 ...

 Now, I’d like to split this into two arrays of floats. I was under the 
 impression that zip could do this for me - according to the docs, zip is 
 its own inverse 
 http://docs.julialang.org/en/latest/stdlib/base/#Base.zip, and the 
 array of tuples does look like something I could get from zipping two 
 arrays. So I tried something similar to the example there:

 julia> [zip(mytuples...)...]
 2-element Array{(Float64,Float64,Float64, ... and so on, 136 times...),1}:

 so I guess that only works on actual Zip objects, and not on arrays 
 (that could have been) generated by the zip function inside []. (Also, 
 since this uses splatting with ... on large lists, it might not be a 
 good idea in the first place…? 
 https://github.com/JuliaLang/julia/issues/6098#issuecomment-37203821)

 What’s the best way to accomplish what I want, i.e. transforming the 
 mytuple variable above into two Vector{Float64}s (possibly inside a 
 tuple or array or something)?

 // T
 ​



[julia-users] Re: any packages for AI usage

2014-07-17 Thread Avik Sengupta
Svaksha maintains a hand-curated list of Julia resources. Note that not all 
of them are registered in METADATA. 

https://github.com/svaksha/Julia.jl/blob/master/AI.md

Regards
-
Avik

On Thursday, 17 July 2014 11:59:25 UTC+1, Abe Schneider wrote:

 In my spare time I've been working on a Torch7-like interface for MLPs. 
 I remember seeing at least one other person working on a NN library (and I 
 suspect there are likely other offerings in the works).

 On Wednesday, July 16, 2014 7:01:03 PM UTC-4, Alan Chan wrote:

 namely, different linear regression, logistical regression, gradient 
 descent, neural network backprop, etc?



[julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Jutho
I don't know about the zeros, but one issue with your timings is certainly 
that you also measure the time to generate the random numbers, which is 
most probably not negligible.
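To separate the two effects, the inputs can be generated once outside the timed loop (sizes copied from the thread):

```julia
A = rand(200, 100)
B = rand(100, 200)
Z = zeros(100, 200)

# Now @time measures only the multiplications, not rand()/ones()/zeros():
@time for i in 1:1000; A * B; end
@time for i in 1:1000; A * Z; end
```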

Op donderdag 17 juli 2014 13:54:54 UTC+2 schreef Andrei Zh:

 I continue investigating matrix multiplication performance. Today I found 
 that multiplication by array of zeros(..) is several times faster than 
 multiplication by array of ones(..) or random numbers: 

 julia A = rand(200, 100)
 ...

 julia @time for i=1:1000 A * rand(100, 200) end 
  elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc 
 time)

  julia @time for i=1:1000 A * ones(100, 200) end 
  elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc 
 time)

  julia @time for i=1:1000 A * zeros(100, 200) end 
  elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc 
 time)

 So, A * zeros() is about 6 faster than other kinds of multiplication. Note 
 also that it uses ~7x more GC time. 

 On NumPy no such difference is seen:

 In [106]: %timeit dot(A, rand(100, 200))
 100 loops, best of 3: 2.77 ms per loop

 In [107]: %timeit dot(A, ones((100, 200)))
 100 loops, best of 3: 2.59 ms per loop

 In [108]: %timeit dot(A, zeros((100, 200)))
 100 loops, best of 3: 2.57 ms per loop


 So I'm curious, how multiplying by zeros matrix is different from other 
 multiplication types? 




[julia-users] finding all roots of complex function

2014-07-17 Thread Andrei Berceanu
Hi guys,

I would like to know if Julia (itself or through some existing package) can 
be used to find all the roots of a *complex* function F(x,y) of two 
variables. Here x and y are real, so F: R^2 -> C.
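One hedged sketch of the standard approach (not a package recommendation): treat F(x,y) = 0 as the real 2x2 system Re F = Im F = 0 and run Newton iterations from many starting points; the function, tolerances, and starting point below are all made up for illustration:

```julia
using LinearAlgebra

# Newton iteration on [Re F; Im F] = 0 with a finite-difference Jacobian.
function newton_root(F, x0, y0; tol=1e-12, maxiter=100, h=1e-7)
    x, y = x0, y0
    for _ in 1:maxiter
        f = F(x, y)
        r = [real(f), imag(f)]
        norm(r) < tol && break
        fx = (F(x + h, y) - f) / h     # ∂F/∂x by forward difference
        fy = (F(x, y + h) - f) / h     # ∂F/∂y
        J = [real(fx) real(fy); imag(fx) imag(fy)]
        x, y = [x, y] - J \ r
    end
    return x, y
end

# Hypothetical test function with known roots at (±1, 0):
F(x, y) = (x + im*y)^2 - 1
x, y = newton_root(F, 0.9, 0.1)   # converges toward (1.0, 0.0)
```

To find *all* roots in a region one would repeat this from a grid of starting points and deduplicate the converged results; this only finds the roots the iteration happens to reach.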

Thanks,
Andrei


[julia-users] Re: finding all roots of complex function

2014-07-17 Thread Andrei Berceanu
I should perhaps mention that this is part of a bigger scheme: to first 
find all the poles of G(x,y)/F(x,y) and then use the residue theorem to 
solve a complex integral of the type 
integral( G(x,y)/F(x,y), (x,y) )

On Thursday, July 17, 2014 3:15:45 PM UTC+2, Andrei Berceanu wrote:

 Hi guys,

 I would like to know if Julia (itself or though some existing package) can 
 be used to find all the roots of a *complex*, 2 variable function F(x,y). 
 Here x and y are real, so F:R - C.

 Thanks,
 Andrei



Re: [julia-users] Re: Julia idiom for something like CLOS's call-next-method

2014-07-17 Thread Jameson Nash
There's a second pattern, similar to Keno's, that can also often help avoid
needing invoke:

function f(x::Abstract)
    check_invariants(x)
    # does work on x
end
check_invariants(::Abstract) = nothing
check_invariants(x::Subtype) =
    (x.y == 4 && error("f(x) is undefined for x.y == 4"))
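A self-contained version of that pattern with hypothetical concrete types (the names, the field y, and the value 4 are just stand-ins for whatever invariant the subtype imposes):

```julia
abstract type AbstractThing end

struct Plain <: AbstractThing end

struct Special <: AbstractThing
    y::Int
end

# The generic hook does nothing; subtypes add their invariant checks here
# instead of overriding f itself.
check_invariants(::AbstractThing) = nothing
check_invariants(s::Special) =
    s.y == 4 && error("f(x) is undefined for y == 4")

function f(x::AbstractThing)
    check_invariants(x)
    return "work done"
end

f(Plain())       # "work done"
f(Special(3))    # "work done"
# f(Special(4)) throws: the Special invariant is violated
```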


On Thu, Jul 17, 2014 at 2:59 AM, Stefan Karpinski ste...@karpinski.org
wrote:

 I've occasionally found cases where invoke seems to be the only
 non-annoying way to accomplish some dispatch, but it's remarkably rare.
 Still, it does seem to happen.


 On Wed, Jul 16, 2014 at 11:55 PM, Markus Roberts mar...@reality.com
 wrote:


 Thank you both...invoke(bar,(Fu,),f) has exactly the semantics I was
 looking for (esp. being able to specify which type constraint I'm
 loosening in the case where there are multiple args participating in the
 dispatch).

 As for Keno's suggestion that I refactor into a primary/secondary method
 pattern, I can see how that would be the right solution in some cases, but
 doesn't feel like a good fit in my present case; my subtype's around
 method is only there to maintain additional invariants imposed by the
 subtype, and having an additional function that does the actual work
 without maintaining the invariant feels...icky.

 Thanks again for the prompt replies,
 -- MarkusQ





Re: [julia-users] Append! resulting in #undef elements. Intended behavior?

2014-07-17 Thread Jacob Quinn
Hi Jan,

You have your syntax a little mixed up. The usage of:

Type[]

actually declares an empty array with element type `Type`. So your
first line:

x = Array[]

is actually creating an array of arrays.

Likewise, you're seeing the error in

Array[1]

because you're trying to put the integer 1 into an array of arrays (and
Julia doesn't know how to convert an Int into an Array).

The last error is unfortunate: it seems that the call to `append!`
allocates space for the elements you're appending, but then fails when
actually trying to put the second array's elements into the newly
allocated space. Because `append!` is mutating, you're left with your
original array grown by the extra space, containing the #undef.

I think the semantics you're looking for are as follows:

x = Int[]  # declares an empty 1-d array with element type of `Int`

y = [1, 2, 3]  # literal syntax for creating an array with elements, (1, 2,
3). Type inference figures out that the elements are of type `Int`

append!(x,y)  # mutates the `x` array by appending all the elements of y to
it; this works because they're both of the same type

push!(x, 5)  # adds a single element, 5,  to the `x` array

Feel free to read through the section of the manual on arrays to get a
better feel for what's going on.

http://docs.julialang.org/en/latest/manual/arrays/

Cheers,

-Jacob


On Thu, Jul 17, 2014 at 8:21 AM, Jan Strube jan.str...@gmail.com wrote:



 Hi List!

 I'm a particle physicist just getting started with some julia. I've been
 using some python in the past but now I'm trying to use analyzing some lab
 data as an excuse to learn Julia.
 So far it looks like I'm going to stick with it for a while.

 I've been trying to play with basic image analysis, and I've come across
 the following behavior: append! complains that it doesn't find a suitable
 method for my call signature, but it does append an #undef. Is this
 intended?

 Please see below for a session.



_   _ _(_)_ |  A fresh approach to technical computing
   (_) | (_) (_)|  Documentation: http://docs.julialang.org
_ _   _| |_  __ _   |  Type help() to list help topics
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.0-rc1 (2014-07-14 02:04 UTC)
  _/ |\__'_|_|_|\__'_|  |
 |__/   |  x86_64-apple-darwin13.3.0

 julia> x = Array[]
 0-element Array{Array{T,N},1}

 julia> append!(x, Array[1])
 ERROR: `convert` has no method matching convert(::Type{Array{T,N}},
 ::Int64)
  in getindex at array.jl:121

 julia> x
 0-element Array{Array{T,N},1}

 julia> append!(x, Array[[1]])
 1-element Array{Array{T,N},1}:
  [1]

 julia> append!(x, [1])
 ERROR: `convert` has no method matching convert(::Type{Array{T,N}},
 ::Int64)
  in copy! at abstractarray.jl:197
  in append! at array.jl:475

 julia> x
 2-element Array{Array{T,N},1}:
  [1]
  #undef



[julia-users] Re: Correct (and fast) way to calculate the inverse zip on a list of tuples?

2014-07-17 Thread David Gonzales
It's possible to nest comprehensions:
[ [x[y] for x in mytuples] for y=1:2 ]
This expression can be splatted like you suggested.
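Concretely (with a tiny made-up input in place of the 136-element array, and a hypothetical two-argument function foo):

```julia
mytuples = [(1.0, 2.0), (3.0, 4.0)]

# Nested comprehension: outer loop over tuple positions, inner over the array,
# yielding a Vector of two Vector{Float64}s.
cols = [ [t[y] for t in mytuples] for y in 1:2 ]

foo(a, b) = a .+ b   # hypothetical function expecting two vectors
foo(cols...)         # splats into foo([1.0, 3.0], [2.0, 4.0]) == [3.0, 7.0]
```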

On Thursday, July 17, 2014 3:53:42 PM UTC+3, Tomas Lycken wrote:

 That's neat! Thanks!

 However, it only got me *almost* there - my next step was to use these 
 arrays as function arguments, so I want to use `...` splatting. However, 
 `foo([x[y] for x in mytuples,y=1:2]...)` will splat the entire matrix 
 element-wise, instead of row-wise.

 Is there a way to get it into two `Array{Float64,1}` objects, rather than 
 an `Array{Float64,2}`, and still only use it in one expression?

 (Yes, I realize I'm starting to make things more complicated than they 
 need to be, but this is an opportunity to learn new ways of doing things - 
 not just an attempt to get things done :P)

 // T

 On Thursday, July 17, 2014 2:05:38 PM UTC+2, David Gonzales wrote:

 it is possible to use two indices in a comprehension:
 mat = [x[y] for x in mytuples,y=1:2]
 now mat[:,1] is your first vector, and mat[:,2] is the second.

 On Thursday, July 17, 2014 12:37:20 PM UTC+3, Tomas Lycken wrote:

 I have an array of 2-tuples of floats, created as

 julia mytuples = (Float64,Float64)[(v.x, v.y) for v in vs] # slightly more 
 complicated in actual code
 136-element Array{(Float64,Float64),1}:
  (4.0926,-2.55505)   
  (4.170826,-2.586752)

 ...

 Now, I’d like to split this into two arrays of floats. I was under the 
 impression that zip could do this for me - according to the docs, zip 
 is its own inverse 
 http://docs.julialang.org/en/latest/stdlib/base/#Base.zip, and the 
 array of tuples does look like something I could get from zipping two 
 arrays. So I tried something similar to the example there:

 julia [zip(mytuples...)...]
 2-element Array{(Float64,Float64,Float64, ... and so on, 136 times...),1}:

 so I guess that only works on actual Zip objects, and not on arrays 
 (that could have been) generated by the zip function inside []. (Also, 
 since this uses splatting with ... on large lists, it might not be a 
 good idea in the first place…? 
 https://github.com/JuliaLang/julia/issues/6098#issuecomment-37203821)

 What’s the best way to accomplish what I want, i.e. transforming the 
 mytuple variable above into two Vector{Float64}s (possibly inside a 
 tuple or array or something)?

 // T
 ​



Re: [julia-users] Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Jutho Haegeman
In principle, it’s also best to wrap all of this in a function, although it 
doesn’t seem to matter that much for this case (on my machine).

I get a little over 0.6 seconds for the first, and about 0.55 s for the second 
and third. That sounds consistent with my expectation. Note also that the 
statement with rand has a slightly higher allocation. 

Since matrix multiplication seems to be slower on your machines (fewer cores?), 
whereas the time of rand is probably similar (since it is not multithreaded 
anyway, if I am correct), I guess the effect of rand is just unobservable 
in your timings. 

On 17 Jul 2014, at 15:54, Tomas Lycken tomas.lyc...@gmail.com wrote:

 @Jutho: My gut reaction was the same thing, but then I should be able to 
 reproduce the results, right? All three invocations take about 1.2-1.5 
 seconds on my machine.
 
 // T
 
 On Thursday, July 17, 2014 3:06:08 PM UTC+2, Jutho wrote:
 I don't know about the zeros, but one issue with your timings is certainly 
 that you also measure the time to generate the random numbers, which is most 
 probably not negligible.
 
 Op donderdag 17 juli 2014 13:54:54 UTC+2 schreef Andrei Zh:
 I continue investigating matrix multiplication performance. Today I found 
 that multiplication by array of zeros(..) is several times faster than 
 multiplication by array of ones(..) or random numbers: 
 
 julia A = rand(200, 100)
 ...
 
 julia @time for i=1:1000 A * rand(100, 200) end 
  elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc time)
 
  julia @time for i=1:1000 A * ones(100, 200) end 
  elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc time)
 
  julia @time for i=1:1000 A * zeros(100, 200) end 
  elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc time)
 
So, A * zeros() is about 6x faster than other kinds of multiplication. Note 
 also that it uses ~7x more GC time. 
 
 On NumPy no such difference is seen:
 
 In [106]: %timeit dot(A, rand(100, 200))
 100 loops, best of 3: 2.77 ms per loop
 
 In [107]: %timeit dot(A, ones((100, 200)))
 100 loops, best of 3: 2.59 ms per loop
 
 In [108]: %timeit dot(A, zeros((100, 200)))
 100 loops, best of 3: 2.57 ms per loop
 
 
 So I'm curious, how multiplying by zeros matrix is different from other 
 multiplication types? 
 
 



[julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Andrei Zh

@Jutho: precomputing matrices doesn't make too much difference:

 julia R = rand(100, 200)
 ... 
 julia Z = zeros(100, 200)
 ...
 julia O = ones(100, 200)
 ...

 julia @time for i=1:1000 A * R end 
 elapsed time: 2.845228639 seconds (32008 bytes allocated, 8.31% gc 
time)

 julia @time for i=1:1000 A * O end 
 elapsed time: 2.811217686 seconds (32008 bytes allocated, 8.33% gc 
time)

 julia @time for i=1:1000 A * Z end 
 elapsed time: 0.288209494 seconds (32008 bytes allocated, 83.87% gc 
time)



@Thomas: no difference either: 

 julia @time for i=1:1000 A * R end 
 elapsed time: 3.079787601 seconds (32008 bytes allocated, 8.15% gc 
time)

 julia gc()

 julia @time for i=1:1000 A * R end 
 elapsed time: 2.802706492 seconds (32008 bytes allocated, 7.15% gc 
time)

 julia gc()

... (5x more times with same results)

 julia @time for i=1:1000 A * Z end 
 elapsed time: 0.254381593 seconds (32008 bytes allocated, 81.42% gc 
time)

 julia gc()

 julia @time for i=1:1000 A * Z end 
 elapsed time: 0.252976879 seconds (32008 bytes allocated, 81.44% gc 
time)

 julia gc()

 julia @time for i=1:1000 A * R end 
 elapsed time: 2.77465144 seconds (32008 bytes allocated, 7.20% gc time)

-

There is, however, one interesting observation: the larger the matrices, 
the larger the difference. For example, for A = rand(20, 10); R = rand(10, 
20) and Z = zeros(10, 20) we have: 

 julia @time for i=1:100 A * R end 
 elapsed time: 6.073379483 seconds (329600 bytes allocated, 40.88% gc 
time)

 julia @time for i=1:100 A * Z end 
 elapsed time: 3.357091461 seconds (329600 bytes allocated, 74.58% gc 
time)

Only 2 times faster. But for A = rand(2000, 1000); R = rand(1000, 2000) and 
Z = zeros() we have: 

 julia @time for i=1:10 A * R end 
 elapsed time: 32.497353017 seconds (320001120 bytes allocated, 0.43% gc 
time)

 julia @time for i=1:10 A * Z end 
 elapsed time: 0.209265586 seconds (320001120 bytes allocated, 70.28% gc 
time)

155 times faster! 

Moreover, if I use a binary matrix (half of the elements are 1 and half are 0), 
the time is about half that for random numbers: 

 julia B = zeros(size(R))
 ... 

 julia B[R .< 0.5] = 1
 1

 julia @time for i=1:10 A * B end 
 elapsed time: 16.077884672 seconds (320001120 bytes allocated, 1.09% gc 
time)

So the time is proportional to the number of non-zero elements.

For reference: I use Intel i7 processor with 2 cores, hyperthreading 
enabled (so virtually 4 cores), but only 1 core working. 



Re: [julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Tim Holy
I get this:

julia @time for i = 1:10 A*R end
elapsed time: 7.428557846 seconds (320001120 bytes allocated, 1.43% gc time)

julia @time for i = 1:10 A*Z end
elapsed time: 7.233144631 seconds (320001120 bytes allocated, 1.46% gc time)

No difference.

Are you using a different BLAS than OpenBLAS?

--Tim

On Thursday, July 17, 2014 07:16:46 AM Andrei Zh wrote:
 Only 2 times faster. But for A = rand(2000, 1000); R = rand(1000, 2000) and 
 Z = zeros() we have:
 
  julia @time for i=1:10 A * R end 
  elapsed time: 32.497353017 seconds (320001120 bytes allocated, 0.43% gc 
 time)
 
  julia @time for i=1:10 A * Z end 
  elapsed time: 0.209265586 seconds (320001120 bytes allocated, 70.28% gc 
 time)
 
 155 times faster! 



Re: [julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Andrei Zh
@Tim: seems like that - Base.blas_vendor() returns :unknown, while 
Base.libblas_name equals libblas.so.3. Is there a way to switch to 
OpenBLAS? 


On Thursday, July 17, 2014 5:21:24 PM UTC+3, Tim Holy wrote:

 I get this: 

 julia @time for i = 1:10 A*R end 
 elapsed time: 7.428557846 seconds (320001120 bytes allocated, 1.43% gc 
 time) 

 julia @time for i = 1:10 A*Z end 
 elapsed time: 7.233144631 seconds (320001120 bytes allocated, 1.46% gc 
 time) 

 No difference. 

 Are you using a different BLAS than OpenBLAS? 

 --Tim 

 On Thursday, July 17, 2014 07:16:46 AM Andrei Zh wrote: 
  Only 2 times faster. But for A = rand(2000, 1000); R = rand(1000, 2000) 
 and 
  Z = zeros() we have: 
  
   julia @time for i=1:10 A * R end 
   elapsed time: 32.497353017 seconds (320001120 bytes allocated, 0.43% gc 
  time) 
  
   julia @time for i=1:10 A * Z end 
   elapsed time: 0.209265586 seconds (320001120 bytes allocated, 70.28% gc 
  time) 
  
  155 times faster! 



Re: [julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Andrei Zh
Ok, OpenBLAS wasn't installed on this laptop, after installing 
Base.blas_vendor() changed. I'll repeat my tests in the evening and post 
results. 
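For anyone else hitting this: on Debian/Ubuntu-style systems (an assumption about the setup here; the libblas.so.3 name suggests the distro's reference BLAS), switching the system BLAS might look like:

```shell
# Install OpenBLAS and repoint the generic libblas.so.3 at it
# (Debian/Ubuntu; package and alternative names vary by distro)
sudo apt-get install libopenblas-base
sudo update-alternatives --config libblas.so.3
```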

On Thursday, July 17, 2014 5:47:44 PM UTC+3, Andrei Zh wrote:

 @Tim: seems like that - Base.blas_vendor() returns :unknown, while 
 Base.libblas_name equals libblas.so.3. Is there a way to switch to 
 OpenBLAS? 


 On Thursday, July 17, 2014 5:21:24 PM UTC+3, Tim Holy wrote:

 I get this: 

 julia @time for i = 1:10 A*R end 
 elapsed time: 7.428557846 seconds (320001120 bytes allocated, 1.43% gc 
 time) 

 julia @time for i = 1:10 A*Z end 
 elapsed time: 7.233144631 seconds (320001120 bytes allocated, 1.46% gc 
 time) 

 No difference. 

 Are you using a different BLAS than OpenBLAS? 

 --Tim 

 On Thursday, July 17, 2014 07:16:46 AM Andrei Zh wrote: 
  Only 2 times faster. But for A = rand(2000, 1000); R = rand(1000, 2000) 
 and 
  Z = zeros() we have: 
  
   julia @time for i=1:10 A * R end 
   elapsed time: 32.497353017 seconds (320001120 bytes allocated, 0.43% 
 gc 
  time) 
  
   julia @time for i=1:10 A * Z end 
   elapsed time: 0.209265586 seconds (320001120 bytes allocated, 70.28% 
 gc 
  time) 
  
  155 times faster! 



[julia-users] Re: Call for Unicode julia source examples

2014-07-17 Thread Ben Arthur
thanks for the reply.  i didn't realize, and am glad to hear, that many 
functions have unicode equivalents in base.


Re: [julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Andrei Zh
After switching to OpenBLAS results became the same for zero- and 
non-zero-matrices. For example, for R and Z of size (1000, 2000) results 
are as follows: 

 julia @time for i=1:10 A*R end
 elapsed time: 1.733125403 seconds (320001120 bytes allocated, 6.63% gc 
time)

 julia @time for i=1:10 A*Z end
 elapsed time: 1.859967006 seconds (320001120 bytes allocated, 6.33% gc 
time)

Pretty stable now. 

Thanks for your help! 

On Thursday, July 17, 2014 6:00:57 PM UTC+3, Andrei Zh wrote:

 Ok, OpenBLAS wasn't installed on this laptop, after installing 
 Base.blas_vendor() changed. I'll repeat my tests in the evening and post 
 results. 

 On Thursday, July 17, 2014 5:47:44 PM UTC+3, Andrei Zh wrote:

 @Tim: seems like that - Base.blas_vendor() returns :unknown, while 
 Base.libblas_name equals libblas.so.3. Is there a way to switch to 
 OpenBLAS? 


 On Thursday, July 17, 2014 5:21:24 PM UTC+3, Tim Holy wrote:

 I get this: 

 julia @time for i = 1:10 A*R end 
 elapsed time: 7.428557846 seconds (320001120 bytes allocated, 1.43% gc 
 time) 

 julia @time for i = 1:10 A*Z end 
 elapsed time: 7.233144631 seconds (320001120 bytes allocated, 1.46% gc 
 time) 

 No difference. 

 Are you using a different BLAS than OpenBLAS? 

 --Tim 

 On Thursday, July 17, 2014 07:16:46 AM Andrei Zh wrote: 
  Only 2 times faster. But for A = rand(2000, 1000); R = rand(1000, 
 2000) and 
  Z = zeros() we have: 
  
   julia @time for i=1:10 A * R end 
   elapsed time: 32.497353017 seconds (320001120 bytes allocated, 0.43% 
 gc 
  time) 
  
   julia @time for i=1:10 A * Z end 
   elapsed time: 0.209265586 seconds (320001120 bytes allocated, 70.28% 
 gc 
  time) 
  
  155 times faster! 



[julia-users] Re: finding all roots of complex function

2014-07-17 Thread Steven G. Johnson


On Thursday, July 17, 2014 9:19:04 AM UTC-4, Andrei Berceanu wrote:

 I should perhaps mention that this is part of a bigger scheme, to first 
 find all the poles of  G(x,y)/F(x,y) and then use the residue theorem for 
 solving a complex integral of the type 
 integral( G(x,y)/F(x,y), (x,y))


Unless F(x,y) is very special (e.g. a polynomial), I suspect that it would 
be much faster to just do the integral.   Since you have analytic 
functions, 1d/contour integration is very efficient (with an appropriate 
algorithm, e.g. the built-in quadgk function) unless you have poles lying 
very close to your integration contour (and even then it is not too bad 
with an adaptive scheme like quadgk.)
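As a small illustration of how cheap the 1d/contour integral is, a sketch (the integrand `f` is made up, with a single pole at the origin; quadgk is in Base on Julia 0.3, and in the QuadGK package on later versions):

```julia
# Integrate f(z) = 1/z counterclockwise around the unit circle.
# Parametrize z = e^{it}, so dz = i e^{it} dt, t in [0, 2pi].
f(z) = 1 / z
g(t) = f(exp(im * t)) * im * exp(im * t)
val, err = quadgk(g, 0, 2pi)
# By the residue theorem, val should come out close to 2*pi*im
# (2*pi*im times the residue of f at 0).
```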

In contrast, finding *all* the zeros of an arbitrary analytic function is 
hard, usually harder than integrating it unless you have a good idea of 
where the zeros are.   In general, it's not practical to guarantee that you 
have found all the zeros unless you can restrict your search to some finite 
portion of the complex plane.   For finding the roots of analytic functions 
inside a given contour, some of the best algorithms actually involve doing 
a sequence of integrals 
(http://www.chebfun.org/examples/roots/ComplexRoots.html) that are just as 
hard as your original integral above.   So, you might as well just do the 
integral to begin with.



[julia-users] [newb] namespaces?

2014-07-17 Thread Neal Becker
One thing that surprises me, coming from a largely python background, is there 
seems to be little use of namespaces in julia.  For example, it seems that 
everything in the standard library is just part of one global namespace.

If true, this does seem to go against trends in modern design, I believe.  I 
would think this could cause trouble down the road.



Re: [julia-users] Re: Capture the output of Julia's console

2014-07-17 Thread Steven G. Johnson
As I understand it, the REPL makes a private copy of the output-stream 
descriptors early on, which is why you can't redirect them.This is 
actually quite useful behavior, because otherwise it would be impossible to 
debug code using redirect_stdout etc. in the REPL (because the REPL would 
stop working as soon as you redirect).   (The REPL also needs to be able to 
talk to the terminal at a lower level than just stdout in order to support 
things like line editing.)  Also note that the REPL is a program written in 
Julia, not a part of the Julia language or standard library, so you 
wouldn't necessarily expect to see documentation of the REPL's low-level 
terminal-communication internals.

On Wednesday, July 16, 2014 8:48:11 PM UTC-4, Laszlo Hars wrote:

 I don't know, how to file an issue. Where can I get instructions? (The 
 STDERR behavior may be intentional. I kept asking about it in this group, 
 for months, but nobody was interested.)


The difficulty here is that you are trying to hack the REPL for a task to 
which it is poorly suited.  Multiple alternative suggestions have been 
offered for better ways to communicate between Julia and an external 
process, but you've rejected them.   Basically, you should set up a channel 
to receive messages, execute the code yourself, catch errors yourself, and 
then you can do whatever you want with the output, rather than trying to 
intercept it from the REPL.   We didn't write all of your code for you, but 
we gave enough examples that you should be able to implement what you need.

Note that this is independent from whether you want the REPL to be running 
too, in order to be able to use Julia interactively with the same data.   
Just run your receive-execute-respond loop in another thread (asynchronous 
task), and you can have the REPL running at the same time.

Another example is IJulia, which receives messages with Julia code from 
another process (the IPython front-end) and then sends back the results, 
including errors.   IJulia never invokes the REPL at all.
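A hypothetical 0.3-era sketch of that receive-execute-respond loop (the port number and the "RESULT:" framing are invented for illustration; a real protocol would also need to frame multi-line code blocks unambiguously):

```julia
# Runs as a task next to the REPL: receive code, execute it, catch errors,
# and send back the result -- without touching the REPL's output streams.
@async begin
    server = listen(8000)                 # hypothetical port
    while true
        sock = accept(server)
        @async while isopen(sock)
            expr = parse(readline(sock))  # receive one expression per line
            result = try
                eval(Main, expr)          # execute the code ourselves
            catch err
                err                       # catch errors ourselves
            end
            println(sock, "RESULT: ", repr(result))   # respond
        end
    end
end
```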


Re: [julia-users] Re: Capture the output of Julia's console

2014-07-17 Thread Steven G. Johnson
On Wednesday, July 16, 2014 11:17:32 PM UTC-4, Laszlo Hars wrote:

  the STDERR behavior was intentional
 1) It just does not seem to be logical: warning messages still get written 
 to STDERR. Why? And what is the use of STDERR, in general? STDERR is in the 
 documentation, without telling anything about it.


You are confusing two output channels:

1) A Julia program may have side effects, e.g. it may call println(...), 
warn(), etc.   These are written to some stream, by default STDOUT for 
print and STDERR for warn, and in general you should use STDOUT and STDERR 
just like stdout and stderr in C (and they use the same file descriptors as 
libc's stdout and stderr, internally, which is useful for Unix pipes etc.). 
  This is a property of the Julia language and standard library.

2) When you execute a code block in the REPL, the REPL also prints its 
result value (if any), or any exception that was thrown.  This is not a 
part of the code or a part of the Julia language or standard library, it is 
a behavior of the REPL, which is an interactive program that happens to be 
bundled with Julia.   There is no reason to use the same output channel 
here as the ones used for side effects of Julia code, and in fact it is 
sometimes useful to have it be a different channel (e.g. if redirect_stdout 
has redirected Julia side effects, but you still want to run the REPL). 
 And the REPL can be easily replaced by a different interactive 
environment, e.g. IJulia or Julia Studio, and these environments may do 
different things with the results/exceptions produced by expressions that 
are evaluated.
 

 2) The REPL behavior ought to be documented


The low-level mechanisms by which the REPL communicates with the terminal 
are not part of the Julia language or standard library.
or they are so complicated, that I cannot include them into my programs.

 println("RESULT: ", eval(parse(readline())))
 As we discussed it several times, this does not work with multi-line 
 input. Sure, I can write my own REPL, but this is not the issue here. We 
 should not have to do it, since there is one included in the Julia 
 distribution. It just does not seem to have sufficient documentation.


We explained to you several times how to capture multi-line input.  You can 
either use parse_input_line like the REPL or (better) you can devise your 
own communication protocol (like the one used between IJulia/IPython), that 
communicates code blocks to be evaluated in an unambiguous way.


Re: [julia-users] [newb] namespaces?

2014-07-17 Thread Leah Hanson
Packages are each in their own module, which are namespaces.

Because generic functions own all their scattered methods, there is less
of a need to hide functions with the same name inside different namespaces
in Julia.
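For a concrete picture, a minimal sketch (module and function names invented) of how a module provides a namespace when you do want one, using 0.3-era syntax:

```julia
module MyStats                     # a namespace, as every package defines
export summarize
summarize(xs) = sum(xs) / length(xs)
end

using MyStats                      # brings exported names into scope
summarize([1, 2, 3])               # 2.0, via the exported name
MyStats.summarize([1, 2, 3])       # fully qualified works without `using`
```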

Could you be more specific about what you're concerned could happen? (for
example, what is a specific problem that could arise from not using
namespaces heavily enough?)

Best,
  Leah


On Thu, Jul 17, 2014 at 1:13 PM, Neal Becker ndbeck...@gmail.com wrote:

 One thing that surprises me, coming from a largely python background, is
 there
 seems to be little use of namespaces in julia.  For example, it seems that
 everything in the standard library is just part of one global namespace.

 If true, this does seem to go against trends in modern design, I believe.
  I
 would think this could cause trouble down the road.




Re: [julia-users] Re: Capture the output of Julia's console

2014-07-17 Thread Laszlo Hars
 you wouldn't necessarily expect to see documentation of the REPL's 
low-level terminal-communication internals
I understand. However, as we learn the language we use the REPL. It is the 
first thing we see of Julia. A clear distinction in the documentation would 
be really helpful, telling which Julia features work in the REPL and 
which do not.

 trying to hack the REPL for a task to which it is poorly suited
These hacks have been working for me. I could achieve everything I wanted 
with 8-10 lines of Julia code.

 Multiple alternative suggestions have been offered for better ways to 
communicate between Julia and an external process, but you've rejected them
I did not ask for help communicating between Julia and an external process. 
I posted several code snippets, which do accomplish that task, with 8-10 
lines of code. I asked for help capturing the output of the REPL. I could not 
imagine that it was that hard. I believe it should not be.

It is an entirely different conversation to discuss if I need to capture 
the output of the REPL, and if I could achieve the same or better results 
with something else. I would be glad to discuss that, but the point here 
was simply how I can capture the output of Julia's console. As I understand 
now, the only way is to define my own terminal and replace the REPL 
terminal with it.


On Thursday, July 17, 2014 12:20:19 PM UTC-6, Steven G. Johnson wrote:

 As I understand it, the REPL makes a private copy of the output-stream 
 descriptors early on, which is why you can't redirect them.This is 
 actually quite useful behavior, because otherwise it would be impossible to 
 debug code using redirect_stdout etc. in the REPL (because the REPL would 
 stop working as soon as you redirect).   (The REPL also needs to be able to 
 talk to the terminal at a lower level than just stdout in order to support 
 things like line editing.)  Also note that the REPL is a program written in 
 Julia, not a part of the Julia language or standard library, so you 
 wouldn't necessarily expect to see documentation of the REPL's low-level 
 terminal-communication internals.

 On Wednesday, July 16, 2014 8:48:11 PM UTC-4, Laszlo Hars wrote:

 I don't know, how to file an issue. Where can I get instructions? (The 
 STDERR behavior may be intentional. I kept asking about it in this group, 
 for months, but nobody was interested.)


 The difficulty here is that you are trying to hack the REPL for a task to 
 which it is poorly suited.  Multiple alternative suggestions have been 
 offered for better ways to communicate between Julia and an external 
 process, but you've rejected them.   Basically, you should set up a channel 
 to receive messages, execute the code yourself, catch errors yourself, and 
 then you can do whatever you want with the output, rather than trying to 
 intercept it from the REPL.   We didn't write all of your code for you, but 
 we gave enough examples that you should be able to implement what you need.

 Note that this is independent from whether you want the REPL to be running 
 too, in order to be able to use Julia interactively with the same data.   
 Just run your receive-execute-respond loop in another thread (asynchronous 
 task), and you can have the REPL running at the same time.

 Another example is IJulia, which receives messages with Julia code from 
 another process (the IPython front-end) and then sends back the results, 
 including errors.   IJulia never invokes the REPL at all.



[julia-users] Re: [newb] namespaces?

2014-07-17 Thread Iain Dunning
With multiple dispatch, this matters a lot less. Modules pick up the rest 
of the slack for when it does matter though.

On Thursday, July 17, 2014 2:14:14 PM UTC-4, Neal Becker wrote:

 One thing that surprises me, coming from a largely python background, is 
 there 
 seems to be little use of namespaces in julia.  For example, it seems that 
 everything in the standard library is just part of one global namespace. 

 If true, this does seem to go against trends in modern design, I believe. 
  I 
 would think this could cause trouble down the road. 



Re: [julia-users] Append! resulting in #undef elements. Intended behavior?

2014-07-17 Thread Simon Kornblith
The fact that append! grows the array on failure seems like a bug 
nonetheless. If convert throws it seems preferable to leave the array as 
is. I'll file an issue.

Simon

On Thursday, July 17, 2014 9:34:21 AM UTC-4, Jacob Quinn wrote:

 Hi Jan,

 You have your syntax a little mixed up. The usage of:

 Type[]

 actually declares an empty array with element type of `Type`. So your 
 first line:

 x = Array[]

 is actually creating an array of arrays.

 Likewise, you're seeing the error in

 Array[1]

 Because you're trying to put an Int[1] array into an array of arrays (and 
 julia doesn't know how to make that conversion).

 The last error is unfortunate, because it seems that the call to `append!` 
 is allocating space for the array you're appending but then fails when 
 actually trying to put the 2nd array into the newly allocated space. 
 Because `append!` is mutating, you're left with your original array mutated 
 with the extra space, with the #undef.

 I think the semantics you're looking for are as follows:

 x = Int[]  # declares an empty 1-d array with element type of `Int`

 y = [1, 2, 3]  # literal syntax for creating an array with elements, (1, 
 2, 3). Type inference figures out that the elements are of type `Int`

 append!(x,y)  # mutates the `x` array by appending all the elements of y 
 to it; this works because they're both of the same type

 push!(x, 5)  # adds a single element, 5,  to the `x` array 

 Feel free to read through the section in the manual on arrays to get a 
 better feel for what's going on.

 http://docs.julialang.org/en/latest/manual/arrays/
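The calls above, collected into one runnable snippet (the literal values are illustrative):

```julia
x = Int[]                    # empty 1-d array with element type Int
append!(x, [1, 2, 3])        # append every element of another Int array
push!(x, 5)                  # append a single element
# x == [1, 2, 3, 5]

xs = Array[]                 # an array *of arrays*, as in the original session
push!(xs, [1])               # push one array in as a single element
append!(xs, Array[[2, 3], [4]])   # append two more arrays
# length(xs) == 3
```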

 Cheers,

 -Jacob


 On Thu, Jul 17, 2014 at 8:21 AM, Jan Strube jan.s...@gmail.com 
 javascript: wrote:



 Hi List!

 I'm a particle physicist just getting started with some julia. I've been 
 using some python in the past but now I'm trying to use analyzing some lab 
 data as an excuse to learn Julia.
 So far it looks like I'm going to stick with it for a while.

 I've been trying to play with basic image analysis, and I've come across 
 the following behavior: append! complains that it doesn't find a suitable 
 method for my call signature, but it does append an #undef. Is this 
 intended?

 Please see below for a session.



                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" to list help topics
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.3.0-rc1 (2014-07-14 02:04 UTC)
   _/ |\__'_|_|_|\__'_|  |
  |__/                   |  x86_64-apple-darwin13.3.0

 julia x = Array[]
 0-element Array{Array{T,N},1}

 julia append!(x, Array[1])
 ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, 
 ::Int64)
  in getindex at array.jl:121

 julia x
 0-element Array{Array{T,N},1}

 julia append!(x, Array[[1]])
 1-element Array{Array{T,N},1}:
  [1]

 julia append!(x, [1])
 ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, 
 ::Int64)
  in copy! at abstractarray.jl:197
  in append! at array.jl:475

 julia x
 2-element Array{Array{T,N},1}:
 [1]
  #undef




[julia-users] question about doc

2014-07-17 Thread Comer Duncan
I just looked at the docs in the section on Performance Tips.  There is an
example:

norm(A::Vector) = sqrt(real(dot(x,x)))
norm(A::Matrix) = max(svd(A)[2])

It would seem that the first case is a typo. I would have thought that
it should read

norm(A::Vector) = sqrt(real(dot(A,A)))


Is this true?

Thanks.

Comer


Re: [julia-users] Re: Capture the output of Julia's console

2014-07-17 Thread Laszlo Hars
 We explained to you several times how to capture multi-line input.
You missed my points again. This was not the issue in this thread. Still, 
if you look at the code snippets I posted here, you can see that I knew how 
to handle multi-line input. I did not need your friendly reminder or 
explanations.


[julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Elliot Saba
I was reading the docs
http://julia.readthedocs.org/en/latest/manual/control-flow/#tasks-and-events,
and it seems to me that it's saying I can use tasks to run multiple
subprocesses at once.  E.g., if I have some long-running subprocesses such
as `sleep 10`, I should be able to wrap each in a Task and use the inherent
wait() command that running each subprocess would entail to switch to
another task and kick off another subprocess.  Is this correct?  If it is,
can someone provide me a quick example?  I can't seem to get this to work,
but I've never used Tasks before so that's hardly surprising.  ;)
-E


Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Tim Holy
Check out the implementation of multi.jl:pmap (the part in the @sync and 
@async blocks), it's a great example.

--Tim
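A minimal sketch of that pattern applied to subprocesses (the `sleep` commands are placeholders for real long-running jobs):

```julia
cmds = [`sleep 2`, `sleep 2`, `sleep 2`]

# Each @async task blocks waiting on its own subprocess; that wait yields
# to the scheduler, so the three processes run concurrently. @sync blocks
# until all the enclosed tasks are done.
@time @sync for c in cmds
    @async run(c)
end
# elapsed time should be ~2 s, not ~6 s
```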

On Thursday, July 17, 2014 04:23:48 PM Elliot Saba wrote:
 I was reading the docs
 http://julia.readthedocs.org/en/latest/manual/control-flow/#tasks-and-event
 s, and it seems to me that it's saying I can use tasks to run multiple
 subprocesses at once.  E.g., if I have some long-running subprocesses such
 as `sleep 10`, I should be able to wrap each in a Task and use the inherent
 wait() command that running each subprocess would entail to switch to
 another task and kick off another subprocess.  Is this correct?  If it is,
 can someone provide me a quick example?  I can't seem to get this to work,
 but I've never used Tasks before so that's hardly surprising.  ;)
 -E



[julia-users] Re: question about doc

2014-07-17 Thread Steven G. Johnson
It's a typo, thanks.  Should be fixed now.


Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Kevin Squire
Note that the tasks all run in the same thread, so if any of the tasks (or
the main process) have a long running computation which does not yield, the
other processes won't run.  I believe that `sleep 10` falls in this
category, although I'm not 100% certain.

Cheers,
   Kevin


On Thu, Jul 17, 2014 at 1:49 PM, Tim Holy tim.h...@gmail.com wrote:

 Check out the implementation of multi.jl:pmap (the part in the @sync and
 @async blocks), it's a great example.

 --Tim

 On Thursday, July 17, 2014 04:23:48 PM Elliot Saba wrote:
  I was reading the docs
  
 http://julia.readthedocs.org/en/latest/manual/control-flow/#tasks-and-event
  s, and it seems to me that it's saying I can use tasks to run multiple
  subprocesses at once.  E.g., if I have some long-running subprocesses
 such
  as `sleep 10`, I should be able to wrap each in a Task and use the
 inherent
  wait() command that running each subprocess would entail to switch to
  another task and kick off another subprocess.  Is this correct?  If it
 is,
  can someone provide me a quick example?  I can't seem to get this to
 work,
  but I've never used Tasks before so that's hardly surprising.  ;)
  -E




[julia-users] Re: any packages for AI usage

2014-07-17 Thread Alan Chan
Great. Very useful. Thanks.

On Thursday, July 17, 2014 9:01:20 AM UTC-4, Avik Sengupta wrote:

 Svaksha maintains a hand curated list of Julia resources. Note that not 
 all of them are registered in METADATA. 

 https://github.com/svaksha/Julia.jl/blob/master/AI.md

 Regards
 -
 Avik

 On Thursday, 17 July 2014 11:59:25 UTC+1, Abe Schneider wrote:

 In my spare time I've been working on a Torch7-like interface for MLPs. 
 I remember seeing at least one other person working on a NN library (and I 
 suspect there are likely other offerings in the works).

 On Wednesday, July 16, 2014 7:01:03 PM UTC-4, Alan Chan wrote:

 namely, different linear regression, logistical regression, gradient 
 descent, neural network backprop, etc?



Re: [julia-users] Append! resulting in #undef elements. Intended behavior?

2014-07-17 Thread Ivar Nesje
https://github.com/JuliaLang/julia/issues/7642


[julia-users] Populating a DArray from file...

2014-07-17 Thread Ivar Nesje
Bigger than memory smells of mmap_array

http://docs.julialang.org/en/latest/stdlib/base/#Base.mmap_array
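A 0.3-era sketch (the filename is invented; later Julia versions moved this API to `Mmap.mmap`):

```julia
# Create a file of data, then map it into an Array without loading it all
# into RAM -- pages are faulted in from disk as elements are touched.
s = open("big.bin", "w+")
write(s, zeros(Float64, 1000, 1000))        # ~8 MB on disk
A = mmap_array(Float64, (1000, 1000), s, 0) # map the file from offset 0
A[1, 1]                                     # reads through the mapping
close(s)
```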


Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Elliot Saba
Thanks, I'll take a look, although I have the gut feeling that these
functions are targeted toward multiple processes.
-E


On Thu, Jul 17, 2014 at 4:49 PM, Tim Holy tim.h...@gmail.com wrote:

 Check out the implementation of multi.jl:pmap (the part in the @sync and
 @async blocks), it's a great example.

 --Tim

 On Thursday, July 17, 2014 04:23:48 PM Elliot Saba wrote:
  I was reading the docs
  
 http://julia.readthedocs.org/en/latest/manual/control-flow/#tasks-and-event
  s, and it seems to me that it's saying I can use tasks to run multiple
  subprocesses at once.  E.g., if I have some long-running subprocesses
 such
  as `sleep 10`, I should be able to wrap each in a Task and use the
 inherent
  wait() command that running each subprocess would entail to switch to
  another task and kick off another subprocess.  Is this correct?  If it
 is,
  can someone provide me a quick example?  I can't seem to get this to
 work,
  but I've never used Tasks before so that's hardly surprising.  ;)
  -E




Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Elliot Saba
I'm trying to run shell commands in parallel.  Shouldn't waiting for a
process to finish just call wait(), which then invokes the scheduler?  Or
did I completely misread the paragraph linked above?  Specifically,
I'm talking about the sections that say:

"The basic function for waiting for an event is wait. Several objects
implement wait; for example, given a Process object, wait will wait for it
to exit."

In the next paragraph, it says:

"When a task calls wait on a Condition, the task is marked as non-runnable,
added to the condition’s queue, and switches to the scheduler. The
scheduler will then pick another task to run, or block waiting for external
events."

I guess I thought that "block waiting for external events" would only
happen when there were no other tasks to run, but perhaps what's happening
is that switching to other tasks only happens when I'm waiting on a
condition constructed within Julia?
-E



On Thu, Jul 17, 2014 at 5:25 PM, Kevin Squire kevin.squ...@gmail.com
wrote:

 Note that the tasks all run in the same thread, so if any of the tasks (or
 the main process) have a long running computation which does not yield, the
 other processes won't run.  I believe that `sleep 10` falls in this
 category, although I'm not 100% certain.

 Cheers,
Kevin







Re: [julia-users] Append! resulting in #undef elements. Intended behavior?

2014-07-17 Thread Jan Strube
Thank you for reporting the issue.
I wasn't sure, but I thought I'd mention this as long as it's still a RC.

Jan


On Friday, July 18, 2014 4:55:36 AM UTC+9, Simon Kornblith wrote:

 The fact that append! grows the array on failure seems like a bug 
 nonetheless. If convert throws, it seems preferable to leave the array as 
 is. I'll file an issue.

 Simon

 On Thursday, July 17, 2014 9:34:21 AM UTC-4, Jacob Quinn wrote:

 Hi Jan,

 You have your syntax a little mixed up. The usage of:

 Type[]

 actually declares an empty array with element type of `Type`. So your 
 first line:

 x = Array[]

 is actually creating an array of arrays.

 Likewise, you're seeing the error in

 Array[1]

 because you're trying to put the integer 1 into an array of arrays (and 
 Julia doesn't know how to make that conversion).

 The last error is unfortunate: it seems that the call to 
 `append!` allocates space for the array you're appending but then fails 
 when actually trying to put the second array into the newly allocated space. 
 Because `append!` is mutating, you're left with your original array grown 
 by the extra space, containing the #undef.

 I think the semantics you're looking for are as follows:

 x = Int[]  # declares an empty 1-d array with element type of `Int`

 y = [1, 2, 3]  # literal syntax for creating an array with elements (1, 
 2, 3). Type inference figures out that the elements are of type `Int`

 append!(x,y)  # mutates the `x` array by appending all the elements of y 
 to it; this works because they're both of the same type

 push!(x, 5)  # adds a single element, 5, to the `x` array 

 Feel free to read through the section in the manual on arrays to get a 
 better feel for what's going on.

 http://docs.julialang.org/en/latest/manual/arrays/

 Cheers,

 -Jacob


 On Thu, Jul 17, 2014 at 8:21 AM, Jan Strube jan.s...@gmail.com wrote:



 Hi List!

  I'm a particle physicist just getting started with Julia. I've used 
  some Python in the past, but now I'm using the analysis of some lab 
  data as an excuse to learn Julia.
 So far it looks like I'm going to stick with it for a while.

 I've been trying to play with basic image analysis, and I've come across 
 the following behavior: append! complains that it doesn't find a suitable 
 method for my call signature, but it does append an #undef. Is this 
 intended?

 Please see below for a session.



                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" to list help topics
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.3.0-rc1 (2014-07-14 02:04 UTC)
   _/ |\__'_|_|_|\__'_|  |
  |__/                   |  x86_64-apple-darwin13.3.0

  julia> x = Array[]
  0-element Array{Array{T,N},1}

  julia> append!(x, Array[1])
  ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, 
  ::Int64)
   in getindex at array.jl:121

  julia> x
  0-element Array{Array{T,N},1}

  julia> append!(x, Array[[1]])
  1-element Array{Array{T,N},1}:
   [1]

  julia> append!(x, [1])
  ERROR: `convert` has no method matching convert(::Type{Array{T,N}}, 
  ::Int64)
   in copy! at abstractarray.jl:197
   in append! at array.jl:475

  julia> x
  2-element Array{Array{T,N},1}:
   [1]
   #undef
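Jacob's explanation above can be condensed into a short runnable sketch (variable names are illustrative; behavior is as described in the thread):

```julia
# An array of arrays vs. a typed element array (per Jacob's explanation).
aoa = Array[]            # empty array whose *elements* are arrays
push!(aoa, [1])          # OK: [1] is itself an array
# append!(aoa, [1])      # would error: 1 is an Int, not an Array

x = Int[]                # empty 1-d array with element type Int
y = [1, 2, 3]            # literal array; inferred element type Int
append!(x, y)            # x == [1, 2, 3]
push!(x, 5)              # x == [1, 2, 3, 5]
```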




Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Jameson Nash
you can also call `spawn` (or `open`) rather than `run` to run a process
asynchronously in the current task (it returns a process handle rather than
a return value):


function run(cmds::AbstractCmd, args...)
    ps = spawn(cmds, spawn_opts_inherit(args...)...)
    success(ps) ? nothing : pipeline_error(ps)
end
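
Putting Jameson's point together with the original question, here is a minimal sketch of waiting on several subprocesses at once (the `sleep` commands are placeholder workloads):

```julia
# Start several external commands concurrently; each @async task yields
# while its child process runs, so the commands execute in parallel.
# @sync blocks until every enclosed task has finished.
results = Bool[]
@sync for i in 1:3
    @async push!(results, success(`sleep 0.2`))   # true if exit status 0
end
# results == [true, true, true]; total wall time is ~0.2s, not 0.6s
```

This is cooperative multitasking on one OS thread: each task parks in wait() on its child process and the scheduler switches to the next, which is the behavior the linked manual section describes.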







Re: [julia-users] Waiting on multiple subprocesses to finish?

2014-07-17 Thread Elliot Saba
Great!  That answers it, Jameson.  Thanks!

