Re: [julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-28 Thread Jeff Waller


On Saturday, June 27, 2015 at 5:03:53 PM UTC-4, ele...@gmail.com wrote:

 Is your od program flushing its output?  Maybe it's stuck in its internal 
 buffers, not in the pipe.


Nah, if it's od, it's not going to do that; the only thing simpler 
is cat. I suppose you could try cat.

What happens when you do something like this?

julia> fromStream,toStream,p = readandwrite(`cat`)
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`cat`, ProcessRunning))

julia> toStream.line_buffered
true

julia> toStream.line_buffered = false
false

julia> fromStream.line_buffered = false
false

julia> write(toStream, "x")
1

julia> readavailable(fromStream)
1-element Array{UInt8,1}:
 0x78


Re: [julia-users] What's the best way to implement multiple methods with similar arguments?

2015-06-28 Thread ks
Hello Mauro,

Thank you for your answer.

 1. Use different function names: getitems, getitems_maxid. Not too elegant
  as you mix purpose and details of function usage in its name.
  2. Use named arguments. This will cause the function implementation to grow
  (a series of if / else), again not too elegant.
  3. Define a new type: ItemId which behaves exactly as Int but can be used
  to 'activate' multiple dispatch (one function would use Int and the second
  one would use ItemId). Generally not the best approach if you have methods
  each having an argument that should really be represented as an Int rather
  than a new type.

 For dispatch to work you have to use #3.  By the same logic you use in 
 #1 this is good: a 3 of ItemId has different units to a 3 of 
 maxnumitems.  For instance, you shouldn't be able to do ItemId + 
 maxnumitems.  Using different types gives you that and is the most 
 Julian approach but #1 and #2 work fine and might be the right solution 
 at times too.


The units explanation is very convincing. But what would you do when you 
have two methods that have different semantics but are indistinguishable by 
their argument types? A quick example:

getitems( maxitemid::ItemId )
getitems( minitemid::ItemId )

Both arguments use the same type, so no multiple dispatch is possible 
here (introducing two types here wouldn't be right, I feel). I'm just 
thinking about the general approach to structuring code in Julia: when do 
you handle things with multiple dispatch, and when do you switch over to 
another convention?
 

 Note that a type such as 

 immutable ItemId 
  id::Int 
 end  

is as performant as an Int.  

 
Thanks. Would it be possible to define a new type which is not a composite 
type, just an Int without any fields but with its own name, so that you can 
use it as a rather than a.id?
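[A minimal sketch of how far the one-field wrapper can be taken; the particular methods defined here are illustrative choices, not something prescribed in the thread:]

```julia
import Base: convert, isless, +

immutable ItemId
    id::Int
end

# let an ItemId flow back to an Int wherever one is required
convert(::Type{Int}, a::ItemId) = a.id

# define comparisons and arithmetic only as needed
isless(a::ItemId, b::ItemId) = isless(a.id, b.id)
+(a::ItemId, n::Int) = ItemId(a.id + n)

ItemId(3) + 1           # ItemId(4)
ItemId(3) < ItemId(5)   # true
```

With a handful of such methods the wrapper is nearly as convenient as a bare Int, while still carrying its own type for dispatch.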

Thank you again,
Krystian


[julia-users] Re: What's the best way to implement multiple methods with similar arguments?

2015-06-28 Thread ks
Hello Andrew,

Thanks!

the other answer is on the money, but, in this particular case, it seems to 
 me that you might want to have a function that could take both of those, 
 with the idea that you never get more than max_num_items, but that if you 
 find max_iter_id before that, the sequence stops there.

 in that case, named args would be more suitable.  something like

 function getitems(; max_num_items=-1, max_iter_id=...)


In this case it would work nicely because all the cases make sense: none, 
one (any), or both.
 


 where you might still have a special type for item id, but also have a 
 special singleton that means none (like -1 means unlimited for 
 max_num_items).

 then you could specify none (all items), either, or both.


So you'd have a parent and two child types?:

           ItemId
          /      \
ExistingItemId  NoneItemId (Singleton)

What would be the difference between using one type ItemId and nothing to 
denote none?:

getitems(; max_num_items = -1, max_item_id = nothing )

I also read about the Nullable{T} type; perhaps it could be used in this 
case as well.
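[A sketch of the Nullable{T} variant, using the 0.4-era Nullable API; the function body is hypothetical, just showing the argument handling:]

```julia
# max_item_id defaults to an empty Nullable, i.e. "no id limit"
function getitems(; max_num_items = -1, max_item_id = Nullable{Int}())
    if isnull(max_item_id)
        "no id limit, max_num_items = $max_num_items"
    else
        "id limit = $(get(max_item_id))"
    end
end

getitems()                            # "no id limit, max_num_items = -1"
getitems(max_item_id = Nullable(10))  # "id limit = 10"
```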

Thanks again,
Krystian
 


 andrew


 On Friday, 26 June 2015 23:30:45 UTC-3, ks wrote:

 Hello everyone,

 I've just started to write a bit of code in Julia and I'm still exploring 
 the best ways of doing this and that. I'm having this small problem now and 
 wanted to ask for your advice.

 I'd like to have two methods that retrieve some items. The first method 
 takes the max number of items that should be retrieved. And the second 
 method takes the max item id.

 getitems( maxnumitems )
 getitems( maxitemid )

 In both cases the argument has the same type: Int. So how do I take 
 advantage of the multiple dispatch mechanism in this situation? And is multiple 
 dispatch really the recommended way of handling a situation like this one? 
 Here are some alternatives that I thought of:

 1. Use different function names: getitems, getitems_maxid. Not too 
 elegant as you mix purpose and details of function usage in its name.
 2. Use named arguments. This will cause the function implementation to 
 grow (a series of if / else), again not too elegant.
 3. Define a new type: ItemId which behaves exactly as Int but can be used 
 to 'activate' multiple dispatch (one function would use Int and the second 
 one would use ItemId). Generally not the best approach if you have methods 
 each having an argument that should be really represented as an Int rather 
 than a new type.
 4. ...?

 What would you recommend ?

 Thank you,
 ks



[julia-users] How to insert new row / existing vector into array?

2015-06-28 Thread Ivar Nesje
No, 
the array size for Array{T, N} where N > 1 is immutable.

I think I have read somewhere that this is to make it easier to have automatic 
bounds-check hoisting in loops, but I don't think we have that yet. 

[julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Ismael VC
@Stefan: It's done, I've updated it with an MIT licence header: 
https://gist.github.com/Ismael-VC/179790a53c549609b3ce 




[julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Ismael VC
I used the Spanish Wikipedia version and its algorithm; I didn't notice that 
the English version has a different one:

* https://es.wikipedia.org/wiki/Criba_de_Atkin#Pseudoc.C3.B3digo

I'll check that one too.




Re: [julia-users] How to insert new row / existing vector into array?

2015-06-28 Thread Stefan Karpinski
The main reason is actually that it's quite hard and, at best, very
inefficient to do this in general. You have to move the elements of the
entire array except in the very special case that you happen to be
appending along the last dimension of an array.
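[Since a multidimensional array can't grow in place, the usual workaround is to build a new array; a quick sketch:]

```julia
A = [1 2; 3 4]

# "insert" a row by allocating a new matrix (copies everything)
A = vcat(A, [5 6])        # now 3x2

# appending a column at least matches the column-major memory layout
A = hcat(A, [7, 8, 9])    # now 3x3
```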



Re: [julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Stefan Karpinski
Are you willing to release this code under the MIT license
(http://julialang.org/license)?




[julia-users] Sieve of Atkin performance.

2015-06-28 Thread Ismael VC


Hello everyone!

I’ve implemented this Sieve of Atkin: 

   - https://gist.github.com/Ismael-VC/179790a53c549609b3ce 

function atkin(limit::Int = 0)
    @inbounds begin
        primes = [2, 3]

        if limit < 0
            error("limit can't be negative (found $(limit))")

        elseif limit < 2
            primes = Int[]

        elseif limit == 2
            primes = [2]

        else
            factor = round(Int, sqrt(limit))
            sieve = falses(limit)

            for x = 1:factor
                for y = 1:factor
                    n = 4x^2 + y^2
                    if n <= limit && (n % 12 == 1 || n % 12 == 5)
                        sieve[n] = !sieve[n]
                    end

                    n = 3x^2 + y^2
                    if n <= limit && n % 12 == 7
                        sieve[n] = !sieve[n]
                    end

                    n = 3x^2 - y^2
                    if x > y && n <= limit && n % 12 == 11
                        sieve[n] = !sieve[n]
                    end
                end
            end

            for x = 5:factor
                if sieve[x]
                    for y = x^2 : x^2 : limit
                        sieve[y] = false
                    end
                end
            end

            for i = 5:limit
                if sieve[i]
                    push!(primes, i)
                end
            end
        end
    end
    return primes
end

Ported directly from the Wikipedia pseudocode:

   - https://en.wikipedia.org/wiki/Sieve_of_Atkin#Pseudocode 

And I’ve also compared atkin with Base.primes (IJulia notebook tested at 
JuliaBox version 0.4.0-dev+5491):

   - http://nbviewer.ipython.org/gist/Ismael-VC/25b1a0c1e11f306a40ae 

I also tested it on a Mac:

julia> versioninfo()
Julia Version 0.3.9
Commit 31efe69 (2015-05-30 11:24 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin13.4.0)
CPU: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3

julia> gc(); @time primes(1000_000_000);
elapsed time: 72.423236327 seconds (531889264 bytes allocated, 0.02% gc time)

julia> gc(); @time atkin(1000_000_000);
elapsed time: 27.908726228 seconds (2342278320 bytes allocated, 0.17% gc time)

julia> gc(); @time primes(10_000_000_000);
elapsed time: 809.601231674 seconds (4890727200 bytes allocated, 0.00% gc time)

julia> gc(); @time atkin(10_000_000_000);
elapsed time: 332.286719798 seconds (160351721104 bytes allocated, 0.32% gc time)

Reference: 

   - https://github.com/JuliaLang/julia/issues/11594#issuecomment-115915833 

I’m trying to understand how the Base.primes and Base.primesmask 
functions work, and also how it is that atkin performs better in time (and 
sometimes also in space) than Base.primes in these tests.


[julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Ismael VC
Yes, certainly Stefan, I'll update the gist with an MIT licence note.




[julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Ismael VC


Best out of 10 runs with a limit of 100,000,000.

   - Base.primes:
     3.514 seconds (9042 k allocations: 194 MB, 0.22% gc time)

   - atkin:
     2.036 seconds (20 allocations: 78768 KB, 0.03% gc time)

   - eratosthenes:
     7.272 seconds (10 k allocations: 1677 MB, 1.58% gc time)



[julia-users] Re: Sieve of Atkin performance.

2015-06-28 Thread Ismael VC


Is that header ok? What do you think about this? The Wikipedia article 
claims that there is room for more improvement:

This pseudocode is written for clarity; although some redundant 
computations have been eliminated by controlling the odd/even x/y 
combinations, it still wastes almost half of its quadratic computations on 
non-productive loops that don’t pass the modulo tests such that it will not 
be faster than an equivalent wheel factorized (2/3/5) sieve of 
Eratosthenes. To improve its efficiency, a method must be devised to 
minimize or eliminate these non-productive computations.



Re: [julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-28 Thread Jameson Nash
yes, it is the fault of `od`. you can see this by using `cat` instead and
observing that it works. if you play around with the thresholds in `od`,
you will observe that it is doing block-buffering of 4096 byte chunks when
the input is a pipe.

Unless you turn on write buffering (with `buffer_writes(si)`), the `write`
function implicitly calls `flush`, and will block the task until the write
completes (the actual `flush` function is a no-op). Therefore, it's best to
call write in a separate task if you are processing both ends of the pipe:

julia> (so,si,pr) = readandwrite(`od`)
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`od`,
ProcessRunning))

julia> @async write(si, repeat("test\n", 1))
Task (done) @0x7fe37aa0daa0

(note, the line_buffered property has no effect)

Note that Base.readavailable returns a random, non-zero number of bytes,
which is presumably not what you want?

julia> Base.readavailable(so)
135168-element Array{UInt8,1}:
...

julia> Base.readavailable(so)
61440-element Array{UInt8,1}:
...

In order to get the full output, you need to instead call:

julia> close(si)
julia> readall(so)
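[Putting those pieces together, a sketch of reading lines as they arrive, using cat, which doesn't block-buffer the way od does:]

```julia
so, si, pr = readandwrite(`cat`)

# write from a separate task so the reading loop below isn't blocked
@async begin
    write(si, "hello\n")
    write(si, "world\n")
    close(si)           # lets eof(so) become true once cat exits
end

while !eof(so)
    println("got: ", readline(so))
end
```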





Re: [julia-users] What's the best way to implement multiple methods with similar arguments?

2015-06-28 Thread Josh Langsfeld
The philosophy I try to follow is to use multiple dispatch when you don't 
care at the calling site what the types of your arguments are and therefore 
which method will be called. For example, consider the case when you 
receive some output from a library function and you either don't know or 
are too lazy to check what types are being returned. Multiple dispatch is 
perfect for just sending on those outputs without worrying about it and 
have it do the right thing automatically.

Now, on the other hand, when you invest care at the calling site in making 
sure a particular method needs to be invoked, I would argue you probably 
don't need to worry about fitting everything in a multiple dispatch scheme. 
In your 'maxitemid' vs 'minitemid' case, it seems your calling function is 
going to need to do some logic to figure out which method it wants to 
invoke. In that case, you're already treating them as separate functions 
and so I would say they should be named as separate functions.

There are other tricks though to using dispatch in such cases and I think 
it's mostly a matter of preference on which style you think looks nicer. 
You could use the 'Val' parametric type and have something like:

getitems(::Type{Val{:max}}, id::ItemId)  # invoke as getitems(Val{:max}, ItemId(10))
getitems(::Type{Val{:min}}, id::ItemId)  # invoke as getitems(Val{:min}, ItemId(11))

Or you could declare some empty immutables to act as labels in the same 
manner as Val.
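[A sketch of the empty-immutables variant; the type and function bodies here are made up purely to show the dispatch shape:]

```julia
immutable ItemId
    id::Int
end

# empty immutables acting as dispatch labels
immutable MaxId end
immutable MinId end

# hypothetical bodies, just to make the example runnable
getitems(::Type{MaxId}, i::ItemId) = "items with id <= $(i.id)"
getitems(::Type{MinId}, i::ItemId) = "items with id >= $(i.id)"

getitems(MaxId, ItemId(10))   # dispatches on the label type
```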




[julia-users] Re: Current Performance w Trunk Compared to 0.3

2015-06-28 Thread Tony Kelman
Matt's been busy fixing all the things lately :)


On Sunday, June 28, 2015 at 6:55:53 PM UTC-4, andrew cooke wrote:


 i am now back from my trip and have returned to this (i did try to look 
 while away, but it seems that it was only on one of my machines - the one 
 turned off and hidden from burglars in the closet - that was affected).  
 anyway, pulling from git and rebuilding julia fixed the issue - it's now 
 back to 9 allocations (256 bytes) instead of 300,000,000 allocations (4578 
 MB).

 thanks,
 andrew


 On Wednesday, 10 June 2015 10:10:52 UTC-3, andrew cooke wrote:


 Is the current poor performance / allocation a known issue?

 I don't know how long this has been going on, and searching for 
 performance in issues gives a lot of hits, but I've been maintaining some 
 old projects and noticed that timed tests are running significantly slower 
 with trunk than 0.3.  CRC.jl was 40x slower - I ended up cancelling the 
 Travis build, and assumed it was a weird glitch that would be fixed.  But 
 now I am seeing slowdowns with IntModN.jl too (factor more like 4x as slow).

 You can see this at https://travis-ci.org/andrewcooke/IntModN.jl 
 (compare the timing results in the two jobs) and at 
 https://travis-ci.org/andrewcooke/CRC.jl/builds/66140801 (i have been 
 cancelling jobs there, so the examples aren't as complete).

 Andrew



[julia-users] Re: What's the best way to implement multiple methods with similar arguments?

2015-06-28 Thread andrew cooke
On Sunday, 28 June 2015 10:13:42 UTC-3, ks wrote:

 Hello Andrew,

 Thanks!

 the other answer is on the money, but, in this particular case, it seems 
 to me that you might want to have a function that could take both of those, 
 with the idea that you never get more than max_num_items, but that if you 
 find max_item_id before that, the sequence stops there.

 in that case, named args would be more suitable.  something like

 function getitems(; max_num_items=-1, max_item_id=...)


 In this case it would work nicely because all the cases make sense: none, 
 one (any), or both.
  


 where you might still have a special type for item id, but also have a 
 special singleton that means none (like -1 means unlimited for 
 max_num_items).

 then you could specify none (all items), either, or both.


 So you'd have a parent and two child types?:

            ItemId
           /      \
  ExistingItemId   NoneItemId (singleton)

 What would be the difference compared to using a single type ItemId, with 
 nothing to denote none?:

 getitems(; max_num_items = -1, max_item_id = nothing )

 I also read about Nullable{T} type, perhaps it could be used as well in 
 this case.


any of those would work.  you could also do something dirtier with a negative 
sign if you were using a signed int for item IDs, but real IDs were 
positive.

for example:

immutable ItemId
id::Int
end

then ItemId(-1) could be used to mean no value.

it depends how serious you want to be about making things type safe.
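A rough sketch of the named-args version with such a sentinel, under the 
assumption that real ids are positive (all names here are made up):

```julia
immutable ItemId
    id::Int
end

const NO_ID = ItemId(-1)   # sentinel meaning "no id cutoff"

# -1 means "unlimited"; NO_ID means "no id limit"; the limits combine
function getitems(items; max_num_items=-1, max_item_id=NO_ID)
    out = Int[]
    for item in items
        max_num_items >= 0 && length(out) >= max_num_items && break
        max_item_id != NO_ID && item > max_item_id.id && break
        push!(out, item)
    end
    out
end

getitems(1:10, max_num_items=3)        # -> [1,2,3]
getitems(1:10, max_item_id=ItemId(5))  # -> [1,2,3,4,5]
getitems(1:10)                         # -> all ten items
```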

andrew

 

 Thanks again,
 Krystian
  


 andrew


 On Friday, 26 June 2015 23:30:45 UTC-3, ks wrote:

 Hello everyone,

 I've just started to write a bit of code in Julia and I'm still 
 exploring the best ways of doing this and that. I'm having this small 
 problem now and wanted to ask for your advice.

 I'd like to have two methods that retrieve some items. The first method 
 takes the max number of items that should be retrieved. And the second 
 method takes the max item id.

 getitems( maxnumitems )
 getitems( maxitemid )

 In both cases the argument has the same type: Int. So how do I take the 
 advantage of multiple dispatch mechanism in this situation? And is multiple 
 dispatch really the recommended way of handling a situation like this one? 
 Here're some alternatives that I thought of:

 1. Use different function names: getitems, getitems_maxid. Not too 
 elegant as you mix purpose and details of function usage in its name.
 2. Use named arguments. This will cause the function implementation to 
 grow (a series of if / else), again not too elegant.
 3. Define a new type: ItemId which behaves exactly as Int but can be 
 used to 'activate' multiple dispatch (one function would use Int and the 
 second one would use ItemId). Generally not the best approach if you have 
 methods each having an argument that should be really represented as an Int 
 rather than a new type.
 4. ...?

 What would you recommend ?

 Thank you,
 ks



[julia-users] Re: Current Performance w Trunk Compared to 0.3

2015-06-28 Thread andrew cooke

i am now back from my trip and have returned to this (i did try to look 
while away, but it seems that it was only on one of my machines - the one 
turned off and hidden from burglars in the closet - that was affected).  
anyway, pulling from git and rebuilding julia fixed the issue - it's now 
back to 9 allocations (256 bytes) instead of 300,000,000 allocations (4578 
MB).

thanks,
andrew


On Wednesday, 10 June 2015 10:10:52 UTC-3, andrew cooke wrote:


 Is the current poor performance / allocation a known issue?

 I don't know how long this has been going on, and searching for 
 performance in issues gives a lot of hits, but I've been maintaining some 
 old projects and noticed that timed tests are running significantly slower 
 with trunk than 0.3.  CRC.jl was 40x slower - I ended up cancelling the 
 Travis build, and assumed it was a weird glitch that would be fixed.  But 
 now I am seeing slowdowns with IntModN.jl too (factor more like 4x as slow).

 You can see this at https://travis-ci.org/andrewcooke/IntModN.jl (compare 
 the timing results in the two jobs) and at 
 https://travis-ci.org/andrewcooke/CRC.jl/builds/66140801 (i have been 
 cancelling jobs there, so the examples aren't as complete).

 Andrew



[julia-users] Packed my function's parameters into a special immutable type. Expected slowdown, but my code is 20% faster. Why?

2015-06-28 Thread Andrew
In the interest of writing abstract code that I could modify easily 
depending on the economics model I need, I decided to pack the parameters 
of my utility function into a special type. Here's the old function.

function u(UF::CRRA,a::Float64,aprime::Float64,y::Float64,r::Float64,w::Float64)
consump = w*y + (1+r)*a - aprime
u(UF,consump,1)
end
(note: I have tried this with and without the Float64 type annotations. It 
makes no difference.)

and the setup for the new function

abstract State
immutable State1 <: State
a::Float64
aprime::Float64
y::Float64
r::Float64
w::Float64
end

function u(UF::CRRA,state::State1)
w = state.w
r = state.r
y = state.y
a = state.a
aprime = state.aprime
consump = w*y + (1+r)*a - aprime
u(UF,consump,1)
end

This function is called within a tight inner loop. Here's the old and new 
version. Umatrix_computed is a Bool array.

 Umatrix_computed[i,j,k] ? nothing : ( Umatrix_computed[i,j,k] = true ; 
Umatrix[i,j,k] = u(UF, x_grid[i] ,a_grid[j] ,yvals[k] , r , w) )

state = State1(x_grid[i] ,a_grid[j] ,yvals[k], r, w)
Umatrix_computed[i,j,k] ? nothing : ( Umatrix_computed[i,j,k] = true ; 
Umatrix[i,j,k] = u(UF, state) )

Given that I was adding an extra layer of abstraction, I expected this 
would be perhaps slightly slower. Instead, the new version runs about 20% 
faster (1.2s vs 1s).

I really don't understand what's going on here. Have I maybe addressed some 
type-instability problem? I don't think so, since the original function had 
type annotations. Does Julia for some reason find it easier to pass 1 
variable instead of 5? 

Any ideas? Thanks.


[julia-users] Re: How to insert new row/ egsisitng vector into array ?

2015-06-28 Thread colintbowers
One way around this is to store data in Vector{Vector{T}} instead of 
Matrix{T}, then extend insert! to operate on each of the inner vectors. I 
have done this for a slightly more complicated type that includes a sorted 
list for indexing the inner vector and a header list for indexing the outer 
vector. The source is here 
(https://github.com/colintbowers/SortedStructures.jl), although I'm still 
tinkering with it at the moment so it should in no way be treated as stable.

The only downside is if you are performing lots of matrix operations, then 
each time you'll need to convert from Vector{Vector{T}} to Matrix{T} and 
back again. My main usage is for data-storage rather than linear algebra, 
so that problem doesn't come up much for me, whereas I find the ability to 
dynamically insert! and deleteat! incredibly useful.

Cheers,

Colin
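A bare-bones sketch of that Vector{Vector{T}} pattern (not the package above, 
just the underlying idea):

```julia
# store a 5x5 "matrix" column-wise as a vector of column vectors
cols = Vector{Float64}[rand(5) for i in 1:5]

# insert a new row of zeros at row 3 by inserting into every column
for col in cols
    insert!(col, 3, 0.0)
end

# convert back to a plain Matrix when linear algebra is needed
A = hcat(cols...)
size(A)  # -> (6,5)
```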

On Saturday, 27 June 2015 00:08:34 UTC+10, paul analyst wrote:

 Is it possible to insert a new row (an existing vector) into an array? 
 Without hcat etc.?  
 Is there something like insert! for arrays?

 julia> a = rand(5,5)
 5x5 Array{Float64,2}:
  0.613346   0.864493  0.495873   0.571237   0.948809
  0.688794   0.168175  0.732427   0.0516122  0.439683
  0.74009    0.491623  0.0662683  0.160219   0.708842
  0.0678776  0.601627  0.425847   0.329719   0.108245
  0.689865   0.233258  0.171292   0.487139   0.452603

 julia> insert!(a,3,1,zeros(5))
 ERROR: `insert!` has no method matching insert!(::Array{Float64,2}, 
 ::Int32, ::Int32, ::Array{Float64,1})

 julia> insert!(a,[:,3],,zeros(5))
 ERROR: syntax: unexpected ,

 Paul?



Re: [julia-users] Packed my function's parameters into a special immutable type. Expected slowdown, but my code is 20% faster. Why?

2015-06-28 Thread Mauro
That it comes out as fast is no surprise, as there is no overhead in Julia
for this kind of data structure (immutable, isbits).  Not sure why it is
slightly faster, though; it could be because of better memory alignment.
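A quick way to check whether a type qualifies for this inline, allocation-free 
representation (the type P here is just an example):

```julia
immutable P
    a::Float64
    b::Float64
end

isbits(P)            # true: instances are stored inline, no heap pointer
isbits(ASCIIString)  # false: strings live on the heap
```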

On Sun, 2015-06-28 at 20:19, Andrew owen...@gmail.com wrote:
 In the interest of writing abstract code that I could modify easily depending
 on the economics model I need, I decided to pack the parameters of my utility
 function into a special type. Here's the old function.

 function u(UF::CRRA,a::Float64,aprime::Float64,y::Float64,r::Float64,w::Float64)
 consump = w*y + (1+r)*a - aprime
 u(UF,consump,1)
 end
 (note: I have tried this with and without the Float64 type annotations. It
 makes no difference.)

Yes, there is no difference, as Julia compiles a specialized function
for each set of concrete argument types.  So you only need type annotations
for dispatch or for documentation.  (Note, this is not so with data types;
there the field types are needed.)

 and the setup for the new function

 abstract State
 immutable State1 <: State
 a::Float64
 aprime::Float64
 y::Float64
 r::Float64
 w::Float64
 end

 function u(UF::CRRA,state::State1)
 w = state.w
 r = state.r
 y = state.y
 a = state.a
 aprime = state.aprime
 consump = w*y + (1+r)*a - aprime
 u(UF,consump,1)
 end

 This function is called within a tight inner loop. Here's the old and new
 version. Umatrix_computed is a Bool array.

  Umatrix_computed[i,j,k] ? nothing : ( Umatrix_computed[i,j,k] = true ; 
  Umatrix[i,j,k] = u(UF, x_grid[i] ,a_grid[j] ,yvals[k] , r , w) )

  state = State1(x_grid[i] ,a_grid[j] ,yvals[k], r, w)
  Umatrix_computed[i,j,k] ? nothing : ( Umatrix_computed[i,j,k] = true ; 
  Umatrix[i,j,k] = u(UF, state) )

 Given that I was adding an extra layer of abstraction, I expected this would 
 be
 perhaps slightly slower. Instead, the new version runs about 20% faster (1.2s
 vs 1s).

 I really don't understand what's going on here. Have I maybe addressed some
 type-instability problem? I don't think so, since the original function had
 type annotations. Does Julia for some reason find it easier to pass 1 variable
 instead of 5?

 Any ideas? Thanks.



Re: [julia-users] PSA: new ~/.julia_history format

2015-06-28 Thread Khoa Tran
Dear Leo,

I'm new to Julia. On MAC OSX, how do I actually delete the 
~/.julia_history file? How do I find it?

Best regards!
Khoa
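The file is a hidden dot-file in your home directory (Finder hides dot-files 
by default). One way to locate and remove it from the Julia prompt itself, as 
a sketch:

```julia
histfile = joinpath(homedir(), ".julia_history")

if isfile(histfile)
    rm(histfile)                      # delete the stale history file
    println("removed ", histfile)
else
    println("nothing to remove at ", histfile)
end
```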

On Sunday, June 8, 2014 at 12:28:36 AM UTC+2, K leo wrote:

 Lately, every time I start up julia I get the following error, and 
 every time I have to delete ~/.julia_history.  Why? 

 -- 
 $ julia 
                _
    _       _ _(_)_     |  A fresh approach to technical computing 
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org 
    _ _   _| |_  __ _   |  Type "help()" to list help topics 
   | | | | | | |/ _` |  | 
   | | |_| | | | (_| |  |  Version 0.3.0-prerelease+3512 (2014-06-05 19:22 UTC) 
  _/ |\__'_|_|_|\__'_|  |  Commit e16ee44* (2 days old master) 
 |__/                   |  x86_64-linux-gnu 

 ERROR: Invalid history format. If you have a ~/.julia_history file left 
 over from an older version of Julia, try renaming or deleting it. 

   in hist_from_file at REPL.jl:277 
   in setup_interface at REPL.jl:594 
   in run_frontend at REPL.jl:718 
   in run_repl at REPL.jl:162 
   in _start at client.jl:396