[julia-users] Re: URGENT: Google Summer of Code

2015-03-25 Thread Viral Shah
Hi Raniere,

Are there specific dates mentors and students have to do this by?

-viral

On Tuesday, March 24, 2015 at 10:25:22 PM UTC+1, Raniere Silva wrote:

 Hi, 

 there was a communication problem, 
 and I didn't announce that NumFOCUS, http://numfocus.org/, 
 was selected for Google Summer of Code this year. 

 What does this mean for Julia? 
 It means you can try to participate in GSoC with Julia. 
 I know there isn't much time left, 
 and I'm very sorry about that, 
 but there is still time. 

 NumFOCUS is keeping all the documents related to GSoC 
 at https://github.com/numfocus/gsoc. 

 If you want to be a student, 
 please read 
 https://github.com/numfocus/gsoc/blob/master/CONTRIBUTING-students.md. 

 If you want to be a mentor, 
 please read 
 https://github.com/numfocus/gsoc/blob/master/CONTRIBUTING-mentors.md. 
 You probably have questions that I hope are answered 
 at https://github.com/numfocus/gsoc/blob/master/organization/operations.md. 


 I will try to answer any questions as soon as possible. 

 Raniere 



Re: [julia-users] Julia users Berlin

2015-03-25 Thread Viral Shah
How about we aim for 5pm in that case? I think I can make it by then. Does 
that work for others?

-viral

On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:

 My train leaves at 9pm (at least the train station is close), so I'd 
 probably go there 1-2 hours early and see who drops by.
 Felix Schüler would come earlier as well ;)
 @David Higgins
 Do we need to call them to adjust this properly?
 On 24 Mar 2015 08:56, Fabian Gans fabiang...@gmail.com wrote:

 I will not be there. 7 seems to be too late for me to get back to Jena 
 the same day. 

 Fabian



Re: [julia-users] Julia users Berlin

2015-03-25 Thread David Higgins
Both times are fine with me, I just need to change the reservation if we go 
with that.

By my count, from the thread above the following people are probably coming:
Viral Shah
Simon Danisch
Felix Schueler
David Higgins
Felix Jung? (wow, cool stuff :) )
Fabian Gans?? (Jena)
One other person contacted me off-list to say they'll come if some travel 
arrangements work out.

The first four are ok with an earlier meeting time. I imagine it's getting 
late for Fabian to arrange a train from Jena, but 5pm would certainly work 
better for him.

So, any objections to changing from 7pm to 5pm? (ie. who's lurking out 
there and hasn't replied yet but was hoping to come?)

David.

On Wednesday, 25 March 2015 07:26:16 UTC+1, Viral Shah wrote:

 How about we aim for 5pm in that case? I think I can make it by then. Does 
 that work for others?

 -viral

 On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:

 My train leaves at 9pm (at least the train station is close), so I'd 
 probably go there 1-2 hours early and see who drops by.
 Felix Schüler would come earlier as well ;)
 @David Higgins
 Do we need to call them to adjust this properly?
 On 24 Mar 2015 08:56, Fabian Gans fabia...@gmail.com wrote:

 I will not be there. 7 seems to be too late for me to get back to Jena 
 the same day. 

 Fabian



Re: [julia-users] Re: Performance difference between running in REPL and calling a script?

2015-03-25 Thread Stefan Karpinski
Yes, writing to a file is one of the slower things you can do. So if that's in 
a performance-critical loop it will very much slow things down. But that would 
be true for Python and PyPy as well. Are you doing the same thing in that code?


 On Mar 25, 2015, at 4:00 AM, Michael Bullman bullman.mich...@gmail.com 
 wrote:
 
 Hi Guys, 
 
 So I just went back through my code. I didn't see any global variables. I'm 
 going to start using the @time macro tomorrow to try to identify the 
 worst functions. Would writes to a file significantly impact speed? I know from 
 looking on Google that writing to files is frowned upon, but what is a better 
 alternative? Hold everything in an array until the program finishes, then 
 write it all out at the end? Are databases a viable option when output is very 
 large, or when records need to be kept?
 
 I'm also going over the code again and might post a copy if people are 
 interested, but I'm not going to be doing that tonight. 
 
 Thanks again
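
For the hold-everything-then-write alternative, here is a minimal sketch, assuming 
the per-iteration results fit in memory (compute_row and the file name are 
illustrative stand-ins, not from the original code):

compute_row(i) = i^2              # stand-in for the real per-iteration work

function run_batched(n)
    buf = IOBuffer()              # accumulate output in memory
    for i in 1:n
        println(buf, compute_row(i))
    end
    open("results.txt", "w") do io
        write(io, takebuf_string(buf))   # one write to disk at the end
    end
end

This trades memory for fewer writes; when the output is too large to hold, a 
database or periodic flushes of the buffer are the usual middle ground.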


Re: [julia-users] Re: inserting an Expr into AST via macro

2015-03-25 Thread Toivo Henningsson
Ok, glad to hear that you got it working!

[julia-users] Does julia's profiling capabilities extend to the use of external code with ccall?

2015-03-25 Thread Patrick Sanan
I am interested in profiling some julia code, but a substantial fraction of 
the time and memory usage will be due to functions from an external 
library, called with ccall. Should I be able to collect data about time 
spent and memory resources used in this case?


[julia-users] Re: URGENT: Google Summer of Code

2015-03-25 Thread Raniere Silva
 Are there specific dates mentors and students have to do this by?

Before March 27th 19:00 UTC.




Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Stefan Karpinski
That does seem to be the issue. It's tricky to fix since you can't evaluate
sizeof(Ptr) unless the condition is true.

On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org
wrote:

 There's a branch in eltype, which is probably causing this difference.

 On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 Yep, that’s done it. The only difference I can see in the code I wrote
 before and this code is that previously I had

 convert(Ptr{T}, pointer(raw, byte_number))

 whereas here we have

 convert(Ptr{T}, pointer(raw) + byte_number - 1)

 The former construction seems to emit a call to a Julia-intrinsic
 function, while the latter executes the more expected simple machine loads.
 Is there a subtle difference between the two calls to pointer?

 Thanks all for your help!

 On March 24, 2015 at 12:19:00 PM, Matt Bauman (mbau...@gmail.com) wrote:

 (The key is to ensure that the method gets specialized for different
 types with the parametric `::Type{T}` in the signature instead of
 `T::DataType`).

 On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski wrote:

 This seems like it works fine to me (on both 0.3 and 0.4):

  immutable Test
 x::Float32
 y::Int64
 z::Int8
 end

 julia> a = [Test(1,2,3)]
 1-element Array{Test,1}:
  Test(1.0f0,2,3)

 julia> b = copy(reinterpret(UInt8, a))
 24-element Array{UInt8,1}:
  0x00
  0x00
  0x80
  0x3f
  0x03
  0x00
  0x00
  0x00
  0x02
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x03
  0xe0
  0x82
  0x10
  0x01
  0x00
  0x00
  0x00

 julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) =
 unsafe_load(convert(Ptr{T}, pointer(data) + offset))
 prim_read (generic function with 1 method)

 julia> prim_read(Test, b, 0)
 Test(1.0f0,2,3)

 julia> @code_native prim_read(Test, b, 0)
 .section __TEXT,__text,regular,pure_instructions
 Filename: none
 Source line: 1
 push RBP
 mov RBP, RSP
 Source line: 1
 mov RCX, QWORD PTR [RSI + 8]
 vmovss XMM0, DWORD PTR [RCX + RDX]
 mov RAX, QWORD PTR [RCX + RDX + 8]
 mov DL, BYTE PTR [RCX + RDX + 16]
 pop RBP
 ret


 On Tue, Mar 24, 2015 at 5:04 PM, Simon Danisch sdan...@gmail.com
 wrote:

 There is a high chance that I simply don't understand llvmcall well
 enough, though ;)

 On Monday, March 23, 2015 at 20:20:09 UTC+1, Sebastian Good wrote:

 I'm trying to read some binary formatted data. In C, I would define an
 appropriately padded struct and cast away. Is it possible to do something
 similar in Julia, though for only one value at a time? Philosophically, 
 I'd
 like to approximate the following, for some simple bittypes T (Int32,
 Float32, etc.)

 T readT(char* data, size_t offset) { return *(T*)(data + offset); }

 The transliteration of this brain-dead approach results in the
 following, which seems to allocate a boxed Pointer object on every
 invocation. The pointer function comes with ample warnings about how it
 shouldn't be used, and I imagine that it's not polite to the garbage
 collector.

  prim_read{T}(::Type{T}, data::AbstractArray{Uint8, 1}, byte_number) =
 unsafe_load(convert(Ptr{T}, pointer(data, byte_number)))

 I can reinterpret the whole array, but this will involve a division of
 the offset to calculate the new offset relative to the reinterpreted 
 array,
 and it allocates an array object.

 Is there a better way to simply read the machine word at a particular
 offset in a byte array? I would think it should inline to a single 
 assembly
 instruction if done right.








Re: [julia-users] Does julia's profiling capabilities extend to the use of external code with ccall?

2015-03-25 Thread Stefan Karpinski
Yes: if you call Profile.print(C=true) you'll see C stack frames as well.

On Wed, Mar 25, 2015 at 11:29 AM, Patrick Sanan patrick.sa...@gmail.com
wrote:

 I am interested in profiling some julia code, but a substantial fraction
 of the time and memory usage will be due to functions from an external
 library, called with ccall. Should I be able to collect data about time
 spent and memory resources used in this case?
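
A minimal sketch of that workflow, assuming the system provides a libm that 
ccall can find (the function and the loop bounds here are illustrative):

function work(n)
    s = 0.0
    for i in 1:n
        s += ccall((:sin, "libm"), Float64, (Float64,), i * 0.001)
    end
    s
end

work(1)                     # warm up so compilation isn't profiled
@profile work(10_000_000)   # collect samples
Profile.print(C=true)       # report including C stack frames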



Re: [julia-users] Julia users Berlin

2015-03-25 Thread Keyan Ghazi-Zahedi
I won’t make it either, but I hope that I can join in on some other day.

Cheers,
Keyan

 On 25 Mar 2015, at 11:54, Felix Jung fe...@jung.fm wrote:
 
 Sorry guys. Would have loved to come but can't make it on that date. If we 
 make this a regular thing I'd be happy to participate in an active manner.
 
 Have fun,
 
 Felix
 
 On 25 Mar 2015, at 09:37, David Higgins daithiohuig...@gmail.com wrote:
 
 Both times are fine with me, I just need to change the reservation if we go 
 with that.
 
 By my count, from the thread above the following people are probably coming:
 Viral Shah
 Simon Danisch
 Felix Schueler
 David Higgins
 Felix Jung? (wow, cool stuff :) )
 Fabian Gans?? (Jena)
 One other person contacted me off-list to say they'll come if some travel 
 arrangements work out.
 
 The first four are ok with an earlier meeting time. I imagine it's getting 
 late for Fabian to arrange a train from Jena, but 5pm would certainly work 
 better for him.
 
 So, any objections to changing from 7pm to 5pm? (ie. who's lurking out there 
 and hasn't replied yet but was hoping to come?)
 
 David.
 
 On Wednesday, 25 March 2015 07:26:16 UTC+1, Viral Shah wrote:
 How about we aim for 5pm in that case? I think I can make it by then. Does 
 that work for others?
 
 -viral
 
 On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:
 My train leaves at 9pm (at least the train station is close), so I'd 
 probably go there 1-2 hours early and see who drops by.
 Felix Schüler would come earlier as well ;)
 @David Higgins
 Do we need to call them to adjust this properly?
 
 On 24 Mar 2015 08:56, Fabian Gans fabia...@gmail.com wrote:
 I will not be there. 7 seems to be too late for me to get back to Jena the 
 same day. 
 
 Fabian



Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Stefan Karpinski
Given the performance difference and the different behavior, I'm tempted to
just deprecate the two-argument form of pointer.

On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:

 I guess what I find most confusing is that there would be a difference,
 since adding 1 to a pointer only adds one byte, not one element size.

 julia> p1 = pointer(zeros(UInt64));
 Ptr{UInt64} @0x00010b28c360
 julia> p1 + 1
 Ptr{UInt64} @0x00010b28c361

 I would have expected the latter to end in 68. The two-argument pointer
 function gets this “right”.

 julia> a = zeros(UInt64);
 julia> pointer(a,1)
 Ptr{Int64} @0x00010b9c72e0
 julia> pointer(a,2)
 Ptr{Int64} @0x00010b9c72e8

 I can see arguments multiple ways, but when I’m given a strongly typed
 pointer (Ptr{T}), I would expect it to participate in arithmetic in
 increments of sizeof(T).

 On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org)
 wrote:

 That does seem to be the issue. It's tricky to fix since you can't
 evaluate sizeof(Ptr) unless the condition is true.

 On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org
 wrote:

 There's a branch in eltype, which is probably causing this difference.

 On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

  Yep, that’s done it. The only difference I can see in the code I wrote
 before and this code is that previously I had

 convert(Ptr{T}, pointer(raw, byte_number))

  whereas here we have

 convert(Ptr{T}, pointer(raw) + byte_number - 1)

 The former construction seems to emit a call to a Julia-intrinsic
 function, while the latter executes the more expected simple machine loads.
 Is there a subtle difference between the two calls to pointer?

 Thanks all for your help!

 On March 24, 2015 at 12:19:00 PM, Matt Bauman (mbau...@gmail.com) wrote:

  (The key is to ensure that the method gets specialized for different
 types with the parametric `::Type{T}` in the signature instead of
 `T::DataType`).

 On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski wrote:

 This seems like it works fine to me (on both 0.3 and 0.4):

  immutable Test
 x::Float32
 y::Int64
 z::Int8
 end

  julia> a = [Test(1,2,3)]
 1-element Array{Test,1}:
  Test(1.0f0,2,3)

 julia> b = copy(reinterpret(UInt8, a))
 24-element Array{UInt8,1}:
  0x00
  0x00
  0x80
  0x3f
  0x03
  0x00
  0x00
  0x00
  0x02
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x03
  0xe0
  0x82
  0x10
  0x01
  0x00
  0x00
  0x00

 julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) =
 unsafe_load(convert(Ptr{T}, pointer(data) + offset))
 prim_read (generic function with 1 method)

 julia> prim_read(Test, b, 0)
 Test(1.0f0,2,3)

 julia> @code_native prim_read(Test, b, 0)
 .section __TEXT,__text,regular,pure_instructions
 Filename: none
 Source line: 1
 push RBP
 mov RBP, RSP
 Source line: 1
 mov RCX, QWORD PTR [RSI + 8]
 vmovss XMM0, DWORD PTR [RCX + RDX]
 mov RAX, QWORD PTR [RCX + RDX + 8]
 mov DL, BYTE PTR [RCX + RDX + 16]
 pop RBP
 ret


 On Tue, Mar 24, 2015 at 5:04 PM, Simon Danisch sdan...@gmail.com
 wrote:

 There is a high chance that I simply don't understand llvmcall well
 enough, though ;)

 On Monday, March 23, 2015 at 20:20:09 UTC+1, Sebastian Good wrote:

 I'm trying to read some binary formatted data. In C, I would define
 an appropriately padded struct and cast away. Is is possible to do
 something similar in Julia, though for only one value at a time?
 Philosophically, I'd like to approximate the following, for some simple
 bittypes T (Int32, Float32, etc.)

 T readT(char* data, size_t offset) { return *(T*)(data + offset); }

 The transliteration of this brain-dead approach results in the
 following, which seems to allocate a boxed Pointer object on every
 invocation. The pointer function comes with ample warnings about how it
 shouldn't be used, and I imagine that it's not polite to the garbage
 collector.

  prim_read{T}(::Type{T}, data::AbstractArray{Uint8, 1}, byte_number)
 = unsafe_load(convert(Ptr{T}, pointer(data, byte_number)))

 I can reinterpret the whole array, but this will involve a division
 of the offset to calculate the new offset relative to the reinterpreted
 array, and it allocates an array object.

 Is there a better way to simply read the machine word at a particular
 offset in a byte array? I would think it should inline to a single 
 assembly
 instruction if done right.








Re: [julia-users] ArrayView no broadcasting?

2015-03-25 Thread Tim Holy
I'm sure a pull request would be appreciated. Alternatively, SubArrays do work 
the way you are hoping for.

--Tim

On Wednesday, March 25, 2015 07:19:50 AM Neal Becker wrote:
 I can assign a single element of a view:
 
 julia> view(a,:,:)[1,1] = 2
 2
 
 julia> a
 10x10 Array{Int64,2}:
  2  5  5  5  5  5  5  5  5   5
  5  5  5  5  5  5  5  5  5   5
  5  5  5  5  5  5  5  5  5   5
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
  1  2  3  4  5  6  7  8  9  10
 
 
 But this doesn't work?
 
 julia> view(a,:,:)[1,:] = 2
 ERROR: `setindex!` has no method matching setindex!(::ContiguousView{Int64,2,Array{Int64,2}}, ::Int64, ::Int64, ::UnitRange{Int64})
 
 While this does?
 
 julia> a[1,:]=2
 2
 
 So ArrayView is not a 1st-class array?
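
A minimal sketch of the SubArray route Tim mentions, assuming the 0.3-era sub 
API (the array a here is illustrative):

a = fill(5, 10, 10)
s = sub(a, 1, 1:size(a, 2))   # a SubArray view of the first row
fill!(s, 2)                   # writes through to the parent array
a[1,1] == 2                   # true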



Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Sebastian Good
The benefit of the semantics of the two argument pointer function is that it 
preserves intuitive pointer arithmetic. As a new (yet happy!) Julia programmer, 
I certainly don’t know what the deprecation implications of changing pointer 
arithmetic are (vast, sadly, I imagine), but their behavior certainly violated 
my “principle of least astonishment” when I found they worked by bytes, not by 
Ts. That is, instead of base/pointer.jl:64 (and friends) looking like

+(x::Ptr, y::Integer) = oftype(x, (UInt(x) + (y % UInt) % UInt))

I would expect them to look like

+{T}(x::Ptr{T}, y::Integer) = oftype(x, (UInt(x) + sizeof(T)*(y % UInt) % UInt))

To more closely follow the principle of pointer arithmetic long ago established 
by C. The type specialization would make these just as fast. For this to work 
with arrays safely, you’d have to guarantee that dense arrays had no padding 
 between elements. Since C requires this to be the case, it seems we’re on 
safe ground?
On March 25, 2015 at 9:07:40 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

Given the performance difference and the different behavior, I'm tempted to 
just deprecate the two-argument form of pointer.

On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:
I guess what I find most confusing is that there would be a difference, since 
adding 1 to a pointer only adds one byte, not one element size.

 julia> p1 = pointer(zeros(UInt64));
 Ptr{UInt64} @0x00010b28c360
 julia> p1 + 1
 Ptr{UInt64} @0x00010b28c361
 
 I would have expected the latter to end in 68. The two-argument pointer 
 function gets this “right”. 
 
 julia> a = zeros(UInt64);
 julia> pointer(a,1)
 Ptr{Int64} @0x00010b9c72e0
 julia> pointer(a,2)
 Ptr{Int64} @0x00010b9c72e8

I can see arguments multiple ways, but when I’m given a strongly typed pointer 
(Ptr{T}), I would expect it to participate in arithmetic in increments of 
sizeof(T).

On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

That does seem to be the issue. It's tricky to fix since you can't evaluate 
sizeof(Ptr) unless the condition is true.

On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org wrote:
There's a branch in eltype, which is probably causing this difference.

On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:
Yep, that’s done it. The only difference I can see in the code I wrote before 
and this code is that previously I had

convert(Ptr{T}, pointer(raw, byte_number))

whereas here we have

convert(Ptr{T}, pointer(raw) + byte_number - 1)

The former construction seems to emit a call to a Julia-intrinsic function, 
while the latter executes the more expected simple machine loads. Is there a 
subtle difference between the two calls to pointer?

Thanks all for your help!

On March 24, 2015 at 12:19:00 PM, Matt Bauman (mbau...@gmail.com) wrote:

(The key is to ensure that the method gets specialized for different types with 
the parametric `::Type{T}` in the signature instead of `T::DataType`).

On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski wrote:
This seems like it works fine to me (on both 0.3 and 0.4):

immutable Test
x::Float32
y::Int64
z::Int8
end

julia> a = [Test(1,2,3)]
1-element Array{Test,1}:
 Test(1.0f0,2,3)

julia> b = copy(reinterpret(UInt8, a))
24-element Array{UInt8,1}:
 0x00
 0x00
 0x80
 0x3f
 0x03
 0x00
 0x00
 0x00
 0x02
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x03
 0xe0
 0x82
 0x10
 0x01
 0x00
 0x00
 0x00

julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) = 
unsafe_load(convert(Ptr{T}, pointer(data) + offset))
prim_read (generic function with 1 method)

julia> prim_read(Test, b, 0)
Test(1.0f0,2,3)

julia> @code_native prim_read(Test, b, 0)
.section __TEXT,__text,regular,pure_instructions
Filename: none
Source line: 1
push RBP
mov RBP, RSP
Source line: 1
mov RCX, QWORD PTR [RSI + 8]
vmovss XMM0, DWORD PTR [RCX + RDX]
mov RAX, QWORD PTR [RCX + RDX + 8]
mov DL, BYTE PTR [RCX + RDX + 16]
pop RBP
ret


On Tue, Mar 24, 2015 at 5:04 PM, Simon Danisch sdan...@gmail.com wrote:
There is a high chance that I simply don't understand llvmcall well enough, 
though ;)

On Monday, March 23, 2015 at 20:20:09 UTC+1, Sebastian Good wrote:
I'm trying to read some binary formatted data. In C, I would define an 
appropriately padded struct and cast away. Is it possible to do something 
similar in Julia, though for only one value at a time? Philosophically, I'd 
like to approximate the following, for some simple bittypes T (Int32, Float32, 
etc.)

T readT(char* data, size_t offset) { return *(T*)(data + offset); }

The transliteration of this brain-dead approach results in the following, which 
seems to allocate a boxed Pointer object on every invocation. The pointer 
function comes with ample warnings about how it shouldn't be used, and I 
imagine that it's not polite to the garbage collector.


prim_read{T}(::Type{T}, data::AbstractArray{Uint8, 1}, byte_number) =
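
A minimal sketch of the element-wise stepping discussed in this thread, written 
as a helper rather than a change to Base (the name advance is illustrative):

advance{T}(p::Ptr{T}, n::Integer) = p + n*sizeof(T)   # step by n elements, not bytes

a = zeros(UInt64, 4)
p = pointer(a)
advance(p, 1) == pointer(a, 2)   # true: both point at the second element
p + 1                            # byte-based: one byte past the first element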

[julia-users] Re: Julia blogging and contributions

2015-03-25 Thread Johan Sigfrids
There is also http://www.reddit.com/r/Julia/

On Wednesday, March 25, 2015 at 7:57:20 AM UTC+2, cdm wrote:


 these twitter feeds:

https://twitter.com/JuliaLanguage

https://twitter.com/ProjectJupyter

https://twitter.com/julialang_news


 in addition to searching twitter for #JuliaLang

https://twitter.com/hashtag/julialang?src=hash


 usually yield interesting, fresh and dynamic
 content with diverse context ... certainly
 worth perusing once a week, or so.



Re: [julia-users] strange behaviour of togglebuttons in Interact.jl

2015-03-25 Thread Shashi Gowda
using Interact, Reactive

α = Input(2)
display(togglebuttons(["one" => 1, "two" => 2], signal=α))
signal(α)

results in two being selected initially. If you want to set initial label
to be selected, you can use the value_label keyword argument

If you want the selection to change wrt another signal, you will need to
lift the togglebuttons and set the value_label, but the value in the input
won't change without user interaction...

I may not have fully understood your question. I hope you can play
around with value_label and let me know how far you get!

Thanks
Shashi


On Tue, Mar 24, 2015 at 10:23 PM, Andrei Berceanu andreiberce...@gmail.com
wrote:

 OK, I see. Now my problem is that, in my code, the initial value should
 then depend on another signal, and I have found no way of resolving this.
 The actual code I have is










 lift(a -> togglebuttons(["Landau" => ((n,m) -> one(Complex{Float64}), (n,m) -> one(Complex{Float64}),
                                       (n,m) -> exp(-im*2π*a*m), (n,m) -> exp(im*2π*a*m)),
                          "Symmetric" => ((n,m) -> exp(-im*π*a*n), (n,m) -> exp(im*π*a*n),
                                          (n,m) -> exp(-im*π*a*m), (n,m) -> exp(im*π*a*m))], signal=ft), α)

 So I would like to initialize ft beforehand with, say, the first value
 in my Dict, the one under the key "Landau", but this depends on the value
 of the signal α.
 On Tuesday, March 24, 2015 at 3:43:04 PM UTC+1, Shashi Gowda wrote:

 Not a bug: if you are passing in your own input signal to widgets, you
 need to take care of maintaining the right initial values. It's also better
 to use OrderedDict from the DataStructures package here to keep the ordering
 of the key-value pairs.



 On Tue, Mar 24, 2015 at 7:39 PM, Andrei Berceanu andreib...@gmail.com
 wrote:

 Consider the following code






 using Reactive, Interact
 α = Input(0)
 togglebuttons(["one" => 1, "two" => 2], signal=α)
 signal(α)

 I would expect the value of α to change after executing the
 togglebuttons(..) line, however this is not the case. signal(α) on the
 next line shows that α is still 0, even though one of the buttons is
 pre-selected. One has to press the buttons at least once to change the
 value of α.
 Can this behaviour be changed? Is it a bug?





[julia-users] Re: ArrayView no broadcasting?

2015-03-25 Thread Matt Bauman
On Wednesday, March 25, 2015 at 7:20:05 AM UTC-4, Neal Becker wrote:

 So ArrayView is not a 1st-class array? 


There's not really such a thing as a 1st-class array.  Every array type 
needs to define its own indexing methods… and there are a lot of them! 
 It's very tough to cover them all.  You've just run into a method that's 
missing.  As Tim suggests, you could write this method and submit a PR.

I'm hoping to fix this eventually.


[julia-users] Can you make this linear algebra code faster?

2015-03-25 Thread Jiahao Chen
Here is some code I wrote for completely pivoted LU factorizations.
Can you make it even faster?

Anyone who can demonstrate verifiable speedups (or find bugs relative
to the textbook description) while sticking to pure Julia code wins an
acknowledgment in an upcoming paper I'm writing about Julia, and a
small token of my appreciation with no cash value. :)

Reference: G. H. Golub and C. F. Van Loan, Matrix Computations 4/e,
Algorithm 3.4.3, p. 132.

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


(attachments: benchmark-opt.jl, lucp-opt.jl)


Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Milan Bouchet-Valat
On Wednesday, March 25, 2015 at 07:55 -0700, Matt Bauman wrote:
 See https://github.com/JuliaLang/julia/issues/6219#issuecomment-38117402

This looks like a case where, as discussed for string indexing, writing
something like p + 5bytes could make sense. Then the default behavior
could follow the more natural C convention, yet you'd never have to
write things like p + size/sizeof(T) (to quote Jeff's remark on the
issue).


Regards


 On Wednesday, March 25, 2015 at 9:58:46 AM UTC-4, Sebastian Good
 wrote:
 
 The benefit of the semantics of the two argument pointer
 function is that it preserves intuitive pointer arithmetic. As
 a new (yet happy!) Julia programmer, I certainly don’t know
 what the deprecation implications of changing pointer
 arithmetic are (vast, sadly, I imagine), but their behavior
 certainly violated my “principle of least astonishment” when I
 found they worked by bytes, not by Ts. That is, instead of
 base/pointer.jl:64 (and friends) looking like
 
 
 +(x::Ptr, y::Integer) = oftype(x, (UInt(x) + (y % UInt) %
 UInt))
 
 
 I would expect them to look like
 
 
 +{T}(x::Ptr{T}, y::Integer) = oftype(x, (UInt(x) +
 sizeof(T)*(y % UInt) % UInt))
 
 
 
 To more closely follow the principle of pointer arithmetic
 long ago established by C. The type specialization would make
 these just as fast. For this to work with arrays safely, you’d
 have to guarantee that dense arrays had no padding between
 elements. Since C requires this to be the
 we’re on safe ground?
 
 On March 25, 2015 at 9:07:40 AM, Stefan Karpinski
 (ste...@karpinski.org) wrote:
 
 
  
  Given the performance difference and the different behavior,
  I'm tempted to just deprecate the two-argument form of
  pointer.
  
  
  On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good
  seba...@palladiumconsulting.com wrote:
  
  I guess what I find most confusing is that there
  would be a difference, since adding 1 to a pointer
  only adds one byte, not one element size.
  
  
    julia> p1 = pointer(zeros(UInt64));
   Ptr{UInt64} @0x00010b28c360
    julia> p1 + 1
   Ptr{UInt64} @0x00010b28c361
   
   
   I would have expected the latter to end in 68. The
   two-argument pointer function gets this “right”. 
   
   
    julia> a = zeros(UInt64);
    julia> pointer(a,1)
   Ptr{Int64} @0x00010b9c72e0
    julia> pointer(a,2)
   Ptr{Int64} @0x00010b9c72e8
  
  
  I can see arguments multiple ways, but when I’m
  given a strongly typed pointer (Ptr{T}), I would
  expect it to participate in arithmetic in increments
  of sizeof(T).
  
  On March 25, 2015 at 6:36:37 AM, Stefan Karpinski
  (ste...@karpinski.org) wrote:
  
  
   That does seem to be the issue. It's tricky to fix
   since you can't evaluate sizeof(Ptr) unless the
   condition is true.
   
   
   On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski
   ste...@karpinski.org wrote:
   
   There's a branch in eltype, which is
   probably causing this difference.
   
   
   On Tue, Mar 24, 2015 at 7:00 PM, Sebastian
   Good seba...@palladiumconsulting.com
   wrote:
   
   Yep, that’s done it. The only
   difference I can see in the code I
   wrote before and this code is that
   previously I had
   
   
   convert(Ptr{T}, pointer(raw,
   byte_number))
   
   
   whereas here we have
   
   
   convert(Ptr{T}, pointer(raw) +
   byte_number - 1)
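
A minimal sketch of how the p + 5bytes spelling could look, assuming a small 
wrapper type (Bytes and the bytes constant are illustrative, not an existing 
API):

import Base: +, *

immutable Bytes
    n::Int
end
const bytes = Bytes(1)
*(k::Integer, b::Bytes) = Bytes(k * b.n)   # makes the literal 5bytes work
+(p::Ptr, b::Bytes) = p + b.n              # explicit byte-based offset

p = pointer(zeros(UInt64))
p + 5bytes   # five bytes past p, whatever the element type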
   
   

[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread dextorious
I hope to look at this when I get some time, but as a preliminary note, 
merely applying the @inbounds and @simd macros to the main for loop yields 
an increase in performance of about 15-20% on my machine.


Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Sebastian Good
Ah, I see it’s been discussed and even documented. FWIW, documenting this 
behavior in the pointer function would be useful for newbies like myself. I 
agree with Stefan that the two argument pointer function should be deprecated 
as its C-like behavior is inconsistent. If Julia pointer arithmetic is byte 
based, that’s a reasonable convention that just needs to be understood, like 
1-based indexing or FORTRAN array layout.

Sprinkling a few sizeof(T) in your code when you’re mucking about with pointers 
anyway is a small price to pay. With C conventions, you’d do just as much 
mucking about with convert(Ptr{UInt8},...).

On March 25, 2015 at 11:05:00 AM, Milan Bouchet-Valat (nalimi...@club.fr) wrote:

On Wednesday, March 25, 2015 at 07:55 -0700, Matt Bauman wrote:
See https://github.com/JuliaLang/julia/issues/6219#issuecomment-38117402
This looks like a case where, as discussed for string indexing, writing 
something like p + 5bytes could make sense. Then the default behavior could 
follow the more natural C convention, yet you'd never have to write things like 
p + size/sizeof(T) (to quote Jeff's remark on the issue).


Regards

On Wednesday, March 25, 2015 at 9:58:46 AM UTC-4, Sebastian Good wrote:
The benefit of the semantics of the two argument pointer function is that it 
preserves intuitive pointer arithmetic. As a new (yet happy!) Julia programmer, 
I certainly don’t know what the deprecation implications of changing pointer 
arithmetic are (vast, sadly, I imagine), but their behavior certainly violated 
my “principle of least astonishment” when I found they worked by bytes, not by 
Ts. That is, instead of base/pointer.jl:64 (and friends) looking like


+(x::Ptr, y::Integer) = oftype(x, (UInt(x) + (y % UInt) % UInt))


I would expect them to look like


+{T}(x::Ptr{T}, y::Integer) = oftype(x, (UInt(x) + sizeof(T)*(y % UInt) % UInt))


To more closely follow the principle of pointer arithmetic long ago established 
by C. The type specialization would make these just as fast. For this to work 
with arrays safely, you’d have to guarantee that dense arrays had no padding 
between elements. Since C requires this to be the case, it seems we’re on 
safe ground?
On March 25, 2015 at 9:07:40 AM, Stefan Karpinski (ste...@karpinski.org) wrote:


Given the performance difference and the different behavior, I'm tempted to 
just deprecate the two-argument form of pointer.

On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good 
seba...@palladiumconsulting.com wrote:
I guess what I find most confusing is that there would be a difference, since 
adding 1 to a pointer only adds one byte, not one element size.


 julia> p1 = pointer(zeros(UInt64));
Ptr{UInt64} @0x00010b28c360
 julia> p1 + 1
Ptr{UInt64} @0x00010b28c361


I would have expected the latter to end in 68. The two-argument pointer 
function gets this “right”. 


 julia> a = zeros(UInt64);
 julia> pointer(a,1)
Ptr{Int64} @0x00010b9c72e0
 julia> pointer(a,2)
Ptr{Int64} @0x00010b9c72e8


I can see arguments multiple ways, but when I’m given a strongly typed pointer 
(Ptr{T}), I would expect it to participate in arithmetic in increments of 
sizeof(T).

On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

That does seem to be the issue. It's tricky to fix since you can't evaluate 
sizeof(Ptr) unless the condition is true.

On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org wrote:
There's a branch in eltype, which is probably causing this difference.

On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
seba...@palladiumconsulting.com wrote:
Yep, that’s done it. The only difference I can see in the code I wrote before 
and this code is that previously I had


convert(Ptr{T}, pointer(raw, byte_number))


whereas here we have


convert(Ptr{T}, pointer(raw) + byte_number - 1)

The former construction seems to emit a call to a Julia-intrinsic function, 
while the latter executes the more expected simple machine loads. Is there a 
subtle difference between the two calls to pointer?

Thanks all for your help!

On March 24, 2015 at 12:19:00 PM, Matt Bauman (mba...@gmail.com) wrote:

(The key is to ensure that the method gets specialized for different types with 
the parametric `::Type{T}` in the signature instead of `T::DataType`).

On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski wrote:
This seems like it works fine to me (on both 0.3 and 0.4):


immutable Test
x::Float32
y::Int64
z::Int8
end


julia> a = [Test(1,2,3)]
1-element Array{Test,1}:
 Test(1.0f0,2,3)


julia> b = copy(reinterpret(UInt8, a))
24-element Array{UInt8,1}:
 0x00
 0x00
 0x80
 0x3f
 0x03
 0x00
 0x00
 0x00
 0x02
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x03
 0xe0
 0x82
 0x10
 0x01
 0x00
 0x00
 0x00


julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) = 
unsafe_load(convert(Ptr{T}, pointer(data) + offset))
prim_read (generic function with 1 method)


julia> prim_read(Test, b, 0)
Test(1.0f0,2,3)


julia> @code_native prim_read(Test, b, 0)
.section 

Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Matt Bauman
See https://github.com/JuliaLang/julia/issues/6219#issuecomment-38117402

On Wednesday, March 25, 2015 at 9:58:46 AM UTC-4, Sebastian Good wrote:

 The benefit of the semantics of the two argument pointer function is that 
 it preserves intuitive pointer arithmetic. As a new (yet happy!) Julia 
 programmer, I certainly don’t know what the deprecation implications of 
 changing pointer arithmetic are (vast, sadly, I imagine), but their 
 behavior certainly violated my “principle of least astonishment” when I 
 found they worked by bytes, not by Ts. That is, instead of 
 base/pointer.jl:64 (and friends) looking like

 +(x::Ptr, y::Integer) = oftype(x, (UInt(x) + (y % UInt) % UInt))

 I would expect them to look like

 +{T}(x::Ptr{T}, y::Integer) = oftype(x, (UInt(x) + sizeof(T)*(y % UInt) 
 % UInt))

 To more closely follow the principle of pointer arithmetic long ago 
 established by C. The type specialization would make these just as fast. 
 For this to work with arrays safely, you’d have to guarantee that dense 
 arrays had no padding between elements. Since C requires this to be the 
 case, it seems we’re on safe ground?

 On March 25, 2015 at 9:07:40 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

 Given the performance difference and the different behavior, I'm tempted 
 to just deprecate the two-argument form of pointer.

 On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good 
 seba...@palladiumconsulting.com wrote:

  I guess what I find most confusing is that there would be a difference, 
 since adding 1 to a pointer only adds one byte, not one element size.
  
  julia> p1 = pointer(zeros(UInt64));
 Ptr{UInt64} @0x00010b28c360
  julia> p1 + 1
 Ptr{UInt64} @0x00010b28c361

  I would have expected the latter to end in 68. The two-argument pointer 
 function gets this “right”. 

  julia> a = zeros(UInt64);
  julia> pointer(a,1)
 Ptr{Int64} @0x00010b9c72e0
  julia> pointer(a,2)
 Ptr{Int64} @0x00010b9c72e8
  
 I can see arguments multiple ways, but when I’m given a strongly typed 
 pointer (Ptr{T}), I would expect it to participate in arithmetic in 
 increments of sizeof(T).
  
 On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

  That does seem to be the issue. It's tricky to fix since you can't 
 evaluate sizeof(Ptr) unless the condition is true.

 On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org 
 wrote:

 There's a branch in eltype, which is probably causing this difference.
  
 On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
 seba...@palladiumconsulting.com wrote:

  Yep, that’s done it. The only difference I can see in the code I 
 wrote before and this code is that previously I had
  
 convert(Ptr{T}, pointer(raw, byte_number))
  
  whereas here we have
  
 convert(Ptr{T}, pointer(raw) + byte_number - 1)

 The former construction seems to emit a call to a Julia-intrinsic 
 function, while the latter executes the more expected simple machine 
 loads. 
 Is there a subtle difference between the two calls to pointer?

 Thanks all for your help!
  
 On March 24, 2015 at 12:19:00 PM, Matt Bauman (mba...@gmail.com) wrote:

  (The key is to ensure that the method gets specialized for different 
 types with the parametric `::Type{T}` in the signature instead of 
 `T::DataType`). 

 On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski 
 wrote: 

 This seems like it works fine to me (on both 0.3 and 0.4): 

  immutable Test
 x::Float32
 y::Int64
 z::Int8
 end
  
  julia> a = [Test(1,2,3)]
 1-element Array{Test,1}:
  Test(1.0f0,2,3)

 julia> b = copy(reinterpret(UInt8, a))
 24-element Array{UInt8,1}:
  0x00
  0x00
  0x80
  0x3f
  0x03
  0x00
  0x00
  0x00
  0x02
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x03
  0xe0
  0x82
  0x10
  0x01
  0x00
  0x00
  0x00

 julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) = 
 unsafe_load(convert(Ptr{T}, pointer(data) + offset))
 prim_read (generic function with 1 method)

 julia> prim_read(Test, b, 0)
 Test(1.0f0,2,3)
  
 julia> @code_native prim_read(Test, b, 0)
 .section __TEXT,__text,regular,pure_instructions
 Filename: none
 Source line: 1
 push RBP
 mov RBP, RSP
 Source line: 1
 mov RCX, QWORD PTR [RSI + 8]
 vmovss XMM0, DWORD PTR [RCX + RDX]
 mov RAX, QWORD PTR [RCX + RDX + 8]
 mov DL, BYTE PTR [RCX + RDX + 16]
 pop RBP
 ret
  
  
 On Tue, Mar 24, 2015 at 5:04 PM, Simon Danisch sdan...@gmail.com 
 wrote:

 There is a high chance that I simply don't understand llvmcall well 
 enough, though ;)

 On Monday, March 23, 2015 at 20:20:09 UTC+1, Sebastian Good wrote: 

 I'm trying to read some binary formatted data. In C, I would define 
 an appropriately padded struct and cast away. Is it possible to do 
 something similar in Julia, though for only one value at a time? 
 Philosophically, I'd like to approximate the following, for some simple 
 bittypes T (Int32, Float32, etc.) 
  
 T readT(char* data, size_t offset) { return 

Re: [julia-users] Julia users Berlin

2015-03-25 Thread David Higgins
Reservation changed:

Thursday, 26th March, *5pm* at St. Oberholz, Rosenthaler Straße 72A

It's still in my name (Higgins).

Looking forward to seeing you then,
David.

On Wednesday, 25 March 2015 15:05:07 UTC+1, Keyan wrote:

 I won’t make it either, but I hope that I can join in on some other day.

 Cheers,
 Keyan

 On 25 Mar 2015, at 11:54, Felix Jung fe...@jung.fm wrote:

 Sorry guys. Would have loved to come but can't make it on that date. If we 
 make this a regular thing I'd be happy to participate in an active manner.

 Have fun,

 Felix

 On 25 Mar 2015, at 09:37, David Higgins daithio...@gmail.com wrote:

 Both times are fine with me, I just need to change the reservation if we 
 go with that.

 By my count, from the thread above the following people are probably 
 coming:
 Viral Shah
 Simon Danisch
 Felix Schueler
 David Higgins
 Felix Jung? (wow, cool stuff :) )
 Fabian Gans?? (Jena)
 One other person contacted me off-list to say they'll come if some travel 
 arrangements work out.

 The first four are ok with an earlier meeting time. I imagine it's getting 
 late for Fabian to arrange a train from Jena, but 5pm would certainly work 
 better for him.

 So, any objections to changing from 7pm to 5pm? (ie. who's lurking out 
 there and hasn't replied yet but was hoping to come?)

 David.

 On Wednesday, 25 March 2015 07:26:16 UTC+1, Viral Shah wrote:

 How about we aim for 5pm in that case? I think I can make it by then. 
 Does that work for others?

 -viral

 On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:

 My train leaves at 9pm (at least the train station is close), so I'd 
 probably go there 1-2 hours early and see who drops by.
 Felix Schüler would come earlier as well ;)
 @David Higgins
 Do we need to call them to adjust this properly?
 On 24 Mar 2015 08:56, Fabian Gans fabia...@gmail.com wrote:

 I will not be there. 7 seems to be too late for me to get back to Jena 
 the same day. 

 Fabian




Re: [julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jiahao Chen
Thanks all for the suggestions so far. Yes, I'm using Julia 0.4-dev as the 
basis of this discussion.


[julia-users] What's the difference between @assert and @test?

2015-03-25 Thread Ismael VC
Hello guys!

I just had someone ask me this question and I didn't know what to answer 
him, example:

julia> using Base.Test

julia> @test 1 == 1

julia> @test 1 == 3
ERROR: test failed: 1 == 3
 in error at error.jl:21 (repeats 2 times)

julia> @assert 1 == 1

julia> @assert 1 == 3
ERROR: assertion failed: 1 == 3
 in error at error.jl:21 (repeats 2 times)

I fail to see the difference, besides that `@test` conveys the idea of 
testing. 

Even the error message is the same:  `in error at error.jl:21 (repeats 
2 times)`

Thanks!
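
One concrete difference: @test reports through Base.Test's pluggable handler 
mechanism (the default handler just throws, which is why the output looks the 
same), while @assert throws an ErrorException directly. A minimal sketch with 
a custom handler, following the 0.3-era Base.Test API:

using Base.Test

custom_handler(r::Test.Success) = println("Success on $(r.expr)")
custom_handler(r::Test.Failure) = println("Failure on $(r.expr)")
custom_handler(r::Test.Error)   = rethrow(r)

Test.with_handler(custom_handler) do
    @test 1 == 1   # prints a success line
    @test 1 == 3   # prints a failure line instead of erroring
end

@assert 1 == 3     # no handler hook: always throws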


Re: [julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jiahao Chen
On Wed, Mar 25, 2015 at 1:13 PM, Jason Riedy ja...@lovesgoodfood.com wrote:
 Similarly for moving the row scaling and next pivot search into
 the loop.

I tried to manually inline idxmaxabs. It made absolutely no difference
on my machine. The row scaling takes ~0.05% of total execution time.

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jiahao Chen
 The swap could be done without temporaries, but I assume you're also 
trying to match the look of the pseudocode?

It would be interesting to see how fast the code can get without 
significantly altering its look, or alternatively how much one would have 
to change to achieve speedups.

I profiled the code for a 500 x 500 random matrix and the swaps took ~ 0.5% 
of the execution time, IIRC. I'm not too concerned with those particular 
lines.


Re: [julia-users] Does julia's profiling capabilities extend to the use of external code with ccall?

2015-03-25 Thread Patrick Sanan
Great! I will experiment further. I am hoping that this will also apply 
to external Fortran routines, and that I'll be able to monitor memory 
allocation in these external functions.

On Wednesday, March 25, 2015 at 11:38:00 AM UTC+1, Stefan Karpinski wrote:

 Yes: if you call Profile.print(C=true) you'll see C stack frames as well.

 On Wed, Mar 25, 2015 at 11:29 AM, Patrick Sanan patric...@gmail.com 
 wrote:

 I am interested in profiling some julia code, but a substantial fraction 
 of the time and memory usage will be due to functions from an external 
 library, called with ccall. Should I be able to collect data about time 
 spent and memory resources used in this case?




[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Matt Bauman
The swap could be done without temporaries, but I assume you're also trying 
to match the look of the pseudocode?

On Wednesday, March 25, 2015 at 11:22:41 AM UTC-4, Jiahao Chen wrote:

 Here is some code I wrote for completely pivoted LU factorizations. 
 Can you make it even faster? 

 Anyone who can demonstrate verifiable speedups (or find bugs relative 
 to the textbook description) while sticking to pure Julia code wins an 
 acknowledgment in an upcoming paper I'm writing about Julia, and a 
 small token of my appreciation with no cash value. :) 

 Reference: G. H. Golub and C. F. Van Loan, Matrix Computations 4/e, 
 Algorithm 3.4.3, p. 132. 

 Thanks, 

 Jiahao Chen 
 Staff Research Scientist 
 MIT Computer Science and Artificial Intelligence Laboratory 



[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jiahao Chen
Also, Andreas just pointed out the loop in indmaxabs traverses the matrix 
in row major order, not column major. (for j in s, i in r is faster)


Re: [julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Tim Holy
If you want it to look nice and are running on 0.4, just switching to

slice(A, 1:n, k) ↔ slice(A, 1:n, λ)

should also get you a performance boost (especially for large matrices). 
Obviously you could do even better by devectorizing, but it wouldn't be as 
pretty.

Off-topic, but your use of unicode for this is very elegant, and eye-opening 
for me.

Best,
--Tim

On Wednesday, March 25, 2015 09:24:09 AM Matt Bauman wrote:
 The swap could be done without temporaries, but I assume you're also trying
 to match the look of the pseudocode?
 
 On Wednesday, March 25, 2015 at 11:22:41 AM UTC-4, Jiahao Chen wrote:
  Here is some code I wrote for completely pivoted LU factorizations.
  Can you make it even faster?
  
  Anyone who can demonstrate verifiable speedups (or find bugs relative
  to the textbook description) while sticking to pure Julia code wins an
  acknowledgment in an upcoming paper I'm writing about Julia, and a
  small token of my appreciation with no cash value. :)
  
  Reference: G. H. Golub and C. F. Van Loan, Matrix Computations 4/e,
  Algorithm 3.4.3, p. 132.
  
  Thanks,
  
  Jiahao Chen
  Staff Research Scientist
  MIT Computer Science and Artificial Intelligence Laboratory



[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jason Riedy
And Tim Holy writes:
 Obviously you could do even better by devectorizing, but it
 wouldn't be as pretty.

Similarly for moving the row scaling and next pivot search into
the loop.



[julia-users] SubArray memory footprint

2015-03-25 Thread Sebastian Good
I was surprised by two things in the SubArray implementation

1) They are big! About 175 bytes for a simple subset from a 1D array from 
my naive measurement.[*]
2) They are not flat. That is, they seem to get heap allocated and have 
indirections in them.

I'm guessing this is because SubArrays aren't immutable, and tuples aren't 
always inlined into an immutable either, but I am really grasping at straws.

I'm walking through a very large memory mapped structure and generating 
hundreds of thousands of subarrays to look at various windows of it. I was 
hoping that by using views I would reduce memory usage as compared with 
creating copies of those windows. Indeed I am, but by a lot less than I 
thought I would be. 

In other words: SubArrays are surprisingly expensive because they 
necessitate several memory allocations apiece.

From the work that's gone into SubArrays I'm guessing that isn't meant to 
be. They are so carefully specialized that I would expect them to behave 
roughly like a (largish) struct in common use.

Is this a misconception? Do I need to take more care about how I 
parameterize the container I put them in to take advantage?

[*]
 julia> const b = [1:5;]
 julia> function f()
  for i in 1:1_000_000 sub(b, 1:2) end
 end
 julia> @time f()
elapsed time: 0.071933306 seconds (175 MB allocated, 9.21% gc time in 8 
pauses with 0 full sweep)


Re: [julia-users] SubArray memory footprint

2015-03-25 Thread Sebastian Good
That helps a bit; I am indeed working on v0.4. A zero-allocation SubArray would 
be a phenomenal achievement. I guess it’s at that point that getindex with 
ranges will return SubArrays, i.e. mutable views, instead of copies? Is that 
still targeted for v0.4?

On March 25, 2015 at 3:30:03 PM, Tim Holy (tim.h...@gmail.com) wrote:

SubArrays are immutable on 0.4. But tuples aren't inlined, which is going to  
force allocation.  

Assuming you're using 0.3, there's a second problem: the code in the  
constructor is not type-stable, and that makes construction slow and memory-  
hungry. Compare the following on 0.3 and 0.4:  

julia> A = rand(2,10^4);

julia> function myfun(A)
s = 0.0  
for j = 1:size(A,2)  
S = slice(A, :, j)  
s += sum(S)  
end  
s  
end  
myfun (generic function with 1 method)  


On 0.3:
# warmup call
julia> @time myfun(A)
elapsed time: 0.145141435 seconds (11277536 bytes allocated)

# the real call
julia> @time myfun(A)
elapsed time: 0.034556106 seconds (7866896 bytes allocated)


On 0.4:
julia> @time myfun(A)
elapsed time: 0.190744146 seconds (7 MB allocated)

julia> @time myfun(A)
elapsed time: 0.000697173 seconds (1 MB allocated)



So you can see it's about 50x faster and about 8-fold more memory efficient on  
0.4. Once Jeff finishes his tuple overhaul, the allocation on 0.4 could  
potentially drop to 0.  

--Tim  


On Wednesday, March 25, 2015 11:18:08 AM Sebastian Good wrote:  
 I was surprised by two things in the SubArray implementation  
  
 1) They are big! About 175 bytes for a simple subset from a 1D array from  
 my naive measurement.[*]  
 2) They are not flat. That is, they seem to get heap allocated and have  
 indirections in them.  
  
 I'm guessing this is because SubArrays aren't immutable, and tuples aren't  
 always inlined into an immutable either, but I am really grasping at straws.  
  
 I'm walking through a very large memory mapped structure and generating  
 hundreds of thousands of subarrays to look at various windows of it. I was  
 hoping that by using views I would reduce memory usage as compared with  
 creating copies of those windows. Indeed I am, but by a lot less than I  
 thought I would be.  
  
 In other words: SubArrays are surprisingly expensive because they  
 necessitate several memory allocations apiece.  
  
 From the work that's gone into SubArrays I'm guessing that isn't meant to  
 be. They are so carefully specialized that I would expect them to behave  
 roughly like a (largish) struct in common use.  
  
 Is this a misconception? Do I need to take more care about how I  
 parameterize the container I put them in to take advantage?  
  
 [*]  
  
  julia> const b = [1:5;]  
  julia> function f()  
 for i in 1:1_000_000 sub(b, 1:2) end  
 end  
  julia> @time f()  
  
 elapsed time: 0.071933306 seconds (175 MB allocated, 9.21% gc time in 8  
 pauses with 0 full sweep)  



Re: [julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Tim Holy
Actually, didn't the original implementation have a couple of bugs?

- A[1:n, k] makes a copy, so I'm not sure you were actually swapping elements 
in the original A
- If A[i,j] < 0, you're storing a negative value in themax, making it easy for 
the next nonnegative value to beat it. You presumably want to store 
abs(A[i,j]).

See attached (which is also faster, on 0.4).

--Tim

On Wednesday, March 25, 2015 11:37:07 AM you wrote:
 If you want it to look nice and are running on 0.4, just switching to
 
 slice(A, 1:n, k) ↔ slice(A, 1:n, λ)
 
 should also get you a performance boost (especially for large matrices).
 Obviously you could do even better by devectorizing, but it wouldn't be as
 pretty.
 
 Off-topic, but your use of unicode for this is very elegant, and eye-opening
 for me.
 
 Best,
 --Tim
 
 On Wednesday, March 25, 2015 09:24:09 AM Matt Bauman wrote:
  The swap could be done without temporaries, but I assume you're also
  trying
  to match the look of the pseudocode?
  
  On Wednesday, March 25, 2015 at 11:22:41 AM UTC-4, Jiahao Chen wrote:
   Here is some code I wrote for completely pivoted LU factorizations.
   Can you make it even faster?
   
   Anyone who can demonstrate verifiable speedups (or find bugs relative
   to the textbook description) while sticking to pure Julia code wins an
   acknowledgment in an upcoming paper I'm writing about Julia, and a
   small token of my appreciation with no cash value. :)
   
   Reference: G. H. Golub and C. F. Van Loan, Matrix Computations 4/e,
   Algorithm 3.4.3, p. 132.
   
   Thanks,
   
   Jiahao Chen
   Staff Research Scientist
   MIT Computer Science and Artificial Intelligence Laboratory
x ↔ y = for i=1:length(x) #Define swap function
  x[i], y[i] = y[i], x[i]
end

function idxmaxabs(A, r)
    r1 = r[1]
    μ, λ = r1, r1
    themax = abs(A[r1, r1])
    @inbounds for j in r, i in r
        a = abs(A[i,j])
        if a > themax
            μ, λ, themax = i, j, a   # store the absolute value, per the fix above
        end
    end
    return μ, λ
end

function lucompletepiv!(A)
  n=size(A, 1)
  rowpiv=zeros(Int, n-1)
  colpiv=zeros(Int, n-1)
  for k=1:n-1
μ, λ = idxmaxabs(A, k:n)
rowpiv[k] = μ
slice(A, k, 1:n) ↔ slice(A, μ, 1:n)
colpiv[k] = λ
slice(A, 1:n, k) ↔ slice(A, 1:n, λ)
if A[k,k] ≠ 0
  ρ = k+1:n
  scale!(1/A[k,k], sub(A, ρ, k))
  @inbounds for j in ρ
  Akj = A[k, j]
  @simd for i in ρ
  A[i, j] -= A[i, k] * Akj
  end
  end
end
  end
  return (A, rowpiv, colpiv)
end
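
A quick way to exercise the attached version (randn and the 500 x 500 size 
match the matrix profiled earlier in the thread; pass a copy, since the 
factorization overwrites its argument):

A = randn(500, 500)
B, rowpiv, colpiv = lucompletepiv!(copy(A))
@time lucompletepiv!(copy(A))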


Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Mauro
This is a known limitation of Julia.  The trouble is that Julia cannot
do its type inference with the passed-in function.  I don't have time
to search for the relevant issues but you should be able to find them.
Similarly, lambdas also suffer from this.  Hopefully this will be
resolved soon!

On Wed, 2015-03-25 at 19:41, Phil Tomson philtom...@gmail.com wrote:
  Maybe this is just obvious, but it's not making much sense to me.

 If I have a reference to a function (pardon if that's not the correct 
 Julia-ish terminology - basically just a variable that holds a Function 
 type) and call it, it runs much more slowly (presumably because it's 
 allocating a lot more memory) than it would if I make the same call with  
 the function directly.

 Maybe that's not so clear, so let me show an example using the abs function:

 function test_time()
  sum = 1.0
  for i in 1:100
sum += abs(sum)
  end
  sum
  end

 Run it a few times with @time:

julia> @time test_time()
 elapsed time: 0.007576883 seconds (96 bytes allocated)
 Inf

julia> @time test_time()
 elapsed time: 0.002058207 seconds (96 bytes allocated)
 Inf

 julia> @time test_time()
 elapsed time: 0.005015882 seconds (96 bytes allocated)
 Inf

 Now let's try a modified version that takes a Function on the input:

 function test_time(func::Function)
  sum = 1.0
  for i in 1:100
sum += func(sum)
  end
  sum
  end

 So essentially the same function, but this time the function is passed in. 
 Running this version a few times:

 julia> @time test_time(abs)
 elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
 gc time)
 Inf
  
 julia> @time test_time(abs)
 elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% gc 
 time)
 Inf

 So roughly 10X slower, probably because of the much larger amount of memory 
 allocated (3280 bytes vs. 96 bytes)

 Why does the second version allocate so much more memory? (I'm running 
 Julia 0.3.6 for this testcase)

 Phil



Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Tim Holy
There have been many prior posts about this topic. Maybe we should add a FAQ 
page we can direct people to. In the mean time, your best bet is to search (or 
use FastAnonymous or NumericFuns).

--Tim




[julia-users] Result shape not specified when using @parallel

2015-03-25 Thread Archibald Pontier
Hi everyone,

I recently started using Julia for my projects and I'm currently quite 
stuck on how to parallelize things. 

I've got the two following functions:

@everywhere pixel(p) = [p.r, p.g, p.b];

which takes a RGB pixel (as defined in the Images module) and converts it 
into a vector of its RGB components (in my case always Float64), and

@everywhere function p_theta(pixel, mu, Sigma)
  Sigma = inv(Sigma);
  d = size(mu, 1);
  temp = dot(-(pixel - mu), Sigma * (pixel - mu)) / 2;
  result = (sqrt(det(Sigma)) * exp(temp) / sqrt((2*pi)^d))
  return result;
end

which calculates the probability for a given pixel, given a 3 components 
vector mu and a 3x3 covariance matrix Sigma.

Now, when I use them without parallelizing, there is no problem. However, 
as soon as I use them in parallel, for example, given an image img

s = size(img, 1) * size(img, 2);
t_img = reshape(img, s)

s_D = @parallel (vcat) for i in 1:s
  p = pixel(t_img[i]);
  d = p_theta(p, mu, Sigma);
  d
end

it crashes with the following error: 
ERROR: result shape not specified in _reinterpret_cvarray at 
~/.julia/v0.3/Images/src/core.jl:140
all the child processes terminate, and I end up with only 1 julia process 
left.

I tried various things, including pmap, without success.

Any idea why that happens?

Thanks in advance!



Re: [julia-users] SubArray memory footprint

2015-03-25 Thread Kevin Squire
Others are more qualified to answer the specific question about SubArrays,
but you might check out the ArrayViews package.  For your test, it
allocates a little under half the memory and is a little over twice as fast
(after warmup):

julia> const b = [1:5;];

julia> function f()
           for i in 1:1_000_000 sub(b, 1:2) end
       end
f (generic function with 1 method)

julia> using ArrayViews

julia> function f2()
           for i in 1:1_000_000 view(b, 1:2) end
       end
f2 (generic function with 1 method)

julia> @time f()  # after warmup
elapsed time: 0.048006869 seconds (137 MB allocated, 6.80% gc time in 6
pauses with 0 full sweep)

julia> @time f2()  # after warmup
elapsed time: 0.018902176 seconds (61 MB allocated, 6.60% gc time in 2
pauses with 0 full sweep)


Cheers,
   Kevin

On Wed, Mar 25, 2015 at 11:18 AM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:

 I was surprised by two things in the SubArray implementation

 1) They are big! About 175 bytes for a simple subset from a 1D array from
 my naive measurement.[*]
 2) They are not flat. That is, they seem to get heap allocated and have
 indirections in them.

 I'm guessing this is because SubArrays aren't immutable, and tuples
 aren't always inlined into an immutable either, but I am really grasping at
 straws.

 I'm walking through a very large memory mapped structure and generating
 hundreds of thousands of subarrays to look at various windows of it. I was
 hoping that by using views I would reduce memory usage as compared with
 creating copies of those windows. Indeed I am, but by a lot less than I
 thought I would be.

 In other words: SubArrays are surprisingly expensive because they
 necessitate several memory allocations apiece.

 From the work that's gone into SubArrays I'm guessing that isn't meant to
 be. They are so carefully specialized that I would expect them to behave
 roughly like a (largish) struct in common use.

 Is this a misconception? Do I need to take more care about how I
 parameterize the container I put them in to take advantage?

 [*]
  const b = [1:5;]
  function f()
   for i in 1:1_000_000 sub(b, 1:2) end
 end
  @time f()
 elapsed time: 0.071933306 seconds (175 MB allocated, 9.21% gc time in 8
 pauses with 0 full sweep)



[julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson
 Maybe this is just obvious, but it's not making much sense to me.

If I have a reference to a function (pardon if that's not the correct 
Julia-ish terminology - basically just a variable that holds a Function 
type) and call it, it runs much more slowly (presumably because it's 
allocating a lot more memory) than it would if I make the same call with  
the function directly.

Maybe that's not so clear, so let me show an example using the abs function:

function test_time()
 sum = 1.0
 for i in 1:100
   sum += abs(sum)
 end
 sum
 end

Run it a few times with @time:

julia> @time test_time()
elapsed time: 0.007576883 seconds (96 bytes allocated)
Inf

julia> @time test_time()
elapsed time: 0.002058207 seconds (96 bytes allocated)
Inf

julia> @time test_time()
elapsed time: 0.005015882 seconds (96 bytes allocated)
Inf

Now let's try a modified version that takes a Function on the input:

function test_time(func::Function)
 sum = 1.0
 for i in 1:100
   sum += func(sum)
 end
 sum
 end

So essentially the same function, but this time the function is passed in. 
Running this version a few times:

julia> @time test_time(abs)
elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
gc time)
Inf

julia> @time test_time(abs)
elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% gc 
time)
Inf

So roughly 10X slower, probably because of the much larger amount of memory 
allocated (3280 bytes vs. 96 bytes)

Why does the second version allocate so much more memory? (I'm running 
Julia 0.3.6 for this testcase)

Phil




[julia-users] Re: Julia v0.3.7

2015-03-25 Thread SixString
Thanks to all who contributed to v0.3.7.

Unless further testing is going on, this milestone can now be closed at 
https://github.com/JuliaLang/julia/milestones
It would be helpful if the v0.4.0 milestone due date was updated to provide 
a more realistic projection.


Re: [julia-users] SubArray memory footprint

2015-03-25 Thread Tim Holy
SubArrays are immutable on 0.4. But tuples aren't inlined, which is going to 
force allocation.

Assuming you're using 0.3, there's a second problem: the code in the 
constructor is not type-stable, and that makes construction slow and memory-
hungry. Compare the following on 0.3 and 0.4:

julia> A = rand(2,10^4);

julia> function myfun(A)
           s = 0.0
           for j = 1:size(A,2)
               S = slice(A, :, j)
               s += sum(S)
           end
           s
       end
myfun (generic function with 1 method)


On 0.3:
# warmup call
julia> @time myfun(A)
elapsed time: 0.145141435 seconds (11277536 bytes allocated)

# the real call
julia> @time myfun(A)
elapsed time: 0.034556106 seconds (7866896 bytes allocated)


On 0.4:
julia> @time myfun(A)
elapsed time: 0.190744146 seconds (7 MB allocated)

julia> @time myfun(A)
elapsed time: 0.000697173 seconds (1 MB allocated)



So you can see it's about 50x faster and about 8-fold more memory efficient on 
0.4. Once Jeff finishes his tuple overhaul, the allocation on 0.4 could 
potentially drop to 0.

--Tim





Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson
I have a couple of instances where a function is determined by some 
parameters (in a JSON file in this case) and I have to call it in this 
manner.  I'm thinking it should be possible to speed these up via a macro, 
but I'm a macro newbie.  I'll probably post a different question related to 
that, but would a macro be feasible in an instance like this?




[julia-users] Does it make sense to use Uint8 instead of Int64 on x64 OS?

2015-03-25 Thread Boris Kheyfets
The question says it all. I wonder if one would get any benefits of keeping 
small things in small containers: Uint8 instead of Int64 on x64 OS?


Re: [julia-users] Re: Does it make sense to use Uint8 instead of Int64 on x64 OS?

2015-03-25 Thread Boris Kheyfets
Thanks.




[julia-users] Re: Can you make this linear algebra code faster?

2015-03-25 Thread Jason Riedy
And Jiahao Chen writes:
 I tried to manually inline idxmaxabs. It made absolutely no difference
 on my machine. The row scaling takes ~0.05% of total execution time.

Simply inlining, sure, but you could scale inside the outer loop
and find the next pivot in the inner loop.  Making only a single
pass over the data should save more than 0.05% once you leave
cache.  But as long as you're in cache (500x500 is approx. 2MiB),
not much will matter.

Ultimately, I'm not sure who's interested in complete pivoting
for LU.  That choice alone kills performance on modern machines
for negligible benefit.  You likely would find more interest for
column-pivoted QR or rook pivoting in LDL^T.
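
A rough sketch of the fused pass Jason describes (assuming the same loop
structure as lucompletepiv! earlier in this thread): do the rank-1 update
and the search for the next pivot in one sweep over the trailing block.

function update_and_findpiv!(A, k)
    n = size(A, 1)
    μ, λ, themax = k+1, k+1, 0.0
    @inbounds for j in k+1:n
        Akj = A[k, j]
        for i in k+1:n
            A[i, j] -= A[i, k] * Akj   # rank-1 update
            a = abs(A[i, j])
            if a > themax              # track the next pivot on the fly
                μ, λ, themax = i, j, a
            end
        end
    end
    return μ, λ
end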



[julia-users] Re: What's the difference between @assert and @test?

2015-03-25 Thread Ivar Nesje
Good question!

In 0.4 the printing for @test has been improved quite significantly to 
display the values of variables.

julia> a,b = 1,2

julia> @test a==b
ERROR: test failed: (1 == 2)
 in expression: a == b
 in error at error.jl:19
 in default_handler at test.jl:27
 in do_test at test.jl:50

julia> @assert a==b
ERROR: AssertionError: a == b


There is some discussion in #10614 
https://github.com/JuliaLang/julia/issues/10614 about means to disable 
assertions, so there is a conceptual difference: assertions are used 
inside a program to guard against invalid inputs to functions, while tests are 
usually run externally to check that functions work correctly for 
different inputs.

Regards
Ivar
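
A tiny sketch of that conceptual split (the function name is illustrative):

using Base.Test

function mysqrt(x)
    @assert x >= 0   # invariant guarded inside the code itself
    sqrt(x)
end

@test mysqrt(4.0) == 2.0   # external check that the function behaves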

On Wednesday, 25 March 2015 18:26:30 UTC+1, Ismael VC wrote:

 Hello guys!

 I just had someone ask me this question and I didn't know what to answer 
 him, example:

 julia> using Base.Test

 julia> @test 1 == 1

 julia> @test 1 == 3
 ERROR: test failed: 1 == 3
  in error at error.jl:21 (repeats 2 times)

 julia> @assert 1 == 1

 julia> @assert 1 == 3
 ERROR: assertion failed: 1 == 3
  in error at error.jl:21 (repeats 2 times)

 I fail to see the difference, besides that `@test` conveys the idea of 
 testing. 

 Even the error message is the same:  `in error at error.jl:21 
 (repeats 2 times)`

 Thanks!



Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote:

 Don't use a macro, just use the @anon macro to create an object that will 
 be 
 fast to use as a function. 


I guess I'm not understanding how this is used, I would have thought I'd 
need to do something like:

julia> function test_time(func::Function)
           f = @anon func
           sum = 1.0
           for i in 1:100
               sum += f(sum)
           end
           sum
       end
ERROR: `anonsplice` has no method matching anonsplice(::Symbol)


... or even trying it outside of the function:
julia> f = @anon abs
ERROR: `anonsplice` has no method matching anonsplice(::Symbol)

 




Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:

 No, it's 

    f = @anon x->abs(x) 

 and then pass f to test_time. Declare the function like this: 

 function test_time{F}(func::F) 
  
 end 


Ok, got that working, but when I try using it inside the function (which 
would be closer to what I really need to do):

 function test_time2(func::Function)
     fn = @anon x->func(x)
     sum = 1.0
     for i in 1:100
         sum += fn(sum)
     end
     sum
 end

julia> @time test_time2(abs)
ERROR: `func` has no method matching func(::Float64)
 in ##26503 at /home/phil/.julia/v0.3/FastAnonymous/src/FastAnonymous.jl:2
 in test_time2 at none:5







Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Tim Holy
Don't use a macro, just use the @anon macro to create an object that will be 
fast to use as a function.

--Tim




[julia-users] Re: Does it make sense to use Uint8 instead of Int64 on x64 OS?

2015-03-25 Thread Ivar Nesje
If you store millions of them, you can use only 1/8 of the space, and get 
better memory efficiency.
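
To make that concrete, sizeof on an array reports the bytes its contents
occupy:

sizeof(zeros(Uint8, 10^6))  # 1000000 bytes
sizeof(zeros(Int64, 10^6))  # 8000000 bytes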

On Wednesday, 25 March 2015 21:11:05 UTC+1, Boris Kheyfets wrote:

 The question says it all. I wonder if one would get any benefits of 
 keeping small things in small containers: Uint8 instead of Int64 on x64 OS?



Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Tim Holy
No, it's

   f = @anon x->abs(x)

and then pass f to test_time. Declare the function like this:

function test_time{F}(func::F)

end

--Tim
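
Putting the two pieces together, a minimal end-to-end sketch (assumes the
0.3-era FastAnonymous API):

using FastAnonymous

function test_time{F}(func::F)
    s = 1.0
    for i in 1:100
        s += func(s)
    end
    s
end

f = @anon x->abs(x)
@time test_time(f)  # allocations should be close to the hard-coded version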



Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Tony Kelman
The function-to-be-called is not known at compile time in Phil's 
application, apparently.

Question for Phil: are there a limited set of functions that you know 
you'll be calling here? I was doing something similar recently, where it 
actually made the most sense to create a fixed Dict{Symbol, UInt} of 
function codes, use that dict as a lookup table, passing the symbol into 
the function and generating the runtime conditionals for which function to 
call via a macro. I can point you to some rough code if it would help and 
if this is at all similar to what you're trying to do.
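
For a small fixed set of functions, even a hand-written symbol dispatch
avoids the boxed-Function penalty; a minimal sketch (the set of symbols is
illustrative):

function apply_op(op::Symbol, x::Float64)
    if op == :abs        # each branch is a direct, inferable call
        return abs(x)
    elseif op == :sin
        return sin(x)
    else
        error("unknown op: $op")
    end
end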



Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread elextr


On Thursday, March 26, 2015 at 8:06:41 AM UTC+11, Phil Tomson wrote:



 On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:

 No, it's 

    f = @anon x->abs(x) 

 and then pass f to test_time. Declare the function like this: 

 function test_time{F}(func::F) 
  
 end 


 Ok, got that working, but when I try using it inside the function (which 
 would be closer to what I really need to do):

  function test_time2(func::Function)
  fn = @anon x->func(x)


No, as Tim said, you do @anon outside test_time with the function you want 
to use and pass the result as the parameter.  Note also his point of how to 
declare test_time as a generic.

Cheers
Lex

 

  sum = 1.0
  for i in 1:100
 sum += fn(sum)
  end
  sum
  end

 julia @time test_time2(abs)
 ERROR: `func` has no method matching func(::Float64)
  in ##26503 at /home/phil/.julia/v0.3/FastAnonymous/src/FastAnonymous.jl:2
  in test_time2 at none:5






[julia-users] passing in a symbol to a macro and applying it as a function to expression

2015-03-25 Thread Phil Tomson
I want to be able to pass in a symbol which represents a function name into 
a macro and then have that function applied to an expression, something 
like:

  @apply_func :abs (x - y)

(where (x-y) could stand in for some expression or a single number)

I did a bit of searching here and came up with the following (posted by Tim 
Holy last year, from this post: 
https://groups.google.com/forum/#!searchin/julia-users/macro$20symbol/julia-users/lrtnyACdrxQ/5wovJmrUs0MJ
 
):

  macro apply_func(fn::Symbol, ex::Expr)
      qex = Expr(:quote, ex)
      quote
          $(esc(fn))($qex)
      end
  end

I've got a Symbol which represents a function name and I'd like to apply it to 
the expression, so I'd like to be able to do:
   x = 10
   y = 11 
  @apply_func :abs (x - y)
...And get: 1

But first of all, a symbol doesn't work there:
julia> macroexpand(:(@apply_func :abs 1))
:($(Expr(:error, TypeError(:anonymous,typeassert,Symbol,:(:abs)))))

I think this is because the arguments to the macro are already being passed 
in as a symbol... so it becomes ::abs

Ok, so what if I go with:

julia> macroexpand(:(@apply_func abs 1+2))
quote  # none, line 4:
    abs($(Expr(:copyast, :(:(1 + 2)))))
end

...that seems problematic because we're passing an Expr to the abs then:

julia> @apply_func abs 1+2
ERROR: `abs` has no method matching abs(::Expr)

Ok, so now I'm realizing that macro isn't going to do what I want it to, so 
let's change it:

  macro apply_func(fn::Symbol, ex::Expr)
      quote
          $(esc(fn))($ex)
      end
  end

That works better:
julia> @apply_func abs 1+2
3

But It won't work if I pass in a symbol:
julia> macroexpand(:(@apply_func :abs 1+2))
:($(Expr(:error, TypeError(:anonymous,typeassert,Symbol,:(:abs)))))

How would I go about getting that case to work?

Phil
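
One way to make the quoted-symbol case work is to unwrap the quote inside
the macro instead of demanding a bare Symbol; a sketch:

macro apply_func(fn, ex)
    # `:abs` arrives as a quoted symbol, not a Symbol, so unwrap it first
    if isa(fn, QuoteNode)
        fn = fn.value
    elseif isa(fn, Expr) && fn.head == :quote
        fn = fn.args[1]
    end
    quote
        $(esc(fn))($(esc(ex)))
    end
end

# x = 10; y = 11
# @apply_func :abs (x - y)   # => 1
# @apply_func abs (x - y)    # => 1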





Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 5:07:27 PM UTC-7, Tony Kelman wrote:

 The function-to-be-called is not known at compile time in Phil's 
 application, apparently.


Right, they come out of a JSON file. I parse the JSON and construct a list 
of processing nodes from it and those could have one of two functions.
 


 Question for Phil: are there a limited set of functions that you know 
 you'll be calling here? 


True. Currently two. Could be more later.
 

 I was doing something similar recently, where it actually made the most 
 sense to create a fixed Dict{Symbol, UInt} of function codes, use that dict 
 as a lookup table, passing the symbol into the function and generating the 
 runtime conditionals for which function to call via a macro. I can point 
 you to some rough code if it would help and if this is at all similar to 
 what you're trying to do.


I would be interested in seeing your macro. 
I actually can already get the function name as a symbol (instead of having 
it be a function) and I've been trying to make a macro that applies that 
function (as defined by the symbol) to the arguments. But so far not 
working (I just posted a query about it)





Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Tony Kelman
Here's the code I was referring to 
- https://github.com/tkelman/BLOM.jl/blob/master/src/functioncodes.jl

In my case I'm using Float64 function codes for other reasons, created by 
reinterpreting a UInt64 with a few bits flipped. Using UInts directly, 
probably from the object_id of the symbol, would be less work. hash() would 
also work but I'm going to be doing some corresponding C code generation so 
I want the code values to be semi-stable, and hash(::Symbol) changed 
results not that long ago on 0.4.

Anyway at the end of the file you can see I sort the codes and create a 
lookup table recursively. If you only have 2 possible functions your code 
could be simpler. Expr(:call, sym, :x) is what I'm doing for calling the 
function.





Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Sebastian Good
I guess what I find most confusing is that there would be a difference, since 
adding 1 to a pointer only adds one byte, not one element size.

julia> p1 = pointer(zeros(UInt64))
Ptr{UInt64} @0x00010b28c360

julia> p1 + 1
Ptr{UInt64} @0x00010b28c361

I would have expected the latter to end in 68. The two-argument pointer 
function gets this “right”. 

julia> a = zeros(UInt64);

julia> pointer(a,1)
Ptr{Int64} @0x00010b9c72e0

julia> pointer(a,2)
Ptr{Int64} @0x00010b9c72e8

I can see arguments multiple ways, but when I’m given a strongly typed pointer 
(Ptr{T}), I would expect it to participate in arithmetic in increments of 
sizeof(T).
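
A tiny demonstration of the byte-wise rule (element-sized steps need an
explicit sizeof):

a = zeros(UInt64, 4)
p = pointer(a)
# adding an Int to a Ptr moves by bytes, not elements
@assert p + sizeof(UInt64) == pointer(a, 2)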

On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org) wrote:

That does seem to be the issue. It's tricky to fix since you can't evaluate 
sizeof(Ptr) unless the condition is true.

On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org wrote:
There's a branch in eltype, which is probably causing this difference.

On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:
Yep, that’s done it. The only difference I can see in the code I wrote before 
and this code is that previously I had

convert(Ptr{T}, pointer(raw, byte_number))

whereas here we have

convert(Ptr{T}, pointer(raw) + byte_number - 1)

The former construction seems to emit a call to a Julia-intrinsic function, 
while the latter executes the more expected simple machine loads. Is there a 
subtle difference between the two calls to pointer?

Thanks all for your help!

On March 24, 2015 at 12:19:00 PM, Matt Bauman (mbau...@gmail.com) wrote:

(The key is to ensure that the method gets specialized for different types with 
the parametric `::Type{T}` in the signature instead of `T::DataType`).

On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski wrote:
This seems like it works fine to me (on both 0.3 and 0.4):

immutable Test
x::Float32
y::Int64
z::Int8
end

julia> a = [Test(1,2,3)]
1-element Array{Test,1}:
 Test(1.0f0,2,3)

julia> b = copy(reinterpret(UInt8, a))
24-element Array{UInt8,1}:
 0x00
 0x00
 0x80
 0x3f
 0x03
 0x00
 0x00
 0x00
 0x02
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x03
 0xe0
 0x82
 0x10
 0x01
 0x00
 0x00
 0x00

julia> prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) = 
unsafe_load(convert(Ptr{T}, pointer(data) + offset))
prim_read (generic function with 1 method)

julia> prim_read(Test, b, 0)
Test(1.0f0,2,3)

julia> @code_native prim_read(Test, b, 0)
.section __TEXT,__text,regular,pure_instructions
Filename: none
Source line: 1
push RBP
mov RBP, RSP
Source line: 1
mov RCX, QWORD PTR [RSI + 8]
vmovss XMM0, DWORD PTR [RCX + RDX]
mov RAX, QWORD PTR [RCX + RDX + 8]
mov DL, BYTE PTR [RCX + RDX + 16]
pop RBP
ret


On Tue, Mar 24, 2015 at 5:04 PM, Simon Danisch sdan...@gmail.com wrote:
There is a high chance that I simply don't understand llvmcall well enough, 
though ;)

On Monday, 23 March 2015 20:20:09 UTC+1, Sebastian Good wrote:
I'm trying to read some binary formatted data. In C, I would define an 
appropriately padded struct and cast away. Is is possible to do something 
similar in Julia, though for only one value at a time? Philosophically, I'd 
like to approximate the following, for some simple bittypes T (Int32, Float32, 
etc.)

T readT(char* data, size_t offset) { return *(T*)(data + offset); }

The transliteration of this brain-dead approach results in the following, which 
seems to allocate a boxed Pointer object on every invocation. The pointer 
function comes with ample warnings about how it shouldn't be used, and I 
imagine that it's not polite to the garbage collector.


prim_read{T}(::Type{T}, data::AbstractArray{Uint8,1}, byte_number) =
    unsafe_load(convert(Ptr{T}, pointer(data, byte_number)))

I can reinterpret the whole array, but this will involve a division of the 
offset to calculate the new offset relative to the reinterpreted array, and it 
allocates an array object. 

Is there a better way to simply read the machine word at a particular offset in 
a byte array? I would think it should inline to a single assembly instruction 
if done right.
    





Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 12:34:47 PM UTC-7, Mauro wrote:

 This is a known limitation of Julia.  The trouble is that Julia cannot 
 do its type interference with the passed in function.  I don't have time 
 to search for the relevant issues but you should be able to find them. 
 Similarly, lambdas also suffer from this.  Hopefully this will be 
 resolved soon! 


Mauro: When you say "Hopefully this will be resolved soon!", does that mean 
this is an issue with a planned future fix?

For those of us used to programming in a very functional style, this 
limitation leads to less performant code in Julia.





[julia-users] Efficient way to split an array/dataframe?

2015-03-25 Thread veryluckyxyz
Hi,
I have an array of 100 elements. I want to split the array randomly into 70 
elements (test set) and 30 elements (train set).

using StatsBase  # for sample

N = 100
A = rand(N);
n = convert(Int, ceil(N*0.7))
testindex = sample(1:size(A,1), n, replace=false)
testA = A[testindex];

How can I get the train set?

I could loop through testA and A to get trainA as below

trainA = Array(eltype(testA), N-n);
k=1
for elem in A
if !(elem in testA)
trainA[k] = elem
k=k+1
end
end

Is there a more efficient or elegant way to do this?

Thanks!
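
A more direct possibility, sketched assuming testindex is still in scope 
(setdiff and logical indexing are plain Base Julia):

trainA = A[setdiff(1:N, testindex)]   # complement of the sampled indices

# or, with a boolean mask:
mask = trues(N)
mask[testindex] = false
trainA = A[mask]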


[julia-users] Re: Result shape not specified when using @parallel

2015-03-25 Thread Archibald Pontier
Hi again,

I found a workaround by transforming the image into an array first (with 
separate(data(img))). However, I still don't understand why I can't 
parallelize directly using the image.

Any idea why?

Thanks in advance :)

On Wednesday, 25 March 2015 19:33:02 UTC+1, Archibald Pontier wrote:

 Hi everyone,

 I recently started using Julia for my projects and I'm currently quite 
 stuck on how to parallelize things. 

 I've got the two following functions:

 @everywhere pixel(p) = [p.r, p.g, p.b];

 which takes an RGB pixel (as defined in the Images module) and converts it 
 into a vector of its RGB components (in my case always Float64), and

 @everywhere function p_theta(pixel, mu, Sigma)
   Sigma = inv(Sigma);
   d = size(mu, 1);
   temp = dot(-(pixel - mu), Sigma * (pixel - mu)) / 2;
   result = (sqrt(det(Sigma)) * exp(temp) / sqrt((2*pi)^d))  # (2*pi)^d normalizes a d-dimensional Gaussian
   return result;
 end

 which calculates the probability density for a given pixel, given a 
 3-component mean vector mu and a 3x3 covariance matrix Sigma.

 Now, when I use them without parallelizing, there is no problem. However, 
 as soon as I use them in parallel, for example, given an image img

 s = size(img, 1) * size(img, 2);
 t_img = reshape(img, s)

 s_D = @parallel (vcat) for i in 1:s
   p = pixel(t_img[i]);
   d = p_theta(p, mu, Sigma);
   d
 end

 it crashes with the following error: 
 ERROR: result shape not specified in _reinterpret_cvarray at 
 ~/.julia/v0.3/Images/src/core.jl:140
 all the child processes terminate, and I end up with only 1 julia process 
 left.

 I tried various things, including pmap, without success.

 Any idea why that happens?

 Thanks in advance!
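
For what it's worth, a sketch of the array-first route behind the workaround 
above, so that only plain bits-type arrays are indexed and shipped to workers 
(pixel, p_theta, mu and Sigma are assumed to be defined as in the question; 
the exact Images.jl API may differ):

pix = [pixel(img[i, j]) for i in 1:size(img, 1), j in 1:size(img, 2)]
t = vec(pix)                          # Vector{Vector{Float64}}: a plain type
s_D = @parallel (vcat) for i in 1:length(t)
    p_theta(t[i], mu, Sigma)          # no Image wrapper crosses processes
end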



Re: [julia-users] Julia users Berlin

2015-03-25 Thread Felix Jung
Sorry guys. Would have loved to come but can't make it on that date. If we make 
this a regular thing I'd be happy to participate in an active manner.

Have fun,

Felix

 On 25 Mar 2015, at 09:37, David Higgins daithiohuig...@gmail.com wrote:
 
 Both times are fine with me, I just need to change the reservation if we go 
 with that.
 
 By my count, from the thread above the following people are probably coming:
 Viral Shah
 Simon Danisch
 Felix Schueler
 David Higgins
 Felix Jung? (wow, cool stuff :) )
 Fabian Gans?? (Jena)
 One other person contacted me off-list to say they'll come if some travel 
 arrangements work out.
 
 The first four are ok with an earlier meeting time. I imagine it's getting 
 late for Fabian to arrange a train from Jena, but 5pm would certainly work 
 better for him.
 
 So, any objections to changing from 7pm to 5pm? (ie. who's lurking out there 
 and hasn't replied yet but was hoping to come?)
 
 David.
 
 On Wednesday, 25 March 2015 07:26:16 UTC+1, Viral Shah wrote:
 How about we aim for 5pm in that case? I think I can make it by then. Does 
 that work for others?
 
 -viral
 
 On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:
 My train leaves at 9pm (at least the train station is close), so I'd 
 probably go there 1-2 hours early and see who drops by.
 Felix Schüler would come earlier as well ;)
 @David Higgins
 Do we need to call them to adjust this properly?
 
 On 24 Mar 2015 08:56, Fabian Gans fabia...@gmail.com wrote:
 I will not be there. 7 seems to be too late for me to get back to Jena the 
 same day. 
 
 Fabian


[julia-users] Re: building 0.3.8 - lots of 'fatal:' error messages

2015-03-25 Thread Tony Kelman
What platform is this? Are you building from a tarball or a git clone? What 
version of git do you have installed?


On Tuesday, March 24, 2015 at 11:16:01 AM UTC-7, Neal Becker wrote:

 after git clone, and 
 make OPENBLAS_TARGET_ARCH=NEHALEM 

 I see a lot of messages like: 

 fatal: Needed a single revision 
 fatal: This operation must be run in a work tree 
 fatal: ambiguous argument 'HEAD': unknown revision or path not in the 
 working tree. 
 Use '--' to separate paths from revisions, like this: 
 'git command [revision...] -- [file...]' 
 /bin/date: invalid date ‘@’ 
 fatal: ambiguous argument 'HEAD': unknown revision or path not in the 
 working tree. 
 Use '--' to separate paths from revisions, like this: 
 'git command [revision...] -- [file...]' 
 fatal: Not a valid object name HEAD 
 fatal: bad default revision 'HEAD' 
 CC ui/repl.do 
 LINK usr/bin/julia-debug 
 fatal: Needed a single revision 
 fatal: Needed a single revision 
 fatal: Needed a single revision 


 Although, the julia binary seems to work (at least, starts up and gives 
 command prompt). 

 -- 
 Those who fail to understand recursion are doomed to repeat it 



[julia-users] Julia v0.3.7

2015-03-25 Thread Tony Kelman
Hello all!  The latest bugfix release of the 0.3.x Julia line has been 
released. Binaries are available from the usual place 
http://julialang.org/downloads/, and as is typical with such things, 
please report all issues to either the issue tracker 
https://github.com/JuliaLang/julia/issues, or email this list.

As this is a bugfix release, there are not too many new big-item features 
to announce, but if you are interested in the bugs fixed since 0.3.6, see this 
commit log https://github.com/JuliaLang/julia/compare/v0.3.6...v0.3.7.

This is a recommended upgrade for anyone using any of the previous 0.3.x 
releases, and should act as a drop-in replacement for any of the 0.3.x 
line. We would like to hear about it if a previously working program breaks 
after this upgrade.

Happy Hacking,
-Tony



[julia-users] ArrayView no broadcasting?

2015-03-25 Thread Neal Becker
I can assign a single element of a view:

julia view(a,:,:)[1,1] = 2
2

julia a
10x10 Array{Int64,2}:
 2  5  5  5  5  5  5  5  5   5
 5  5  5  5  5  5  5  5  5   5
 5  5  5  5  5  5  5  5  5   5
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10
 1  2  3  4  5  6  7  8  9  10


But this doesn't work?

julia view(a,:,:)[1,:] = 2
ERROR: `setindex!` has no method matching setindex!
(::ContiguousView{Int64,2,Array{Int64,2}}, ::Int64, ::Int64, 
::UnitRange{Int64})

While this does?

julia a[1,:]=2
2

So ArrayView is not a 1st-class array?

-- 
Those who fail to understand recursion are doomed to repeat it
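
(A possible interim workaround, as an untested sketch: fill! only needs 
scalar setindex!, which ArrayViews does support.)

using ArrayViews
fill!(view(a, 1, :), 2)   # element-by-element, avoids the missing ranged setindex!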



Re: [julia-users] Re: zero-allocation reinterpretation of bytes

2015-03-25 Thread Jameson Nash
 Given the performance difference and the different behavior, I'm tempted
to just deprecate the two-argument form of pointer.

let's try to be aware of the fact that there is no performance
difference, before we throw out any wild claims about function calls being
problematic or slow:

julia g(x) = for i = 1:1e6 pointer(x,12) end
g (generic function with 1 method)

julia h(x) = for i = 1:1e6 pointer(x)+12*sizeof(eltype(x)) end
h (generic function with 1 method)

julia @time g(Int8[])
elapsed time: 0.451235329 seconds (144 bytes allocated)

julia @time h(Int8[])
elapsed time: 0.450592699 seconds (144 bytes allocated)

 There's a branch in eltype, which is probably causing this difference.

That branch is of the form `if true`, so it will get optimized away. (There
is still a performance gap relative to calling sizeof, but it stems from a
current limitation of the julia codegen/inference, not from anything major.)

 To more closely follow the principle of pointer arithmetic long ago
established by C

C needed to define pointer arithmetic to be equivalent to array access,
because it decided that `a[x]` was defined to be just syntactic sugar for
`*(a+x)`. I don't see how that is really a feature, since it throws away
perfectly good syntax and instead gives you something harder to use. So
instead, Julia defines math-like operations to generally work like math (so
x+1 gives you the pointer to the next byte), and array-like operations work
like array operations (so unsafe_load, pointer, getindex, pointer_to_array,
etc. all operate based on elements). FWIW though, Wikipedia seems to note
that most languages don't define pointer arithmetic at all:
http://en.wikipedia.org/wiki/Pointer_(computer_programming)

For your purposes, I believe you should be able to dispense with pointers
entirely by reading the data from a file (or IOBuffer) and using StrPack.jl
to deal with any specific alignment issues you may encounter.
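
To make that concrete, a minimal sketch of the pointer-free route using plain
Base IO (data and byte_number as in the earlier messages; StrPack.jl would
only enter the picture for packed or specially aligned records):

buf = IOBuffer(data)           # data::Vector{Uint8}
seek(buf, byte_number - 1)     # IOBuffer positions are zero-based
word = read(buf, Uint64)       # one machine word at that offset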

On Wed, Mar 25, 2015 at 9:07 AM Stefan Karpinski ste...@karpinski.org
wrote:

 Given the performance difference and the different behavior, I'm tempted
 to just deprecate the two-argument form of pointer.

 On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 I guess what I find most confusing is that there would be a difference,
 since adding 1 to a pointer only adds one byte, not one element size.

  p1 = pointer(zeros(UInt64));
 Ptr{UInt64} @0x00010b28c360
  p1 + 1
 Ptr{UInt64} @0x00010b28c361

 I would have expected the latter to end in 68. The two-argument pointer 
 function gets this “right”. 

  a=zeros(UInt64);
  pointer(a,1)
 Ptr{Int64} @0x00010b9c72e0
  pointer(a,2)
 Ptr{Int64} @0x00010b9c72e8

 I can see arguments multiple ways, but when I’m given a strongly typed
 pointer (Ptr{T}), I would expect it to participate in arithmetic in
 increments of sizeof(T).

 On March 25, 2015 at 6:36:37 AM, Stefan Karpinski (ste...@karpinski.org)
 wrote:

 That does seem to be the issue. It's tricky to fix since you can't
 evaluate sizeof(Ptr) unless the condition is true.

 On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org
 wrote:

 There's a branch in eltype, which is probably causing this difference.

 On Tue, Mar 24, 2015 at 7:00 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

  Yep, that’s done it. The only difference I can see between the code I 
 wrote before and this code is that previously I had 

 convert(Ptr{T}, pointer(raw, byte_number))

  whereas here we have

 convert(Ptr{T}, pointer(raw) + byte_number - 1)

 The former construction seems to emit a call to a Julia-intrinsic
 function, while the latter executes the more expected simple machine loads.
 Is there a subtle difference between the two calls to pointer?

 Thanks all for your help!

 On March 24, 2015 at 12:19:00 PM, Matt Bauman (mbau...@gmail.com)
 wrote:

  (The key is to ensure that the method gets specialized for different
 types with the parametric `::Type{T}` in the signature instead of
 `T::DataType`).

 On Tuesday, March 24, 2015 at 12:10:59 PM UTC-4, Stefan Karpinski
 wrote:

 This seems like it works fine to me (on both 0.3 and 0.4):

  immutable Test
 x::Float32
 y::Int64
 z::Int8
 end

  julia a = [Test(1,2,3)]
 1-element Array{Test,1}:
  Test(1.0f0,2,3)

 julia b = copy(reinterpret(UInt8, a))
 24-element Array{UInt8,1}:
  0x00
  0x00
  0x80
  0x3f
  0x03
  0x00
  0x00
  0x00
  0x02
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x00
  0x03
  0xe0
  0x82
  0x10
  0x01
  0x00
  0x00
  0x00

 julia prim_read{T}(::Type{T}, data::Array{Uint8,1}, offset::Int) =
 unsafe_load(convert(Ptr{T}, pointer(data) + offset))
 prim_read (generic function with 1 method)

 julia prim_read(Test, b, 0)
 Test(1.0f0,2,3)

 julia @code_native prim_read(Test, b, 0)
 .section __TEXT,__text,regular,pure_instructions
 Filename: none
 Source line: 1
 push RBP
 mov RBP, RSP
 Source line: 1
 mov RCX, QWORD PTR [RSI + 8]
 vmovss XMM0, DWORD PTR [RCX + RDX]
 mov RAX, QWORD PTR [RCX + RDX + 

Re: [julia-users] passing in a symbol to a macro and applying it as a function to expression

2015-03-25 Thread Isaiah Norton
You could remove the type assertion on `fn`, and then pull the symbol out
with `fn.args[1]` if it is an expression. I don't see much benefit to
setting things up this way, though.
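
A sketch of that suggestion (untested; it unwraps both the QuoteNode and
Expr(:quote, ...) spellings of a quoted name, and also escapes the user
expression so x and y resolve in the caller's scope):

macro apply_func(fn, ex::Expr)
    # :abs arrives quoted; unwrap it to a plain Symbol, or pass abs through
    f = isa(fn, QuoteNode) ? fn.value :
        isa(fn, Expr) && fn.head == :quote ? fn.args[1] : fn
    quote
        $(esc(f))($(esc(ex)))
    end
end

x = 10; y = 11
@apply_func :abs (x - y)   # = 1
@apply_func abs (x - y)    # = 1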

On Wed, Mar 25, 2015 at 8:58 PM, Phil Tomson philtom...@gmail.com wrote:

 I want to be able to pass in a symbol which represents a function name
 into a macro and then have that function applied to an expression,
 something like:

   @apply_func :abs (x - y)

 (where (x-y) could stand in for some expression or a single number)

 I did a bit of searching here and came up with the following (posted by
 Tim Holy last year, from this post:
 https://groups.google.com/forum/#!searchin/julia-users/macro$20symbol/julia-users/lrtnyACdrxQ/5wovJmrUs0MJ
 ):

   macro apply_func(fn::Symbol, ex::Expr)
  qex = Expr(:quote, ex)
  quote
$(esc(fn))($qex)
  end
   end

  I've got a Symbol which represents a function name and I'd like to apply 
  it to the expression, so I'd like to be able to do: 
x = 10
y = 11
   @apply_func :abs (x - y)
 ...And get: 1

 But first of all, a symbol doesn't work there:
 julia macroexpand(:(@apply_func :abs 1))
 :($(Expr(:error, TypeError(:anonymous,typeassert,Symbol,:(:abs)

  I think this is because macro arguments arrive unevaluated, so :abs is 
  passed in already quoted (as :(:abs)) rather than as a plain Symbol 

 Ok, so what if I go with:

  julia macroexpand(:(@apply_func abs 1+2))
   quote  # none, line 4:
   abs($(Expr(:copyast, :(:(1 + 2)
   end

  ...that seems problematic because we're passing an Expr to abs then: 

 julia @apply_func abs 1+2
 ERROR: `abs` has no method matching abs(::Expr)

  Ok, so now I'm realizing that the macro isn't going to do what I want it to, 
 so let's change it:

   macro apply_func(fn::Symbol, ex::Expr)
  quote
$(esc(fn))($ex)
  end
   end

 That works better:
 julia @apply_func abs 1+2
 3

 But It won't work if I pass in a symbol:
 julia macroexpand(:(@apply_func :abs 1+2))
 :($(Expr(:error, TypeError(:anonymous,typeassert,Symbol,:(:abs)

 How would I go about getting that case to work?

 Phil