[julia-users] Re: Algebraic Multigrid

2015-06-20 Thread Christoph Ortner
Very interesting; then why does the following code work?

using PyCall
@pyimport pyamg
@pyimport scipy.sparse as scipy_sparse
# generate 2D laplacian
N = 10
L1 = spdiagm((-ones(N-1), 2*ones(N), -ones(N-1)), (-1,0,1), N, N) * N^2
B = kron(speye(N), L1) + kron(L1, speye(N))
# load into Python
B_py_csr = scipy_sparse.csr_matrix(B)
# create multi-grid solver
ml = pyamg.ruge_stuben_solver(B_py_csr)
# solve with a constant RHS
b = ones(size(B,1))
x_py = ml[:solve](b, tol=1e-10)
# check result
println("|x - x_py| = ", norm(x_py - (B \ b), Inf))


After your comment I looked more closely. The line `B_py_csr = 
scipy_sparse.csr_matrix(B)` takes **a lot** of time when N is large. Is it 
possible that it converts B into a full matrix, then loads it into Python, 
then generates the sparse matrix from that?
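
One rough way to test that hypothesis (a sketch; if the conversion densifies B first, the dense copy alone should account for most of the time):

# full(B) materialises the dense matrix; compare its cost to the conversion
@time C = full(B);
@time B_py_csr2 = scipy_sparse.csr_matrix(B);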

Christoph


[julia-users] Any function to generate code in String from Expr?

2015-06-20 Thread Jiyin Yiyong
As described in http://blog.leahhanson.us/julia-introspects.html, Julia 
parses code into an AST.
But is there a function provided to generate the code back from an Expr?

I checked on Google and the docs, but found nothing so far.
http://docs.julialang.org/en/release-0.3/stdlib/base/
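
(For what it's worth, an Expr already prints in source-style syntax, so a round trip can be sketched with ordinary Base functions; exact formatting may vary by Julia version, and I'm not aware of a dedicated pretty-printer:)

ex = parse("1 + 2x")   # code -> Expr
s = string(ex)         # Expr -> code-like string, here "1 + 2x"
println(s)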



[julia-users] Re: Composite types with many fields

2015-06-20 Thread Tony Kelman
I see this was double-posted and got plenty of good responses 
at https://groups.google.com/forum/#!topic/julia-users/Z5WlHpRWLho; it might 
be best to delete this redundant thread.


On Saturday, June 20, 2015 at 9:32:14 PM UTC-7, Tony Kelman wrote:
>
> You can define an inner constructor that performs incomplete 
> initialization using new(); see 
> http://docs.julialang.org/en/release-0.3/manual/constructors/#incomplete-initialization
>
> Then you can do
> mesh = Mesh()
> mesh.coords = ...
>
> and so on.
>
>
> On Saturday, June 20, 2015 at 12:42:55 PM UTC-7, Stef Kynaston wrote:
>>
>> I feel I am missing a simpler approach to replicating the behaviour of a 
>> Matlab structure. I am doing FEM, and require structure-like behaviour for 
>> my model initialisation and mesh generation. Currently I am using composite 
>> type definitions, such as:
>>
>> type Mesh
>> coords   :: Array{Float64,2}  
>> elements   :: Array{Float64,2}  
>> end
>>
>> but in actuality I have many required fields (20 for Mesh, for example). 
>> It seems to me very impractical to initialise an instance of Mesh via
>>
>> mesh = Mesh(field1, field2, field3, ..., field20),
>>
>> as this would require a lookup of the type definition every time to 
>> ensure correct ordering. None of my fields have standard "default" values.
>>
>> Is there an easier way to do this that I have overlooked? In Matlab I can 
>> just define the fields as I compute their values, using "Mesh.coords = 
>> ...", and this would work here except that I need to initialise Mesh before 
>> the "." field referencing will work.
>>
>> First post, so apologies if I have failed to observe etiquette rules. 
>>
>

[julia-users] Re: Composite types with many fields

2015-06-20 Thread Tony Kelman
You can define an inner constructor that performs incomplete initialization 
using new(); see 
http://docs.julialang.org/en/release-0.3/manual/constructors/#incomplete-initialization

Then you can do
mesh = Mesh()
mesh.coords = ...

and so on.


On Saturday, June 20, 2015 at 12:42:55 PM UTC-7, Stef Kynaston wrote:
>
> I feel I am missing a simpler approach to replicating the behaviour of a 
> Matlab structure. I am doing FEM, and require structure-like behaviour for 
> my model initialisation and mesh generation. Currently I am using composite 
> type definitions, such as:
>
> type Mesh
> coords   :: Array{Float64,2}  
> elements   :: Array{Float64,2}  
> end
>
> but in actuality I have many required fields (20 for Mesh, for example). 
> It seems to me very impractical to initialise an instance of Mesh via
>
> mesh = Mesh(field1, field2, field3, ..., field20),
>
> as this would require a lookup of the type definition every time to ensure 
> correct ordering. None of my fields have standard "default" values.
>
> Is there an easier way to do this that I have overlooked? In Matlab I can 
> just define the fields as I compute their values, using "Mesh.coords = 
> ...", and this would work here except that I need to initialise Mesh before 
> the "." field referencing will work.
>
> First post, so apologies if I have failed to observe etiquette rules. 
>


[julia-users] Re: Algebraic Multigrid

2015-06-20 Thread Tony Kelman
I don't think that exists yet, but I could be wrong. Does PyAMG work on 
scipy.sparse matrices? It should be relatively easy to write some convenience 
wrappers around scipy.sparse's csc type; just remember to decrement all 
entries of colptr and rowval by 1.
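
A minimal sketch of such a wrapper (untested; it assumes the SparseMatrixCSC fields colptr, rowval, nzval, and scipy's (data, indices, indptr) constructor form):

using PyCall
@pyimport scipy.sparse as scipy_sparse

function to_scipy_csc(A::SparseMatrixCSC)
    # scipy is 0-based, Julia is 1-based: shift both index arrays down by one
    scipy_sparse.csc_matrix((A.nzval, A.rowval - 1, A.colptr - 1),
                            shape=(size(A, 1), size(A, 2)))
end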


On Saturday, June 20, 2015 at 4:54:19 AM UTC-7, Christoph Ortner wrote:
>
> Just to comment on my own question: PyAMG together with PyCall seems quite 
> straightforward to use (so far). 
>
> Mostly out of curiosity: I didn't find any code in PyCall that converts 
> sparse matrices to Python objects. Where is that?
>
> Christoph
>


Re: [julia-users] Re: Object attributes // dispatch on extra attributes?

2015-06-20 Thread Kevin Squire
When looking at the parameter values (as opposed to functions, as David
discussed), the general term for caching the results of a function for
particular argument values is memoization.  There is a package, Memoize.jl,
which allows this as well, although it's not a part of the core language.
Generally, memoization is most useful when the cost of looking up the
cached value is less than the cost of recalculating it, and when the same
result will be used frequently.  For many calculations in Julia, this
won't be true, although there probably are cases where it's useful.
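
For a hand-rolled flavour of the same idea (a sketch using a plain Dict cache, not the Memoize.jl API; is_square_cached is an invented name):

const issquare_cache = Dict{Int,Bool}()

function is_square_cached(n::Int)
    # get! computes and stores the value only on a cache miss
    get!(issquare_cache, n) do
        n >= 0 && isqrt(n)^2 == n
    end
end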

Cheers,
  Kevin

On Sat, Jun 20, 2015 at 5:20 PM, David Gold  wrote:

> For some inspiration on method caching, you might take a look at the
> `broadcast!` code:
> https://github.com/JuliaLang/julia/blob/master/base/broadcast.jl#L219.
>
> The body of the `broadcast!` function essentially asks, "Have I been asked
> to broadcast this function before? If so, have I been asked to broadcast it
> over this many arrays before? If so, grab the locally defined function that
> does the broadcasting! If not, generate that function, store it in the
> cache, and give it to me now so I can call it on the original arguments."
> This actually uses a doubly nested caching system, which you can see in the
> three dicts at work (`cache`, `cache_f`, `cache_f_na`).
>
> Note also that there is some kinda tricksy metaprogramming going on, for
> instance with the use of `@get!` (which is not exported and whose
> definition can be found here:
> https://github.com/JuliaLang/julia/blob/master/base/dict.jl#L670-L687).
>
> On Saturday, June 20, 2015 at 5:40:35 PM UTC-4, Laurent Bartholdi wrote:
>
>> Dear Julia-list:
>>
> I'm quite a heavy user of the computer algebra system GAP (
>> www.gap-system.org), and wondered how some of its features could be
>> mapped to Julia.
>>
>> GAP is an object-oriented language, in which objects can acquire
>> attributes over time; and these attributes can be used by the method
>> selection to determine the most efficient method to be applied. For a
>> contrived example, integers could have a method "is_square" which returns
>> true if the argument is a perfect square. When the method is_square is
>> called, the result is stored in a cache (attached to the object). On the
>> second call to the same method, the value is just looked up in the cache.
>> Furthermore, the method "sqrt" for integers may test whether a cached value
>> is known for the method is_square, and look it up, and in case the argument
>> is a perfect square it may switch to a better square root algorithm. Even
>> better, the method dispatcher can do this behind the scenes, by itself 
>> calling the square root algorithm for perfect squares in case the argument 
>> is already known to be a square.
>>
>> (This is not really a GAP example; in GAP, only composed types (records,
>> lists) are allowed to store attributes. In the internals, each object has a
>> type and a bitlist storing its known attributes.)
>>
>> I read through the Julia manual, but did not see any features resembling 
>> those (caching results of methods, and allowing finer dispatching). I have
>> also read http://arxiv.org/pdf/1209.5145.pdf which says that "the type
>> of a value cannot change over its lifetime".
>>
>> Is there still a way to do similar things in Julia?
>>
>> Thanks in advance, and apologies if I missed this in the manual,
>> Laurent
>>
>


[julia-users] Re: Object attributes // dispatch on extra attributes?

2015-06-20 Thread David Gold
For some inspiration on method caching, you might take a look at the 
`broadcast!` 
code: https://github.com/JuliaLang/julia/blob/master/base/broadcast.jl#L219.

The body of the `broadcast!` function essentially asks, "Have I been asked 
to broadcast this function before? If so, have I been asked to broadcast it 
over this many arrays before? If so, grab the locally defined function that 
does the broadcasting! If not, generate that function, store it in the 
cache, and give it to me now so I can call it on the original arguments." 
This actually uses a doubly nested caching system, which you can see in the 
three dicts at work (`cache`, `cache_f`, `cache_f_na`). 
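
A stripped-down sketch of that nested-cache pattern (illustrative names only, not the actual broadcast! internals):

const bcast_cache = Dict{Function,Dict{Int,Function}}()

# stand-in for the real code-generation step
make_broadcaster(f, nargs) = (args...) -> map(f, args...)

function cached_broadcaster(f::Function, nargs::Int)
    inner = get!(bcast_cache, f) do       # first level: keyed by function
        Dict{Int,Function}()
    end
    get!(inner, nargs) do                 # second level: keyed by arity
        make_broadcaster(f, nargs)
    end
end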

Note also that there is some kinda tricksy metaprogramming going on, for 
instance with the use of `@get!` (which is not exported and whose 
definition can be found 
here: https://github.com/JuliaLang/julia/blob/master/base/dict.jl#L670-L687).

On Saturday, June 20, 2015 at 5:40:35 PM UTC-4, Laurent Bartholdi wrote:
>
> Dear Julia-list:
>
> I'm quite a heavy user of the computer algebra system GAP (
> www.gap-system.org), and wondered how some of its features could be 
> mapped to Julia.
>
> GAP is an object-oriented language, in which objects can acquire 
> attributes over time; and these attributes can be used by the method 
> selection to determine the most efficient method to be applied. For a 
> contrived example, integers could have a method "is_square" which returns 
> true if the argument is a perfect square. When the method is_square is 
> called, the result is stored in a cache (attached to the object). On the 
> second call to the same method, the value is just looked up in the cache. 
> Furthermore, the method "sqrt" for integers may test whether a cached value 
> is known for the method is_square, and look it up, and in case the argument 
> is a perfect square it may switch to a better square root algorithm. Even 
> better, the method dispatcher can do this behind the scenes, by itself 
> calling the square root algorithm for perfect squares in case the argument 
> is already known to be a square.
>
> (This is not really a GAP example; in GAP, only composed types (records, 
> lists) are allowed to store attributes. In the internals, each object has a 
> type and a bitlist storing its known attributes.)
>
> I read through the Julia manual, but did not see any features resembling 
> those (caching results of methods, and allowing finer dispatching). I have 
> also read http://arxiv.org/pdf/1209.5145.pdf which says that "the type of 
> a value cannot change over its lifetime".
>
> Is there still a way to do similar things in Julia?
>
> Thanks in advance, and apologies if I missed this in the manual,
> Laurent
>


[julia-users] Re: Using composite types with many fields

2015-06-20 Thread Scott Jones
Why not just use keyword arguments on an inner constructor?
julia> type MyType
           a::Float64
           b::Int64
           MyType(; a=0.0, b=0) = new(a, b)
       end


julia> x = MyType(b=5, a=2.3)
MyType(2.3,5)


On Saturday, June 20, 2015 at 5:15:13 PM UTC-4, David P. Sanders wrote:
>
>
> Christoph's solution is neat.
>
> Another possibility is to start with an empty object, by defining an 
> inner constructor that does not initialise any of the fields, and then 
> fill it up, as you were (IIUC) looking for.
> As far as I am aware, there is no problem with doing this.
>
> type MyType
> a::Float64
> b::Int64
> c::UTF8String
> d::Vector{Int}
> 
> MyType() = new()
> end
>
> t = MyType()
>
> t.a = 17.
> t.b = -3
> t.c = "Hello"
> t.d = [3, 4]
>
> Note that an error will occur if you try to read any field that has not 
> yet been defined.
>
> David.
>
>
On Saturday, June 20, 2015 at 14:43:03 (UTC-5), Stef Kynaston wrote:
>>
>> I feel I am missing a simpler approach to replicating the behaviour of a 
> Matlab structure. I am doing FEM, and require structure-like behaviour for 
>> my model initialisation and mesh generation. Currently I am using composite 
>> type definitions, such as:
>>
>> type Mesh
>> coords   :: Array{Float64,2}  
>> elements   :: Array{Float64,2}  
>> end
>>
>> but in actuality I have many required fields (20 for Mesh, for example). 
>> It seems to me very impractical to initialise an instance of Mesh via
>>
>> mesh = Mesh(field1, field2, field3, ..., field20),
>>
>> as this would require a lookup of the type definition every time to 
>> ensure correct ordering. None of my fields have standard "default" values.
>>
>> Is there an easier way to do this that I have overlooked? In Matlab I can 
>> just define the fields as I compute their values, using "Mesh.coords = 
>> ...", and this would work here except that I need to initialise Mesh before 
>> the "." field referencing will work.
>>
>> First post, so apologies if I have failed to observe etiquette rules. 
>>
>

[julia-users] Re: Object attributes // dispatch on extra attributes?

2015-06-20 Thread Scott Jones
I haven't seen any prepackaged way to do something like that, but I'm 
pretty sure it wouldn't be too hard to implement in Julia... a 
"cached-sqrt" function could be written, with methods that take different 
types, so that each type can have its own cache.
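
A minimal sketch of that shape (illustrative names; one Dict cache per argument type):

const sqrt_caches = Dict{DataType,Dict{Any,Any}}()

function cached_sqrt(x)
    cache = get!(sqrt_caches, typeof(x)) do
        Dict{Any,Any}()   # each type gets its own cache on first use
    end
    get!(cache, x) do
        sqrt(x)
    end
end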

On Saturday, June 20, 2015 at 5:40:35 PM UTC-4, Laurent Bartholdi wrote:
>
> Dear Julia-list:
>
> I'm quite a heavy user of the computer algebra system GAP (
> www.gap-system.org), and wondered how some of its features could be 
> mapped to Julia.
>
> GAP is an object-oriented language, in which objects can acquire 
> attributes over time; and these attributes can be used by the method 
> selection to determine the most efficient method to be applied. For a 
> contrived example, integers could have a method "is_square" which returns 
> true if the argument is a perfect square. When the method is_square is 
> called, the result is stored in a cache (attached to the object). On the 
> second call to the same method, the value is just looked up in the cache. 
> Furthermore, the method "sqrt" for integers may test whether a cached value 
> is known for the method is_square, and look it up, and in case the argument 
> is a perfect square it may switch to a better square root algorithm. Even 
> better, the method dispatcher can do this behind the scenes, by itself 
> calling the square root algorithm for perfect squares in case the argument 
> is already known to be a square.
>
> (This is not really a GAP example; in GAP, only composed types (records, 
> lists) are allowed to store attributes. In the internals, each object has a 
> type and a bitlist storing its known attributes.)
>
> I read through the Julia manual, but did not see any features resembling 
> those (caching results of methods, and allowing finer dispatching). I have 
> also read http://arxiv.org/pdf/1209.5145.pdf which says that "the type of 
> a value cannot change over its lifetime".
>
> Is there still a way to do similar things in Julia?
>
> Thanks in advance, and apologies if I missed this in the manual,
> Laurent
>


Re: [julia-users] Performances: vectorised operator, for loop and comprehension

2015-06-20 Thread Scott Jones
I ran your functions, using `@time`, on 0.4:

> julia> @time tf1(1:100) ;
>7.140 milliseconds (7 allocations: 7813 KB)
> julia> @time tf2(1:100) ;
>9.419 milliseconds (7 allocations: 7813 KB, 19.99% gc time)
> julia> @time tf3(1:100) ;
>  927.697 microseconds (7 allocations: 7813 KB)
> julia> @time tf4(1:100) ;
>   46.060 milliseconds (1999 k allocations: 39054 KB, 6.87% gc time)

As you can see, the map version makes ~2 allocations per loop iteration... 
probably at least part of the problem...

On Saturday, June 20, 2015 at 5:25:54 PM UTC-4, Xiubo Zhang wrote:
>
> Same tests on latest build (0.4.0-dev+5468):
>
> tf1 (vectorized):0.00457 seconds
> tf2 (loop):  0.00805 seconds
> tf3 (comprehension): 0.00197 seconds
> tf4 (map):   0.04186 seconds
>
> I have to say the progress in the performance department is exciting -- 
> improvements on all fronts! Yet, relatively (I guess this is 
> almost nitpicking), comprehension is still faster than the others.
>
> I did suspect GC to be the cause. But I feel I lack the knowledge to 
> properly analyse its behaviour with the @time(d) macros. I did notice, 
> however, that the standard deviations of the timings are vastly different:
>
> tf1 (vectorized):0.976434 seconds
> tf2 (loop):  0.0009162303 seconds
> tf3 (comprehension): 0.0001677752 seconds
> tf4 (map):   0.0020671911 seconds
>
> Maybe the variations are related to GC behaviours?
>
>
> On Saturday, 20 June 2015 20:24:47 UTC+1, Scott Jones wrote:
>>
>> Also, you might want to retry your tests on 0.4 (if you don't mind living 
>> on the bleeding edge!), there've been a number of changes there that would 
>> affect your results.
>>
>

[julia-users] Object attributes // dispatch on extra attributes?

2015-06-20 Thread Laurent Bartholdi
Dear Julia-list:

I'm quite a heavy user of the computer algebra system GAP 
(www.gap-system.org), and wondered how some of its features could be mapped 
to Julia.

GAP is an object-oriented language, in which objects can acquire attributes 
over time; and these attributes can be used by the method selection to 
determine the most efficient method to be applied. For a contrived example, 
integers could have a method "is_square" which returns true if the argument 
is a perfect square. When the method is_square is called, the result is 
stored in a cache (attached to the object). On the second call to the same 
method, the value is just looked up in the cache. Furthermore, the method 
"sqrt" for integers may test whether a cached value is known for the method 
is_square, and look it up, and in case the argument is a perfect square it 
may switch to a better square root algorithm. Even better, the method 
dispatcher can do this behind the scenes, by itself calling the square root 
algorithm for perfect squares in case the argument is already known to be a 
square.

(This is not really a GAP example; in GAP, only composed types (records, 
lists) are allowed to store attributes. In the internals, each object has a 
type and a bitlist storing its known attributes.)

I read through the Julia manual, but did not see any features resembling 
those (caching results of methods, and allowing finer dispatching). I have 
also read http://arxiv.org/pdf/1209.5145.pdf which says that "the type of a 
value cannot change over its lifetime".

Is there still a way to do similar things in Julia?

Thanks in advance, and apologies if I missed this in the manual,
Laurent


Re: [julia-users] Performances: vectorised operator, for loop and comprehension

2015-06-20 Thread Xiubo Zhang
Same tests on latest build (0.4.0-dev+5468):

tf1 (vectorized):0.00457 seconds
tf2 (loop):  0.00805 seconds
tf3 (comprehension): 0.00197 seconds
tf4 (map):   0.04186 seconds

I have to say the progress in the performance department is exciting -- 
improvements on all fronts! Yet, relatively (I guess this is 
almost nitpicking), comprehension is still faster than the others.

I did suspect GC to be the cause. But I feel I lack the knowledge to 
properly analyse its behaviour with the @time(d) macros. I did notice, 
however, that the standard deviations of the timings are vastly different:

tf1 (vectorized):0.976434 seconds
tf2 (loop):  0.0009162303 seconds
tf3 (comprehension): 0.0001677752 seconds
tf4 (map):   0.0020671911 seconds

Maybe the variations are related to GC behaviours?


On Saturday, 20 June 2015 20:24:47 UTC+1, Scott Jones wrote:
>
> Also, you might want to retry your tests on 0.4 (if you don't mind living 
> on the bleeding edge!), there've been a number of changes there that would 
> affect your results.
>

[julia-users] Re: Using composite types with many fields

2015-06-20 Thread Simon Danisch
type Mesh
var1
var2
...
Mesh() = new()
end

Will do the trick.
This will overwrite the default constructor, though.
If you want to have it back, the easiest way to do it would be this:

type Mesh
var1
var2
...
Mesh(varargs...) = new(varargs...)
end

Now you can write:
m = Mesh()
m.var1 = ...
x = m.var2 # ERROR: undefined reference 

Best,
Simon



On Saturday, June 20, 2015 at 21:43:03 UTC+2, Stef Kynaston wrote:
>
> I feel I am missing a simpler approach to replicating the behaviour of a 
> Matlab structure. I am doing FEM, and require structure-like behaviour for 
> my model initialisation and mesh generation. Currently I am using composite 
> type definitions, such as:
>
> type Mesh
> coords   :: Array{Float64,2}  
> elements   :: Array{Float64,2}  
> end
>
> but in actuality I have many required fields (20 for Mesh, for example). 
> It seems to me very impractical to initialise an instance of Mesh via
>
> mesh = Mesh(field1, field2, field3, ..., field20),
>
> as this would require a lookup of the type definition every time to ensure 
> correct ordering. None of my fields have standard "default" values.
>
> Is there an easier way to do this that I have overlooked? In Matlab I can 
> just define the fields as I compute their values, using "Mesh.coords = 
> ...", and this would work here except that I need to initialise Mesh before 
> the "." field referencing will work.
>
> First post, so apologies if I have failed to observe etiquette rules. 
>


[julia-users] Re: Using composite types with many fields

2015-06-20 Thread David P. Sanders

Christoph's solution is neat.

Another possibility is to start with an empty object, by defining an 
inner constructor that does not initialise any of the fields, and then 
fill it up, as you were (IIUC) looking for.
As far as I am aware, there is no problem with doing this.

type MyType
a::Float64
b::Int64
c::UTF8String
d::Vector{Int}

MyType() = new()
end

t = MyType()

t.a = 17.
t.b = -3
t.c = "Hello"
t.d = [3, 4]

Note that an error will occur if you try to read any field that has not yet 
been defined.

David.


On Saturday, June 20, 2015 at 14:43:03 (UTC-5), Stef Kynaston wrote:
>
> I feel I am missing a simpler approach to replicating the behaviour of a 
> Matlab structure. I am doing FEM, and require structure-like behaviour for 
> my model initialisation and mesh generation. Currently I am using composite 
> type definitions, such as:
>
> type Mesh
> coords   :: Array{Float64,2}  
> elements   :: Array{Float64,2}  
> end
>
> but in actuality I have many required fields (20 for Mesh, for example). 
> It seems to me very impractical to initialise an instance of Mesh via
>
> mesh = Mesh(field1, field2, field3, ..., field20),
>
> as this would require a lookup of the type definition every time to ensure 
> correct ordering. None of my fields have standard "default" values.
>
> Is there an easier way to do this that I have overlooked? In Matlab I can 
> just define the fields as I compute their values, using "Mesh.coords = 
> ...", and this would work here except that I need to initialise Mesh before 
> the "." field referencing will work.
>
> First post, so apologies if I have failed to observe etiquette rules. 
>


Re: [julia-users] Re: Plans for "Linear Algebra"

2015-06-20 Thread Christoph Ortner
The tendency seems to be to provide functionality in packages (which are 
easy to install) rather than building it in.

For iterative solvers, look at IterativeSolvers.jl. For AMG, I have just 
started to use PyAMG (via PyCall), which was very easy to set up and seems 
to work fine.

Christoph


[julia-users] Re: Using composite types with many fields

2015-06-20 Thread Christoph Ortner
A little hack I use, when I have a few "important" fields that are always 
present and need fast access but many others as well, is this:

import Base: getindex, setindex!

type MyType
   field1
   field2
   misc::Dict
end

MyType(f1, f2) = MyType(f1, f2, Dict())   # supply the Dict automatically

getindex(m::MyType, i::Symbol) = m.misc[i]
setindex!(m::MyType, val, i::Symbol) = (m.misc[i] = val)

Then I can set and read "fields" via

   m = MyType(f1, f2)
   m[:blib] = somedata

I've found that this works quite well in some situations because you can 
dispatch on MyType, but you have all the flexibility of a Dict.

Christoph


[julia-users] Re: Using composite types with many fields

2015-06-20 Thread John Myles White
It sounds like you might be better off working with Dicts instead of types.
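
A minimal sketch of the Dict-based approach (illustrative field names):

mesh = Dict{Symbol,Any}()
mesh[:coords]   = rand(4, 2)   # add entries as their values are computed
mesh[:elements] = rand(4, 2)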

 -- John

On Saturday, June 20, 2015 at 12:43:03 PM UTC-7, Stef Kynaston wrote:
>
> I feel I am missing a simpler approach to replicating the behaviour of a 
> Matlab structure. I am doing FEM, and require structure-like behaviour for 
> my model initialisation and mesh generation. Currently I am using composite 
> type definitions, such as:
>
> type Mesh
> coords   :: Array{Float64,2}  
> elements   :: Array{Float64,2}  
> end
>
> but in actuality I have many required fields (20 for Mesh, for example). 
> It seems to me very impractical to initialise an instance of Mesh via
>
> mesh = Mesh(field1, field2, field3, ..., field20),
>
> as this would require a lookup of the type definition every time to ensure 
> correct ordering. None of my fields have standard "default" values.
>
> Is there an easier way to do this that I have overlooked? In Matlab I can 
> just define the fields as I compute their values, using "Mesh.coords = 
> ...", and this would work here except that I need to initialise Mesh before 
> the "." field referencing will work.
>
> First post, so apologies if I have failed to observe etiquette rules. 
>


[julia-users] Composite types with many fields

2015-06-20 Thread Stef Kynaston
I feel I am missing a simpler approach to replicating the behaviour of a 
Matlab structure. I am doing FEM, and require structure-like behaviour for 
my model initialisation and mesh generation. Currently I am using composite 
type definitions, such as:

type Mesh
coords   :: Array{Float64,2}  
elements   :: Array{Float64,2}  
end

but in actuality I have many required fields (20 for Mesh, for example). It 
seems to me very impractical to initialise an instance of Mesh via

mesh = Mesh(field1, field2, field3, ..., field20),

as this would require a lookup of the type definition every time to ensure 
correct ordering. None of my fields have standard "default" values.

Is there an easier way to do this that I have overlooked? In Matlab I can 
just define the fields as I compute their values, using "Mesh.coords = 
...", and this would work here except that I need to initialise Mesh before 
the "." field referencing will work.

First post, so apologies if I have failed to observe etiquette rules. 


[julia-users] Using composite types with many fields

2015-06-20 Thread Stef Kynaston
I feel I am missing a simpler approach to replicating the behaviour of a 
Matlab structure. I am doing FEM, and require structure-like behaviour for 
my model initialisation and mesh generation. Currently I am using composite 
type definitions, such as:

type Mesh
coords   :: Array{Float64,2}  
elements   :: Array{Float64,2}  
end

but in actuality I have many required fields (20 for Mesh, for example). It 
seems to me very impractical to initialise an instance of Mesh via

mesh = Mesh(field1, field2, field3, ..., field20),

as this would require a lookup of the type definition every time to ensure 
correct ordering. None of my fields have standard "default" values.

Is there an easier way to do this that I have overlooked? In Matlab I can 
just define the fields as I compute their values, using "Mesh.coords = 
...", and this would work here except that I need to initialise Mesh before 
the "." field referencing will work.

First post, so apologies if I have failed to observe etiquette rules. 


Re: [julia-users] Performances: vectorised operator, for loop and comprehension

2015-06-20 Thread Scott Jones
Also, you might want to retry your tests on 0.4 (if you don't mind living 
on the bleeding edge!), there've been a number of changes there that would 
affect your results.


Re: [julia-users] Performances: vectorised operator, for loop and comprehension

2015-06-20 Thread Scott Jones
Because Julia has a GC, I've found that memory allocations are critical 
to performance (which is why I added some more information to the @time 
macro, and added a @timev (verbose) macro as well).
It doesn't hurt to use the @time macros (there's even one that returns the 
information as a tuple, so you can store it and do whatever you want with 
it), and you will be able to track down performance issues much more 
quickly.
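
For example (a quick sketch; @timed is the tuple-returning macro referred to above):

x = rand(10^6)
@time sum(x)            # prints elapsed time plus allocation information
stats = @timed sum(x)   # returns (value, elapsed, bytes, gctime, ...) as a tuple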

On Friday, June 19, 2015 at 10:45:56 AM UTC-4, Xiubo Zhang wrote:
>
> Thanks for the reply.
>
> I am aware of the @time macro. Just that I thought tic() and toc() are 
> adequate for this case as I am not concerned with the memory side of things 
> at the moment. I have also read the performance section in the manual, 
> which led me to doing the benchmarks with functions rather than writing 
> expressions in the REPL.
>
> Loops are faster than vectorized code in Julia, and a comprehension is 
>> essentially a loop.
>>
>
> This is exactly what I was thinking before asking this question. I learnt 
> that de-vectorized loops should be faster than the vectorized version, but 
> wouldn't the developers of the language simply implement the ".^" in the 
> form of plain for loops to benefit from the better performance? Also it 
> does not explain why the comprehension version is 3 to 4 times faster than 
> the for loop version.
>
> What did I miss?
>
> On Friday, 19 June 2015 15:29:15 UTC+1, Mauro wrote:
>>
>> Loops are faster than vectorized code in Julia, and a comprehension is 
>> essentially a loop.  Also checkout the convenient @time macro, it also 
>> reports memory allocation.  Last, there is a performance section in the 
>> manual where a lot of this is explained.  But do report back if you got 
>> more questions.  Mauro 
>>
>>
>> On Fri, 2015-06-19 at 15:41, Xiubo Zhang  wrote: 
>> > I am rather new to Julia, so please do remind me if I missed anything 
>> > important. 
>> > 
>> > I was trying to write a function which would operate on the elements in an 
>> > array, and return an array. For the sake of simplicity, let's say 
>> > calculating the squares of an array of real numbers. I designed four 
>> > functions, each implementing the task using a different style: 
>> > 
>> > function tf1{T<:Real}(x::AbstractArray{T}) return r = x .^ 2 end  # vectorised power operator 
>> > function tf2{T<:Real}(x::AbstractArray{T}) r = Array(T, length(x)); for i in 1:length(x) r[i] = x[i] ^ 2 end; return r end  # plain for loop 
>> > function tf3{T<:Real}(x::AbstractArray{T}) return [i ^ 2 for i in x] end  # array comprehension 
>> > function tf4{T<:Real}(x::AbstractArray{T}) return map(x -> x ^ 2, x) end  # using the "map" function 
>> > 
>> > And I timed the operations with tic() and toc(). The results vary from 
>> > run to run, but the following is a typical set of results after warming up: 
>> > 
>> > tic(); tf1( 1:100 ); toc() 
>> > elapsed time: 0.011582169 seconds 
>> > 
>> > tic(); tf2( 1:100 ); toc() 
>> > elapsed time: 0.016566094 seconds 
>> > 
>> > tic(); tf3( 1:100 ); toc() 
>> > elapsed time: 0.004038817 seconds 
>> > 
>> > tic(); tf4( 1:100 ); toc() 
>> > elapsed time: 0.065989988 seconds 
>> > 
>> > I understand that the map function should run slower than the rest, but why 
>> > is the comprehension version so much faster than the vectorised "^" 
>> > operator? Does this mean array comprehensions should be used in favour of 
>> > all other styles whenever possible? 
>> > 
>> > P.S. version of Julia: 
>> >_ 
>> >_   _ _(_)_ |  A fresh approach to technical computing 
>> >   (_) | (_) (_)|  Documentation: http://docs.julialang.org 
>> >_ _   _| |_  __ _   |  Type "help()" for help. 
>> >   | | | | | | |/ _` |  | 
>> >   | | |_| | | | (_| |  |  Version 0.3.9 (2015-05-30 11:24 UTC) 
>> >  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release 
>> > |__/   |  x86_64-w64-mingw32 
>> > 
>> > This is on a Windows 7 machine. 
>>
>>

[julia-users] Code for approximate Bayesian computing (ABC) and related estimators

2015-06-20 Thread michael . creel
Code for ABC estimation is at https://github.com/mcreel/ABCAuction. The ABC 
code is in the directory AuctionSBIL. The file AIS.jl is a set of functions 
for generic ABC estimation. The file Auction.jl contains functions for a 
specific model, a structural auction model. The file AuctionMC.jl performs 
a Monte Carlo study of the ABC estimator.


Re: [julia-users] Performances: vectorised operator, for loop and comprehension

2015-06-20 Thread Xiubo Zhang
Thanks Josh for your feedback. I tried to replicate your experiment, but 
surprisingly here is what I have got:

tf1 (vectorized):0.00967 seconds
tf2 (loop):  0.01178 seconds
tf3 (comprehension): 0.00655 seconds
tf4 (map):   0.07086 seconds

I can understand the overhead introduced by the extra shaping calls in the 
vectorized operator's implementation, but the time for the plain loop 
version is just weird. Maybe it's a machine-specific thing? Regardless, 
comprehension is always faster than the others -- so I suppose this should 
warrant the use of comprehension in favour of other styles whenever 
possible?

Though I personally think subtle performance "best practices" like this may 
be a bad thing for language users, especially package authors who work in 
other domains, because they will have to invest a lot more time learning 
the language in order to write optimised Julia code.

On Friday, 19 June 2015 18:24:18 UTC+1, Josh Langsfeld wrote:
>
> My results don't show a significant performance advantage of the 
> comprehension. Averaging over 1000 runs of a million-element array, I got:
>
> f1 (vectorized): 10.32 ms
> f2 (loop): 2.07 ms
> f3 (comprehension): 2.05 ms
> f4 (map): 38.09 ms
>
> Also, as you can see here (
> https://github.com/JuliaLang/julia/blob/master/base/arraymath.jl#L57), 
> the .^ operator is implemented with a comprehension so I don't see why it 
> is measurably slower. It does include a call to reshape, but I believe that 
> it shares the data so that should be a negligible extra cost.
>
> On Friday, June 19, 2015 at 10:45:56 AM UTC-4, Xiubo Zhang wrote:
>>
>> Thanks for the reply.
>>
>> I am aware of the @time macro. Just that I thought tic() and toc() are 
>> adequate for this case as I am not concerned with the memory side of things 
>> at the moment. I have also read the performance section in the manual, 
>> which led me to doing the benchmarks with functions rather than writing 
>> expressions in the REPL.
>>
>> Loops are faster than vectorized code in Julia, and a comprehension is 
>>> essentially a loop.
>>>
>>
>> This is exactly what I was thinking before asking this question. I learnt 
>> that de-vectorized loops should be faster than the vectorized version, but 
>> wouldn't the developers of the language simply implement the ".^" in the 
>> form of plain for loops to benefit from the better performance? Also it 
>> does not explain why the comprehension version is 3 to 4 times faster than 
>> the for loop version.
>>
>> What did I miss?
>>
>> On Friday, 19 June 2015 15:29:15 UTC+1, Mauro wrote:
>>>
>>> Loops are faster than vectorized code in Julia, and a comprehension is 
>>> essentially a loop.  Also checkout the convenient @time macro, it also 
>>> reports memory allocation.  Last, there is a performance section in the 
>>> manual where a lot of this is explained.  But do report back if you got 
>>> more questions.  Mauro 
>>>
>>>
>>> On Fri, 2015-06-19 at 15:41, Xiubo Zhang  wrote: 
>>> > I am rather new to Julia, so please do remind me if I missed anything 
>>> > important. 
>>> > 
>>> > I was trying to write a function which would operate on the elements in an 
>>> > array, and return an array. For the sake of simplicity, let's say 
>>> > calculating the squares of an array of real numbers. I designed four 
>>> > functions, each implementing the task using a different style: 
>>> > 
>>> > function tf1{T<:Real}(x::AbstractArray{T}) return r = x .^ 2 end  # vectorised power operator 
>>> > function tf2{T<:Real}(x::AbstractArray{T}) r = Array(T, length(x)); for i in 1:length(x) r[i] = x[i] ^ 2 end; return r end  # plain for loop 
>>> > function tf3{T<:Real}(x::AbstractArray{T}) return [i ^ 2 for i in x] end  # array comprehension 
>>> > function tf4{T<:Real}(x::AbstractArray{T}) return map(x -> x ^ 2, x) end  # using the "map" function 
>>> > 
>>> > And I timed the operations with tic() and toc(). The results vary from 
>>> > run to run, but the following is a typical set of results after warming up: 
>>> > 
>>> > tic(); tf1( 1:100 ); toc() 
>>> > elapsed time: 0.011582169 seconds 
>>> > 
>>> > tic(); tf2( 1:100 ); toc() 
>>> > elapsed time: 0.016566094 seconds 
>>> > 
>>> > tic(); tf3( 1:100 ); toc() 
>>> > elapsed time: 0.004038817 seconds 
>>> > 
>>> > tic(); tf4( 1:100 ); toc() 
>>> > elapsed time: 0.065989988 seconds 
>>> > 
>>> > I understand that the map function should run slower than the rest, but why 
>>> > is the comprehension version so much faster than the vectorised "^" 
>>> > operator? Does this mean array comprehensions should be used in favour of 
>>> > all other styles whenever possible? 
>>> > 
>>> > P.S. version of Julia: 
>>> >_ 
>>> >_   _ _(_)_ |  A fresh approach to technical computing 
>>> >   (_) | (_) (_)|  Documentation: http://docs.julialang.org 
>>> >_ _   _| |_  __ _   |  Type "help()" for help. 
>>>

Re: [julia-users] Re: Plans for "Linear Algebra"

2015-06-20 Thread cuneytsert
I am working in the field of Computational Fluid Dynamics. I was thinking 
about sparse iterative solvers and built-in support for modern accelerators 
such as Algebraic Multigrid.
Cuneyt




[julia-users] Re: Algebraic Multigrid

2015-06-20 Thread Christoph Ortner
Just to comment on my own question: PyAMG together with PyCall seems quite 
straightforward to use (so far). 

Mostly out of curiosity: I didn't find any code in PyCall that converts 
sparse matrices to Python objects. Where is that?

Christoph


Re: [julia-users] Re: Plans for "Linear Algebra"

2015-06-20 Thread Christoph Ortner
One example is built-in iterative solvers, at least the basic ones such as 
PCG and GMRES, but I think people made strong arguments against having these 
in Base.  At some point, though, there was a discussion about outsourcing 
some of Base into a standard library, and in that case it could make sense 
to include those as well (along with many other things). 
Christoph