Re: [julia-users] merged arrays?

2015-04-08 Thread Kevin Squire
AFAIK, there's nothing really like that right now, but what do you plan to
do with the data?  Most linear algebra code, for example, calls out to
BLAS, which requires data to be contiguous (or at least strided) in
memory.  Other code may or may not have this same restriction.

It should be relatively easy to write a `VCat` type which wraps a variable-sized
list of arrays and at least lets you index them as if they were one
array.  Making it efficient would probably be more challenging, and making
it an actual AbstractArray might be tedious, but should be doable.
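
For concreteness, a minimal sketch of such a wrapper might look like this (the name `VCat` and all details are illustrative, not an existing package; it uses then-current 0.3-era syntax):

```julia
# Illustrative sketch only: a lazy vertical concatenation of vectors.
immutable VCat{T} <: AbstractVector{T}
    arrays::Vector{Vector{T}}
end

Base.length(v::VCat) = sum([length(a) for a in v.arrays])
Base.size(v::VCat) = (length(v),)

function Base.getindex(v::VCat, i::Integer)
    # Walk the list, subtracting each array's length until i lands inside one.
    for a in v.arrays
        i <= length(a) && return a[i]
        i -= length(a)
    end
    throw(BoundsError())
end
```

With that, `VCat{Int}([[1,2],[3,4,5]])` indexes like `[1,2,3,4,5]` without copying; each lookup walks the array list, which is exactly the efficiency caveat above (a binary search over cumulative lengths would be the obvious next step).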

Cheers!
   Kevin

On Wed, Apr 8, 2015 at 8:56 PM, Sebastian Good <
sebast...@palladiumconsulting.com> wrote:

> I can use sub to pretend a larger array is a smaller, or differently
> shaped one. Is there functionality to allow me to treat several smaller
> arrays as a larger one without copying them? In effect, mimicking hcat
> and friends, but without copying data.
>


[julia-users] Re: CALL FOR PARTICIPATION: JuliaCon 2015, June 24-28, MIT

2015-04-08 Thread Jiahao Chen
Proposal submission deadline extended by popular request to *Sunday, April 
12 at 11:59pm EDT*.


[julia-users] merged arrays?

2015-04-08 Thread Sebastian Good
I can use sub to pretend a larger array is a smaller or differently shaped 
one. Is there functionality to allow me to treat several smaller arrays as 
a larger one without copying them? In effect, mimicking hcat and friends, 
but without copying data.


Re: [julia-users] Re: Default behavior from omitted catch block when a finally block is added.

2015-04-08 Thread Harry B
I have put in a pull request for a documentation change related to this 
https://github.com/JuliaLang/julia/pull/10776
Please note it contains a few more edits that seemed appropriate.

Please approve or let me know what needs to change.
I would be happy to amend the documentation again if you decide to
require a catch clause, based on the suggestions here.

Thanks
--
Harry

On Wednesday, April 8, 2015 at 3:32:18 AM UTC-7, Mauro wrote:
>
> > If we require a 'catch' block for every "try", it would be cleaner. 
> > Otherwise I suggest we add a note in the documentation to state the 
> > caveats. Particularly for those who come from other backgrounds and get 
> too 
> > excited about "try end" terse syntax. I would be happy to make some 
> edits 
> > in the documentation if you approve. 
>
> I read the docs after seeing your first post and was not any wiser.  So 
> yes, some better docs would be good, thanks! 
>


Re: [julia-users] Usage of floating point template arguments with many different values

2015-04-08 Thread Sheehan Olver
It already is Interval{Float64}.  The issue is choosing defaults for a and b 
which aren't inferrable from the type.

I think I'm convinced that just using NaN should work, with other domains 
having a similar representation for parameters not set.

Sent from my iPhone

> On 9 Apr 2015, at 6:41 am, Steven G. Johnson  wrote:
> 
> 
> 
>> On Tuesday, April 7, 2015 at 5:53:09 PM UTC-4, Sheehan Olver wrote:
>> 
>> The current problem is 
>> 
>> f=Fun(x->x^2,Chebyshev(Interval(a,b))) 
>> 
>> represents a function on [a,b] as type Fun{Float64,Chebyshev}.  But 
>> sometimes one needs to convert numbers to functions a la 
>> 
>> convert(typeof(f),5) 
>> 
>> e.g., as part of creating the vector [f,5].  Since the parameters a and b 
>> are not inferable, the current solution is for Chebyshev to support both 
>> Interval and another type AnyDomain.   This has the bad effect of losing 
>> type information. 
>> 
>> So I’m currently debating changing to template parameters: 
>> 
>> f=Fun(x->x^2,Chebyshev(Interval{a,b}())) 
>> 
>> so that typeof(f) is Fun{Float64,Chebyshev{Interval{a,b}}}.  Then it is 
>> possible to convert numbers to the correct type.
> 
> Why not make something like Interval{promote_type(typeof(a),typeof(b))} ?   
> That way the Interval is parameterized by its type, not by its endpoints.


[julia-users] Errors from make doctest

2015-04-08 Thread Harry B
Hello,

I did a fresh build of julia from master/latest source and I get tons of 
errors for "make doctest" (all errors are identical though)

I am attempting to run "make doctest" from the doc/ subdirectory. My goal 
is to make sure my edits and corrections pass the doctests.

I have tried two ways: (1) installing Julia from source with make at the 
top level (and then make install); (2) running make doctest with the path 
pointing to a nightly snapshot of Julia from a few days ago.

"make doctest" runs now, but I get an error for almost all doctest entries. 
All errors are of the form "ERROR: UndefVarError: @raw_str not defined"

File "manual/constructors.rst", line 538, in default
Failed example:
(1 + 2im)//(1 - 2im)
Expected:
-3//5 + 4//5*im
Got:
ERROR: UndefVarError: @raw_str not defined

Any help is appreciated.

Thanks
--
Harry



Re: [julia-users] Usage of floating point template arguments with many different values

2015-04-08 Thread Steven G. Johnson


On Tuesday, April 7, 2015 at 5:53:09 PM UTC-4, Sheehan Olver wrote:
>
>
> The current problem is 
>
> f=Fun(x->x^2,Chebyshev(Interval(a,b))) 
>
> represents a function on [a,b] as type Fun{Float64,Chebyshev}.  But 
> sometimes one needs to convert numbers to functions a la 
>
> convert(typeof(f),5) 
>
> e.g., as part of creating the vector [f,5].  Since the parameters a and b 
> are not inferable, the current solution is for Chebyshev to support both 
> Interval and another type AnyDomain.   This has the bad effect of losing 
> type information. 
>
> So I’m currently debating changing to template parameters: 
>
> f=Fun(x->x^2,Chebyshev(Interval{a,b}())) 
>
> so that typeof(f) is Fun{Float64,Chebyshev{Interval{a,b}}}.  Then it is 
> possible to convert numbers to the correct type.


Why not make something like Interval{promote_type(typeof(a),typeof(b))} ?   
That way the Interval is parameterized by its type, not by its endpoints.
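
A rough sketch of that alternative (illustrative only, in the Julia syntax of the time):

```julia
# Parameterize Interval by its element type; the endpoints stay as
# fields, so they need not be encoded in (or inferred from) the type.
immutable Interval{T<:Real}
    a::T
    b::T
end
Interval(a, b) = Interval{promote_type(typeof(a), typeof(b))}(promote(a, b)...)
```

Then `Interval(0, 1.0)` and `Interval(-2.0, 2.0)` share the type `Interval{Float64}`, so a conversion like `convert(typeof(f), 5)` only needs the element type, not the endpoint values.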


[julia-users] code review: my first outer constructor :)

2015-04-08 Thread Andrei Berceanu
Hi guys!
Here is [part of] some code I recently wrote for solving a physics problem:

https://gist.github.com/berceanu/010d331884848205acef

It's the first time I've tried to use Julia constructors properly (or 
improperly?!), so I need your opinion on a couple of points.
1. On Julia's IRC channel I was told that using AbstractArray instead of 
e.g. Matrix/Vector might yield a performance boost - is that the case?
2. Can you spot any major performance killers in my code?
3. Coming from Python, I am used to things like enumerate etc, but perhaps 
that is not very "Julian"? :) So this last aspect concerns more the coding 
style, I guess.

Thanks so much!
//A


Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Mauro
> Mauro, thanks for the link. I read the paper by Logg and it seems 
> interesting. Do you have any references for how the mapping from the cells 
> to the actual finite elements is done?

No, I never got that far.  But it doesn't contain any nodes, so
presumably those are all separate from the geometric mesh.  Its usage
in FEniCS is limited to triangles and tets, at least last time I
checked.  However, the design (and my implementation) works for all
kinds of cells (elements), except for some of the algorithms in section
4 of the paper.

> While writing my FEM code, I have had some troubles with type stability. In 
> FEM you can have arbitrary many type of finite elements and materials at 
> different parts of the mesh and having it all come together in a type 
> stable way is non trivial in my opinion. Right now, I use something like 
> this to represent the geometric 
> mesh: https://gist.github.com/KristofferC/74bfcc0c79897f3b5754 and I then 
> apply finite element types and material types to different element sets in 
> the geometric mesh. I then create a bunch of "FESections" that all 
> are parameterized by the finite element type and material type. There was 
> some discussion about it 
> here: https://github.com/JuliaGeometry/meta/issues/3

I've only ever worked with one type of element so I don't have great
insights here, but this would probably be the approach I'd take too.

>
> On Wednesday, April 8, 2015 at 5:46:12 PM UTC+2, Mauro wrote:
>>
>> Nat and I once started on a mesh library which implements some of that: 
>> https://bitbucket.org/maurow/mesh.jl 
>> but it has gone a bit stale. 
>>
>> In the spirit of Tim's response, it defines a big mesh datatype and then 
>> helper datatypes, for instance Vertices, which are just a thin wrapper 
>> around the mesh.  So, then you can do: 
>>
>> mesh = ... # construct the mesh 
>> vertices = Vertices(mesh) 
>> n_vertices = length(vertices) 
>> edges = Edges(mesh) 
>> for v in vertices 
>>... 
>> end 
>> v10 = vertices[10] # get a single vertex 
>>
>> (I hope to start using it again in a few weeks to do some FEM work.) 
>>
>> On Wed, 2015-04-08 at 15:35, Kristoffer Carlsson wrote: 
>> > I come from a Python background where direct access to fields in for 
>> > example classes with the dot notation is very common. 
>> > 
>> > However, from what I have seen in different conversations, accessing 
>> fields 
>> > directly is not really Julian. Sort of a "fields are an implementation 
>> > detail" mindset, and "what is important are the functions". 
>> > 
>> > Here is an example of a type hierarchy that is a little bit similar to 
>> > types I am working with now: 
>> > 
>> > type Element 
>> > vertices::Vector{Int} 
>> > end 
>> > 
>> > type Node 
>> > coordinates::Vector{Float64} 
>> > id::Int 
>> > end 
>> > 
>> > type FESection 
>> > elements::Vector{Elements} 
>> > nodes::Vector{Nodes} 
>> > end 
>> > 
>> > type Mesh 
>> >sections::Vector{FESection} 
>> > end 
>> > 
>> > Now, let's say that I want to write a function to loop over all 
>> vertices. 
>> > One way (which I would do in Python is): 
>> > 
>> > mesh = Mesh(...) 
>> > for section in mesh.sections 
>> > for element in section.elements 
>> > for vertices in element.vertices 
>> >   blah bla 
>> > end 
>> > end 
>> > end 
>> > 
>> > 
>> > 
>> > However, this accesses the fields directly. Would it be more Julian to 
>> > write getters for the fields? Since Julia does not have @property like 
>> > Python I realize that by accessing the fields you commit to exactly the 
>> > name of the field and it's type while with a getter it would be more 
>> > flexible. 
>> > 
>> > Best regards, 
>> > Kristoffer Carlsson 
>>
>>



Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Kristoffer Carlsson
Thank you Tim for your suggestion about creating functions returning 
iterators. That sounds indeed like a very clean way to do it.

Mauro, thanks for the link. I read the paper by Logg and it seems 
interesting. Do you have any references for how the mapping from the cells 
to the actual finite elements is done?

While writing my FEM code, I have had some troubles with type stability. In 
FEM you can have arbitrarily many types of finite elements and materials at 
different parts of the mesh and having it all come together in a type 
stable way is non-trivial in my opinion. Right now, I use something like 
this to represent the geometric 
mesh: https://gist.github.com/KristofferC/74bfcc0c79897f3b5754 and I then 
apply finite element types and material types to different element sets in 
the geometric mesh. I then create a bunch of "FESections" that all 
are parameterized by the finite element type and material type. There was 
some discussion about it 
here: https://github.com/JuliaGeometry/meta/issues/3
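
The parameterized sections mentioned above could be sketched roughly like this (all type names are made up purely for illustration, in 0.3-era syntax):

```julia
abstract AbstractElement
abstract AbstractMaterial

# Each section is concrete in its element and material type, so inner
# loops over a single section stay type stable; the heterogeneity lives
# only at the level of the collection of sections.
immutable FESection{E<:AbstractElement, M<:AbstractMaterial}
    elements::Vector{E}
    material::M
end

type Model
    sections::Vector{FESection}  # abstract container; dispatch once per section
end
```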



On Wednesday, April 8, 2015 at 5:46:12 PM UTC+2, Mauro wrote:
>
> Nat and I once started on a mesh library which implements some of that: 
> https://bitbucket.org/maurow/mesh.jl 
> but it has gone a bit stale. 
>
> In the spirit of Tim's response, it defines a big mesh datatype and then 
> helper datatypes, for instance Vertices, which are just a thin wrapper 
> around the mesh.  So, then you can do: 
>
> mesh = ... # construct the mesh 
> vertices = Vertices(mesh) 
> n_vertices = length(vertices) 
> edges = Edges(mesh) 
> for v in vertices 
>... 
> end 
> v10 = vertices[10] # get a single vertex 
>
> (I hope to start using it again in a few weeks to do some FEM work.) 
>
> On Wed, 2015-04-08 at 15:35, Kristoffer Carlsson wrote: 
> > I come from a Python background where direct access to fields in for 
> > example classes with the dot notation is very common. 
> > 
> > However, from what I have seen in different conversations, accessing 
> fields 
> > directly is not really Julian. Sort of a "fields are an implementation 
> > detail" mindset, and "what is important are the functions". 
> > 
> > Here is an example of a type hierarchy that is a little bit similar to 
> > types I am working with now: 
> > 
> > type Element 
> > vertices::Vector{Int} 
> > end 
> > 
> > type Node 
> > coordinates::Vector{Float64} 
> > id::Int 
> > end 
> > 
> > type FESection 
> > elements::Vector{Elements} 
> > nodes::Vector{Nodes} 
> > end 
> > 
> > type Mesh 
> >sections::Vector{FESection} 
> > end 
> > 
> > Now, let's say that I want to write a function to loop over all 
> vertices. 
> > One way (which I would do in Python is): 
> > 
> > mesh = Mesh(...) 
> > for section in mesh.sections 
> > for element in section.elements 
> > for vertices in element.vertices 
> >   blah bla 
> > end 
> > end 
> > end 
> > 
> > 
> > 
> > However, this accesses the fields directly. Would it be more Julian to 
> > write getters for the fields? Since Julia does not have @property like 
> > Python I realize that by accessing the fields you commit to exactly the 
> > name of the field and it's type while with a getter it would be more 
> > flexible. 
> > 
> > Best regards, 
> > Kristoffer Carlsson 
>
>

Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Mauro
> Can you comment on the performance implications of directly accessing 
> fields vs your approach?  I'm guessing that directly accessing the fields 
> would be faster?

I think usually it should get inlined and there is no difference:

julia> immutable A
   a::Int
   end

julia> function f1(a::A,n)
   out = 0
   for i=1:n
   out += a.a
   end
   end

f1 (generic function with 1 method)

julia> getA(a::A) = a.a
getA (generic function with 1 method)

julia> function f2(a::A,n)
   out = 0
   for i=1:n
   out += getA(a)
   end
   end
f2 (generic function with 1 method)

julia> @code_native f1(A(5),10)
.text
Filename: none
Source line: 4
pushq   %rbp
movq%rsp, %rbp
Source line: 4
popq%rbp
ret

julia> @code_native f2(A(5),10)
.text
Filename: none
Source line: 4
pushq   %rbp
movq%rsp, %rbp
Source line: 4
popq%rbp
ret

>
>> On Wednesday, April 08, 2015 06:35:46 AM Kristoffer Carlsson wrote: 
>> > I come from a Python background where direct access to fields in for 
>> > example classes with the dot notation is very common. 
>> > 
>> > However, from what I have seen in different conversations, accessing 
>> fields 
>> > directly is not really Julian. Sort of a "fields are an implementation 
>> > detail" mindset, and "what is important are the functions". 
>> > 
>> > Here is an example of a type hierarchy that is a little bit similar to 
>> > types I am working with now: 
>> > 
>> > type Element 
>> > vertices::Vector{Int} 
>> > end 
>> > 
>> > type Node 
>> > coordinates::Vector{Float64} 
>> > id::Int 
>> > end 
>> > 
>> > type FESection 
>> > elements::Vector{Elements} 
>> > nodes::Vector{Nodes} 
>> > end 
>> > 
>> > type Mesh 
>> >sections::Vector{FESection} 
>> > end 
>> > 
>> > Now, let's say that I want to write a function to loop over all 
>> vertices. 
>> > One way (which I would do in Python is): 
>> > 
>> > mesh = Mesh(...) 
>> > for section in mesh.sections 
>> > for element in section.elements 
>> > for vertices in element.vertices 
>> >   blah bla 
>> > end 
>> > end 
>> > end 
>> > 
>> > 
>> > 
>> > However, this accesses the fields directly. Would it be more Julian to 
>> > write getters for the fields? Since Julia does not have @property like 
>> > Python I realize that by accessing the fields you commit to exactly the 
>> > name of the field and it's type while with a getter it would be more 
>> > flexible. 
>> > 
>> > Best regards, 
>> > Kristoffer Carlsson 
>>
>>



Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Phil Tomson


On Wednesday, April 8, 2015 at 8:00:42 AM UTC-7, Tim Holy wrote:
>
> It's a matter of taste, really, but in general I agree that the Julian way 
> is 
> to reduce the number of accesses to fields directly. That said, I do 
> sometimes 
> access the fields. 
>
> However, your iterator example is a good opportunity to illustrate a more 
> julian approach: 
>
> mesh = mesh(...) 
> for vertex in vertices(mesh) 
> blah blah 
> end 
>
> The idea is that vertices(mesh) might return an iterator object, and then 
> you 
> write start, next, and done functions to implement iteration. Presumably 
> you 
> should build the iterators for Mesh on top of iterators for FESection, 
> etc, so 
> the whole thing is composable. You'd then have short implementations of 
> the 
> vertices function taking a Mesh, FESection, or Element. 
>
>
Tim,

Can you comment on the performance implications of directly accessing 
fields vs your approach?  I'm guessing that directly accessing the fields 
would be faster?

Phil 

> On Wednesday, April 08, 2015 06:35:46 AM Kristoffer Carlsson wrote: 
> > I come from a Python background where direct access to fields in for 
> > example classes with the dot notation is very common. 
> > 
> > However, from what I have seen in different conversations, accessing 
> fields 
> > directly is not really Julian. Sort of a "fields are an implementation 
> > detail" mindset, and "what is important are the functions". 
> > 
> > Here is an example of a type hierarchy that is a little bit similar to 
> > types I am working with now: 
> > 
> > type Element 
> > vertices::Vector{Int} 
> > end 
> > 
> > type Node 
> > coordinates::Vector{Float64} 
> > id::Int 
> > end 
> > 
> > type FESection 
> > elements::Vector{Elements} 
> > nodes::Vector{Nodes} 
> > end 
> > 
> > type Mesh 
> >sections::Vector{FESection} 
> > end 
> > 
> > Now, let's say that I want to write a function to loop over all 
> vertices. 
> > One way (which I would do in Python is): 
> > 
> > mesh = Mesh(...) 
> > for section in mesh.sections 
> > for element in section.elements 
> > for vertices in element.vertices 
> >   blah bla 
> > end 
> > end 
> > end 
> > 
> > 
> > 
> > However, this accesses the fields directly. Would it be more Julian to 
> > write getters for the fields? Since Julia does not have @property like 
> > Python I realize that by accessing the fields you commit to exactly the 
> > name of the field and it's type while with a getter it would be more 
> > flexible. 
> > 
> > Best regards, 
> > Kristoffer Carlsson 
>
>

Re: [julia-users] Holt-Winters D-E Smoothing in Julia

2015-04-08 Thread Philip Tellis
Thank you.

Re: [julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Patrick O'Leary
Thank you Tim for explaining that more clearly. This morning's reply was 
ENOCOFFEE :D

Please do file an issue, Nils, and thanks for investigating further.

On Wednesday, April 8, 2015 at 10:03:04 AM UTC-5, Tim Holy wrote:
>
> Sounds like a bug, but I think what Patrick was trying to say is that it 
> would 
> help to test it with Arrays first just to make sure there's not some 
> problem 
> with your code. Assuming that it works for Arrays but not for 
> SharedArrays, 
> then you should probably file an issue. 
>
> Best, 
> --Tim 
>
> On Wednesday, April 08, 2015 05:57:54 AM Nils Gudat wrote: 
> > Hi Patrick, 
> > 
> > Sorry for the late reply (Easter and all that)! The problem with using 
> > Arrays in the code above is that they're not available for remote 
> processes 
> > to write on, so I'd have to come up with a more complicated structure of 
> > passing computations and results around. The whole appeal of 
> SharedArrays 
> > to me is that they make this unnecessary and therefore lend itself 
> > perfectly to facilitate easy parallelization of the kind of problems 
> > economists are often facing (basically filling large Arrays with the 
> > results of lots and lots of simple maximization problems). One way to 
> get 
> > around both problems would be to initialize the Array as 
> > 
> > nprocs() > 1 ? result1 = SharedArray(Float64, (3,3,3)) : result1 = 
> > Array(Float64, (3,3,3)) 
> > 
> > The main reason for my original post here was to see whether anyone 
> could 
> > explain why SharedArrays behave in this way when used on one process and 
> > whether this was potentially a bug (or at least something that should be 
> > mentioned in the docs). 
>
>

Re: [julia-users] A method with an optional parameter may result in a different method being called

2015-04-08 Thread Daan Huybrechs
Thanks, Seth, that example looks even stranger than what I encountered.

I have suggested adding a short paragraph to the manual:
https://github.com/JuliaLang/julia/pull/10769

I can't think of a case where the author of the code would want a default 
value from one method to be used in another method instead, but one good 
reason for the current implementation of optional arguments is certainly 
simplicity.
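
The kind of surprise involved is easy to reproduce with a small example (hypothetical functions, purely for illustration): an optional argument expands into several generated methods, and a more specific method can take precedence over the generated one.

```julia
g(x, y = 0) = 1    # expands into two methods: g(x, y) and g(x) = g(x, 0)
g(x::Int)   = 2    # a more specific one-argument method

g(1.0)  # -> 1, via the generated one-argument fallback g(x)
g(1)    # -> 2; the default value from the first definition is never used
```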



Re: [julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Tim Holy
In a sense, a SharedArray should act like an Array if you're using a single 
core. I'm not saying all methods have been implemented for them, but if the 
code runs then it certainly shouldn't give different answers. So I'd say this 
is a bug.

So now that we know it's a bug, I'd urge you to file an issue. The advantage is 
that it's less likely to be forgotten than an email. If whoever digs into it 
(maybe me but not this week, maybe someone else) discovers there isn't really 
a problem after all, you can blame me :-).

--Tim

On Wednesday, April 08, 2015 08:15:38 AM Nils Gudat wrote:
> Apologies again for being a little slow (mentally now, not in terms of
> response time); by trying an Array you mean running the code in single-core
> mode and using an Array instead of a SharedArray? Running it in parallel
> with a regular Array (unsurprisingly) doesn't work. So I have:
> 
> Single core: initializing result* as Arrays works, initializing as
> SharedArrays gives unexpected result
> Multi-core: initializing result* as Arrays returns empty Arrays,
> initializing as SharedArrays works
> 
> I'm still not sure whether this is a bug or the "intended" behaviour of
> SharedArrays, i.e. whether SharedArrays maybe just aren't meant to work in
> this situation when not using multiple processes. I'm a bit reluctant to
> file an issue simply because my understanding of the workings of
> SharedArray is limited, so I thought I'd ask for help here first.



Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Mauro
Nat and I once started on a mesh library which implements some of that:
https://bitbucket.org/maurow/mesh.jl
but it has gone a bit stale.

In the spirit of Tim's response, it defines a big mesh datatype and then
helper datatypes, for instance Vertices, which are just a thin wrapper
around the mesh.  So, then you can do:

mesh = ... # construct the mesh
vertices = Vertices(mesh)
n_vertices = length(vertices)
edges = Edges(mesh)
for v in vertices
   ...
end
v10 = vertices[10] # get a single vertex

(I hope to start using it again in a few weeks to do some FEM work.)

On Wed, 2015-04-08 at 15:35, Kristoffer Carlsson  wrote:
> I come from a Python background where direct access to fields in for 
> example classes with the dot notation is very common.
>
> However, from what I have seen in different conversations, accessing fields 
> directly is not really Julian. Sort of a "fields are an implementation 
> detail" mindset, and "what is important are the functions".
>
> Here is an example of a type hierarchy that is a little bit similar to 
> types I am working with now:
>
> type Element
> vertices::Vector{Int}
> end
>
> type Node
> coordinates::Vector{Float64}
> id::Int
> end
>
> type FESection
> elements::Vector{Elements}
> nodes::Vector{Nodes}
> end
>
> type Mesh
>sections::Vector{FESection}
> end
>
> Now, let's say that I want to write a function to loop over all vertices. 
> One way (which I would do in Python is):
>
> mesh = Mesh(...)
> for section in mesh.sections
> for element in section.elements
> for vertices in element.vertices
>   blah bla
> end
> end
> end
>
>
>
> However, this accesses the fields directly. Would it be more Julian to 
> write getters for the fields? Since Julia does not have @property like 
> Python I realize that by accessing the fields you commit to exactly the 
> name of the field and it's type while with a getter it would be more 
> flexible.
>
> Best regards,
> Kristoffer Carlsson



Re: [julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Nils Gudat
Apologies again for being a little slow (mentally now, not in terms of 
response time); by trying an Array you mean running the code in single-core 
mode and using an Array instead of a SharedArray? Running it in parallel 
with a regular Array (unsurprisingly) doesn't work. So I have:

Single core: initializing result* as Arrays works, initializing as 
SharedArrays gives unexpected result
Multi-core: initializing result* as Arrays returns empty Arrays, 
initializing as SharedArrays works

I'm still not sure whether this is a bug or the "intended" behaviour of 
SharedArrays, i.e. whether SharedArrays maybe just aren't meant to work in 
this situation when not using multiple processes. I'm a bit reluctant to 
file an issue simply because my understanding of the workings of 
SharedArray is limited, so I thought I'd ask for help here first.


Re: [julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Tim Holy
Sounds like a bug, but I think what Patrick was trying to say is that it would 
help to test it with Arrays first just to make sure there's not some problem 
with your code. Assuming that it works for Arrays but not for SharedArrays, 
then you should probably file an issue.

Best,
--Tim

On Wednesday, April 08, 2015 05:57:54 AM Nils Gudat wrote:
> Hi Patrick,
> 
> Sorry for the late reply (Easter and all that)! The problem with using
> Arrays in the code above is that they're not available for remote processes
> to write on, so I'd have to come up with a more complicated structure of
> passing computations and results around. The whole appeal of SharedArrays
> to me is that they make this unnecessary and therefore lend themselves
> perfectly to easy parallelization of the kind of problems
> economists are often facing (basically filling large Arrays with the
> results of lots and lots of simple maximization problems). One way to get
> around both problems would be to initialize the Array as
> 
> nprocs() > 1 ? result1 = SharedArray(Float64, (3,3,3)) : result1 =
> Array(Float64, (3,3,3))
> 
> The main reason for my original post here was to see whether anyone could
> explain why SharedArrays behave in this way when used on one process and
> whether this was potentially a bug (or at least something that should be
> mentioned in the docs).



Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Tim Holy
It's a matter of taste, really, but in general I agree that the Julian way is 
to reduce the number of accesses to fields directly. That said, I do sometimes 
access the fields.

However, your iterator example is a good opportunity to illustrate a more 
julian approach:

mesh = mesh(...)
for vertex in vertices(mesh)
blah blah
end

The idea is that vertices(mesh) might return an iterator object, and then you 
write start, next, and done functions to implement iteration. Presumably you 
should build the iterators for Mesh on top of iterators for FESection, etc, so 
the whole thing is composable. You'd then have short implementations of the 
vertices function taking a Mesh, FESection, or Element.
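
With the types from Kristoffer's example, that composition might be sketched as follows (an untested sketch; it leans on `chain` from the Iterators.jl package rather than hand-written start/next/done methods):

```julia
using Iterators  # the Iterators.jl package, for chain

# Each level's iterator is built from the level below it, so the
# whole thing composes: Mesh -> FESection -> Element -> vertex.
vertices(e::Element)   = e.vertices
vertices(s::FESection) = chain([vertices(e) for e in s.elements]...)
vertices(m::Mesh)      = chain([vertices(s) for s in m.sections]...)

for vertex in vertices(mesh)
    # blah blah
end
```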

Best,
--Tim

On Wednesday, April 08, 2015 06:35:46 AM Kristoffer Carlsson wrote:
> I come from a Python background where direct access to fields in for
> example classes with the dot notation is very common.
> 
> However, from what I have seen in different conversations, accessing fields
> directly is not really Julian. Sort of a "fields are an implementation
> detail" mindset, and "what is important are the functions".
> 
> Here is an example of a type hierarchy that is a little bit similar to
> types I am working with now:
> 
> type Element
> vertices::Vector{Int}
> end
> 
> type Node
> coordinates::Vector{Float64}
> id::Int
> end
> 
> type FESection
> elements::Vector{Elements}
> nodes::Vector{Nodes}
> end
> 
> type Mesh
>sections::Vector{FESection}
> end
> 
> Now, let's say that I want to write a function to loop over all vertices.
> One way (which I would do in Python is):
> 
> mesh = Mesh(...)
> for section in mesh.sections
> for element in section.elements
> for vertices in element.vertices
>   blah bla
> end
> end
> end
> 
> 
> 
> However, this accesses the fields directly. Would it be more Julian to
> write getters for the fields? Since Julia does not have @property like
> Python I realize that by accessing the fields you commit to exactly the
> name of the field and it's type while with a getter it would be more
> flexible.
> 
> Best regards,
> Kristoffer Carlsson



Re: [julia-users] Re: [ANN] JuliaIO and FileIO

2015-04-08 Thread Tim Holy
No criticism intended; I just wanted to save you (or anyone else) time :-).

I agree that it's also a great opportunity to refactor, and (without taking 
the time to think about it too deeply) your suggestions seem very reasonable.

Best,
--Tim

On Wednesday, April 08, 2015 07:23:40 AM Simon Danisch wrote:
> Excuse my ignorance Tim! I just don't allow myself to waste too much time
> on this, so that I'm not looking into things very deeply.
> This indeed solves the issue in a slightly different way.
> I think we can definitely reuse more of what Images is currently doing.
> It makes a lot of sense to switch out symbols in favor of types to better
> model hierarchies.
> Maybe this should come with a mime type overhaul, as they're closely
> related.
> 
> Am Samstag, 4. April 2015 17:41:14 UTC+2 schrieb Simon Danisch:
> > Hi there,
> > 
> > FileIO has the aim to make it very easy to read any arbitrary file.
> > I hastily copied together a proof of concept by taking code from
> > Images.jl.
> > 
> > JuliaIO is the umbrella group, which takes IO packages with no home. If
> > someone wrote an IO package, but doesn't have time to implement the FileIO
> > interface, giving it to JuliaIO might be a good idea in order to keep the
> > package usable.
> > 
> > Concept of FileIO is described in the readme:
> > 
> > Meta package for FileIO. Purpose is to open a file and return the
> > respective Julia object, without doing any research on how to open the
> > file.
> > 
> > f = file"test.jpg"    # -> File{:jpg}
> > read(f)               # -> Image
> > read(file"test.obj")  # -> Mesh
> > read(file"test.csv")  # -> DataFrame
> > 
> > So far only Images are supported and MeshIO is on the horizon.
> > 
> > It is structured the following way: There are three levels of abstraction,
> > first FileIO, defining the file_str macro etc, then a meta package for a
> > certain class of file, e.g. Images or Meshes. This meta package defines
> > the
> > Julia datatype (e.g. Mesh, Image) and organizes the importer libraries.
> > This is also a good place to define IO-library-independent tests for
> > different file formats. Then on the last level, there are the low-level
> > importer libraries, which do the actual IO. They're included via Mike
> > Innes' Requires package, so
> > that it doesn't introduce extra load time if not needed. This way, using
> > FileIO without reading/writing anything should have short load times.
> > 
> > As an implementation example please look at FileIO -> ImageIO ->
> > ImageMagick. This should already work as a proof of concept. Try:
> > 
> > using FileIO          # should be very fast, thanks to Mike Innes' Requires package
> > read(file"test.jpg")  # takes a little longer as it needs to load the IO library
> > read(file"test.jpg")  # should be fast
> > read(File("documents", "images", "myimage.jpg"))  # automatic joinpath via File constructor
> > 
> > Please open issues if things are not clear or if you find flaws in the
> > concept/implementation.
> > 
> > If you're interested in working on this infrastructure I'll be pleased to
> > add you to the group JuliaIO.
> > 
> > 
> > Best,
> > 
> > Simon



[julia-users] Re: [ANN] JuliaIO and FileIO

2015-04-08 Thread Simon Danisch
Excuse my ignorance Tim! I just don't allow myself to waste too much time 
on this, so that I'm not looking into things very deeply.
This indeed solves the issue in a slightly different way.
I think we can definitely reuse more of what Images is currently doing.
It makes a lot of sense to switch out symbols in favor of types to better 
model hierarchies.
Maybe this should come with a mime type overhaul, as they're closely 
related. 


Am Samstag, 4. April 2015 17:41:14 UTC+2 schrieb Simon Danisch:
>
> Hi there,
>
> FileIO has the aim to make it very easy to read any arbitrary file.
> I hastily copied together a proof of concept by taking code from Images.jl.
>
> JuliaIO is the umbrella group, which takes IO packages with no home. If 
> someone wrote an IO package, but doesn't have time to implement the FileIO 
> interface, giving it to JuliaIO might be a good idea in order to keep the 
> package usable.
>
> Concept of FileIO is described in the readme:
>
> Meta package for FileIO. Purpose is to open a file and return the 
> respective Julia object, without doing any research on how to open the file.
>
> f = file"test.jpg"       # -> File{:jpg}
> read(f)                  # -> Image
> read(file"test.obj")     # -> Mesh
> read(file"test.csv")     # -> DataFrame
>
> So far only Images are supported and MeshIO is on the horizon.
>
> It is structured the following way: There are three levels of abstraction, 
> first FileIO, defining the file_str macro etc, then a meta package for a 
> certain class of file, e.g. Images or Meshes. This meta package defines the 
> Julia datatype (e.g. Mesh, Image) and organizes the importer libraries. 
> This is also a good place to define IO library independent tests for 
> different file formats. Then on the last level, there are the low-level 
> importer libraries, which do the actual IO. They're included via Mike Innes' 
> Requires package, so 
> that it doesn't introduce extra load time if not needed. This way, using 
> FileIO without reading/writing anything should have short load times.
>
> As an implementation example please look at FileIO -> ImageIO -> 
> ImageMagick. This should already work as a proof of concept. Try:
>
> using FileIO          # should be very fast, thanks to Mike Innes' Requires package
> read(file"test.jpg")  # takes a little longer as it needs to load the IO library
> read(file"test.jpg")  # should be fast
> read(File("documents", "images", "myimage.jpg"))  # automatic joinpath via File constructor
>
> Please open issues if things are not clear or if you find flaws in the 
> concept/implementation.
>
> If you're interested in working on this infrastructure I'll be pleased to 
> add you to the group JuliaIO.
>
>
> Best,
>
> Simon
>


Re: [julia-users] A method with an optional parameter may result in a different method being called

2015-04-08 Thread Seth


> That is a bit surprising indeed.  Maybe worth an addition to 
>
> http://docs.julialang.org/en/latest/manual/methods/#note-on-optional-and-keyword-arguments
>  
> ? 
>

It's not only surprising, I'd consider it (absent a real reason why it 
should be this way) incorrect. Once you've decided which method you're 
going to use (in this case, f()), the system shouldn't suddenly "switch". 
Consider what might happen if the optional parameter depends on an external 
variable/process (simple case below):

julia> a = 5
5

julia> function f(d=a) info("in f1"); d; end
f (generic function with 2 methods)


julia> function f(a::Float64) info("in f2"); round(Int,a); end
f (generic function with 3 methods)


julia> f()
INFO: in f1
5


julia> a = 6.6
6.6


julia> f()
INFO: in f2
7

This is undesirable for a number of reasons.




Re: [julia-users] A method with an optional parameter may result in a different method being called

2015-04-08 Thread Daan Huybrechs
On Wednesday, April 8, 2015 at 3:52:44 PM UTC+2, Mauro wrote:
>
>
> On Wed, 2015-04-08 at 15:34, Daan Huybrechs  > wrote: 
> > I was a little surprised today by the following behaviour: 
> > 
> > julia> f(d=2) = d 
> > f (generic function with 2 methods) 
> > 
> > julia> f(a::Int) = -a 
> > f (generic function with 3 methods) 
> > 
> > julia> f() 
> > -2 
> > 
> > julia> methods(f) 
> > # 3 methods for generic function "f": 
> > f() at none:1 
> > f(a::Int64) at none:1 
> > f(d) at none:1 
> > 
> > Yet, there is no bug here and Julia works as documented: the first 
> > definition creates two methods, with f() simply calling f(2). The result 
> is 
> > -2 because I specified this is the behaviour I want f to have for 
> integers. 
> > Still, it feels a bit weird that the value 2 is taken from the first 
> > definition above, but the corresponding method is not actually called - 
> the 
> > second one is. 
> > 
> > For completeness, the fact that the second definition is more 
> specialized 
> > is not essential. The behaviour stays the same when you specialize d to 
> be 
> > an Int as well: 
> > 
> > julia> g(d::Int=2) = d 
> > g (generic function with 2 methods) 
> > 
> > julia> g(a::Int) = -a 
> > g (generic function with 2 methods) 
> > 
> > julia> g() 
> > -2 
> > 
> > julia> methods(g) 
> > # 2 methods for generic function "g": 
> > g() at none:1 
> > g(a::Int64) at none:1 
> > 
> > This time, there are only two methods for g and my second definition is 
> > called because it is given after the definition with d, essentially 
> > overwriting it. But the optional parameter survives this. 
> > 
> > I guess that in my mind an optional parameter with a default value was 
> tied 
> > to a specific method, but in a dynamic setting that is not necessarily 
> the 
> > case. I don't think anything should change here (a warning at best, 
> though 
> > that might be difficult), but it may be good to know about the above 
> > possibility when debugging code. 
>
> That is a bit surprising indeed.  Maybe worth an addition to 
>
> http://docs.julialang.org/en/latest/manual/methods/#note-on-optional-and-keyword-arguments
>  
> ? 
>

Thanks for the link, that seems like the right place. I'll suggest an 
addition on GitHub.



Re: [julia-users] A method with an optional parameter may result in a different method being called

2015-04-08 Thread Mauro

On Wed, 2015-04-08 at 15:34, Daan Huybrechs  wrote:
> I was a little surprised today by the following behaviour:
>
> julia> f(d=2) = d
> f (generic function with 2 methods)
>
> julia> f(a::Int) = -a
> f (generic function with 3 methods)
>
> julia> f()
> -2
>
> julia> methods(f)
> # 3 methods for generic function "f":
> f() at none:1
> f(a::Int64) at none:1
> f(d) at none:1
>
> Yet, there is no bug here and Julia works as documented: the first 
> definition creates two methods, with f() simply calling f(2). The result is 
> -2 because I specified this is the behaviour I want f to have for integers. 
> Still, it feels a bit weird that the value 2 is taken from the first 
> definition above, but the corresponding method is not actually called - the 
> second one is.
>
> For completeness, the fact that the second definition is more specialized 
> is not essential. The behaviour stays the same when you specialize d to be 
> an Int as well:
>
> julia> g(d::Int=2) = d
> g (generic function with 2 methods)
>
> julia> g(a::Int) = -a
> g (generic function with 2 methods)
>
> julia> g()
> -2
>
> julia> methods(g)
> # 2 methods for generic function "g":
> g() at none:1
> g(a::Int64) at none:1
>
> This time, there are only two methods for g and my second definition is 
> called because it is given after the definition with d, essentially 
> overwriting it. But the optional parameter survives this.
>
> I guess that in my mind an optional parameter with a default value was tied 
> to a specific method, but in a dynamic setting that is not necessarily the 
> case. I don't think anything should change here (a warning at best, though 
> that might be difficult), but it may be good to know about the above 
> possibility when debugging code.

That is a bit surprising indeed.  Maybe worth an addition to
http://docs.julialang.org/en/latest/manual/methods/#note-on-optional-and-keyword-arguments
 ?


Re: [julia-users] Short question about type alias with partially known parameter types

2015-04-08 Thread Kristoffer Carlsson
Thank you for the code and the pull request link. Very interesting.

On Tuesday, April 7, 2015 at 8:55:34 PM UTC+2, Mauro wrote:
>
> In 0.3 it's not possible in 0.4 this works: 
>
> julia> type A{T,P} 
>a::T 
>end 
>
> julia> typealias B{T} A{T, 1} 
> A{T,1} 
>
> julia> call{T}(::Type{B}, a::T) = B{T}(a) 
> call (generic function with 936 methods) 
>
> julia> B(1.0) 
> A{Float64,1}(1.0) 
>
> See https://github.com/JuliaLang/julia/pull/8712 for discussion 
> call-overloading. 
>
> On Tue, 2015-04-07 at 20:26, Kristoffer Carlsson  > wrote: 
> > Let's say I have a type parameterized with two other types and I do the 
> > following: 
> > 
> > type A{T,P} 
> > a::T 
> > end 
> > 
> > typealias B{T} A{T, 1} 
> > 
> > B(1.0) # Error 
> > 
> > B{Float64}(1.0) # Works 
> > 
> > Is there anyway you could write the code so that the T type is inferred 
> > from the argument to the call to B. 
> > 
> > Best regards, 
> > Kristoffer Carlsson 
>
>

[julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Kristoffer Carlsson
I come from a Python background where direct access to fields (for example in 
classes, with the dot notation) is very common.

However, from what I have seen in different conversations, accessing fields 
directly is not really Julian. Sort of a "fields are an implementation 
detail" mindset, and "what is important are the functions".

Here is an example of a type hierarchy that is a little bit similar to 
types I am working with now:

type Element
vertices::Vector{Int}
end

type Node
coordinates::Vector{Float64}
id::Int
end

type FESection
elements::Vector{Element}
nodes::Vector{Node}
end

type Mesh
   sections::Vector{FESection}
end

Now, let's say that I want to write a function to loop over all vertices. 
One way (which is what I would do in Python) is:

mesh = Mesh(.)
for section in mesh.sections
for element in section.elements
for vertices in element.vertices
  blah bla
end
end
end



However, this accesses the fields directly. Would it be more Julian to 
write getters for the fields? Since Julia does not have @property like 
Python, I realize that by accessing the fields you commit to exactly the 
name of the field and its type, while with a getter it would be more 
flexible.

Best regards,
Kristoffer Carlsson
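
For illustration, such getters might look like this (a hedged sketch; the accessor names and the pattern are my own suggestion, not an established Julia API):

```julia
# Simplified stand-ins for the types above (field layout as in the post).
type Element
    vertices::Vector{Int}
end

type FESection
    elements::Vector{Element}
end

type Mesh
    sections::Vector{FESection}
end

# Hypothetical accessor functions: callers no longer depend on field names,
# so the fields can later be renamed or computed without breaking the loop.
vertices(e::Element)   = e.vertices
elements(s::FESection) = s.elements
sections(m::Mesh)      = m.sections

# The triple loop then reads:
mesh = Mesh([FESection([Element([1, 2, 3])])])
for section in sections(mesh)
    for element in elements(section)
        for vertex in vertices(element)
            # blah bla
        end
    end
end
```

The getters cost nothing at runtime (they inline to a field load), so the choice is purely about the flexibility of the interface.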



[julia-users] A method with an optional parameter may result in a different method being called

2015-04-08 Thread Daan Huybrechs
I was a little surprised today by the following behaviour:

julia> f(d=2) = d
f (generic function with 2 methods)

julia> f(a::Int) = -a
f (generic function with 3 methods)

julia> f()
-2

julia> methods(f)
# 3 methods for generic function "f":
f() at none:1
f(a::Int64) at none:1
f(d) at none:1

Yet, there is no bug here and Julia works as documented: the first 
definition creates two methods, with f() simply calling f(2). The result is 
-2 because I specified this is the behaviour I want f to have for integers. 
Still, it feels a bit weird that the value 2 is taken from the first 
definition above, but the corresponding method is not actually called - the 
second one is.

For completeness, the fact that the second definition is more specialized 
is not essential. The behaviour stays the same when you specialize d to be 
an Int as well:

julia> g(d::Int=2) = d
g (generic function with 2 methods)

julia> g(a::Int) = -a
g (generic function with 2 methods)

julia> g()
-2

julia> methods(g)
# 2 methods for generic function "g":
g() at none:1
g(a::Int64) at none:1

This time, there are only two methods for g and my second definition is 
called because it is given after the definition with d, essentially 
overwriting it. But the optional parameter survives this.

I guess that in my mind an optional parameter with a default value was tied 
to a specific method, but in a dynamic setting that is not necessarily the 
case. I don't think anything should change here (a warning at best, though 
that might be difficult), but it may be good to know about the above 
possibility when debugging code.
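
The mechanism is easier to see as a sketch of the equivalent hand-written definitions (illustrative only, not Julia's actual lowering):

```julia
# What f(d=2) = d effectively expands to:
f(d) = d        # the general method
f() = f(2)      # zero-argument method that forwards the default value

# A later, more specific definition intercepts the forwarded call:
f(a::Int) = -a

# f() calls f(2), and dispatch on the *value* 2 now selects f(a::Int):
f()             # -> -2
f(3.5)          # -> 3.5 (still hits the general method)
```

The default value is baked into the zero-argument forwarding method, but which method the forwarded call hits is decided by ordinary dispatch at call time.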




[julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Patrick O'Leary
I understand *why* you're using them; the question is whether they're 
broken or not. One way to figure that out is to not use them and see if you 
still get the weird behavior.

(No worries on the response speed, this is your issue after all!)

On Wednesday, April 8, 2015 at 7:57:54 AM UTC-5, Nils Gudat wrote:
>
> Hi Patrick, 
>
> Sorry for the late reply (Easter and all that)! The problem with using 
> Arrays in the code above is that they're not available for remote processes 
> to write on, so I'd have to come up with a more complicated structure of 
> passing computations and results around. The whole appeal of SharedArrays 
> to me is that they make this unnecessary and therefore lend themselves 
> perfectly to easy parallelization of the kind of problems 
> economists are often facing (basically filling large Arrays with the 
> results of lots and lots of simple maximization problems). One way to get 
> around both problems would be to initialize the Array as
>
> result1 = nprocs() > 1 ? SharedArray(Float64, (3,3,3)) : 
> Array(Float64, (3,3,3))
>
> The main reason for my original post here was to see whether anyone could 
> explain why SharedArrays behave in this way when used on one process and 
> whether this was potentially a bug (or at least something that should be 
> mentioned in the docs).
>


Re: [julia-users] Re: [ANN] JuliaIO and FileIO

2015-04-08 Thread Tim Holy
Images has (I think) basically already solved this problem. The way it uses 
magic numbers is described at
https://github.com/timholy/Images.jl/blob/master/doc/extendingIO.md

For the example (a SIF) file, the magic bytes are enclosed in that
b"Andor Technology Multi-Channel File"
string.

--Tim

On Wednesday, April 08, 2015 05:58:11 AM Simon Danisch wrote:
> This is half true.
> Magic numbers are only used in ImageMagick.jl by accident, as that's how
> the underlying C library does things.
> To be honest, I haven't looked at the details myself yet, as I mostly copied
> and pasted code to get a first prototype running.
> I sketch out in FileIO.jl#3 how I plan on using magic numbers.
> If I find some time, I will make a schematic of some more advanced features
> to handle ambiguities and different endings which map to the same type.
> This should also give a better overview of the architecture and what a
> package creator must do to get integrated into FileIO.
> 
> Best,
> Simon
> 
> Am Samstag, 4. April 2015 17:41:14 UTC+2 schrieb Simon Danisch:
> > Hi there,
> > 
> > FileIO has the aim to make it very easy to read any arbitrary file.
> > I hastily copied together a proof of concept by taking code from
> > Images.jl.
> > 
> > JuliaIO is the umbrella group, which takes IO packages with no home. If
> > someone wrote an IO package, but doesn't have time to implement the FileIO
> > interface, giving it to JuliaIO might be a good idea in order to keep the
> > package usable.
> > 
> > Concept of FileIO is described in the readme:
> > 
> > Meta package for FileIO. Purpose is to open a file and return the
> > respective Julia object, without doing any research on how to open the
> > file.
> > 
> > f = file"test.jpg"       # -> File{:jpg}
> > read(f)                  # -> Image
> > read(file"test.obj")     # -> Mesh
> > read(file"test.csv")     # -> DataFrame
> > 
> > So far only Images are supported and MeshIO is on the horizon.
> > 
> > It is structured the following way: There are three levels of abstraction,
> > first FileIO, defining the file_str macro etc, then a meta package for a
> > certain class of file, e.g. Images or Meshes. This meta package defines
> > the
> > Julia datatype (e.g. Mesh, Image) and organizes the importer libraries.
> > This is also a good place to define IO library independent tests for
> > different file formats. Then on the last level, there are the low-level
> > importer libraries, which do the actual IO. They're included via Mike
> > Innes' Requires package, so
> > that it doesn't introduce extra load time if not needed. This way, using
> > FileIO without reading/writing anything should have short load times.
> > 
> > As an implementation example please look at FileIO -> ImageIO ->
> > ImageMagick. This should already work as a proof of concept. Try:
> > 
> > using FileIO          # should be very fast, thanks to Mike Innes' Requires package
> > read(file"test.jpg")  # takes a little longer as it needs to load the IO library
> > read(file"test.jpg")  # should be fast
> > read(File("documents", "images", "myimage.jpg"))  # automatic joinpath via File constructor
> > 
> > Please open issues if things are not clear or if you find flaws in the
> > concept/implementation.
> > 
> > If you're interested in working on this infrastructure I'll be pleased to
> > add you to the group JuliaIO.
> > 
> > 
> > Best,
> > 
> > Simon
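
Tim's magic-byte check above could be sketched as follows (`matches_magic` and the read-based comparison are my own illustration, not the actual Images.jl implementation; the `read` signature is current Julia's):

```julia
# Hypothetical sketch of magic-byte detection for the SIF example above.
const SIF_MAGIC = b"Andor Technology Multi-Channel File"

function matches_magic(io::IO, magic::AbstractVector{UInt8})
    buf = read(io, length(magic))   # reads at most length(magic) bytes
    buf == magic                    # false if the stream is shorter
end

matches_magic(IOBuffer("Andor Technology Multi-Channel File rest..."), SIF_MAGIC)  # -> true
matches_magic(IOBuffer("JFIF"), SIF_MAGIC)                                         # -> false
```

A dispatcher would try each registered signature against the first bytes of the file and hand the stream to the matching importer.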



Re: [julia-users] Why enumerate and zip are realized by different types?

2015-04-08 Thread Stefan Karpinski
That implementation has a few drawbacks: it allocates a vector of indices
the length of what it's iterating over; it has to know in advance how long
what it's iterating over is; it is more complicated, leading to more
complex machine code.

On Wed, Apr 8, 2015 at 4:51 AM, Jinxuan Zhu  wrote:

> Recently, I have become interested in the two functions enumerate(iter) and
> zip(iter).  So I read iterator.jl and found that they are implemented as
> different types.  I thought enumerate was an alias like
> zip([1:length(a)], a), but it is not.
>
> I am curious why the two functions are implemented as two different
> types. What difference between the types Zip2 and Enumerate makes this
> design an advantage?
>
> Thank you.
>
> Regards,
> Jinxuan Zhu(zhujinx...@gmail.com)
>
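
Stefan's point can be illustrated with a small sketch: both forms yield the same pairs, but `enumerate` carries only a counter in its iteration state, while the `zip` form must construct an index range whose length is known up front.

```julia
a = ["x", "y", "z"]

pairs_enum = collect(enumerate(a))          # counter-based; no length needed
pairs_zip  = collect(zip(1:length(a), a))   # requires length(a) in advance

pairs_enum == pairs_zip                     # -> true: same pairs, different machinery
```

For an iterator whose length is unknown (a stream, a generator), only the `enumerate` form works at all.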


Re: [julia-users] Holt-Winters D-E Smoothing in Julia

2015-04-08 Thread Tom Short
Given the lack of replies, I'd guess that the answer is no. Searching
"holt-winters julia jl" didn't turn up anything.

Your options might be:

* Use RCall to call the function in R. RCall is still rather young.

* Use PyCall to call a function in Python. PyCall is fairly mature.

* Implement your own version in Julia. As a starting point, you can use R
or Python code. The following Python code is more compatible with the MIT
license used by StatsBase.jl than the R code:

https://pythonhosted.org/pycast/_modules/pycast/methods/exponentialsmoothing.html#ExponentialSmoothing

The best starting point might be the following MIT-licensed code in Ruby
that looks pretty clean.

https://github.com/cmdrkeene/holt_winters/blob/master/lib/holt_winters.rb



On Tue, Apr 7, 2015 at 5:48 PM, Philip Tellis 
wrote:

> Does anyone know if there's a Holt-Winters Double-Exponential Smoothing
> module for Julia?
>
> It's available in R:
> http://astrostatistics.psu.edu/su07/R/html/stats/html/HoltWinters.html
>
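
As a starting point for a Julia port, Holt's double exponential smoothing fits in a few lines. This is a hedged sketch of the textbook recurrence, not a drop-in replacement for R's HoltWinters; `holt_smooth` is a hypothetical name, and `alpha`/`beta` are the usual level and trend smoothing parameters.

```julia
function holt_smooth(y::Vector{Float64}, alpha::Float64, beta::Float64)
    level = y[1]                 # initial level
    trend = y[2] - y[1]          # initial trend from the first two points
    smoothed = similar(y)
    smoothed[1] = y[1]
    for t in 2:length(y)
        prev_level = level
        # standard Holt recurrences for level and trend
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        smoothed[t] = level
    end
    smoothed
end

# Sanity check: with alpha = 1 the level tracks the series exactly.
holt_smooth([1.0, 2.0, 4.0, 8.0], 1.0, 0.5)  # -> [1.0, 2.0, 4.0, 8.0]
```

Adding the seasonal component on top of this would give the full triple (Holt-Winters) method.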


[julia-users] Re: [ANN] JuliaIO and FileIO

2015-04-08 Thread Simon Danisch
This is half true.
Magic numbers are only used in ImageMagick.jl by accident, as that's how 
the underlying C library does things.
To be honest, I haven't looked at the details myself yet, as I mostly copied 
and pasted code to get a first prototype running.
I sketch out in FileIO.jl#3 how I plan on using magic numbers.
If I find some time, I will make a schematic of some more advanced features 
to handle ambiguities and different endings which map to the same type.
This should also give a better overview of the architecture and what a 
package creator must do to get integrated into FileIO.

Best,
Simon


Am Samstag, 4. April 2015 17:41:14 UTC+2 schrieb Simon Danisch:
>
> Hi there,
>
> FileIO has the aim to make it very easy to read any arbitrary file.
> I hastily copied together a proof of concept by taking code from Images.jl.
>
> JuliaIO is the umbrella group, which takes IO packages with no home. If 
> someone wrote an IO package, but doesn't have time to implement the FileIO 
> interface, giving it to JuliaIO might be a good idea in order to keep the 
> package usable.
>
> Concept of FileIO is described in the readme:
>
> Meta package for FileIO. Purpose is to open a file and return the 
> respective Julia object, without doing any research on how to open the file.
>
> f = file"test.jpg"       # -> File{:jpg}
> read(f)                  # -> Image
> read(file"test.obj")     # -> Mesh
> read(file"test.csv")     # -> DataFrame
>
> So far only Images are supported and MeshIO is on the horizon.
>
> It is structured the following way: There are three levels of abstraction, 
> first FileIO, defining the file_str macro etc, then a meta package for a 
> certain class of file, e.g. Images or Meshes. This meta package defines the 
> Julia datatype (e.g. Mesh, Image) and organizes the importer libraries. 
> This is also a good place to define IO library independent tests for 
> different file formats. Then on the last level, there are the low-level 
> importer libraries, which do the actual IO. They're included via Mike Innes' 
> Requires package, so 
> that it doesn't introduce extra load time if not needed. This way, using 
> FileIO without reading/writing anything should have short load times.
>
> As an implementation example please look at FileIO -> ImageIO -> 
> ImageMagick. This should already work as a proof of concept. Try:
>
> using FileIO          # should be very fast, thanks to Mike Innes' Requires package
> read(file"test.jpg")  # takes a little longer as it needs to load the IO library
> read(file"test.jpg")  # should be fast
> read(File("documents", "images", "myimage.jpg"))  # automatic joinpath via File constructor
>
> Please open issues if things are not clear or if you find flaws in the 
> concept/implementation.
>
> If you're interested in working on this infrastructure I'll be pleased to 
> add you to the group JuliaIO.
>
>
> Best,
>
> Simon
>


[julia-users] Re: Unexpected behaviour when running parallelized code on single worker

2015-04-08 Thread Nils Gudat
Hi Patrick, 

Sorry for the late reply (Easter and all that)! The problem with using 
Arrays in the code above is that they're not available for remote processes 
to write on, so I'd have to come up with a more complicated structure of 
passing computations and results around. The whole appeal of SharedArrays 
to me is that they make this unnecessary and therefore lend themselves 
perfectly to easy parallelization of the kind of problems 
economists are often facing (basically filling large Arrays with the 
results of lots and lots of simple maximization problems). One way to get 
around both problems would be to initialize the Array as

result1 = nprocs() > 1 ? SharedArray(Float64, (3,3,3)) : Array(Float64, (3,3,3))

The main reason for my original post here was to see whether anyone could 
explain why SharedArrays behave in this way when used on one process and 
whether this was potentially a bug (or at least something that should be 
mentioned in the docs).
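
That conditional initialization could be wrapped in a small helper. This is a hedged sketch with a hypothetical name (`result_array`); note that the constructor syntax here is current Julia's (`SharedArray{T}(dims)`), not the 0.3-era `SharedArray(Float64, dims)` used above.

```julia
using Distributed, SharedArrays

# Hypothetical helper: use a SharedArray only when worker processes exist,
# otherwise fall back to a plain Array, sidestepping the single-process
# behavior discussed in this thread.
result_array(T, dims) = nprocs() > 1 ? SharedArray{T}(dims) : Array{T}(undef, dims)

result1 = result_array(Float64, (3, 3, 3))
size(result1)  # -> (3, 3, 3)
```

The rest of the code can then fill `result1` identically in both the serial and parallel cases.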


[julia-users] Re: Default behavior from omitted catch block when a finally block is added.

2015-04-08 Thread Jan van Oort
> there is something to be said for the explicitness that comes with only 
> catching exceptions when there is a catch keyword involved.

Absolutely. Please go that way. It would be even better to make a "catch" 
mandatory with each "try", in order to ( as someone else here said ) ease 
the load on programmers' brains. 

Anything else would veer off into the exotic, IMHO. 




Op dinsdag 7 april 2015 20:54:18 UTC+2 schreef Harry B:
>
> Hello All,
>
> I find the following behavior a bit non-intuitive. IMHO, if the addition of 
> a finally block changes the default behavior provided by an omitted catch 
> block, we should force the user to declare a catch block.
>
> # Version 0.4.0-dev+4160 (2015-04-06 03:40 UTC)  Commit 8fc5b4e* (1 day 
> old master) x86_64-apple-darwin13.4.0
>
> julia> function f1(x)
>try
>return x > 0 ? x/2 : error("x should be greater than zero")
>end # Let us assume user wants to ignore exception...
>println("returning normally without exception")
>-1
>end
> f1 (generic function with 1 method)
>
> julia> f1(1)
> 0.5
>
> julia> f1(-1)
> returning normally without exception
> -1
>
> julia> function f2(x)
>fp = open("/tmp/unrelated_resource.txt", "w")
>try
>return x > 0 ? x/2 : error("x should be greater than zero")
>finally
>close(fp) # only change is to close the file, no change 
> intended for default on missing-catch-block
>end
>println("returning normally")
>-1
>end
> f2 (generic function with 1 method)
>
> julia> f2(1)
> 0.5
>
> julia> f2(-1)
> ERROR: x should be greater than zero
>  in f2 at none:4
>
> julia>
>
> Thanks
> --
> Harry
>
>

Re: [julia-users] Re: Default behavior from omitted catch block when a finally block is added.

2015-04-08 Thread Mauro
> If we require a 'catch' block for every "try", it would be cleaner. 
> Otherwise I suggest we had a note in the documentation to state the 
> caveats. Particularly for those who come from other background and get too 
> excited about "try end" terse syntax. I would be happy to make some edits 
> in the documentation if you approve.

I read the docs after seeing your first post and was not any wiser.  So
yes, some better docs would be good, thanks!
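
For the docs, a concrete sketch of the explicit-catch form discussed in this thread could look like this (`f3` is a hypothetical name, and an IOBuffer stands in for Harry's file handle):

```julia
function f3(x)
    fp = IOBuffer()     # stand-in for a real file handle
    try
        return x > 0 ? x/2 : error("x should be greater than zero")
    catch
        # explicitly swallow the exception instead of relying on the
        # behavior of an omitted catch block
    finally
        close(fp)       # runs on both the success and error paths
    end
    println("returning normally")
    -1
end

f3(1)    # -> 0.5
f3(-1)   # prints "returning normally", returns -1
```

With the catch clause written out, adding or removing the finally block no longer changes whether the exception propagates.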


[julia-users] Why enumerate and zip are realized by different types?

2015-04-08 Thread Jinxuan Zhu
Recently, I have become interested in the two functions enumerate(iter) and
zip(iter).  So I read iterator.jl and found that they are implemented as
different types.  I thought enumerate was an alias like
zip([1:length(a)], a), but it is not.

I am curious why the two functions are implemented as two different
types. What difference between the types Zip2 and Enumerate makes this
design an advantage?

Thank you.

Regards,
Jinxuan Zhu(zhujinx...@gmail.com)


Re: [julia-users] Extreme garbage collection time

2015-04-08 Thread cdm

just a moment to applaud this exemplary thread !!!

excellent closing post ...

love the Julia community.

best,

cdm