[julia-users] Re: GSoC 2015 with JuliaQuantum

2015-09-24 Thread Amit Jamadagni
Hello everyone,
 We have made some decent progress during GSoC 2015 and would
like to present the work here:
QuDynamics.jl: this is the repository that resulted from GSoC; almost all
of the work is reflected there.
The blog posts also cover the internal design and the progress at various
stages of the project! Hoping to hear from the community on this activity.

Thanking you,
Amit.

On Tue, May 19, 2015 at 11:20 PM, Jarrett Revels 
wrote:

> Amit has already contributed a fair amount to QuBase.jl. His and
> Alexander's work on dynamics solvers will make for an exciting summer for
> JuliaQuantum!
>
> Looking forward to it!
>
> -- Jarrett
>
>
> On Monday, May 18, 2015 at 10:50:02 PM UTC-4, Viral Shah wrote:
>>
>> This is amazing! Do keep us posted on how things go.
>>
>> -viral
>>
>> On Tuesday, May 19, 2015 at 4:42:39 AM UTC+5:30, Amit Jamadagni wrote:
>>>
>>> Hello everyone,
>>>I am most happy to inform that my project titled JuliaQuantum :
>>> Framework for Solvers
>>> 
>>>  has
>>> been selected for GSoC 2015
>>>  under
>>> the NumFOCUS umbrella. I will be working on creating a framework for solvers
>>> used in quantum mechanics under the mentorship of Alexander Croy and
>>> support from the JuliaQuantum team. The project will use
>>> implementations from QuBase.jl, ODE.jl, and Expokit.jl, and will also aim to
>>> implement a few enhancements in those packages in the course of development.
>>> The updates of the project can be followed under the news
>>>  section on the
>>> JuliaQuantum  site. The end result of
>>> the project will be to populate the package QuDynamics.jl
>>>  with features outlined
>>> in the project proposal
>>> .
>>> It would be great to hear from the community on the above mentioned ideas.
>>>
>>> Thanking you,
>>> Amit.
>>>
>>


[julia-users] Re: [ANN] ShaderToy.jl

2015-09-24 Thread Páll Haraldsson
On Wednesday, September 23, 2015 at 11:21:16 PM UTC, Simon Danisch wrote:
>
> Hi,
> you want to try out GPU accelerated ray tracing? You want some quick and 
> easy start for GPU accelerated fractal rendering?
> You can do this quite easily now!
> ShaderToy  allows you to 
> only specify a fragment shader, which is an OpenGL program that can execute 
> arbitrary code per pixel (fragment).
> It's based on GLVisualize and is basically the Julia-native version of: 
> https://www.shadertoy.com/
> I copied a few examples to get you started. Just click on the gifs in the 
> README to see the fragment shader that produced the image.
> The installation is still a little bit wonky, but should mostly work if 
> the script executes without error.
> If it doesn't work, please open an issue. This will help me to make the 
> release of GLPlot and GLVisualize a lot smoother!
>

Very cool. I wonder how it compares to the ray tracer in the K language (or 
the other examples), with the trivial 7 lines of code:
http://www.ffconsultancy.com/languages/ray_tracer/


Or:

a simple ray tracer in PHP
http://quaxio.com/raytracer/
"in PHP (around 1500 lines of code). The code is open source and available 
on my github."

-- 
Palli.



[julia-users] setting span for Loess filter

2015-09-24 Thread Evan
I am using Loess.jl to do some filtering of a few hundred daily time 
series, with gaps, and length ~4000 days.

I want to remove variability with periods below 20 days. In matlab this is 
easy to define:

Z = SMOOTH(X,Y,0.3,'loess') uses the loess method where span is
30% of the data, i.e. span = ceil(0.3*length(Y)).

The Loess.jl function accepts a *span* parameter, with a recommended value 
between 0 and 1. I have tried span = 1/20 (i.e., my desired cutoff 
period), but that choice seems to contradict the fact that smaller (larger) 
values of span lead to less (more) smoothing.  My question is: how do I 
specify span precisely for a given cutoff period?
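
For concreteness, here is the kind of call I mean (a sketch; the interpretation of span as the fraction of points in each local window is my assumption, not something stated in the Loess.jl docs):

```julia
using Loess

# Assumption (not from the Loess.jl docs): span is the fraction of the
# data falling in each local window, so a 20-day window over an N-day
# record corresponds roughly to span = 20 / N, not 1/20.
N = 4000
cutoff = 20.0                 # shortest period to keep, in days
myspan = cutoff / N           # ~0.005 for this record

xs = collect(1.0:N)
ys = sin(2pi * xs / 365) + 0.1 * randn(N)    # toy daily series

model = loess(xs, ys, span = myspan)
smoothed = predict(model, xs)
```

Whether that fraction really acts as a frequency cutoff depends on the local-fit kernel, so it is only a starting point.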

Thanks, Evan


Re: [julia-users] Re: Documentation of operators

2015-09-24 Thread Alex Copeland
Thank you.  I struggled a bit with how to phrase my question. 'operator', 
it turns out, wasn't the best choice. Maybe 'symbols' would have been better? 

A table of operators and other symbols would be very useful. Maybe a good 
project for a newbie like myself. 

To leave something in this thread which might be useful for future 
searchers, I'll briefly mention that I discovered Julia's help mode: if fed 
an empty tab-complete, it prints a huge list of items, including a long list of 
symbols (Unicode too) with documentation. The list isn't complete, 
though, as neither '..' nor '->' is in it, and some entries have missing 
documentation. 

Providing aliases for some of the Unicode items, expanding the list to 
include things like '..', and coming up with a brief synopsis for each 
might be an even better project for me than constructing a table of operators. 
I'll start a new thread if I get anywhere with this.


help?> ⊈
search: ⊈

  No documentation found.

  Base.⊈ is a generic Function.

  # 1 method for generic function "⊈":
  ⊈(l::Set{T}, r::Set{T}) at set.jl:104
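
For the record, here is a minimal sketch of the two constructs themselves (0.4-era syntax), in case it helps future searchers:

```julia
# `->` declares an anonymous function:
getsecond = x -> x[2]
getsecond((10, 20))          # returns 20

# `..` is a relative module path in import/using: go up one level,
# then descend. From inside A.C, `..B` resolves to A.B.
module A
    module B
        f() = 1
    end
    module C
        import ..B
        g() = B.f()
    end
end
A.C.g()                      # returns 1
```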




On Wednesday, September 23, 2015 at 5:07:06 PM UTC-7, Erik Schnetter wrote:
>
> Yes, operators are difficult to search for. There should be a table of 
> them...
>
> The two dots .. are not an operator; they specify an access path to a 
> module. See 
>
> The arrow -> defines an anonymous function. See <
> http://docs.julialang.org/en/release-0.3/manual/functions/>. Here, this 
> function accesses the second element of an array (or tuple or other 
> collection).
>
> -erik
>
> On Wed, Sep 23, 2015 at 7:47 PM, Alex Copeland  > wrote:
>
>> Update from OP: Not sure it matters, but on the off chance the syntax is 
>> new,  I'm running
>>
>> julia> versioninfo()
>> Julia Version 0.5.0-dev+318
>> Commit 3b189b9* (2015-09-22 15:28 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin12.4.0)
>>   CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>>
>>
>> On Wednesday, September 23, 2015 at 3:32:57 PM UTC-7, Alex Copeland wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>> Can someone point me to the documentation for '..'  and '->'  as in 
>>>  'include ..Sort'  and x -> x[2] . I've dug around in the source and in 
>>> readthedocs but patterns like this are the devil to search for unless they 
>>> have a text alias (that you happen to know). 
>>>
>>> Thanks,
>>> Alex
>>>
>>
>
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: GSoC 2015 with JuliaQuantum

2015-09-24 Thread Amit Jamadagni
The link is now corrected; I am also reproducing the message so that it
does not get buried in the thread!

Hello everyone,
 We have made some decent progress during GSoC 2015 and would
like to present the work here:
QuDynamics.jl: this is the repository that resulted from GSoC; almost all
of the work is reflected there.
The blog posts also cover the internal design and the progress at various
stages of the project! Hoping to hear from the community on this activity.

Thanking you,
Amit.

PS : Sincere apologies for the noise.

On Thu, Sep 24, 2015 at 6:17 PM, Amit Jamadagni 
wrote:

> Hello everyone,
>  We have made some decent progress during GSoC 2015 and would
> like to present the work here:
> QuDynamics.jl: this is the repository that resulted from GSoC; almost
> all of the work is reflected there.
> The blog posts also cover the internal design and the progress at various
> stages of the project! Hoping to hear from the community on this activity.
>
> Thanking you,
> Amit.
>
> On Tue, May 19, 2015 at 11:20 PM, Jarrett Revels 
> wrote:
>
>> Amit has already contributed a fair amount to QuBase.jl. His and
>> Alexander's work on dynamics solvers will make for an exciting summer for
>> JuliaQuantum!
>>
>> Looking forward to it!
>>
>> -- Jarrett
>>
>>
>> On Monday, May 18, 2015 at 10:50:02 PM UTC-4, Viral Shah wrote:
>>>
>>> This is amazing! Do keep us posted on how things go.
>>>
>>> -viral
>>>
>>> On Tuesday, May 19, 2015 at 4:42:39 AM UTC+5:30, Amit Jamadagni wrote:

 Hello everyone,
I am most happy to inform that my project titled JuliaQuantum :
 Framework for Solvers
 
  has
 been selected for GSoC 2015
  under
 the NumFOCUS umbrella. I will be working on creating a framework for solvers
 used in quantum mechanics under the mentorship of Alexander Croy and
 support from the JuliaQuantum team. The project will use
 implementations from QuBase.jl, ODE.jl, and Expokit.jl, and will also aim to
 implement a few enhancements in those packages in the course of development.
 The updates of the project can be followed under the news
  section on the
 JuliaQuantum  site. The end result of
 the project will be to populate the package QuDynamics.jl
  with features outlined
 in the project proposal
 .
 It would be great to hear from the community on the above mentioned ideas.

 Thanking you,
 Amit.

>>>
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Páll Haraldsson
[This (part A) is probably not implemented yet, and probably not even a 
new idea... possibly julia-dev material?]


On mið 23.sep 2015 16:26, Milan Bouchet-Valat wrote:

Le mercredi 23 septembre 2015 à 07:38 -0700, Páll Haraldsson a écrit :



instead of these two type-unstable variants:
type MyType
 s::String
end

or of
type MyType
 s::Union(UTF8String,UTF16String)
end


A.
Are these (almost*) equivalent? At least for speed (do they generate the 
same machine code)? [Equivalent to each other, that is, but not to having 
only one type, which would be type-stable.]


That may have been a misunderstanding of mine, because it doesn't have 
to be that way.



I know String (abstract) can have any number of concrete implementations, 
but at any point in time there are just a few, and I know there are more 
than two (e.g. UTF32String).


With a Union of two (or any limited number of) types, it seems the extra 
indirection might be avoided by clever optimizations (converting dispatch 
to an if or "switch", and possibly borrowing bits in the pointers), but I 
guess those are not there - yet.



But in the former case, say there were only these two subtypes: is it not 
safe to do these optimizations, because you can always add more subtypes 
at runtime? Possibly you could until you actually add one, but then you 
would have to recompile code at that point.
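
(For comparison, and not something discussed above: the usual type-stable alternative to both variants is a parametric type, where each instantiation fixes a concrete string type. A sketch in 0.3-era syntax:)

```julia
# Type-unstable: the field type is abstract, so access needs indirection.
type MyType
    s::String
end

# Type-stable: S is a concrete type per instantiation,
# e.g. MyTypeP{UTF8String} or MyTypeP{UTF16String}.
type MyTypeP{S<:String}
    s::S
end

a = MyTypeP("Palli")         # inferred as MyTypeP{ASCIIString}
```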




B.
* I see the former is nicer. I was going to test with @code_native 
myself (but only on 0.3), but with the latter type:


b=MyType2("Palli")
ERROR: `convert` has no method matching 
convert(::Type{Union(UTF16String,UTF8String)}, ::ASCIIString)

 in MyType2 at no file

julia> b=MyType2(UTF8String("Palli"))
ERROR: `convert` has no method matching convert(::Type{Array{Uint8,1}}, 
::ASCIIString)

you may have intended to import Base.convert
 in UTF8String at no file


I guess you just need a constructor. I also wonder if that could be 
automatically generated for you.
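
Something like this, I suppose (a sketch in 0.3 syntax; utf8() is the conversion function there):

```julia
type MyType2
    s::Union(UTF8String, UTF16String)
end

# Outer constructor: convert any other string type (e.g. ASCIIString)
# to UTF-8 before storing it. Arguments already matching the Union hit
# the default constructor, which is more specific, so no recursion.
MyType2(s::String) = MyType2(utf8(s))

b = MyType2("Palli")         # now works; stores a UTF8String
```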


--
Palli.


Re: [julia-users] Re: transparent color in gadfly

2015-09-24 Thread Tim Holy
I'd recommend filing an issue with Gadfly containing a complete, stand-alone 
example. (Or tackling it yourself, it may not be hard---see the changes in the 
PR I linked to.)
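
For instance, a stand-alone reproduction might look like this (a sketch assembled from the calls in the report; exact Gadfly/Colors usage may differ by version):

```julia
using Gadfly, Colors

# Works: a transparent default color via the theme.
plot(x = 1:10, y = rand(10),
     Theme(default_color = RGBA(1.0, 0.0, 0.0, 0.3)))

# Reported to overflow the stack: a transparent color in a manual key.
plot(x = 1:10, y = rand(10),
     Guide.manual_color_key("legend", ["label"], [RGBA(1.0, 0.0, 0.0, 0.3)]))
```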

Best,
--Tim

On Wednesday, September 23, 2015 07:47:24 PM Li Zhang wrote:
> Hi Tim,
> 
> I Checked out the Compose and Gadfly, and set transparent color works in
> ```
> theme(default_color=RGBA(1.0,0.0,0.0,0.3))
> ```
> but stack overflowed when set transparent color in
> ```
> Guide.manual_color_key("legend",["label"],[RGBA(1.0,0.0,0.0,0.3)]), and
> Scale.color_discrete_manual(RGBA(1.0,0.0,0.0,0.3))
> ```
> 
> stack overflow
> while loading In[45], in expression starting on line 2
> 
>  in vcat at abstractarray.jl:517
>  in to_vec at /home/alex/.julia/v0.3/Compose/src/Compose.jl:52
>  in parse_colorant_vec at /home/alex/.julia/v0.3/Compose/src/Compose.jl:51
>  in ManualColorKey at /home/alex/.julia/v0.3/Gadfly/src/guide.jl:490
> (repeats 34898 times)
> On Wednesday, September 23, 2015 at 9:39:30 PM UTC-4, Li Zhang wrote:
> > Hi there,
> > 
> >  i found no way to get transparent colors in gadfly, anyone have done
> > 
> > this, or a issue should be filed?



[julia-users] Re: [ANN] ShaderToy.jl

2015-09-24 Thread Simon Danisch
Well, Julia is not directly at work here, and it's not really about ray 
tracing ;) You need to write the shader in GLSL, which is more C-like.
That said, I'm pretty sure that the ray-tracing examples are faster than 
most of the ray-tracing examples listed in the links.
It runs in real time even on on-board GPUs. The algorithms allow only 
very limited ray tracing, but work nicely on the GPU.

Am Donnerstag, 24. September 2015 01:21:16 UTC+2 schrieb Simon Danisch:
>
> Hi,
> you want to try out GPU accelerated ray tracing? You want some quick and 
> easy start for GPU accelerated fractal rendering?
> You can do this quite easily now!
> ShaderToy  allows you to 
> only specify a fragment shader, which is an OpenGL program that can execute 
> arbitrary code per pixel (fragment).
> It's based on GLVisualize and is basically the Julia-native version of: 
> https://www.shadertoy.com/
> I copied a few examples to get you started. Just click on the gifs in the 
> README to see the fragment shader that produced the image.
> The installation is still a little bit wonky, but should mostly work if 
> the script executes without error.
> If it doesn't work, please open an issue. This will help me to make the 
> release of GLPlot and GLVisualize a lot smoother!
>
> Best,
> Simon
>


[julia-users] Why Julia completions fails to complete unicode when I use sublime text?

2015-09-24 Thread Roger Luo
It always fails to complete when I'm typing on a line that is not the 
last line.
If it is the last line of the program, the completion works.


Re: [julia-users] @code_warntype and for loops

2015-09-24 Thread Mauro
>> This is the lowered and typed abstract syntax tree that you're seeing,
>> so two steps removed from what you've typed already (and another two
>> steps to go to get to machine code).  Thus it gets more verbose.  I
>> guess it would be nice to translate this typed code back to what you
>> wrote but with type annotations and display that.  But that is not
>> possible (yet?).  Have you seen this short and sweet JuliaCon
>> presentation by Jacob:
>>
>>
>> https://www.youtube.com/watch?v=RYZkHudRTvI&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=16
>>
>
> Yes, you worded it better than I could: whether it is possible to get a sort
> of breadth-first view of the code. From the nice (and too short; thanks for
> the link) presentation, it seems that this is not possible. Is there any
> hope that the "yet?" will become reality? I understand, though, that this
> may be a request from somebody unable to read the output of
> code_typed quickly.

If I understand correctly, this issue is requesting this feature
https://github.com/jakebolewski/JuliaParser.jl/issues/22 . By the sounds
of it, it's not going to happen tomorrow but maybe the day after.


Re: [julia-users] @code_warntype and for loops

2015-09-24 Thread Michele Zaffalon


On Thursday, September 24, 2015 at 9:21:47 AM UTC+2, Mauro wrote:
>
> >> This is the lowered and typed abstract syntax tree that you're seeing, 
> >> so two steps removed from what you've typed already (and another two 
> >> steps to go to get to machine code).  Thus it gets more verbose.  I 
> >> guess it would be nice to translate this typed code back to what you 
> >> wrote but with type annotations and display that.  But that is not 
> >> possible (yet?).  Have you seen this short and sweet JuliaCon 
> >> presentation by Jacob: 
> >> 
> >> 
> >> 
> https://www.youtube.com/watch?v=RYZkHudRTvI&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=16
>  
> >> 
> > 
> > Yes, you worded it better than I could: whether it is possible to get a sort 
> > of breadth-first view of the code. From the nice (and too short; thanks for 
> > the link) presentation, it seems that this is not possible. Is there any 
> > hope that the "yet?" will become reality? I understand, though, that this 
> > may be a request from somebody unable to read the output of 
> > code_typed quickly. 
>
> If I understand correctly, this issue is requesting this feature 
> https://github.com/jakebolewski/JuliaParser.jl/issues/22 . By the sounds 
> of it, it's not going to happen tomorrow but maybe the day after. 
>

Just wow. 


Re: [julia-users] @code_warntype and for loops

2015-09-24 Thread Michele Zaffalon


On Wednesday, September 23, 2015 at 11:06:51 AM UTC+2, Mauro wrote:
>
> > Thank you, Kristoffer. I have read the manual and your post about not 
> > getting carried away by the red == bad assumption 
> > <
> https://groups.google.com/forum/#!searchin/julia-users/@code_warntype$20red$20bad/julia-users/g9O9Ik5OAJA/uSGNDyCDEuEJ>.
>  
>
> > Yet, for a for loop, one would not expect so much output. 
>
> This is the lowered and typed abstract syntax tree that you're seeing, 
> so two steps removed from what you've typed already (and another two 
> steps to go to get to machine code).  Thus it gets more verbose.  I 
> guess it would be nice to translate this typed code back to what you 
> wrote but with type annotations and display that.  But that is not 
> possible (yet?).  Have you seen this short and sweet JuliaCon 
> presentation by Jacob: 
>
>
> https://www.youtube.com/watch?v=RYZkHudRTvI&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=16
>  
>

Yes, you worded it better than I could: whether it is possible to get a sort 
of breadth-first view of the code. From the nice (and too short; thanks for 
the link) presentation, it seems that this is not possible. Is there any 
hope that the "yet?" will become reality? I understand, though, that this 
may be a request from somebody unable to read the output of 
code_typed quickly.


Re: [julia-users] [ANN] Plots 0.2.0

2015-09-24 Thread Tom Breloff
Yes, that's an important point, and one that may be Plots' most important
use. For me, I want to write complex plotting/visualizations for
OnlineStats and OnlineAI, and it's too restrictive to assume that a user
will have Gadfly, or Qwt, or Winston installed.  (Right now, Plots only
depends on Colors and Reexport... everything else is imported dynamically
as needed... even DataFrames.)  So I either have to force huge dependencies
on them or accept that some people can't ever use my visualizations. Now
with Plots they only need any backend that supports the plot type, and the
user can choose whichever backend they like, so package authors may be more
inclined to expose plotting recipes for their problem domain.
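
To illustrate, the intended usage is along these lines (a sketch; check the README for the exact backend-switching call in this release):

```julia
using Plots

# Switch backends without touching the plotting code itself.
# (The exact spelling of the switch has varied between versions,
# e.g. gadfly() or plotter!(:gadfly); see the README.)
gadfly()

plot(rand(10, 2), title = "same code, any backend")
```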

On Thursday, September 24, 2015, Kristoffer Carlsson 
wrote:

> It's a really nice package when you write your own module that does some
> plotting but you don't want to enforce a certain backend on the user.
>
>
> On Thursday, September 24, 2015 at 4:04:50 AM UTC+2, Tom Breloff wrote:
>>
>> I'm happy to announce a new tagged release of
>> https://github.com/tbreloff/Plots.jl, a plotting package that is simple
>> and easy to use, and which I hope will one day offer the union (not an
>> intersection) of functionality between underlying backends.  This is still
>> a work in progress, but it's possible to create many varied plots already.
>> Check out the recently expanded readme and linked examples to see more.  I
>> currently support 6 different plotting backends to varying degrees, all of
>> which use exactly the same plotting commands:
>>
>>
>>- Gadfly
>>- Immerse
>>- PyPlot
>>- Winston
>>- UnicodePlots
>>- Qwt
>>
>> Immerse is the new package from Tim Holy which allows for viewing Gadfly
>> plots in a Gtk gui window, with some cool functionality to zoom/pan and
>> select points for numerical analysis... it is my recommendation for the
>> best backend, and it is (along with Gadfly) the best supported.  I also
>> appreciate UnicodePlots (REPL only) for the minimal dependencies... for
>> quick plots it's pretty cool.
>>
>> Some of the big improvements coming down the pipeline are support for
>> partitioning incoming data, various datapoint-specific settings (such as
>> varying size/color of individual points), and rounding out the standard
>> library of graph manipulation (specialized axes, scales, etc).  If Plots
>> covers most of your needs right now except for a few tweaks, you can always
>> access the underlying plot objects for the backend of your choice and
>> update directly.  (however please ask questions and request features so
>> that I know what to prioritize, and of course report any bugs that you
>> find).
>>
>> In the longer term I plan to include some recipes and example IJulia
>> notebooks for creating more complicated plots, as well as other types of
>> visualizations (graphs, 3D) and interactivity.
>>
>> Feedback and wishlists are appreciated!  Happy plotting.
>>
>> Tom
>>
>


Re: [julia-users] Matrix multiplication: "_unsafe_getindex" and "_unsafe_batchsetindex"

2015-09-24 Thread Benjamin Born
Thanks for the clarification, Tim! My concern was that the "unsafe" calls 
signalled a problem with the bounds.

Am Donnerstag, 24. September 2015 02:51:16 UTC+2 schrieb Tim Holy:
>
> Welcome to Julia! 
>
> Can you clarify the precise nature of your concern? I'm not sure I see a 
> problem. To make sure you're interpreting this right, it's indicating that 
> 741/839 = 88% of your execution time is in gemm!, which is the core 
> routine 
> for matrix-matrix multiplication. 
>
> The "unsafe" calls refer to the fact that these functions assume that the 
> indexes are in bounds...which is fine if the algorithm is guaranteed to 
> stay 
> in-bounds, or if a previous function already checked to make sure that 
> everything is OK. In other words, these are just internal functions that 
> get 
> called in the course of multiplication. 
>
> Best, 
> --Tim 
>
> On Wednesday, September 23, 2015 06:33:55 AM Benjamin Born wrote: 
> > Hey everybody, 
> > 
> > I'm a Julia beginner slowly transitioning from Matlab to Julia (0.4 RC2 
> on 
> > Win 7 64bit) and I haven't been able to figure out the following 
> problem. 
> > Profiling my code I have received the following message for a line of my 
> > code that multiplies three matrices D=A*B*C: 
> > 
> >  
> > 
> > 839  ...on\run_bbeg_check.jl; vfi_smart; line: 284 
> >  5   cartesian.jl; _unsafe_batchsetindex!; line: 34 
> >  4   multidimensional.jl; _unsafe_batchsetindex!; line: 322 
> >   4 operators.jl; setindex_shape_check; line: 256 
> >  11  multidimensional.jl; _unsafe_batchsetindex!; line: 328 
> >  5   multidimensional.jl; _unsafe_batchsetindex!; line: 329 
> >  15  multidimensional.jl; _unsafe_getindex; line: 193 
> >  42  multidimensional.jl; _unsafe_getindex; line: 195 
> >  757 operators.jl; *; line: 103 
> >   1   linalg/matmul.jl; gemm_wrapper!; line: 321 
> >   1   linalg/matmul.jl; gemm_wrapper!; line: 327 
> >1 abstractarray.jl; stride; line: 80 
> >   741 linalg/matmul.jl; gemm_wrapper!; line: 328 
> >741 linalg/blas.jl; gemm!; line: 632 
> > 
> >  
> > 
> > My code runs and I also get the same results as in Matlab but I would 
> still 
> > like to know whether I do something wrong or inefficient. 
> > 
> > Unfortunately I could not exactly reproduce the message with a 
> simplified 
> > example code but the following code at least produces the 
> > "_unsafe_getindex" message: 
> > 
> > function matmult_test() 
> > A=rand(20,20,5) 
> > B=rand(20,700,5) 
> > C=eye(700,700) 
> > D=zeros(20,700,5); 
> > for ii=1:5 
> > D[:,:,ii] = A[:,:,ii]*B[:,:,ii]*C; 
> > end 
> > end 
> > matmult_test() 
> > Profile.clear() 
> > @profile matmult_test() 
> > Profile.print() 
> > 
> > Thanks for your help, 
> > 
> > Benjamin 
>
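
Since 88% of the time is already inside gemm!, the visible _unsafe_getindex/_unsafe_batchsetindex! entries are just the slice copies and the D[:,:,ii] assignment. On 0.4 those can be avoided with views and in-place multiplication; a sketch (my suggestion, untested against the original code):

```julia
function matmult_test_inplace()
    A = rand(20, 20, 5)
    B = rand(20, 700, 5)
    C = eye(700, 700)
    D = zeros(20, 700, 5)
    tmp = zeros(20, 700)          # reusable buffer for A[:,:,ii]*B[:,:,ii]
    for ii = 1:5
        # slice() makes views instead of copies (Julia 0.4), and
        # A_mul_B! writes the product into its first argument.
        A_mul_B!(tmp, slice(A, :, :, ii), slice(B, :, :, ii))
        A_mul_B!(slice(D, :, :, ii), tmp, C)
    end
    return D
end
```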
>

Re: [julia-users] Re: Why Julia completions fails to complete unicode when I use sublime text?

2015-09-24 Thread Roger Luo
So, any idea how to fix it?

2015-09-24 14:52 GMT+08:00 Tomas Lycken :

> It could be that some feature or plugin in Sublime is hogging the key
> sequences, or that you have the completion option turned off (with a
> possible bug for the last-line thing). For example, you wouldn't want that
> type of completion to happen when typing a LaTeX document, so the editor
> has to have the "last say" on whether it works or not.
>
> // T
>
> On Thursday, September 24, 2015 at 8:02:47 AM UTC+2, Roger Luo wrote:
>>
>> It always fails to complete when I'm typing on a line that is not the
>> last line.
>> If it is the last line of the program, the completion works.
>>
>


[julia-users] Re: Why Julia completions fails to complete unicode when I use sublime text?

2015-09-24 Thread Tomas Lycken
It could be that some feature or plugin in Sublime is hogging the key 
sequences, or that you have the completion option turned off (with a 
possible bug for the last-line thing). For example, you wouldn't want that 
type of completion to happen when typing a LaTeX document, so the editor 
has to have the "last say" on whether it works or not.

// T

On Thursday, September 24, 2015 at 8:02:47 AM UTC+2, Roger Luo wrote:
>
> It always fails to complete when I'm typing on a line that is not the 
> last line.
> If it is the last line of the program, the completion works.
>


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Marcio Sales
Wow. All this discussion to make Julia only *as fast as* the old scripting 
languages? I gotta say that worried me a bit. What do you do when there's 
no code to compare? How will you know that it was really a good idea 
switching from Matlab/Python to Julia? 

Considering what the developers proudly advertise about performance (which I 
think is why most people would even consider switching to it), shouldn't 
the language be designed to put the user in the most performant 
direction most of the time? Matlab does a good job of that, with fewer but 
simplified and efficient data structures, support for vectorized code, etc. 
In my short experience with Julia, it seems there are a lot of ways to 
do the same thing, some of which are very bad in terms of performance, like the 
original code in this post. If Julia can't easily be faster and less 
verbose than R, for example, we could just forget about it...



 




Re: [julia-users] Re: What are the "strengths" of the Julia "ecosystem" (e.g. for bio and quants/finance)? This is not a very specific question (while sub-questions are), you've been warned..

2015-09-24 Thread Jonathan Malmaud
I agree with all that - there isn't a web framework for Julia that is at
the level of something like Django or RoR. It seems totally reasonable to use
those mature tools for the frontend of your webapp, which could in turn
communicate with a Julia backend.

I just meant that some of the lower levels of the stack, like an
implementation of the full HTTP spec, proper handling of Unicode and
binary data at the HTTP level, and solid SSL support, are good now.
HttpServer.jl includes examples of setting up HTTPS.
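
For reference, the kind of minimal server this refers to (adapted from the usual HttpServer.jl example; details may differ across versions):

```julia
using HttpServer

# A handler that answers every request with a fixed body.
http = HttpHandler() do req::Request, res::Response
    Response("Hello from a Julia backend")
end

server = Server(http)
run(server, 8000)            # serve on http://localhost:8000/
```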

On Thu, Sep 24, 2015 at 9:16 AM, Andrei Zh 
wrote:

> It's great that the webstack has seen many improvements recently, but as far as
> I can see even more work remains to be done. E.g. the 2 kinds of web
> apps that I need most often are web UIs and high-performance web services.
> I'm not sure about performance (last time I tested HttpServer it was quite
> moderate, but maybe that has changed already), but for web UIs I miss at
> least the following features (taking Mux as the basis):
>
>  - template engine: Mustache.jl can probably be used, but so far Google
> knows about zero common occurrences of Mustache.jl and Mux.jl except for
> very general lists of frameworks
>  - serving static files: possible to do in pure Julia, of course, but it's
> another several hours to implement
>  - sessions: middleware is exactly for this kind of things, but again,
> it's better to have it out of the box, than write everything yourself
>  - authentication and security: how to set up HTTPS? how to restrict
> access to certain pages?
>  - stability: I've just learned that Morsel.jl is now deprecated; if I had
> applications using it, I would need to migrate them now, and I'm really not
> sure Mux.jl won't be deprecated during the next year too
>
> This means that if you want to provide users with a nice interface to your
> Julia application, you should either spend a couple of days adding missing
> stuff (and probably not in way suitable for other users) or just use, say,
> Python and do the job in a couple of hours.
>
>
> On Thursday, September 24, 2015 at 2:35:57 AM UTC+3, Jonathan Malmaud
> wrote:
>>
>> The webstack has seen considerable improvement lately. Mux is the most
>> mature and supported webapp framework at this point.
>>
>> On Wednesday, September 23, 2015 at 4:58:01 PM UTC-4, Andrei Zh wrote:
>>>
>>> If you are looking for best-in-class libraries, you probably won't
>>> find many. This is implied by a simple fact that most such libraries had
>>> already been created in other languages by the time Julia was born.
>>> However, if you want something comparable to such best libraries, then I
>>> would stress the following areas (from my experience and highly
>>> subjectively, of course):
>>>
>>>  * image processing (e.g. Images.jl, ImageView.jl), which still changes,
>>> but has quite impressive functionality already
>>>  * deep learning (e.g. Mocha.jl, Strada.jl, Boltzmann.jl) - fast, fully
>>> functional libraries that are easy to use and modify (compare to frameworks in C++
>>> or Theano, for example)
>>>  * concurrent, parallel and distributed programming (core Julia) - far
>>> ahead of Python or R, probably comparable with Erlang
>>>  * GPU computing (see JuliaGPU organization) - pretty convenient,
>>> especially combined with Julia's compilation to native code
>>>  * symbolic computation and metaprogramming (macros, Calculus.jl) - like Lisp with
>>> infix notation or SymPy, built into the language
>>>
>>> I also expect that Julia will become more popular with the development of
>>> new areas for which there are no good libraries at all, where Julia may become
>>> the perfect solution. At the same time, to keep people involved, we not only
>>> need to add more strengths, but also remove weaknesses. And Julia's web
>>> stack seems to be one of the biggest weaknesses, so if you are interested
>>> and wish to contribute, please, do it.
>>>
>>>
>>> On Wednesday, September 23, 2015 at 5:03:02 PM UTC+3, Páll Haraldsson
>>> wrote:

 On Wednesday, September 23, 2015 at 12:35:47 PM UTC, Randy Zwitch wrote:
>
> Julia is as capable as any of the languages you have mentioned as far
> as I'm concerned. When I read "people want to get work done", I read that
> as "people want SOMEONE ELSE to do the work".
>

 And you would be absolutely right. I tried to phrase the question in a
 positive way with "and help needed?" [For me, that would be mostly non-math
 stuff*, and I've submitted some trivial/beginner.. fixes.]  I'm ok with
 that as I am just tinkering. Imagine Julia had no libraries, as at first;
 even then I would have been as excited about the language. It is a language that
 makes me think differently and try new paradigms I haven't tried before
 (multiple dispatch).

 I might have tried to build a website (and web server from scratch).
 Some people do not want to be early adopters. I can understand that. I'm
 not so sure you would be by now. I'm asking about the 

Re: [julia-users] Re: Documentation of operators

2015-09-24 Thread Michael Hatherly


expanding the list to include things like ‘..’

There’s a brief mention of .. syntax in the manual, 
http://docs.julialang.org/en/latest/manual/modules/#relative-and-absolute-module-paths,
 
but adding something about it
to the docstring for using or import would probably be worth doing.

neither '..' nor '->' are in the list

Presumably you’d want help?> -> to display some docs? If so then they would 
need to be added to 
https://github.com/JuliaLang/julia/blob/master/base/docs/basedocs.jl in the 
same manner as the keywords in that file are documented, something like:

keywords[:->] = doc"""
(args...) -> ...

`->` is syntax used to declare an anonymous function.
"""
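As a quick illustration of what `->` does (an example of mine, not taken from basedocs.jl):

```julia
# `->` creates an anonymous function; here it extracts the second element
# of each tuple, as in the original question's `x -> x[2]`.
pairs = [(1, "a"), (2, "b"), (3, "c")]
seconds = map(x -> x[2], pairs)   # ["a", "b", "c"]
```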

The docs for using and import are also in that same file if you’re wanting 
to add details about .. to those.
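And a small self-contained sketch of what a relative path like `..Sort` means (the module names here are made up for illustration):

```julia
module Outer
    module Sort                 # a sibling module, named like Base's Sort
        mysorted(v) = issorted(v)
    end
    module Inner
        import ..Sort           # `..` walks up to Outer, then into Sort
        check(v) = Sort.mysorted(v)
    end
end

Outer.Inner.check([1, 2, 3])    # true
```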

coming up with a brief synopsis for all

Most objects that aren’t documented have some very basic automatically 
generated summary info, but could always use improvement if you’ve got 
ideas.

— Mike
On Thursday, 24 September 2015 13:07:54 UTC+2, Alex Copeland wrote:
>
> Thank you.  I struggled a bit with how to phrase my question. 'operator', 
> it turns out, wasn't the best choice. Maybe symbols would have been better? 
>
> A table of operators and other symbols would be very useful. Maybe a good 
> project for a newbie like myself. 
>
> To leave something in this thread which might be useful for future 
> searchers, I'll briefly mention that I discovered julia's help mode, if fed 
> an empty tab complete, prints a huge list of items including a long list of 
> symbols (unicode too) with documentation.   The list isn't complete, 
> though, as neither '..' nor '->' is in the list, and some have missing 
> documentation. 
>
> Providing aliases for some of the unicode items, expanding the list to 
> include things like '..',  and coming up with a brief synopsis for all 
> might be even better project for me than constructing a table of operators. 
> I'll start a new thread if I get anywhere with this.
>
>
> help?> ⊈
> search: ⊈
>
>   No documentation found.
>
>   Base.⊈ is a generic Function.
>
>   # 1 method for generic function "⊈":
>   ⊈(l::Set{T}, r::Set{T}) at set.jl:104
>
>
>
>
> On Wednesday, September 23, 2015 at 5:07:06 PM UTC-7, Erik Schnetter wrote:
>>
>> Yes, operators are difficult to search for. There should be a table of 
>> them...
>>
>> The two dots .. are not an operator; they specify an access path to a 
>> module. See 
>>
>> The arrow -> defines an anonymous function. See <
>> http://docs.julialang.org/en/release-0.3/manual/functions/>. Here, this 
>> function accesses the second element of an array (or tuple or other 
>> collection).
>>
>> -erik
>>
>> On Wed, Sep 23, 2015 at 7:47 PM, Alex Copeland  wrote:
>>
>>> Update from OP: Not sure it matters, but on the off chance the syntax is 
>>> new,  I'm running
>>>
>>> julia> versioninfo()
>>> Julia Version 0.5.0-dev+318
>>> Commit 3b189b9* (2015-09-22 15:28 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin12.4.0)
>>>   CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>>
>>>
>>> On Wednesday, September 23, 2015 at 3:32:57 PM UTC-7, Alex Copeland 
>>> wrote:



 Hi,

 Can someone point me to the documentation for '..'  and '->'  as in 
  'include ..Sort'  and x -> x[2] . I've dug around in the source and in 
 readthedocs but patterns like this are the devil to search for unless they 
 have a text alias (that you happen to know). 

 Thanks,
 Alex

>>>
>>
>>
>> -- 
>> Erik Schnetter  
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Christof Stocker
I have been spending the past weeks trying to really understand how to 
implement efficient code.


As far as I can tell (from first hand experience), Julia really does 
give you a prominent edge over R and Matlab in terms of performance. 
However, I also think that there are currently a lot of ways to shoot 
yourself in the foot (accidental type instability and memory allocation 
are the most prominent of those in my experience).


So currently I don't think that Julia allows you to naively write 
"simple and efficient code" if you are not used to the language. In my 
opinion, you really do need to get to know the language before simple 
code becomes efficient as well.
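As a hedged illustration of the kind of accidental type instability meant here (a toy example of mine, not from the thread):

```julia
# `s` starts as an Int but becomes a Float64 inside the loop, so the
# compiler must track a union of types; @code_warntype flags this.
function unstable_sum(n)
    s = 0
    for i in 1:n
        s += i / 2
    end
    return s
end

# Initializing with the final element type keeps the loop type-stable.
function stable_sum(n)
    s = 0.0
    for i in 1:n
        s += i / 2
    end
    return s
end
# Both return the same value; only the stable one compiles to tight code.
```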


Given that we are not at version 1.0 yet, I don't mind that at all. I 
see it like this: If you know what to look out for, the code is usually 
pretty competitive with C (about a factor of 2 to 3 currently for my 
code). I would rather have the developers focus on improving the potential 
performance than on making it "idiot proof" just yet. I think the latter 
is best tackled after the more pressing issues are taken care of.


That being said, I do think one should be careful of how Julia is 
advertised by word of mouth. The comparison to Matlab, which has a very 
similar syntax but (I think) a very different way of efficient coding, 
can be a red herring and give rise to false expectations. I don't think 
you can currently just copy and paste Matlab code and expect it to be 
faster.


On 2015-09-24 14:17, Marcio Sales wrote:
Wow. All this discussion to make Julia only *as fast as* the old 
scripting languages? I gotta say that worried me a bit. What do you do 
when there's no code to compare? How will you know that it was really 
a good idea switching from Matlab/Python to Julia?


Considering what the developers proudly advertise about performance 
(which I think is why most people would even consider switching to it), 
shouldn't the language be designed so as to steer the user in the most 
performant direction most of the time? Matlab does a good job on that 
with fewer but simplified and efficient data structures, support for 
vectorized code, etc. In my short experience with Julia, it seems that 
there are a lot of ways to do the same thing, some of which are very bad 
in terms of performance, like the original code in this post. If Julia 
can't be easily faster and less verbose than R, for example, we could 
just forget about it...




Re: [julia-users] Juno for Julia 0.4.0

2015-09-24 Thread Isaiah Norton
>
> Also, are the developers considering incorporating an IDE, so one is
> guaranteed to work with every new release?
>

The long-term plan is to create a bundle package, along the lines of
Anaconda or PythonXY, with one editor and several major packages +
dependencies included. There is a lot of other, higher-priority work to do
first though, and contributions are always welcome in every area.

On Thu, Sep 24, 2015 at 8:33 AM, Marcio Sales 
wrote:

> Hi All,
>
> I've tried some IDEs and found all of them have at least one annoying bug
> or problem. Juno lacks a console to type commands, but at least it has all
> other things I wanted working (code completion, docs, code evaluation).
> Atom with julia-autocomplete and julia-console (or whatever they're called)
> is almost perfect but has bad code completion, and I couldn't find a way to
> invoke docs easily, like Juno does with ctrl+d. Then so far, Juno ftw!
>
> However, Juno is broken after 0.4.0, with ppl reporting different
> problems. Will a new bundle with 0.4.0 be released soon?
>
> Also, are the developers considering incorporating an IDE, so one is
> guaranteed to work with every new release? Please, don't mention the REPL...
>
> Peace,
> M.
>
>


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Simon Danisch
There is no question that Julia needs more work. This applies to offering 
speedy primitives and also doing more optimizations.
But I think you get one thing wrong.
The magic lies in the fact that in Julia you have the chance to write the 
vectorized implementations offered by languages like R or Matlab in 
Julia itself.
In that case Julia still offers much more comfort than the usual suspects 
(C/C++/Fortran). It's not only more comfortable but also makes the 
high-performance code easier to maintain.
And in theory, that gives you more room for optimizations, since the 
compiler does not have to optimize across language boundaries.

It also means that someone like you or me can go out there, write 
some functions that incorporate the speed improvements achieved in this 
thread, and offer them to other people so they can write higher-level, fast 
code in the end.
That's basically what the Matlab team does (in C++ and Java) and they have 
had much more time and resources to do this.
So no, just because Matlab already has some speedy, optimized primitives 
written in C++ and better loop fusion, which Julia still doesn't have, 
Julia does not become irrelevant.
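A minimal sketch of that point: a Matlab-style element-wise primitive written as a plain Julia loop (the function name is illustrative):

```julia
# y <- a*x + y written by hand; in Julia a loop like this can match
# the performance of a built-in vectorized primitive.
function axpy!(y, a, x)
    @inbounds for i in eachindex(x, y)
        y[i] = a * x[i] + y[i]
    end
    return y
end

y = [1.0, 2.0, 3.0]
axpy!(y, 2.0, [1.0, 1.0, 1.0])   # y is now [3.0, 4.0, 5.0]
```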



Am Samstag, 19. September 2015 19:50:50 UTC+2 schrieb Adam:
>
> Hi, I'm a novice to Julia but have heard promising things and wanted to 
> see if the language can help with a problem I'm working on. I have some 
> Matlab code with some user-defined functions that runs a simulation in 
> about ~1.4 seconds in Matlab (for a certain set of parameters that 
> characterize a small problem instance). I translated the code into Julia, 
> and to my surprise the Julia code runs 5x to 30x slower than the Matlab 
> code. I'll be running this code on much larger problem instances many, many 
> times (within some other loops), so performance is important here. 
>
> I created a GitHub gist that contains a stripped-down version of the Julia 
> code that gets as close to (as I can find) the culprit of the problem. The 
> gist is here: https://gist.github.com/anonymous/010bcbda091381b0de9e. A 
> quick description: 
>
>- set_up.jl sets up parameters and other items.
>- set_up_sim.jl sets up items particular to the simulation.
>- simulation.jl runs the simulation.
>- calc_net.jl, dist_o_i.jl, and update_w.jl are user-defined functions 
>executed in the simulation. 
>
>
> On my laptop (running in Juno with Julia version 0.3.10), this code yields:
> elapsed time: 43.269609577 seconds (20297989440 bytes allocated, 38.77% gc 
> time)
> elapsed time: 38.500054653 seconds (20291872804 bytes allocated, 40.41% gc 
> time)
> elapsed time: 40.238907235 seconds (20291869252 bytes allocated, 39.44% gc 
> time)
>
> Why is so much memory used, and why is so much time spent in garbage 
> collection?
>
> I'm familiar with 
> http://docs.julialang.org/en/release-0.3/manual/performance-tips/ and 
> have tried to follow these tips to the best of my knowledge. One example of 
> what might be seen as low-hanging fruit: I tried removing the type 
> declarations from my functions, but this actually increased the run-time of 
> the code by a few seconds. Also, other permutations of the column orders 
> pertaining to D, T, and I led to slower performance. 
>
> I'm sure there are several issues at play here-- I'm just using Julia for 
> the first time. Any tips would be greatly appreciated. 
>


[julia-users] Juno for Julia 0.4.0

2015-09-24 Thread Marcio Sales
Hi All,

I've tried some IDEs and found all of them have at least one annoying bug 
or problem. Juno lacks a console to type commands, but at least it has all 
other things I wanted working (code completion, docs, code evaluation). 
Atom with julia-autocomplete and julia-console (or whatever they're called) 
is almost perfect but has bad code completion, and I couldn't find a way to 
invoke docs easily, like Juno does with ctrl+d. Then so far, Juno ftw!

However, Juno is broken after 0.4.0, with ppl reporting different problems. 
Will a new bundle with 0.4.0 be released soon?

Also, are the developers considering incorporating an IDE, so one is 
guaranteed to work with every new release? Please, don't mention the REPL...

Peace,
M.
 


Re: [julia-users] Re: Documentation of operators

2015-09-24 Thread Scott T
There's already a table of punctuation 
 which is 
pretty handy, though not complete (it doesn't contain .. or ->), so adding 
those and other symbols would be a great little project!

Scott

On Thursday, 24 September 2015 12:07:54 UTC+1, Alex Copeland wrote:
>
> Thank you.  I struggled a bit with how to phrase my question. 'operator', 
> it turns out, wasn't the best choice. Maybe symbols would have been better? 
>
> A table of operators and other symbols would be very useful. Maybe a good 
> project for a newbie like myself. 
>
> To leave something in this thread which might be useful for future 
> searchers, I'll briefly mention that I discovered julia's help mode, if fed 
> an empty tab complete, prints a huge list of items including a long list of 
> symbols (unicode too) with documentation.   The list isn't complete, 
> though, as neither '..' nor '->' is in the list, and some have missing 
> documentation. 
>
> Providing aliases for some of the unicode items, expanding the list to 
> include things like '..',  and coming up with a brief synopsis for all 
> might be even better project for me than constructing a table of operators. 
> I'll start a new thread if I get anywhere with this.
>
>
> help?> ⊈
> search: ⊈
>
>   No documentation found.
>
>   Base.⊈ is a generic Function.
>
>   # 1 method for generic function "⊈":
>   ⊈(l::Set{T}, r::Set{T}) at set.jl:104
>
>
>
>
> On Wednesday, September 23, 2015 at 5:07:06 PM UTC-7, Erik Schnetter wrote:
>>
>> Yes, operators are difficult to search for. There should be a table of 
>> them...
>>
>> The two dots .. are not an operator; they specify an access path to a 
>> module. See 
>>
>> The arrow -> defines an anonymous function. See <
>> http://docs.julialang.org/en/release-0.3/manual/functions/>. Here, this 
>> function accesses the second element of an array (or tuple or other 
>> collection).
>>
>> -erik
>>
>> On Wed, Sep 23, 2015 at 7:47 PM, Alex Copeland  wrote:
>>
>>> Update from OP: Not sure it matters, but on the off chance the syntax is 
>>> new,  I'm running
>>>
>>> julia> versioninfo()
>>> Julia Version 0.5.0-dev+318
>>> Commit 3b189b9* (2015-09-22 15:28 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin12.4.0)
>>>   CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>>
>>>
>>> On Wednesday, September 23, 2015 at 3:32:57 PM UTC-7, Alex Copeland 
>>> wrote:



 Hi,

 Can someone point me to the documentation for '..'  and '->'  as in 
  'include ..Sort'  and x -> x[2] . I've dug around in the source and in 
 readthedocs but patterns like this are the devil to search for unless they 
 have a text alias (that you happen to know). 

 Thanks,
 Alex

>>>
>>
>>
>> -- 
>> Erik Schnetter  
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>

[julia-users] Re: [ANN] ShaderToy.jl

2015-09-24 Thread Páll Haraldsson
On Wednesday, September 23, 2015 at 11:21:16 PM UTC, Simon Danisch wrote:
>
> Hi,
> you want to try out GPU accelerated ray tracing? You want some quick and 
> easy start for GPU accelerated fractal rendering?
> You can do this quite easily now!
> ShaderToy  allows you to 
> only specify a fragmentshader, which is an OpenGL program which can execute 
> arbitrary code per pixel(fragment).
>

I see now that this seems not to be a wrapper (for, say, some C/C++ code).

I understand there is work that allows compute kernels in pure Julia to be 
compiled for GPUs even with just one extra line of code (I guess a macro, 
@GPU..?). That is for GPGPU; is that very different from allowing Julia code 
to work for this?

-- 
Palli.



[julia-users] Re: [ANN] ShaderToy.jl

2015-09-24 Thread Páll Haraldsson
[My response here is kind of off-topic for Julia, but on graphics..]

Similar to other GPU/OpenGL wrapper stuff for Julia I've seen, is the 
overhead low, as in 0% or low single digits?

On Thursday, September 24, 2015 at 11:04:41 AM UTC, Simon Danisch wrote:
>
> Well, Julia is not directly at work here and its not really about ray 
> tracing ;)
>

Now I'm a little confused.. At first I didn't look too closely, just at the 
pictures..

Are you saying this is just for shaders, and the OpenGL paradigm is still 
used, only allowing you to embed a ray tracer as one option in a shader? Do 
you still have to take care of overdraw and getting the right triangles 
to paint? E.g. not work in screen space?
  

> You need to write the shader in GLSL 
> , which is more 
> C-like.
> That said, I'm pretty sure that the ray tracing examples are faster than 
> most of the ray tracing examples listed in the links.
> It runs even on on-board GPU's in real time.
>

I have a crappy GPU so can't confirm, but find this hard to believe.. E.g. 
here, from this year they say interactive:

http://blogs.nvidia.com/blog/2015/03/19/ray-tracing-death-ray/

E.g. good enough for graphics people, but last cool tech demo I looked at 
showed graphics breaking up and took several frames to get right after the 
movement stopped. If/when you get true real-time ray-tracing (a few years 
back, there were specialized chips/boards/APIs for it) on regular GPUs, 
the older paradigm is dead (it has higher time complexity). Then it is only 
a matter of time until the low end chips can do real-time also. For a time, 
at least, both old style and ray-tracing need to be supported in the 
same chips, as the regular public will not buy specialized chips (or both!) 
or you would have a chicken and egg situation. 

The algorithms allow only for very limited ray tracing, but work nicely on 
> the GPU.
>

Still cool if this is the state of the art, and even with Julia.
-- 
Palli.
 

>
> Am Donnerstag, 24. September 2015 01:21:16 UTC+2 schrieb Simon Danisch:
>>
>> Hi,
>> you want to try out GPU accelerated ray tracing? You want some quick and 
>> easy start for GPU accelerated fractal rendering?
>> You can do this quite easily now!
>> ShaderToy  allows you to 
>> only specify a fragmentshader, which is an OpenGL program which can execute 
>> arbitrary code per pixel(fragment).
>> Its based on GLVisualize and basically the Julia native version of: 
>> https://www.shadertoy.com/
>> I copied a few examples to get you started. Just click on the gifs in the 
>> README to see the fragment shader that produced the image.
>> The installation is still a little bit wonky, but should mostly work if 
>> the script executes without error.
>> If it doesn't work, please open an issue. This will help me to make the 
>> release of GLPlot and GLVisualize a lot smoother!
>>
>> Best,
>> Simon
>>
>

Re: [julia-users] Re: IDE for Julia

2015-09-24 Thread jonathan . bieler
Here's the current state of my Gtk-based IDE prototype. It's very buggy, 
but some basic things work. 

https://gfycat.com/YawningLeftCaterpillar

It seems doable to make a decent IDE this way, although it's a lot of work 
and there are a few challenges. Currently I can't set fonts, there are also 
some problems with getting print commands and errors into the console, and 
I'm not sure how to implement auto-complete in the editor. 
GtkSourceView has a system but the documentation is a bit obscure, and I'm 
not sure it's flexible enough.


[julia-users] Re: What are the "strengths" of the Julia "ecosystem" (e.g. for bio and quants/finance)? This is not a very specific question (while sub-questions are), you've been warned..

2015-09-24 Thread Páll Haraldsson
On Wednesday, September 23, 2015 at 8:58:01 PM UTC, Andrei Zh wrote:
>
> If you are looking for a best in the class libraries, you probably won't 
> find many. This is implied by a simple fact that most such libraries had 
> already been created in other languages by the time Julia was born. 
> However, if you want something comparable to such best libraries, then I 
> would stress the following areas (from my experience and highly 
> subjectively, of course):
>
>  * image processing (e.g. Images.jl, ImageView.jl), which still changes, 
> but has quite impressive functionality already
>  * deep learning (e.g. Mocha.jl, Strada.jl, Boltzmann.jl) - fast, fully 
> functional, easy-to-use and easy-to-modify libraries (compare to frameworks in C++ 
> or Theano, for example)
>  * concurrent, parallel and distributed programming (core Julia) - far 
> behind Python or R, probably comparable with Erlang
>

I find this statement highly surprising.. I wonder if you meant to reverse 
this.. My quant friend who had worked for years in Python had trouble 
parallelizing Python code (may be resolved now..). I'm not familiar with R, 
but Python has the GIL and associated problems. I also thought Erlang was 
best-in-class (for concurrent, not "parallel")..

@Malmaud About "Mux is the most mature and supported webapp framework at 
this point." I should check it out, but I got excited about http://escher-jl.org/

Are you saying that Escher may just be immature at this point? I understood 
it was high/higher level [than Mux], but compatibility between browsers 
(e.g. to Safari) not a given yet. I assume, that is a JavaScript generation 
issue, while Mux doesn't even have that(?) and assume you need to provide 
all your client-side "content" yourself?

@Anthoff "The JuMP package is just phenomenal", yes, that I heard and JuMP 
is what had in mind. And I understand it's pure Julia, while it "currently 
supports a number of open-source and commercial solvers".


Back to "best in the class libraries, you probably won't find many". In 
many cases at least, people do not care if pure Julia libraries are 
available. Often just having libraries for relevant things/wrappers for 
libraries in C/C++ etc., that they can build on is great. From that 
perspective, I'm very optimistic Julia is very usable for all kinds of 
stuff already. For those in the Java/Scala world, I'm less sure about 
reusing that, I know you can with JavaCall.jl, but understand there are 
bugs and limitations to it.

 * GPU computing (see JuliaGPU organization) - pretty convenient, 
> especially combined with Julia's compilation to native code
>  * symbolic and metaprogramming (macros, Calculus.jl) -  like Lisp with 
> infix notation or SymPy, built in the language
>
> I also expect that Julia will become more popular with development of new 
> areas for which there are no good libraries at all and Julia may become 
> perfect solution. At the same time, to keep people involved, we not only 
> need to add more strengths, but also remove weaknesses. And Julia's web 
> stack seems to be one of the biggest weaknesses, so if you are interested 
> and wish to contribute, please, do it. 
>
>
> On Wednesday, September 23, 2015 at 5:03:02 PM UTC+3, Páll Haraldsson 
> wrote:
>>
>> On Wednesday, September 23, 2015 at 12:35:47 PM UTC, Randy Zwitch wrote:
>>>
>>> Julia is as capable as any of the languages you have mentioned as far as 
>>> I'm concerned. When I read "people want to get work done", I read that as 
>>> "people want SOMEONE ELSE to do the work".
>>>
>>
>> And you would be absolutely right. I tried to phrase the question in a 
>> positive way with "and help needed?" [For me, that would be mostly non-math 
>> stuff*, and I've submitted some trivial/beginner.. fixes.]  I'm ok with 
>> that as I am just tinkering. Imagine Julia had no libraries, as at first 
>> then I would have been as excited about the language. It is a language that 
>> makes me think differently and try new paradigms I haven't tried before 
>> (multiple dispatch).
>>
>> I might have tried to build a website (and web server from scratch). Some 
>> people do not want to be early adopters. I can understand that. I'm not so 
>> sure you would be by now. I'm asking about the "ecosystem" not the language 
>> per se. I know about JuliaQuant, BioJulia, GPU stuff in Julia JuliaWeb etc. 
>> I am so grateful for what has already been done with the language - and the 
>> libraries from what I can see. If these were my fields, I think I would 
>> jump on Julia right now.
>>
>> I'm not sure why people are reluctant, I want to tell them you do not 
>> only have basic building blocks (linear algebra/matrix multiplication, FFT 
>> etc. stuff in Base), but also these libraries that (mostly) work, and if 
>> not you can help fix/contribute. I do not want to oversell Julia, so I keep 
>> quiet (mostly) about stuff I'm ignorant about..
>>
>>
>> * I knew about say, Morsel (Sinatra-like), then Mux is recommended over 
>> it. I'm not 

Re: [julia-users] Re: What are the "strengths" of the Julia "ecosystem" (e.g. for bio and quants/finance)? This is not a very specific question (while sub-questions are), you've been warned..

2015-09-24 Thread Páll Haraldsson
On fim 24.sep 2015 13:25, Jonathan Malmaud wrote: 

I agree with all that - there isn't a web framework for Julia that is at 
the level of something like Django or RoR. It seems totally reasonable to use 
those mature tools for the frontend of your webapp, which could in turn 
communicate with a Julia backend. 


Sounds good. I wasn't really sure if using with Python/Django was 
recommended. Python is well supported with PyCall.jl but frameworks call 
you (the "Hollywood principle": "don't call us, we'll call you") unlike 
regular libraries, so I guess you have to mess with callbacks. 

Or you use Python as your main language and call Julia from it with: 

https://github.com/JuliaLang/pyjulia 

[that uses PyCall indirectly, that you have to install first] 


I wonder which is preferred (is pyjulia no longer buggy/inferior to PyCall? 
Much [less] used?) and if you use the second option you end up not using 
Julia much and might never migrate fully to Julia.. [That could be ok 
though.] 


As for Ruby on Rails, you can use that and call Julia with RoR_julia_eg 
(I haven't heard of calling in the other direction). It may not be as good as 
using Django, since Python works in the same address space as Julia and 
allows zero-copy. That may not be too important, though: are (individual) 
web pages that speed-critical (and would they have to share that much data 
between the languages)? 

I just meant that some of the lower levels of the stack, like 
implementation for the full HTTP spec, proper handling of unicode and 
binary data at the HTTP level, solid SSL support, etc is good now. 
HttpServer.jl includes examples of setting up HTTPS. 

On Thu, Sep 24, 2015 at 9:16 AM, Andrei Zh wrote: 

It's great that webstack has got many improvements recently, but as 
far as I can see even more job is still to be done. E.g. for me 2 
kinds of web apps that I need most often are web UIs and 
high-performance web services. I'm not sure about performance (last 
time I tested HttpServer it was quite moderate, 


https://en.wikipedia.org/wiki/Julia_(programming_language)#For_web_use 

"HttpServer.jl, has low latency 0.5 ms and high throughput (latency on the 
same order of Python's Flask and Scala's Spray mature frameworks, and 
throughput also comparable[82])." 

I'm not sure if Julia's web page is missing something with the most 
important web (or other..) libraries. Possibly session management, if it 
exists: 

but maybe it has 
changed already), but for web UIs I miss at least following features 
(taking Mux as the basis): 

  - template engine: Mustache.jl can probably be used, but so far 
Google knows about zero common occurrences of Mustache.jl and Mux.jl 
except for very general lists of frameworks 
  - serving static files: possible to do in pure Julia, of course, 
but it's another several hours to implement 


Wouldn't that be kind of trivial, since Julia already acts as a web server, 
to load files from disk and forward? 

  - sessions: middleware is exactly for this kind of things, but 
again, it's better to have it out of the box, than write everything 
yourself 


I thought I had seen something related to sessions, but may misremember, 
maybe it was to save REPL sessions.. or related to web/IJulia (only things 
I find now..). 

  - authentication and security: how to set up HTTPS? how to 
restrict access to certain pages? 


mbedTLS.jl? A good replacement for GnuTLS.jl, which seems to be on the way out. 

  - stability: I've just knew that Morsel.jl is now deprecated, if I 
had applications using it, I would need to migrate them now, and I'm 
really not sure Mux.jl won't be deprecated during next year too 


I didn't check whether this works the same, or just similarly? Or even not 
that..? Morsel was Sinatra-like, and that seems to be a hot thing and often 
the way to go. Python and others reimplemented Ruby's Sinatra micro 
framework as Flask. I'm not sure a Julia equivalent of Flask would make sense: by 
being "micro", this style of framework is already "complete" in Julia?] 


This means that if you want to provide users with a nice interface 
to your Julia application, you should either spend a couple of days 
adding missing stuff (and probably not in way suitable for other 
users) or just use, say, Python and do the job in a couple of hours. 


-- 
Palli.

P.S.: I replied by e-mail.. And got "Delivery Status Notification 
(Failure)". "You might have spelled or formatted the group name 
incorrectly." Does this work for others from Thunderbird? E-mail works for 
GitHub, that is nice.. and I do not know about Discourse, the proposed new 
forum software..


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Páll Haraldsson
On Thursday, September 24, 2015 at 12:17:25 PM UTC, Marcio Sales wrote:
>
> Wow. All this discussion to make Julia only *as fast as* the old 
> scripting languages?
>

This is not what I meant. Matching C is the goal, as I understand it, and 
often that goal is met. It *should* always be possible.

A. By matching the speed C/C++ (and I guess Fortran that I'm not too 
familiar with) the length of your code shouldn't be longer. It might very 
well be shorter and still not slower.

B. Some other languages (such as Haskell or scripting languages), can be 
very concise. And sometimes there is not much speed penalty. It would be 
ideal to also have (as) short code and not pay a price in speed. That is 
the only thing I meant.



[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Marcio Sales
"Idiot-proof" sounds awesome, but at the least, making it so that the user 
can be aggressive toward the language, rather than the language being 
aggressive toward the newbie, would be a great approach. It would also help 
avoid scaring away potential contributors.
 


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Kristoffer Carlsson
Yeah cause Haskell is so super newbie friendly???

I don't think the developers had the goal with Julia that you should be 
able to write code with as few characters as possible.

On Thursday, September 24, 2015 at 5:54:37 PM UTC+2, Páll Haraldsson wrote:
>
> On Thursday, September 24, 2015 at 12:17:25 PM UTC, Marcio Sales wrote:
>>
>> Wow. All this discussion to make Julia only *as fast as* the old 
>> scripting languages?
>>
>
> This is not what I meant. Matching C is the goal I understand and often 
> that goal is met. *Should* always be possible.
>
> A. By matching the speed C/C++ (and I guess Fortran that I'm not too 
> familiar with) the length of your code shouldn't be longer. It might very 
> well be shorter and still not slower.
>
> B. Some other languages (such as Haskell or scripting languages), can be 
> very concise. And sometimes there is not much speed penalty. It would be 
> ideal to also have (as) short code and not pay a price in speed. That is 
> the only thing I meant.
>
>

[julia-users] Same native code, different performance

2015-09-24 Thread Kristoffer Carlsson
Can someone explain these results to me.

Two functions: 
f(x) = @fastmath cos(x)^3
f_float(x) = @fastmath  cos(x)^3.0


Identical native code:

julia> code_native(f, (Float64,))
.text
Filename: none
Source line: 1
pushq   %rbp
movq%rsp, %rbp
movabsq $cos, %rax
Source line: 1
callq   *%rax
movabsq $140084090479408, %rax  # imm = 0x7F67DE73A330
vmovsd  (%rax), %xmm1
movabsq $pow, %rax
callq   *%rax
popq%rbp
ret

julia> code_native(f_float, (Float64,))
.text
Filename: none
Source line: 1
pushq   %rbp
movq%rsp, %rbp
movabsq $cos, %rax
Source line: 1
callq   *%rax
movabsq $140084090501536, %rax  # imm = 0x7F67DE73F9A0
vmovsd  (%rax), %xmm1
movabsq $pow, %rax
callq   *%rax
popq%rbp
ret

Still a large difference in performance:

function bench(N)
    @time for i = 1:N
        f(π/4)
    end
    @time for i = 1:N
        f_float(π/4)
    end
end

julia> bench(10^6)
  0.062536 seconds
  0.010077 seconds

Secondly, can someone explain why there should be a performance difference 
at all? Is power by a float which is == an int defined differently? IEEE 
shenanigans?
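One candidate explanation (a hedged sketch, not confirmed for this Julia version): `^` with an integer exponent and `^` with a Float64 exponent take different code paths (power-by-squaring / `powi` versus the libm `pow`), so the two definitions can behave differently at runtime even when the printed assembly looks the same:

```julia
# Illustration only: integer and float exponents dispatch to different
# methods of ^, which is one candidate explanation for the timing gap.
g_int(x)   = x^3      # Integer exponent: power-by-squaring / powi path
g_float(x) = x^3.0    # Float64 exponent: calls the libm pow

# Both agree numerically for this input:
g_int(2.0), g_float(2.0)
```

Checking `@which g_int(2.0)` against `@which g_float(2.0)` would show whether the two calls really hit different methods.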


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Tom Breloff
I think julia is very newbie-friendly, except for some very common patterns
that people run into.  How many times do people run into the "globals are
slow" problem, post a long question about why "I thought julia was
fast...", and then we go through the same performance tips.  It would be
nice if there was a "newb mode", in which there were warnings generated for
some of these common problems (or they were fixed automatically).  I think
some of it may be solved with something like Lint, but your typical Matlab
user will not go anywhere near Lint when they're first starting out.  I'm
not sure of the right answer, but it would be nice to have a more obvious way
to automatically fix things like globals that could be declared const, or
maybe even to automatically wrap global code in functions.

Also with regards to how Julia should be pitched to new users... I think
you should always stress: "Julia *can* be faster than most alternatives,
and it takes much less effort and much less code alteration to make it
happen.  If you want to make python fast, you have to rewrite your whole
algo in cython or c, and go through the annoyance of writing code to
*call* that
new code.  If you want to make matlab fast, you have to rewrite your whole
algo in a mex file, and you also have to wrap it.  If you want to make
Julia fast, wrap the same code in a function, maybe with a few type
declarations for safety."  The comparison is stark when you put it in those
terms.
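A minimal sketch of the "globals are slow" pattern mentioned above (function and variable names are hypothetical): the same loop is type-unstable when it reads a non-const global, and the usual fixes are declaring the global `const` or passing it in as an argument:

```julia
# Slow pattern: reading a non-const global inside a hot loop. The
# compiler cannot assume the global's type, so each access is dynamic.
scale = 2.0
function total_global(n)
    s = 0.0
    for i in 1:n
        s += scale * i   # `scale` is an untyped global here
    end
    s
end

# Fast pattern: pass the value in as an argument (or use `const scale`),
# so the loop body is fully typed.
function total_arg(n, scale)
    s = 0.0
    for i in 1:n
        s += scale * i
    end
    s
end
```

Both return the same result; only the second lets the compiler specialize the loop.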

On Thu, Sep 24, 2015 at 12:06 PM, Kristoffer Carlsson  wrote:

> Yeah cause Haskell is so super newbie friendly???
>
> I don't think the developers had the goal with Julia that you should be
> able to write code with as few characters as possible.
>
>
> On Thursday, September 24, 2015 at 5:54:37 PM UTC+2, Páll Haraldsson wrote:
>>
>> On Thursday, September 24, 2015 at 12:17:25 PM UTC, Marcio Sales wrote:
>>>
>>> Wow. All this discussion to make Julia only *as fast as* the old
>>> scripting languages?
>>>
>>
>> This is not what I meant. Matching C is the goal, as I understand it, and often
>> that goal is met. It *should* always be possible.
>>
>> A. By matching the speed C/C++ (and I guess Fortran that I'm not too
>> familiar with) the length of your code shouldn't be longer. It might very
>> well be shorter and still not slower.
>>
>> B. Some other languages (such as Haskell or scripting languages), can be
>> very concise. And sometimes there is not much speed penalty. It would be
>> ideal to also have (as) short code and not pay a price in speed. That is
>> the only thing I meant.
>>
>>


[julia-users] Re: Same native code, different performance

2015-09-24 Thread Simon Danisch
I cannot reproduce this on RC2.
Perhaps inlining fails for f on some Julia versions?

Am Donnerstag, 24. September 2015 18:04:18 UTC+2 schrieb Kristoffer 
Carlsson:
>
> Can someone explain these results to me.
>
> Two functions: 
> f(x) = @fastmath cos(x)^3
> f_float(x) = @fastmath  cos(x)^3.0
>
>
> Identical native code:
>
> julia> code_native(f, (Float64,))
> .text
> Filename: none
> Source line: 1
> pushq   %rbp
> movq%rsp, %rbp
> movabsq $cos, %rax
> Source line: 1
> callq   *%rax
> movabsq $140084090479408, %rax  # imm = 0x7F67DE73A330
> vmovsd  (%rax), %xmm1
> movabsq $pow, %rax
> callq   *%rax
> popq%rbp
> ret
>
> julia> code_native(f_float, (Float64,))
> .text
> Filename: none
> Source line: 1
> pushq   %rbp
> movq%rsp, %rbp
> movabsq $cos, %rax
> Source line: 1
> callq   *%rax
> movabsq $140084090501536, %rax  # imm = 0x7F67DE73F9A0
> vmovsd  (%rax), %xmm1
> movabsq $pow, %rax
> callq   *%rax
> popq%rbp
> ret
>
> Still a large difference in performance:
>
> function bench(N)
>     @time for i = 1:N
>         f(π/4)
>     end
>     @time for i = 1:N
>         f_float(π/4)
>     end
> end
>
> julia> bench(10^6)
>   0.062536 seconds
>   0.010077 seconds
>
> Secondly, can someone explain why there should be a performance difference 
> at all? Is power by a float which is == an int defined differently? IEEE 
> shenanigans?
>


[julia-users] Re: [ANN] ShaderToy.jl

2015-09-24 Thread Simon Danisch
I'm not sure what you're talking about.
ShaderToy.jl just makes it easy to execute an OpenGL shader written in 
GLSL, which then runs directly on the GPU. 
Running an OpenGL shader, animating some variables and displaying the 
results requires some setup, which is the part done in Julia.

Ray tracing just happens to be a widely used technique on shadertoy.com, 
which is why the examples I copied use ray tracing. 
If you go to shadertoy.com, you will see a lot of other stuff.
There are worlds between these simple toy ray tracing implementations and 
Iray from NVIDIA; they're not comparable at all.



Am Donnerstag, 24. September 2015 01:21:16 UTC+2 schrieb Simon Danisch:
>
> Hi,
> you want to try out GPU accelerated ray tracing? You want some quick and 
> easy start for GPU accelerated fractal rendering?
> You can do this quite easily now!
> ShaderToy allows you to specify only a fragment shader, which is an OpenGL 
> program that can execute arbitrary code per pixel (fragment).
> It's based on GLVisualize and is basically the Julia-native version of: 
> https://www.shadertoy.com/
> I copied a few examples to get you started. Just click on the gifs in the 
> README to see the fragment shader that produced the image.
> The installation is still a little bit wonky, but should mostly work if 
> the script executes without error.
> If it doesn't work, please open an issue. This will help me to make the 
> release of GLPlot and GLVisualize a lot smoother!
>
> Best,
> Simon
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Sisyphuss
My argument is indeed purely about performance. However, you won't know whether 
your Julia code is 1.3x slower than C or 130x slower until you write the C 
code.


On Thursday, September 24, 2015 at 7:14:46 PM UTC+2, Tom Breloff wrote:
>
> Unless you are an expert in compilers and the Julia language, you can never know
> whether your code gives you an edge or not
>
>
> Isn't this true of all languages?  How do you know you did that C pointer 
> arithmetic correctly?  Or that python didn't silently clobber your data? 
> This is why integration testing is important... to make sure that 
> everything works together as expected.  Having an implementation in another 
> language only shows that the results match... they could still both be 
> wrong!
>
> If your argument is purely about performance (not correctness), then who 
> cares if julia is 1.3x slower than C... you wrote 100x the functionality in 
> the same development time, which left you time to optimize things you 
> normally wouldn't.
>
> On Thu, Sep 24, 2015 at 1:00 PM, Sisyphuss wrote:
>
>> What do you do when there's no code to compare?
>>>
>> This is a good point! When I write a piece of Julia code, how do I know I 
>> wrote it correctly? Should I write a C version to prove it?
>> This is what I called the risk of writing Julia code. Unless you are an 
>> expert in compilers and the Julia language, you can never know whether your 
>> code gives you an edge or not.  
>>
>> On Thursday, September 24, 2015 at 2:17:25 PM UTC+2, Marcio Sales wrote:
>>>
>>> Wow. All this discussion to make Julia only *as fast as* the old 
>>> scripting languages? I gotta say that worried me a bit. What do you do when 
>>> there's no code to compare? How will you know that it was really a good 
>>> idea switching from Matlab/Python to Julia? 
>>>
>>> Considering what the developers proudly advertise about performance (which 
>>> I think is why most people would even consider changing to it), shouldn't 
>>> the language be designed as to put the user in the best performant 
>>> direction most of the time? Matlab does a good job on that with fewer but 
>>> simplified and efficient data structures, supporting vectorized code etc. 
>>> In my short experience with Julia, it seems that there are a lot of ways to 
>>> do the same thing, some of which are very bad in terms of performance, like the 
>>> original code in this post. If Julia can't easily be faster and less 
>>> verbose than R, for example, we could just forget about it...
>>>
>>>
>>>
>>>  
>>>
>>>
>>>
>

Re: [julia-users] Re: @sprintf with a format string

2015-09-24 Thread Tom Breloff
So if I understand your post correctly, one item on your wishlist is for
the output to change formatting dynamically based on the value of the
floating point number?  There's nothing inherently hard about that... it
could just be a setting for `fmt` which you can switch on or off (and that
could possibly be the default approach, since it's probably a better
default than always printing 3 decimal places).

Don't worry so much about whether you could implement it yourself... now's
the time to put your wishlist out there.  What is your best-case-scenario
for formatting?
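The kind of value-dependent switching discussed here can be sketched in a few lines on top of `@sprintf`. This is not an existing Formatting.jl API: the function name `autofmt` and the magnitude thresholds are hypothetical choices, roughly mimicking Fortran's `print *` behavior:

```julia
using Printf   # stdlib in modern Julia; in the 0.4 era @sprintf was in Base

# Hypothetical sketch: fall back to exponential notation when the
# value's magnitude leaves a "readable" fixed-point range.
function autofmt(x::Real)
    ax = abs(x)
    if x != 0 && (ax < 1e-4 || ax >= 1e7)
        return @sprintf("%.6e", x)   # very small/large: exponential
    else
        return @sprintf("%.6f", x)   # otherwise: plain fixed-point
    end
end
```

For example, `autofmt(1.17e-8)` yields exponential notation while `autofmt(0.2925)` stays fixed-point, which is the behavior Larry describes for Fortran below.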

On Thu, Sep 24, 2015 at 2:39 PM, lawrence dworsky 
wrote:

> Hi Tom
>
> Sorry to take so long to get back to you, I had to go away for a couple of
> days. Thanks for the installation information, @fmt is working fine now.
> It's still not as useful as the Fortran print * formatting however because
> it ​requires the user to know what's coming. For example, the Fortran code
>
> x = -2.34e-12
> do i = 1, 5
>   x = -x*5000.
>   print *, i, x
> end do
>
> produces
>
> 1 1.17E-08
> 2-5.85E-05
> 3 0.292500
> 4 -1462.5
> 5 7.312501e+06
>
> As you can see, print * figured out when exponential notation is necessary
> and automatically used it.
>
> I'm retired now, but when I was working I spent a lot of time writing
> numerical analysis programs for various engineering issues (elastic
> material deformation, electron trajectories, etc.) While a  program was
> being developed I didn't care about the aesthetics of my printout, I just
> needed useful information - and early on, numerical or algebraic or
> programming errors could easily produce results off by 10 orders of
> magnitude!
>
> I think a capability such as this in Julia would be heavily used. I wish I
> had the expertise to write it.
>
> Larry
>
>
>
> On Tue, Sep 22, 2015 at 4:59 PM, Tom Breloff  wrote:
>
>> Sorry I wasn't expecting you to run it... just comment.  You'll have to
>> do:
>>
>> Pkg.rm("Formatting")
>> Pkg.clone("https://github.com/tbreloff/Formatting.jl.git")
>> Pkg.checkout("Formatting", "tom-fmt")
>>
>> Let me know if that works.
>>
>> On Tue, Sep 22, 2015 at 5:52 PM, lawrence dworsky <
>> m...@lawrencedworsky.com> wrote:
>>
>>> I'm afraid my beginner status with Julia is showing:
>>>
>>> I ran Pkg.add("Formatting"), and then   using Formatting   came back
>>> with a whole bunch of warnings, most about Union(args...) being
>>> deprecated, use Union{args...} instead.
>>>
>>> When all is said and done,   fmt_default!  gives me a  UndefVarError.
>>>
>>> Help!
>>>
>>>
>>>
>>> On Tue, Sep 22, 2015 at 2:45 PM, Tom Breloff  wrote:
>>>
 Thanks Larry, that's helpful.  Just for discussions sake, here's a
 quick macro that calls my proposed `fmt` method under the hood, and does
 something similar to what you showed.  What do you think about this style
 (and what would you do differently)?

 using Formatting

 macro fmt(args...)
  expr = Expr(:block)
  expr.args = [:(print(fmt($(esc(arg))), "\t\t")) for arg in args]
  push!(expr.args, :(println()))
  expr
 end


 And then an example usage:

 In:

 x = 1010101
 y = 55.5
 fmt_default!(width=15)

 @fmt x y

 fmt_default!(Int, :commas)
 fmt_default!(Float64, prec=2)

 @fmt x y


 Out:

 1010101  55.56
   1,010,101  55.56



 On Tuesday, September 22, 2015 at 3:08:35 PM UTC-4, lawrence dworsky
 wrote:
>
> Hi Tom
>
> What I like about it is that you can just use print *, dumbly and it
> always provides useful, albeit not beautiful, results. When I'm writing a
> program, I use print statements very liberally to observe what's going on 
> -
> I find this more convenient than an in-line debugger.
>
> As the last line in my program below shows, it's easy to switch to
> formatted output when you want to. The formatting capability is pretty
> thorough, I'm just showing a simple example.
>
> This Fortran program doesn't do anything, it just illustrates what the
> print statement produces:
>
>
> real x, y
> integer i, j
> complex z
> character*6  name
>
> x = 2.6
> y = -4.
> i = 36
> j = -40
> z = cmplx(17., 19.)
> name = 'Larry'
>
> print *, x, y, i, j, z
> print *, 'x = ', x, ' and j = ', j
> print *, 'Hello, ', name, j
> print '(2f8.3, i5)', x, y, j
>
> stop
> end
>
>
> The output is:
>
> 2.6 -4.0   36
> -40  (17., 19.)
> x = 2.6   and j =-40
> Hello, Larry -40
>   2.600   -4.000  -40
>
>
> Is this what you are looking for?
>
> Larry
>
>
>
> On Tue, Sep 

Re: [julia-users] Re: @sprintf with a format string

2015-09-24 Thread Miguel Bazdresch
With this Julia code:

x = -2.34e-12;
for i in 1:5
  x=-x*5000.
  println("$i $x")
end

I get this output:

1 1.17e-8
2 -5.8506e-5
3 0.29254
4 -1462.50002
5 7.3125001e6

I don't think this is too bad. True, the output is a bit longer, but I
actually prefer it, because you can copy/paste the numbers back into Julia
and be sure you're getting the exact number that was used in the code.

-- mb
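The round-trip property mentioned above can be checked directly (a small sketch): Julia's default printing of a Float64 emits enough digits that parsing the printed form recovers the bit-identical value.

```julia
# Julia prints floats with enough digits to round-trip: parsing the
# printed form gives back the exact same Float64.
x = 0.1 + 0.2            # prints as 0.30000000000000004
@assert parse(Float64, string(x)) === x
```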

On Thu, Sep 24, 2015 at 2:39 PM, lawrence dworsky 
wrote:

> Hi Tom
>
> Sorry to take so long to get back to you, I had to go away for a couple of
> days. Thanks for the installation information, @fmt is working fine now.
> It's still not as useful as the Fortran print * formatting however because
> it ​requires the user to know what's coming. For example, the Fortran code
>
> x = -2.34e-12
> do i = 1, 5
>   x = -x*5000.
>   print *, i, x
> end do
>
> produces
>
> 1 1.17E-08
> 2-5.85E-05
> 3 0.292500
> 4 -1462.5
> 5 7.312501e+06
>
> As you can see, print * figured out when exponential notation is necessary
> and automatically used it.
>
> I'm retired now, but when I was working I spent a lot of time writing
> numerical analysis programs for various engineering issues (elastic
> material deformation, electron trajectories, etc.) While a  program was
> being developed I didn't care about the aesthetics of my printout, I just
> needed useful information - and early on, numerical or algebraic or
> programming errors could easily produce results off by 10 orders of
> magnitude!
>
> I think a capability such as this in Julia would be heavily used. I wish I
> had the expertise to write it.
>
> Larry
>
>
>
> On Tue, Sep 22, 2015 at 4:59 PM, Tom Breloff  wrote:
>
>> Sorry I wasn't expecting you to run it... just comment.  You'll have to
>> do:
>>
>> Pkg.rm("Formatting")
>> Pkg.clone("https://github.com/tbreloff/Formatting.jl.git")
>> Pkg.checkout("Formatting", "tom-fmt")
>>
>> Let me know if that works.
>>
>> On Tue, Sep 22, 2015 at 5:52 PM, lawrence dworsky <
>> m...@lawrencedworsky.com> wrote:
>>
>>> I'm afraid my beginner status with Julia is showing:
>>>
>>> I ran Pkg.add("Formatting"), and then   using Formatting   came back
>>> with a whole bunch of warnings, most about Union(args...) being
>>> deprecated, use Union{args...} instead.
>>>
>>> When all is said and done,   fmt_default!  gives me a  UndefVarError.
>>>
>>> Help!
>>>
>>>
>>>
>>> On Tue, Sep 22, 2015 at 2:45 PM, Tom Breloff  wrote:
>>>
 Thanks Larry, that's helpful.  Just for discussions sake, here's a
 quick macro that calls my proposed `fmt` method under the hood, and does
 something similar to what you showed.  What do you think about this style
 (and what would you do differently)?

 using Formatting

 macro fmt(args...)
  expr = Expr(:block)
  expr.args = [:(print(fmt($(esc(arg))), "\t\t")) for arg in args]
  push!(expr.args, :(println()))
  expr
 end


 And then an example usage:

 In:

 x = 1010101
 y = 55.5
 fmt_default!(width=15)

 @fmt x y

 fmt_default!(Int, :commas)
 fmt_default!(Float64, prec=2)

 @fmt x y


 Out:

 1010101  55.56
   1,010,101  55.56



 On Tuesday, September 22, 2015 at 3:08:35 PM UTC-4, lawrence dworsky
 wrote:
>
> Hi Tom
>
> What I like about it is that you can just use print *, dumbly and it
> always provides useful, albeit not beautiful, results. When I'm writing a
> program, I use print statements very liberally to observe what's going on 
> -
> I find this more convenient than an in-line debugger.
>
> As the last line in my program below shows, it's easy to switch to
> formatted output when you want to. The formatting capability is pretty
> thorough, I'm just showing a simple example.
>
> This Fortran program doesn't do anything, it just illustrates what the
> print statement produces:
>
>
> real x, y
> integer i, j
> complex z
> character*6  name
>
> x = 2.6
> y = -4.
> i = 36
> j = -40
> z = cmplx(17., 19.)
> name = 'Larry'
>
> print *, x, y, i, j, z
> print *, 'x = ', x, ' and j = ', j
> print *, 'Hello, ', name, j
> print '(2f8.3, i5)', x, y, j
>
> stop
> end
>
>
> The output is:
>
> 2.6 -4.0   36
> -40  (17., 19.)
> x = 2.6   and j =-40
> Hello, Larry -40
>   2.600   -4.000  -40
>
>
> Is this what you are looking for?
>
> Larry
>
>
>
> On Tue, Sep 22, 2015 at 11:57 AM, Tom Breloff 
> wrote:
>
>> Larry: can you provide details on exactly what you like about

Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Mauro
I dissected the bench-method into two, just to be sure (on 0.4-RC2).

julia> function bench(N)
           for i = 1:N
               f(π/4)
           end
       end
bench (generic function with 1 method)

julia> function bench_f(N)
           for i = 1:N
               f_float(π/4)
           end
       end
bench_f (generic function with 1 method)

They also have identical native code but run differently:

julia> @time bench_f(10^7)
  0.190613 seconds (5 allocations: 176 bytes)

julia> @time bench(10^7)
  0.780212 seconds (5 allocations: 176 bytes)

I thought that @code_native shows the code which is actually run, so why
different speeds?

If I define the f* functions without the @fastmath macro, then I get
the same performance as above:

julia> @time bench_f(10^7)
  0.203071 seconds (5 allocations: 176 bytes)

julia> @time bench(10^7)
  0.787696 seconds (5 allocations: 176 bytes)

but with different native-codes.
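One caveat worth ruling out (an assumption on my part, not a diagnosis): in both `bench` variants the result of each call is discarded, so the optimizer is in principle free to treat the two loops differently. A sketch that keeps the result live removes that class of benchmarking artifact and compares the two call paths under an identical loop structure:

```julia
f(x)       = @fastmath cos(x)^3
f_float(x) = @fastmath cos(x)^3.0

# Accumulate the results so the calls cannot be dead-code eliminated,
# and take the function as an argument so both use the same loop.
function bench_sum(g, N)
    s = 0.0
    for i in 1:N
        s += g(π / 4)
    end
    s
end
```

Timing `@time bench_sum(f, N)` against `@time bench_sum(f_float, N)` then isolates the difference to the called function itself.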

> I can reproduce... I think the 2 versions will call these methods
> respectively... I guess there's a performance difference?
>
> pow_fast{T<:FloatTypes}(x::T, y::Integer) =
>> box(T, Base.powi_llvm(unbox(T,x), unbox(Int32,Int32(y
>>
>
>
>> pow_fast(x::Float64, y::Float64) =
>> ccall(("pow",libm), Float64, (Float64,Float64), x, y)
>

Tom, or are those two functions called from within the native code?  I'm not
a good assembly reader.


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Steven G. Johnson


On Thursday, September 24, 2015 at 1:55:18 PM UTC-4, Sisyphuss wrote: 
>
> However, Julia is assumed to be fast (high expectation), and performance 
> varies a lot according to the knowledge/skill a programmer has (high 
> variance).
>

Again, that's true in any language where you are trying to get high 
performance.  It's true in C as well.   If you ask a bunch of programmers 
in C to implement something as "trivial" as a matrix multiplication, 
performance can easily vary between their implementations by a factor of 10 
or more.  (See e.g. 
http://nbviewer.ipython.org/url/math.mit.edu/~stevenj/18.335/Matrix-multiplication-experiments.ipynb
 
for some examples and explanations.)  The performance variation can be even 
larger for more complicated problems.

It is really hard to know whether you are obtaining nearly "maximum" 
performance in any language, even "high-performance" languages, unless you 
either have (a) a lot of knowledge or (b) have alternative highly optimized 
code for similar problems to compare to.  Ideally you have both, even if 
you are an expert.

(e.g. FFTW would never have existed if I hadn't started by benchmarking a 
bunch of FFT implementations, and noticing the wide variation in 
performance for different codes, different problem sizes, and different 
machines.)

What is true, however, is that in a very high-level language there can be 
more going on "under the hood" than in lower-level languages like C that 
require you to write lots of low-level instructions explicitly.  The 
tradeoff here is that you can be more productive in a higher-level 
language, but you need to have some more knowledge to avoid performance 
traps that arise from expressions that seem unexpectedly slow for new 
users.  Fortunately, the rules in Julia are fairly straightforward once you 
get used to them (see the performance tips section of the manual), and the 
situation is much better than in other high-level languages where it is 
often not even possible to get good performance without dropping down to a 
separate C-like language.


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Jeffrey Sarnoff
Rewriting code in another language, unless it affords some manner of proof 
and validation, is not a good approach to proving that Julia code operates 
as designed and intended.
There is a real risk of making the same mistake twice, or of doing two 
different things in two different ways. 
Sometimes, I use Maple to verify the output while developing mathematical 
algorithms for Julia.  

On Thursday, September 24, 2015 at 1:00:43 PM UTC-4, Sisyphuss wrote:
>
> What do you do when there's no code to compare?
>>
> This is a good point! When I write a piece of Julia code, how do I know I 
> wrote it correctly? Should I write a C version to prove it?
> This is what I called the risk of writing Julia code. Unless you are an expert 
> in compilers and the Julia language, you can never know whether your code gives 
> you an edge or not.  
>
> On Thursday, September 24, 2015 at 2:17:25 PM UTC+2, Marcio Sales wrote:
>>
>> Wow. All this discussion to make Julia only *as fast as* the old 
>> scripting languages? I gotta say that worried me a bit. What do you do when 
>> there's no code to compare? How will you know that it was really a good 
>> idea switching from Matlab/Python to Julia? 
>>
>> Considering what the developers proudly advertise about performance (which I 
>> think is why most people would even consider changing to it), shouldn't 
>> the language be designed as to put the user in the best performant 
>> direction most of the time? Matlab does a good job on that with fewer but 
>> simplified and efficient data structures, supporting vectorized code etc. 
>> In my short experience with Julia, it seems that there are a lot of ways to 
>> do the same thing, some of which are very bad in terms of performance, like the 
>> original code in this post. If Julia can't easily be faster and less 
>> verbose than R, for example, we could just forget about it...
>>
>>
>>
>>  
>>
>>
>>

[julia-users] Request for comments: best way to deal with type uncertainty.

2015-09-24 Thread Ben Ward
Hi Julia Users,

I'm one of the Core-Devs in the BioJulia organisation, with a background in 
evolutionary biology/genetics, and, with a few other contributors I'm 
writing Bio.jl's Phylo submodule.

The primary type of this submodule is the Phylogeny, which is a composite 
type used to describe a model of evolution. At the very minimum it looks 
like this:

type PhyNode
children::Vector{PhyNode}
parent::PhyNode

function PhyNode(children::Vector{PhyNode} = PhyNode[],
 parent = nothing)
x = new()
if parent != nothing
graft!(parent, x)
else
x.parent = x
end
x.children = PhyNode[]
for child in children
graft!(x, child)
end
return x
end
end

type Phylogeny
root::PhyNode
rooted::Bool
rerootable::Bool

Phylogeny() = new(PhyNode(), false, true)
end

PhyNodes are types which link to their children and to their parent - they 
are the individual objects that form the tree structure. The Phylogeny type 
describes the overall tree, and contains a variable pointing to a PhyNode 
that forms the root of the tree, and determines whether the tree is rooted 
in the phylogenetic sense, and whether the phylogeny is re-rootable. So far 
so good. We can represent the structure of a phylogeny - a model of how 
various species are related through history.

Here is where I'd like comments from the julia-users, if possible: With a 
phylogeny, often additional information is annotated to the tree, like 
branch lengths, confidence intervals, sequences, labels, colours for 
plotting, and so on. Well, we can do this with a Dict, and use PhyNodes as 
keys:

typealias NodeAnnotation{T} Dict{PhyNode, T}

We can then store these annotations in the Phylogeny type like this:
type Phylogeny{S <: AbstractString}
root::PhyNode
rooted::Bool
rerootable::Bool
annotations::Dict{S, Any}
end

However, I don't like the type uncertainty of Any because, if I'm correct, 
it could propagate up through a user's code. But we will always have some 
uncertainty, because we don't know in advance what the user might want to 
annotate the Phylogeny with: it could be anything from simple float values 
to other composite types.

Am I correct that the type uncertainty in getting and setting such annotations 
would propagate through the user's code when they deal with annotations?
If so, we have tried to think of ways to get around this. One idea was to 
store the NodeAnnotations in the phylogeny according to the type of their 
values, and then provide getter and setter methods that make the return 
type predictable from the types of the parameters passed in the method:

type Phylogeny{S<:AbstractString}
root::PhyNode
rooted::Bool
rerootable::Bool
annotations::Dict{Type, Dict{S, NodeAnnotation{Any}}}
end

function setannotation!{T}(x::Phylogeny, name::ASCIIString, ann::
NodeAnnotation{T})
if haskey(x.annotations,T)
x.annotations[T][name] = ann
else 
x.annotations[T] = [name => ann]
end
end 

function getannotations{T}(x::Phylogeny, name::ASCIIString, ::Type{T})
x.annotations[T][name]::Dict{PhyNode, T}
end

This seems to work and would indeed make getting and setting more 
type-predictable; the only annoying part is that Dicts get converted:

julia> setannotation!(tree, "Node Names", NodeAnnotation{ASCIIString}())
Dict{PhyNode,ASCIIString} with 0 entries


julia> tree
Phylogeny{ASCIIString}(PhyNode(),false,false,Dict{Type{T},Dict{ASCIIString,
Dict{PhyNode,Any}}}(ASCIIString=>Dict("Node Names"=>Dict{PhyNode,Any}(

You see Dict{PhyNode, ASCIIString} got converted to Dict{PhyNode, Any}.

If anyone has comments on this or has advice on how to prevent type 
uncertainty propagating, please do share. How should we be approaching this?

Many thanks,
Ben.
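The conversion Ben observes can be reproduced in isolation (a minimal sketch using modern `struct` syntax; the 0.4-era `type` keyword behaves the same way). Because type parameters are invariant, `Dict{K,V}` is not a subtype of `Dict{K,Any}`, so assigning one to a field declared `Dict{K,Any}` triggers a `convert`:

```julia
# Minimal reproduction: the field's declared value type forces a convert
# on construction, which is why Dict{PhyNode,ASCIIString} came back as
# Dict{PhyNode,Any} in the session above. `Holder` is a hypothetical
# stand-in for the Phylogeny type.
struct Holder
    d::Dict{String,Any}
end

h = Holder(Dict("a" => 1))   # a Dict{String,Int} goes in...
typeof(h.d)                   # ...but a Dict{String,Any} is stored
```

The typed-getter approach in the post sidesteps this by re-asserting the concrete type on the way out.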


Re: [julia-users] Re: @sprintf with a format string

2015-09-24 Thread lawrence dworsky
Hi Tom

Sorry to take so long to get back to you, I had to go away for a couple of
days. Thanks for the installation information, @fmt is working fine now.
It's still not as useful as the Fortran print * formatting however because
it ​requires the user to know what's coming. For example, the Fortran code

x = -2.34e-12
do i = 1, 5
  x = -x*5000.
  print *, i, x
end do

produces

1 1.17E-08
2-5.85E-05
3 0.292500
4 -1462.5
5 7.312501e+06

As you can see, print * figured out when exponential notation is necessary
and automatically used it.

I'm retired now, but when I was working I spent a lot of time writing
numerical analysis programs for various engineering issues (elastic
material deformation, electron trajectories, etc.) While a  program was
being developed I didn't care about the aesthetics of my printout, I just
needed useful information - and early on, numerical or algebraic or
programming errors could easily produce results off by 10 orders of
magnitude!

I think a capability such as this in Julia would be heavily used. I wish I
had the expertise to write it.

Larry



On Tue, Sep 22, 2015 at 4:59 PM, Tom Breloff  wrote:

> Sorry I wasn't expecting you to run it... just comment.  You'll have to do:
>
> Pkg.rm("Formatting")
> Pkg.clone("https://github.com/tbreloff/Formatting.jl.git")
> Pkg.checkout("Formatting", "tom-fmt")
>
> Let me know if that works.
>
> On Tue, Sep 22, 2015 at 5:52 PM, lawrence dworsky <
> m...@lawrencedworsky.com> wrote:
>
>> I'm afraid my beginner status with Julia is showing:
>>
>> I ran Pkg.add("Formatting"), and then   using Formatting   came back with
>> a whole bunch of warnings, most about Union(args...) being deprecated, use
>> Union{args...} instead.
>>
>> When all is said and done,   fmt_default!  gives me a  UndefVarError.
>>
>> Help!
>>
>>
>>
>> On Tue, Sep 22, 2015 at 2:45 PM, Tom Breloff  wrote:
>>
>>> Thanks Larry, that's helpful.  Just for discussions sake, here's a quick
>>> macro that calls my proposed `fmt` method under the hood, and does
>>> something similar to what you showed.  What do you think about this style
>>> (and what would you do differently)?
>>>
>>> using Formatting
>>>
>>> macro fmt(args...)
>>>  expr = Expr(:block)
>>>  expr.args = [:(print(fmt($(esc(arg))), "\t\t")) for arg in args]
>>>  push!(expr.args, :(println()))
>>>  expr
>>> end
>>>
>>>
>>> And then an example usage:
>>>
>>> In:
>>>
>>> x = 1010101
>>> y = 55.5
>>> fmt_default!(width=15)
>>>
>>> @fmt x y
>>>
>>> fmt_default!(Int, :commas)
>>> fmt_default!(Float64, prec=2)
>>>
>>> @fmt x y
>>>
>>>
>>> Out:
>>>
>>> 1010101  55.56
>>>   1,010,101  55.56
>>>
>>>
>>>
>>> On Tuesday, September 22, 2015 at 3:08:35 PM UTC-4, lawrence dworsky
>>> wrote:

 Hi Tom

 What I like about it is that you can just use print *, dumbly and it
 always provides useful, albeit not beautiful, results. When I'm writing a
 program, I use print statements very liberally to observe what's going on -
 I find this more convenient than an in-line debugger.

 As the last line in my program below shows, it's easy to switch to
 formatted output when you want to. The formatting capability is pretty
 thorough, I'm just showing a simple example.

 This Fortran program doesn't do anything, it just illustrates what the
 print statement produces:


 real x, y
 integer i, j
 complex z
 character*6  name

 x = 2.6
 y = -4.
 i = 36
 j = -40
 z = cmplx(17., 19.)
 name = 'Larry'

 print *, x, y, i, j, z
 print *, 'x = ', x, ' and j = ', j
 print *, 'Hello, ', name, j
 print '(2f8.3, i5)', x, y, j

 stop
 end


 The output is:

 2.6 -4.0   36
 -40  (17., 19.)
 x = 2.6   and j =-40
 Hello, Larry -40
   2.600   -4.000  -40


 Is this what you are looking for?

 Larry



 On Tue, Sep 22, 2015 at 11:57 AM, Tom Breloff  wrote:

> Larry: can you provide details on exactly what you like about
> Fortran's print statement?  Did it provide good defaults?  Was it easy to
> customize?
>
> On Tue, Sep 22, 2015 at 12:55 PM, LarryD  wrote:
>
>> Something I miss from Fortran is the very convenient default "print
>> *, . "  It handled almost 100% of my needs while working on a program
>> and was easily replaced by real formatting when the time came. Is there 
>> any
>> chance that Julia could get something like this?
>>
>> Thanks
>>
>>
>> On Monday, September 21, 2015 at 3:46:31 AM UTC-5, Ferran Mazzanti
>> wrote:
>>>
>>> Dear all,
>>>
>>> I could use some help here, because I can't believe I'm not able to

[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Marcio Sales

>
> it would be nice to have a more obvious way to automatically fix things 
> like globals that could be declared const, or maybe even automatically 
> wrapping global code in functions?


I think these words reflect exactly the point I wanted to make. I 
understand the grandiosity that is envisioned for the language, but until 
we get there, if problems like the ones pointed out in that post are not solved, 
you are basically saying that Julia is "a language that will allow you, 
expert C programmer, to quickly write code that runs a bit slower!".

"Julia *can* be faster than most alternatives, and it takes much less 
> effort and much less code alteration to make it happen. 
>

Or that "Julia is a high-level programming language that allows people to 
develop software or packages that deliver C-like performance". You don't 
just open a REPL and start firing off code that will run at light speed!

  





Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 14:17, Marcio Sales  wrote:

> Wow. All this discussion to make Julia only *as fast as* the old
> scripting languages? I gotta say that worried me a bit. What do you do when
> there's no code to compare? How will you know that it was really a good
> idea switching from Matlab/Python to Julia?
>


No. In my experience, Julia *IS* faster, and it is easy to make fast. What
happened here is that the way you optimize Julia code is almost the
opposite of how you optimize Matlab or Python code. If you take the fast
code I wrote for Julia and port it to Matlab or Python it will be
incredibly slow, I guarantee it (try it if you don't believe me). I think
that the real lesson here is:

Porting between Julia and Matlab/Python is not trivial. The way you write
fast code is different in each language. Fast code in Julia means that you
(1) use concrete types, and that you (2) write loops rather than
"vectorized" code.
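As a hypothetical illustration of both points (the function names and the toy computation are mine, not from the thread):

```julia
# Vectorized, Matlab-style: allocates temporary arrays for x.^2 and 2 .* x.
sum_vectorized(x) = sum(x.^2 .+ 2 .* x)

# Devectorized, idiomatic Julia: a plain loop over a concretely typed
# argument, with no temporary arrays at all.
function sum_loop(x::Vector{Float64})
    s = 0.0
    for xi in x
        s += xi^2 + 2xi
    end
    return s
end
```

In Matlab the first style is the fast one; in Julia the second style is usually faster, because the loop compiles to tight machine code and allocates nothing.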



> Considering what the developers proudly advertise about performance (which I
> think is why most people would even consider changing to it),
>

In my case, while performance is important, I also really really like the
design of the language. I really don't like Python/NumPy. I find Matlab
tolerable (not great). Octave improves a bit on Matlab, but not enough, and
Octave is slow.

Before Julia, I used Octave despite the lower speed.



> shouldn't  the language be designed as to put the user in the best
> performant direction most of the time?
>

That's what Julia does for me.



> Matlab does a good job on that with fewer but simplified and efficient
> data structures, supporting vectorized code etc.
>

This is backwards. It is not that Matlab "supports" vectorized code. Julia
supports it too. Matlab *requires* vectorized code because Matlab loops are
very slow by comparison. For Julia it's the other way. Vectorized Julia
code performs similar to Matlab, but regular loops are much faster (in part
because they avoid creating a lot of temporary variables in memory).



> In my short experience with Julia, it seems that there are a lot of ways
> to do the same thing, some of which very bad in terms of performance, like
> the original code in this post. If Julia can't be easily faster and less
> verbose than R for example, we could just forget about it...
>

Well, if Julia does not fit your needs, by all means use whatever language
works best for you. Performance is a primary design goal for Julia, and I
think that it delivers. I have seen Julia code get close to the performance
of compiled Fortran code. I personally like Julia, and I don't like R, so I
hope that the developers keep developing Julia.

Cheers,
Daniel.


Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Kristoffer Carlsson
I think Tom is right here. These lines call the pow function

movabsq $pow, %rax
callq   *%rax

but the actual pow function that is being called is different. I am 
surprised that there is that much of a difference in performance between 
the two pow functions... That seems odd.

What Mauro says is also interesting: the speed difference is there (and is 
just as large) even without the @fastmath macro.

My question now is: what does IEEE say about x^double vs x^int? Is there 
any reason these should have different performance? If not, it seems to 
make sense to always convert the exponent to a double and call the libm 
version. All doubles should be able to exactly represent the integers that 
the power function takes.
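A minimal way to see the two code paths side by side (a sketch; the exact lowering and the timings depend on the Julia version, and the function names here are mine):

```julia
f_int(x)   = @fastmath x^3      # integer exponent: the powi-style path
f_float(x) = @fastmath x^3.0    # float exponent: libm's pow(double, double)

function bench(g, N)
    s = 0.0
    for i in 1:N
        s += g(pi/4)   # accumulate so the call cannot be optimized away
    end
    return s
end

@time bench(f_int, 10^7)
@time bench(f_float, 10^7)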


On Thursday, September 24, 2015 at 9:18:45 PM UTC+2, Mauro wrote:
>
> I dissected the bench-method into two, just to be sure (on 0.4-RC2). 
>
> julia> function bench(N) 
>   for i = 1:N 
>f(π/4) 
>   end 
>end 
> bench (generic function with 1 method) 
>
> julia> function bench_f(N) 
>   for i = 1:N 
>f_float(π/4) 
>   end 
>end 
> bench_f (generic function with 1 method) 
>
> They also have identical native code but run differently: 
>
> julia> @time bench_f(10^7) 
>   0.190613 seconds (5 allocations: 176 bytes) 
>
> julia> @time bench(10^7) 
>   0.780212 seconds (5 allocations: 176 bytes) 
>
> I thought that @code_native shows the code which is actually run, so why 
> different speeds? 
>
> If I define the f* functions without the @fastmath macro, then I get 
> the same performance as above: 
>
> julia> @time bench_f(10^7) 
>   0.203071 seconds (5 allocations: 176 bytes) 
>
> julia> @time bench(10^7) 
>   0.787696 seconds (5 allocations: 176 bytes) 
>
> but with different native-codes. 
>
> > I can reproduce... I think the 2 versions will call these methods 
> > respectively... I guess there's a performance difference? 
> > 
> > pow_fast{T<:FloatTypes}(x::T, y::Integer) = 
> >> box(T, Base.powi_llvm(unbox(T,x), unbox(Int32,Int32(y 
> >> 
> > 
> > 
> >> pow_fast(x::Float64, y::Float64) = 
> >> ccall(("pow",libm), Float64, (Float64,Float64), x, y) 
> > 
>
> Tom, or are those two functions called within the native-code?  I'm no 
> good assembler reader. 
>


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Kristoffer Carlsson
These criticisms are frankly ridiculous. When your critique could be applied to 
any programming language that exists then you are doing it wrong. 



Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 22:00, Marcio Sales  wrote:

> it would be nice to have a more obvious way to automatically fix things
>> like globals that could be declared const, or maybe even automatically
>> wrapping global code in functions?
>
>
> I think these words reflect exactly the point I wanted to make. I
> understand the grandiosity that is envisioned for the language, but until
> we get there, if problems like the ones pointed in that post is not solved,
> you are basically saying that Julia is "a language that will allow you,
> expert C programmer, to fast-write code that runs a bit slower!".
>

I am not an expert C programmer. I know very little in fact. But I
understand some basic principles that allow me to write acceptable Julia
code. Julia is a lot easier and more convenient to write than C. I also
find it easier and more convenient than Python and Matlab.



> "Julia *can* be faster than most alternatives, and it takes much less
>> effort and much less code alteration to make it happen.
>>
>
> Or that "Julia is a high-level programming language that allows people to
> develop software or packages that deliver C-Like performance" = You don't
> just open a REPL and start firing up code that will run in light speed!
>

I can open the REPL and write code that is faster than other languages with
a REPL. I do a lot of work on the Julia REPL and I like it.

Daniel.


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 22:31, Marcio Sales  wrote:

> Matlab *requires* vectorized code because Matlab loops are very slow by
>> comparison
>>
> It is much faster these days (from R2014).
>

It is much faster than it used to be, but it is still slow. I know that
Matlab added a JIT compiler for loops several years ago, and this is one of
the reasons why Matlab vastly outperforms Octave. But despite the new JIT,
Matlab loops are much slower than Julia loops.



> I remember I ran some very simple comparisons and it surprised me that
> Matlab ran a bit faster than Julia in a for loop of matrix
> multiplications and inversions.
>

That is a meaningless comparison. First, you are not comparing loops, you
are comparing matrix inversion. Second, neither Matlab nor Julia will
natively perform a matrix inversion well. They are both going to use an
external library (LAPACK) so what you are testing is the library, not the
language. For example, Matlab almost certainly ships with Intel's MKL (math
kernel libraries) which includes a call for matrix inversion that was
carefully written by Intel employees to run as fast as possible on Intel
CPUs. Julia cannot ship with MKL, but if you care enough it is probably
possible to link to it.

Daniel.


[julia-users] Re: Documentation of operators

2015-09-24 Thread Michael Prentiss
I ran into this problem before with punctuation.
https://github.com/JuliaLang/julia/commit/b418a03529a9afec07c5aa032a9124b03cef912e#diff-91ec6806d45dd62d07012f6a018b151f

Maybe this should be addressed again.



On Wednesday, September 23, 2015 at 5:32:57 PM UTC-5, Alex Copeland wrote:
>
>
>
> Hi,
>
> Can someone point me to the documentation for '..'  and '->'  as in 
>  'include ..Sort'  and x -> x[2] . I've dug around in the source and in 
> readthedocs but patterns like this are the devil to search for unless they 
> have a text alias (that you happen to know). 
>
> Thanks,
> Alex
>
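For the archives, both pieces of syntax in one sketch (the module layout below is invented purely for illustration):

```julia
# `->` defines an anonymous function:
getsecond = x -> x[2]
getsecond((10, 20))          # returns 20

# `..` in an import path is a *relative* module reference meaning
# "the enclosing module". So `import ..Sort` inside a submodule of Base
# reaches Base.Sort. A self-contained analogue:
module Outer
    module Sort
        const GREETING = "hi"
    end
    module Inner
        import ..Sort        # refers to Outer.Sort
    end
end
```

Both are indeed hard to search for by punctuation alone; in the manual they appear under "Anonymous Functions" and under the description of `import`/`using` paths in "Modules".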


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 19:00, Sisyphuss  wrote:

> What do you do when there's no code to compare?
>>
> This is a good point! When I write a piece of Julia code, how do I know I
> wrote it correctly? Should I write a C version to prove it?
> This is what I called the risk to write Julia code. Unless you are experts
> of compilers and the Julia language, you can never know whether your code
> gives you an edge or not.
>
>

This is an issue with ANY language. When I write in Fortran and Matlab I
often wonder whether I really wrote the program in the best possible way.
This is not a risk of Julia. It is a risk of *programming*.

Daniel.


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Stefan Karpinski
This conversation is getting pretty tiresome. There are programs where
Matlab is already as fast as it's possible to be. If all you're doing is
computing a big matrix product, for example, then all any language is going
to do is call BLAS. Julia is not going to be any faster than Matlab for
that, and neither, for that matter, is C or Fortran. There are plenty of
programs where this is not the case, however, and in Julia you can get as
fast as C without having to write C.

On Thu, Sep 24, 2015 at 5:15 PM, Marcio Sales 
wrote:

> That is a meaningless comparison. First, you are not comparing loops, you
>> are comparing matrix inversion. Second, neither Matlab nor Julia will
>> natively perform a matrix inversion well. They are both going to use an
>> external library (LAPACK) so what you are testing is the library, not the
>> language. For example, Matlab almost certainly ships with Intel's MKL (math
>> kernel libraries) which includes a call for matrix inversion that was
>> carefully written by Intel employees to run as fast as possible on Intel
>> CPUs. Julia cannot ship with MKL, but if you care enough it is probably
>> possible to link to it.
>>
>> Daniel.
>>
>
> Well, then did you just say that if one keeps using matlab in the way it
> was meant to (matrices operations), there's no way Julia can beat it
> currently (performancewise)?
>
>
>


[julia-users] How to find connected components in a matrix using Julia

2015-09-24 Thread Charles Novaes de Santana
Assume I have the following matrix:

mat = [1 1 0 0 0 ; 1 1 0 0 0 ; 0 0 0 0 1 ; 0 0 0 1 1]

Considering as a "component" a group of neighbouring elements that have the
value '1', how can I identify that this matrix has 2 components, and which
vertices compose each one?

For the matrix *mat* above I would like to find the following result:

Component 1 is composed by the following elements of the matrix
(row,column):

(1,1)
(1,2)
(2,1)
(2,2)

Component 2 is composed by the following elements:

(3,5)
(4,4)
(4,5)

I can use Graph algorithms like this

to identify components in square matrices. However such algorithms can not
be used for non-square matrices like the one I present here.

Any idea will be much appreciated.

I am open if your suggestion involves the use of a Python library + PyCall
for example. Although I would prefer to use a pure Julia solution.

Regards
Charles
P.S.: Just asked the same question in Stackoverflow:
https://stackoverflow.com/questions/32772190/how-to-find-connected-components-in-a-matrix-using-julia

-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles
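A pure-Julia answer is a short flood fill, which works on matrices of any shape. The sketch below labels 4-connected components of 1-entries with a stack-based fill (all names here are illustrative):

```julia
# Return the components as vectors of (row, column) tuples.
function components(mat::AbstractMatrix)
    nr, nc = size(mat)
    seen  = falses(nr, nc)
    comps = Vector{Tuple{Int,Int}}[]
    for r in 1:nr, c in 1:nc
        (mat[r, c] == 1 && !seen[r, c]) || continue
        comp  = Tuple{Int,Int}[]
        stack = [(r, c)]
        seen[r, c] = true
        while !isempty(stack)
            (i, j) = pop!(stack)
            push!(comp, (i, j))
            for (di, dj) in ((1,0), (-1,0), (0,1), (0,-1))
                ni, nj = i + di, j + dj
                if 1 <= ni <= nr && 1 <= nj <= nc &&
                   mat[ni, nj] == 1 && !seen[ni, nj]
                    seen[ni, nj] = true
                    push!(stack, (ni, nj))
                end
            end
        end
        push!(comps, comp)
    end
    return comps
end

mat = [1 1 0 0 0; 1 1 0 0 0; 0 0 0 0 1; 0 0 0 1 1]
components(mat)   # two components, matching the sets listed in the question
```

For 8-connectivity (diagonal neighbours), extend the neighbour tuple list with the four diagonal offsets.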


Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Erik Schnetter
In the native code above, the C function `pow(double, double)` is called in
both cases. Maybe `llvm_powi` is involved; if so, it is lowered to the same
`pow` function. The speed difference must have a different reason.

Sometimes there are random things occurring that invalidate benchmark
results. (This could be caused by how the compiled functions are aligned
relative to cache lines or page boundaries, etc. -- this is black magic I
like to invoke if there's a result that I can't explain. You can just
ignore my ramblings here.) You could restart Julia, reboot the machine, try
a different machine, define several identical functions `f` and `f_float`
and look at their speeds, etc...

(I would have hoped that this function is translated to the equivalent of
`c=cos(x); c2=c*c; return c*c2`, but this is obviously not happening.)

-erik

On Thu, Sep 24, 2015 at 4:24 PM, Kristoffer Carlsson 
wrote:

> But the floating ones are the faster ones. Shouldn't it be the opposite?




-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Milan Bouchet-Valat
On Thursday, 24 September 2015 at 13:31 -0700, Marcio Sales wrote:
> > Matlab *requires* vectorized code because Matlab loops are very
> > slow by comparison
> It is much faster these days (from R2014). I remember I ran some very 
> simple comparisons and it surprised me that Matlab ran a bit faster 
> than Julia in a for loop of matrix multiplications and inversions. 
> However, I ran the loop from the REPL and maybe there are better ways 
> to do it.
Not "maybe". This is highlighted as the first point of the Performance
Tips section of the manual:
http://docs.julialang.org/en/latest/manual/performance-tips/

It's not fair to expect a language to be a drop-in replacement for
another one and yet provide higher performance for free. Any language
requires some experience to get productive at coding. Vectorized
languages like MATLAB and R perhaps even more than others.

I feel like this thread has come to a dead end.


Regards


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Jeffrey Sarnoff
+1

On Thu, Sep 24, 2015 at 5:28 PM, Stefan Karpinski 
wrote:

> This conversation is getting pretty tiresome. There are programs where
> Matlab is already as fast as it's possible to be. If all you're doing is
> computing a big matrix product, for example, then all any language is going
> to do is call BLAS. Julia is not going to be any faster than Matlab for
> that, and neither, for that matter, is C or Fortran. There are plenty of
> programs where this is not the case, however, and in Julia you can get as
> fast as C without having to write C.
>
> On Thu, Sep 24, 2015 at 5:15 PM, Marcio Sales 
> wrote:
>
>> That is a meaningless comparison. First, you are not comparing loops, you
>>> are comparing matrix inversion. Second, neither Matlab nor Julia will
>>> natively perform a matrix inversion well. They are both going to use an
>>> external library (LAPACK) so what you are testing is the library, not the
>>> language. For example, Matlab almost certainly ships with Intel's MKL (math
>>> kernel libraries) which includes a call for matrix inversion that was
>>> carefully written by Intel employees to run as fast as possible on Intel
>>> CPUs. Julia cannot ship with MKL, but if you care enough it is probably
>>> possible to link to it.
>>>
>>> Daniel.
>>>
>>
>> Well, then did you just say that if one keeps using matlab in the way it
>> was meant to (matrices operations), there's no way Julia can beat it
>> currently (performancewise)?
>>
>>
>>
>


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Steven G. Johnson
To put it another way, there are plenty of problems that can't be vectorized 
effectively. ODEs, matrix assembly for FEM or BEM, implementing special 
functions... If you do enough scientific computing, eventually you will hit a 
problem where you need to write your own inner loops, and then with Matlab you 
need to drop down to C if performance matters. 

If all performance-critical computing were linear algebra, life would be a lot 
simpler. 

Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Kristoffer Carlsson
I don't like to invoke the black magic card here. I have tried benchmarking in 
different ways in different scenarios and the results are consistent. It is 
also reproducible by others. 

FWIW this is what lead me to this 
https://github.com/JuliaDiff/ForwardDiff.jl/issues/57

Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Kristoffer Carlsson
But the floating ones are the faster ones. Shouldn't it be the opposite? 

[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Marcio Sales

>
> That is a meaningless comparison. First, you are not comparing loops, you 
> are comparing matrix inversion. Second, neither Matlab nor Julia will 
> natively perform a matrix inversion well. They are both going to use an 
> external library (LAPACK) so what you are testing is the library, not the 
> language. For example, Matlab almost certainly ships with Intel's MKL (math 
> kernel libraries) which includes a call for matrix inversion that was 
> carefully written by Intel employees to run as fast as possible on Intel 
> CPUs. Julia cannot ship with MKL, but if you care enough it is probably 
> possible to link to it.
>
> Daniel.
>

Well, then did you just say that if one keeps using matlab in the way it 
was meant to (matrices operations), there's no way Julia can beat it 
currently (performancewise)?




Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 14:47, Christof Stocker 
wrote:

> As far as I can tell (from first hand experience), Julia really does give
> you a prominent edge over R and Matlab in terms of performance. However, I
> also think that there are currently a lot of ways to shoot yourself in the
> foot (accidental type instability and memory allocation are the most
> prominent of those in my experience).
>


To me Julia feels like a lower-level language than Matlab or Python. Julia
code is compiled very directly into machine code, and that makes it feel a
step closer to Fortran or C. For example, in Python or Octave, if your
integer gets too big, you are auto-converted to BigInt:

octave:1> 2^62
ans = 4.6117e+18
octave:2> 2^63
ans = 9.2234e+18
octave:3> 2^64
ans = 1.8447e+19

$ python
...
>>> 2**62
4611686018427387904
>>> 2**63
9223372036854775808L
>>> 2**64
18446744073709551616L


In Julia, you just get an overflow, just as you would in C++ or Fortran:

julia> 2^62
4611686018427387904
julia> 2^63
-9223372036854775808
julia> 2^64
0


I personally like this a lot. With Julia I feel like I have control over
what the program does, without the hassle of static types, variable
declarations, or memory allocation. I feel that Julia is a good balance
between the convenience of Matlab/Python and the performance and control of
C++/Fortran.



> So currently I don't think that Julia allows you to naively write "simple
> and efficient code" if you are not used to the language.
>

Can you name any language that allows you to write efficient code if you
are not used to the language? I teach Matlab. My students always write
horribly slow and inefficient code because they instinctively code Matlab
like it is Java. I have to drill into them the notion of vectorized code.


> Given that we are not at version 1.0 yet, I don't mind that at all. I see
> it like this: If you know what to look out for, the code is usually pretty
> competitive with C (about a factor of 2 to 3 currently for my code). I
> rather have the developers focus on improving the potential performance,
> than on making it "idiot proof" just yet. I think the later is best tackled
> after the more pressing issues are taken care of.
>

Making the language idiot-proof will make it slow (see my example above).



> That being said, I do think one should be careful of how Julia is
> advertised by word of mouth. The comparison to Matlab, which has a very
> similar syntax but (I think) a very different way of efficient coding, can
> be a red herring and give rise to false expectations. I don't think you can
> currently just copy and paste Matlab code and expect it to be faster.
>

Yeah. The fact that the syntax is similar can lead people to think that you
are supposed to write Julia the way you write Matlab.

Cheers,
Daniel.


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Marcio Sales

>
> Matlab *requires* vectorized code because Matlab loops are very slow by 
> comparison
>
It is much faster these days (from R2014). I remember I ran some very 
simple comparisons and it surprised me that Matlab ran a bit faster than 
Julia in a for loop of matrix multiplications and inversions. However, I 
ran the loop from the REPL and maybe there are better ways to do it.





Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Daniel Carrera
On 24 September 2015 at 19:55, Sisyphuss  wrote:

> People won't apply my critique on Matlab or R, because these languages are
> assumed to be slow and they must be slow. So there is no "risk/variance"
> (in the good sense) for these language. No sooner one learns to write
> vectorized code, than he reaches the limit of these languages.
>

I teach Matlab. My students show ridiculously high variance in the
performance of their programs. I have to teach them about vectorized code,
garbage collection and the like. I would have said that Matlab and R are
assumed to be reasonably fast.

 Daniel.


Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Yichao Yu
On Thu, Sep 24, 2015 at 4:42 PM, Erik Schnetter  wrote:
> In the native code above, the C function `pow(double, double)` is called in
> both cases. Maybe `llvm_powi` is involved; if so, it is lowered to the same
> `pow` function. The speed difference must have a different reason.

Not necessarily, IIRC: we use the openlibm functions by default, but
LLVM will use the system libm version.

>
> Sometimes there are random things occurring that invalidate benchmark
> results. (This could be caused by how the compiled functions are aligned
> respective to cache lines or page boundaries, etc. -- this is black magic I
> like to invoke if there's a result that I can't explain. You can just ignore
> my ramblings here.) You could restart Julia, reboot the machine, try a
> different machine, define several identical functions `f` and `f_float` and
> look at their speeds, etc...
>
> (I would have hoped that this function is translated to the equivalent of
> `c=cos(x); c2=c*c; return c*c2`, but this is obviously not happening.)
>
> -erik
>
> On Thu, Sep 24, 2015 at 4:24 PM, Kristoffer Carlsson 
> wrote:
>>
>> But the floating ones are the faster ones. Shouldn't it be the opposite?
>
>
>
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Sisyphuss

>
> What do you do when there's no code to compare?
>
This is a good point! When I write a piece of Julia code, how do I know I 
wrote it correctly? Should I write a C version to prove it?
This is what I call the risk of writing Julia code. Unless you are an expert 
in compilers and the Julia language, you can never know whether your code 
gives you an edge or not.  

On Thursday, September 24, 2015 at 2:17:25 PM UTC+2, Marcio Sales wrote:
>
> Wow. All this discussion to make Julia only *as fast as* the old 
> scripting languages? I gotta say that worried me a bit. What do you do when 
> there's no code to compare? How will you know that it was really a good 
> idea switching from Matlab/Python to Julia? 
>
> Considering what the developers proudly advertise about performance (which I 
> think is why most people would even consider changing to it), shouldn't 
> the language be designed so as to put the user in the most performant 
> direction most of the time? Matlab does a good job of that with fewer but 
> simplified and efficient data structures, support for vectorized code, etc. 
> In my short experience with Julia, it seems that there are a lot of ways to 
> do the same thing, some of which very bad in terms of performance, like the 
> original code in this post. If Julia can't be easily faster and less 
> verbose than R for example, we could just forget about it...
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Tom Breloff
>
> Unless you are an expert in compilers and the Julia language, you can never
> know whether your code gives you an edge or not


Isn't this true of all languages?  How do you know you did that C pointer
arithmetic correctly?  Or that python didn't silently clobber your data?
This is why integration testing is important... to make sure that
everything works together as expected.  Having an implementation in another
language only shows that the results match... they could still both be
wrong!

If your argument is purely about performance (not correctness), then who
cares if julia is 1.3x slower than C... you wrote 100x the functionality in
the same development time, which left you time to optimize things you
normally wouldn't.

On Thu, Sep 24, 2015 at 1:00 PM, Sisyphuss  wrote:

> What do you do when there's no code to compare?
>>
> This is a good point! When I write a piece of Julia code, how do I know I
> wrote it correctly? Should I write a C version to prove it?
> This is what I call the risk of writing Julia code. Unless you are an expert
> in compilers and the Julia language, you can never know whether your code
> gives you an edge or not.
>
> On Thursday, September 24, 2015 at 2:17:25 PM UTC+2, Marcio Sales wrote:
>>
>> Wow. All this discussion to make Julia only *as fast as* the old
>> scripting languages? I gotta say that worried me a bit. What do you do when
>> there's no code to compare? How will you know that it was really a good
>> idea switching from Matlab/Python to Julia?
>>
>> Considering what the developers proudly advertise about performance (which I
>> think is why most people would even consider changing to it), shouldn't
>> the language be designed so as to put the user in the most performant
>> direction most of the time? Matlab does a good job of that with fewer but
>> simplified and efficient data structures, support for vectorized code, etc.
>> In my short experience with Julia, it seems that there are a lot of ways to
>> do the same thing, some of which very bad in terms of performance, like the
>> original code in this post. If Julia can't be easily faster and less
>> verbose than R for example, we could just forget about it...


[julia-users] Optimal Control using Dynamic Programming

2015-09-24 Thread Narayani Vedam
Hi,
  I am new to Julia. I need to solve an optimal control problem using 
dynamic programming. Are there pre-defined functions/packages that I could 
use?

Thanks.
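I am not aware of a dedicated Julia package for this as of now, but finite-horizon dynamic programming by backward induction is short to write by hand. The sketch below is a generic illustration, not a library API; all names and the toy problem are assumptions:

```julia
# Backward induction over T stages. States and controls are indexed
# 1:n and 1:m; `next(s, u)` gives the next state index and `cost(s, u)`
# the stage cost. Returns cost-to-go V and a per-stage policy.
function solve_dp(n, m, next, cost, T)
    V = zeros(n)                          # terminal cost-to-go = 0
    policy = [zeros(Int, n) for t in 1:T]
    for t in T:-1:1
        Vnew = fill(Inf, n)
        for s in 1:n, u in 1:m
            c = cost(s, u) + V[next(s, u)]
            if c < Vnew[s]
                Vnew[s] = c
                policy[t][s] = u
            end
        end
        V = Vnew
    end
    return V, policy
end

# Toy usage: 3 states on a line, controls 1/2/3 move left/stay/right,
# quadratic state cost, so the optimal policy drives toward state 1.
next(s, u) = clamp(s + (u - 2), 1, 3)
cost(s, u) = (s - 1)^2
V, policy = solve_dp(3, 3, next, cost, 5)
```

For continuous-state problems you would discretize first, or look at the optimization packages in the JuliaOpt ecosystem.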


[julia-users] Stateflow equivalent in Julia

2015-09-24 Thread Narayani Vedam
Hi,
   I am new to Julia. I tried implementing a logic that I had in Simulink - 
Stateflow using Julia, but ran into trouble. Any heads-up on this?

Thank you


[julia-users] Is there a way to define abstract type from a type in base.jl

2015-09-24 Thread Roger Luo
How to make Array{BigFloat,1}<:Array{Real,1}?
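You can't: Julia's parametric types are invariant, so `Array{BigFloat,1}` is not a subtype of `Array{Real,1}` even though `BigFloat <: Real`. The usual workaround is to parameterize the method instead of asking for an abstractly typed array (a sketch in 0.4-era syntax; the function name is illustrative):

```julia
# Accepts Array{T,1} for any concrete T <: Real,
# while keeping the element type concrete inside the method.
function total{T<:Real}(xs::Array{T,1})
    s = zero(T)
    for x in xs
        s += x
    end
    return s
end

total(BigFloat[1, 2, 3])   # works: Array{BigFloat,1}
total([1.0, 2.0])          # works: Array{Float64,1}
```

An `Array{Real,1}` is a different, concrete container of boxed `Real` values; converting to it would change the storage, which is why the subtyping does not hold.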


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-24 Thread Sisyphuss
People won't apply my critique to Matlab or R, because these languages are 
assumed to be slow, and so they must be slow. So there is no "risk/variance" 
(in the good sense) for these languages. No sooner does one learn to write 
vectorized code than he reaches the limit of these languages. 

However, Julia is assumed to be fast (high expectation), and performance 
varies a lot according to the knowledge/skill a programmer has (high 
variance). In contrast to the low expectation and low variance of other 
languages, this is the reason why users are not happy with Julia: Julia 
exposes the incapacity (mine included) that they had previously comfortably 
concealed. 

So the complaints here are in fact one's frustration with his own incapacity. 
(Disclaimer: no offense meant.)
Aware of my incapacity, I really hope the documentation could be more 
dummy-friendly. 



On Thursday, September 24, 2015 at 7:18:18 PM UTC+2, Kristoffer Carlsson 
wrote:
>
> These criticisms are frankly ridiculous. When your critique could be 
> applied to any programming language that exists then you are doing it 
> wrong. 
>
>

[julia-users] Re: Request for comments: best way to deal with type uncertainty.

2015-09-24 Thread Ben Ward
As an update: we have tested two ways of fetching annotations, one in which 
we try to enforce the type and another in which we don't. I don't understand 
why, but the one in which we don't enforce the type is faster. It is also 
puzzling to me because the version that doesn't enforce the type allocates 
memory, and yet is still faster:

function getannotations{T}(x::Phylogeny, name::ASCIIString, ::Type{T})
x.annotations[T][name]::T 
end



function getannotations(x::Phylogeny, name::ASCIIString)
    for (k, v) in x.annotations
        if haskey(v, name)
            return v[name]
        end
    end
    error("No such key in the phylogeny")
end


julia> @time for i in 1:1; a = tree["Node Names", ASCIIString]; end
  0.002090 seconds

julia> @time for i in 1:1; a = tree["Node Names"]; end
  0.001367 seconds (10.00 k allocations: 312.500 KB)
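One pattern that often helps with this kind of `Any`-typed storage is a "function barrier": do the uncertain lookup once, then hand the value to an inner function, so the hot code is compiled for the concrete type that actually occurs. A sketch (the names are illustrative, not from Bio.jl):

```julia
function summarize(annotations::Dict, name)
    ann = annotations[name]      # inferred as Any at this point
    return _summarize(ann)       # dispatch recovers the concrete type
end

# Compiled once per concrete value type T that shows up at runtime:
_summarize{T}(ann::Dict{PhyNode,T}) = length(ann)
```

The type uncertainty then stops at the barrier instead of propagating through the caller's code, and `@code_warntype` on the inner function should come back clean.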





 

On Thursday, September 24, 2015 at 8:17:52 PM UTC+1, Ben Ward wrote:
>
> Hi Julia Users,
>
> I'm one of the Core-Devs in the BioJulia organisation, with a background 
> in evolutionary biology/genetics, and, with a few other contributors I'm 
> writing Bio.jl's Phylo submodule.
>
> The primary type of this submodule is the Phylogeny. Which is a composite 
> type, used to describe a model of evolution. At the very minimum it looks 
> like this:
>
> type PhyNode
> children::Vector{PhyNode}
> parent::PhyNode
> 
> function PhyNode(children::Vector{PhyNode} = PhyNode[],
>  parent = nothing)
> x = new()
> if parent != nothing
> graft!(parent, x)
> else
> x.parent = x
> end
> x.children = PhyNode[]
> for child in children
> graft!(x, child)
> end
> return x
> end
> end
>
> type Phylogeny
> root::PhyNode
> rooted::Bool
> rerootable::Bool
>
> Phylogeny() = new(PhyNode(), false, true)
> end
>
> PhyNodes are types which link to their children and to their parent - they 
> are the individual objects that form the tree structure. The Phylogeny type 
> describes the overall tree, and contains a variable pointing to a PhyNode 
> that forms the root of the tree, and determines whether the tree is rooted 
> in the phylogenetic sense, and whether the phylogeny is re-rootable. So far 
> so good. We can represent the structure of a phylogeny - a model of how 
> various species are related through history.
>
> Here is where I'd like comments from the julia-users, if possible: With a 
> phylogeny, often additional information is annotated to the tree, like 
> branch lengths, confidence intervals, sequences, labels, colours for 
> plotting, and so on. Well, we can do this with a Dict, and use PhyNodes as 
> keys:
>
> typealias NodeAnnotation{T} Dict{PhyNode, T}
>
> We can then store these annotations in the Phylogeny type like this:
> type Phylogeny{S <: AbstractString}
> root::PhyNode
> rooted::Bool
> rerootable::Bool
> annotations::Dict{S, Any}
> end
>
> However, I don't like the type uncertainty of Any because if I'm correct, 
> it could propagate up through a user's code. But we will always have some 
> uncertainty, because we don't know in advance what the user might want to 
> annotate the Phylogeny with - could be anything from simple float values, 
> to other composite types.
>
> Am I correct that the uncertainty in getting and setting such annotations 
> would propagate through the user's code when they deal with annotations?
> If so, we have tried to think of ways to get around this. One idea was to 
> store the NodeAnnotations in the phylogeny according to the type of their 
> values, and then provide getter and setter methods that make the return 
> type predictable from the types of the parameters passed in the method:
>
> type Phylogeny{S<:AbstractString}
> root::PhyNode
> rooted::Bool
> rerootable::Bool
> annotations::Dict{Type, Dict{S, NodeAnnotation{Any}}}
> end
>
> function setannotation!{T}(x::Phylogeny, name::ASCIIString, ann::
> NodeAnnotation{T})
> if haskey(x.annotations,T)
> x.annotations[T][name] = ann
> else 
> x.annotations[T] = [name => ann]
> end
> end 
>
> function getannotations{T}(x::Phylogeny, name::ASCIIString, ::Type{T})
> x.annotations[T][name]::Dict{PhyNode, T}
> end
>
> This seems to work and would indeed make getting and setting more 
> type-predictable; the only annoying part is that Dicts get converted:
>
> julia> setannotation!(tree, "Node Names", NodeAnnotation{ASCIIString}())
> Dict{PhyNode,ASCIIString} with 0 entries
>
>
> julia> tree
> Phylogeny{ASCIIString}(PhyNode(),false,false,Dict{Type{T},Dict{ASCIIString
> ,Dict{PhyNode,Any}}}(ASCIIString=>Dict("Node Names"=>Dict{PhyNode,Any
> }(
>
> You see Dict{PhyNode, ASCIIString} got converted to Dict{PhyNode, Any}.
>
> If anyone has comments on this or has advice on how to prevent type 
> uncertainty propagating, please do share. How should we be approaching 
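
The typed getter quoted above is essentially a "function barrier": the container's values stay abstractly typed, but asserting a caller-supplied concrete type at the boundary restores inference for everything downstream. A minimal standalone sketch of the same idea (Julia 0.4-era syntax; `Store` and `getstored` are illustrative names, not Bio.jl API):

```julia
type Store
    data::Dict{ASCIIString, Any}   # values are deliberately type-uncertain
end

# Function barrier: the caller states the expected type, and the
# assertion gives inference a concrete return type to propagate.
getstored{T}(s::Store, name::ASCIIString, ::Type{T}) = s.data[name]::T

s = Store(Dict{ASCIIString, Any}("xs" => [1.0, 2.0, 3.0]))
xs = getstored(s, "xs", Vector{Float64})  # inferred as Vector{Float64}
sum(xs)                                   # downstream code is type-stable
```

If the stored value is not of the asserted type, the assertion throws a TypeError rather than silently returning an `Any`, which keeps the failure local to the accessor.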

[julia-users] Re: Request for comments: best way to deal with type uncertainty.

2015-09-24 Thread Jeffrey Sarnoff
Missed the `tree = ...` line.
Can you post a version of the above that I can copy and paste (here, or 
link to a gist) to recreate the timings and look at it locally? 

On Thursday, September 24, 2015 at 8:41:32 PM UTC-4, Ben Ward wrote:
>
> As an update: we have tested two ways of fetching annotations, one in 
> which we try to enforce the return type and another in which we don't. I 
> don't understand why, but the one in which we don't enforce the type is 
> faster; it is also puzzling because the version that doesn't enforce the 
> type allocates memory and yet is still faster:
>
> function getannotations{T}(x::Phylogeny, name::ASCIIString, ::Type{T})
> x.annotations[T][name]::T 
> end
>
>
>
> function getannotations(x::Phylogeny, name::ASCIIString)
> for (k, v) in x.annotations
> if haskey(v, name)
> return(v[name])
>  end
> end
> error("No such key in the phylogeny")
> end
>
>
> julia> @time for i in 1:1; a = tree["Node Names", ASCIIString]; end
>
>   0.002090 seconds
>
> julia> @time for i in 1:1; a = tree["Node Names"]; end
>
>   0.001367 seconds (10.00 k allocations: 312.500 KB)
>
>
>
>
>
>  
>
> On Thursday, September 24, 2015 at 8:17:52 PM UTC+1, Ben Ward wrote:
>>
>> Hi Julia Users,
>>
>> I'm one of the Core-Devs in the BioJulia organisation, with a background 
>> in evolutionary biology/genetics, and, with a few other contributors, 
>> I'm writing Bio.jl's Phylo submodule.
>>
>> The primary type of this submodule is the Phylogeny, which is a composite 
>> type used to describe a model of evolution. At the very minimum it looks 
>> like this:
>>
>> type PhyNode
>> children::Vector{PhyNode}
>> parent::PhyNode
>> 
>> function PhyNode(children::Vector{PhyNode} = PhyNode[],
>>  parent = nothing)
>> x = new()
>> if parent != nothing
>> graft!(parent, x)
>> else
>> x.parent = x
>> end
>> x.children = PhyNode[]
>> for child in children
>> graft!(x, child)
>> end
>> return x
>> end
>> end
>>
>> type Phylogeny
>> root::PhyNode
>> rooted::Bool
>> rerootable::Bool
>>
>> Phylogeny() = new(PhyNode(), false, true)
>> end
>>
>> PhyNodes are types which link to their children and to their parent - 
>> they are the individual objects that form the tree structure. The Phylogeny 
>> type describes the overall tree, and contains a variable pointing to a 
>> PhyNode that forms the root of the tree, and determines whether the tree is 
>> rooted in the phylogenetic sense, and whether the phylogeny is re-rootable. 
>> So far so good. We can represent the structure of a phylogeny - a model of 
>> how various species are related through history.
>>
>> Here is where I'd like comments from the julia-users, if possible: With a 
>> phylogeny, often additional information is annotated to the tree, like 
>> branch lengths, confidence intervals, sequences, labels, colours for 
>> plotting, and so on. Well, we can do this with a Dict, and use PhyNodes as 
>> keys:
>>
>> typealias NodeAnnotation{T} Dict{PhyNode, T}
>>
>> We can then store these annotations in the Phylogeny type like this:
>> type Phylogeny{S <: AbstractString}
>> root::PhyNode
>> rooted::Bool
>> rerootable::Bool
>> annotations::Dict{S, Any}
>> end
>>
>> However, I don't like the type uncertainty of Any because if I'm correct, 
>> it could propagate up through a user's code. But we will always have some 
>> uncertainty, because we don't know in advance what the user might want to 
>> annotate the Phylogeny with - could be anything from simple float values, 
>> to other composite types.
>>
>> Am I correct that the uncertainty in getting and setting such annotations 
>> would propagate through the user's code when they deal with annotations?
>> If so, we have tried to think of ways to get around this. One idea was to 
>> store the NodeAnnotations in the phylogeny according to the type of their 
>> values, and then provide getter and setter methods that make the return 
>> type predictable from the types of the parameters passed in the method:
>>
>> type Phylogeny{S<:AbstractString}
>> root::PhyNode
>> rooted::Bool
>> rerootable::Bool
>> annotations::Dict{Type, Dict{S, NodeAnnotation{Any}}}
>> end
>>
>> function setannotation!{T}(x::Phylogeny, name::ASCIIString, ann::
>> NodeAnnotation{T})
>> if haskey(x.annotations,T)
>> x.annotations[T][name] = ann
>> else 
>> x.annotations[T] = [name => ann]
>> end
>> end 
>>
>> function getannotations{T}(x::Phylogeny, name::ASCIIString, ::Type{T})
>> x.annotations[T][name]::Dict{PhyNode, T}
>> end
>>
>> This seems to work and would indeed make getting and setting more 
>> type-predictable; the only annoying part is that Dicts get converted:
>>
>> julia> setannotation!(tree, "Node Names", NodeAnnotation{ASCIIString}())
>> Dict{PhyNode,ASCIIString} with 0 entries

Re: [julia-users] Re: Same native code, different performance

2015-09-24 Thread Erik Schnetter
On Thu, Sep 24, 2015 at 4:56 PM, Yichao Yu  wrote:

> On Thu, Sep 24, 2015 at 4:42 PM, Erik Schnetter 
> wrote:
> > In the native code above, the C function `pow(double, double)` is called
> in
> > both cases. Maybe `llvm_powi` is involved; if so, it is lowered to the
> same
> > `pow` function. The speed difference must have a different reason.
>
> Not necessarily. IIRC we use the openlibm functions by default, but
> LLVM will use the system libm version.
>

Good catch. (I can't reproduce this locally with either Julia 0.4 or
0.5, on either OS X or Linux -- I'm getting different assembler code
for both functions, both different from the versions shown here, so I can't
try my suggestion below.)

To test this, you could comment out or modify the `llvm_powi` definition of
`pow`, or you could rebuild Julia without Openlibm.

-erik


> > Sometimes there are random things occurring that invalidate benchmark
> > results. (This could be caused by how the compiled functions are aligned
> > respective to cache lines or page boundaries, etc. -- this is black
> magic I
> > like to invoke if there's a result that I can't explain. You can just
> ignore
> > my ramblings here.) You could restart Julia, reboot the machine, try a
> > different machine, define several identical functions `f` and `f_float`
> and
> > look at their speeds, etc...
> >
> > (I would have hoped that this function is translated to the equivalent of
> > `c=cos(x); c2=c*c; return c*c2`, but this is obviously not happening.)
> >
> > -erik
> >
> > On Thu, Sep 24, 2015 at 4:24 PM, Kristoffer Carlsson <
> kcarlsso...@gmail.com>
> > wrote:
> >>
> >> But the floating ones are the faster ones. Shouldn't it be the opposite?
> >
> >
> >
> >
> > --
> > Erik Schnetter 
> > http://www.perimeterinstitute.ca/personal/eschnetter/
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/
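
To see which `pow` ends up in the generated code, a hedged sketch (the `f`/`f_float` names mirror the thread, but the definitions here are guesses at the originals, and the output varies by platform and build):

```julia
# Integer vs. float exponent on the same base expression.
f(x)       = cos(x)^3     # integer power: may lower via llvm_powi
f_float(x) = cos(x)^3.0   # float power: calls the libm/openlibm pow

# Inspect what each compiles to; look for the call target in the output
# (e.g. a call into `pow` vs. a `powi` intrinsic).
@code_llvm f(1.0)
@code_llvm f_float(1.0)
@code_native f(1.0)
```

Comparing the `call` instructions in the two `@code_llvm` dumps should show directly whether the integer-power path goes through a different symbol than the float-power path.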