[julia-users] ANN: node-julia

2014-09-18 Thread Jeff Waller
It's usable enough now, so it's time to announce this.  node-julia 
https://github.com/waTeim/node-julia is a portal from node.js to Julia. 
Its purpose is partly to give the node/HTTP/JavaScript folks access to 
the excellence that is Julia, and vice versa to give the Julia folks access 
to the HTTP excellence that is node.js, but more fundamentally to allow these 
two groups to help and interact with each other.  Well, at least that's my 
hope.

It's definitely a work in progress, see the README 
https://github.com/waTeim/node-julia/blob/master/README.md for the 
limitations, and of course my previous message here 
https://groups.google.com/forum/#!topic/julia-users/D17iErZEtfg about the 
problem I'm currently dealing with on Linux.

If you are familiar with node.js already it can be obtained via npm here 
https://www.npmjs.org/package/node-julia


[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Alex
Hi Pawel,

AFAIK the rendering of the labels is actually handled by Cairo.jl (look for 
tex2pango in Cairo.jl 
https://github.com/JuliaLang/Cairo.jl/blob/master/src/Cairo.jl). There 
some TeX commands (\it, \rm, _, ^, etc) are translated into Pango markup 
format https://developer.gnome.org/pango/stable/PangoMarkupFormat.html. 
Additionally many/most TeX symbols are converted into unicode. More 
sophisticated commands, like \frac, are not handled at the moment.
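
As a concrete (untested) sketch of what that translation supports, loosely following the examples.jl linked in Paweł's message; the FramedPlot keyword spelling is taken from Winston's own examples and may vary by version:

using Winston

# \it, sub/superscripts and Greek letters map to Pango markup / Unicode; \frac will not.
p = FramedPlot(title  = "\\it{intensity}",
               xlabel = "\\Sigma x^2_i",
               ylabel = "\\Theta_i")
add(p, Curve(linspace(0, 1, 50), rand(50)))
display(p)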

It would be great to have more complete support, but I guess it is not so 
easy since it would require a custom typesetting system (or one delegates 
the rendering to some external program, but then all the text has to go 
through this). Maybe there is some TeX/MathML engine using Pango one could 
use?


Best,

Alex.


On Wednesday, 17 September 2014 22:59:38 UTC+2, Paweł Biernat wrote:

 Hi,

 Is it possible to use LaTeX labels in Winston?  In the examples.jl there 
 is a point [1] where some LaTeX-like syntax is used in a label.

 I was trying to push $\frac{1}{2}$ as a label and already tested various 
 escaped versions, including \$\\frac{1}{2}\$ and \\frac{1}{2} but I 
 didn't achieve the expected result.

 [1] https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18

 Best,
 Paweł



Re: [julia-users] Re: ANN: VideoIO.jl wrapper for ffmpeg/libav

2014-09-18 Thread Max Suster
Hi Kevin, 

Thanks a lot for your feedback.  

Indeed, I have test-run Simon´s wonderful GLPlot/Reactive script for 
realtime image acquisition and filtering. 
His example was very valuable to get started and for display.  However, I 
found it to be a bit unstable on my Mac OSX (it crashes when trying to record 
several sessions repeatedly). I need to spend more time to learn how to 
work with rendering textures in GLPlot, and I am not entirely sure how much 
more work is needed to apply a variety of image filters with this approach. 
However, I will try (if possible) to compare the performance of the two 
approaches once I have the GUI interface itself working effectively.   In 
contrast, ImageView is working very well and I am already doing some basic 
realtime acquisition and processing. 


   -  I will test the IO socket to a video stream and see how it works. 
Then I will approach the ffmpeg wrapper.
   -  I am using Tk to build the GUI interface for ImageView/Images 
   since it is clearly the easiest.  Regarding GTk, I tried installing both 
   gtk-2 and gtk-3 but ran into a lot of problems on OSX Mavericks due to the 
   apparent incompatibility of the homebrew installation of libgtk libraries with 
   Quartz.  This seems to be reported by a lot of people using OSX.  If we switch 
   from Tk to GTk entirely, will this compromise the use of Tk interfaces for 
   the GUIs with ImageView (e.g., displaying images on the Tk.Widget 
   Canvas)?

Cheers,
Max






On Thursday, September 18, 2014 2:36:56 AM UTC+2, Kevin Squire wrote:

 Hi Max,

 I am a recent newcomer to Julia and I am willing to help if I get some 
 input on the wrapper.


 Thanks for your recent inquiries, and your post here.  Any 
 help/contribution you can offer would be very welcome!
  

  I am working on a basic GUI interface to do realtime video processing 
 (aimed to support tracking and computer vision applications) using 
 the Tk/Images/VideoIO 
 packages.  I understand that there is already work going on an OpenCV 
 wrapper for Julia.


 Sounds good!  Have you considered Simon Danisch's OpenGL interface we were 
 talking about before, at least for display?  ImageView is a good choice, 
 too, though.  You should note that the default backend for ImageView will 
 be changing from Tk to Gtk soon 
 https://github.com/timholy/ImageView.jl/issues/46.

 How difficult would it be to get streaming to a memory buffer to work with 
 Julia rather than saving images to disk, as is possible with the ffmpeg 
 interface (e.g., av_open_input_stream)?  Any thoughts on this?

 http://stackoverflow.com/questions/5237175/process-video-stream-from-memory-buffer


 It should be doable, possibly now, although it depends on where the stream 
 is coming from.  VideoIO can already read from Julia IO streams, so if you 
 can open an IO stream in Julia, you can pass it to VideoIO via 
 VideoIO.open or VideoIO.openvideo.  There are some downsides to this, 
 though: 

- you can't seek (or at least, it would be pretty difficult to enable 
seeking)
- for some videos, ffmpeg has a hard time determining the video 
format, so you may need to supply that
- it hasn't been extensively tested, so it may need some tweaking to 
up the performance.  It does work with file IO 
https://github.com/kmsquire/VideoIO.jl/blob/master/test/avio.jl#L45-L70, 
at least.
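
 For concreteness, a minimal sketch of that IO-stream route (a local file 
 stands in for a real network stream here; the frame-reading call follows the 
 avio.jl test linked above and hasn't been re-verified):

 using VideoIO

 io = open("video.mp4")       # any readable Julia IO; a socket would work the same way
 v  = VideoIO.openvideo(io)   # pass the stream instead of a filename
 img = read(v)                # pull the first decoded frame
 close(v)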

 Looking at the link you sent, it seems that the second issue might be 
 solved with a call to av_probe_input_format in src/avio.jl here 
 https://github.com/kmsquire/VideoIO.jl/blob/master/src/avio.jl#L167-L201
 .

 Also, just for reference, it seems that av_open_input_stream is deprecated 
 https://www.ffmpeg.org/doxygen/0.10/group__lavf__decoding.html#ge38181e3d18f98c1e7761e3d39006b1c, 
 and that the recommended method for opening a stream is with 
 avformat_open_input, which is what is used in src/avio.jl.

 Why don't you take a look at src/avio.jl, and open up an issue (or pull 
 request) in the VideoIO.jl repository for tracking any problems or needed 
 changes.  If nothing else, if you can open an IO socket to a video stream 
 and then get it decoding with VideoIO, it would be useful to have that as 
 an example in the readme or under VideoIO.jl/examples.  

 If you're going to work at the lower levels of the library, you might also 
 want to look at 
 https://github.com/kmsquire/VideoIO.jl/blob/master/VideoIO_package_layout.md
 .

 Cheers!
Kevin
  

 On Thursday, September 18, 2014 12:41:56 AM UTC+2, Kevin Squire wrote:

 I certainly have the idea in my head to support video generation, but I just 
 created https://github.com/kmsquire/VideoIO.jl/issues/36 to track it.  

 It might be a little while before we get there, though--at least, I 
 won't have time to work on it anytime soon.  But 

[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Paweł Biernat
Well, PGFPlots certainly produces nicer plots, but I can't use it properly from 
the REPL.  The function plot() works and displays a plot, but I cannot 
add any annotations (Axis, Title, etc.).  Maybe I am missing something from 
the tutorial you linked to.
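
For what it's worth, a rough (untested) sketch of how annotations seem to be 
attached in PGFPlots.jl, going by the linked notebook:

using PGFPlots

x = linspace(0, 1, 50)
a = Axis(Plots.Linear(x, x.^2/2),
         xlabel = "\$x\$",
         ylabel = "\$\\frac{x^2}{2}\$",   # real LaTeX, since pgfplots renders through TeX
         title  = "a quadratic")
save("quad.pdf", a)                       # needs a working TeX toolchain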

W dniu czwartek, 18 września 2014 00:01:25 UTC+2 użytkownik Kaj Wiik 
napisał:

 Hmm, sorry, you asked about general LaTeX. I confirm that at least \frac 
 does not work.

 Have you tried PGFPlots:

 http://nbviewer.ipython.org/github/sisl/PGFPlots.jl/blob/master/doc/PGFPlots.ipynb
 https://github.com/sisl/PGFPlots.jl

 Kaj


 On Thursday, September 18, 2014 12:47:24 AM UTC+3, Kaj Wiik wrote:

 Hi!

 It works fine with version 0.3.0 Ubuntu 14.04:


 https://lh4.googleusercontent.com/-OgNRRRBsXTY/VBoBMh9nd3I/Bnk/u_1ZW1pfRAQ/s1600/winston.png


 On Wednesday, September 17, 2014 11:59:38 PM UTC+3, Paweł Biernat wrote:

 Hi,

 Is it possible to use LaTeX labels in Winston?  In the examples.jl there 
 is a point [1] where some LaTeX-like syntax is used in a label.

 I was trying to push $\frac{1}{2}$ as a label and already tested 
 various escaped versions, including \$\\frac{1}{2}\$ and \\frac{1}{2} 
 but I didn't achieve the expected result.

 [1] https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18

 Best,
 Paweł



[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Paweł Biernat
Thanks, this is missing from the documentation of the Winston package.  
Maybe someone should add a short note on the typesetting options, so people 
won't have to go to the mailing list to figure it out.

As for Pango rendering MathML there is an example at the end of the script 
gallery [1].  But I couldn't figure out how they achieved this as I don't 
know Cairo/Pango at all.

[1] http://www.pango.org/ScriptGallery

W dniu czwartek, 18 września 2014 08:16:33 UTC+2 użytkownik Alex napisał:

 Hi Pawel,

 AFAIK the rendering of the labels is actually handled by Cairo.jl (look 
 for tex2pango in Cairo.jl 
 https://github.com/JuliaLang/Cairo.jl/blob/master/src/Cairo.jl). 
 There some TeX commands (\it, \rm, _, ^, etc) are translated into Pango 
 markup format 
 https://developer.gnome.org/pango/stable/PangoMarkupFormat.html. 
 Additionally many/most TeX symbols are converted into unicode. More 
 sophisticated commands, like \frac, are not handled at the moment.

 It would be great to have more complete support, but I guess it is not so 
 easy since it would require a custom typesetting system (or one delegates 
 the rendering to some external program, but then all the text has to go 
 through this). Maybe there is some TeX/MathML engine using Pango one could 
 use?


 Best,

 Alex.


 On Wednesday, 17 September 2014 22:59:38 UTC+2, Paweł Biernat wrote:

 Hi,

 Is it possible to use LaTeX labels in Winston?  In the examples.jl there 
 is a point [1] where some LaTeX-like syntax is used in a label.

 I was trying to push $\frac{1}{2}$ as a label and already tested 
 various escaped versions, including \$\\frac{1}{2}\$ and \\frac{1}{2} 
 but I didn't achieve the expected result.

 [1] https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18

 Best,
 Paweł



[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Andreas Lobinger
Hello colleague,

on one of these rainy Sunday afternoons I sat there and thought: Hey, this 
can't be that complicated...

Math typesetting (still) seems to be some black art, and an awful lot of 
heuristics are put into code and only there. So there is no material in 
algorithmic form available that could be re-implemented somewhere. I mean, 
even TeX itself is available only as a literate program.

So options I considered:
 * wait until it's integrated in Pango (you will wait a long time)
 * try to integrate a latex/pdflatex run to produce dvi or pdf and use 
that. And find a dvi or pdf to cairo renderer.
 * run mathjax.js via a JavaScript engine to produce svg and use Rsvg or similar 
to render on cairo.
 * rewrite mathjax.js in julia (might have licensing issues).


 





[julia-users] Make a Copy of an Immutable with Specific Fields Changed

2014-09-18 Thread Pontus Stenetorp
Everyone,

As a former Python addict, I recently found myself wanting to generate
a copy of an immutable object with only a single or a few values
changed.  I tried to find an answer to this question but either my
search query skills or my lack of energy towards the end of my workday
hindered me.

Concretely, is there a nice idiomatic way to achieve the following:

immutable Foo
a::Int
b::Int
c::Int
end

o = Foo(17, 17, 17)
p = Foo(o.a, 4711, o.c)

When you know that you want to replace the value for `b`, is there a way 
to avoid explicitly enumerating the fields for which you simply want to 
copy the previous value?   For Python and named tuples, the operation 
would be `_replace`. [1]

Regards,
Pontus

[1]: 
https://docs.python.org/3/library/collections.html#collections.somenamedtuple._replace


Re: [julia-users] Make a Copy of an Immutable with Specific Fields Changed

2014-09-18 Thread Rafael Fourquet
I think the idiomatic way remains to be designed:
https://github.com/JuliaLang/julia/issues/5333.
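
In the meantime, a hedged workaround sketch (0.3-era reflection via names(); 
replacefield is a made-up helper here, not an existing API), using the Foo type 
from the question:

# Copy an immutable, overriding a single field by name (illustrative only).
function replacefield(x, field::Symbol, value)
    T = typeof(x)
    vals = Any[f == field ? value : getfield(x, f) for f in names(T)]
    T(vals...)
end

o = Foo(17, 17, 17)
p = replacefield(o, :b, 4711)    # Foo(17, 4711, 17)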


[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Andreas Lobinger
It's there in fine print: The typesetting of MathML is done via a component 
of firefox and then rendered via Cairo/Pango.




Re: [julia-users] Re: Article on `@simd`

2014-09-18 Thread Gunnar Farnebäck
There are still three arguments to max in the last of those examples. 
Actually it's not clear that you can make an equivalent expression with min 
and max. Functionally (with intended use)
x[i] = max(a, min(b, x[i]))
does the same thing as the earlier examples but it expands to
x[i] = ifelse(ifelse(b < x[i], b, x[i]) < a, a, ifelse(b < x[i], b, x[i]))
which should be hard for a compiler to optimize to the earlier examples 
since they don't give the same result in the degenerate case of a > b.

A closer correspondence is given by the clamp function, which is implemented 
as a nested ifelse in the same way as example 2 (although in the opposite 
order, so it also differs for a > b).
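
As an aside, a small runnable sketch of that branch-free pattern wrapped in an 
@simd loop (assumes a <= b; an illustration, not the article's exact example):

function clip!(x, a, b)
    @simd for i = 1:length(x)
        @inbounds x[i] = ifelse(x[i] < a, a, ifelse(x[i] > b, b, x[i]))
    end
    return x
end

clip!(rand(10), 0.25, 0.75)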

Den onsdagen den 17:e september 2014 kl. 16:28:45 UTC+2 skrev Arch Robison:

 Thanks.  Now fixed.

 On Wed, Sep 17, 2014 at 4:14 AM, Gunnar Farnebäck gun...@lysator.liu.se 
 wrote:

 In the section The Loop Body Should Be Straight-Line Code, the first 
 and second code example look identical with ifelse constructions. I assume 
 the first one should use ? instead. Also the third code example has a stray 
 x[i] > a argument to the max function.



[julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Florian Oswald

# define a variable gamma:

gamma = 1.4
mgamma = 1.0-gamma

julia> mgamma 
-0.3999

# this works:

julia> -0.3999^2.5 
-0.10119288512475567

# this doesn't:

julia> mgamma^2.5 
ERROR: DomainError 
in ^ at math.jl:252



[julia-users] Re: unexpected domain error for ^(float,float)

2014-09-18 Thread Ivar Nesje
This doesn't work either

julia> (-0.3999)^2.5 
ERROR: DomainError 
in ^ at math.jl:262

try to convert one (or both) of the arguments to complex numbers

julia> complex(mgamma)^2.5
julia> mgamma^complex(2.5)
julia> complex(mgamma)^complex(2.5)

Which all results in:

3.0981385716323084e-17 + 0.10119288512538809im 

Regards Ivar


kl. 12:24:00 UTC+2 torsdag 18. september 2014 skrev Florian Oswald følgende:


 # define a variable gamma:

 gamma = 1.4
 mgamma = 1.0-gamma

 julia mgamma 
 -0.3999

 # this works:

 julia -0.3999^2.5 
 -0.10119288512475567

 # this doesn't:

 julia mgamma^2.5 
 ERROR: DomainError 
 in ^ at math.jl:252



Re: [julia-users] dispatch based on expression head

2014-09-18 Thread Tony Fong
Actually, having dispatch based on expression head could really help Lint 
lint itself. One thing that keeps bugging me is that currently I can never 
be sure I have covered all possible Expr head types. For example, the other 
day @pao pointed out that "bitstype 8 MyBitsType" doesn't lint. I have never 
used bitstype so I would not have known this. Having a discoverable way to 
go through all concrete subtypes of an abstract Expr would help catch lint 
gaps as the language evolves.
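
For illustration only (this is not Lint's actual design), one way to route on 
ex.head today is a handler table keyed by head, whose fallback at least makes 
uncovered heads visible (0.3-era syntax, hypothetical names):

const HANDLERS = Dict{Symbol,Function}()
HANDLERS[:call]     = ex -> println("call to ", ex.args[1])
HANDLERS[:bitstype] = ex -> println("bitstype declaration: ", ex.args)

function lintexpr(ex::Expr)
    h = get(HANDLERS, ex.head, ex -> warn("no lint rule for head $(ex.head)"))
    h(ex)
end

lintexpr(:(f(x) + 1))                      # head :call
lintexpr(parse("bitstype 8 MyBitsType"))   # head :bitstype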

On Sunday, September 14, 2014 4:37:34 PM UTC+7, Stefan Karpinski wrote:

 Avoiding allocation during parsing could really be helpful. It would be 
 possible to make the number of subexpressions a type parameter.



[julia-users] Moving/stepen average without loop ? Is posible ?any command ?

2014-09-18 Thread paul analyst
I have a vector x = int(randbool(100)). 
a/ How can I quickly (without a loop?) get 10 vectors of length 10, with each 
field holding the average of the next 10 fields of the vector x (a 
moving/stepped average of 10 values at step = 10)? 
b/ How can I get the 99 vectors of length 10 over the next 10 fields of the 
vector x (a moving/stepped average of 10 values at step = 1)?

Paul
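
A sketch of one loop-free reading of the question (0.3-era int/randbool, as in 
the post; not from the thread itself):

x = int(randbool(100))

# (a) window 10, step 10: reshape into a 10x10 matrix and average each column.
a = mean(reshape(x, 10, 10), 1)

# (b) window 10, step 1: a comprehension gives length(x)-9 = 91 window means here.
b = [mean(x[i:i+9]) for i in 1:length(x)-9]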


Re: [julia-users] Re: ANN: VideoIO.jl wrapper for ffmpeg/libav

2014-09-18 Thread Tim Holy
Glad to hear that you're getting some success with ImageView.

Being able to install Gtk.jl automatically on the Mac is brand-new, and some 
hiccups are expected---there were many such hiccups in the early days of 
Tk.jl. Hopefully your installation problems can be resolved; please do work on 
that with Elliot over at Homebrew.jl.

I'm not sure I fully understand your comment about Tk being much easier; 
https://github.com/JuliaLang/Gtk.jl/blob/master/doc/usage.md
shows a syntax which is pretty similar to that of Tk's. That said, if you find 
usability issues please do report them; like all packages, Gtk.jl is a work in 
progress and can be further improved (documentation included).

There are several reasons ImageView will shift to Gtk, of which many can be 
found in the unresolved issue pages of both Tk and ImageView:
https://github.com/JuliaLang/Tk.jl/issues
https://github.com/timholy/ImageView.jl/issues
and others include undiagnosed segfaults (like one reported in ProfileView).
In contrast, after installation Gtk.jl has seemed like smooth sailing 
(admittedly with fewer testers, unfortunately, since it has been hard to 
install). The Gtk libraries have a much wider user base and larger development 
community, so as a toolkit it's in substantially better shape than Tk.

The other reason to switch is because Gtk just offers a lot more. You can force 
redrawing of just a portion of the screen, which can yield massive 
improvements in performance. (Unrelatedly but similarly, I've timed a 5-fold 
speedup in plotting lines simply by switching Winston from Tk to Gtk.) Gtk has 
Glade. At least on Linux, the open file dialog of Tk is not very nice, and 
dialogs in general are painful with Tk. Gtk has many additional useful 
widgets. In principle you can more easily integrate OpenGL. And so on.

Once the switch is made, then yes, any extensions to ImageView's functionality 
will have to be Gtk, not Tk. If you want to stick with Tk for a while, you can 
always pin ImageView to an older version. If there are Tk enthusiasts, we can 
also leave a tk branch around for the community to use & improve. But I expect 
to put my own focus on Gtk.

Best,
--Tim


On Thursday, September 18, 2014 12:43:53 AM Max Suster wrote:
 Hi Kevin,
 
 Thanks a lot for your feedback.
 
 Indeed, I have test run Simon´s wonderful GLPlot/Reactive script for
 realtime image acquisition and filtering.
 His example was very valuable to get started and for display.  However, I
 found it to be a bit unstable on my Mac OSX (crashes when trying to record
 repeatedly several sessions). I need to spend more time to learn how to
 work with rendering textures in GLPlot and I am not entirely sure how much
 more work is needed to apply a variety of image filters with this approach.
 However, I will try (if possible) to compare the performance of the two
 approaches once I have the GUI interface itself working effectively.   In
 contrast, ImageView is working very well and I am already doing some basic
 realtime acquisition and processing.
 
 
-  I will test the IO socket to a video stream and see how it works.
 Then I will approach the ffmpeg wrapper.
-  I am using Tk to build the GUI interface for ImageView/Images
since it it clearly the easiest.  Regarding GTk, I tried installing both
gtk-2 and gtk-3 but ran into a lot of problems on OSX Mavericks due to
 the apparent incompatibility of homebrew installation of libgtk libraries
 with Quartz.  This seems to reported by a lot of people using OSX.  If we
 switch from Tk to  GTk entirely, will this compromise the use of Tk
 interfaces for the GUIs with ImageView (e.g., displaying images on the
 Tk.Widget  Canvas)?
 
 Cheers,
 Max
 
 On Thursday, September 18, 2014 2:36:56 AM UTC+2, Kevin Squire wrote:
  Hi Max,
  
  I am a recent newcomer to Julia and I am willing to help if I get some
  
  input on the wrapper.
  
  Thanks for your recent inquiries, and your post here.  Any
  help/contribution you can offer would be very welcome!
  
   I am working on a basic GUI interface to do realtime video processing
  
  (aimed to support tracking and computer vision applications) using
  theTk/Images/VideoIO packages.  I understand that there is already work
  going on an OpenCV wrapper for Julia.
  
  Sounds good!  Have you considered Simon Danisch's OpenGL interface we were
  talking about before, at least for display?  ImageView is a good choice,
  too, though.  You should note that the default backend for ImageView will
  be changing from Tk to Gtk soon
  https://github.com/timholy/ImageView.jl/issues/46.
  
  How difficult would it be to get streaming to memory buffer to work with
  
  Julia rather than saving images to disk as it is possible with the ffmpeg
  interface (e.g., av_open_input_stream).Any thoughts on this?
  
  http://stackoverflow.com/questions/5237175/process-video-stream-from-memo
  ry-buffer 
  It should be doable, possibly now, although it depends on where the 

Re: [julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Tim Holy
Paweł, there's no one better than you to do that! Everyone here is a 
volunteer, and contributing documentation is a terrific way to help out.

If you just grep for latex in the Winston source, that should help you find all 
the relevant information.

Best,
--Tim

On Thursday, September 18, 2014 02:07:01 AM Paweł Biernat wrote:
 Thanks, this is missing from the documentation of the Winston package.
 Maybe someone should add a short info on the typesetting options, so people
 won't have to go to the mailing list to figure it out.
 
 As for Pango rendering MathML there is an example at the end of the script
 gallery [1].  But I couldn't figure out how they achieved this as I don't
 know Cairo/Pango at all.
 
 [1] http://www.pango.org/ScriptGallery
 
 W dniu czwartek, 18 września 2014 08:16:33 UTC+2 użytkownik Alex napisał:
  Hi Pawel,
  
  AFAIK the rendering of the labels is actually handled by Cairo.jl (look
  for tex2pango in Cairo.jl
  https://github.com/JuliaLang/Cairo.jl/blob/master/src/Cairo.jl). There some TeX commands (\it, \rm, _, ^, etc) are
  translated into Pango markup format
  https://developer.gnome.org/pango/stable/PangoMarkupFormat.html.
  Additionally many/most TeX symbols are converted into unicode. More
  sophisticated commands, like \frac, are not handled at the moment.
  
  It would be great to have more complete support, but I guess it is not so
  easy since it would require a custom typesetting system (or one delegates
  the rendering to some external program, but then all the text has to go
  through this). Maybe there is some TeX/MathML engine using Pango one could
  use?
  
  
  Best,
  
  Alex.
  
  On Wednesday, 17 September 2014 22:59:38 UTC+2, Paweł Biernat wrote:
  Hi,
  
  Is it possible to use LaTeX labels in Winston?  In the examples.jl there
  is a point [1] where some LaTeX-like syntax is used in a label.
  
  I was trying to push $\frac{1}{2}$ as a label and already tested
  various escaped versions, including \$\\frac{1}{2}\$ and \\frac{1}{2}
  but I didn't achieve the expected result.
  
  [1] https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18
  
  Best,
  Paweł



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Tim Holy
http://docs.julialang.org/en/latest/manual/faq/#why-does-julia-give-a-domainerror-for-certain-seemingly-sensible-operations

On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
 # define a variable gamma:
 
 gamma = 1.4
 mgamma = 1.0-gamma
 
 julia mgamma
 -0.3999
 
 # this works:
 
 julia -0.3999^2.5
 -0.10119288512475567
 
 # this doesn't:
 
 julia mgamma^2.5
 ERROR: DomainError
 in ^ at math.jl:252



Re: [julia-users] Re: ANN: VideoIO.jl wrapper for ffmpeg/libav

2014-09-18 Thread Max Suster
Thanks for the thoughtful reply Tim.  

I have nothing against moving to Gtk immediately - in fact I have been 
testing it this morning with X11
to figure out how I can replace the Tk code I wrote with Gtk.   I agree 
that using Gtk should not be any harder than using Tk.
There is no special reason to stick with Tk, only that it is the only thing 
that worked on my OS.
Gtk without Quartz is not that great . . .

However, the problem is indeed first and foremost the installation of Gtk 
with the libgtk libraries on OSX Mavericks. 
I am still struggling this morning to get it working smoothly.  



 

On Thursday, September 18, 2014 1:34:59 PM UTC+2, Tim Holy wrote:

 Glad to hear that you're getting some success with ImageView. 

 Being able to install Gtk.jl automatically on the Mac is brand-new, and 
 some 
 hiccups are expected---there were many such hiccups in the early days of 
 Tk.jl. Hopefully your installation problems can be resolved; please do 
 work on 
 that with Elliot over at Homebrew.jl. 

 I'm not sure I fully understand your comment about Tk being much easier; 
 https://github.com/JuliaLang/Gtk.jl/blob/master/doc/usage.md 
 shows a syntax which is pretty similar to that of Tk's. That said, if you 
 find 
 usability issues please do report them; like all packages, Gtk.jl is a 
 work in 
 progress and can be further improved (documentation included). 

 There are several reasons ImageView will shift to Gtk, of which many can 
 be 
 found in the unresolved issue pages of both Tk and ImageView: 
 https://github.com/JuliaLang/Tk.jl/issues 
 https://github.com/timholy/ImageView.jl/issues 
 and others include undiagnosed segfaults (like one reported in 
 ProfileView). 
 In contrast, after installation Gtk.jl has seemed like smooth sailing 
 (admittedly with fewer testers, unfortunately, since it has been hard to 
 install). The Gtk libraries have a much wider user base and larger 
 development 
 community, so as a toolkit it's in substantially better shape than Tk. 

 The other reason to switch is because Gtk just offers a lot more. You can 
 force 
 redrawing of just a portion of the screen, which can yield massive 
 improvements in performance. (Unrelatedly but similarly, I've timed a 
 5-fold 
 speedup in plotting lines simply by switching Winston from Tk to Gtk.) Gtk 
 has 
 Glade. At least on Linux, the open file dialog of Tk is not very nice, and 
 dialogs in general are painful with Tk. Gtk has many additional useful 
 widgets. In principle you can more easily integrate OpenGL. And so on. 

 Once the switch is made, then yes, any extensions to ImageView's 
 functionality 
 will have to be Gtk, not Tk. If you want to stick with Tk for a while, you 
 can 
 always pin ImageView to an older version. If there are Tk enthusiasts, we 
 can 
 also leave a tk branch around for the community to use  improve. But I 
 expect 
 to put my own focus on Gtk. 

 Best, 
 --Tim 


 On Thursday, September 18, 2014 12:43:53 AM Max Suster wrote: 
  Hi Kevin, 
  
  Thanks a lot for your feedback. 
  
  Indeed, I have test run Simon´s wonderful GLPlot/Reactive script for 
  realtime image acquisition and filtering. 
  His example was very valuable to get started and for display.  However, 
 I 
  found it to be a bit unstable on my Mac OSX (crashes when trying to 
 record 
  repeatedly several sessions). I need to spend more time to learn how to 
  work with rendering textures in GLPlot and I am not entirely sure how 
 much 
  more work is needed to apply a variety of image filters with this 
 approach. 
  However, I will try (if possible) to compare the performance of the two 
  approaches once I have the GUI interface itself working effectively.   
 In 
  contrast, ImageView is working very well and I am already doing some 
 basic 
  realtime acquisition and processing. 
  
  
 -  I will test the IO socket to a video stream and see how it 
 works. 
  Then I will approach the ffmpeg wrapper. 
 -  I am using Tk to build the GUI interface for ImageView/Images 
 since it it clearly the easiest.  Regarding GTk, I tried installing 
 both 
 gtk-2 and gtk-3 but ran into a lot of problems on OSX Mavericks due 
 to 
  the apparent incompatibility of homebrew installation of libgtk 
 libraries 
  with Quartz.  This seems to reported by a lot of people using OSX.  If 
 we 
  switch from Tk to  GTk entirely, will this compromise the use of Tk 
  interfaces for the GUIs with ImageView (e.g., displaying images on the 
  Tk.Widget  Canvas)? 
  
  Cheers, 
  Max 
  
  On Thursday, September 18, 2014 2:36:56 AM UTC+2, Kevin Squire wrote: 
   Hi Max, 
   
   I am a recent newcomer to Julia and I am willing to help if I get some 
   
   input on the wrapper. 
   
   Thanks for your recent inquiries, and your post here.  Any 
   help/contribution 

Re: [julia-users] Lint.jl status update

2014-09-18 Thread Patrick O'Leary
It was shockingly easy once I figured out how to get the bleepin regex to 
work. And how to use rx syntax.

On Thursday, September 18, 2014 12:29:46 AM UTC-5, Tony Fong wrote:

 This is so cool.

 On Thursday, September 18, 2014 11:43:38 AM UTC+7, Patrick O'Leary wrote:

 On Sunday, September 14, 2014 12:12:49 AM UTC-5, Viral Shah wrote:

 I wonder if these can be integrated into LightTable and IJulia, so that 
 they always automatically are running in the background on all code one 
 writes.


 For Emacs users, I threw together a Flycheck extension for Lint.jl:

 https://gist.github.com/pao/e65029bf88650e592929

 Note that if the `julia` executable is not on your path, you need to 
 customize--*not* setq--the variable `flycheck-julia-lint-executable`; if 
 you try to use setq it just goes buffer-local. I have no idea why Flycheck 
 is designed that way.



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Patrick O'Leary
Seems like the literal -0.4^2.5 should throw the same error, though?

On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:


 http://docs.julialang.org/en/latest/manual/faq/#why-does-julia-give-a-domainerror-for-certain-seemingly-sensible-operations
  

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
  # define a variable gamma: 
  
  gamma = 1.4 
  mgamma = 1.0-gamma 
  
  julia mgamma 
  -0.3999 
  
  # this works: 
  
  julia -0.3999^2.5 
  -0.10119288512475567 
  
  # this doesn't: 
  
  julia mgamma^2.5 
  ERROR: DomainError 
  in ^ at math.jl:252 



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Andreas Noack
because it parses as -(0.4^2.5)

Med venlig hilsen

Andreas Noack

2014-09-18 8:54 GMT-04:00 Florian Oswald florian.osw...@gmail.com:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick.ole...@gmail.com
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-
 julia-give-a-domainerror-for-certain-seemingly-sensible-operations

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
  # define a variable gamma:
 
  gamma = 1.4
  mgamma = 1.0-gamma
 
  julia mgamma
  -0.3999
 
  # this works:
 
  julia -0.3999^2.5
  -0.10119288512475567
 
  # this doesn't:
 
  julia mgamma^2.5
  ERROR: DomainError
  in ^ at math.jl:252





Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Ivar Nesje
Operator precedence makes them parse very differently.

julia> :(-0.4^-2.5)
:(-(0.4^-2.5))


kl. 14:54:26 UTC+2 torsdag 18. september 2014 skrev Florian Oswald følgende:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com 
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-
 julia-give-a-domainerror-for-certain-seemingly-sensible-operations 

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
  # define a variable gamma: 
  
  gamma = 1.4 
  mgamma = 1.0-gamma 
  
  julia mgamma 
  -0.3999 
  
  # this works: 
  
  julia -0.3999^2.5 
  -0.10119288512475567 
  
  # this doesn't: 
  
  julia mgamma^2.5 
  ERROR: DomainError 
  in ^ at math.jl:252 




Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Jutho
because it is not recognized/parsed as literal but as the application of a 
unary minus, which has lower precedence than ^

I guess it is not possible to give binary minus a lower precedence than ^ 
and unary minus a higher precedence, since these are just different 
methods of the same function/operator.

Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com 
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-
 julia-give-a-domainerror-for-certain-seemingly-sensible-operations 

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
  # define a variable gamma: 
  
  gamma = 1.4 
  mgamma = 1.0-gamma 
  
  julia mgamma 
  -0.3999 
  
  # this works: 
  
  julia -0.3999^2.5 
  -0.10119288512475567 
  
  # this doesn't: 
  
  julia mgamma^2.5 
  ERROR: DomainError 
  in ^ at math.jl:252 




Re: [julia-users] Re: ANN: ApproxFun v0.0.3 with general linear PDE solving on rectangles

2014-09-18 Thread Sheehan Olver
Just pushed an update so that the below is possible, for automatically 
approximating a function with a singularity.  This seems in the same vein 
as what you were suggesting. 

 Fun(x->exp(x)/sqrt(1-x.^2),JacobiWeightSpace(-.5,-.5))

On Monday, September 15, 2014 8:11:18 PM UTC+10, Gabriel Mitchell wrote:

 Here’s a partial list of features in Chebfun not in ApproxFun: 
 1)Automatic edge detection and domain splitting 

 The automatic splitting capability of chebfun is definitely really cool, 
 but it always seemed to me to be a bit more than one would need for most 
 use cases. That is, if I am defining some function like

 f =  Fun(g::Function,[-1,1])

 where g is composed of things like absolute values and step functions I 
 might need to do something sophisticated to figure out how to break up the 
 domain, but if I instead pass something like

 f =  Fun(g::PiecewiseFunction,[-1,1])

 which has some g that has been annotated by the user in some obvious way 
 (or semiautomatically, given some basic rules for composing PiecewiseFunction 
 types under standard operations) I might have a much easier time. In 
 practice, when setting up problems in the first place one is often paying 
 attention to where discontinuities are anyway, so providing such a 
 mechanism might even be a natural way to help someone set up their problem. 

 Maybe this kind of thing is incompatible with ApproxFun (sorry, I didn't 
 look in detail yet). But at any rate, super cool work! If there are any 
 plans to start a gallery of examples ala chebfun I would be happy to 
 contribute some from population dynamics.

 On Friday, September 12, 2014 1:43:27 AM UTC+2, Sheehan Olver wrote:


 Chebfun is a lot more full featured, and ApproxFun is _very_ 
 rough around the edges.  ApproxFun will probably end up a very different 
 animal than chebfun: right now the goal is to tackle PDEs on a broader 
 class of domains, something I think is beyond the scope of Chebfun due to 
 issues with Matlab's speed, memory management, etc.   

 Here’s a partial list of features in Chebfun not in ApproxFun: 

 1)Automatic edge detection and domain splitting 
 2)Support for delta functions 
 3)Built-in time stepping (pde15s) 
 4)Eigenvalue problems 
 5)Automatic nonlinear ODE solver 
 6)Operator exponential 
 7)Smarter constructor for determining convergence 
 8)Automatic differentiation 

 I have no concrete plans at the moment of adding these features, though 
 eigenvalue problems and operator exponentials will likely find their way in 
 at some point.   


 Sheehan 


 On 12 Sep 2014, at 12:14 am, Steven G. Johnson steve...@gmail.com 
 wrote: 

  This is great! 
  
  At this point, what are the major differences in functionality between 
 ApproxFun and Chebfun? 



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Patrick O'Leary
Haha, yeah, forgot about that.

On Thursday, September 18, 2014 8:00:13 AM UTC-5, Ivar Nesje wrote:

 Operator precedence makes them parse very different.

 *julia **:(-0.4^-2.5)*

 *:(-(0.4^-2.5))*


 kl. 14:54:26 UTC+2 torsdag 18. september 2014 skrev Florian Oswald 
 følgende:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-
 julia-give-a-domainerror-for-certain-seemingly-sensible-operations 

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
  # define a variable gamma: 
  
  gamma = 1.4 
  mgamma = 1.0-gamma 
  
  julia mgamma 
  -0.3999 
  
  # this works: 
  
  julia -0.3999^2.5 
  -0.10119288512475567 
  
  # this doesn't: 
  
  julia mgamma^2.5 
  ERROR: DomainError 
  in ^ at math.jl:252 




Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Florian Oswald
i see!

julia> :(-0.4^-2.5)
:(-(0.4^-2.5))

is good to know! didn't think of this at all so far.

On 18 September 2014 14:01, Jutho juthohaege...@gmail.com wrote:

 because it is not recognized/parsed as literal but as the application of a
 unary minus, which has lower precedence than ^

 I guess it is not possible to give binary minus a lower precedence than ^
 and unary minus of higher precedence, since these are just different
 methods of the same function/operator.

 Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-jul
 ia-give-a-domainerror-for-certain-seemingly-sensible-operations

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
  # define a variable gamma:
 
  gamma = 1.4
  mgamma = 1.0-gamma
 
  julia mgamma
  -0.3999
 
  # this works:
 
  julia -0.3999^2.5
  -0.10119288512475567
 
  # this doesn't:
 
  julia mgamma^2.5
  ERROR: DomainError
  in ^ at math.jl:252





[julia-users] Cell indexing

2014-09-18 Thread nils . gudat
I'm sure this is an extremely trivial question, but I can't seem to find an 
answer anywhere. I'm trying to store a couple of matrices of different size 
in a cell object.
In Matlab, I'd do the following:

 A = cell(1, 10);
 for i = 1:10
 A{i} = matrix(:, :, i);
 end

However, I can't figure out how to access the cell array in Julia:

 A = cell(1, 10)
 A

1x10 Array{Any,2}:
 #undef  #undef  #undef  #undef  #undef  #undef  #undef  #undef  #undef  #undef

 A[1]
access to undefined reference
 A[1, 1]
access to undefined reference
 A{1}
type: instantiate type: expected TypeConstructor, got Array{Any, 2}

Apologies for posting such a basic question, but I haven't been able to figure 
this out even after reading 
http://julia.readthedocs.org/en/latest/manual/arrays/



Re: [julia-users] Cell indexing

2014-09-18 Thread John Myles White
Hi Nils,

Try something like:

A = Array(Any, 10)
for i in 1:10
    A[i] = randn(1, 10)
end
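
Once the slots are filled, plain square-bracket indexing works (no Matlab-style braces):

A[1]          # the first stored matrix
A[1][1, 5]    # element (1,5) of that matrix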

On Sep 18, 2014, at 6:47 AM, nils.gu...@gmail.com wrote:

 I'm sure this is an extremely trivial question, but I can't seem to find an 
 answer anywhere. I'm trying to store a couple of matrices of different size 
 in a cell object.
 In Matlab, I'd do the following:
 
  A = cell(1, 10);
  for i = 1:10
  A{i} = matrix(:, :, i);
  end
 
 However, I can't figure out how to access the cell array in Julia:
 
  A = cell(1, 10)
  A
 1x10 Array{Any,2}:
  #undef  #undef  #undef  #undef  #undef #undef  #undef  #undef  #undef  
 #undef  #undef
 
  A[1]
 access to undefined reference
  A[1, 1]
 access to undefined reference
  A{1}
 type: instantiate type: expected TypeConstructor, got Array{Any, 2}
 
 Apologies for posting such a basic question, but I haven't been able to 
 figure this out even after reading 
 http://julia.readthedocs.org/en/latest/manual/arrays/
 



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Gunnar Farnebäck
It's not like Julia is doing anything strange or uncommon here. Most people 
would be really surprised if -10² meant positive 100.

Den torsdagen den 18:e september 2014 kl. 15:01:44 UTC+2 skrev Jutho:

 because it is not recognized/parsed as literal but as the application of a 
 unary minus, which has lower precedence than ^

 I guess it is not possible to give binary minus a lower precedence than ^ 
 and unary minus of higher precedence, since these are just different 
 methods of the same function/operator.

 Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-
 julia-give-a-domainerror-for-certain-seemingly-sensible-operations 

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
  # define a variable gamma: 
  
  gamma = 1.4 
  mgamma = 1.0-gamma 
  
  julia mgamma 
  -0.3999 
  
  # this works: 
  
  julia -0.3999^2.5 
  -0.10119288512475567 
  
  # this doesn't: 
  
  julia mgamma^2.5 
  ERROR: DomainError 
  in ^ at math.jl:252 




Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Florian Oswald
well, I guess most computer scientists would be surprised. writing on a
piece of paper

-10^2

and

-(10^2)

I think most people are going to say the first expression is 100 and the
second is -100. I take the point that what I did was a bit stupid and Julia
is not making any mistake here.

On 18 September 2014 16:50, Gunnar Farnebäck gun...@lysator.liu.se wrote:

 It's not like Julia is doing anything strange or uncommon here. Most
 people would be really surprised if -10² meant positive 100.

 Den torsdagen den 18:e september 2014 kl. 15:01:44 UTC+2 skrev Jutho:

 because it is not recognized/parsed as literal but as the application of
 a unary minus, which has lower precedence than ^

 I guess it is not possible to give binary minus a lower precedence than ^
 and unary minus of higher precedence, since these are just different
 methods of the same function/operator.

 Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-jul
 ia-give-a-domainerror-for-certain-seemingly-sensible-operations

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
  # define a variable gamma:
 
  gamma = 1.4
  mgamma = 1.0-gamma
 
  julia mgamma
  -0.3999
 
  # this works:
 
  julia -0.3999^2.5
  -0.10119288512475567
 
  # this doesn't:
 
  julia mgamma^2.5
  ERROR: DomainError
  in ^ at math.jl:252





[julia-users] Re: ANN: VideoIO.jl wrapper for ffmpeg/libav

2014-09-18 Thread Simon Danisch
Applying various filters needs to be done by hand right now, but it could 
be easy to implement, depending on your demands ;)

Right now, I don't really have much time to do much myself, but feel free 
to ask me anything that might come up!
Can you report the stability issues? I guess you're doing something 
uncommon, since things have been quite stable so far!

If you want, feel free to just open issues with feature requests; that way 
I have a place to discuss things and a list of wanted features.
It's a lot more satisfying to implement something when someone has already 
expressed that it's needed :)




Am Dienstag, 19. August 2014 09:32:03 UTC+2 schrieb Kevin Squire:

 VideoIO.jl https://github.com/kmsquire/VideoIO.jl is a wrapper around 
 libav/ffmpeg libraries, which are the defacto open-source libraries for 
 video IO.  At this point, the library offers an easy way to open video 
 files or a camera and read sequences of images, as either arrays, or 
 optionally as `Image` objects, using the `Images` package.  Support for 
 reading audio and other data streams from media files is planned but not 
 yet supported.

 


 The package has been developed on Linux, and seems to work well there. 
 Installation and functionality has been minimally tested on Macs, but not 
 yet on Windows, so it would be great if, especially Mac and Windows users 
 could test both camera and file reading, and report any issues.

 See https://github.com/kmsquire/VideoIO.jl and the README for more 
 information.

 Cheers!
Kevin
  


Re: [julia-users] Re: Article on `@simd`

2014-09-18 Thread Arch Robison
Thanks for pointing out the problems, particularly the a > b issue.  I've
reworked that section.

On Thu, Sep 18, 2014 at 4:57 AM, Gunnar Farnebäck gun...@lysator.liu.se
wrote:

 There are still three arguments to max in the last of those examples.
 Actually it's not clear that you can make an equivalent expression with min
 and max. Functionally (with intended use)
 x[i] = max(a, min(b, x[i]))
 does the same thing as the earlier examples but it expands to
 x[i] = ifelse(ifelse(b < x[i], b, x[i]) < a, a, ifelse(b < x[i], b, x[i]))
 which should be hard for a compiler to optimize to the earlier examples
 since they don't give the same result in the degenerate case of a > b.

 A closer correspondence is given by the clamp function which is
 implemented as a nested ifelse in the same way as example 2 (although in
 the opposite order, so it also differs for a > b).

 Den onsdagen den 17:e september 2014 kl. 16:28:45 UTC+2 skrev Arch Robison:

 Thanks.  Now fixed.

 On Wed, Sep 17, 2014 at 4:14 AM, Gunnar Farnebäck gun...@lysator.liu.se
 wrote:

 In the section The Loop Body Should Be Straight-Line Code, the first
 and second code example look identical with ifelse constructions. I assume
 the first one should use ? instead. Also the third code example has a stray
 x[i] > a argument to the max function.




[julia-users] Have Julia send an email (or sound) when code is done

2014-09-18 Thread Alex
Hi Everyone, 

Does anyone know of code that would have Julia send an email or text at a 
certain point in a script? I'm sending some big projects off to a DigitalOcean 
droplet and I think this would be a nice thing to add so I can stop 
obsessively checking my code all day. Here's the Stata code that makes it 
work:

stata -b 'yourdofile.do' && echo "done body" | mail -s "done subject" [youremail @ yourhost .com]


I've done this with Stata pretty easily, but I can't quite get it to work with 
Julia. Also, with Matlab it's pretty easy to make it chirp (gong) when 
code has successfully (unsuccessfully) reached a point. Does anyone know 
how to do this? Here's the Matlab code that makes it work:

load chirp 
sound(y,Fs)

Thanks!

Alex

The stata trick was found 
via: http://scholar.harvard.edu/bauhoff/tricks.html
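
Not from the thread, but a rough Julia equivalent of the Stata trick, assuming 
a configured `mail` command on the droplet (0.3-era command piping with |>; untested):

function notify_done(msg, addr)
    # pipe a one-line body into the system mail command
    run(`echo $msg` |> `mail -s "julia job done" $addr`)
end

# ... long-running work ...
notify_done("simulation finished", "you@example.com")
print("\a")   # terminal bell as a cheap audible alert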


Re: [julia-users] Re: Article on `@simd`

2014-09-18 Thread Arch Robison
ISPC is not only an explicit vectorization language, but has some novel
semantics, particularly for structures.  Not only SOA vs. AOS, but the
whole notion of uniform vs. varying fields of a structure is a new
thing.  A macro-based imitation might be plausible.

On Wed, Sep 17, 2014 at 7:58 PM, Erik Schnetter schnet...@cct.lsu.edu
wrote:

 On Wed, Sep 17, 2014 at 7:14 PM,  gael.mc...@gmail.com wrote:
  Slightly OT, but since I won't talk about it myself I don't feel this
 will harm the current thread ...
 
 
  I don't know if it can be of any help/use/interest for any of you but
 some people (some at Intel) are actively working on SIMD use with LLVM:
 
  https://ispc.github.io/index.html
 
  But I really don't have the skills to tell you if they just wrote a
 new C-like language that is autovectorizing well or if they do some even
 smarter stuff to get maximum performances.

 I think they are up to something clever.

 If I read things correctly: ispc adds new keywords that describe the
 memory layout (!) of data structures that are accessed via SIMD
 instructions. There exist a few commonly-used data layout
 optimizations that are generally necessary to achieve good performance
 with SIMD code, called "SOA" or "replicated" or similar. Apparently,
 ispc introduces respective keywords that automatically transform the
 layout of data structures.

 I wonder whether something equivalent could be implemented via macros
 in Julia. These would be macros acting on type declarations, not on
 code. Presumably, these would be array- or structure-like data types,
 and accessing them is then slightly more complex, so that one would
 also need to automatically define respective iterators. Maybe there
 could be a companion macro that acts on loops, so that the loops are
 transformed (and simd'ized) the same way as the data types...

 -erik

 --
 Erik Schnetter schnet...@cct.lsu.edu
 http://www.perimeterinstitute.ca/personal/eschnetter/



[julia-users] Incrementing integer slow?

2014-09-18 Thread G. Patrick Mauroy
Profiling shows incrementing integers by 1 (i += 1) being the bottleneck.

Within the same loop are other statements that do take much less time.

In my performance-optimizing zeal, I over-typed the hell out of everything 
to attempt squeezing performance to the last ounce.
Some of this zeal did help in other parts of the code, but now struggling 
making sense at spending most of the time incrementing by 1.
I suspect the problem is over typing zeal because I seem to recall having a 
version not so strongly typed that ran consistently 2-3 times faster for 
default Int (but not for other Int types).  It was late at night so I don't 
recall the details!

I am pretty confident the increment variables are typed so there should not 
be any undue cast.

Any idea?

Here is how my code conceptually looks like:

# Global static type declaration ahead seems to have helped (as opposed to 
 deriving from eltype of underlying array at the beginning of function being 
 profiled).
 IdType = Int # Int64
 DType = Int
 function my_fct(dt1, dt2)
   # Convert is for sure unnecessary for default Int types but more 
 rigorous and necessary in some parts of code when experimenting with other 
 IdType & DType types.
   const oneIdType = convert(IdType, 1) # Used to make sure I increment 
 with a value of the proper type, again useless with IdType = Int.
   const zeroIdType = convert(IdType, 0)
   i::IdType = zeroIdType; i2Match::IdType = zeroIdType; i2Lower::IdType = 
 zeroIdType; i2Upper::IdType = oneIdType;
   ...
 # Critical loop.
 i2Match = i2Lower
 while i2Match < i2Upper
   @inbounds i2MatchD2 = dt2D2[i2Match]
   if i1D <= i2MatchD2
 i += oneIdType # SLOW!
 @inbounds i2MatchD1 = dt2D1[i2Match]
 @inbounds resid1[i] = i1id1
 ...
   end
   i2Match += oneIdType # SLOW!
 end
   ...
 end


The undeclared types are 1-dim arrays of the appropriate type -- basically 
all Int in this configuration.

Enclosed is the full stand-alone code if anyone cares to try.
On my machines, one function call is in the range of 0.05 to 0.1 sec, 
highly depending upon garbage collection, so profiling with 100 runs is 
done in about 10 sec.

Thanks.

Patrick



crossJoinFilter.jl
Description: Binary data


Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread John Myles White
1 has type Int. If you add it to something with a different type, you might be 
causing type instability. What happens if you replace the literal 1 with one(T) 
for the type you're working with?

  -- John
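
In miniature, the pattern John is describing (0.3-era parametric syntax):

function count_upto{T<:Integer}(n::T)
    i = zero(T)
    while i < n
        i += one(T)    # one(T) keeps the addition type-stable, no promotion to Int
    end
    return i
end

count_upto(convert(Int32, 10))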

On Sep 18, 2014, at 9:56 AM, G. Patrick Mauroy gpmau...@gmail.com wrote:

 Profiling shows incrementing integers by 1 (i += 1) being the bottleneck.
 
 Within the same loop are other statements that do take much less time.
 
 In my performance optimizing zeal, I over typed the hell out of everything to 
 attempt squeezing performance to the last once.
 Some of this zeal did help in other parts of the code, but now struggling 
 making sense at spending most of the time incrementing by 1.
 I suspect the problem is over typing zeal because I seem to recall having a 
 version not so strongly typed that ran consistently 2-3 times faster for 
 default Int (but not for other Int types).  It was late at night so I don't 
 recall the details!
 
 I am pretty confident the increment variables are typed so there should not 
 be any undue cast.
 
 Any idea?
 
 Here is how my code conceptually looks like:
 
 # Global static type declaration ahead seems to have helped (as opposed to 
 deriving from eltype of underlying array at the beginning of function being 
 profiled).
 IdType = Int # Int64
 DType = Int
 function my_fct(dt1, dt2)
   # Convert is for sure unnecessary for default Int types but more rigorous 
 and necessary in some parts of code when experimenting with other IdType  
 DType types.
   const oneIdType = convert(IdType, 1) # Used to make sure I increment with a 
 value of the proper type, again useless with IdType = Int.
   const zeroIdType = convert(IdType, 0)
   i::IdType = zeroIdType; i2Match::IdType = zeroIdType; i2Lower::IdType = 
 zeroIdType; i2Upper::IdType = oneIdType;
   ...
 # Critical loop.
 i2Match = i2Lower
 while i2Match  i2Upper
   @inbounds i2MatchD2 = dt2D2[i2Match]
   if i1D = i2MatchD2
 i += oneIdType # SLOW!
 @inbounds i2MatchD1 = dt2D1[i2Match]
 @inbounds resid1[i] = i1id1
 ...
   end
   i2Match += oneIdType # SLOW!
 end
   ...
 end
 
 The undeclared types are 1-dim arrays of the appropriate type -- basically 
 all Int in this configuration.
 
 Enclosed is the full stand-alone code if anyone cares to try.
 On my machines, one function call is in the range of 0.05 to 0.1 sec, highly 
 depending upon garbage collection, so profiling with 100 runs is done in 
 about 10 sec.
 
 Thanks.
 
 Patrick
 
 crossJoinFilter.jl



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Steven G. Johnson


On Thursday, September 18, 2014 12:00:32 PM UTC-4, Florian Oswald wrote:

 well, I guess most computer scientists would be surprised. writing on a 
 piece of paper

 -10^2

 and

 -(10^2)

 I think most people are going to say the first expression is 100 and the 
 second is -100. I take the point that what I did was a bit stupid and Julia 
 is not making any mistake here.


Note that in Fortran, Python, Matlab, and Mathematica, the exponentiation 
operator has higher precedence than unary -, similar to Julia.   -10^2 in 
WolframAlpha (http://www.wolframalpha.com/input/?i=-10%5E2) gives 100, and 
WolframAlpha tries pretty hard to do natural-language interpretation of 
mathematical expressions.

So, I'm not sure why computer scientists would be surprised.


Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread John Myles White
I think that was a typo for not surprised.

 -- John

On Sep 18, 2014, at 9:59 AM, Steven G. Johnson stevenj@gmail.com wrote:

 
 
 On Thursday, September 18, 2014 12:00:32 PM UTC-4, Florian Oswald wrote:
 well, I guess most computer scientists would be surprised. writing on a piece 
 of paper
 
 -10^2
 
 and
 
 -(10^2)
 
 I think most people are going to say the first expression is 100 and the 
 second is -100. I take the point that what I did was a bit stupid and Julia 
 is not making any mistake here.
 
 Note that in Fortran, Python, Matlab, and Mathematica, the exponentiation 
 operator has higher precedence than unary -, similar to Julia.   -10^2 in 
 WolframAlpha (http://www.wolframalpha.com/input/?i=-10%5E2) gives 100, and 
 WolframAlpha tries pretty hard to do natural-language interpretation of 
 mathematical expressions.
 
 So, I'm not sure why computer scientists would be surprised.



Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Steven G. Johnson
On Thursday, September 18, 2014 12:59:10 PM UTC-4, Steven G. Johnson wrote:

 Note that in Fortran, Python, Matlab, and Mathematica, the exponentiation 
 operator has higher precedence than unary -, similar to Julia.   -10^2 in 
 WolframAlpha (http://www.wolframalpha.com/input/?i=-10%5E2) gives 100, 
 and WolframAlpha tries pretty hard to do natural-language interpretation of 
 mathematical expressions.


Sorry, I mean that WolframAlpha gives -100.

I think the rationale here is that -10^2 should be read as ASCII for –10², 
and the latter is –100 in usual math notation as I understand it.
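
For what it's worth, the two behaviors discussed in this thread can be seen side 
by side in a few lines (a minimal sketch; the complex-promotion workaround is 
along the lines of the FAQ entry linked above):

-10^2                # -100: ^ binds tighter than unary minus
(-10)^2              # 100
mgamma = 1.0 - 1.4   # -0.4 (stored as -0.39999999999999997)
# mgamma^2.5         # DomainError: negative real base, non-integer exponent
complex(mgamma)^2.5  # ≈ 0.0 + 0.101im, the principal complex value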


Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Jameson Nash
I'm not sure about most people, but given the first expression, I would
have handed the paper back and told the author to clarify the ambiguity.

On Thursday, September 18, 2014, Florian Oswald florian.osw...@gmail.com
wrote:

 well, I guess most computer scientists would be surprised. writing on a
 piece of paper

 -10^2

 and

 -(10^2)

 I think most people are going to say the first expression is 100 and the
 second is -100. I take the point that what I did was a bit stupid and Julia
 is not making any mistake here.

  On 18 September 2014 16:50, Gunnar Farnebäck gun...@lysator.liu.se wrote:

 It's not like Julia is doing anything strange or uncommon here. Most
 people would be really surprised if -10² meant positive 100.

 Den torsdagen den 18:e september 2014 kl. 15:01:44 UTC+2 skrev Jutho:

 because it is not recognized/parsed as literal but as the application of
 a unary minus, which has lower precedence than ^

 I guess it is not possible to give binary minus a lower precedence than
 ^ and unary minus of higher precedence, since these are just different
 methods of the same function/operator.

 Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-jul
 ia-give-a-domainerror-for-certain-seemingly-sensible-operations

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
  # define a variable gamma:
 
  gamma = 1.4
  mgamma = 1.0-gamma
 
  julia> mgamma
  -0.3999
 
  # this works:
 
  julia> -0.3999^2.5
  -0.10119288512475567
 
  # this doesn't:
 
  julia> mgamma^2.5
  ERROR: DomainError
  in ^ at math.jl:252






Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Florian Oswald
ok guys i won't dig myself a deeper hole here - you win.

(savored my 3 seconds of fame before steven corrected that typo though!)

On 18 September 2014 18:21, Jameson Nash vtjn...@gmail.com wrote:

 I'm not sure about most people, but given the first expression, I would
 have handed the paper back and told the author to clarify the ambiguity.


 On Thursday, September 18, 2014, Florian Oswald florian.osw...@gmail.com
 wrote:

 well, I guess most computer scientists would be surprised. writing on a
 piece of paper

 -10^2

 and

 -(10^2)

 I think most people are going to say the first expression is 100 and the
 second is -100. I take the point that what I did was a bit stupid and Julia
 is not making any mistake here.

 On 18 September 2014 16:50, Gunnar Farnebäck gun...@lysator.liu.se
 wrote:

 It's not like Julia is doing anything strange or uncommon here. Most
 people would be really surprised if -10² meant positive 100.

 Den torsdagen den 18:e september 2014 kl. 15:01:44 UTC+2 skrev Jutho:

 because it is not recognized/parsed as literal but as the application
 of a unary minus, which has lower precedence than ^

 I guess it is not possible to give binary minus a lower precedence than
 ^ and unary minus of higher precedence, since these are just different
 methods of the same function/operator.

 Op donderdag 18 september 2014 14:54:26 UTC+2 schreef Florian Oswald:

 yes - not sure why -0.4 and (-0.4) are any different.

 On 18 September 2014 13:52, Patrick O'Leary patrick...@gmail.com
 wrote:

 Seems like the literal -0.4^2.5 should throw the same error, though?


 On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:

 http://docs.julialang.org/en/latest/manual/faq/#why-does-jul
 ia-give-a-domainerror-for-certain-seemingly-sensible-operations

 On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote:
  # define a variable gamma:
 
  gamma = 1.4
  mgamma = 1.0-gamma
 
  julia mgamma
  -0.3999
 
  # this works:
 
  julia -0.3999^2.5
  -0.10119288512475567
 
  # this doesn't:
 
  julia mgamma^2.5
  ERROR: DomainError
  in ^ at math.jl:252






Re: [julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Paweł Biernat
I have added a pull request: https://github.com/nolta/Winston.jl/pull/174; 
if you have any further suggestions about what I should include, feel free to 
send them there.

Best,
Paweł

On Thursday, 18 September 2014 at 13:37:44 UTC+2, Tim Holy wrote:

 Paweł, there's no one better than you to do that! Everyone here is a 
 volunteer, and contributing documentation is a terrific way to help out. 

 If you just grep for latex in the Winston source, that should help you 
 find all 
 the relevant information. 

 Best, 
 --Tim 

 On Thursday, September 18, 2014 02:07:01 AM Paweł Biernat wrote: 
  Thanks, this is missing from the documentation of the Winston package. 
  Maybe someone should add a short info on the typesetting options, so 
 people 
  won't have to go to the mailing list to figure it out. 
  
  As for Pango rendering MathML there is an example at the end of the 
 script 
  gallery [1].  But I couldn't figure out how they achieved this as I 
 don't 
  know Cairo/Pango at all. 
  
  [1] http://www.pango.org/ScriptGallery 
  
  W dniu czwartek, 18 września 2014 08:16:33 UTC+2 użytkownik Alex 
 napisał: 
   Hi Pawel, 
   
   AFAIK the rendering of the labels is actually handled by Cairo.jl 
 (look 
   for tex2pango in Cairo.jl 
   
  https://github.com/JuliaLang/Cairo.jl/blob/master/src/Cairo.jl). There some TeX commands (\it, \rm, _, ^, etc) are 
   translated into Pango markup format 
   https://developer.gnome.org/pango/stable/PangoMarkupFormat.html. 
   Additionally many/most TeX symbols are converted into unicode. More 
   sophisticated commands, like \frac, are not handled at the moment. 
   
   It would be great to have more complete support, but I guess it is not 
 so 
   easy since it would require a custom typesetting system (or one 
 delegates 
   the rendering to some external program, but then all the text has to 
 go 
   through this). Maybe there is some TeX/MathML engine using Pango one 
 could 
   use? 
   
   
   Best, 
   
   Alex. 
   
   On Wednesday, 17 September 2014 22:59:38 UTC+2, Paweł Biernat wrote: 
   Hi, 
   
   Is it possible to use LaTeX labels in Winston?  In the examples.jl 
 there 
   is a point [1] where some LaTeX-like syntax is used in a label. 
   
   I was trying to push $\frac{1}{2}$ as a label and already tested 
   various escaped versions, including \$\\frac{1}{2}\$ and 
 \\frac{1}{2} 
   but I didn't achieve the expected result. 
   
   [1] 
 https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18 
   
   Best, 
   Paweł 



Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread G. Patrick Mauroy
No change.
I over-typed everything to avoid such type mismatches, particularly when 
experimenting with other integer types.  So unless I missed something 
somewhere, that should not be the case.
I suspect something like the compiler not recognizing that the incrementing 
variables should live in registers.  Unless it is the inherent speed of 
incrementing, but I doubt that; I had some faster runs at some point...

On Thursday, September 18, 2014 12:58:12 PM UTC-4, John Myles White wrote:

 1 has type Int. If you add it to something with a different type, you 
 might be causing type instability. What happens if you replace the literal 
 1 with one(T) for the type you're working with?

   -- John

 On Sep 18, 2014, at 9:56 AM, G. Patrick Mauroy gpma...@gmail.com wrote:

 Profiling shows incrementing integers by 1 (i += 1) being the bottleneck.

 Within the same loop are other statements that do take much less time.

 In my performance-optimizing zeal, I over-typed the hell out of everything 
 to attempt squeezing performance to the last ounce.
 Some of this zeal did help in other parts of the code, but now I am struggling 
 to make sense of spending most of the time incrementing by 1.
 I suspect the problem is this over-typing zeal, because I seem to recall having 
 a version, not so strongly typed, that ran consistently 2-3 times faster for 
 the default Int (but not for other Int types).  It was late at night, so I don't 
 recall the details!

 I am pretty confident the increment variables are typed so there should 
 not be any undue cast.

 Any idea?

 Here is what my code conceptually looks like:

 # Global static type declaration ahead seems to have helped (as opposed to 
 deriving from eltype of underlying array at the beginning of function being 
 profiled).
 IdType = Int # Int64
 DType = Int
 function my_fct(dt1, dt2)
   # Convert is for sure unnecessary for default Int types but more 
 rigorous and necessary in some parts of code when experimenting with other 
 IdType and DType types.
   const oneIdType = convert(IdType, 1) # Used to make sure I increment 
 with a value of the proper type, again useless with IdType = Int.
   const zeroIdType = convert(IdType, 0)
   i::IdType = zeroIdType; i2Match::IdType = zeroIdType; i2Lower::IdType = 
 zeroIdType; i2Upper::IdType = oneIdType;
   ...
 # Critical loop.
 i2Match = i2Lower
 while i2Match < i2Upper
     @inbounds i2MatchD2 = dt2D2[i2Match]
     if i1D >= i2MatchD2
         i += oneIdType # SLOW!
         @inbounds i2MatchD1 = dt2D1[i2Match]
         @inbounds resid1[i] = i1id1
         ...
     end
     i2Match += oneIdType # SLOW!
 end
 ...
 end


 The undeclared types are 1-dim arrays of the appropriate type -- basically 
 all Int in this configuration.

 Enclosed is the full stand-alone code if anyone cares to try.
 On my machines, one function call is in the range of 0.05 to 0.1 sec, 
 highly depending upon garbage collection, so profiling with 100 runs is 
 done in about 10 sec.

 Thanks.

 Patrick

 crossJoinFilter.jl




[julia-users] Re: Have Julia send an email (or sound) when code is done

2014-09-18 Thread Jake Bolewski
Yo.jl 

On Thursday, September 18, 2014 12:46:03 PM UTC-4, Alex wrote:

 Hi Everyone, 

 Does anyone know of code that would have julia send an email or text at a 
 certain point in a script? I'm sending some big projects off to a Digital 
 Ocean droplet and I think this would be a nice thing to add so I can stop 
 obsessively checking my code all day. Here's the Stata code that makes it 
 work:

 stata -b 'yourdofile.do' && echo "done body" | mail -s "done subject" 
 [youremail @ yourhost .com]


 I've done this with Stata pretty easily, but I can't quite get it to work with 
 julia. Also with Matlab, it's pretty easy to make it chirp (gong) when 
 a code has successfully (unsuccessfully) reached a point. Does anyone know 
 how to do this? Here's the Matlab code that makes it work:

 load chirp 
 sound(y,Fs)

 Thanks!

 Alex

 The stata trick was found via: 
 http://scholar.harvard.edu/bauhoff/tricks.html



Re: [julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Tim Holy
Thanks for a great contribution!

--Tim

On Thursday, September 18, 2014 10:26:58 AM Paweł Biernat wrote:
 I have added a pull request: https://github.com/nolta/Winston.jl/pull/174,
 if you have any further suggestions of what should I include feel free to
 send them there.
 
 Best,
 Paweł
 
 W dniu czwartek, 18 września 2014 13:37:44 UTC+2 użytkownik Tim Holy
 
 napisał:
  Paweł, there's no one better than you to do that! Everyone here is a
  volunteer, and contributing documentation is a terrific way to help out.
  
  If you just grep for latex in the Winston source, that should help you
  find all
  the relevant information.
  
  Best,
  --Tim
  
  On Thursday, September 18, 2014 02:07:01 AM Paweł Biernat wrote:
   Thanks, this is missing from the documentation of the Winston package.
   Maybe someone should add a short info on the typesetting options, so
  
  people
  
   won't have to go to the mailing list to figure it out.
   
   As for Pango rendering MathML there is an example at the end of the
  
  script
  
   gallery [1].  But I couldn't figure out how they achieved this as I
  
  don't
  
   know Cairo/Pango at all.
   
   [1] http://www.pango.org/ScriptGallery
   
   W dniu czwartek, 18 września 2014 08:16:33 UTC+2 użytkownik Alex
  
  napisał:
Hi Pawel,

AFAIK the rendering of the labels is actually handled by Cairo.jl
  
  (look
  
for tex2pango in Cairo.jl

  
  https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FJuliaLang%2FCairo
  
  
.jl%2Fblob%2Fmaster%2Fsrc%2FCairo.jlsa=Dsntz=1usg=AFQjCNF3Cp9Rz43PyR88F
  
NO1BoKYIulrjg). There some TeX commands (\it, \rm, _, ^, etc) are
translated into Pango markup format
https://developer.gnome.org/pango/stable/PangoMarkupFormat.html.
Additionally many/most TeX symbols are converted into unicode. More
sophisticated commands, like \frac, are not handled at the moment.

It would be great to have more complete support, but I guess it is not
  
  so
  
easy since it would require a custom typesetting system (or one
  
  delegates
  
the rendering to some external program, but then all the text has to
  
  go
  
through this). Maybe there is some TeX/MathML engine using Pango one
  
  could
  
use?


Best,

Alex.

On Wednesday, 17 September 2014 22:59:38 UTC+2, Paweł Biernat wrote:
Hi,

Is it possible to use LaTeX labels in Winston?  In the examples.jl
  
  there
  
is a point [1] where some LaTeX-like syntax is used in a label.

I was trying to push $\frac{1}{2}$ as a label and already tested
various escaped versions, including \$\\frac{1}{2}\$ and
  
  \\frac{1}{2}
  
but I didn't achieve the expected result.

[1]
  
  https://github.com/nolta/Winston.jl/blob/master/test/examples.jl#L18
  
Best,
Paweł



[julia-users] Re: ANN: ApproxFun v0.0.3 with general linear PDE solving on rectangles

2014-09-18 Thread SrM@br
This is a really great idea, Sheehan!
I love the idea of Chebfun and of extending it to Julia; especially aiming at a 
general and powerful PDE solver sounds really good. Certainly it will be 
very useful.

Thanks again!!
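
As a small taste of the 1D side, here is a minimal sketch (assuming the Fun/roots 
interface shown in the package README): build an adaptive Chebyshev approximation 
of a function on an interval and ask for its roots.

using ApproxFun
f = Fun(x -> cos(10x) - x/5, [0., 1.])   # adaptive Chebyshev approximation on [0,1]
r = roots(f)                             # roots of the approximant on the interval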


On Wednesday, September 10, 2014 7:22:36 PM UTC-3, Sheehan Olver wrote:


 This is to announce a new version of ApproxFun (
 https://github.com/dlfivefifty/ApproxFun.jl), a package for approximating 
 functions.  The biggest new feature is support for PDE solving.  The 
 following lines solve Helmholtz equation u_xx + u_yy + 100 u = 0 with the 
 solution held to be one on the boundary:

 d=Interval()⊗Interval()# the domain to solve is a rectangle

 u=[dirichlet(d),lap(d)+100I]\ones(4)   # first 4 entries are boundary 
 conditions, further entries are assumed zero
 contour(u) # contour plot of the solution, requires Gadfly

 PDE solving is based on a recent preprint with Alex Townsend (
 http://arxiv.org/abs/1409.2789).   Only splitting rank 2 PDEs are 
 implemented at the moment.  Examples included are:

 examples/RectPDE Examples.ipynb: Poisson equation, Wave equation, 
 linear KdV, semiclassical Schrodinger equation with a potential, and 
 convection/convection-diffusion equations. 
 examples/Wave and Klein–Gordon equation on a square.ipynb: 
 On-the-fly 3D simulation of time-evolution PDEs on a square.  Requires 
 GLPlot.jl (https://github.com/SimonDanisch/GLPlot.jl).   
 examples/Manipulate Helmholtz.ipynb: On-the-fly variation of 
 Helmholtz frequency.  Requires Interact.jl (
 https://github.com/JuliaLang/Interact.jl)

 Another new feature is faster root finding, thanks to Alex.



Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread Tim Holy
Try running with --track-allocation=user and see if it's allocating memory on 
that line. If so, you have a type problem.
http://docs.julialang.org/en/latest/manual/performance-tips/
(2nd and 3rd sections)
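
In case the workflow is unfamiliar, here is a minimal sketch (the file and 
argument names are just the ones from the attached script, used illustratively):

$ julia --track-allocation=user
julia> include("crossJoinFilter.jl")
julia> my_fct(dt1, dt2)    # exercise the function being profiled
julia> quit()

On exit a *.mem file is written next to crossJoinFilter.jl, with each source line 
prefixed by the number of bytes allocated on it; a large count on the i += 1 line 
would confirm a type problem.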

--Tim


On Thursday, September 18, 2014 10:44:58 AM G. Patrick Mauroy wrote:
 No change.
 I over typed everything to avoid such type mismatches, particularly when
 experimenting with other integer types.  So unless I missed something
 somewhere, it should not be the case.
 I suspect something like the compiler does not recognize the incrementing
 variables should be registries.  Unless it is the inherent speed of
 incrementing, but I doubt it, I had some faster runs at some points...
 
 On Thursday, September 18, 2014 12:58:12 PM UTC-4, John Myles White wrote:
  1 has type Int. If you add it to something with a different type, you
  might be causing type instability. What happens if you replace the literal
  1 with one(T) for the type you're working with?
  
-- John
  
  On Sep 18, 2014, at 9:56 AM, G. Patrick Mauroy gpma...@gmail.com wrote:
  
  Profiling shows incrementing integers by 1 (i += 1) being the bottleneck.
  
  Within the same loop are other statements that do take much less time.
  
  In my performance optimizing zeal, I over typed the hell out of everything
  to attempt squeezing performance to the last once.
  Some of this zeal did help in other parts of the code, but now struggling
  making sense at spending most of the time incrementing by 1.
  I suspect the problem is over typing zeal because I seem to recall having
  a version not so strongly typed that ran consistently 2-3 times faster for
  default Int (but not for other Int types).  It was late at night so I
  don't
  recall the details!
  
  I am pretty confident the increment variables are typed so there should
  not be any undue cast.
  
  Any idea?
  
  Here is how my code conceptually looks like:
  
  # Global static type declaration ahead seems to have helped (as opposed to
  
  deriving from eltype of underlying array at the beginning of function
  being
  profiled).
  IdType = Int # Int64
  DType = Int
  function my_fct(dt1, dt2)
  
# Convert is for sure unnecessary for default Int types but more
  
  rigorous and necessary in some parts of code when experimenting with
  other
  IdType  DType types.
  
const oneIdType = convert(IdType, 1) # Used to make sure I increment
  
  with a value of the proper type, again useless with IdType = Int.
  
const zeroIdType = convert(IdType, 0)
i::IdType = zeroIdType; i2Match::IdType = zeroIdType; i2Lower::IdType =
  
  zeroIdType; i2Upper::IdType = oneIdType;
  
...

  # Critical loop.
  i2Match = i2Lower
  while i2Match  i2Upper
  
@inbounds i2MatchD2 = dt2D2[i2Match]
if i1D = i2MatchD2

  i += oneIdType # SLOW!
  @inbounds i2MatchD1 = dt2D1[i2Match]
  @inbounds resid1[i] = i1id1
  ...

end
i2Match += oneIdType # SLOW!
  
  end

...
  
  end
  
  The undeclared types are 1-dim arrays of the appropriate type -- basically
  all Int in this configuration.
  
  Enclosed is the full stand-alone code if anyone cares to try.
  On my machines, one function call is in the range of 0.05 to 0.1 sec,
  highly depending upon garbage collection, so profiling with 100 runs is
  done in about 10 sec.
  
  Thanks.
  
  Patrick
  
  crossJoinFilter.jl



Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread G. Patrick Mauroy
Ah ah, this is it, I found the culprit!
Pre-declaring the type of my increment variables slowed things down by a factor of 
at least 2 if not 3 -- I will profile tonight when I can be on Linux, I 
cannot now from Windows.

i::IdType = zeroIdType # Slow increment in the loop

i = zeroIdType # Much faster increment in the loop even though zeroIdType 
 is of type IdType = Int = Int64.

It smells more like explicitly typing the variables prevents the compiler 
from keeping them in registers, or something like that.  I will let the experts 
explain what really happens.

By the way, in my endeavors, I got the impression that the loop construct 
for i in 1:n
was slower than
while i <= n
but I need to run further tests to determine whether that is indeed so or not.

 

On Thursday, September 18, 2014 1:44:58 PM UTC-4, G. Patrick Mauroy wrote:

 No change.
 I over typed everything to avoid such type mismatches, particularly when 
 experimenting with other integer types.  So unless I missed something 
 somewhere, it should not be the case.
 I suspect something like the compiler does not recognize the incrementing 
 variables should be registries.  Unless it is the inherent speed of 
 incrementing, but I doubt it, I had some faster runs at some points...




[julia-users] Re: Have Julia send an email (or sound) when code is done

2014-09-18 Thread Ivar Nesje
Yo.jl is not in METADATA.jl 
https://github.com/JuliaLang/METADATA.jl/pull/1093

You can use the Twitter APIs though, and there are a lot of online APIs that 
can send notifications.

There is also the ancient print("\x07") for your amusement, if you happen 
to be within reach of a BELL. 
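
For instance, a minimal sketch of wrapping a long job so it notifies you when it 
finishes (the mail command and the address are assumptions about your setup; it 
falls back to the bell if mailing fails):

function with_notification(f, subject="julia job done")
    result = f()
    try
        # assumes a working `mail` command on the host; adjust the address
        run(`sh -c "echo finished | mail -s '$subject' someone@example.com"`)
    catch
        print("\x07")   # fall back to the terminal bell
    end
    result
end

with_notification() do
    sum(rand(10^7))     # stand-in for the real long-running work
end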

Regards

At 19:44:42 UTC+2 on Thursday, 18 September 2014, Jake Bolewski wrote the following:

 Yo.jl 

 On Thursday, September 18, 2014 12:46:03 PM UTC-4, Alex wrote:

 Hi Everyone, 

 Does anyone know of code that would have julia send an email or text at a 
 certain point in a script. I'm sending some big projects of to a digital 
 ocean droplet and I think this would be a nice thing to add so I can stop 
 obsessively checking my code all day. Here's the stata code that makes it 
 work:

 *stata -b 'yourdofile.do'  echo done body | mail -s done subject 
 [youremail @ yourhost .com] *


  I've done with Stata pretty easily, but I can't quite get it to work 
 with julia. Also with Matlab, it's pretty easy to make it chirp (gong) 
 when a code has successful (unsuccessfully) reached a point. Does anyone 
 know how to do this? Here's the matlab code that makes it work:

 load chirp 
 sound(y,Fs)

 Thanks!

 Alex

 The stata trick was found via: 
 http://scholar.harvard.edu/bauhoff/tricks.html



Re: [julia-users] Re: Have Julia send an email (or sound) when code is done

2014-09-18 Thread Cameron McBride
On Thu, Sep 18, 2014 at 1:44 PM, Jake Bolewski jakebolew...@gmail.com
wrote:

 Yo.jl


I thought this was a joke, but naturally, it does exist:
https://github.com/dichika/Yo.jl

Also, not really a julia question -- the stata trick should work just as
well on julia (or anything that goes before the double ampersand).
 However, that trick assumes the code exits successfully.

To continue OT, there is also: echo -e '\a' (or echo ^G supposedly in
Windows).
see,
http://en.wikipedia.org/wiki/Bell_character

Cameron
In any case, also echo \


Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread Ivar Nesje
Type instabilities are often solved with the `const` keyword on some global 
variable.

I would try

const IdType = Int

Seems like Julia is not smart enough to guess that i::IdType will always 
ensure that i is an Int64 when IdType might change.

Regards Ivar

At 19:58:02 UTC+2 on Thursday, 18 September 2014, G. Patrick Mauroy wrote the following:

 Ah ah, this is it, I found the culprit!
 Pre-declaring the type of my increment variables slowed down by a factor 
 of at least 2 if not 3 -- I will profile tonight when I can be on Linux, I 
 cannot now from Windows.

 i::IdType = zeroIdType # Slow increment in the loop

 i = zeroIdType # Much faster increment in the loop even through zeroIdType 
 is of type IdType = Int = Int64.

 It smells more like explicitly typing the variables prevents the compiler 
 from using it as a registry or something like that.  I will let the experts 
 explain what really happens.

 By the way, in my endeavors, I got the impression the loop construct 
 for i in 1:n
 was slower than
 while i =n
 But I need to run further tests to determine whether it is indeed so or 
 not.

  

 On Thursday, September 18, 2014 1:44:58 PM UTC-4, G. Patrick Mauroy wrote:

 No change.
 I over typed everything to avoid such type mismatches, particularly when 
 experimenting with other integer types.  So unless I missed something 
 somewhere, it should not be the case.
 I suspect something like the compiler does not recognize the incrementing 
 variables should be registries.  Unless it is the inherent speed of 
 incrementing, but I doubt it, I had some faster runs at some points...




Re: [julia-users] Re: Have Julia send an email (or sound) when code is done

2014-09-18 Thread Iain Dunning
For emails, check out

https://github.com/JuliaWeb/SMTPClient.jl

On Thursday, September 18, 2014 2:31:48 PM UTC-4, Cameron McBride wrote:


 On Thu, Sep 18, 2014 at 1:44 PM, Jake Bolewski jakebo...@gmail.com wrote:

 Yo.jl 


 I thought this was a joke, but naturally, it does exist: 
 https://github.com/dichika/Yo.jl

 Also, not really a julia question -- the stata trick should work just as 
 well on julia (or anything that goes before the double ampersand). 
  However, that trick assumes the code exits successfully. 

 To continue OT, there is also: echo -e '\a' (or echo ^G supposedly in 
 Windows).
 see, 
 http://en.wikipedia.org/wiki/Bell_character

 Cameron
 In any case, also echo \



[julia-users] Matlab bench in Julia

2014-09-18 Thread Stephan Buchert
I have installed julia 0.3 from 
http://copr.fedoraproject.org/coprs/nalimilan/julia/ 
on my i7 Haswell 2.4 GHz laptop with updated Fedora 20.

Then I translated the first two parts of the Matlab bench script to julia:

# Transcript of the Matlab bench,
#   only LU and FFT
print("LU decomposition, ");
tic();
A = rand(1600, 1600);
lu(A);
toc();
print("FFT , ");
n = 2^21;
tic();
x = rand(1,n);
fft(x);
fft(x);
toc();

The comparison is relatively disastrous for julia:
julia> include("code/julia/bench.jl")
LU decomposition, elapsed time: 0.936670955 seconds
FFT , elapsed time: 0.208915093 seconds
(best out of 10 tries)

Matlab r2014a
LU decomposition: 0.0477 seconds
FFT: 0.0682 seconds

LU is 24 times slower on julia, FFT is 3 times slower. According to 
system-monitor Matlab bench causes 3 cores to be busy, julia only 1. This 
could explain the FFT result, but not the LU. 



Re: [julia-users] Matlab bench in Julia

2014-09-18 Thread Elliot Saba
The first thing you should do is run your code once to warm up the JIT, and
then run it again to measure the actual run time, rather than compile time
+ run time.  A convenient way to do this is to put your benchmark code
inside a function, run the function once, then run it again using the @time
macro.  I might modify your code a bit to the following:

function LU_test()
    A = rand(1600, 1600);
    lu(A);
end

function FFT_test()
    n = 2^21;
    x = rand(1,n);
    fft(x);
    fft(x);
end

LU_test()
FFT_test()

println("LU")
@time LU_test()
println("FFT")
@time FFT_test()


Note that I am now measuring the amount of time necessary to calculate
2^21, which you weren't before, but that shouldn't matter at all.  Try the
above and see if there is any improvement.  You may also want to read this
section of the manual when writing more complicated codes;
http://julia.readthedocs.org/en/latest/manual/performance-tips/
-E



On Thu, Sep 18, 2014 at 11:45 AM, Stephan Buchert stephanb...@gmail.com
wrote:

 I have installed julia 0.3 from
 http://copr.fedoraproject.org/coprs/nalimilan/julia/
 on my i7 Haswell 2.4 GHz laptop with updated Fedora 20.

 Then I translated the first two parts of the Matlab bench script to julia:

 # Transscript of the Matlab bench,
 #   only LU and FFT
 # Transscript of the Matlab bench,
 #   only LU and FFT
 print(LU decomposition, );
 tic();
 A = rand(1600, 1600);
 lu(A);
 toc();
 print(FFT , );
 n = 2^21;
 tic();
 x = rand(1,n);
 fft(x);
 fft(x);
 toc();

 The comparison is relatively disastrous for julia:
 julia include(code/julia/bench.jl)
 LU decomposition, elapsed time: 0.936670955 seconds
 FFT , elapsed time: 0.208915093 seconds
 (best out of 10 tries)

 Matlab r2014a
 LU decomposition: 0.0477 seconds
 FFT: 0.0682 seconds

 LU is 24 times slower on julia, FFT is 3 times slower. According to
 system-monitor Matlab bench causes 3 cores to be busy, julia only 1. This
 could explain the FFT result, but not the LU.




Re: [julia-users] Have Julia send an email (or sound) when code is done

2014-09-18 Thread Mike Nolta
On Thu, Sep 18, 2014 at 12:46 PM, Alex hollina...@gmail.com wrote:
 Hi Everyone,

 Does anyone know of code that would have julia send an email or text at a
 certain point in a script. I'm sending some big projects of to a digital
 ocean droplet and I think this would be a nice thing to add so I can stop
 obsessively checking my code all day. Here's the stata code that makes it
 work:

 stata -b 'yourdofile.do' && echo "done body" | mail -s "done subject"
 [youremail @ yourhost .com] 


So the following doesn't work?

  julia yourjlfile.jl && echo "done" | mail -s "done" [email] 

-Mike


  I've done with Stata pretty easily, but I can't quite get it to work with
 julia. Also with Matlab, it's pretty easy to make it chirp (gong) when a
 code has successful (unsuccessfully) reached a point. Does anyone know how
 to do this? Here's the matlab code that makes it work:

 load chirp
 sound(y,Fs)


 Thanks!

 Alex

 The stata trick was found via:
 http://scholar.harvard.edu/bauhoff/tricks.html


[julia-users] Why does map allocate so much more than a list comprehension?

2014-09-18 Thread Johan Sigfrids
So I was looking at allocations in some code and I noticed I sped things up 
significantly by changing map to a list comprehension. Doing some 
microbenchmarking I noticed that map allocates far more memory than a list 
comprehension. Shouldn't they essentially be doing the same thing?

data = rand(1000)

function f1(data)
    [sin(i) for i in data]
end

function f2(data)
    map(sin, data)
end

function f3(data)
    out = zeros(data)
    for i in 1:length(data)
        out[i] = sin(data[i])
    end
    out
end

f1([0.1, 0.2])
f2([0.1, 0.2])
f3([0.1, 0.2])
sin([0.1, 0.2])

@time f1(data)
@time f2(data)
@time f3(data)
@time sin(data);

elapsed time: 0.375611486 seconds (8128 bytes allocated)
elapsed time: 1.707264865 seconds (40128 bytes allocated, 28.03% gc time)
elapsed time: 0.359320195 seconds (8128 bytes allocated)
elapsed time: 0.307740829 seconds (8128 bytes allocated)




[julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Jason Riedy
And Elliot Saba writes:
 The first thing you should do is run your code once to warm up the
 JIT, and then run it again to measure the actual run time, rather
 than compile time + run time.

To be fair, he seems to be timing MATLAB in the same way, so he's
comparing systems appropriately at that level.

It's just the tuned BLAS+LAPACK & fftw v. the default ones.  This
is one reason why MATLAB bundles so much.  (Another reason being
the differences in numerical results causing support calls.  Took
a long time before MATLAB gave in to per-platform-tuned libraries.)



Re: [julia-users] Moving/stepped average without loop? Is it possible? Any command?

2014-09-18 Thread Paul Analyst

No idea ?

On 2014-09-18 at 13:26, paul analyst wrote:

I have a vector x = int(randbool(100))
a / How can I quickly (without a loop?) get 10 vectors of length 10, 
each field holding the average of the next 10 fields of the vector x 
(a moving/stepped average of 10 values at step = 10)?
b / How can I get the 99 vectors of length 10 from the next 10 fields 
of the vector x (a moving/stepped average of 10 values at step = 1)?


Paul




Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Andreas Noack
In addition our lu calculates a partially pivoted lu and returns the L and
U matrices and the vector of permutations. To get something comparable in
MATLAB you'll have to write

[L,,U,p] = lu(A,'vector')

On my old Mac where Julia is compiled with OpenBLAS the timings are

MATLAB:
 tic();for i = 1:10
[L,U,p] = qr(A, 'vector');
end;toc()/10

ans =

3.4801

Julia:
julia> tic(); for i = 1:10
   qr(A);
   end;toc()/10
elapsed time: 14.758491472 seconds
1.4758491472

Med venlig hilsen

Andreas Noack

2014-09-18 15:33 GMT-04:00 Jason Riedy ja...@lovesgoodfood.com:

 And Elliot Saba writes:
  The first thing you should do is run your code once to warm up the
  JIT, and then run it again to measure the actual run time, rather
  than compile time + run time.

 To be fair, he seems to be timing MATLAB in the same way, so he's
 comparing systems appropriately at that level.

 It's just the tuned BLAS+LAPACK  fftw v. the default ones.  This
 is one reason why MATLAB bundles so much.  (Another reason being
 the differences in numerical results causing support calls.  Took
 a long time before MATLAB gave in to per-platform-tuned libraries.)




[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Alex
Hi Andreas,

From time to time I was also thinking that the equation rendering should be 
doable somehow... In the end, this usually led me to the conclusion that it is 
probably not so easy. For instance, searching for MathML rendering doesn't 
give very many hits (GtkMathView seems to use Pango to render MathML). Going 
the other route (via latex/mathjax) might actually be easier, but requires 
quite some stuff to be installed. Also, if one has a nice svg renderer and 
requires latex anyway, one might as well use pgfplot et al. for the whole plot?

Maybe one could have a look at matplotlib to get an idea how they do this?

Best,

Alex.


On Thursday, 18 September 2014 11:11:52 UTC+2, Andreas Lobinger  wrote:
 Hello colleague,
 
 on one of these rainy sunday afternoons i sat there and thought: Hey, this 
 can't be that complicated...
 
 Math typesetting (still) seems to be some black art, and an awful lot of 
 heuristics are put into code and only there. So there is no material in 
 algorithmic form available that could be re-implemented somewhere. I mean, 
 even TeX itself is available only as a literate program.
 
 So options I considered:
  * wait until it's integrated in Pango (you will wait a long time)
  * try to integrate a latex/pdflatex run to produce dvi or pdf and use that. 
 And find a dvi or pdf to cairo renderer.
  * run mathjax.js via a JavaScript engine to svg and use Rsvg or similar to 
 render on cairo.
  * rewrite mathjax.js in julia (might have licensing issues).
 
 
  



Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Stefan Karpinski
I'm slightly confused – does that mean Julia is 2.4x faster in this case?

On Thu, Sep 18, 2014 at 3:53 PM, Andreas Noack andreasnoackjen...@gmail.com
 wrote:

 In addition our lu calculates a partially pivoted lu and returns the L and
 U matrices and the vector of permutations. To get something comparable in
 MATLAB you'll have to write

 [L,,U,p] = lu(A,'vector')

 On my old Mac where Julia is compiled with OpenBLAS the timings are

 MATLAB:
  tic();for i = 1:10
 [L,U,p] = qr(A, 'vector');
 end;toc()/10

 ans =

 3.4801

 Julia:
 julia tic(); for i = 1:10
qr(A);
end;toc()/10
 elapsed time: 14.758491472 seconds
 1.4758491472

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:33 GMT-04:00 Jason Riedy ja...@lovesgoodfood.com:

 And Elliot Saba writes:
  The first thing you should do is run your code once to warm up the
  JIT, and then run it again to measure the actual run time, rather
  than compile time + run time.

 To be fair, he seems to be timing MATLAB in the same way, so he's
 comparing systems appropriately at that level.

 It's just the tuned BLAS+LAPACK  fftw v. the default ones.  This
 is one reason why MATLAB bundles so much.  (Another reason being
 the differences in numerical results causing support calls.  Took
 a long time before MATLAB gave in to per-platform-tuned libraries.)





Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Andreas Noack
Yes. It appears so on my Mac. I just redid the timings with the same result.

Med venlig hilsen

Andreas Noack

2014-09-18 15:55 GMT-04:00 Stefan Karpinski ste...@karpinski.org:

 I'm slightly confused – does that mean Julia is 2.4x faster in this case?

 On Thu, Sep 18, 2014 at 3:53 PM, Andreas Noack 
 andreasnoackjen...@gmail.com wrote:

 In addition our lu calculates a partially pivoted lu and returns the L
 and U matrices and the vector of permutations. To get something comparable
 in MATLAB you'll have to write

 [L,,U,p] = lu(A,'vector')

 On my old Mac where Julia is compiled with OpenBLAS the timings are

 MATLAB:
  tic();for i = 1:10
 [L,U,p] = qr(A, 'vector');
 end;toc()/10

 ans =

 3.4801

 Julia:
 julia tic(); for i = 1:10
qr(A);
end;toc()/10
 elapsed time: 14.758491472 seconds
 1.4758491472

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:33 GMT-04:00 Jason Riedy ja...@lovesgoodfood.com:

 And Elliot Saba writes:
  The first thing you should do is run your code once to warm up the
  JIT, and then run it again to measure the actual run time, rather
  than compile time + run time.

 To be fair, he seems to be timing MATLAB in the same way, so he's
 comparing systems appropriately at that level.

 It's just the tuned BLAS+LAPACK  fftw v. the default ones.  This
 is one reason why MATLAB bundles so much.  (Another reason being
 the differences in numerical results causing support calls.  Took
 a long time before MATLAB gave in to per-platform-tuned libraries.)






Re: [julia-users] Incrementing integer slow?

2014-09-18 Thread Stefan Karpinski
On Thu, Sep 18, 2014 at 2:33 PM, Ivar Nesje iva...@gmail.com wrote:

 Seems like Julia is not smart enough to guess that i::IdType will always
 ensure that i is a Int64 when IdType might change.


Since IdType is not constant, it can change at any time – generating code
on the premise that it cannot change would be incorrect. Instead, the code
generated for this function needs to handle the possibility that IdType can
change at any point, which basically makes all optimizations impossible.
It's a bit low level (we should really expose this better), but you can use
@code_typed to see what the inferred types of all the local variables are:

julia> (@code_typed manual_iter1(1,2))[1].args[2][2]
38-element Array{Any,1}:
 {:dt1,Int64,0}
 {:dt2,Int64,0}
 {:zeroIdType,Any,18}
 {:oneIdType,Any,18}
 {:zeroDType,Any,18}
 {symbol(#s5149),Any,18}
 {symbol(#s5150),Any,18}
 {:dt1id1,Any,18}
 {:dt1D,Any,18}
 {symbol(#s5151),Any,18}
 {symbol(#s5152),Any,18}
 {symbol(#s5153),Any,18}
 {:dt2id2,Any,18}
 {:dt2D1,Any,18}
 {:dt2D2,Any,18}
 {:MAX_INT,Any,18}
 {:MAX_D,Any,18}
 {:nrow1,Any,18}
 {:nrow2,Any,18}
 {:i1,Any,2}
 {:i1id1,Any,2}
 {:i1D,Any,2}
 {:i2Lower,Any,2}
 {:i2LowerD2,Any,2}
 {:i2Upper,Any,2}
 {:i2UpperD1,Any,2}
 {:i2UpperD2,Any,2}
 {:i2Match,Any,2}
 {:i,Any,2}
 {:nrowMax,Any,18}
 {:resid1,Any,18}
 {:resid2,Any,18}
 {:resD1,Any,18}
 {:resD,Any,18}
 {:resD2,Any,18}
 {:i2MatchD2,Any,18}
 {:i2MatchD1,Any,18}
 {symbol(#s5154),Any,18}


As you can see, it's not pretty: given Int arguments, literally only the
types of the arguments themselves are more specific than Any. If you
declare IdType and DType to be const, it gets better, but still not ideal:

julia> (@code_typed manual_iter1(1,2))[1].args[2][2]
38-element Array{Any,1}:
 {:dt1,Int64,0}
 {:dt2,Int64,0}
 {:zeroIdType,Int64,18}
 {:oneIdType,Int64,18}
 {:zeroDType,Int64,18}
 {symbol(#s122),Any,18}
 {symbol(#s121),Any,18}
 {:dt1id1,Any,18}
 {:dt1D,Any,18}
 {symbol(#s120),Any,18}
 {symbol(#s119),Any,18}
 {symbol(#s118),Any,18}
 {:dt2id2,Any,18}
 {:dt2D1,Any,18}
 {:dt2D2,Any,18}
 {:MAX_INT,Int64,18}
 {:MAX_D,Int64,18}
 {:nrow1,Int64,18}
 {:nrow2,Int64,18}
 {:i1,Int64,2}
 {:i1id1,Int64,2}
 {:i1D,Int64,2}
 {:i2Lower,Int64,2}
 {:i2LowerD2,Int64,2}
 {:i2Upper,Int64,2}
 {:i2UpperD1,Int64,2}
 {:i2UpperD2,Int64,2}
 {:i2Match,Int64,2}
 {:i,Int64,2}
 {:nrowMax,Int64,18}
 {:resid1,Any,18}
 {:resid2,Any,18}
 {:resD1,Any,18}
 {:resD,Any,18}
 {:resD2,Any,18}
 {:i2MatchD2,Any,18}
 {:i2MatchD1,Any,18}
 {symbol(#s117),Any,18}


When you pull something out of an untyped structure like dt1id1, dt1D, etc.
it is a good idea to provide some type annotations if you can. Most of the
other type annotations inside the function body are unnecessary, however.
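
A stripped-down illustration of the effect (a hypothetical micro-benchmark, not 
the code from this thread): with a non-const global used as the declared type, 
the counter is inferred as Any and every increment goes through dynamic dispatch; 
making the global const gives a tight native loop.

IdType = Int          # non-const global: its value could change at any time
const CIdType = Int   # const global: the compiler may rely on it

function count_nonconst(n)
    i::IdType = 0
    while i < n
        i += 1
    end
    i
end

function count_const(n)
    i::CIdType = 0
    while i < n
        i += 1
    end
    i
end

count_nonconst(1); count_const(1)   # warm up the JIT
@time count_nonconst(10^7)          # much slower: i is inferred as Any
@time count_const(10^7)             # fast: a plain counting loop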


Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Stefan Karpinski
Nice :-)

On Thu, Sep 18, 2014 at 4:20 PM, Andreas Noack andreasnoackjen...@gmail.com
 wrote:

 Yes. It appears so on my Mac. I just redid the timings with the same
 result.

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:55 GMT-04:00 Stefan Karpinski ste...@karpinski.org:

 I'm slightly confused – does that mean Julia is 2.4x faster in this case?

 On Thu, Sep 18, 2014 at 3:53 PM, Andreas Noack 
 andreasnoackjen...@gmail.com wrote:

 In addition our lu calculates a partially pivoted lu and returns the L
 and U matrices and the vector of permutations. To get something comparable
 in MATLAB you'll have to write

 [L,,U,p] = lu(A,'vector')

 On my old Mac where Julia is compiled with OpenBLAS the timings are

 MATLAB:
  tic();for i = 1:10
 [L,U,p] = qr(A, 'vector');
 end;toc()/10

 ans =

 3.4801

 Julia:
 julia tic(); for i = 1:10
qr(A);
end;toc()/10
 elapsed time: 14.758491472 seconds
 1.4758491472

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:33 GMT-04:00 Jason Riedy ja...@lovesgoodfood.com:

 And Elliot Saba writes:
  The first thing you should do is run your code once to warm up the
  JIT, and then run it again to measure the actual run time, rather
  than compile time + run time.

 To be fair, he seems to be timing MATLAB in the same way, so he's
 comparing systems appropriately at that level.

 It's just the tuned BLAS+LAPACK  fftw v. the default ones.  This
 is one reason why MATLAB bundles so much.  (Another reason being
 the differences in numerical results causing support calls.  Took
 a long time before MATLAB gave in to per-platform-tuned libraries.)







Re: [julia-users] Moving/stepped average without loop? Is it possible? Any command?

2014-09-18 Thread Stefan Karpinski
This seems like a general programming problem rather than a Julia question.

On Thu, Sep 18, 2014 at 3:36 PM, Paul Analyst paul.anal...@mail.com wrote:

 No idea ?

 W dniu 2014-09-18 o 13:26, paul analyst pisze:

  I have a vector x = int (randbool (100))
 a / how quickly (without the loop?) receive 10 vectors of length 10, in
 each field the average of the next 10 fields of the vector x (moving/stepen
 average of 10 values ​​at step = 10)?
 b / how to receive the 99 vectors of length 10 of the next 10 fields
 wektra x (moving/stepen average of 10 values ​​at step = 1)?

 Paul





Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Andreas Noack
I knew something was not right. I typed qr, not lu. Hence in that case,
 MATLAB did pivoting and Julia didn't. Sorry for that.

Here are the right timings for lu which are as expected. MKL is slightly
faster than OpenBLAS.

MATLAB:
>> tic();for i = 1:10
[L,U,p] = lu(A, 'vector');
end;toc()/10

ans =

0.2314


Julia:
julia> tic(); for i = 1:10
   lu(A);
   end;toc()/10
elapsed time: 3.147632455 seconds

0.3147632455
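
As an aside, to time just the factorization without forming L, U, and p explicitly, 
lufact returns the packed factorization object directly (a small sketch; lu is 
essentially lufact plus the extraction of the factors):

A = rand(1600, 1600)
lufact(A); lu(A)                     # warm up the JIT
@time for i = 1:10; lufact(A); end   # factorization only
@time for i = 1:10; lu(A); end       # factorization plus explicit L, U, p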

Med venlig hilsen

Andreas Noack

2014-09-18 16:25 GMT-04:00 Stefan Karpinski ste...@karpinski.org:

 Nice :-)

 On Thu, Sep 18, 2014 at 4:20 PM, Andreas Noack 
 andreasnoackjen...@gmail.com wrote:

 Yes. It appears so on my Mac. I just redid the timings with the same
 result.

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:55 GMT-04:00 Stefan Karpinski ste...@karpinski.org:

 I'm slightly confused – does that mean Julia is 2.4x faster in this case?

 On Thu, Sep 18, 2014 at 3:53 PM, Andreas Noack 
 andreasnoackjen...@gmail.com wrote:

 In addition our lu calculates a partially pivoted lu and returns the L
 and U matrices and the vector of permutations. To get something comparable
 in MATLAB you'll have to write

 [L,,U,p] = lu(A,'vector')

 On my old Mac where Julia is compiled with OpenBLAS the timings are

 MATLAB:
  tic();for i = 1:10
 [L,U,p] = qr(A, 'vector');
 end;toc()/10

 ans =

 3.4801

 Julia:
 julia tic(); for i = 1:10
qr(A);
end;toc()/10
 elapsed time: 14.758491472 seconds
 1.4758491472

 Med venlig hilsen

 Andreas Noack

 2014-09-18 15:33 GMT-04:00 Jason Riedy ja...@lovesgoodfood.com:

 And Elliot Saba writes:
  The first thing you should do is run your code once to warm up the
  JIT, and then run it again to measure the actual run time, rather
  than compile time + run time.

 To be fair, he seems to be timing MATLAB in the same way, so he's
 comparing systems appropriately at that level.

 It's just the tuned BLAS+LAPACK  fftw v. the default ones.  This
 is one reason why MATLAB bundles so much.  (Another reason being
 the differences in numerical results causing support calls.  Took
 a long time before MATLAB gave in to per-platform-tuned libraries.)








Re: [julia-users] Matlab bench in Julia

2014-09-18 Thread Milan Bouchet-Valat
On Thursday, 18 September 2014 at 15:12 -0400, Andreas Noack wrote:
 You are not using a fast BLAS, but the slow reference BLAS which
 unfortunately is the default on Linux. An option is to build Julia
 from source. Usually it is just to download the source and write make.
 Then you'll have Julia compiled with OpenBLAS which is much faster and
 comparable in speed to Intel MKL which MATLAB uses.
My Fedora RPM package uses OpenBLAS, or at least is supposed to. But
indeed only one thread is used in my tests here too. The problem seems
to be that LAPACK is not the OpenBLAS one:
julia> versioninfo()
Julia Version 0.3.0
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i5 CPU   M 450  @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY)
  LAPACK: liblapack
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

And indeed I'm passing USE_SYSTEM_LAPACK=1. I didn't know it would incur
such a slowdown. Is there a way to get Julia to use the system's OpenBLAS
version of LAPACK?


Regards



[julia-users] Re: Why does map allocate so much more than a list comprehension?

2014-09-18 Thread Patrick O'Leary
On Thursday, September 18, 2014 2:31:14 PM UTC-5, Johan Sigfrids wrote:

 So I was looking at allocations in some code and I noticed I sped things 
 up significantly by changing map to a list comprehension. Doing some 
 microbenchmarking I noticed that map allocates far more memory than a list 
 comprehension. Shouldn't they essentially be doing the same thing?


map() is a generic function call, and doesn't specialize on parameter 
values. The list comprehension is special syntax. The following should be 
instructive:

code_llvm(f1, (Array{Float64, 1},))

vs.

code_llvm(f2, (Array{Float64, 1},))

Making map() fast is one of the nearish-term goals. The latest discussion 
on map-related topics starts here: 
https://github.com/JuliaLang/julia/issues/8389#issuecomment-55930448


Re: [julia-users] Matlab bench in Julia

2014-09-18 Thread Andreas Noack
Ok I see. Good to hear that the package uses OpenBLAS. I have replied to a
couple of linux users who have complained about the slow linear algebra in
Julia.

Elliot is probably the right person to answer the question about the
linking.

Med venlig hilsen

Andreas Noack

2014-09-18 16:35 GMT-04:00 Milan Bouchet-Valat nalimi...@club.fr:

 Le jeudi 18 septembre 2014 à 15:12 -0400, Andreas Noack a écrit :
  You are not using a fast BLAS, but the slow reference BLAS which
  unfortunately is the default on Linux. An option is to build Julia
  from source. Usually it is just to download the source and write make.
  Then you'll have Julia compiled with OpenBLAS which is much faster and
  comparable in speed to Intel MKL which MATLAB uses.
 My Fedora RPM package uses OpenBLAS, or at least is supposed to. But
 indeed only one thread is used in my tests here too. The problem seems
 to be that LAPACK is not the OpenBLAS one:
 julia versioninfo()
 Julia Version 0.3.0
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Core(TM) i5 CPU   M 450  @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY)
   LAPACK: liblapack
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 And indeed I'm passing USE_SYSTEM_LAPACK=1. I didn't know it would incur
 such a slowdown. Is there a way to get Julia use the system's OpenBLAS
 version of LAPACK?


 Regards




[julia-users] Re: LaTeX labels in Winston

2014-09-18 Thread Steven G. Johnson


On Thursday, September 18, 2014 3:55:41 PM UTC-4, Alex wrote:

 Maybe one could have a look at matplotlib to get an idea how they do this? 


Matplotlib implemented its own LaTeX parser and renderer (about 3000 lines 
of 
code): 
https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/mathtext.py


Re: [julia-users] Matlab bench in Julia

2014-09-18 Thread Milan Bouchet-Valat
On Thursday, 18 September 2014 at 16:44 -0400, Andreas Noack wrote:
 Ok I see. Good to hear that the package uses OpenBLAS. I have replied
 to a couple of linux users who have complained about the slow linear
 algebra in Julia.
What distributions did they use? We should try to improve these
packages, it's really bad advertisement for Julia, and a waste of
people's time not to use OpenBLAS by default.


Regards



Re: [julia-users] Moving/stepped average without loop? Is it possible? Any command?

2014-09-18 Thread Douglas Bates
On Thursday, September 18, 2014 2:36:53 PM UTC-5, paul analyst wrote:

 No idea ? 

 W dniu 2014-09-18 o 13:26, paul analyst pisze: 
  I have a vector x = int (randbool (100)) 
  a / how quickly (without the loop?) receive 10 vectors of length 10, 
  in each field the average of the next 10 fields of the vector x 
  (moving/stepen average of 10 values ​​at step = 10)? 
  b / how to receive the 99 vectors of length 10 of the next 10 fields 
  wektra x (moving/stepen average of 10 values ​​at step = 1)? 
  
  Paul 


I'm not sure what the first result you want is, or what you mean by 
"without the loop".  A general moving average of m vector positions with 
steps of 1 could be written

function movingavg1(v::Vector, m::Int)
    n = length(v)
    0 < m < n || error("m = $m must be in the range [1,length(v)] = [1,$(length(v))]")
    res = Array(typeof(v[1]/n), n-m+1)
    s = zero(eltype(res))
    for i in 1:m
        s += v[i]
    end
    res[1] = s
    for j in 1:(length(res)-1)
        s -= v[j]
        s += v[j + m]
        res[j+1] = s
    end
    res/m
end

 To test this

julia> vv = int(randbool(100));

julia> show(vv)
[0,0,0,1,1,1,1,1,1,0,1,1,0,0,1,1,1,1,0,1,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,0,1,0,0,0,0,1,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,1,1,0,1,0,1,1,1,1,0,1,0,0,1,1,1,0,0,1,1,0,0,1,1,0,0,1,0,0,0,1,0,1,1]
julia> show(movingavg1(vv,10))
[0.6,0.7,0.8,0.8,0.7,0.7,0.7,0.7,0.7,0.6,0.7,0.6,0.5,0.5,0.6,0.5,0.5,0.5,0.4,0.5,0.5,0.6,0.7,0.8,0.7,0.7,0.7,0.7,0.7,0.7,0.6,0.5,0.4,0.3,0.4,0.5,0.4,0.4,0.4,0.3,0.3,0.3,0.3,0.4,0.3,0.3,0.3,0.2,0.2,0.2,0.3,0.3,0.3,0.2,0.3,0.2,0.2,0.3,0.4,0.4,0.4,0.4,0.5,0.6,0.6,0.7,0.7,0.7,0.6,0.6,0.6,0.7,0.7,0.6,0.5,0.5,0.6,0.5,0.5,0.6,0.6,0.5,0.4,0.5,0.5,0.4,0.3,0.4,0.4,0.4,0.4]
julia> mean(vv[1:10])
0.6

julia> mean(vv[2:11])
0.7

julia> mean(vv[91:100])
0.4


If you are always working with Bool values you are probably better off 
returning the moving sums rather than the moving averages.
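
For part (a), where the step equals the window length, no explicit loop is needed 
at all (a minimal sketch): reshape the vector into a 10x10 matrix and take the 
mean of each column.

x = int(randbool(100))
blockavg = mean(reshape(x, 10, 10), 1)   # 1x10 array: means of x[1:10], x[11:20], ..., x[91:100]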



Re: [julia-users] Matlab bench in Julia

2014-09-18 Thread Douglas Bates
Part of the problem is the definition of "benchmark".  If all you are doing 
is solving a linear system or evaluating a decomposition then you are just 
benchmarking the BLAS/LAPACK implementation.  It doesn't really matter if the 
user-facing language is Matlab or Octave or R or Julia.  They are just a 
thin wrapper around the BLAS/LAPACK calls in cases like this.

On Thursday, September 18, 2014 3:59:08 PM UTC-5, Milan Bouchet-Valat wrote:

 Le jeudi 18 septembre 2014 à 16:44 -0400, Andreas Noack a écrit : 
  Ok I see. Good to hear that the package uses OpenBLAS. I have replied 
  to a couple of linux users who have complained about the slow linear 
  algebra in Julia. 
 What distributions did they use? We should try to improve these 
 packages, it's really bad advertisement for Julia, and a waste of 
 people's time not to use OpenBLAS by default. 


 Regards 



[julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Stephan Buchert
Thanks for the tips. I have now compiled julia on my laptop, and the 
results are:

julia> versioninfo()
Julia Version 0.3.0+6
Commit 7681878* (2014-08-20 20:43 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia> include("code/julia/bench.jl")
LU decomposition, elapsed time: 0.123349203 seconds
FFT , elapsed time: 0.20440579 seconds

Matlab r2014a, with [L,U,P] = lu(A); instead of just lu(A);
LU decomposition, elapsed time: 0.0586 seconds 
FFT  elapsed time: 0.0809 seconds

So a great improvement, but julia still seems 2-3 times slower than matlab, 
or rather the respective underlying linear algebra libraries do, for these two 
very limited benchmarks. Perhaps Matlab found a way to speed their 
linear algebra up recently?

The Fedora precompiled openblas was installed already at the first test 
(and presumably used by julia), but, as Andreas has also pointed out,  it 
seems to be significantly slower than an openblas library compiled now with 
the julia installation.  



[julia-users] vi mode in julia

2014-09-18 Thread thr
Hello all,

somewhere I read that GNU-readline was dropped for parsing user input, 
because this was implemented in Julia itself.

Consequently setting editing-mode to vi doesn't work with Julia. I hate 
that. Is there a way to use standard readline as the command-line parser?



[julia-users] Is there a way to use readline as command line interpreter

2014-09-18 Thread thr

Hello all,

Somewhere I read that Julia dropped (GNU) readline as the command-line 
interpreter and now setting the editing mode to vi doesn't work any more. :(

I wonder if there is a way to achieve similar behaviour within the Julia 
command line.

Greetings, Johannes


[julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Peter Simon
I have found that I get better performance from some openblas routines by 
setting the number of blas threads to the number of physical CPU cores 
(half the number returned by CPU_CORES when hyperthreading is enabled):

 Base.blas_set_num_threads(div(CPU_CORES,2))

--Peter


On Thursday, September 18, 2014 3:09:17 PM UTC-7, Stephan Buchert wrote:

 Thanks for the tips. I have now compiled julia on my laptop, and the 
 results are:

 julia versioninfo()
 Julia Version 0.3.0+6
 Commit 7681878* (2014-08-20 20:43 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia include(code/julia/bench.jl)
 LU decomposition, elapsed time: 0.123349203 seconds
 FFT , elapsed time: 0.20440579 seconds

 Matlab r2104a, with [L,U,P] = lu(A); instead of just lu(A);
 LU decomposition, elapsed time: 0.0586 seconds 
 FFT  elapsed time: 0.0809 seconds

 So a great improvement, but julia seems still 2-3 times slower than 
 matlab, the underlying linear algebra libraries, respectively, and for 
 these two very limited bench marks. Perhaps Matlab found a way to speed 
 their lin.alg. up recently?

 The Fedora precompiled openblas was installed already at the first test 
 (and presumably used by julia), but, as Andreas has also pointed out,  it 
 seems to be significantly slower than an openblas library compiled now with 
 the julia installation.  



Re: [julia-users] Re: Matlab bench in Julia

2014-09-18 Thread Andreas Noack
As Douglas Bates wrote, these benchmarks mainly measure the speed of the
underlying libraries. MATLAB uses MKL from Intel which is often the fastest
library. However, the speed of OpenBLAS can be very different on different
architectures and sometimes it can be faster than MKL. I just tried the
benchmarks on a Linux server where that is the case.

Milan, unfortunately I don't remember which distribution it was. I think it
was a couple of months ago, but I'm not sure.

Med venlig hilsen

Andreas Noack

2014-09-18 19:06 GMT-04:00 Peter Simon psimon0...@gmail.com:

 I have found that I get better performance from some openblas routines by
 setting the number of blas threads to the number of physical CPU cores
 (half the number returned by CPU_CORES when hyperthreading is enabled):

  Base.blas_set_num_threads(div(CPU_CORES,2))

 --Peter



 On Thursday, September 18, 2014 3:09:17 PM UTC-7, Stephan Buchert wrote:

 Thanks for the tips. I have now compiled julia on my laptop, and the
 results are:

 julia> versioninfo()
 Julia Version 0.3.0+6
 Commit 7681878* (2014-08-20 20:43 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> include("code/julia/bench.jl")
 LU decomposition, elapsed time: 0.123349203 seconds
 FFT, elapsed time: 0.20440579 seconds

 Matlab r2014a, with [L,U,P] = lu(A); instead of just lu(A);
 LU decomposition, elapsed time: 0.0586 seconds
 FFT  elapsed time: 0.0809 seconds

 So a great improvement, but julia seems still 2-3 times slower than
 matlab, the underlying linear algebra libraries, respectively, and for
 these two very limited benchmarks. Perhaps Matlab found a way to speed
 their lin.alg. up recently?

 The Fedora precompiled openblas was installed already at the first test
 (and presumably used by julia), but, as Andreas has also pointed out,  it
 seems to be significantly slower than an openblas library compiled now with
 the julia installation.




[julia-users] Compiling julia on Red Hat 4.4.6-4 with gcc 4.4.7

2014-09-18 Thread Alberto Torres Barrán
When trying to compile Julia from source on RHEL 4.4.6 I get the following 
error:

llvm[4]: Compiling Valgrind.cpp for Release build
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp:20:31: warning: valgrind/valgrind.h: No such file or directory
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp: In function ‘bool InitNotUnderValgrind()’:
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp:23: error: ‘RUNNING_ON_VALGRIND’ was not declared in this scope
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp: In function ‘bool llvm::sys::RunningOnValgrind()’:
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp:35: error: ‘RUNNING_ON_VALGRIND’ was not declared in this scope
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp: In function ‘void llvm::sys::ValgrindDiscardTranslations(const void*, size_t)’:
/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/lib/Support/Valgrind.cpp:42: error: ‘VALGRIND_DISCARD_TRANSLATIONS’ was not declared in this scope
make[4]: *** [/scratch/gaa/local/src/julia-v0.3.0/deps/llvm-3.3/build_Release/lib/Support/Release/Valgrind.o] Error 1
make[3]: *** [all] Error 1
make[2]: *** [llvm-3.3/build_Release/Release/lib/libLLVMJIT.a] Error 2
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2

I posted this as an issue on the GitHub repository 
(https://github.com/JuliaLang/julia/issues/8400#issuecomment-56033555) but 
I've been told to post it here. Has anyone had any luck compiling Julia on 
that system? 

PS. I don't have sudo access


Re: [julia-users] Why does map allocate so much more than a list comprehension?

2014-09-18 Thread Tim Holy
If you use the FastAnonymous package then they should be much closer:

julia> using FastAnonymous

julia> fsin = @anon x->sin(x)
##7954 (constructor with 2 methods)

julia> fsin(0.3)
0.29552020666133955

julia> @time map(fsin, data);
elapsed time: 0.285690205 seconds (80324156 bytes allocated)

julia> @time map(fsin, data);
elapsed time: 0.23800597 seconds (8128 bytes allocated)

--Tim

On Thursday, September 18, 2014 12:31:14 PM Johan Sigfrids wrote:
 So I was looking at allocations in some code and I noticed I sped things up
 significantly by changing map to a list comprehension. Doing some
 microbenchmarking I noticed that map allocates far more memory than a list
 comprehension. Shouldn't they essentially be doing the same thing?
 
 data = rand(1000)
 
 function f1(data)
     [sin(i) for i in data]
 end
 
 function f2(data)
     map(sin, data)
 end
 
 function f3(data)
     out = zeros(data)
     for i in 1:length(data)
         out[i] = sin(data[i])
     end
     out
 end
 
 f1([0.1, 0.2])
 f2([0.1, 0.2])
 f3([0.1, 0.2])
 sin([0.1, 0.2])
 
 @time f1(data)
 @time f2(data)
 @time f3(data)
 @time sin(data);
 
 elapsed time: 0.375611486 seconds (8128 bytes allocated)
 elapsed time: 1.707264865 seconds (40128 bytes allocated, 28.03% gc time)
 elapsed time: 0.359320195 seconds (8128 bytes allocated)
 elapsed time: 0.307740829 seconds (8128 bytes allocated)



Re: [julia-users] Debugging a segfault (memory allocation?)

2014-09-18 Thread Isaiah Norton
See MEMDEBUG in options.h
On Sep 18, 2014 8:28 PM, Erik Schnetter schnet...@gmail.com wrote:

 I have a Julia application that uses MPI to communicate between several
 processes. Each process uses many tasks, and they send functions to remote
 locations to be executed.

 If I use a large number of tasks per process, I receive segfaults.
 Sometimes I am able to obtain a stack backtrace, and these segfaults
 usually occur in array.c or in gc.c in routines related to memory
 allocation, often for increasing the buffer size for serialization. I've
 added a few assert statements there and examined the code, and it seems
 that these routines themselves are not to blame. My next assumption is thus
 that, somewhere, someone is overwriting memory, and libc's malloc's
 internal data structures are accidentally overwritten.

 - Do you have pointers for debugging this in Julia?
 - Is there a memory-debug mode for Julia, for its garbage collector, for
 flisp, for flisp's garbage collector, ...?
 - Is there a way to rebuild Julia with more aggressive self-checking
 enabled?

 I can reproduce the error quite reliably, but it always occurs at a
 different place. Unfortunately, the error goes away if I reduce the number
 of tasks or the number of processes 
 https://en.wikipedia.org/wiki/Heisenbug.

 -erik




Re: [julia-users] Make a Copy of an Immutable with Specific Fields Changed

2014-09-18 Thread Pontus Stenetorp
On 18 September 2014 18:22, Rafael Fourquet fourquet.raf...@gmail.com wrote:

 I think the idiomatic way remains to be designed:
 https://github.com/JuliaLang/julia/issues/5333.

Thanks a bundle Rafael, I did indeed miss that issue.  There also
seems to be a pull-request from @nolta implementing it, but it is
waiting for a few other features to be sorted out:

https://github.com/JuliaLang/julia/pull/6122#issuecomment-44579024
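
In the meantime, a manual workaround can get most of the way there.  The 
sketch below is purely illustrative (the `Point` type and `copy_with` helper 
are made-up names), and it assumes the type's default positional constructor 
is available:

    immutable Point
        x::Float64
        y::Float64
    end

    # Build a new immutable of the same type, copying every field of `obj`
    # and overriding the ones given as keyword arguments.
    function copy_with(obj; kwargs...)
        fields = names(typeof(obj))
        vals = Any[getfield(obj, f) for f in fields]
        for (k, v) in kwargs
            i = findfirst(fields, k)
            i == 0 && error("type $(typeof(obj)) has no field $k")
            vals[i] = v
        end
        typeof(obj)(vals...)
    end

    p = Point(1.0, 2.0)
    q = copy_with(p, y = 5.0)   # Point(1.0, 5.0)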


[julia-users] Is there an inverse of `sparse`?

2014-09-18 Thread DumpsterDoofus
Given column vectors I, J, and V, one can construct a sparse matrix using 
the following syntax:

sparse(I, J, V)

How about the reverse? I.e., given a sparse matrix S, is there a function 
which returns the column vectors I, J, and V that define S?

One can obtain the list of nonzero values V with the command

nonzeros(S)

but I'm not sure how to get the row and column coordinates I and J that go 
along with V. Is there a convenient way to obtain I and J?


[julia-users] Re: Moving/stepen average without loop ? Is posible ?any command ?

2014-09-18 Thread DumpsterDoofus
I'm not entirely sure what you're looking for, but if you have a long list 
of data and want to compute a moving average, then it suffices to convolve 
your data with a box function. The Fourier convolution theorem says that 
you can do this using Fourier transforms, so if you really want to avoid 
using a loop, you can instead compute a smoothed version of your data using 
the built-in rfft and irfft functions. 
See http://www.dspguide.com/ch6/2.htm for more info, and look for an 
intro-level digital signal processing course, as that would definitely 
explain smoothing data in detail.
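
For concreteness, a rough sketch of that approach (assuming a window of 10, 
0/1 data as in the question, and ignoring the circular wrap-around at the 
edges):

    n = 100
    x = float(randbool(n))              # 0/1 data, as in the question
    w = 10                              # window width
    box = zeros(n); box[1:w] = 1/w      # box kernel of width w

    # Circular convolution via the convolution theorem:
    smooth = irfft(rfft(x) .* rfft(box), n)

    # smooth[k] is the mean of x[k-w+1:k]; the full-window (step = 1) averages
    # start at index w, and taking every w-th one gives the step = 10 averages.
    step1  = smooth[w:end]
    step10 = smooth[w:w:end]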

On Thursday, September 18, 2014 7:26:20 AM UTC-4, paul analyst wrote:

  I have a vector x = int(randbool(100)). 
  a/ How can I quickly (without a loop?) get 10 vectors of length 10, with 
  each field holding the average of the next 10 fields of the vector x (a 
  moving/stepped average of 10 values at step = 10)? 
  b/ How can I get 99 vectors of length 10 from successive sets of 10 fields 
  of the vector x (a moving/stepped average of 10 values at step = 1)?

 Paul



Re: [julia-users] Is there an inverse of `sparse`?

2014-09-18 Thread John Myles White
Try findnz.

This seems to not be documented in the sparse section of the manual, but I 
would think it should be.
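
For example, a quick (illustrative) round trip; note that the first returned 
vector holds the row indices and the second the column indices:

    S = sparse([1, 2, 3], [2, 3, 1], [10.0, 20.0, 30.0])  # small 3x3 example
    rows, cols, vals = findnz(S)   # the (I, J, V) triple: rows, columns, values
    S2 = sparse(rows, cols, vals)  # rebuilds a matrix equal to S
    full(S2) == full(S)            # true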

 — John

On Sep 18, 2014, at 6:58 PM, DumpsterDoofus peter.richter@gmail.com wrote:

 Given column vectors I, J, and V, one can construct a sparse matrix using the 
 following syntax:
 
 sparse(I, J, V)
 
 How about the reverse? I.e., given a sparse matrix S, is there a function 
 which returns the column vectors I, J, and V that define S?
 
 One can obtain the list of nonzero values V with the command
 
 nonzeros(S)
 
 but I'm not sure how to get the row and column coordinates I and J that go 
 along with V. Is there a convenient way to obtain I and J?



[julia-users] Re: ANN: ApproxFun v0.0.3 with general linear PDE solving on rectangles

2014-09-18 Thread DumpsterDoofus
Haha, I remember reading through your paper "A fast and well-conditioned 
spectral method" last year and feeling like my head was spinning 
afterwards. I vaguely recall that it views differential equations in 
GegenbauerC space, a basis choice which has a bunch of super convenient 
properties, all of which I have completely forgotten by now.


[julia-users] ANN: QuantEcon.jl

2014-09-18 Thread Spencer Lyon


New package QuantEcon.jl https://github.com/QuantEcon/QuantEcon.jl.

This package collects code for quantitative economic modeling. It is 
currently comprised of two main parts:

   1. A toolbox of routines useful when doing economics.
   2. Implementations of types and solution methods for common economic models.

This library has a python twin: QuantEcon.py 
https://github.com/QuantEcon/QuantEcon.py. The same development team is 
working on both projects, so we hope to keep the two libraries in sync very 
closely as new functionality is added.

The library contains all the code necessary to do the computations found on 
http://quant-econ.net/, a website dedicated to providing lectures that teach 
economics and programming. The website currently (as of 9/18/14) has only a 
Python version, but the Julia version is in the late stages of refinement and 
should be live very soon (hopefully within a week).

The initial version of the website will feature 6 lectures dedicated to 
helping a new user set up a working Julia environment and learn the basics 
of the language. In addition to this language specific section, the website 
will include 22 other lectures on topics including

   - statistics: Markov processes (continuous and discrete state), 
   auto-regressive processes, the Kalman filter, covariance stationary 
   processes, etc. 
   - economic models: the income fluctuation problem, an asset pricing 
   model, the classic optimal growth model, optimal (Ramsey) taxation, the 
   McCall search model 
   - dynamic programming: shortest path, as well as recursive solutions to 
   economic models 

All the lectures have code examples in Julia and most of the 22 will 
display code from the QuantEcon.jl library.


Re: [julia-users] Is there an inverse of `sparse`?

2014-09-18 Thread DumpsterDoofus


 Thanks, that's what I was looking for! I forked a copy of the 
 documentation on my GitHub account and added in the following entry to the 
 sparse matrix section:


.. function:: findnz(A)

   Returns a tuple (I, J, V) containing the column indices, row indices, and 
nonzero values. The I, J, and V satisfy ``S[I[k], J[k]] = V[k]``. Essentially 
the inverse of ``sparse``.


 


Re: [julia-users] Is there an inverse of `sparse`?

2014-09-18 Thread John Myles White
Submit a pull request?

One point: I think you may have flipped column indices and row indices in your 
description.

 — John

On Sep 18, 2014, at 7:45 PM, DumpsterDoofus peter.richter@gmail.com wrote:

 Thanks, that's what I was looking for! I forked a copy of the documentation 
 on my GitHub account and added in the following entry to the sparse matrix 
 section:
 
 .. function:: findnz(A)
 
Returns a tuple (I, J, V) containing the column indices, row indices, and 
 nonzero values. The I, J, and V satisfy ``S[I[k], J[k]] = V[k]``. Essentially 
 the inverse of ``sparse``.
 
  



[julia-users] Re: ANN: QuantEcon.jl

2014-09-18 Thread Gray Calhoun
That's very exciting! I can't wait to see the new notes.

On Thursday, September 18, 2014 9:14:22 PM UTC-5, Spencer Lyon wrote:

 New package QuantEcon.jl https://github.com/QuantEcon/QuantEcon.jl.

 This package collects code for quantitative economic modeling. It is 
 currently comprised of two main parts:

 1. A toolbox of routines useful when doing economics.
 2. Implementations of types and solution methods for common economic models.
 
 This library has a python twin: QuantEcon.py 
 https://github.com/QuantEcon/QuantEcon.py. The same development team is 
 working on both projects, so we hope to keep the two libraries in sync very 
 closely as new functionality is added.

[...cut...] 


[julia-users] Is there a way to use readline as command line interpreter

2014-09-18 Thread Ivar Nesje
Ref: https://github.com/JuliaLang/julia/issues/6774



[julia-users] Schurfact ordered eigenvalues

2014-09-18 Thread Chase Coleman
Hi,

The LAPACK function (`?gges`) called by `schurfact` accepts an argument for 
sorting the eigenvalues according to a logical/boolean function.  I am 
currently working on some code for which it would be valuable to sort 
eigenvalues into explosive and nonexplosive (modulus bounded by unity).  Are 
there plans to add the sorting arguments to the gges wrapper?  I started 
working on one, but 1) I haven't had a chance to quite finish debugging it 
and 2) I am not sure I have the skill level needed to finish it.  My fork of 
julia (https://github.com/cc7768/julia) has a branch called sort_gen_eig 
where I add `sort` and `selctg` as arguments to gges and to the appropriate 
schurfact calls.  For a quicker look, 
arguments to gges and the appropriate schurfact calls.  For a quicker look, 
I have a temporary file called test_sort.jl 
https://github.com/cc7768/julia/blob/sort_gen_eig/base/linalg/test_sort.jl 
that can be used to test what I would like to accomplish.

When I run the code in test_sort.jl, I get segmentation errors.  Could 
anyone help me with this?
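
To be concrete, the selection I am after looks roughly like this (just a 
sketch of the criterion using the existing `schurfact`; it assumes the 
GeneralizedSchur fields are `alpha` and `beta`, and it does not reorder the 
factorization, which is what the gges sorting arguments would do):

    A = randn(4, 4); B = randn(4, 4)
    F = schurfact(A, B)                  # generalized Schur factorization of (A, B)
    lambda = F.alpha ./ F.beta           # generalized eigenvalues
    nonexplosive = abs(lambda) .< 1.0    # the predicate a selctg callback would encode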

For additional background: SciPy has attempted to include the sort function 
in their call to ?gges, but had some issues on Windows machines (the code is 
here: 
https://github.com/scipy/scipy/blob/1507874c9f266f6687829bba3ee28ea7a696b657/scipy/linalg/_decomp_qz.py#L18 
and the issue is here: https://github.com/scipy/scipy/pull/3107).