Re: [julia-users] Re: Julia as a matlab mex function (and the struggle for independence from cygwin)

2014-12-03 Thread J Luis
Ok, this is to report my failure as well. I managed to build the MEX with 
VS. For that I had to create an import lib from libjulia.dll, which I did 
both with MinGW and VS (2013) but Matlab fails with

 >> mexjulia
Invalid MEX-file 'C:\SVN\mironeWC64\mexjulia.mexw64': A dynamic link 
library (DLL) initialization routine failed.

and I'm not able to find out why with Dependency Walker (libjulia.dll is 
from master, built by me as well)

Quarta-feira, 3 de Dezembro de 2014 3:19:44 UTC, Jameson escreveu:

 perhaps matlab is providing an older version of one of those dll files? if 
 launched from cygwin, you might still end up pulling in a newer one (from 
 cygwin), so julia is still happy. however, if launched directly, then julia 
 is unable to load.

 depends22 (Dependency Walker) will throw up a warning dialog box if one of 
 the required dependencies is missing. the other unresolved dlls are typically 
 wrapped inside one of the other dlls (advapi), and are unimportant.

 On Tue Dec 02 2014 at 8:28:07 PM Tracy Wadleigh tracy.w...@gmail.com wrote:

 Yet more detective work:

 I downloaded ListDLLs 
 http://technet.microsoft.com/en-us/sysinternals/bb896656. I dumped the 
 results for MATLAB.exe to files for both the successful launch and call via 
 cygwin and the unsuccessful non-cygwin launch. I then diff'd the files. The 
 only differences are the obvious ones:

  0x35da  0x25000   U:\prj\mexjulia\mexjulia.mexw64
  0x3673  0x336d000  C:\Julia-0.4.0-dev\bin\libjulia.dll
  0x35dd  0x1c000   C:\Julia-0.4.0-dev\bin\libgcc_s_seh-1.dll
  0x35e8  0xed000   C:\Julia-0.4.0-dev\bin\libstdc++-6.dll
  0x7c19  0x203c000  C:\Julia-0.4.0-dev\bin\libopenblas.DLL
  0x6f60  0x134000  C:\Julia-0.4.0-dev\bin\libgfortran-3.dll
  0x39aa  0x57000   C:\Julia-0.4.0-dev\bin\libquadmath-0.dll
  0x5ac1  0x182000  C:\Julia-0.4.0-dev\bin\libpcre.DLL
  0x39b0  0x27000   C:\Julia-0.4.0-dev\bin\libdSFMT.DLL
  0x3d7b  0x8b000   C:\Julia-0.4.0-dev\bin\libgmp.DLL
  0x64b9  0x1ac000  C:\Julia-0.4.0-dev\bin\libmpfr.DLL
  0x39b3  0x4e000   C:\Julia-0.4.0-dev\bin\libopenlibm.DLL

 I double-checked to make sure that C:\Julia-0.4.0-dev\bin is on the 
 system path, so that can't be it...

 :-(
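For reference, the ListDLLs comparison described above can be scripted. This is only a sketch: the `listdlls.exe` invocations are commented out (the Sysinternals tool and MATLAB exist only on the affected machine), and stand-in dump files make the diff step runnable anywhere:

```shell
# On the affected machine (assumes Sysinternals ListDLLs is on PATH):
#   listdlls.exe MATLAB.exe > dlls_cygwin.txt   # MATLAB launched via cygwin (works)
#   listdlls.exe MATLAB.exe > dlls_native.txt   # MATLAB launched directly (fails)
# Stand-in dumps so the diff step below can run here:
printf '%s\n' 'C:\Windows\System32\kernel32.dll' 'C:\Julia-0.4.0-dev\bin\libjulia.dll' > dlls_cygwin.txt
printf '%s\n' 'C:\Windows\System32\kernel32.dll' > dlls_native.txt
# Lines present in only one launch mode point at the DLLs whose load differs:
diff dlls_cygwin.txt dlls_native.txt || true
```

The same idea works with any per-process DLL dump; the point is just to diff the two launch modes.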



Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Mike Innes
To also clarify on why @sprintf behaves this way, the reason is that it
compiles down to specialised code for printing e.g. a float, followed by a
double, etc., whatever you specify. If you use a variable like `fmt` the
actual value is only available long after the code has already been
compiled.

This could certainly be supported by falling back to interpreting the
string at runtime, if necessary – the code just hasn't been written yet.

Another workaround is to call eval(:@sprintf($fmt, 29)). This will probably
be fairly slow, though. I have another solution which will help with
repeated format strings, but will need to send a PR to base to get that
working.
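For the curious, the eval workaround spelled out (a sketch; the `using Printf` line is only needed on later Julia versions, since on 0.3 `@sprintf` lives in Base):

```julia
using Printf  # Julia >= 0.7; on 0.3, @sprintf is in Base

fmt = "%2d"
# $fmt splices the *value* of fmt into the quoted expression before the
# macro expands, so @sprintf sees a literal format string:
s = eval(:(@sprintf($fmt, 29)))
println(s)  # prints 29
```

Since eval runs at global scope and re-expands the macro each call, this is convenient but slow in hot loops, which is what the runtime-formatting packages address.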

On 3 December 2014 at 01:58, John Myles White johnmyleswh...@gmail.com
wrote:

 I think both of the problems you're hitting are the same core problem:
 @sprintf is a macro, so its behavior is surprising when you think in terms
 of the run-time behaviors that control function evaluation. In particular,
 your line continuation problem wouldn't be fixed by having line
 continuations. No matter what, you're going to end up passing the wrong
 thing (a call to *) to a macro that expects a literal string.

 I believe Dahua tried to work on writing a more function oriented package
 for doing formatting: https://github.com/lindahua/Formatting.jl

 There might be some other solutions floating around as well.

  -- John

 On Dec 2, 2014, at 5:52 PM, Ronald L. Rivest rivest@gmail.com wrote:

 Another reason for (perhaps) wanting an expression as a format
 string:

 Julia doesn't have a line continuation mechanism.

 How does one split a long format string into parts without having
 a string expression?  The following doesn't work:

 @sprintf("This is the first part of a rather long format string, which I " *
          "would like to extend onto %d (or maybe more) lines.",
          5)

 Thanks.

 Cheers,
 Ron


 On Tuesday, December 2, 2014 8:40:56 PM UTC-5, Ronald L. Rivest wrote:

 I'm new to Julia, and got burned (aka wasted a fair amount of time)
 trying to sort out why @sprintf didn't work as I expected.

 julia> @sprintf("%2d", 29)
 "29"

 julia> fmt = "%2d"
 "%2d"

 julia> @sprintf(fmt, 29)
 ERROR: @sprintf: first argument must be a format string

 julia> @sprintf("%" * "2d", 29)
 ERROR: @sprintf: first argument must be a format string

 I would expect that @sprintf would allow an arbitrary string expression
 as its
 format string.  It obviously doesn't...

 There are many good reasons why one might want a format string expression
 instead
 of just a constant format string.  For example, the same format may be
 needed in
 several places in the code, or you may want to compute the format string
 based on
 certain item widths or other alignment needs.

 At a minimum, this should (please!) be noted in the documentation.

 Better would be to have the extended functionality...

 (Or maybe it exists already -- have I missed something?)

 Thanks!

 Cheers,
 Ron Rivest





Re: [julia-users] processor vector instructions?

2014-12-03 Thread Tim Holy
Some of your questions may be answered by 
https://software.intel.com/en-us/articles/vectorization-in-julia

--Tim
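The article above discusses, among other things, the `@simd` annotation. A minimal sketch of the kind of loop LLVM can turn into vector (and, on supporting CPUs, FMA) instructions — whether those instructions are actually emitted depends on your CPU and LLVM version:

```julia
# @simd tells the compiler that loop iterations may be reordered,
# which gives LLVM permission to use the CPU's vector units.
function simd_dot(x, y)
    s = zero(eltype(x))
    @simd for i in 1:length(x)
        @inbounds s += x[i] * y[i]
    end
    return s
end

simd_dot([1.0, 2.0], [3.0, 4.0])  # 11.0
```

You can inspect the generated code with `@code_llvm simd_dot(rand(8), rand(8))` and look for `<N x double>` vector types.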

On Tuesday, December 02, 2014 11:20:12 PM ivo welch wrote:
 dear julia experts:  my question is not important, but I am curious: there
 seems to be a bewildering array of float vector processor instructions from
 intel and amd.  the latest seem to be FMA3 and FMA4, if I am not mistaken.
  does julia use these CPU hardware features and/or their preceding ops?  if
 so, I also wonder how much difference it makes.  a while ago, I was
 wondering whether something like the amd kaveri's with their unified memory
 would give us the power of the graphics processor on the CPU without the
 hassle of GPU programming.  there is a whole toolchain to worry about, too,
 and I am too busy/lazy to see if I can get it to work.  I would only be
 helped by this if it were transparent.  so, mine is merely a curiosity
 question...does julia?  will julia?
 
 /iaw



[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Giacomo Kresak
Dear Steven, 
Thanks for your help.
Plot3D definitely works: 

fig = plt.figure()
ax = gca(projection="3d")
X = linspace(-5, 5, 300)
Y = linspace(-5, 5, 300)
R = linspace(-2pi, 2pi, 300)
X, Y = (X, Y)
R = sqrt(X.^2 + Y.^2)
Z = sin(R)
surf = plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=1, 
antialiased=false)
plot3D(X, Y, Z)
plt.show()

https://lh3.googleusercontent.com/-iKemrHdhloA/VH7mtge6AMI/AGU/8WGmpHOa2UU/s1600/Screen%2BShot%2B2014-12-03%2Bat%2B11.26.40%2BAM.png




Not able to get version surface3d of it as suggested by mplot3d code: 

https://lh6.googleusercontent.com/-p2tjaDKF4zM/VH7qUhg9OpI/AGg/rnPKhmo6D58/s1600/Screen%2BShot%2B2014-12-03%2Bat%2B11.31.25%2BAM.png


Some translations from Python to Julia work OK, but I am not able to translate 
these: 

X, Y = meshgrid(X, Y) 
and 
fig.colorbar(surf, shrink=0.5, aspect=5)

I tried surf(X,Y,Z) to get the surface3d aspect; Error: not defined! 
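A sketch of the two missing translations (assuming PyPlot's usual PyCall conventions, where Python's `fig.colorbar(...)` is spelled `fig[:colorbar](...)` in Julia):

```julia
# meshgrid is not needed in Julia: broadcasting a row vector against a
# column vector produces the full grid of values directly.
Y = collect(-5:0.5:5)          # column vector (21 elements)
X = Y'                         # row vector (note the transpose)
R = map(sqrt, X.^2 .+ Y.^2)    # 21×21 matrix, same as sqrt over meshgrid'd arrays
size(R)                        # (21, 21)

# The colorbar call goes through the figure object:
# fig[:colorbar](surf, shrink=0.5, aspect=5)
```

The colorbar line is commented out here because it needs a live figure and surface handle.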
In a second example I was able to use it: 

fig = plt.figure()
ax = gca(projection="3d")
theta = linspace(-4pi, 4pi, 300)
Z = linspace(-2, 2, 300)
R = Z.^2
X = R .* sin(theta)
Y = R .* cos(theta)
surf(X, Y, Z)  # It works! 
plt.show() 

Your suggestions on those translations are welcome. 
Finally, I would suggest organizing a translation of the mplot3d example code 
from Python to Julia for the documentation, but I will need the community's 
help!   
Thanks, G. 
 
On Tuesday, December 2, 2014 2:32:38 AM UTC+1, Steven G. Johnson wrote:

 On Monday, December 1, 2014 9:07:10 AM UTC-5, Giacomo Kresak wrote:

 Do you know if plt.plot is the right way to go? I am not able to use plot3d 
 or plot3D with Julia (PyPlot)! 


 Matplotlib's plot3D function is indeed exported by PyPlot.   (You can't 
 access it via plt.plot3D because it is not actually in the 
 matplotlib.pyplot module, it is in mpl_toolkits.mplot3d)  Just call it via, 
 for example:

   θ = linspace(0,6π,300)
   plot3D(cos(θ), sin(θ), θ)

 There are also other 3d plotting functions from Matplotlib, e.g. 
 surf(x,y,z) to do a surface plot.



Re: [julia-users] Resize image to specific size

2014-12-03 Thread Phelipe Wesley
Ok, thank you! I will try to use again .

2014-12-02 13:08 GMT-03:00 Andrei faithlessfri...@gmail.com:

 @Phelipe: yes, but as Tim noticed earlier, you need to get latest dev
 version and use qualified name to import it:

 julia> Pkg.checkout("Images")
 ...
 julia> using Images

 julia> methods(Images.imresize!)
 # 1 method for generic function "imresize!":
 imresize!(resized,original) at
 /home/slipslop/.julia/v0.3/Images/src/algorithms.jl:1018

 julia> methods(Images.imresize)
 # 1 method for generic function "imresize":
 imresize(original,new_size) at
 /home/slipslop/.julia/v0.3/Images/src/algorithms.jl:1039



 On Tue, Dec 2, 2014 at 6:11 PM, Phelipe Wesley 
 phelipewesleydeolive...@gmail.com wrote:

 Is this function implemented in the Images package?





[julia-users] How to achieve this?

2014-12-03 Thread Staro Pickle
Hi, I am asking a simple question because I can't find the answer in the 
documentation.

I want to have this: q[ ] is an array whose elements are vectors.
For example, q[1] = [1,2,3]; q[2] = [2,3,4]

Thank you.



[julia-users] Re: How to achieve this?

2014-12-03 Thread Simon Danisch
You mean like this?
q = Array(Array{Int}, 2) # 2 = space for two arrays; otherwise use 0 and push!(q, [1,2,3])
q[1] = [1,2,3]; q[2] = [2,3,4]
this works as well:
Array{Int}[[1,2,3], [2,3,4]]

Am Mittwoch, 3. Dezember 2014 14:32:39 UTC+1 schrieb Staro Pickle:

 Hi, I am asking a simple question because I can't find the answer in the 
 document.

 I want to have this: q[ ] is an array whose elements are vectors.
 For example, q[1] = [1,2,3]; q[2] = [2,3,4]

 Thank you.



Re: [julia-users] Re: How to achieve this?

2014-12-03 Thread Mike Innes
Or p = Vector{Int}[[1,2,3],[4,5,6]]
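Spelled out slightly, with hypothetical usage (the comments are mine; the typed-literal syntax is the same one shown above):

```julia
# A Vector whose elements are Int vectors:
q = Vector{Int}[[1, 2, 3], [2, 3, 4]]

q[1]                  # first element, the vector [1, 2, 3]
push!(q, [3, 4, 5])   # grows the outer array by one vector
length(q)             # now 3
```

Each element is an independent vector, so `q[1][2] = 99` mutates only the first inner vector.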

On 3 December 2014 at 13:41, Simon Danisch sdani...@gmail.com wrote:

 You mean like this?
 q = Array(Array{Int}, 2) # 2 - space for two arrays... otherwise use 0
 and push!(q, [1,2,3])
 q[1] = [1,2,3];q[2]=[2,3,4]
 this works as well:
 Array{Int}[[1,2,3], [2,3,4]]

 Am Mittwoch, 3. Dezember 2014 14:32:39 UTC+1 schrieb Staro Pickle:

 Hi, I am asking a simple question because I can't find the answer in the
 document.

 I want to have this: q[ ] is an array whose elements are vectors.
 For example, q[1] = [1,2,3]; q[2] = [2,3,4]

 Thank you.




Re: [julia-users] Re: [ANN - SolveDSGE] A Julia package to solve DSGE models

2014-12-03 Thread sebast...@debian.org
Hi Richard (and other participants to this thread),

Concerning Dynare: it is indeed written in MATLAB/Octave (with some parts in 
C++, like the preprocessor and some optimized portions of code). Dynare 
currently covers a very large range of features, and replicating all of them in 
Julia would take a lot of time. The Dynare Team has currently no plan to port 
the existing Dynare to Julia, given the size of this endeavour and the limited 
resources of the team.

However, as has already been pointed out, I have started writing something from 
scratch (https://github.com/DynareTeam/Dynare.jl). This package is able to 
solve models at first order in rational expectations mode, and with full 
nonlinearities in perfect foresight mode. The interesting thing about this 
package is that you can write equations naturally as you would in Dynare: the 
package uses the Julia parser to get the equations and to compute their 
symbolic derivatives, and then forms the model matrices automatically.

I am rather busy at the moment, so I have not been able to improve on the 
package in the last couple of months. But it should be functional. Bug reports 
and improvements are welcome. I definitely plan to continue working on it when 
time permits. I may get help from other members of the Dynare Team, but this is 
not guaranteed at this stage.

Richard: it would be great if we could join our efforts and create a nice Julia 
package for DSGE modeling, out of our two prototypes. I am open to any 
suggestion, so feel free to get back to me.

Best,

--
Sébastien Villemot
Economist
OFCE — Sciences Po
http://sebastien.villemot.name



[julia-users] Julia 0.3.3 fails Base.runtests()

2014-12-03 Thread Jeremy Cavanagh
Hi,
I've just run `brew update && brew upgrade` on my MacBook Air with OS X 
10.9.5, which resulted in julia 0.3.3 being installed. Running the 
recommended test `brew test -v julia` results in SUCCESS for the core, 
but as I have had many problems trying to get julia installed on linux (now 
successfully) I like to run the full test suite. This time I got the 
following output:

$ julia -e 'Base.runtests()'
 From worker 2: * linalg1
 From worker 3: * linalg2
 From worker 2: * linalg3
 From worker 2: * linalg4
 From worker 2: * core
 From worker 2: * keywordargs
 From worker 2: * numbers
 From worker 3: * strings
 From worker 3: * collections
 From worker 3: * hashing
 From worker 2: * remote
 From worker 2: * iobuffer
 From worker 2: * arrayops
 From worker 3: * reduce
 From worker 3: * reducedim
 From worker 3: * simdloop
 From worker 3: * blas
 From worker 3: * fft
 From worker 2: * dsp
 From worker 3: * sparse
 From worker 2: * bitarray
 exception on 3: ERROR: test error during maximum(abs(a \ b - full(a) \ b)) 
  < 1000 * eps()
 error compiling factorize: error compiling cholfact: error compiling cmn: 
 could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library not 
 loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
   Referenced from: 
 /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
   Reason: image not found
  in \ at linalg/generic.jl:233
  in anonymous at test.jl:62
  in do_test at test.jl:37
  in anonymous at no file:80
  in runtests at 
 /usr/local/Cellar/julia/0.3.3/share/julia/test/testdefs.jl:5
  in anonymous at multi.jl:855
  in run_work_thunk at multi.jl:621
  in anonymous at task.jl:855
 while loading sparse.jl, in expression starting on line 75
 ERROR: test error during maximum(abs(a \ b - full(a) \ b)) < 1000 * eps()
 error compiling factorize: error compiling cholfact: error compiling cmn: 
 could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library not 
 loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
   Referenced from: 
 /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
   Reason: image not found
 while loading sparse.jl, in expression starting on line 75
 while loading /usr/local/Cellar/julia/0.3.3/share/julia/test/runtests.jl, 
 in expression starting on line 39

 ERROR: A test has failed. Please submit a bug report including error 
 messages
 above and the output of versioninfo():
 Julia Version 0.3.3
 Commit b24213b* (2014-11-23 20:19 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin13.3.0)
   CPU: Intel(R) Core(TM) i7-2677M CPU @ 1.80GHz
   WORD_SIZE: 64
   BLAS: libopenblas (NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

  in error at error.jl:21
  in runtests at interactiveutil.jl:370
  in runtests at interactiveutil.jl:359
  in process_options at /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib
  in _start at /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib (repeats 2 
 times)


Although directed to submit a bug report there is no indication as to where 
this should go. I hope this is the correct place, if not I apologize.


Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Tony Fong
Just to add to John's comment on Formatting.jl: yes, use it. There's this 
NumFormat.jl (of yours truly) but I've since proposed to merge its entire 
code into Formatting.jl, subject to @lindahua's blessing. So we should have 
a wealth of options for formatting numbers with runtime arguments.

Tony

On Wednesday, December 3, 2014 4:16:08 PM UTC+7, Mike Innes wrote:

 To also clarify on why @sprintf behaves this way, the reason is that it 
 compiles down to specialised code for printing e.g. a float, followed by a 
 double, etc., whatever you specify. If you use a variable like `fmt` the 
 actual value is only available long after the code has already been 
 compiled.

 [...]


Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Stefan Karpinski
I gave this answer http://stackoverflow.com/a/19784718/659248 on
StackOverflow, which I think explains it decently. However, this has caused
enough people to be confused or frustrated that it should probably be
revisited. I'm not sure what the right solution is. Note that we can't just
call C's printf function because it's not type safe; Julia's printf is safe
– it does type conversion for you.
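A small illustration of that type safety (a sketch; `using Printf` is needed on Julia ≥ 0.7, while on 0.3 `@sprintf` is in Base):

```julia
using Printf  # Julia >= 0.7; on 0.3, @sprintf is in Base

# C's printf would read garbage off the stack if the argument type doesn't
# match the conversion; Julia's version converts the argument instead:
@sprintf("%f", 1)     # Int promoted to a float: "1.000000"
@sprintf("%d", 3.0)   # integral float accepted for %d: "3"
```

A non-integral value like `@sprintf("%d", 3.5)` raises an error rather than silently misformatting, which is the safety Stefan refers to.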



On Wed, Dec 3, 2014 at 9:15 AM, Tony Fong tony.hf.f...@gmail.com wrote:

 Just to add to John's comment on Formatting.jl: yes use it. There's this
 NumFormat.jl (of yours truly) but I've since proposed to merge its entire
 code into Formatting.jl, subject to @lindahua's blessing. So we should have
 a wealth of options to format numbers by runtime arguments.

 Tony

 On Wednesday, December 3, 2014 4:16:08 PM UTC+7, Mike Innes wrote:

 To also clarify on why @sprintf behaves this way, the reason is that it
 compiles down to specialised code for printing e.g. a float, followed by a
 double, etc., whatever you specify. If you use a variable like `fmt` the
 actual value is only available long after the code has already been
 compiled.

 This could certainly be supported this by falling back to interpreting
 the string at runtime, if necessary – the code just hasn't been written yet.

 Another workaround is to call eval(:@sprintf($fmt, 29)). This will
 probably be fairly slow, though. I have another solution which will help
 with repeated format strings, but will need to send a PR to base to get
 that working.

 On 3 December 2014 at 01:58, John Myles White johnmyl...@gmail.com
 wrote:

 I think both of the problems you're hitting are the same core problem:
 @sprintf is a macro, so its behavior is surprising when you think in terms
 of the run-time behaviors that  control function evaluation. In particular,
 your line continuation problem wouldn't be fixed by having line
 continuations. No matter what you're going to end up passing the wrong
 thing (a call to *) to a macro that expects a literal string.

 I believe Dahua tried to work on writing a more function oriented
 package for doing formatting: https://github.com/lindahua/Formatting.jl

 There might be some other solutions floating around as well.

  -- John

 On Dec 2, 2014, at 5:52 PM, Ronald L. Rivest rives...@gmail.com wrote:

 Another reason for (perhaps) wanting an expression as a format
 string:

 Julia doesn't have a line continuation mechanism.

 How does one split a long format string into parts without having
 a string expression?  The following doesn't work:

 @sprintf(This is the first part of a rather long format string,
 which I *
would like to extend onto %d (or maybe more)
 lines., 5)

 Thanks.

 Cheers,
 Ron


 On Tuesday, December 2, 2014 8:40:56 PM UTC-5, Ronald L. Rivest wrote:

 I'm new to Julia, and got burned (aka wasted a fair amount of time)
 trying to sort out why @sprintf didn't work as I expected.

 julia @sprintf(%2d,29)
 29

 julia fmt = %2d
 %2d

 julia @sprintf(fmt,29)
 ERROR: @sprintf: first argument must be a format string

 julia @sprintf(%*2d,29)
 ERROR: @sprintf: first argument must be a format string

 I would expect that @sprintf would allow an arbitrary string expression
 as its
 format string.  It obviously doesn't...

 There are many good reasons why one might want a format string
 expression instead
 of just a constant format string.  For example, the same format may be
 needed in
 several places in the code, or you may want to compute the format
 string based on
 certain item widths or other alignment needs.

 At a minimum, this should (please!) be noted in the documentation.

 Better would be to have the extended functionality...

 (Or maybe it exists already -- have I missed something?)

 Thanks!

 Cheers,
 Ron Rivest






Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Jameson Nash
this stack overflow question indicates that there are two options (
http://stackoverflow.com/questions/153559/what-are-some-good-profilers-for-native-c-on-windows
)

https://software.intel.com/sites/default/files/managed/cd/92/Intel-VTune-AmplifierXE-2015-Product-Brief-072914.pdf
($900)
http://www.glowcode.com/summary.htm ($500)


On Wed Dec 03 2014 at 9:11:28 AM Stefan Karpinski ste...@karpinski.org
wrote:

 This seems nuts. There have to be good profilers on Windows – how do those
 work?

 On Tue, Dec 2, 2014 at 11:55 PM, Jameson Nash vtjn...@gmail.com wrote:

 (I forgot to mention, that, to be fair, the windows machine that was used
 to run this test was an underpowered dual-core hyperthreaded atom
 processor, whereas the linux and mac machines were pretty comparable Xeon
 and sandybridge machines, respectively. I only gave windows a factor of 2
 advantage in the above computation in my accounting for this gap)

 On Tue Dec 02 2014 at 10:50:20 PM Tim Holy tim.h...@gmail.com wrote:

 Wow, those are pathetically slow backtraces. Since most of us don't have
 machines with 500 cores, I don't see anything we can do.

 --Tim

 On Wednesday, December 03, 2014 03:14:02 AM Jameson Nash wrote:
  you could copy the whole stack (typically only a few 100kb, max of
 8MB),
  then do the stack walk offline. if you could change the stack pages to
  copy-on-write, it may even not be too expensive.
 
  but this is the real problem:
 
  ```
 
  |__/   |  x86_64-linux-gnu
 
   julia> @time for i=1:10^4 backtrace() end
  elapsed time: 2.789268693 seconds (3200320016 bytes allocated, 89.29%
 gc
  time)
  ```
 
  ```
 
  |__/   |  x86_64-apple-darwin14.0.0
 
   julia> @time for i=1:10^4 backtrace() end
  elapsed time: 2.586410216 seconds (640048 bytes allocated, 89.96%
 gc
  time)
  ```
 
  ```
   jameson@julia:~/julia-win32$ ./usr/bin/julia.exe -E '@time for i=1:10^3
   backtrace() end'
  fixme:winsock:WS_EnterSingleProtocolW unknown Protocol 0x
  fixme:winsock:WS_EnterSingleProtocolW unknown Protocol 0x
  err:dbghelp_stabs:stabs_parse Unknown stab type 0x0a
  elapsed time: 22.6314386 seconds (320032016 bytes allocated, 1.51% gc
 time)
  ```
 
  ```
 
  |__/   |  i686-w64-mingw32
 
   julia> @time for i=1:10^4 backtrace() end
  elapsed time: 69.243275608 seconds (3200320800 bytes allocated, 13.16%
 gc
  time)
  ```
 
  And yes, those gc fractions are verifiably correct. With gc_disable(),
 they
  execute in 1/10 of the time. So, that pretty much means you must take
 1/100
  of the samples if you want to preserve roughly the same slow down. On
  linux, I find the slowdown to be in the range of 2-5x, and consider
 that to
  be pretty reasonable, especially for what you're getting. If you took
 the
  same number of samples on windows, it would cause a 200-500x slowdown
 (give
  or take a few percent). If you wanted to offload this work to other
 cores
  to get the same level of accuracy and no more slowdown than linux, you
  would need a machine with 200-500 processors (give or take 2-5)!
 
  (I think I did those conversions correctly. However, since I just did
 them
  for the purposes of this email, sans calculator, and as I was typing,
 let
  me know if I made more than a factor of 2 error somewhere, or just
 have fun
  reading https://what-if.xkcd.com/84/ instead)
 
  On Tue Dec 02 2014 at 6:23:07 PM Tim Holy tim.h...@gmail.com wrote:
   On Tuesday, December 02, 2014 10:24:43 PM Jameson Nash wrote:
You can't profile a moving target. The thread must be frozen first
 to
ensure the stack trace doesn't change while attempting to record it
  
   Got it. I assume there's no good way to make a copy and then
 discard if
   the
   copy is bad?
  
   --Tim
  
On Tue, Dec 2, 2014 at 5:12 PM Tim Holy tim.h...@gmail.com
 wrote:
 If the work of walking the stack is done in the thread, why does
 it
  
   cause
  
 any
 slowdown of the main process?

 But of course the time it takes to complete the backtrace sets an
 upper
 limit
 on how frequently you can take a snapshot. In that case, though,
  
   couldn't
  
 you
 just have the thread always collecting backtraces?

 --Tim

 On Tuesday, December 02, 2014 09:54:17 PM Jameson Nash wrote:
  That's essentially what we do now. (Minus the busy wait part).
 The

 overhead

  is too high to run it any more frequently -- it already causes
 a
  significant performance penalty on the system, even at the much
  lower
  sample rate than linux. However, I suspect the truncated
 backtraces
  
   on
  
  win32 were exaggerating the effect somewhat -- that should not
 be as
  much
  of an issue now.
 
  Sure, windows lets you snoop on (and modify) the address space
 of
  any
  process, you just need to find the right handle.
 
  On Tue, Dec 2, 2014 at 2:18 PM Tim Holy tim.h...@gmail.com
 wrote:
 [...]

[julia-users] QRPackedQ not properly transposed

2014-12-03 Thread Daniel Høegh
It has been discussed in: https://github.com/JuliaLang/julia/pull/9170

[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Steven G. Johnson
The following reproduces the surface plot example from mplot3d:

fig = figure()
X = linspace(-5, 5, 100)'
Y = linspace(-5, 5, 100)
R = sqrt(X.^2 .+ Y.^2)
Z = sin(R)
surf = plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0, 
antialiased=false, cmap="coolwarm")
zlim(-1.01, 1.01)
ax = gca()
ax[:zaxis][:set_major_locator](matplotlib[:ticker][:LinearLocator](10))
ax[:zaxis][:set_major_formatter](matplotlib[:ticker][:FormatStrFormatter]("%.02f"))
fig[:colorbar](surf, shrink=0.5, aspect=5)

(surf is a Matlab-like synonym for plot_surface in PyPlot.)

You don't need meshgrid because you can use broadcasting operations 
https://github.com/JuliaLang/julia/issues/4093 (.+) instead to combine a 
row vector (X: notice the ' to transpose the linspace) and a column vector 
(Y).

Unlike in mplot3d, you don't need to create the axis with 
gca(projection="3d"), because the 3d plotting functions like plot_surface 
do this for you in PyPlot.

As explained in the PyCall documentation, instead of foo.bar() in Python 
you do foo[:bar]() in Julia.  This is because Julia doesn't yet allow . to 
be overloaded https://github.com/JuliaLang/julia/issues/1974.


[julia-users] Quick method to give a random lattice vector that on average points in a particular direction?

2014-12-03 Thread DumpsterDoofus
Hello everyone,

I'm trying to make a function (let's call it `return_direction`) which 
takes in a Float between 0 and 2*pi, and randomly returns one of the 
following lattice vectors {(0,0), (1,0), (0,1), (1,1)}, weighted in such a 
way that the expectation value of `return_direction(theta)` is 
`(cos(theta), sin(theta))`.

To do that, I implemented the following:

function return_direction(theta::Float64)
    x = cos(theta)
    y = sin(theta)
    return (sign(x)*(rand() < abs(x)), sign(y)*(rand() < abs(y)))
end


One can check that this indeed gives the proper expectation value. To test 
its speed, I ran the following test:

function test()
    (i1, i2, x, y, a) = (0, 0, 0, 0, 10^6)
    for i in 1:a
        (i1, i2) = return_direction(0.22)
        x += i1
        y += i2
    end
    println((x/a, y/a))
end


Running `@time test()` gives an execution time of 0.7 seconds on my 
computer, or roughly 700 nanoseconds per `return_direction` call. 
Ordinarily this would be fine, but  `return_direction` is used in a 
lightweight script where it is called frequently, and it ends up taking a 
decent chunk of the runtime, so I was wondering if there was a way to speed 
this up.

Presumably there isn't that much that can be sped up, other than the line

return (sign(x)*(rand() < abs(x)), sign(y)*(rand() < abs(y)))

Here it multiplies a Bool and an Int64, which might result in a 
type-conversion slowdown. Is this the case? If so, can I speed it up by 
modifying it? Alternately, is there a faster method of returning a random 
lattice vector that on average points in a particular direction `theta`?
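For what it's worth, one possible variant of that hot line (a sketch, not benchmarked here): precompute cos/sin once per direction, and build each component with `ifelse` so everything stays `Int`. It has the same expectation value `(cos(theta), sin(theta))` as the original:

```julia
# Same distribution as return_direction, but takes the precomputed
# components and returns Ints directly, avoiding the Bool*Float multiply.
function return_direction_int(x::Float64, y::Float64)
    ix = ifelse(rand() < abs(x), ifelse(x < 0, -1, 1), 0)
    iy = ifelse(rand() < abs(y), ifelse(y < 0, -1, 1), 0)
    return (ix, iy)
end

return_direction_int(cos(0.0), sin(0.0))  # always (1, 0) for theta = 0
```

Hoisting `cos(theta)`/`sin(theta)` out of the caller's loop is where most of the savings would come from when theta is fixed; the `ifelse` form is branch-free but would need profiling to confirm any win.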


Re: [julia-users] Julia 0.3.3 fails Base.runtests()

2014-12-03 Thread Tim Holy
Hi Jeremy,

Thanks for reporting the problem. Julia's bug reports can be filed here:
https://github.com/JuliaLang/julia/issues

I've added an issue suggesting that the URL be specified in that message, but 
it would be best to open a separate issue for your test failure.

Best,
--Tim

On Wednesday, December 03, 2014 02:07:40 AM Jeremy Cavanagh wrote:
 Hi,
 I've just run `brew update && brew upgrade` on my MacBook Air with OS X
 10.9.5, which resulted in julia 0.3.3 being installed. Running the
 recommended test `brew test -v julia` results in SUCCESS for the core,
 but as I have had many problems trying to get julia installed on linux (now
 successfully) I like to run the full test suite. This time I got the
 following output:
 
 $ julia -e Base.runtests()
 
  From worker 2: * linalg1
  From worker 3: * linalg2
  From worker 2: * linalg3
  From worker 2: * linalg4
  From worker 2: * core
  From worker 2: * keywordargs
  From worker 2: * numbers
  From worker 3: * strings
  From worker 3: * collections
  From worker 3: * hashing
  From worker 2: * remote
  From worker 2: * iobuffer
  From worker 2: * arrayops
  From worker 3: * reduce
  From worker 3: * reducedim
  From worker 3: * simdloop
  From worker 3: * blas
  From worker 3: * fft
  From worker 2: * dsp
  From worker 3: * sparse
  From worker 2: * bitarray
  
  exception on 3: ERROR: test error during maximum(abs(a \ b - full(a) \ b))
  < 1000 * eps()
  error compiling factorize: error compiling cholfact: error compiling cmn:
  could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library not
  loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
  
Referenced from:
  /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
  
Reason: image not found
   
   in \ at linalg/generic.jl:233
   in anonymous at test.jl:62
   in do_test at test.jl:37
   in anonymous at no file:80
   in runtests at
  
  /usr/local/Cellar/julia/0.3.3/share/julia/test/testdefs.jl:5
  
   in anonymous at multi.jl:855
   in run_work_thunk at multi.jl:621
   in anonymous at task.jl:855
  
  while loading sparse.jl, in expression starting on line 75
  ERROR: test error during maximum(abs(a \ b - full(a) \ b)) < 1000 * eps()
  error compiling factorize: error compiling cholfact: error compiling cmn:
  could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library not
  loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
  
Referenced from:
  /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
  
Reason: image not found
  
  while loading sparse.jl, in expression starting on line 75
  while loading /usr/local/Cellar/julia/0.3.3/share/julia/test/runtests.jl,
  in expression starting on line 39
  
  ERROR: A test has failed. Please submit a bug report including error
  messages
  above and the output of versioninfo():
  Julia Version 0.3.3
  Commit b24213b* (2014-11-23 20:19 UTC)
  
  Platform Info:
System: Darwin (x86_64-apple-darwin13.3.0)
CPU: Intel(R) Core(TM) i7-2677M CPU @ 1.80GHz
WORD_SIZE: 64
BLAS: libopenblas (NO_AFFINITY NEHALEM)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
   
   in error at error.jl:21
   in runtests at interactiveutil.jl:370
   in runtests at interactiveutil.jl:359
   in process_options at /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib
   in _start at /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib (repeats 2
  
  times)
 
 Although directed to submit a bug report, there is no indication as to where
 it should go. I hope this is the correct place; if not, I apologize.



[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Steven G. Johnson
(It would be great if someone were to undertake a translation of code 
examples from Matplotlib to PyPlot.   Ideally, create one or more IJulia 
notebooks with all of the examples, and we can put them in the PyPlot 
github repository and link to the nbviewer pages from the README.  I'd be 
happy to advise.)


[julia-users] Gadfly conflict with JLD

2014-12-03 Thread xiongjieyi
I found that after using Gadfly, the `load` function in JLD does not work, as 
shown below:

julia> using Gadfly

julia> using HDF5, JLD

julia> save("tempfile.jld","var",[1,2,3])

julia> load("tempfile.jld","var")
ERROR: `convert` has no method matching convert(::Type{Int64...}, ::UInt64)
 in convert at base.jl:44
 in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:330
 in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:313
 in anonymous at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:972
 in jldopen at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:234
 in load at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:971

However, if I run `using HDF5, JLD` before `using Gadfly`, everything will 
be fine.

FYI:
julia> versioninfo()
Julia Version 0.4.0-dev+1928
Commit b1c99af* (2014-12-03 08:58 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3


[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Steven G. Johnson
The code can be simplified a bit further to:

X = linspace(-5, 5, 100)'
Y = linspace(-5, 5, 100)
R = sqrt(X.^2 .+ Y.^2)
Z = sin(R)
surf = plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0, 
antialiased=false, cmap="coolwarm")
zlim(-1.01,1.01)
ax = gca()
ax[:zaxis][:set_major_locator](matplotlib[:ticker][:LinearLocator](10))
ax[:zaxis][:set_major_formatter](matplotlib[:ticker][:FormatStrFormatter]("%.02f"))
colorbar(surf, shrink=0.5, aspect=5)


Re: [julia-users] Gadfly conflict with JLD

2014-12-03 Thread Isaiah Norton
See https://github.com/timholy/HDF5.jl/issues/160

On Wed, Dec 3, 2014 at 9:51 AM, xiongji...@gmail.com wrote:

 I found that after using Gadfly, the `load` function in JLD does not work, as
 shown below:

 julia> using Gadfly

 julia> using HDF5, JLD

 julia> save("tempfile.jld","var",[1,2,3])

 julia> load("tempfile.jld","var")
 ERROR: `convert` has no method matching convert(::Type{Int64...}, ::UInt64)
  in convert at base.jl:44
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:330
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:313
  in anonymous at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:972
  in jldopen at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:234
  in load at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:971

 However, if I run `using HDF5, JLD` before `using Gadfly`, everything will
 be fine.

 FYI:
 julia> versioninfo()
 Julia Version 0.4.0-dev+1928
 Commit b1c99af* (2014-12-03 08:58 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3



[julia-users] Re: Gadfly conflict with JLD

2014-12-03 Thread xiongjieyi
I guess maybe Gadfly forgot to import Base.convert but exported it 
somewhere.

On Wednesday, December 3, 2014 3:51:54 PM UTC+1, xiong...@gmail.com wrote:

 I found that after using Gadfly, the `load` function in JLD does not work, as 
 shown below:

 julia> using Gadfly

 julia> using HDF5, JLD

 julia> save("tempfile.jld","var",[1,2,3])

 julia> load("tempfile.jld","var")
 ERROR: `convert` has no method matching convert(::Type{Int64...}, ::UInt64)
  in convert at base.jl:44
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:330
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:313
  in anonymous at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:972
  in jldopen at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:234
  in load at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:971

 However, if I run `using HDF5, JLD` before `using Gadfly`, everything will 
 be fine.

 FYI:
 julia> versioninfo()
 Julia Version 0.4.0-dev+1928
 Commit b1c99af* (2014-12-03 08:58 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3



[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Steven G. Johnson
And actually this produces almost the same plot:

X = linspace(-5, 5, 100)'
Y = linspace(-5, 5, 100)
R = sqrt(X.^2 .+ Y.^2)
Z = sin(R)
surf = plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0, 
antialiased=false, cmap="coolwarm")
colorbar(surf, shrink=0.5, aspect=5)

I don't know why the example bothered to use LinearLocator etcetera when 
they weren't actually doing much with those options.


[julia-users] Re: Quick method to give a random lattice vector that on average points in a particular direction?

2014-12-03 Thread DumpsterDoofus
Oh, apologies, I forgot to convert the `sign` to Int64. Changing the line 
in question to 

return (int(sign(x))*(rand() < abs(x)), int(sign(y))*(rand() < abs(y)))

speeds up things to 300 nanoseconds per call. 

In any case, the original question still stands: could this be improved?
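
One further possibility, sketched as a hypothetical variant (the `directions` 
name and the batch interface are mine, and the syntax is current Julia rather 
than 0.3): when `theta` is fixed across many calls, the cos/sin/sign work can 
be hoisted out of the hot loop, so each draw costs only two rand() calls and 
two multiplies.

```julia
# Hypothetical batch variant: hoist the per-theta work out of the loop.
function directions(theta::Float64, n::Int)
    sx, ax = Int(sign(cos(theta))), abs(cos(theta))
    sy, ay = Int(sign(sin(theta))), abs(sin(theta))
    out = Vector{Tuple{Int,Int}}(undef, n)
    for i in 1:n
        # Each component is 0 or +-1; the mean over many draws
        # approaches (cos(theta), sin(theta)).
        out[i] = (sx * (rand() < ax), sy * (rand() < ay))
    end
    return out
end
```

Averaging the components of a large sample should reproduce 
(cos(theta), sin(theta)) to within sampling noise.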



On Wednesday, December 3, 2014 9:48:11 AM UTC-5, DumpsterDoofus wrote:

 Hello everyone,

 I'm trying to make a function (let's call it `return_direction`) which 
 takes in a Float between 0 and 2*pi, and randomly returns one of the 
 following lattice vectors {(0,0), (1,0), (0,1), (1,1)}, weighted in such a 
 way that the expectation value of `return_direction(theta)` is 
 `(cos(theta), sin(theta))`.

 To do that, I implemented the following:

 function return_direction(theta::Float64)


 x = cos(theta)


 y = sin(theta)


 return (sign(x)*(rand() < abs(x)), sign(y)*(rand() < abs(y)))


 end


 One can check that this indeed gives the proper expectation value. To test 
 its speed, I ran the following test:

 function test()


 (i1, i2, x, y, a) = (0,0,0,0,10^6)


 for i in 1:a


 (i1, i2) = return_direction(0.22)


 x += i1


 y += i2


 end


 println((x/a,y/a))


 end


 Running `@time test()` gives an execution time of 0.7 seconds on my 
 computer, or roughly 700 nanoseconds per `return_direction` call. 
 Ordinarily this would be fine, but  `return_direction` is used in a 
 lightweight script where it is called frequently, and it ends up taking a 
 decent chunk of the runtime, so I was wondering if there was a way to speed 
 this up.

 Presumably there isn't that much that can be sped up, other than the line

 return (sign(x)*(rand() < abs(x)), sign(y)*(rand() < abs(y)))


 Here it multiplies a Bool by the Float64 returned by `sign`, which might 
 result in a type-conversion slowdown. Is this the case? If so, can I speed 
 it up by modifying it? Alternately, is there a faster method of returning a 
 random lattice vector that on average points in a particular direction 
 `theta`?



[julia-users] Re: Gadfly conflict with JLD

2014-12-03 Thread xiongjieyi
I guess maybe some package Gadfly uses forgot to import Base.convert but 
exported it somewhere.

On Wednesday, December 3, 2014 3:51:54 PM UTC+1, xiong...@gmail.com wrote:

 I found that after using Gadfly, the `load` function in JLD does not work, as 
 shown below:

 julia> using Gadfly

 julia> using HDF5, JLD

 julia> save("tempfile.jld","var",[1,2,3])

 julia> load("tempfile.jld","var")
 ERROR: `convert` has no method matching convert(::Type{Int64...}, ::UInt64)
  in convert at base.jl:44
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:330
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:313
  in anonymous at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:972
  in jldopen at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:234
  in load at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:971

 However, if I run `using HDF5, JLD` before `using Gadfly`, everything will 
 be fine.

 FYI:
 julia> versioninfo()
 Julia Version 0.4.0-dev+1928
 Commit b1c99af* (2014-12-03 08:58 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3



[julia-users] Re: Quick method to give a random lattice vector that on average points in a particular direction?

2014-12-03 Thread DumpsterDoofus
Oh, apologies, I forgot to convert the `sign` to Int64. Changing the line 
in question to 

return (int(sign(x))*(rand() < abs(x)), int(sign(y))*(rand() < abs(y)))

In any case, the original question still stands: could this be improved?


Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Tim Holy
Some potentially-interesting links (of which I understand very little):
http://stackoverflow.com/questions/860602/recommended-open-source-profilers#comment2363112_1137133
http://stackoverflow.com/questions/8406175/optimizing-stack-walking-performance

I can tell from this comment:
https://github.com/JuliaLang/julia/issues/2597#issuecomment-15159868
that you already know about this (and its negatives):
http://www.lenholgate.com/blog/2008/09/alternative-call-stack-capturing.html

--Tim


On Wednesday, December 03, 2014 02:25:22 PM Jameson Nash wrote:
 this stack overflow question indicates that there are two options (
 http://stackoverflow.com/questions/153559/what-are-some-good-profilers-for-n
 ative-c-on-windows )
 
 https://software.intel.com/sites/default/files/managed/cd/92/Intel-VTune-Amp
 lifierXE-2015-Product-Brief-072914.pdf ($900)
 http://www.glowcode.com/summary.htm ($500)
 
 
 On Wed Dec 03 2014 at 9:11:28 AM Stefan Karpinski ste...@karpinski.org
 
 wrote:
  This seems nuts. There have to be good profilers on Windows – how do those
  work?
  
  On Tue, Dec 2, 2014 at 11:55 PM, Jameson Nash vtjn...@gmail.com wrote:
  (I forgot to mention, that, to be fair, the windows machine that was used
  to run this test was an underpowered dual-core hyperthreaded atom
  processor, whereas the linux and mac machines were pretty comparable Xeon
  and sandybridge machines, respectively. I only gave windows a factor of 2
  advantage in the above computation in my accounting for this gap)
  
  On Tue Dec 02 2014 at 10:50:20 PM Tim Holy tim.h...@gmail.com wrote:
  Wow, those are pathetically-slow backtraces. Since most of us don't have
  machines with 500 cores, I don't see anything we can do.
  
  --Tim
  
  On Wednesday, December 03, 2014 03:14:02 AM Jameson Nash wrote:
   you could copy the whole stack (typically only a few 100kb, max of
  
  8MB),
  
   then do the stack walk offline. if you could change the stack pages to
   copy-on-write, it may even not be too expensive.
   
   but this is the real problem:
   
   ```
   
   |__/   |  x86_64-linux-gnu
   
   julia> @time for i=1:10^4 backtrace() end
   elapsed time: 2.789268693 seconds (3200320016 bytes allocated, 89.29% gc
   time)
   ```
   
   ```
   
   |__/   |  x86_64-apple-darwin14.0.0
   
   julia> @time for i=1:10^4 backtrace() end
   elapsed time: 2.586410216 seconds (640048 bytes allocated, 89.96% gc
   time)
   ```
   
   ```
   jameson@julia:~/julia-win32$ ./usr/bin/julia.exe -E "@time for i=1:10^3
   backtrace() end"
   fixme:winsock:WS_EnterSingleProtocolW unknown Protocol 0x
   fixme:winsock:WS_EnterSingleProtocolW unknown Protocol 0x
   err:dbghelp_stabs:stabs_parse Unknown stab type 0x0a
   elapsed time: 22.6314386 seconds (320032016 bytes allocated, 1.51% gc
   time)
   ```
   
   ```
   
   |__/   |  i686-w64-mingw32
   
   julia> @time for i=1:10^4 backtrace() end
   elapsed time: 69.243275608 seconds (3200320800 bytes allocated, 13.16% gc
   time)
   ```
   
   And yes, those gc fractions are verifiably correct. With gc_disable(),
  
  they
  
   execute in 1/10 of the time. So, that pretty much means you must take
  
  1/100
  
   of the samples if you want to preserve roughly the same slow down. On
   linux, I find the slowdown to be in the range of 2-5x, and consider
  
  that to
  
   be pretty reasonable, especially for what you're getting. If you took
  
  the
  
   same number of samples on windows, it would cause a 200-500x slowdown
  
  (give
  
   or take a few percent). If you wanted to offload this work to other
  
  cores
  
   to get the same level of accuracy and no more slowdown than linux, you
   would need a machine with 200-500 processors (give or take 2-5)!
   
   (I think I did those conversions correctly. However, since I just did
  
  them
  
   for the purposes of this email, sans calculator, and as I was typing,
  
  let
  
   me know if I made more than a factor of 2 error somewhere, or just
  
  have fun
  
   reading https://what-if.xkcd.com/84/ instead)
   
   On Tue Dec 02 2014 at 6:23:07 PM Tim Holy tim.h...@gmail.com wrote:
On Tuesday, December 02, 2014 10:24:43 PM Jameson Nash wrote:
 You can't profile a moving target. The thread must be frozen first
  
  to
  
 ensure the stack trace doesn't change while attempting to record
 it

Got it. I assume there's no good way to make a copy and then
  
  discard if
  
the
copy is bad?

--Tim

 On Tue, Dec 2, 2014 at 5:12 PM Tim Holy tim.h...@gmail.com
  
  wrote:
  If the work of walking the stack is done in the thread, why does
  
  it
  
cause

  any
  slowdown of the main process?
  
  But of course the time it takes to complete the backtrace sets
  an
  upper
  limit
  on how frequently you can take a snapshot. In that case, though,

couldn't

  you
   

Re: [julia-users] Gadfly conflict with JLD

2014-12-03 Thread João Felipe Santos
As in the issue posted by Isaiah, this is a problem with the Color package (or 
interaction between Color and another package). In my case, updating Color did 
not work, so I pinned it to v0.3.9.

João
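
For readers hitting the same conflict, pinning with the 0.3-era package 
manager looked roughly like this (a sketch of the old built-in Pkg commands; 
the modern Pkg API differs):

```julia
# Julia 0.3-era package manager (sketch). In current Julia the rough
# equivalent is: import Pkg; Pkg.pin(name="Color", version="0.3.9")
Pkg.pin("Color", v"0.3.9")   # hold Color at the last known-good version
Pkg.resolve()                # re-resolve the dependency graph
```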

 On Dec 3, 2014, at 9:57 AM, xiongji...@gmail.com wrote:
 
 I guess maybe some packages Gadfly used forgot importing base.convert but 
 exported it in somewhere.
 
 On Wednesday, December 3, 2014 3:51:54 PM UTC+1, xiong...@gmail.com wrote:
 I found that after using Gadfly, the `load` function in JLD does not work, as 
 shown below:
 
 julia> using Gadfly
 
 julia> using HDF5, JLD
 
 julia> save("tempfile.jld","var",[1,2,3])
 
 julia> load("tempfile.jld","var")
 ERROR: `convert` has no method matching convert(::Type{Int64...}, ::UInt64)
  in convert at base.jl:44
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:330
  in read at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:313
  in anonymous at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:972
  in jldopen at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:234
  in load at /home/JXiong/.julia/v0.4/HDF5/src/JLD.jl:971
 
 However, if I run `using HDF5, JLD` before `using Gadfly`, everything will be 
 fine.
 
 FYI:
 julia> versioninfo()
 Julia Version 0.4.0-dev+1928
 Commit b1c99af* (2014-12-03 08:58 UTC)
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Xeon(R) CPU E7- 4830  @ 2.13GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3



Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Jeff Waller
I think this could be done by, instead of expanding into specific 
(optimized) code dedicated to each argument, expanding into a 
tree of if statements connected to a bunch of Expr(:call, :typeof, more 
args) and ? : (whatever the parse tree is for that).  Essentially a 
function call turned inline.  Is there support for that (inlining 
functions) already?


[julia-users] Re: Text editor for coding Julia: Atom vs Light Table vs Bracket

2014-12-03 Thread Jutho
I remember reading somewhere that Codebox might support Julia in the near 
future. Does anybody have any comments or information about this?

On Friday, November 28, 2014 17:39:43 UTC+1, Daniel Carrera wrote:

 Hi everyone,

 Can anyone here comment or share opinions on the newer text editors -- 
 Atom, Light Table, Bracket -- that seem to be trying to supplant Sublime 
 Text? A lot of the information you find online seems to be geared toward 
 web development, but my interest is programming with Julia (and Fortran). 
  That's why I'm asking for opinions on the Julia mailing list.

 I currently use Sublime Text, and I am very happy with it. But I am 
 curious about the others, since they seem to intentionally copy the most 
 important features from Sublime Text. If you have experience with these 
 editors and can tell me why you like one better than another, I would love 
 to hear it.

 Cheers,
 Daniel.



Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Stefan Karpinski
I don't see how inlining solves this problem though – if format strings are
run-time, you can't inline anything since you don't know what to inline
until it runs, at which point it's too late.

On Wed, Dec 3, 2014 at 10:20 AM, Stefan Karpinski ste...@karpinski.org
wrote:

 If Julia didn't do inlining, all of your code would take years to run.

 On Wed, Dec 3, 2014 at 10:08 AM, Jeff Waller truth...@gmail.com wrote:

  I think this could be done by, instead of expanding into specific
  (optimized) code dedicated to each argument, expanding into a
  tree of if statements connected to a bunch of Expr(:call, :typeof, more
  args) and ? : (whatever the parse tree is for that).  Essentially a
  function call turned inline.  Is there support for that (inlining
  functions) already?





Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Tim Holy
That's a pretty serious bummer. I can't believe anybody puts up with this.

Should we change the default initialization
https://github.com/JuliaLang/julia/blob/b1c99af9bdeef22e0999b28388597757541e2cc7/base/profile.jl#L44
so that, on Windows, it fires every 100ms or so? And add a note to the Profile 
docs?

--Tim
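
For concreteness, here is roughly what raising the delay would look like from 
the REPL (a sketch: the 100 ms figure is the suggestion above, not a measured 
optimum; the keyword form is current-Julia syntax, whereas in 0.3 Profile 
lives in Base and `init` takes positional arguments):

```julia
using Profile  # stdlib module in current Julia; part of Base in 0.3

# Sample every 100 ms so each (slow) Windows backtrace can complete
# before the next sample fires.
Profile.init(n = 10^6, delay = 0.1)

# Called with no arguments, init() reports the current (n, delay) settings.
cur_n, cur_delay = Profile.init()
```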

On Wednesday, December 03, 2014 03:37:59 PM Jameson Nash wrote:
 The suggestion apparently is to use Event Tracing for Windows (aka
 ptrace/dtrace). Yes, that is faster (on linux too...), but misses the point
 entirely of profiling user code.
 
 the other offerings typically wrap StackWalk64 (as we do), and complain
 about how absurdly slow it is
 
 we used to use RtlCaptureStackBackTrace, but it often failed to give useful
 backtraces. I think it depends too heavily on walking the EBP pointer chain
 (which typically doesn't exist on x86_64). As it happens, the remaining
 suggestions fall into the category of well, obviously you should just
 write your own EBP (32-bit base pointer register) pointer chain walk
 algorithm. here, I'll even write part of it for you ... which would be
 very helpful, if RBP (64-bit base pointer register) was used to make stack
 frame chains (hint: it isn't). and these days, the EBP isn't used to make
 stack pointer chains on x86 either.
 
 llvm3.5 contains the ability to interpret COFF files, so you could
 presumably write your own stack-walk algorithm. i don't recommend it,
 however. you might have to call out to StackWalk anyways to access the
 Microsoft symbol server (yes, off their network servers) to complete the
 stack walk symbol lookup correctly
 
 On Wed Dec 03 2014 at 10:04:19 AM Tim Holy tim.h...@gmail.com wrote:
  Some potentially-interesting links (of which I understand very little):
  http://stackoverflow.com/questions/860602/recommended-open-  
  source-profilers#comment2363112_1137133
  http://stackoverflow.com/questions/8406175/optimizing-stack-  
  walking-performance
  
  I can tell from this comment:
  https://github.com/JuliaLang/julia/issues/2597#issuecomment-15159868
  that you already know about this (and its negatives):
  http://www.lenholgate.com/blog/2008/09/alternative-call-stac
  k-capturing.html
  
  --Tim
  
  On Wednesday, December 03, 2014 02:25:22 PM Jameson Nash wrote:
   this stack overflow question indicates that there are two options (
   http://stackoverflow.com/questions/153559/what-are-some-  
  good-profilers-for-n
  
   ative-c-on-windows )
   
   https://software.intel.com/sites/default/files/managed/cd/
  
  92/Intel-VTune-Amp
  
   lifierXE-2015-Product-Brief-072914.pdf ($900)
   http://www.glowcode.com/summary.htm ($500)
   
   
   On Wed Dec 03 2014 at 9:11:28 AM Stefan Karpinski ste...@karpinski.org
   
   wrote:
This seems nuts. There have to be good profilers on Windows – how do
  
  those
  
work?

On Tue, Dec 2, 2014 at 11:55 PM, Jameson Nash vtjn...@gmail.com
  
  wrote:
(I forgot to mention, that, to be fair, the windows machine that was
  
  used
  
to run this test was an underpowered dual-core hyperthreaded atom
processor, whereas the linux and mac machines were pretty comparable
  
  Xeon
  
and sandybridge machines, respectively. I only gave windows a factor
  
  of 2
  
advantage in the above computation in my accounting for this gap)

On Tue Dec 02 2014 at 10:50:20 PM Tim Holy tim.h...@gmail.com
  
  wrote:
Wow, those are pathetically-slow backtraces. Since most of us don't
  
  have
  
machines with 500 cores, I don't see anything we can do.

--Tim

On Wednesday, December 03, 2014 03:14:02 AM Jameson Nash wrote:
 you could copy the whole stack (typically only a few 100kb, max of

8MB),

 then do the stack walk offline. if you could change the stack
  
  pages to
  
 copy-on-write, it may even not be too expensive.
 
 but this is the real problem:
 
 ```
 
 |__/   |  x86_64-linux-gnu
 
  julia> @time for i=1:10^4 backtrace() end
 elapsed time: 2.789268693 seconds (3200320016 bytes allocated,
  
  89.29%
  
gc

 time)
 ```
 
 ```
 
 |__/   |  x86_64-apple-darwin14.0.0
 
  julia> @time for i=1:10^4 backtrace() end
 elapsed time: 2.586410216 seconds (640048 bytes allocated,
  
  89.96%
  
gc

 time)
 ```
 
 ```
 jameson@julia:~/julia-win32$ ./usr/bin/julia.exe -E  @time for

i=1:10^3

 backtrace() end 
 fixme:winsock:WS_EnterSingleProtocolW unknown Protocol
  
  0x
  
 fixme:winsock:WS_EnterSingleProtocolW unknown Protocol
  
  0x
  
 err:dbghelp_stabs:stabs_parse Unknown stab type 0x0a
 elapsed time: 22.6314386 seconds (320032016 bytes allocated, 1.51%
  
  gc
  
time)

 ```
 
 ```
 
 |__/   |  i686-w64-mingw32
 
  julia> @time for i=1:10^4 backtrace() end
 elapsed time: 

[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Daniel Høegh
I have found a good notebook at https://gist.github.com/gizmaa/7214002

Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Jameson Nash
Yes, probably (I thought we already had). Someone would need to do some
comparison work though first.
On Wed, Dec 3, 2014 at 10:57 AM Tim Holy tim.h...@gmail.com wrote:

 That's a pretty serious bummer. I can't believe anybody puts up with this.

 Should we change the default initialization
 https://github.com/JuliaLang/julia/blob/b1c99af9bdeef22e0999b283885977
 57541e2cc7/base/profile.jl#L44
 so that, on Windows, it fires every 100ms or so? And add a note to the
 Profile
 docs?

 --Tim

 On Wednesday, December 03, 2014 03:37:59 PM Jameson Nash wrote:
  The suggestion apparently is to use Event Tracing for Windows (aka
  ptrace/dtrace). Yes, that is faster (on linux too...), but misses the
 point
  entirely of profiling user code.
 
   the other offerings typically wrap StackWalk64 (as we do), and complain
   about how absurdly slow it is
 
   we used to use RtlCaptureStackBackTrace, but it often failed to give
   useful backtraces. I think it depends too heavily on walking the EBP
   pointer chain (which typically doesn't exist on x86_64). As it happens,
   the remaining
  suggestions fall into the category of well, obviously you should just
  write your own EBP (32-bit base pointer register) pointer chain walk
  algorithm. here, I'll even write part of it for you ... which would be
  very helpful, if RBP (64-bit base pointer register) was used to make
 stack
  frame chains (hint: it isn't). and these days, the EBP isn't used to make
  stack pointer chains on x86 either.
 
  llvm3.5 contains the ability to interpret COFF files, so you could
  presumably write your own stack-walk algorithm. i don't recommend it,
   however. you might have to call out to StackWalk anyways to access the
   Microsoft symbol server (yes, off their network servers) to complete the
   stack walk symbol lookup correctly
 
  On Wed Dec 03 2014 at 10:04:19 AM Tim Holy tim.h...@gmail.com wrote:
   Some potentially-interesting links (of which I understand very little):
   http://stackoverflow.com/questions/860602/recommended-open- 
 source-profilers#comment2363112_1137133
   http://stackoverflow.com/questions/8406175/optimizing-stack- 
 walking-performance
  
   I can tell from this comment:
   https://github.com/JuliaLang/julia/issues/2597#issuecomment-15159868
   that you already know about this (and its negatives):
   http://www.lenholgate.com/blog/2008/09/alternative-call-stac
   k-capturing.html
  
   --Tim
  
   On Wednesday, December 03, 2014 02:25:22 PM Jameson Nash wrote:
this stack overflow question indicates that there are two options (
http://stackoverflow.com/questions/153559/what-are-some- 
   good-profilers-for-n
  
ative-c-on-windows )
   
https://software.intel.com/sites/default/files/managed/cd/
  
   92/Intel-VTune-Amp
  
lifierXE-2015-Product-Brief-072914.pdf ($900)
http://www.glowcode.com/summary.htm ($500)
   
   
On Wed Dec 03 2014 at 9:11:28 AM Stefan Karpinski 
 ste...@karpinski.org
   
wrote:
 This seems nuts. There have to be good profilers on Windows – how
 do
  
   those
  
 work?

 On Tue, Dec 2, 2014 at 11:55 PM, Jameson Nash vtjn...@gmail.com
  
   wrote:
 (I forgot to mention, that, to be fair, the windows machine that
 was
  
   used
  
 to run this test was an underpowered dual-core hyperthreaded atom
 processor, whereas the linux and mac machines were pretty
 comparable
  
   Xeon
  
 and sandybridge machines, respectively. I only gave windows a
 factor
  
   of 2
  
 advantage in the above computation in my accounting for this gap)

 On Tue Dec 02 2014 at 10:50:20 PM Tim Holy tim.h...@gmail.com
  
   wrote:
 Wow, those are pathetically-slow backtraces. Since most of us
 don't
  
   have
  
 machines with 500 cores, I don't see anything we can do.

 --Tim

 On Wednesday, December 03, 2014 03:14:02 AM Jameson Nash wrote:
  you could copy the whole stack (typically only a few 100kb,
 max of

 8MB),

  then do the stack walk offline. if you could change the stack
  
   pages to
  
  copy-on-write, it may even not be too expensive.
 
  but this is the real problem:
 
  ```
 
  |__/   |  x86_64-linux-gnu
 
   julia> @time for i=1:10^4 backtrace() end
  elapsed time: 2.789268693 seconds (3200320016 bytes allocated,
  
   89.29%
  
 gc

  time)
  ```
 
  ```
 
  |__/   |  x86_64-apple-darwin14.0.0
 
   julia> @time for i=1:10^4 backtrace() end
  elapsed time: 2.586410216 seconds (640048 bytes allocated,
  
   89.96%
  
 gc

  time)
  ```
 
  ```
  jameson@julia:~/julia-win32$ ./usr/bin/julia.exe -E  @time
 for

 i=1:10^3

  backtrace() end 
  fixme:winsock:WS_EnterSingleProtocolW unknown Protocol
  
   0x
  
  fixme:winsock:WS_EnterSingleProtocolW unknown Protocol
  
   0x
  
  err:dbghelp_stabs:stabs_parse 

Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Jeff Waller


Unfortunately the number of types or arguments one can encounter is 
 unbounded. Are you talking about the format specifiers or the arguments?


Yea, the format specifiers; poor choice of words, my bad.


Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Tim Holy
Can somebody on a Windows system report back with the output of 
`Profile.init()`?

--Tim

On Wednesday, December 03, 2014 04:38:01 PM Jameson Nash wrote:
 Yes, probably (I thought we already had). Someone would need to do some
 comparison work though first.
 
 On Wed, Dec 3, 2014 at 10:57 AM Tim Holy tim.h...@gmail.com wrote:
  That's a pretty serious bummer. I can't believe anybody puts up with this.
  
  Should we change the default initialization
  https://github.com/JuliaLang/julia/blob/b1c99af9bdeef22e0999b283885977
  57541e2cc7/base/profile.jl#L44
  so that, on Windows, it fires every 100ms or so? And add a note to the
  Profile
  docs?
  
  --Tim
  
  On Wednesday, December 03, 2014 03:37:59 PM Jameson Nash wrote:
   The suggestion apparently is to use Event Tracing for Windows (aka
   ptrace/dtrace). Yes, that is faster (on linux too...), but misses the
  
  point
  
   entirely of profiling user code.
   
    the other offerings typically wrap StackWalk64 (as we do), and complain
    about how absurdly slow it is
   
    we used to use RtlCaptureStackBackTrace, but it often failed to give
    useful backtraces. I think it depends too heavily on walking the EBP
    pointer chain (which typically doesn't exist on x86_64). As it happens,
    the remaining
   suggestions fall into the category of well, obviously you should just
   write your own EBP (32-bit base pointer register) pointer chain walk
   algorithm. here, I'll even write part of it for you ... which would be
   very helpful, if RBP (64-bit base pointer register) was used to make
  
  stack
  
   frame chains (hint: it isn't). and these days, the EBP isn't used to
   make
   stack pointer chains on x86 either.
   
    llvm3.5 contains the ability to interpret COFF files, so you could
    presumably write your own stack-walk algorithm. i don't recommend it,
    however. you might have to call out to StackWalk64 anyway to access the
    microsoft symbol server (yes, off their network servers) to complete the
    stack walk symbol lookup correctly
   
   On Wed Dec 03 2014 at 10:04:19 AM Tim Holy tim.h...@gmail.com wrote:
Some potentially-interesting links (of which I understand very
little):
 http://stackoverflow.com/questions/860602/recommended-open-source-profilers#comment2363112_1137133
 http://stackoverflow.com/questions/8406175/optimizing-stack-walking-performance
  
I can tell from this comment:
https://github.com/JuliaLang/julia/issues/2597#issuecomment-15159868
that you already know about this (and its negatives):
 http://www.lenholgate.com/blog/2008/09/alternative-call-stack-capturing.html

--Tim

On Wednesday, December 03, 2014 02:25:22 PM Jameson Nash wrote:
  this stack overflow question indicates that there are two options (
  http://stackoverflow.com/questions/153559/what-are-some-good-profilers-for-native-c-on-windows )

  https://software.intel.com/sites/default/files/managed/cd/92/Intel-VTune-AmplifierXE-2015-Product-Brief-072914.pdf ($900)
  http://www.glowcode.com/summary.htm ($500)
 
 
 On Wed Dec 03 2014 at 9:11:28 AM Stefan Karpinski 
  
  ste...@karpinski.org
  
 wrote:
   This seems nuts. There have to be good profilers on Windows – how do those
   work?
  
  On Tue, Dec 2, 2014 at 11:55 PM, Jameson Nash vtjn...@gmail.com

wrote:
  (I forgot to mention, that, to be fair, the windows machine that was used
  to run this test was an underpowered dual-core hyperthreaded atom
  processor, whereas the linux and mac machines were pretty comparable Xeon
  and sandybridge machines, respectively. I only gave windows a factor of 2
  advantage in the above computation in my accounting for this gap)
  
  On Tue Dec 02 2014 at 10:50:20 PM Tim Holy tim.h...@gmail.com

wrote:
  Wow, those are pathetically-slow backtraces. Since most of us don't have
  machines with 500 cores, I don't see anything we can do.
  
  --Tim
  
  On Wednesday, December 03, 2014 03:14:02 AM Jameson Nash wrote:
   you could copy the whole stack (typically only a few 100kb, max of 8MB),
   then do the stack walk offline. if you could change the stack pages to
   copy-on-write, it may even not be too expensive.
   
   but this is the real problem:
   
    ```
    |__/   |  x86_64-linux-gnu

    julia> @time for i=1:10^4 backtrace() end
    elapsed time: 2.789268693 seconds (3200320016 bytes allocated, 89.29% gc
    time)
    ```
   
    ```
    |__/   |  x86_64-apple-darwin14.0.0

    julia> @time for i=1:10^4 backtrace() end
    elapsed time: 2.586410216 seconds (640048 

Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Daniel Høegh
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.3 (2014-11-23 20:19 UTC)
 _/ |\__'_|_|_|\__'_|  |
|__/                   |  x86_64-w64-mingw32

julia> Profile.init()
(100,0.001)

julia>
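For context on that output tuple: the two numbers are the sample-buffer size and the delay between samples in seconds. A sketch of adjusting them, phrased for current Julia where `Profile` is a stdlib module with keyword arguments (on 0.3 they were positional), not code from the thread:

```julia
# Sketch (my own, hedged): set a large sample buffer and the coarser
# ~100 ms delay suggested upthread for Windows, then read the settings back.
using Profile

Profile.init(n = 10^6, delay = 0.1)  # big buffer, one sample per 100 ms
n, delay = Profile.init()            # with no arguments, returns (n, delay)
```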

[julia-users] Re: Text editor for coding Julia: Atom vs Light Table vs Brackets

2014-12-03 Thread cdm

over at https://www.codebox.io, it says that accounts have sudo access in 
terminals on virtual machines,
so technically this can be accomplished; similar has been done at 
koding.com ...

currently, codebox.io is not advertising an off the shelf Julia stack, so 
it is not obvious what might be
involved with getting their cloud IDE paired up with Julia running on a 
machine you physically have
access to.

somewhat concerning: dead blog link, zero twitter activity since July, 
github quiet for 3 months and
this ... https://github.com/CodeboxIDE/codebox/issues/454

good luck,

cdm


On Wednesday, December 3, 2014 7:18:57 AM UTC-8, Jutho wrote:

 I remember reading somewhere that Codebox might support Julia in the near 
 future. Does anybody have any comments or information about this?



Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Isaiah Norton

  The accuracy of windows timers is somewhat questionable.


I don't know if 5% is good enough for this purpose, but: one of our
collaborators uses (a lightly modified version of) the code below in a
real-time imaging application:

http://www.codeguru.com/cpp/w-p/system/timers/article.php/c5759/Creating-a-HighPrecision-HighResolution-and-Highly-Reliable-Timer-Utilising-Minimal-CPU-Resources.htm

On Tue, Dec 2, 2014 at 1:20 PM, Jameson Nash vtjn...@gmail.com wrote:

 Correct. Windows imposes a much higher overhead on just about every aspect
 of doing profiling. Unfortunately, there isn't much we can do about this,
 other than to complain to Microsoft. (It doesn't have signals, so we must
 emulate them with a separate thread. The accuracy of windows timers is
 somewhat questionable. And the stack walk library (for recording the
 backtrace) is apparently just badly written and therefore insanely slow and
 memory hungry.)

 On Tue, Dec 2, 2014 at 12:59 PM Tim Holy tim.h...@gmail.com wrote:

 I think it's just that Windows is bad at scheduling tasks with
 short-latency,
 high-precision timing, but I am not the right person to answer such
 questions.

 --Tim

 On Tuesday, December 02, 2014 09:57:28 AM Peter Simon wrote:
  I have also experienced the inaccurate profile timings on Windows.  Is the
  reason for the bad profiler performance on Windows understood?  Are there
  plans for improvement?
 
  Thanks,
  --Peter
 
  On Tuesday, December 2, 2014 3:57:16 AM UTC-8, Tim Holy wrote:
   By default, the profiler takes one sample per millisecond. In practice,
   the timing is quite precise on Linux, seemingly within a factor of twoish
   on OSX, and nowhere close on Windows. So at least on Linux you can simply
   read samples as milliseconds.
  
   If you want to visualize the relative contributions of each statement, I
   highly recommend ProfileView. If you use LightTable, it's already built-in
   via the profile() command. The combination of ProfileView and @profile is,
   in my (extremely biased) opinion, quite powerful compared to tools I used
   previously in other programming environments.
  
   Finally, there's IProfile.jl, which works via a completely different
   mechanism but does report raw timings (with some pretty big caveats).
  
   Best,
   --Tim
  
   On Monday, December 01, 2014 10:13:16 PM Christoph Ortner wrote:
 How do you get timings from the Julia profiler, or even better, %-es? I
 guess one can convert from the numbers one gets, but it is a bit painful?
  
Christoph




[julia-users] Re: Plot3D with Julia + PyPlot

2014-12-03 Thread Steven G. Johnson
Thanks, I've posted an issue to discuss this:

 https://github.com/gizmaa/Julia_Examples/issues/1

On Wednesday, December 3, 2014 11:37:20 AM UTC-5, Daniel Høegh wrote:

 I have found a good notebook in https://gist.github.com/gizmaa/7214002



Re: [julia-users] Re: Simple Finite Difference Methods

2014-12-03 Thread Tim Holy
Thanks. Looks like that needs to be changed.

--Tim

On Wednesday, December 03, 2014 08:53:50 AM Daniel Høegh wrote:
                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" for help.
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.3.3 (2014-11-23 20:19 UTC)
   _/ |\__'_|_|_|\__'_|  |
  |__/                   |  x86_64-w64-mingw32

  julia> Profile.init()
  (100,0.001)

  julia>



Re: [julia-users] Julia 0.3.3 fails Base.runtests()

2014-12-03 Thread Elliot Saba
Hey Jeremy, I'm the guy you should be talking to.  Can you `brew remove
openblas-julia` and then `brew install openblas-julia` and try again?
-E

On Wed, Dec 3, 2014 at 6:42 AM, Tim Holy tim.h...@gmail.com wrote:

 Hi Jeremy,

 Thanks for reporting the problem. Julia's bug reports can be filed here:
 https://github.com/JuliaLang/julia/issues

 I've added an issue suggesting that the URL be specified in that message,
 but
 it would be best to open a separate issue for your test failure.

 Best,
 --Tim

 On Wednesday, December 03, 2014 02:07:40 AM Jeremy Cavanagh wrote:
  Hi,
  I've just run brew update && brew upgrade on my MacBook Air with OS X
  10.9.5, which resulted in julia 0.3.3 being installed. Running the
  recommended test brew test -v julia results in SUCCESS for the core,
  but as I have had many problems trying to get julia installed on linux
 (now
  successfully) I like to run the full test suite. This time I got the
  following output:
 
  $ julia -e 'Base.runtests()'
 
   From worker 2: * linalg1
   From worker 3: * linalg2
   From worker 2: * linalg3
   From worker 2: * linalg4
   From worker 2: * core
   From worker 2: * keywordargs
   From worker 2: * numbers
   From worker 3: * strings
   From worker 3: * collections
   From worker 3: * hashing
   From worker 2: * remote
   From worker 2: * iobuffer
   From worker 2: * arrayops
   From worker 3: * reduce
   From worker 3: * reducedim
   From worker 3: * simdloop
   From worker 3: * blas
   From worker 3: * fft
   From worker 2: * dsp
   From worker 3: * sparse
   From worker 2: * bitarray
  
   exception on 3: ERROR: test error during maximum(abs(a \ b - full(a) \
 b)) < 1000 * eps()
   error compiling factorize: error compiling cholfact: error compiling
 cmn:
   could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library
 not
   loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
  
 Referenced from:
   /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
  
 Reason: image not found
  
in \ at linalg/generic.jl:233
in anonymous at test.jl:62
in do_test at test.jl:37
in anonymous at no file:80
in runtests at
  
   /usr/local/Cellar/julia/0.3.3/share/julia/test/testdefs.jl:5
  
in anonymous at multi.jl:855
in run_work_thunk at multi.jl:621
in anonymous at task.jl:855
  
   while loading sparse.jl, in expression starting on line 75
   ERROR: test error during maximum(abs(a \ b - full(a) \ b)) < 1000 *
 eps()
   error compiling factorize: error compiling cholfact: error compiling
 cmn:
   could not load module libcholmod: dlopen(libcholmod.dylib, 1): Library
 not
   loaded: /usr/local/opt/openblas-julia/lib/libopenblasp-r0.2.12.dylib
  
 Referenced from:
   /usr/local/Cellar/julia/0.3.3/lib/julia//libcholmod.dylib
  
 Reason: image not found
  
   while loading sparse.jl, in expression starting on line 75
   while loading
 /usr/local/Cellar/julia/0.3.3/share/julia/test/runtests.jl,
   in expression starting on line 39
  
   ERROR: A test has failed. Please submit a bug report including error
   messages
   above and the output of versioninfo():
   Julia Version 0.3.3
   Commit b24213b* (2014-11-23 20:19 UTC)
  
   Platform Info:
 System: Darwin (x86_64-apple-darwin13.3.0)
 CPU: Intel(R) Core(TM) i7-2677M CPU @ 1.80GHz
 WORD_SIZE: 64
 BLAS: libopenblas (NO_AFFINITY NEHALEM)
 LAPACK: libopenblas
 LIBM: libopenlibm
 LLVM: libLLVM-3.3
  
in error at error.jl:21
in runtests at interactiveutil.jl:370
in runtests at interactiveutil.jl:359
in process_options at
 /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib
in _start at /usr/local/Cellar/julia/0.3.3/lib/julia/sys.dylib
 (repeats 2
  
   times)
 
  Although directed to submit a bug report there is no indication as to
 where
  this should go. I hope this is the correct place, if not I apologize.




[julia-users] pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread thr


Hi,

I try to use pyplot.jl, nothing works, I need help :)

In python, I can use matplotlib with both the gtk3 and wx backends. But 
Julia tells me

WARNING: No working GUI backend found for matplotlib.


when I try using PyPlot. 

Calls to pygui(:gtk) and pygui(:wx) tell me that I don't have the 
corresponding GUI toolkits installed, but I do.

If I understand correctly, even with no working pygui, I should be able to 
see figures, through display(gcf()), but that also doesn't work. The actual 
code I tried is:

using PyCall
#pygui(:gtk)
using PyPlot

x = linspace(0,2*pi,1000); y = sin(3*x + 4*cos(2*x));
plot(x, y, color="red", linewidth=2.0, linestyle="--")

display(gcf())




did I miss something?


[julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Mike Innes
#9243 https://github.com/JuliaLang/julia/pull/9243 should help with the 
repetition of format strings issue. It occurs to me now that you can always 
just write a function wrapper for `@sprintf` to solve that issue but this 
might still be useful.
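The wrapper idea is simple enough to show; this is my own sketch, not code from the thread (on current Julia, `@sprintf` lives in the Printf stdlib rather than Base):

```julia
using Printf  # on 0.3-era Julia, @sprintf was in Base

# @sprintf needs its format as a literal at macro-expansion time, so wrap
# each literal format once in an ordinary function and reuse that function
# wherever the same format is needed:
fmt2d(x) = @sprintf("%2d", x)

fmt2d(29)  # "29"
fmt2d(7)   # " 7"
```

This sidesteps the "first argument must be a format string" error for the reuse case, though it still cannot build a format computed at runtime.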

On Wednesday, 3 December 2014 01:40:56 UTC, Ronald L. Rivest wrote:

 I'm new to Julia, and got burned (aka wasted a fair amount of time)
 trying to sort out why @sprintf didn't work as I expected.  

 julia> @sprintf("%2d",29)
 "29"

 julia> fmt = "%2d"
 "%2d"

 julia> @sprintf(fmt,29)
 ERROR: @sprintf: first argument must be a format string

 julia> @sprintf("%"*"2d",29)
 ERROR: @sprintf: first argument must be a format string

 I would expect that @sprintf would allow an arbitrary string expression as 
 its
 format string.  It obviously doesn't...

 There are many good reasons why one might want a format string expression 
 instead
 of just a constant format string.  For example, the same format may be 
 needed in
 several places in the code, or you may want to compute the format string 
 based on
 certain item widths or other alignment needs.

 At a minimum, this should (please!) be noted in the documentation.  

 Better would be to have the extended functionality...

 (Or maybe it exists already -- have I missed something?)

 Thanks!

 Cheers,
 Ron Rivest



[julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread thr
Sorry, I forgot to mention the versions: Julia 0.3, python 3.3.5, 
matplotlib 1.3 on gentoo.


Re: [julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread Isaiah Norton
Might be a paths issue - check the contents of 'sys.path' when running
python directly vs. PyCall.

On Wed, Dec 3, 2014 at 2:08 PM, thr johannes.thr...@gmail.com wrote:

 Sorry, I forgot to mention the versions: Julia 0.3, python 3.3.5,
 matplotlib 1.3 on gentoo.



[julia-users] Re: Julia 0.3.3 doesn't start on Mac 10.9.5

2014-12-03 Thread david . w . watson
I don't think there is a problem with my .bashrc; but I use csh instead. I 
don't have a problem with 0.3.2, just 0.3.3. Your suggestion for running 
~/Desktop/Julia-0.3.3.app/Contents/Resources/julia/bin/julia 
works. That's good enough for now. Thanks.

On Tuesday, December 2, 2014 10:49:35 AM UTC-6, Viral Shah wrote:

 Do you have something in your .bashrc that may be interfering? We had some 
 issues with opening in the right window and such, but we fixed that in 
 0.3.3. Do you have other Terminal windows running when you do this, or is 
 this the behaviour when Terminal is not even running?

 A quick way around is to start the Terminal and assuming that 
 Julia-0.3.3.app is on the Desktop, run:

 ~/Desktop/Julia-0.3.3.app/Contents/Resources/julia/bin/julia

 -viral

 On Tuesday, December 2, 2014 10:07:54 PM UTC+5:30, david.w...@nasa.gov 
 wrote:

 Running Julia.app starts two terminal windows but Julia doesn't start.



Re: [julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread Steven G. Johnson
Yeah, probably you are running a different version of Python (or a 
different Python path) in Julia than you are when you run Python 
separately.   (You can use the PYTHON environment variable to specify the 
path of the python executable that you want Julia to use.)


[julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread Steven G. Johnson
On Wednesday, December 3, 2014 2:08:50 PM UTC-5, thr wrote:

 Sorry, I forgot to mention the versions: Julia 0.3, python 3.3.5, 
 matplotlib 1.3 on gentoo.


I'm guessing that you also have python2 installed, and Julia is picking up 
that instead because python==python2 in your PATH.  Try
PYTHON=python3 julia
assuming that your Python 3.3 executable is called python3 and is in the 
PATH.
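On current PyCall the interpreter choice is recorded when the package is built, so the in-Julia equivalent of the environment-variable trick is roughly the following sketch (the path is a placeholder of mine, not from the thread, and this is current-PyCall behavior, not the 0.3-era runtime lookup discussed here):

```julia
# Assumes current PyCall: point it at a specific interpreter and rebuild.
ENV["PYTHON"] = "/usr/bin/python3"  # placeholder path; use your own
using Pkg
Pkg.build("PyCall")                 # rebuild so PyCall records the choice
```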


[julia-users] Re: Julia as a matlab mex function (and the struggle for independence from cygwin)

2014-12-03 Thread Tracy Wadleigh
I did some more poking around with gdb, and was able to isolate one issue.

When run outside of cygwin, `uv_guess_handle` returns `UV_UNKNOWN_HANDLE` 
such that `init_stdio_handle` errors out. When run inside of cygwin, 
`uv_guess_handle` returns `UV_NAMED_PIPE` to no ill effect.

How can this be? I'm hoping an expert knows. I opened an issue 
https://github.com/JuliaLang/julia/issues/9244 for this.
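Since the libuv bundled with Julia exports the symbol, the same guess can be reproduced from a Julia prompt; a hedged sketch (the enum values are libuv's, with UV_UNKNOWN_HANDLE = 0):

```julia
# Ask libuv what kind of handle it thinks fd 0 (stdin) is. A pipe gives
# UV_NAMED_PIPE, a console gives UV_TTY, and 0 means UV_UNKNOWN_HANDLE,
# the failing case described above.
t = ccall(:uv_guess_handle, Cint, (Cint,), 0)
```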



[julia-users] Are there julia versions of dynamic time warping and peak finding in noisy data?

2014-12-03 Thread ggggg
Hello,

I'm interested in using dynamic time warping and an algorithm for peak 
finding in noisy data (like scipy.signal.find_peaks_cwt).  I'm just 
wondering if there are any Julia implementations around, otherwise I'll 
probably just use PyCall for now to use existing python code.
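Until a package turns up, the plain DTW distance is only a few lines of Julia. This is my own minimal sketch of the standard O(nm) dynamic program, not an existing package's API (and peak finding would still need PyCall or a separate implementation):

```julia
# Classic dynamic-time-warping distance between two numeric sequences.
# D[i+1, j+1] holds the best cost of aligning a[1:i] with b[1:j].
function dtw(a::AbstractVector, b::AbstractVector)
    n, m = length(a), length(b)
    D = fill(Inf, n + 1, m + 1)
    D[1, 1] = 0.0
    for i in 2:n+1, j in 2:m+1
        cost = abs(a[i-1] - b[j-1])
        # extend the cheapest of: skip in a, skip in b, or match both
        D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    end
    return D[n+1, m+1]
end

dtw([1, 2, 3], [1, 2, 2, 3])  # 0.0 -- the repeated 2 aligns for free
```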






[julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread elextr

As Stefan said above, the problem with traditional (s)printf functions is 
type safety.

On Thursday, December 4, 2014 4:53:17 AM UTC+10, Mike Innes wrote:

 #9243 https://github.com/JuliaLang/julia/pull/9243 should help with the 
 repetition of format strings issue. It occurs to me now that you can always 
 just write a function wrapper for `@sprintf` to solve that issue but this 
 might still be useful.

 [...]


Re: [julia-users] Re: what's the best way to do R table() in julia? (why does StatsBase.count(x,k) need k?)

2014-12-03 Thread David van Leeuwen
Hi, 

On Tuesday, December 2, 2014 6:50:33 PM UTC+1, Ivar Nesje wrote:

 It's not the obvious choice to me either, but it is in the docs 
 http://docs.julialang.org/en/latest/stdlib/base/#associative-collections, 
 and has been since I read it the first time 1.5 years ago.

 I don't think it says that Dict <: Associative, which is what I meant. 

---david



Re: [julia-users] Re: what's the best way to do R table() in julia? (why does StatsBase.count(x,k) need k?)

2014-12-03 Thread David van Leeuwen
Hello Milan, 

I just uploaded a new version of NamedArrays to github.  I've reimplemented 
the getindex() mess---the never ending chain of ambiguities was unworkable. 
 It is much cleaner now, I believe, and I've replaced the `negative index' 
implementation with a simple `Not()` type for performance reasons. So now 
you can say

n[Not(1),:]

to remove the first row, or equivalently use

n[Not("one"),:]

if you happened to have indexed with strings.  You should be able to use an 
associative index now with all kinds of types, e.g., rationals or perhaps 
even integers---I haven't tested that yet.  

The standard travis test includes a simple getindex performance benchmark, 
indexing with integers is slightly slower than for normal arrays, but 
indeed quite a bit slower for names.  
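For readers curious how such an inverted index can work, a toy version (my own sketch with a hypothetical single-index type, not NamedArrays' actual implementation) is just a wrapper plus a `getindex` method:

```julia
# Toy inverted index: Not(i) selects every index except i.
# (NamedArrays' real Not is more general; this is for illustration only.)
struct Not
    skip::Int
end

# Overloading getindex on Base's Vector like this would be type piracy in
# a real package, but it keeps the sketch short:
Base.getindex(v::Vector, n::Not) = v[setdiff(1:length(v), n.skip)]

[10, 20, 30][Not(2)]  # [10, 30]
```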

On Sunday, November 30, 2014 11:22:39 AM UTC+1, Milan Bouchet-Valat wrote:

 Le mercredi 26 novembre 2014 à 09:30 -0800, David van Leeuwen a écrit : 
  Hello again, 
  
  

 The idea is that any type could be used instead of a Dict, as long as it 
 can be indexed with a key and return the index. For small NamedArrays, 
 doing a linear search on an array is faster than using a Dict. And when 
 computing frequency tables from PooledDataArrays, we could reuse the 
 existing pool instead of creating a Dict from it, it would save some 
 memory. 


 Also, John suggested that the array that a NamedArray wraps could be of 
 any AbstractArray type, not just Array. Sounds like a good idea (e.g. to 
 wrap a sparse matrix). 

 I tried replacing Array by AbstractArray and Dict by Associative in the 
type definition, and all works fine, except that plain indexing becomes 
very slow again.  So I left that out for now.  The array and associative 
types themselves should probably become part of the type parameter list.  

---david

 


[julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread thr

Almost :) This gave me the right direction. 
In fact, I have two pythons, but I have to explicitly set PYTHON=python2.7 
if I want to see interactive graphs. 

Here is what I found out about the combinations of backends and python 
versions:

Backends gtk3agg, gtk3cairo, wx, wxagg just seemed to work in both pythons. 
It turned out that they don't work interactively (i.e. with 
matplotlib.interactive(true)) due to some bug with matplotlib on the python 
side.
Backend TkAgg works with both pythons, but TkAgg doesn't seem to work with 
PyPlot. This is what I set in my matplotlibrc, which seems to be ignored by 
PyPlot.
Backend gtkAgg works with Python2.7 and PyPlot, but NOT with python3.

I don't know about the qt backend. I suspect this is the only one working 
with python3 and Julia. 

On Wednesday, December 3, 2014 9:40:52 PM UTC+1, Steven G. Johnson wrote:

 On Wednesday, December 3, 2014 2:08:50 PM UTC-5, thr wrote:

 Sorry, I forgot to mention the versions: Julia 0.3, python 3.3.5, 
 matplotlib 1.3 on gentoo.


 I'm guessing that you also have python2 installed, and Julia is picking up 
 that instead because python==python2 in your PATH.  Try
 PYTHON=python3 julia
 assuming that your Python 3.3 executable is called python3 and is in the 
 PATH.



[julia-users] Re: Julia Mongo

2014-12-03 Thread Tobias Jone
Inside the Mongo.jl file, what is const MONGO_LIB defined as? If it's 
currently "libmongoc", try explicitly setting the suffix, e.g. const MONGO_LIB 
= "libmongoc-1.0".
Bit of a late response, but you never know.


[julia-users] Re: pyplot doesn't display anything, how do I choose the right backend

2014-12-03 Thread Steven G. Johnson


On Wednesday, December 3, 2014 7:28:05 PM UTC-5, thr wrote:

 Backend TkAgg works with both pythons, but TkAgg doesn't seem to work with 
 PyPlot. This is what I set in my matplotlibrc, which seems to be ignored by 
 PyPlot.
 Backend gtkAgg works with Python2.7 and PyPlot, but NOT with python3.


That's odd; TkAgg works on my machine, and PyPlot nowadays should honor 
your matplotlibrc if it can.   Can you try to do Pkg.update() to see if 
your packages are out of date?  If that fails, please file a PyPlot issue.

(The fact that gtk and wx don't work interactively even in Python probably 
means something is broken in your Python setup, which is a little worrying.)


Re: [julia-users] Re: Text editor for coding Julia: Atom vs Light Table vs Brackets

2014-12-03 Thread Todd Leo
Hey Mike,

I've been using Juno for about a month, and I love it! Coding julia 
interactively is awesome, exactly how I felt when I was using RStudio! Big 
thanks!

Yesterday I had to regretfully upgrade my project to julia v0.4, and no 
surprise, Juno didn't seem to be working (the little grids kept 
spinning and no evaluation result came out). This morning I updated Juno 
and the Julia plugin, and that solved the problem, bravo! 

But I still lost auto-completion; it might still be caused by conflicts with 
the Emacs plugin. I think I will live with it for now :-)

Cheers,
Todd Leo