[julia-users] Re: debugging Kernel died in IJulia
Note that when you update Julia you need to re-run `Pkg.build("IJulia")` to tell IPython about the new Julia location.
Re: [julia-users] How to test argument types available on a Function?
Since Julia doesn't (and perhaps couldn't) distinguish univariate and bivariate functions by type, I think it would probably be clearer and less confusing to have an additional argument for the number of inputs, or a separate function (which would be more type stable?).
Re: [julia-users] How to test argument types available on a Function?
Hmm, this is surprising; how can you tell the difference between anonymous and named functions?

g = x -> x^2
foo(x,y) = 1
typeof(g) == typeof(foo) # true

On 29 Mar 2015, at 7:50 am, Sheehan Olver dlfivefi...@gmail.com wrote: Great, thanks! It looks like applicable doesn’t work on anonymous functions, which seems like a bug. I guess I’ll file a ticket.

On 29 Mar 2015, at 7:47 am, Miles Lubin miles.lu...@gmail.com wrote: Take a look at applicable().

On Saturday, March 28, 2015 at 4:30:09 PM UTC-4, Sheehan Olver wrote: It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro maur...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
[julia-users] Advice on vectorizing functions
I am trying Julia after years with MATLAB and Python/NumPy. When I want to scale by a power of 2, I use pow2() in MATLAB and np.ldexp() in Python. This maximizes speed and accuracy because it just adds the second argument to the floating-point exponent of the first argument (and leaves the mantissa alone). What surprised me about Julia v0.3.5 (advertised as being speedy) is that there are no methods for ldexp() supporting vectors or arrays. Using map(ldexp, ...) is awkward when I want to iterate over the first argument of ldexp() and not the second. In general, how do I know which functions support vectors/arrays and which only support scalars? If I create custom vectorized functions, then I lose forward compatibility with the language when (faster) vectorized versions become available in the future. I really want to use the same syntax for scalars and vectors and then create vector-optimized methods when speed is important. I see that this is a highly controversial subject, not planned to be resolved until Julia 0.5: https://github.com/JuliaLang/julia/issues/8450 Since I ran into problems adding new methods to base functions (with an implied `using`), I am considering creating custom vectorized functions for the types I need in each module (adding a v prefix):

`vldexp(x::Array{Float64,1}, e::Int) = [ldexp(x[i], e) for i=1:length(x)]`
`vldexp(x::Array{Float64,2}, e::Int) = [ldexp(x[i,j], e) for i=1:size(x,1), j=1:size(x,2)]`

One feature of these methods is user-friendly error messages if the parent code calls vldexp() with types not supported by ldexp(). Another is the speed gained by using concrete instead of abstract types. I assume that Julia will evolve to support the same syntax for scalars and vectors (like MATLAB and Python). Is there a way that I can add new vectorized methods with the same name as the base (scalar) methods?
What are other community members doing to write fast, vectorized code in Julia v0.3 and deal with readability and minimizing code rewrite for future Julia versions?
[julia-users] Re: debugging Kernel died in IJulia
Are you really sure you are using the new Julia plus updated packages? The new chain is really much, much more stable.
[julia-users] Re: Advice on vectorizing functions
In general, you need to unlearn the intuition from Matlab/Python that vectorized/built-in functions are fast, and functions or loops you write yourself are slow. There's not the same drive to vectorize everything in Julia because not only are your own loops fast, but writing your own loops (or using something equivalent like a comprehension) is often much faster than vectorized code — especially in the common case where you are doing several vectorized operations in sequence. For example, see problem 3(c) from the following notebook (solutions to a recent homework assignment in my class): http://nbviewer.ipython.org/url/math.mit.edu/~stevenj/18.335/pset3sol-s15.ipynb showing a 15x speedup from writing your own loops to compute A+3B+4A.^2 vs. the vectorized expression. Vectorization is certainly extremely convenient for linear algebra (and for matrix-matrix operations can still lead to big speedups over your own code) and basic arithmetic operations on arrays. But there isn't the same drive to vectorize *everything* in a language like Julia.
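The comparison in that notebook can be sketched in a few lines (the function names here are mine, and this sketch only shows the two styles, not the notebook's timings):

```julia
# Vectorized version: each sub-expression (3B, A.^2, the sums) allocates
# a temporary array before the next operation runs.
vectorized(A, B) = A + 3B + 4A.^2

# Hand-written loop: a single pass over the data, one output array,
# no temporaries.
function looped(A, B)
    C = similar(A)
    for i = 1:length(A)
        C[i] = A[i] + 3B[i] + 4A[i]^2
    end
    return C
end

A = rand(1000); B = rand(1000);
result = looped(A, B)
```

Both produce the same answer; the loop simply fuses the whole expression into one traversal.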
Re: [julia-users] Gaussian Filter
Doh! I was assuming it was a radial blur and used the scalar like you suggested. Thank you.
Re: [julia-users] Extending a DataFrame (or, why aren't my imports working?)
On Fri, 2015-03-27 at 19:46, kevin.dale.sm...@gmail.com wrote: Ok, I narrowed it down to a very small test case. The mymodule.jl file is at the bottom of this posting. If you save that to a file and then run this code, you'll get the same effect as my original problem, except with the `index` method.

using mymodule
mydf = MyDataFrame(Any[], Index())
# This line complains that `index` has no method matching index(::MyDataFrame)

This line is not valid syntax:

julia> # This line complains that `index` has no method matching index(::MyDataFrame)
ERROR: syntax: invalid :: syntax

So, correcting this, everything seems to work, right?

display(mydf)
# This displays my `index` method
methods(index)
# This shows that my `index` method works
index(mydf)

=== mymodule.jl ===
module mymodule

import DataFrames: AbstractDataFrame, DataFrame, Index, nrow, ncol
import DataArrays: DataArray
export MyDataFrame, nrow, ncol, Index, index, columns

type MyDataFrame <: AbstractDataFrame
    columns::Vector{Any}
    colindex::Index
    function MyDataFrame(columns::Vector{Any}, colindex::Index)
        ncols = length(columns)
        if ncols > 1
            nrows = length(columns[1])
            equallengths = true
            for i in 2:ncols
                equallengths &= length(columns[i]) == nrows
            end
            if !equallengths
                msg = "All columns in a DataFrame must be the same length"
                throw(ArgumentError(msg))
            end
        end
        if length(colindex) != ncols
            msg = "Columns and column index must be the same length"
            throw(ArgumentError(msg))
        end
        new(columns, colindex)
    end
end

index(df::MyDataFrame) = df.colindex
columns(df::MyDataFrame) = df.columns
nrow(df::MyDataFrame) = ncol(df) > 0 ? length(df.columns[1])::Int : 0
ncol(df::MyDataFrame) = length(index(df))

end
===
Re: [julia-users] How to test argument types available on a Function?
Great, thanks! It looks like applicable doesn’t work on anonymous functions, which seems like a bug. I guess I’ll file a ticket.

On 29 Mar 2015, at 7:47 am, Miles Lubin miles.lu...@gmail.com wrote: Take a look at applicable().

On Saturday, March 28, 2015 at 4:30:09 PM UTC-4, Sheehan Olver wrote: It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro maur...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
Re: [julia-users] How to test argument types available on a Function?
f(0)/f(0,0) are really shorthands for

Fun(f,Interval())     # univariate
Fun(f,Interval()^2)   # bivariate

so that exists in practice. (For non-REPL use, one should probably always specify the domain.)

On 29 Mar 2015, at 8:20 am, Toivo Henningsson toivo@gmail.com wrote: How about having an optional ndims argument at least, for non-REPL usage?

On Sat, Mar 28, 2015 at 9:30 PM, Sheehan Olver dlfivefi...@gmail.com wrote: It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro mauro...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
[julia-users] Re: Using spones
If you use tab-completion in an interactive session,

julia> spones(
spones{T}(S::SparseMatrixCSC{T,Ti<:Integer}) at sparse/sparsematrix.jl:401

you'll see that the spones() function in Base only accepts CSC matrices http://julia.readthedocs.org/en/latest/manual/arrays/#sparse-matrices so far. Here's the code to make your example work:

julia> A = SparseMatrixCSC(2,3,[1,3,4,6],[1,2,2,1,2],[1,2,1,3,3])
2x3 sparse matrix with 5 Int64 entries:
    [1, 1] = 1
    [2, 1] = 2
    [2, 2] = 1
    [1, 3] = 3
    [2, 3] = 3

julia> spones(A)
2x3 sparse matrix with 5 Int64 entries:
    [1, 1] = 1
    [2, 1] = 1
    [2, 2] = 1
    [1, 3] = 1
    [2, 3] = 1

On Saturday, 28 March 2015 20:10:07 UTC-4, Mladen Kolar wrote: Hi, I have a question about the spones() function, as I am not quite sure how to use it. I have encountered the following issue using Base.spones().

julia> A = [1 0 3.; 2. 1. 3.]
2x3 Array{Float64,2}:
 1.0  0.0  3.0
 2.0  1.0  3.0

julia> spones(A)
ERROR: `spones` has no method matching spones(::Array{Float64,2})

I would expect to get back a matrix that has ones everywhere except at [1, 2], as per the help:

help?> spones
INFO: Loading help data...
Base.spones(S)
Create a sparse matrix with the same structure as that of S, but with every nonzero element having the value 1.0.

I am new to Julia and not sure if I am missing something. Thanks, Mladen
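As an aside, if all that is wanted is the dense 0/1 pattern of a dense matrix (rather than the sparse result Base's spones produces), a one-line map does it; this is just a sketch, not part of Base:

```julia
A = [1 0 3.0; 2.0 1.0 3.0]
# 1.0 wherever A is nonzero, 0.0 elsewhere, same shape as A
pattern = map(x -> x == 0 ? 0.0 : 1.0, A)
# pattern == [1.0 0.0 1.0; 1.0 1.0 1.0]
```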
Re: [julia-users] How to test argument types available on a Function?
It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro mauro...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
Re: [julia-users] How to test argument types available on a Function?
Ok, good.
Re: [julia-users] How to test argument types available on a Function?
In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
[julia-users] Re: Advice on vectorizing functions
I found a way to add new vectorized methods with the same name as the base (scalar) methods:

`import Base.Math.ldexp`
`ldexp(x::Array{Float64,1}, e::Int) = [ldexp(x[i], e) for i=1:length(x)]`
`ldexp(x::Array{Float64,2}, e::Int) = [ldexp(x[i,j], e) for i=1:size(x,1), j=1:size(x,2)]`

@Steven G. Johnson: your point is well taken. In Python, I have already seen a big speedup from this library compared to vectorized NumPy: https://github.com/pydata/numexpr I have also seen a similar concept in Julia: https://github.com/lindahua/Devectorize.jl I will consider all options when trying to speed up the critical parts of my code. When vectorized functions prove optimal, I could still use advice about readability and minimizing code rewrite for future Julia versions.
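To spell that out as one self-contained snippet (the same definitions gathered together, with a small usage check):

```julia
# The import is what lets the new definitions extend Base's generic
# function instead of shadowing it with a new one.
import Base.Math.ldexp

ldexp(x::Array{Float64,1}, e::Int) = [ldexp(x[i], e) for i=1:length(x)]
ldexp(x::Array{Float64,2}, e::Int) = [ldexp(x[i,j], e) for i=1:size(x,1), j=1:size(x,2)]

y = ldexp([1.0, 2.0, 3.0], 3)   # each element scaled by 2^3
```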
Re: [julia-users] Gaussian Filter
The one in Images works just fine for plain Arrays (or any AbstractArray). --Tim On Saturday, March 28, 2015 04:14:31 PM DumpsterDoofus wrote: I have a 2D Float64 array, and I would like to Gaussian filter it. I'm familiar with the Mathematica equivalent, which is `GaussianFilter[array, sigma]`. Is there an equivalent in Julia? I know there is a fast Gaussian filter implemented in the Images package, but it (as best I can tell) only works for images, not numerical arrays. In the Standard Library, there is conv2, but it uses FFT methods, which are slow and unnecessary.
Re: [julia-users] How to test argument types available on a Function?
Take a look at applicable().

On Saturday, March 28, 2015 at 4:30:09 PM UTC-4, Sheehan Olver wrote: It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro maur...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
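For reference, applicable() checks whether a function has a method matching the given arguments without actually calling it:

```julia
f(x) = x
f(x, y) = x + y

applicable(f, 0)        # true: the one-argument method matches
applicable(f, 0, 0)     # true: the two-argument method matches
applicable(f, 0, 0, 0)  # false: no three-argument method exists
```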
Re: [julia-users] Extending a DataFrame (or, why aren't my imports working?)
That should be index(mydf). I did get the small test case working, but I still can't seem to use the same techniques to get my application working. I just don't understand how these method overrides are supposed to work. I originally thought that you just needed to have methods with the same name and Julia would simply look at the name and the argument types to determine the correct method to use. But there is apparently more to it, since a previous suggestion was to do something like:

DataFrames.nrow(df::MyDataFrame) = ncol(df) > 0 ? length(df.columns[1])::Int : 0

I don't see why DataFrames should be involved at all. I'm using AbstractDataFrames as a super-type, but why would the DataFrames type have to know about MyDataFrame? They are peers, so I don't see why DataFrames would be special. Actually, I'm kind of surprised that DataFrames' nrow is even implemented on DataFrames and not AbstractDataFrames. I would think that most of the methods in dataframes.jl should be done on the AbstractDataFrame so that anyone creating a subtype like I'm trying to do wouldn't have to reimplement them all. But that's another issue altogether.
Re: [julia-users] How to test argument types available on a Function?
See if `isgeneric` helps (and you can check its implementation to see how it works). Best, --Tim

On Sunday, March 29, 2015 09:06:05 AM Sheehan Olver wrote: Hmm, this is surprising; how can you tell the difference between anonymous and named functions?

g = x -> x^2
foo(x,y) = 1
typeof(g) == typeof(foo) # true

On 29 Mar 2015, at 7:50 am, Sheehan Olver dlfivefi...@gmail.com wrote: Great, thanks! It looks like applicable doesn’t work on anonymous functions, which seems like a bug. I guess I’ll file a ticket.

On 29 Mar 2015, at 7:47 am, Miles Lubin miles.lu...@gmail.com wrote: Take a look at applicable().

On Saturday, March 28, 2015 at 4:30:09 PM UTC-4, Sheehan Olver wrote: It currently works like this, which does work with f(x)=x, f(x,y) = x+y:

try f(0) catch try f(0,0) catch f((0,0)) end end

This code is meant for REPL usage primarily, and so the convenience of typing just Fun(f) is worth having such “questionable” code.

On 29 Mar 2015, at 6:29 am, Mauro maur...@runbox.com wrote: In ApproxFun, a user supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. How does it work if the user defines several methods?

f(x) = x
f(x,y) = x+y

The method_exists function would probably be a slightly cleaner way to do this. method_exists should be good for generic functions, but it does not work with anonymous functions. I think this gives you the number of arguments:

length(Base.uncompressed_ast(( (x,y,z) -> 1 ).code.def).args[1])

Something similar should let you figure out whether a one-argument signature is a tuple. Not sure though that this is the preferred approach.
[julia-users] Using spones
Hi, I have a question about the spones() function, as I am not quite sure how to use it. I have encountered the following issue using Base.spones().

julia> A = [1 0 3.; 2. 1. 3.]
2x3 Array{Float64,2}:
 1.0  0.0  3.0
 2.0  1.0  3.0

julia> spones(A)
ERROR: `spones` has no method matching spones(::Array{Float64,2})

I would expect to get back a matrix that has ones everywhere except at [1, 2], as per the help:

help?> spones
INFO: Loading help data...
Base.spones(S)
Create a sparse matrix with the same structure as that of S, but with every nonzero element having the value 1.0.

I am new to Julia and not sure if I am missing something. Thanks, Mladen
Re: [julia-users] Extending a DataFrame (or, why aren't my imports working?)
On Saturday, March 28, 2015 at 4:19:44 PM UTC-5, Mauro wrote: Now, generic functions carry around with them the module in which they were first defined. To extend such a function with another method in another module you either have to import it or fully qualify it (DataFrames.nrow). If you don't do that then you create a new generic function with the same name as the other but not sharing any methods. Also, this function will shadow the other one in the current module. Aha. This is the piece of information I was looking for. It seems a bit odd, but it does clear up some things. I'll have to play around with my implementation and this new information to see if it helps.
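Mauro's point about extending versus shadowing can be shown in miniature (the module and function names here are invented for the example):

```julia
module Frames
nrow(v::Vector) = length(v)   # Frames owns the generic function nrow
end

module Ext
import ..Frames: nrow          # the import makes the next line EXTEND
nrow(m::Matrix) = size(m, 1)   # Frames.nrow with a new method...
end

# ...so both methods now belong to one generic function:
Frames.nrow([1, 2, 3])     # dispatches to the Vector method
Frames.nrow([1 2; 3 4])    # dispatches to the Matrix method
```

Without the import, `nrow(m::Matrix) = ...` inside Ext would have created a brand-new function Ext.nrow that shares no methods with Frames.nrow and shadows it inside Ext.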
Re: [julia-users] Extending a DataFrame (or, why aren't my imports working?)
That should be index(mydf). I did get the small test case working, but I still can't seem to use the same techniques to get my application working. I just don't understand how these method overrides are supposed to work. I originally thought that you just needed to have methods with the same name and Julia would simply look at the name and the argument types to determine the correct method to use. But there is apparently more to it, since a previous suggestion was to do something like:

DataFrames.nrow(df::MyDataFrame) = ncol(df) > 0 ? length(df.columns[1])::Int : 0

I don't see why DataFrames should be involved at all. I'm using AbstractDataFrames as a super-type, but why would the DataFrames type have to know about MyDataFrame? They are peers, so I don't see why DataFrames would be special.

Note, DataFrames is not a type but the module. The type is DataFrame. It is customary (when applicable) to name the module (aka package) in the plural and the type in the singular.

Now, generic functions carry around with them the module in which they were first defined. To extend such a function with another method in another module you either have to import it or fully qualify it (DataFrames.nrow). If you don't do that, then you create a new generic function with the same name as the other, but not sharing any methods. Also, this function will shadow the other one in the current module. However, as far as I understand, “using DataFrames” + “DataFrames.nrow(...) = ...” and “import DataFrames: nrow” + “nrow(...) = ...” should be equivalent. If that is indeed not the case, then it sounds like a bug to me. Do you have a self-contained test case?

Actually, I'm kind of surprised that DataFrames' nrow is even implemented on DataFrames and not AbstractDataFrames. I would think that most of the methods in dataframes.jl should be done on the AbstractDataFrame so that anyone creating a subtype like I'm trying to do wouldn't have to reimplement them all. But that's another issue altogether.
If nrow is dependent on the implementation details of DataFrame then that is the only way, otherwise it probably should be defined on AbstractDataFrame.
Re: [julia-users] Gaussian Filter
More specifically:

julia> import Images
julia> x = rand(200,200);
julia> y = Images.imfilter_gaussian(x, [5.0, 5.0]);

It might be nice to have a version which takes a scalar for sigma and assumes the same size filter in both/all directions. Cheers, Kevin On Sat, Mar 28, 2015 at 4:54 PM, Tim Holy tim.h...@gmail.com wrote: The one in Images works just fine for plain Arrays (or any AbstractArray). --Tim On Saturday, March 28, 2015 04:14:31 PM DumpsterDoofus wrote: I have a 2D Float64 array, and I would like to Gaussian filter it. I'm familiar with the Mathematica equivalent, which is `GaussianFilter[array, sigma]`. Is there an equivalent in Julia? I know there is a fast Gaussian filter implemented in the Images package, but it (as best I can tell) only works for images, not numerical arrays. In the Standard Library, there is conv2, but it uses FFT methods, which are slow and unnecessary.
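For what it's worth, the separable filtering that an isotropic (scalar-sigma) Gaussian blur performs can be sketched without any packages. This is only a sketch of the idea, not what Images implements: the kernel is truncated at 3 sigma and the borders are zero-padded.

```julia
# Normalized 1D Gaussian kernel, truncated at 3*sigma.
function gaussian_kernel(sigma::Real)
    r = ceil(Int, 3sigma)
    k = [exp(-(t^2) / (2sigma^2)) for t in -r:r]
    return k / sum(k)              # weights sum to 1
end

# Separable blur: filter along rows, then along columns.
function gaussian_blur(A::AbstractMatrix, sigma::Real)
    k = gaussian_kernel(sigma)
    r = div(length(k) - 1, 2)
    B = zeros(Float64, size(A))    # row-direction pass
    for j in 1:size(A,2), i in 1:size(A,1)
        s = 0.0
        for t in -r:r
            ii = i + t
            if 1 <= ii <= size(A,1)
                s += A[ii,j] * k[t+r+1]
            end
        end
        B[i,j] = s
    end
    C = zeros(Float64, size(A))    # column-direction pass
    for j in 1:size(A,2), i in 1:size(A,1)
        s = 0.0
        for t in -r:r
            jj = j + t
            if 1 <= jj <= size(A,2)
                s += B[i,jj] * k[t+r+1]
            end
        end
        C[i,j] = s
    end
    return C
end
```

Because the kernel is normalized, blurring a constant array leaves interior values (those at least 3 sigma from an edge) unchanged.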
Re: [julia-users] Is there a function to flatten a multidimensional array?
None of the above are working for me.

s = "string"
arr = [s, s, s, s]
transform = (x -> [x, x*"d"])
chunky = map(transform, arr)
# chunky is now 4 by 2
vec(chunky)            # no change
chunky[:]              # no change
reshape(chunky, 1, 8)  # fails with dimension mismatch

I'm guessing something has changed in Julia since the last comment here, but I can't seem to find an updated answer anywhere. On Saturday, 7 December 2013 00:20:30 UTC+11, Tim Holy wrote: flat = A[:] On Thursday, December 05, 2013 04:24:57 PM Johan Sigfrids wrote: Does julia have a function to flatten a multidimensional array into a single dimensional one? I.e. something like

flatten([1 4; 2 5; 3 6])
6-element Array{Int64,1}:
 1
 2
 3
 4
 5
 6
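The sticking point in the case above is that map returns a Vector whose elements are themselves arrays, so vec/[:] only reshape the outer container. Concatenating the elements flattens it (a sketch with dummy data):

```julia
chunky = [[1, 2], [3, 4], [5, 6]]   # a 3-element Vector of Vectors
v = vec(chunky)                     # still 3 elements: only the outer array is touched
flat = vcat(chunky...)              # splat and concatenate: [1, 2, 3, 4, 5, 6]
```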
Re: [julia-users] Gaussian Filter
Good point. Since I almost always work in 3d, where you usually don't have isotropic sampling, I guess I didn't even notice :-). --Tim On Saturday, March 28, 2015 04:58:00 PM Kevin Squire wrote: More specifically:

julia> import Images
julia> x = rand(200,200);
julia> y = Images.imfilter_gaussian(x, [5.0, 5.0]);

It might be nice to have a version which takes a scalar for sigma and assumes the same size filter in both/all directions. Cheers, Kevin On Sat, Mar 28, 2015 at 4:54 PM, Tim Holy tim.h...@gmail.com wrote: The one in Images works just fine for plain Arrays (or any AbstractArray). --Tim On Saturday, March 28, 2015 04:14:31 PM DumpsterDoofus wrote: I have a 2D Float64 array, and I would like to Gaussian filter it. I'm familiar with the Mathematica equivalent, which is `GaussianFilter[array, sigma]`. Is there an equivalent in Julia? I know there is a fast Gaussian filter implemented in the Images package, but it (as best I can tell) only works for images, not numerical arrays. In the Standard Library, there is conv2, but it uses FFT methods, which are slow and unnecessary.
[julia-users] converting array of Int64 to array of UTF8String doesn't quite work
I've written the following code:

import Base.convert
function convert(::Type{UTF8String}, x::Int64)
    return utf8(string(x))
end
println(convert(UTF8String, 10))
println(convert(Array{UTF8String, 1}, [10]))

The intent is to convert an Array of Int64 into an Array of UTF8String. The first println works correctly and converts 10 into "10". The second println should print ["10"], but instead gives me the following error:

type: arrayset: expected UTF8String, got ASCIIString
while loading In[41], in expression starting on line 6
in copy! at abstractarray.jl:149
in convert at array.jl:220

I'm using Julia 0.3.6. Any idea on what I'm doing wrong? Thanks, Philip
[julia-users] Re: Understanding the Cost of Virtualization
This is the same issue that prevents a fast symbolic system using Julia's dynamic dispatch: https://github.com/jcrist/Juniper.jl/issues/1 . And I guess it is probably the reason Julia's own AST representation does not use its own dynamic dispatch system... +1 for trying a faster run-time dispatching system, at least when the possible dispatch paths can be reduced to a small finite set of types... On Saturday, March 28, 2015 at 1:19:53 AM UTC, danie...@gmail.com wrote: One of the patterns I use a lot at work is when there are a number of options for a certain model, and we want to be able to just select one to use. We also want to be able to change which model is being used part way through the overall execution of the code. In C++, we have an abstract base class, and then each model fills in the virtual methods for its particular algorithm. Some manager class keeps a pointer to the base class, and we can change that pointer to change the model. I tried doing this same thing in Julia. I have a few versions of the code; they are all in a gist: https://gist.github.com/danielmatz/1a64cded91f996d40b99. The version slow.jl is the approach I would use in C++. Since I knew that using abstract types directly was a performance hit, I made a second version, fast.jl, that instead uses an integer flag to indicate which model to use. This second version is 10x faster and uses less memory. I'm curious about why this performance difference is so pronounced. What exactly is going on that makes virtualization expensive? The Julia dispatch code must be doing something more than how I conceptualize it (i.e., as a set of if statements). I'm just curious what that is. To try to investigate this, I made yet another version, other.jl (yeah, stupid names, sorry), that is like slow.jl but instead imitates dispatch with isa. Even this is faster, by about 2x! I'd love to learn more about what is going on in these three cases.
I'm also curious if this is something that is expected to get faster in the future? Thanks for the help. And thanks for being patient with simple minded engineers like myself… Hopefully I didn't screw up the terminology too badly. Daniel
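The two styles being compared look roughly like this (type and function names invented; modern struct syntax shown, where the 0.3-era code would use `abstract`/`type`):

```julia
abstract type Model end
struct ModelA <: Model end
struct ModelB <: Model end

# Dynamic dispatch on the abstract type: when the model is stored in a
# field typed as Model, each call must look up the concrete type at run time.
evaluate(::ModelA, x) = 2x
evaluate(::ModelB, x) = x + 1

# The "integer flag" workaround: a plain branch the compiler can see through.
evaluate_flag(flag::Int, x) = flag == 1 ? 2x : x + 1
```

Switching models then means swapping the stored Model value (or the flag), and the two versions compute the same results; the difference is purely in how the target method is selected.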
[julia-users] Why is Gadfly so slow when plotting it's first plot?
Hi, I use Gadfly to create simple barplots and save them as SVG. Since this is for use in a web page, I've only installed Gadfly, no extra backends. Now, the first plot is incredibly slow, but subsequent plots are much faster. Why is that? Is there anything that can be done to speed things up?
Re: [julia-users] Why is Gadfly so slow when plotting it's first plot?
This delay is due to parsing and JIT'ing a bunch of code in both Gadfly and its dependencies. There is a work-in-progress caching process you could try. See: https://github.com/dcjones/Gadfly.jl/issues/251#issuecomment-38626716 and: http://docs.julialang.org/en/latest/devdocs/sysimg/ The PR to make that simpler and module-specific is here: https://github.com/JuliaLang/julia/pull/8745 On Sat, Mar 28, 2015 at 9:00 AM, Steven Sagaert steven.saga...@gmail.com wrote: Hi, I use Gadfly to create simple barplots and save them as SVG. Since this is for use in a web page, I've only installed Gadfly, no extra backends. Now, the first plot is incredibly slow, but subsequent plots are much faster. Why is that? Is there anything that can be done to speed things up?
[julia-users] ANN/RFC: Termbox.jl
Termbox.jl (https://github.com/jgoldfar/Termbox.jl) is a wrapper for Termbox, a lightweight text-based user interface library. OSX and Linux are currently supported, and the low-level interface is complete enough to re-implement the demo from the original package (see test/outputexample.jl). I wrote this up yesterday for use in another project I'm working on (reporting results of a long-running computation in a nicer way), but I'd be interested in feedback, ideas for a higher-level interface, how to effectively unit-test the package, etc. Regards, Max Goldfarb
Re: [julia-users] SubArray memory footprint
Random thought: If tuples can avoid boxing, why not return tuples from the iterator protocol? On Friday, March 27, 2015, Matt Bauman mbau...@gmail.com wrote: On Friday, March 27, 2015 at 8:21:10 AM UTC-4, Sebastian Good wrote: Forgive my ignorance, but what is Cartesian indexing? There are two ways to iterate over all elements of an array: linear indexing and Cartesian indexing. For example, given a 2x3 matrix, linear indexing would use just one index from 1:6, whereas Cartesian indexing specifies indices for both dimensions: (1,1), (1,2), (2,1), ... If an array isn't stored contiguously in memory for linear indexing, converting to the Cartesian indices is very expensive (because it requires integer division, which is surprisingly slow). The new `eachindex` method in 0.4 returns an iterator to go over all the Cartesian indices very quickly. -- *Sebastian Good*
Re: [julia-users] SubArray memory footprint
Right now getindex(::AbstractArray, ::Tuple) isn't defined (though there's a proposal to define it as one of several options to solve a tricky problem, see https://github.com/JuliaLang/julia/pull/10525#issuecomment-84597488). More importantly, tuples don't have +, -, min, and max defined for them, and it's not obvious they should. But all that is supported by CartesianIndex. --Tim On Saturday, March 28, 2015 07:10:50 AM Sebastian Good wrote: Random thought: If tuples can avoid boxing, why not return tuples from the iterator protocol? On Friday, March 27, 2015, Matt Bauman mbau...@gmail.com wrote: On Friday, March 27, 2015 at 8:21:10 AM UTC-4, Sebastian Good wrote: Forgive my ignorance, but what is Cartesian indexing? There are two ways to iterate over all elements of an array: linear indexing and Cartesian indexing. For example, given a 2x3 matrix, linear indexing would use just one index from 1:6, whereas Cartesian indexing specifies indices for both dimensions: (1,1), (1,2), (2,1), ... If an array isn't stored contiguously in memory for linear indexing, converting to the Cartesian indices is very expensive (because it requires integer division, which is surprisingly slow). The new `eachindex` method in 0.4 returns an iterator to go over all the Cartesian indices very quickly.
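A small sketch of eachindex in action (written with the later `view` spelling; the 0.4-era equivalent of the view call is `sub`):

```julia
# Sum every element of any AbstractArray; eachindex picks the fastest
# index type for the array's layout.
function total(A)
    s = 0.0
    for I in eachindex(A)   # yields CartesianIndex values for non-contiguous arrays
        s += A[I]           # so no linear-to-Cartesian conversion is needed per element
    end
    return s
end

B = view(rand(4, 4), 1:2:3, :)   # a non-contiguous SubArray
t = total(B)
```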
[julia-users] Re: default value for slider in Interact.js
Just use e.g. slider(1:10, value=2) On Friday, March 27, 2015 at 12:31:45 PM UTC-4, Andrei Berceanu wrote: Is there any way of setting a default value for a slider object, different from the middle of the slider interval (which is automatically chosen)?
[julia-users] ANN: eval in local scope using Debug.jl
A stackoverflow post made me realize that pretty much all the functionality needed to support eval in a local scope already exists in the Debug package, and that it might be useful for applications other than pure debugging, such as logging or just trying to figure out how one's code works. I just released an update to the package which supports this. The key new ingredient is the @localscope macro, which returns an object representing the local scope (and must be wrapped in a @debug or @debug_analyze macro to work). Example:

@debug_analyze function f(x)
    y = x + 5
    @localscope
end

scope = f(2)
@show scope[:x] scope[:y]        # prints scope[:x] = 2 and scope[:y] = 7
scope[:y] = 3
@show debug_eval(scope, :(x*y))  # prints debug_eval(scope,:(x * y)) = 6

The new @debug_analyze macro is introduced to instrument as little as possible (no single stepping), and only in the scopes that can be seen by a @localscope invocation, leaving the rest of the code alone. Some more details can be found in this section https://github.com/toivoh/Debug.jl#evaluation-of-code-in-local-scope of the Debug package readme. Thoughts? Is there more functionality in this direction that you would like to see? Or less?
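For readers who want the flavor of this without Debug.jl's instrumentation: a hand-rolled analogue captures locals into a Dict explicitly. This Dict-based "scope" is my own illustration of the idea, not Debug.jl's actual representation, and unlike @localscope it requires you to list the locals yourself:

```julia
# Hand-rolled analogue of a local-scope snapshot: capture locals in a Dict.
function f(x)
    y = x + 5
    return Dict{Symbol,Any}(:x => x, :y => y)   # explicit "scope" object
end

scope = f(2)
scope[:x]              # 2
scope[:y]              # 7
scope[:y] = 3
scope[:x] * scope[:y]  # 6, analogous to debug_eval(scope, :(x*y)) above
```

The point of Debug.jl's version is precisely that you don't have to enumerate and thread the locals through by hand; the instrumentation does it for you.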
[julia-users] Re: encode 4 x 1 byte into a one 4 bytes variable
I have a similar problem and I was wondering how to solve it best. Look at RGB24, which is probably pretty much the type you want: SimonDanisch/ColorTypes.jl/src/types.jl#L218. I copied that from Color.jl, and was wondering why it isn't defined as:

immutable RGB24
    r::Uint8
    g::Uint8
    b::Uint8
    a::Uint8
end

which could be reinterpreted as Uint32 if needed for a ccall, but inside of Julia would be a lot easier to handle. From your example it looks like you have a matrix of shape (x, 4), which is a little annoying, as it means you can't just reinterpret the array: you have to juggle the dimensions around to get the color dimension into the first position. Specifying the shape is pretty easy; in your case it'd be reinterpret(Uint32, Uint8[1 2 3 255], (1,)), as the resulting array should have size (1,). On Saturday, March 28, 2015 02:04:35 UTC+1, J Luis wrote: Hi, how can I encode 4 one-byte variables, let's say

julia> UInt8[1 2 5 255]
1x4 Array{UInt8,2}:
0x01 0x02 0x05 0xff

into a single variable 4 bytes long? Thanks.
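An endianness-independent alternative to reinterpret is to pack the bytes with bit shifts. A minimal sketch (function name is mine; written with the current UInt8/UInt32 spelling):

```julia
# Pack four bytes into one UInt32, byte i landing in bits 8(i-1)..8i-1
# (i.e. little-endian-style byte order, regardless of the host's endianness).
function pack_bytes(bytes::AbstractVector{UInt8})
    @assert length(bytes) == 4
    packed = UInt32(0)
    for (i, b) in enumerate(bytes)
        packed |= UInt32(b) << (8 * (i - 1))
    end
    return packed
end

pack_bytes(UInt8[1, 2, 5, 255])  # 0xff050201
```

Unlike reinterpret, this gives the same numeric value on any machine, which matters if the packed value crosses a network or file-format boundary.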
Re: [julia-users] Re: Does it make sence to use Uint8 instead of Int64 on x64 OS?
There can also be some speed advantages if you can exploit vectorised SIMD operations; see for example https://software.intel.com/en-us/articles/computing-delacorte-numbers-with-julia On Wednesday, 25 March 2015 21:38:31 UTC+1, Boris Kheyfets wrote: Thanks. On Wed, Mar 25, 2015 at 11:31 PM, Ivar Nesje iva...@gmail.com javascript: wrote: If you store millions of them, you use only 1/8 of the space and get better memory efficiency. On Wednesday, 25 March 2015 21.11.05 UTC+1, Boris Kheyfets wrote: The question says it all. I wonder if one would get any benefits from keeping small things in small containers: Uint8 instead of Int64 on an x64 OS?
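The 1/8 space factor is easy to verify directly, since an Array's data buffer is simply element size times length:

```julia
n = 1_000_000
a8  = zeros(UInt8, n)
a64 = zeros(Int64, n)

sizeof(a8)   # 1_000_000 bytes (~1 MB)
sizeof(a64)  # 8_000_000 bytes (~8 MB)
```

Besides raw footprint, the smaller array also means 8x more elements fit per cache line, which is where much of the practical speedup comes from.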
Re: [julia-users] Understanding the Cost of Virtualization
Lots of info here: http://docs.julialang.org/en/release-0.3/manual/faq/#how-do-abstract-or-ambiguous-fields-in-types-interact-with-the-compiler --Tim On Friday, March 27, 2015 06:19:53 PM daniel.m...@gmail.com wrote: One of the patterns I use a lot at work is when there are a number of options for a certain model, and we want to be able to just select one to use. We also want to be able to change which model is being used part way through the overall execution of the code. In C++, we have an abstract base class, and then each model fills in the virtual methods for its particular algorithm. Some manager class keeps a pointer to the base class, and we can change that pointer to change the model. I tried doing this same thing in Julia. I have a few versions of the code; they are all in a gist: https://gist.github.com/danielmatz/1a64cded91f996d40b99. The version slow.jl is the approach I would use in C++. Since I knew that using abstract types directly was a performance hit, I made a second version, fast.jl, that instead uses an integer flag to indicate which model to use. This second version is 10x faster and uses less memory. I'm curious about why this performance difference is so pronounced. What exactly is going on that makes virtualization expensive? The Julia dispatch code must be doing something more than how I conceptualize it (i.e., as a set of if statements). I'm just curious what that is. To try to investigate this, I made yet another version, other.jl (yeah, stupid names, sorry), that is like slow.jl, but instead imitates dispatch with isa. Even this is faster, by about 2x! I'd love to learn more about what is going on in these three cases. I'm also curious if this is something that is expected to get faster in the future? Thanks for the help. And thanks for being patient with simple minded engineers like myself… Hopefully I didn't screw up the terminology too badly. Daniel
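The gist isn't reproduced here, but the FAQ's point can be sketched in a few lines (all names below are my own, in current Julia syntax). The key contrast is an abstract-typed field, whose concrete type the compiler cannot know, versus a type-parameterized field, where the concrete type is baked into the container's type and dispatch is resolved at compile time:

```julia
abstract type Model end
struct ModelA <: Model end
struct ModelB <: Model end
evaluate(::ModelA, x) = 2x
evaluate(::ModelB, x) = x * x

# slow.jl-style: abstract-typed field, so every evaluate call
# must look up the concrete type and dispatch at runtime
mutable struct SlowManager
    model::Model
end

# Parameterized field: the concrete model type is part of the manager's
# type, so the compiler specializes and resolves dispatch statically
mutable struct FastManager{M<:Model}
    model::M
end

total(m, xs) = sum(evaluate(m.model, x) for x in xs)

total(SlowManager(ModelA()), 1:3)  # 12 (2 + 4 + 6), via dynamic dispatch
total(FastManager(ModelA()), 1:3)  # 12, via statically resolved calls
```

The parameterized version gives you the C++-style "swap the model" pattern without the abstract-field penalty, at the cost of `total` being recompiled per concrete model type.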
Re: [julia-users] How to test argument types available on a Function?
In ApproxFun, a user-supplied function is approximated. The approximation depends on whether the function is univariate or bivariate. Sent from my iPad On 28 Mar 2015, at 4:49 pm, Toivo Henningsson toivo@gmail.com wrote: The method_exists function would probably be a slightly cleaner way to do this. But I'm not sure that it's such a good idea anyway. Why do you need it?
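For reference, here is how the two introspection approaches mentioned in this thread look. Note that on current Julia both work for anonymous functions (the anonymous-function limitation discussed earlier was specific to the 0.3/0.4 era), and method_exists has since been renamed hasmethod:

```julia
f(x) = x
f(x, y) = x + y

applicable(f, 0)       # true: a one-argument method exists
applicable(f, 0, 0)    # true: a two-argument method exists
applicable(f, 0, 0, 0) # false

g = x -> x^2
applicable(g, 0)       # true, even though g is anonymous
applicable(g, 0, 0)    # false

hasmethod(f, Tuple{Int,Int})  # true (method_exists in 0.3/0.4)
```

Unlike the try/catch probing shown earlier in the thread, applicable never actually calls the function, so it is safe even when calling f would have side effects.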
Re: [julia-users] Re: encode 4 x 1 byte into a one 4 bytes variable
J Luis, you probably want to reinterpret it as RGBA{Ufixed8}, from the Color package. Images already handles the "result shape not specified" error, if you want to try that (just say using Images first). Simon, the RGB24 you defined is redundant with RGBA{Ufixed8} or ABGR{Ufixed8}, depending on the endianness of the host machine. The RGB24 type was created specifically to give meaning to UInt32s, which is what the Cairo.jl package uses, with consistent endianness interpretation (in agreement with Stefan's code example). Images contains quite a number of color types and operations not defined in Color.jl; if you're needing more too, I should finally get around to folding these back into Color. Best, --Tim On Saturday, March 28, 2015 02:25:39 AM Simon Danisch wrote: I have a similar problem and I was wondering how to solve it best. Look at RGB24, which is probably pretty much the type you want: SimonDanisch/ColorTypes.jl/src/types.jl#L218. I copied that from Color.jl, and was wondering why it isn't defined as: immutable RGB24 r::Uint8 g::Uint8 b::Uint8 a::Uint8 end, which could be reinterpreted as Uint32 if needed for a ccall, but inside of Julia would be a lot easier to handle. From your example it looks like you have a matrix of shape (x, 4), which is a little annoying, as it means you can't just reinterpret the array: you have to juggle the dimensions around to get the color dimension into the first position. Specifying the shape is pretty easy; in your case it'd be reinterpret(Uint32, Uint8[1 2 3 255], (1,)), as the resulting array should have size (1,). On Saturday, March 28, 2015 02:04:35 UTC+1, J Luis wrote: Hi, how can I encode 4 one-byte variables, let's say julia> UInt8[1 2 5 255] 1x4 Array{UInt8,2}: 0x01 0x02 0x05 0xff into a single variable 4 bytes long? Thanks.
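Without pulling in Color or Images, the dimension juggling Simon describes looks like this in plain Julia (a sketch in current syntax, where reinterpret on an array divides the first dimension by the size ratio; the resulting numeric values depend on host byte order):

```julia
A = UInt8[1 2 5 255;           # 2×4 matrix: one "pixel" per row
          4 3 2   1]
B = permutedims(A)             # 4×2: byte components along the first dimension
v = vec(reinterpret(UInt32, B))  # one packed UInt32 per pixel

length(v)  # 2
# On a little-endian host, v == [0xff050201, 0x01020304];
# a big-endian host would see the byte-swapped values.
```

The permutedims copy is exactly the cost Simon is complaining about: reinterpret needs the four bytes of each pixel to be adjacent in memory, which for an (x, 4) layout they are not.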