Re: [julia-users] Parallel for-loops
Thanks! Yes, in this setting I would stay away from SharedArrays, for the reasons above (all processors see the same arrays, so their edits interfere with each other). SharedArrays are good to:

1) share immutable input data across local workers (no data is serialized/copied except the SharedArray metadata),
2) store outputs, but only when each worker is responsible for a specific part of the output,
3) 1+2 combined, when each worker manipulates its part of the array in-place.

The pmap version looks good to me as far as I can see!

Am 13.03.2015 um 18:09 schrieb Pieter Barendrecht pjbarendre...@gmail.com:

Cheers. I uploaded the two scripts:
https://gist.github.com/pjbarendrecht/ee4eff971ec2073bfad6 (using SharedArrays)
https://gist.github.com/pjbarendrecht/617b73a36b4848634eae (using the pmap() function)
Use ParSet(10) to run 10,000 simulations. Pieter

On Friday, March 13, 2015 at 3:29:48 PM UTC, René Donner wrote:

Am 13.03.2015 um 16:20 schrieb Pieter Barendrecht pjbare...@gmail.com:

Thanks! I tried both approaches you suggested. Some results using SharedArrays (100,000 simulations):

    #workers   time
    1          ~120s
    3          ~42s
    6          ~40s

Short question: the first print statement after the for-loop is already executed before the for-loop ends. How do I prevent this from happening?

Some results using the other approach (again 100,000 simulations):

    #workers   time
    1          ~118s
    2          ~60s
    3          ~42s
    4          ~38s
    6          ~40s
    6          ~40s

Could you post a simplified code snippet, either here or in a gist? It is difficult to know what exactly you are doing ;-)

A couple of questions. My equivalent of myfunc_pure() also requires a second argument. Is that argument changing, or is it there to switch between different algorithms etc.? In addition, I don't make use of the startindex argument in the function. What's the common approach here? Next, there are actually multiple variables that should be returned, not just result. You can always return (a,b,c) instead of a, i.e. a tuple. The function you provide to reduce then has the signature myreducer(a::Tuple, b::Tuple): combine the tuples, and again return a tuple.

Overall, I'm a bit surprised that using more than 3 or 4 workers does not decrease the running time. Any ideas? I'm using Julia 0.3.6 on a 64-bit Arch Linux system, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz. It can be any number of things: the memory bandwidth could be the limiting factor, or the computation is actually nicely sped up and a lot of what you see is communication overhead. In that case, work on chunks of data / batches of iterations, i.e. don't pmap over millions of things but only a couple dozen. Looking at the code might shed some light.

On Friday, March 13, 2015 at 8:37:19 AM UTC, René Donner wrote:

Perhaps SharedArrays are what you need here? http://docs.julialang.org/en/release-0.3/stdlib/parallel/?highlight=sharedarray#Base.SharedArray

Reading from a shared array in workers is fine, but when different workers try to update the same part of that array you will get racy behaviour and most likely not the correct result. Can you somehow re-formulate your problem along these lines, using a map-and-reduce approach with a pure function?

    @everywhere function myfunc_pure(startindex)
        result = zeros(Int, 10)
        for i in startindex + (0:19)  # 20 iterations
            result[mod(i, length(result)) + 1] += 1
        end
        result
    end

    reduce(+, pmap(myfunc_pure, 1:5))  # 5 blocks of 20 iterations

Like this you don't have shared mutable state and thus no risk of mess-ups.

Am 13.03.2015 um 00:56 schrieb Pieter Barendrecht pjbare...@gmail.com:

I'm wondering how to save data/results in a parallel for-loop. Let's assume there is a single Int64 array, initialised using zeros() before starting the for-loop. In the for-loop (typically ~100,000 iterations; that's the reason I'm interested in parallel processing) the entries of this Int64 array should be increased, based on the results of an algorithm that's invoked in the for-loop. Everything works fine when using just a single proc, but I'm not sure how to modify the code such that, when using e.g. addprocs(4), the data/results stored in the Int64 array can be processed once the for-loop ends. The algorithm (a separate function) is available to all procs (using the require() function). Just using the Int64 array in the for-loop (with @parallel for k=1:10) does not work, as each proc receives its own copy, so after the for-loop it contains just zeros (as illustrated in a set of slides on the Julia language). I guess it involves @spawn and fetch() and/or pmap(). Any suggestions or examples would be much appreciated :).
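René's tuple suggestion can be sketched as follows; the names (mysim, myreducer) and the toy workload are illustrative, not taken from the gists above:

```julia
# Illustrative sketch of returning several results as a tuple and combining
# them with a custom reducer. With workers added via addprocs(), the map()
# below would become pmap().
function mysim(startindex)
    hits  = zeros(Int, 10)          # toy per-block result
    total = 0
    for i in startindex .+ (0:19)   # 20 iterations per block
        hits[mod(i, length(hits)) + 1] += 1
        total += 1
    end
    (hits, total)                   # return multiple results as a tuple
end

# Combine two tuples element-wise; signature myreducer(a::Tuple, b::Tuple)
myreducer(a::Tuple, b::Tuple) = (a[1] + b[1], a[2] + b[2])

result = reduce(myreducer, map(mysim, 1:5))   # 5 blocks of 20 iterations
```

Each worker only ever touches its own local `hits` array, so there is no shared mutable state to race on.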
Re: [julia-users] Re: Time type
On Friday, March 13, 2015 at 12:28:50 PM UTC-4, Stefan Karpinski wrote: Yes, but the representation is quite inefficient. This would be an efficient scalar type. Couldn't you just represent it by Dates.Second (if you want second resolution) or Dates.Millisecond (if you want millisecond resolution)? Or are you worried about entry and pretty-printing?
[julia-users] Factorization of big integers is taking too long
I got interested in factorizing some larger integers such as N = 3^100 + 2. In all tries, factor(N) did not return and had to be interrupted:

    julia> N = big(3)^100 + 2
    julia> factor(N)
    ^CERROR: interrupt
     in finalizer at ./base.jl:126
     in + at gmp.jl:243
     in factor at primes.jl:111

It is calling GMP, but the GMP software cannot be the reason, as this works with the GMP package in R and returns the factorization within seconds:

    R> library(gmp)
    R> N <- bigz(3)^100 + 2
    R> factorize(N)
    Big Integer ('bigz') object of length 3:
    [1] 31721  246451584544723  65924521656039679831393482841
    R> system.time(factorize(N))
       user  system elapsed
      3.738   0.000   3.730

Is this a bug? Did I do something wrong? The first factor, 31721, is not even large. Mathematical software such as GAP or PARI/GP will factorize this in much less than a second.

PS: versioninfo(): Julia Version 0.3.6; System: Linux (x86_64-linux-gnu); CPU: Intel(R) Core(TM) i3-3217U CPU @ 1.80GHz; WORD_SIZE: 64; BLAS: libblas.so.3; LAPACK: liblapack.so.3; LIBM: libopenlibm; LLVM: libLLVM-3.3
Re: [julia-users] Re: Best practices for migrating 0.3 code to 0.4? (specifically constructors)
That string does represent a filename, doesn't it? Do you obtain it by calling readdir() at some point, perhaps? This can now return an Array{Union(UTF8String,ASCIIString),1}, which I believe was not the case in 0.3. (At least I had to adapt my code at some point to deal with this.)

Am 13.03.2015 um 18:43 schrieb Phil Tomson philtom...@gmail.com:

On Thursday, March 12, 2015 at 9:00:45 PM UTC-7, Avik Sengupta wrote:

I think this is simply due to your passing a UTF8String, while your function is defined only for ASCIIString. Since there is no function defined for UTF8String, Julia falls back to the default constructor that calls convert.

    julia> type A
               a::ASCIIString
               b::Int
           end

    julia> function A(fn::ASCIIString) A(fn, length(fn) end
    ERROR: syntax: missing comma or ) in argument list

    julia> function A(fn::ASCIIString) A(fn, length(fn)) end
    A

    julia> A("X")
    A("X",1)

    julia> A("∞")
    ERROR: MethodError: `convert` has no method matching convert(::Type{A}, ::UTF8String)
    This may have arisen from a call to the constructor A(...),
    since type constructors fall back to convert methods.
    Closest candidates are: convert{T}(::Type{T}, ::T)
     in call at no file

    # Now try this:
    julia> type A
               a::AbstractString
               b::Int
           end

    julia> function A(fn::AbstractString) A(fn, length(fn)) end
    A

    julia> A("X")
    A("X",1)

    julia> A("∞")
    A("∞",1)

Note that using AbstractString is only one way to solve this, which may or may not be appropriate for your use case. The code above is simply to demonstrate the issue at hand.

Why would this have changed in 0.4, though? Running in Julia 0.4:

    julia> typeof("string")
    ASCIIString

so a quoted string still has the same type as it did before.

On Thursday, 12 March 2015 23:43:58 UTC, Phil Tomson wrote:

I thought I'd give 0.4 a spin to try out the new garbage collector. On my current codebase developed with 0.3 I ran into several warnings (float32() should now be Float32() - that sort of thing). And then this error:

    ERROR: LoadError: LoadError: LoadError: LoadError: LoadError: MethodError: `convert` has no method matching convert(::Type{Img.ImgHSV}, ::UTF8String)
    This may have arisen from a call to the constructor Img.ImgHSV(...),
    since type constructors fall back to convert methods.
    Closest candidates are: convert{T}(::Type{T}, ::T)

After poking around New Language Features and the list here a bit, it seems there are changes to how overloaded constructors work. In my case I've got:

    type ImgHSV
        name::ASCIIString
        data::Array{HSV{Float32},2}  # data::Array{IntHSV,2}
        height::Int64
        wid::Int64
        h_mean::Float32
        s_mean::Float32
        v_mean::Float32
        h_std::Float32
        s_std::Float32
        v_std::Float32
    end

    # Given the filename of an image file, construct an ImgHSV
    function ImgHSV(fn::ASCIIString)
        name, ext = splitext(basename(fn))
        source_img_hsv = Images.data(convert(Image{HSV{Float64}}, imread(fn)))
        # scale all the values up from (0-1) to (0-255)
        source_img_scaled = map(x -> HSV(((x.h/360)*255), (x.s*255), (x.v*255)), source_img_hsv)
        img_ht  = size(source_img_hsv, 2)
        img_wid = size(source_img_hsv, 1)
        h_mean = (mean(map(x -> x.h, source_img_hsv)/360)*255)
        s_mean = (mean(map(x -> x.s, source_img_hsv))*255)
        v_mean = (mean(map(x -> x.v, source_img_hsv))*255)
        h_std  = (std(map(x -> x.h, source_img_hsv)/360)*255)
        s_std  = (std(map(x -> x.s, source_img_hsv))*255)
        v_std  = (std(map(x -> x.v, source_img_hsv))*255)
        ImgHSV(name, float32(source_img_scaled), img_ht, img_wid,
               h_mean, s_mean, v_mean, h_std, s_std, v_std)
    end

Should I rename this function to something like buildImgHSV so it's not actually a constructor and convert doesn't enter the picture? Phil
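A minimal, self-contained sketch of the fix Avik describes, i.e. widening the constructor argument so non-ASCII strings dispatch to it instead of falling through to convert(). It is written with modern struct syntax and a hypothetical type name, since the 0.3/0.4-era UTF8String/ASCIIString types no longer exist:

```julia
# Hypothetical minimal type. Accepting AbstractString means any string type
# (including whatever readdir() hands you) hits this constructor rather than
# the convert() fallback discussed above.
struct FileRecord
    name::String
    len::Int
end
FileRecord(fn::AbstractString) = FileRecord(String(fn), length(fn))

r = FileRecord("∞.png")   # a non-ASCII filename dispatches fine
```

The same idea applies to ImgHSV: declaring the outer constructor on AbstractString (or the field type the strings actually have) avoids the convert fallback without renaming the function.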
[julia-users] Re: Factorization of big integers is taking too long
On Friday, March 13, 2015 at 1:25:14 PM UTC-4, Jake Bolewski wrote: This is falling back to factor() for generic integers, so the GMP method does not look to be wrapped. The generic version will be terribly slow for bigints. It would be easy to add if you would like to submit a pull request.

At first glance, I'm not seeing a built-in factorization method in GMP. Googling "factorization gmp" turns up a bunch of algorithms that people have built on top of GMP, but nothing in GMP itself. (We should see what algorithm/code R is using.)
Re: [julia-users] Re: Time type
Printing is one issue; I encountered it because I was wrapping an API that has Date, DateTime and Time types, which I ended up mapping to Date, DateTime and DateTime values, but with the DateTime values coming from Times having nonsense in the date parts. Kind of awkward. Returning seconds would just be kind of awkward and confusing.

On Fri, Mar 13, 2015 at 2:48 PM, Steven G. Johnson stevenj@gmail.com wrote: Couldn't you just represent it by Dates.Second (if you want second resolution) or Dates.Millisecond (if you want millisecond resolution)? Or are you worried about entry and pretty-printing?
[julia-users] Re: Factorization of big integers is taking too long
This is falling back to factor() for generic integers, so the GMP method does not look to be wrapped. The generic version will be terribly slow for bigints. It would be easy to add if you would like to submit a pull request.

    julia> N = big(3)^100 + 2
    515377520732011331036461129765621272702107522003

    julia> @which factor(N)
    factor{T<:Integer}(n::T) at primes.jl:79

Link to the implementation: https://github.com/JuliaLang/julia/blob/d87e303e6b74b5e36bfa58a20d9404c3d1d53415/base/primes.jl#L78

On Friday, March 13, 2015 at 1:20:16 PM UTC-4, Hans W Borchers wrote: I got interested in factorizing some larger integers such as N = 3^100 + 2. In all tries, factor(N) did not return and had to be interrupted: [...]
Re: [julia-users] Re: Time type
Also, given that Julia time is based on UT, Time is well-defined – a Time is conceptually an equivalence class of DateTimes that differ by exactly an integral number of days. If you think of a Date as the set of DateTimes that occur during the same UTC day, Date x Time is naturally isomorphic to DateTime.

On Fri, Mar 13, 2015 at 2:58 PM, Stefan Karpinski ste...@karpinski.org wrote: Printing is one issue; I encountered it because I was wrapping an API that has Date, DateTime and Time types [...]
Re: [julia-users] Some simple use cases for multi-threading
Viral, I assume you are referring to the real multi-threading that is currently under development, and not the multi-tasking that has been around in Julia for a long time? I also assume that you are looking for use cases that will benefit, not only for cases that have already benefitted? -erik

On Mar 12, 2015, at 23:52, Viral Shah vi...@mayin.org wrote: I am looking to put together a set of use cases for our multi-threading capabilities - mainly to push forward as well as a showcase. I am thinking of starting with stuff in the microbenchmarks and the shootout implementations that are already in test/perf. I am looking for other ideas that would be of interest. If there is real interest, we can collect all of these in a repo in JuliaParallel. -viral

-- Erik Schnetter schnet...@gmail.com http://www.perimeterinstitute.ca/personal/eschnetter/
[julia-users] Re: MATLAB MEX embedding signal handling segfault
I've put up the code I have so far at https://github.com/invenia/JuliaLAB. I'm still hoping this bug can be fixed! I'll be working on this library more (it's obviously incomplete), with the hope that at some point it will be too good to ignore.

On Thursday, 5 March 2015 17:00:20 UTC-6, ele...@gmail.com wrote:

On Friday, March 6, 2015 at 1:41:17 AM UTC+10, Eric Davies wrote: I thought I'd post the various errors I get on crash in case any of them sparks a thought. One of a number of things happens each time; they are now in files error1 to error5 in the gist: https://gist.github.com/iamed2/e883c6b0b8ff4220d946 In errors 4 and 5 I can no longer see the lines I type at the terminal until I enter a newline.

Error2 looks at first glance like libuv's listener thread is still running and it becomes confused when an FD it is monitoring is re-used by Matlab for a file, not one of the types it expected, so the assert fails, causing a SIGABRT, which then segfaults because it's not the abort handler that was expected. Errors 4 and 5 look like the same thing, but the handlers just get stuck and don't complete, by the sound of it. Errors 1 and 3 may be the same thing, but look like they are happening at Matlab shutdown; maybe it's saying "what's this thread 'ere, I didn't start it" :) Cheers, Lex
[julia-users] Re: Factorization of big integers is taking too long
The R gmp docs say that they use the Pollard rho algorithm, and there is an implementation of it in the GMP demos directory: https://gmplib.org/repo/gmp/file/2d027c920892/demos/factorize.c So presumably they're using that code? simon

On Friday, 13 March 2015 18:46:21 UTC, Steven G. Johnson wrote: At first glance, I'm not seeing a built-in factorization method in GMP. Googling "factorization gmp" turns up a bunch of algorithms that people have built on top of GMP, but nothing in GMP itself. (We should see what algorithm/code R is using.)
[julia-users] Re: Factorization of big integers is taking too long
That's right; I looked into the source of the 'gmp' package, and the C code in there is the same as the demo code you are linking to. It uses Pollard's rho, with Miller-Rabin for primality testing. That code is under the GPL license. As Miller-Rabin is already used for Julia's isprime() function, it would be nice to implement Pollard rho in pure Julia and demo how fast it can be. But consider that there are improvements and newer variants of this algorithm that should be taken into account.

On Friday, March 13, 2015 at 8:54:37 PM UTC+1, Simon Byrne wrote: The R gmp docs say that they use the Pollard rho algorithm, and there is an implementation of it in the GMP demos directory: https://gmplib.org/repo/gmp/file/2d027c920892/demos/factorize.c So presumably they're using that code? simon
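For reference, a bare-bones pure-Julia sketch of Pollard's rho (the textbook variant, not the tuned GMP demo code); it assumes its input is composite and is only meant to illustrate the algorithm being discussed:

```julia
# Pollard's rho, textbook form. Assumes n is composite (a prime input would
# cycle through increasing values of c forever). Returns some nontrivial
# factor of n, not necessarily the smallest.
function pollard_rho(n::Integer, c::Integer = 1)
    f(x) = mod(x * x + c, n)
    x = y = big(2)
    d = big(1)
    while d == 1
        x = f(x)          # tortoise: one step
        y = f(f(y))       # hare: two steps
        d = gcd(abs(x - y), n)
    end
    d == n ? pollard_rho(n, c + 1) : d   # degenerate cycle: retry with another c
end

N = big(3)^100 + 2
d = pollard_rho(N)   # finds a nontrivial factor of N
```

Since the expected work is on the order of the square root of the smallest prime factor (here 31721), this finds a factor of 3^100 + 2 almost instantly, which is why the generic trial-division fallback looks so bad by comparison.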
[julia-users] Re: Some simple use cases for multi-threading
Building some type of tree (like k-d trees), maybe? Just start a separate thread on a subtree and have the thread work down. I have a Julia package for KDTrees, so that's what I would use it for, at least :)

On Friday, March 13, 2015 at 9:54:17 PM UTC+1, Tobias Knopp wrote: You can start with a simple matrix-vector or matrix-matrix product. imfilter (Images.jl) is also a function that can benefit from multithreading.
Re: [julia-users] Re: Time type
On Fri, Mar 13, 2015 at 1:01 PM, Stefan Karpinski ste...@karpinski.org wrote: Also, given that Julia time is based on UT, Time is well-defined – a Time is conceptually an equivalence class of DateTimes that differ by exactly an integral number of days. If you think of a Date as the set of DateTimes that occur during the same UTC day, Date x Time is naturally isomorphic to DateTime. I'm not an expert, but doesn't the existence of leap seconds in universal time mean that the last statement is false? For certain dates, it will sometimes be necessary to check if they harbor leap seconds, so I don't see how there is a clean factorization of DateTime into Date x Time. Thanks, Jiahao Chen Staff Research Scientist MIT Computer Science and Artificial Intelligence Laboratory
Re: [julia-users] Re: Block Matrices: problem enforcing parametric type constraints
@TimHoly thanks. I was able to use the described approach to add in functionality for arbitrary eltype, so everything in my original request is satisfied. This is good, since I realized earlier that I will also need a Complex eltype later down the road. Best, G

On Friday, March 13, 2015 at 9:59:58 AM UTC+1, Tim Holy wrote: Note that

    BlockMatrix{S, TA<:AbstractMatrix{S}, TB<:AbstractMatrix{S}, TC<:AbstractMatrix{S}, TD<:AbstractMatrix{S}}

is a syntax that (1) has come to be called "triangular dispatch" and (2) is not actually supported by Julia yet, although neither does it throw an error. It just won't always do what you think it will do. There are more indirect strategies to achieve this; see one here: http://docs.julialang.org/en/release-0.3/manual/faq/#how-should-i-declare-abstract-container-type-fields (the part about the inner constructor). --Tim

On Thursday, March 12, 2015 07:20:45 PM Greg Plowman wrote: I don't really understand how this works, but this might point someone in the right direction. It seems Julia can't fully infer types, in particular the element type S. So we get further if we give a hint:

    type BlockMatrix{S, TA<:AbstractMatrix{S}, TB<:AbstractMatrix{S}, TC<:AbstractMatrix{S}, TD<:AbstractMatrix{S}} <: AbstractMatrix{S}
        A::TA
        B::TB
        C::TC
        D::TD
    end

    typealias Block{S} BlockMatrix{S, AbstractMatrix{S}, AbstractMatrix{S}, AbstractMatrix{S}, AbstractMatrix{S}}

    # not really sure what size() should be, but need to define it for output
    Base.size(x::BlockMatrix) = (size(x.A,1) + size(x.C,1), size(x.A,2) + size(x.B,2))

    N = Block{Float64}(A, A, A, B)

    julia> N.A
    4x4 Array{Float64,2}:
     0.805914  0.473687  0.721984  0.464178
     0.306     0.728015  0.148804  0.776728
     0.439048  0.566558  0.72709   0.524761
     0.255731  0.16528   0.331941  0.167353

(N.B and N.C print the same array, since the same matrix A was passed for those blocks.)

    julia> N.D
    4x4 Diagonal{Float64}:
     1.0  0.0  0.0  0.0
     0.0  2.0  0.0  0.0
     0.0  0.0  3.0  0.0
     0.0  0.0  0.0  4.0

On Friday, March 13, 2015 at 8:49:41 AM UTC+11, Gabriel Mitchell wrote: @g Sorry, I guess I didn't state my intent that clearly. While your example does enforce the Matrix/eltype constraint, that is only part of what I am after. Having a type parameter for each block is the main thing I am interested in. The reason is that I can write methods that dispatch on those types. An example of such a method with an explicit 4-ary structure would be:

    # generic fallback
    det(A::Matrix, B::Matrix, C::Matrix, D::Matrix) = det([A B; C D])

    # specialized method; should actually check that A is invertible, but you get the idea
    det(A::Diagonal, B::Diagonal, C::Diagonal, D::Diagonal) = det(A)*det(D - C*inv(A)*B)

    # etc...

In my applications there are at least a dozen situations where certain block structures allow for significantly more efficient implementations than the generic fallback. One would like to make the calls to these methods (det, inv, trace, and so on) with the normal 1-ary argument, the matrix M itself. This would be possible if the type information of the blocks could be read out of the type of M. I hope this clears up my motivation for the above question.

On Thursday, March 12, 2015 at 8:49:05 PM UTC+1, g wrote: BlockMatrix only needs one type parameter to fully specify the type, so you should probably only use one type parameter. Like so:

    type BlockMatrix{S} <: AbstractMatrix{S}
        A::AbstractMatrix{S}
        B::AbstractMatrix{S}
        C::AbstractMatrix{S}
        D::AbstractMatrix{S}
    end

I'm sure someone else can explain in more detail why yours didn't work.
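Gabriel's 4-ary det example can be checked end-to-end. The sketch below (modern syntax, and with a hypothetical name blockdet to avoid clashing with the det in Base) verifies that the Diagonal shortcut agrees with the dense fallback:

```julia
using LinearAlgebra

# Generic fallback: materialize the block matrix and take det directly.
blockdet(A::Matrix, B::Matrix, C::Matrix, D::Matrix) = det([A B; C D])

# Specialized method for diagonal blocks, via the Schur complement.
# (As noted above, a careful version would first check that A is invertible.)
blockdet(A::Diagonal, B::Diagonal, C::Diagonal, D::Diagonal) =
    det(A) * det(D - C * inv(A) * B)

A = Diagonal([2.0, 3.0]); B = Diagonal([1.0, 1.0])
C = Diagonal([1.0, 1.0]); D = Diagonal([4.0, 5.0])

fast  = blockdet(A, B, C, D)                                   # O(n) path
dense = blockdet(Matrix(A), Matrix(B), Matrix(C), Matrix(D))   # generic path
```

With the block type parameters exposed in BlockMatrix's type, a 1-ary det(M::BlockMatrix) could forward to whichever of these methods matches M's blocks.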
Re: [julia-users] Plotting table of numbers in Gadfly?
Hmm… that's a little bit complicated… I wonder if there would be interest in a wrapper package for Gadfly that makes these things simpler? On 13 Mar 2015, at 3:03 pm, Jiahao Chen jia...@mit.edu wrote: Daniel is the authoritative source, but for such situations I use layers and manual color schemes like this:

using Color, Gadfly

xgrid = 0:10:100
data = rand(10, 10)
nrows = size(data, 1)

# lchoices between 50 and 100 are too bright for my taste for plotting lines
cm = distinguishable_colors(nrows, lchoices=0:50)

plot(
    [layer(x=xgrid, y=data[i, :], Geom.line, Theme(default_color=cm[i])) for i=1:nrows]...,
    Guide.manual_color_key("row id", ["row $i" for i=1:nrows], cm),
)

Thanks, Jiahao Chen Staff Research Scientist MIT Computer Science and Artificial Intelligence Laboratory On Thu, Mar 12, 2015 at 11:54 PM, Sheehan Olver dlfivefi...@gmail.com wrote: I have a table of numbers that I want to line plot in Gadfly: i.e., each column corresponds to values of a function. Is this possible without creating a DataFrame?
Re: [julia-users] Plotting table of numbers in Gadfly?
I'm working on it: https://github.com/dcjones/Gadfly.jl/issues/563 The desire for a separate plotting semantics is indicative of some shortcoming. So I hope we can resolve that, and there won't be a need to wrap Gadfly. On Friday, March 13, 2015 at 8:24:03 PM UTC-7, Sheehan Olver wrote: Hmm… that's a little bit complicated… I wonder if there would be interest in a wrapper package for Gadfly that makes these things simpler?
[julia-users] Re: isnull woes
You could conditionally use Compat. Something like

module Foo
export isnull
if Base.VERSION < v"0.4-"
    using Compat
    import Compat: isnull
else
    import Base: isnull
end
# new method for isnull
end

should work. On Friday, March 13, 2015 at 8:54:55 AM UTC-7, Avik Sengupta wrote: So JavaCall has always had an isnull(::JavaObject) method, which is exported. Now that Base has Nullable in v0.4, it also has an isnull(::Nullable) method exported. So far so good. But I want to use Compat for JavaCall, primarily for the Int/int change. Now, Compat also has an isnull(::Nullable) method exported, but that is in the module Compat! So I can't figure out how to write the definition of isnull in JavaCall that'll work in all versions of Julia. Any help appreciated. Regards - Avik PS Maybe this is another reason why new functionality should NOT be backported, even in Compat?
Re: [julia-users] ERROR: `convert` has no method matching convert(::Type{Int64...}, ::Int64)
It is not possible to run your code, since it requires an external file run_random_2.jl which you did not include. Do you have no backtrace from the calculation? The backtrace would have told you which line number was causing the error. You will probably find it more helpful to post a minimal code snippet that exhibits the problem rather than your entire code. It's good etiquette, since otherwise anyone who wants to help you has to wade through hundreds of lines of code for no good reason. Thanks, Jiahao Chen Staff Research Scientist MIT Computer Science and Artificial Intelligence Laboratory On Tue, Mar 10, 2015 at 12:48 AM, Maxwell rsz...@gmail.com wrote: Hello folks, I am trying to run a little bootstrap in Julia using pmap and I am getting the following error from each process (the "fatal error on" lines and worker ids are interleaved across processes):

fatal error on ...
ERROR: `convert` has no method matching convert(::Type{Int64...}, ::Int64)

However, running the code for one case works just perfectly. I have been scratching my head about this for a few days now. Any help will be greatly appreciated. Find attached my code. Thanks. (Julia banner: Version 0.3.6, x86_64-redhat-linux)
[julia-users] Re: isnull woes
Regarding the PS: just because it possibly causes problems in one edge case is no reason not to make backports available to assist in making code run across multiple versions, to assist in porting, and to keep packages running on stable Julia. The benefits far outweigh the occasional cost; sorry you got hit with the cost. Cheers Lex
Re: [julia-users] question about module, using/import
Could you give further details, such as:
* code that very explicitly shows the error you see, in as simple a way as possible
* the version of Julia that you're using
* whether you are able to use PyPlot in other scenarios

On Thu, Mar 12, 2015 at 8:07 AM, antony schutz antonysch...@gmail.com wrote: Hello, I'm trying to generalize an algorithm for alpha users. The algorithm can draw plots, but I don't want this to be mandatory, so in the module I don't import the library (for example, I don't call using PyPlot). I want the plot drawing to be an option that has to be enabled by the user. Unfortunately, when I call using PyPlot and I am not working in the folder containing the module, the package is not recognized by the algorithm.

module mymodule
using needed_library
export needed_function
include(needed_files)
end

julia> using mymodule
julia> using PyPlot
INFO: Loading help data...
ERROR: figure not defined

I tried to define two modules with different names, but I can't load the second module (mymodulePyPlot) because the module is inside the folder mymodule and not in the folder mymodulePyPlot. Does somebody know a solution to this problem? Thanks in advance -- chris.p...@ieee.org
Re: [julia-users] Indenting by 2 spaces in ESS[Julia]
(defun customize-julia ()
  ;; also put other Julia customizations here
  (interactive)
  (setq julia-basic-offset 2))

(add-hook 'julia-mode-hook 'customize-julia)

BTW, I am working on ESS Julia repl interaction, you might find this useful (will be merged eventually, I hope): https://github.com/tpapp/ESS-julia-extensions Best, Tamas On Fri, Mar 13 2015, Shivkumar Chandrasekaran wrote: I have tried all possible combinations of suggestions from the ESS manual to get the ESS[Julia] mode to convert indentation to 2 spaces rather than 4, with no luck. Has anybody else succeeded, and if so could you please post your magic sauce? Thanks. --shiv--
[julia-users] Julia on OpenBSD
Just curious, has anyone out there tried building Julia on OpenBSD? Cheers, Mike
Re: [julia-users] Parallel for-loops
Perhaps SharedArrays are what you need here? http://docs.julialang.org/en/release-0.3/stdlib/parallel/?highlight=sharedarray#Base.SharedArray Reading from a shared array in workers is fine, but when different workers try to update the same part of that array you will get racy behaviour and most likely not the correct result. Can you somehow re-formulate your problem along these lines, using a map and reduce approach with a pure function?

@everywhere function myfunc_pure(startindex)
    result = zeros(Int, 10)
    for i in startindex + (0:19)  # 20 iterations
        result[mod(i, length(result)) + 1] += 1
    end
    result
end

reduce(+, pmap(myfunc_pure, 1:5))  # 5 blocks of 20 iterations

Like this you don't have shared mutable state and thus no risk of mess-ups. On 13.03.2015 at 00:56, Pieter Barendrecht pjbarendre...@gmail.com wrote: I'm wondering how to save data/results in a parallel for-loop. Let's assume there is a single Int64 array, initialised using zeros() before starting the for-loop. In the for-loop (typically ~100,000 iterations, that's the reason I'm interested in parallel processing) the entries of this Int64 array should be increased (based on the results of an algorithm that's invoked in the for-loop). Everything works fine when using just a single proc, but I'm not sure how to modify the code such that, when using e.g. addprocs(4), the data/results stored in the Int64 array can be processed once the for-loop ends. The algorithm (a separate function) is available to all procs (using the require() function). Just using the Int64 array in the for-loop (using @parallel for k=1:10) does not work, as each proc receives its own copy, so after the for-loop it contains just zeros (as illustrated in a set of slides on the Julia language). I guess it involves @spawn and fetch() and/or pmap(). Any suggestions or examples would be much appreciated :).
Re: [julia-users] Swapping two columns (or rows) of an array efficiently
Hi Milan, Did you run your benchmarks on 0.4? Thanks, Jan On Thursday, 12 March 2015 at 19:19:08 UTC+1, Milan Bouchet-Valat wrote: On Thursday, 12 March 2015 at 11:01 -0500, Tim Holy wrote: This is something that many people (understandably) have a hard time appreciating, so I think this post should be framed and put up on the julia wall. We go to considerable lengths to try to make code work efficiently in the general case (check out subarray.jl and subarray2.jl in master some time...), but sometimes there's no competing with a hand-rolled version for a particular case. Folks should not be shy to implement such tricks in their own code. Though with the new array views in 0.4, the vectorized version should be more efficient than in 0.3. I've tried it, and indeed it looks like unrolling is not really needed, though it's still faster and uses less RAM:

X = rand(100_000, 5)

function f1(X, i, j)
    for _ in 1:1000
        X[:, i], X[:, j] = X[:, j], X[:, i]
    end
end

function f2(X, i, j)
    for _ in 1:1000
        a = sub(X, :, i)
        b = sub(X, :, j)
        a[:], b[:] = b, a
    end
end

function f3(X, i, j)
    for _ in 1:1000
        @inbounds for k in 1:size(X, 1)
            X[k, i], X[k, j] = X[k, j], X[k, i]
        end
    end
end

julia> f1(X, 1, 5); f2(X, 1, 5); f3(X, 1, 5);

julia> @time f1(X, 1, 5)
elapsed time: 1.027090951 seconds (1526 MB allocated, 3.63% gc time in 69 pauses with 0 full sweep)

julia> @time f2(X, 1, 5)
elapsed time: 0.172375013 seconds (390 kB allocated)

julia> @time f3(X, 1, 5)
elapsed time: 0.155069259 seconds (80 bytes allocated)

Regards --Tim On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote: As a general rule, with Julia one needs to unlearn the instinct (from Matlab or Python) that efficiency == clever use of library functions, which turns all optimization questions into "is there a built-in function for X" (and if the answer is no you are out of luck). Loops are fast, and you can easily beat general-purpose library functions with your own special-purpose code.
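Milan's f3 generalizes into a small reusable helper. The sketch below is my own wrapper around the same loop (the name swapcols! is hypothetical, not a Base function), and it runs unchanged on old and new Julia:

```julia
# Swap two columns of a matrix in place, element by element.
# Same idea as f3 above, minus the benchmarking loop.
function swapcols!(X::AbstractMatrix, i::Integer, j::Integer)
    @inbounds for k in 1:size(X, 1)
        X[k, i], X[k, j] = X[k, j], X[k, i]
    end
    return X
end

X = [1 2; 3 4]
swapcols!(X, 1, 2)
# X is now [2 1; 4 3]
```

Because it mutates its argument and allocates nothing, it avoids the temporary copies that make the slicing version slow on 0.3.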
Re: [julia-users] Re: Block Matrices: problem enforcing parametric type constraints
Note that BlockMatrix{S,TA<:AbstractMatrix{S},TB<:AbstractMatrix{S},TC<:AbstractMatrix{S},TD<:AbstractMatrix{S}} is a syntax that (1) has come to be called "triangular dispatch" and (2) is not actually supported by Julia yet, although neither does it throw an error. It just won't always do what you think it will do. There are more indirect strategies to achieve this; see one here: http://docs.julialang.org/en/release-0.3/manual/faq/#how-should-i-declare-abstract-container-type-fields (the part about the inner constructor). --Tim On Thursday, March 12, 2015 07:20:45 PM Greg Plowman wrote: I don't really understand how this works, but this might point someone in the right direction. It seems Julia can't fully infer types, in particular the element type S. So we get further if we give a hint:

type BlockMatrix{S,TA<:AbstractMatrix{S},TB<:AbstractMatrix{S},
                 TC<:AbstractMatrix{S},TD<:AbstractMatrix{S}} <: AbstractMatrix{S}
    A::TA
    B::TB
    C::TC
    D::TD
end

typealias Block{S} BlockMatrix{S,AbstractMatrix{S},AbstractMatrix{S},AbstractMatrix{S},AbstractMatrix{S}}

# not really sure what size() should be, but need to define it for output
Base.size(x::BlockMatrix) = (size(x.A,1) + size(x.C,1), size(x.A,2) + size(x.B,2))

N = Block{Float64}(A, A, A, B)

julia> N.A
4x4 Array{Float64,2}:
 0.805914  0.473687  0.721984  0.464178
 0.306     0.728015  0.148804  0.776728
 0.439048  0.566558  0.72709   0.524761
 0.255731  0.16528   0.331941  0.167353

(N.B and N.C print the same 4x4 array, since they were also constructed from A.)

julia> N.D
4x4 Diagonal{Float64}:
 1.0  0.0  0.0  0.0
 0.0  2.0  0.0  0.0
 0.0  0.0  3.0  0.0
 0.0  0.0  0.0  4.0

On Friday, March 13, 2015 at 8:49:41 AM UTC+11, Gabriel Mitchell wrote: @g Sorry, I guess I didn't state my intent that clearly.
While your example does enforce the Matrix/eltype constraint, that is only part of what I am after. Having a type parameter for each block is the main thing that I am interested in. The reason is that I can write methods that dispatch on those types. An example of such a method with an explicit 4-ary structure would be

# generic fallback
det(A::Matrix, B::Matrix, C::Matrix, D::Matrix) = det([A B; C D])
# specialized method; should actually check for A invertible, but you get the idea
det(A::Diagonal, B::Diagonal, C::Diagonal, D::Diagonal) = det(A) * det(D - C*inv(A)*B)
# etc...

In my applications there are at least a dozen situations where certain block structures allow for significantly more efficient implementations than the generic fallback. One would like to make the calls to these methods (det, inv, trace, and so on) with the normal 1-ary argument, the matrix M itself. This would be possible if the type information of the blocks could be read out of the type of M. I hope this clears up my motivation for the above question. On Thursday, March 12, 2015 at 8:49:05 PM UTC+1, g wrote: BlockMatrix only needs one type parameter to fully specify the type, so you should probably only use one type parameter. Like so:

type BlockMatrix{S} <: AbstractMatrix{S}
    A::AbstractMatrix{S}
    B::AbstractMatrix{S}
    C::AbstractMatrix{S}
    D::AbstractMatrix{S}
end

I'm sure someone else can explain in more detail why yours didn't work.
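For reference, the FAQ approach Tim Holy links to above can be sketched for this case. This is a hedged 0.3-era sketch, not code from the thread: the block parameters are left unconstrained, and the inner constructor enforces the element-type agreement that triangular dispatch cannot yet express.

```julia
# 0.3-era syntax. Each block keeps its own concrete type parameter,
# while the inner constructor checks every block has element type S.
type BlockMatrix{S,TA<:AbstractMatrix,TB<:AbstractMatrix,
                 TC<:AbstractMatrix,TD<:AbstractMatrix} <: AbstractMatrix{S}
    A::TA
    B::TB
    C::TC
    D::TD
    BlockMatrix(A::AbstractMatrix{S}, B::AbstractMatrix{S},
                C::AbstractMatrix{S}, D::AbstractMatrix{S}) = new(A, B, C, D)
end

# Outer constructor infers the concrete block types, so e.g. a Diagonal
# block stays Diagonal in the type of the whole matrix.
BlockMatrix{S}(A::AbstractMatrix{S}, B::AbstractMatrix{S},
               C::AbstractMatrix{S}, D::AbstractMatrix{S}) =
    BlockMatrix{S,typeof(A),typeof(B),typeof(C),typeof(D)}(A, B, C, D)
```

With this, a block matrix built with a Diagonal fourth block carries Diagonal in its fourth type parameter, which is exactly what Gabriel's specialized det method needs to dispatch on.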
Re: [julia-users] Swapping two columns (or rows) of an array efficiently
Apparently it was 0.4 ... I tried your f2 on Julia v0.3.6 and it takes forever. f3 is however a blast! Here are my timings on Julia v0.3.6:

@time f1(X, 1, 5)
elapsed time: 2.210965858 seconds (1600177296 bytes allocated, 65.31% gc time)
@time f2(X, 1, 5)
elapsed time: 53.146697892 seconds (22368945936 bytes allocated, 41.76% gc time)
@time f3(X, 1, 5)
elapsed time: 0.142597211 seconds (80 bytes allocated)

I assume the function sub() in v0.4 is substantially different. Thanks, Jan On Friday, 13 March 2015 at 10:35:45 UTC+1, Ján Dolinský wrote: Hi Milan, Did you run your benchmarks on 0.4? Thanks, Jan On Thursday, 12 March 2015 at 19:19:08 UTC+1, Milan Bouchet-Valat wrote: On Thursday, 12 March 2015 at 11:01 -0500, Tim Holy wrote: This is something that many people (understandably) have a hard time appreciating, so I think this post should be framed and put up on the julia wall. We go to considerable lengths to try to make code work efficiently in the general case (check out subarray.jl and subarray2.jl in master some time...), but sometimes there's no competing with a hand-rolled version for a particular case. Folks should not be shy to implement such tricks in their own code. Though with the new array views in 0.4, the vectorized version should be more efficient than in 0.3.
Re: [julia-users] How to introduce scope inside macro
Wrapping it in a let block only seems to slow things down more.

@time let a=a, b=b, c=c
    @map(sqrt(a^2 + b^2) + c, a, b)
end
elapsed time: 4.951837524 seconds (1839984144 bytes allocated, 19.48% gc time)

On Friday, March 13, 2015 at 11:01:35 AM UTC+2, Toivo Henningsson wrote: To introduce a local scope, shouldn't it be enough to wrap the emitted code in a let block?
[julia-users] Re: question about module, using/import
Hi, thanks for answering. Please try this:

julia> Pkg.clone("https://github.com/DeVerMyst/MyModule.git")
INFO: Cloning MyModule from https://github.com/DeVerMyst/MyModule.git
INFO: Computing changes...

julia> exit()

(restart Julia; banner: Version 0.3.4 (2014-12-26 10:42 UTC), Official http://julialang.org/ release, x86_64-apple-darwin13.4.0)

julia> using MyModule

julia> mymainfct(2,2,true)
ERROR: imshow not defined
 in mymainfct at /Users/antonyschutz/.julia/v0.3/MyModule/src/mymain.jl:7

julia> using PyPlot
INFO: Loading help data...

julia> mymainfct(2,2,true)
ERROR: imshow not defined
 in mymainfct at /Users/antonyschutz/.julia/v0.3/MyModule/src/mymain.jl:7

On Thursday, 12 March 2015 at 16:07:22 UTC+1, antony schutz wrote: Hello, I'm trying to generalize an algorithm for alpha users.
Re: [julia-users] Parallel for-loops
Check out SharedArrays. --Tim On Thursday, March 12, 2015 04:56:12 PM Pieter Barendrecht wrote: I'm wondering how to save data/results in a parallel for-loop.
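As a concrete illustration of Tim's suggestion, here is a minimal 0.3-era sketch in which each worker writes only its own indices of a SharedArray, so there is no write contention. The sizes and the k^2 payload are made up for the example:

```julia
addprocs(3)

n = 30
out = SharedArray(Int, n)   # 0.3-era constructor; visible to all local workers

# @parallel splits the range across workers; each index k is written
# by exactly one worker, so the writes never race.
@sync @parallel for k in 1:n
    out[k] = k^2
end

println(sum(out))           # safe to read after @sync completes
```

This is the safe SharedArray pattern: shared reads are fine, and writes are fine as long as no two workers touch the same index.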
Re: [julia-users] How to introduce scope inside macro
To introduce a local scope, shouldn't it be enough to wrap the emitted code in a let block?
[julia-users] Time type
Hi, is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don't have a date associated. I understand that in the case of dates that don't have a time component I'd use Date (instead of DateTime), but I couldn't find anything the other way around, for times that don't have a date. Thanks, David
Re: [julia-users] Generic functions within Julia closures
Ok, but the closure (the first version) in the 2012 post was not working:

julia> typeof(Rrss)
Nothing

while now it does:

julia> typeof(gridfgen([1.; 3.], [.2; 4.]))
Function

or maybe I am missing something. On Friday, March 13, 2015 at 2:02:34 PM UTC, Stefan Karpinski wrote: Closures have always worked, they just have some performance issues still. On Fri, Mar 13, 2015 at 9:05 AM, Matteo Fasiolo matteo@gmail.com wrote: Hi All, This (2012) discussion https://gist.github.com/dmbates/3939427 seems to suggest that generic functions cannot be used within closures in Julia. But this seems to work now:

function creator(y)
    function power(x)
        return x .^ y
    end
    function myNorm(x)
        return sum(power(x))
    end
    return power, myNorm
end

julia> (a, b) = creator(2.)
(power,myNorm)
julia> a(2)
4.0
julia> b([1. 2.])
5.0

I use generic functions because they support keyword and optional arguments, and I want to do things such as

function a(y, extra...)
    function b(x, extra...; z = true)
        something
    end
end

which seems to work in terms of not mixing up the ... with the optional arguments. Notice that I am mainly an R user, currently experimenting with Julia, so my approach might be completely wrong. [1]: https://gist.github.com/dmbates/3939427
Re: [julia-users] How to introduce scope inside macro
Wrapping it in a function seems to work fine, although it will be slow if you don't declare c as const:

macro map_(expr, args...)
    quote
        @assert all([map(length, ($(args...),))...] .== length($(args[1])))
        out = Array(typeof($(indexify(expr, 1, args))), size($(args[1])))
        for i in 1:length(out)
            @inbounds out[i] = $(indexify(expr, :i, args))
        end
        out
    end
end

macro map(expr, args...)
    quote
        function dummy_fun($(args...))
            @map_($expr, $(args...))
        end
        dummy_fun($(args...))
    end
end
[julia-users] Re: how to paste png into ipython julia notebook?
Awesome, thanks for the quick response! On Wednesday, March 11, 2015 at 12:05:07 PM UTC-4, Edward Chen wrote:

from IPython.display import Image
Image(filename='image.png')

doesn't seem to work. Thanks! -Ed
Re: [julia-users] Time type
There isn't, mainly because there hasn't seemed to be much demand/use for it (plus the ever constrained dev effort required). I'm pretty sure I mocked something up at one point, but don't know if I have any code still lying around. Happy to help push this if there's enough interest. -Jacob On Fri, Mar 13, 2015 at 7:06 AM, David Anthoff anth...@berkeley.edu wrote: Hi, is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don't have a date associated. I understand that in the case of dates that don't have a time component I'd use Date (instead of DateTime), but I couldn't find anything the other way around, for times that don't have a date. Thanks, David
Re: [julia-users] Parallel for-loops
Thanks! I tried both approaches you suggested. Some results using SharedArrays (100,000 simulations):

#workers  time
1         ~120s
3         ~42s
6         ~40s

Short question: the first print statement after the for-loop is already executed before the for-loop ends. How do I prevent this from happening? Some results using the other approach (again 100,000 simulations):

#workers  time
1         ~118s
2         ~60s
3         ~42s
4         ~38s
6         ~40s
6         ~40s

Couple of questions. My equivalent of myfunc_pure() also requires a second argument. In addition, I don't make use of the startindex argument in the function. What's the common approach here? Next, there are actually multiple variables that should be returned, not just result. Overall, I'm a bit surprised that using more than 3 or 4 workers does not decrease the running time. Any ideas? I'm using Julia 0.3.6 on a 64bit Arch Linux system, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz. On Friday, March 13, 2015 at 8:37:19 AM UTC, René Donner wrote: Perhaps SharedArrays are what you need here?
Re: [julia-users] Parallel for-loops
On 13.03.2015 at 16:20, Pieter Barendrecht pjbarendre...@gmail.com wrote: Thanks! I tried both approaches you suggested. Some results using SharedArrays (100,000 simulations): #workers / time: 1 ~120s, 3 ~42s, 6 ~40s. Short question: the first print statement after the for-loop is already executed before the for-loop ends. How do I prevent this from happening? Some results using the other approach (again 100,000 simulations): #workers / time: 1 ~118s, 2 ~60s, 3 ~42s, 4 ~38s, 6 ~40s, 6 ~40s. Could you post a simplified code snippet? Either here or in a gist. It is difficult to know what exactly you are doing ;-) Couple of questions. My equivalent of myfunc_pure() also requires a second argument. Is that argument changing, or is it there to switch between different algorithms etc.? In addition, I don't make use of the startindex argument in the function. What's the common approach here? Next, there are actually multiple variables that should be returned, not just result. You can always return (a,b,c) instead of a, i.e. a tuple. The function you provide to reduce then has the following signature: myreducer(a::Tuple, b::Tuple). Combine the tuples, and again return a tuple. Overall, I'm a bit surprised that using more than 3 or 4 workers does not decrease the running time. Any ideas? I'm using Julia 0.3.6 on a 64bit Arch Linux system, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz. It can be any number of things: the memory bandwidth could be the limiting factor, or the computation is actually nicely sped up and a lot of what you see is communication overhead. In that case, work on chunks of data / batches of iterations, i.e. don't pmap over millions of things but only a couple dozen. Looking at the code might shed some light. On Friday, March 13, 2015 at 8:37:19 AM UTC, René Donner wrote: Perhaps SharedArrays are what you need here?
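René's tuple suggestion can be sketched concretely. The simulation body below is made up, and map stands in for pmap so the snippet runs without any workers; with addprocs the map becomes a pmap and nothing else changes:

```julia
# Each block returns several results bundled in a tuple.
function sim_block(startindex)
    hits = zeros(Int, 10)
    count = 0
    for i in startindex:startindex+19   # 20 iterations per block
        hits[mod(i, length(hits)) + 1] += 1
        count += 1
    end
    return (hits, count)
end

# The reducer combines two tuples field by field and returns a tuple again.
combine(a, b) = (a[1] + b[1], a[2] + b[2])

hits, total = reduce(combine, map(sim_block, 1:5))  # 5 blocks
# total == 100, and each of the 10 bins was hit 10 times
```

Returning a tuple per block and reducing field by field is how several output variables survive the map/reduce formulation without any shared state.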
[julia-users] Re: how to paste png into ipython julia notebook?
See also the discussion at: https://github.com/ipython/ipython/issues/4111
[julia-users] Re: Time type
On Friday, March 13, 2015 at 9:06:28 AM UTC-4, David Anthoff wrote: is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Couldn't you represent this as a time interval, i.e. Dates.Hour(12) + Dates.Minute(34) ?
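A common workaround in the absence of a dedicated Time type, sketched here (not from any of the packages discussed): anchor the wall-clock time to an arbitrary fixed date and only ever use the time components.

```julia
using Dates                            # the Dates package on 0.3; in Base from 0.4

# "12:34 pm" with a dummy date part that is never interpreted.
t = DateTime(1970, 1, 1, 12, 34)

# The components can be recovered for formatting or arithmetic.
h = Dates.hour(t)                      # 12
m = Dates.minute(t)                    # 34
```

This keeps comparison and sorting working for free, at the cost of carrying a meaningless date around.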
[julia-users] Re: how to paste png into ipython julia notebook?
On Friday, March 13, 2015 at 11:44:57 AM UTC-4, Randy Zwitch wrote: You can also use plain HTML in a markdown cell. Yes, you can use <img ...> tags, but then the image data is not included in the notebook file itself, so unless the image is on a server somewhere that makes it more annoying to share the notebook (as you have to remember to copy the image file along with the notebook).
Re: [julia-users] Indenting by 2 spaces in ESS[Julia]
Thank you! I'll try it out. --shiv-- On Thu, Mar 12, 2015 at 11:15 PM, Tamas Papp tkp...@gmail.com wrote:

(defun customize-julia ()
  ;; also put other Julia customizations here
  (interactive)
  (setq julia-basic-offset 2))

(add-hook 'julia-mode-hook 'customize-julia)

BTW, I am working on ESS Julia repl interaction, you might find this useful (will be merged eventually, I hope): https://github.com/tpapp/ESS-julia-extensions Best, Tamas On Fri, Mar 13 2015, Shivkumar Chandrasekaran wrote: I have tried all possible combinations of suggestions from the ESS manual to get the ESS[Julia] mode to convert indentation to 2 spaces rather than 4, with no luck. Has anybody else succeeded, and if so could you please post your magic sauce? Thanks. --shiv--
RE: [julia-users] Time type
Ok, great! I won’t code it, but will be looking forward to whatever one of you produces. In the meantime I’m returning strings. Can you open an issue for this, so that I can link to it from my package? I would open it, but I'm not sure in which repo it should be (because this stuff moved into the core, or not?). From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On Behalf Of Stefan Karpinski Sent: Friday, March 13, 2015 2:46 PM To: Julia Users Subject: Re: [julia-users] Time type I had need for this recently and would support having it. I can also take a crack at coding it up if Jacob or someone else doesn't beat me to it. On Fri, Mar 13, 2015 at 9:34 AM, David Anthoff anth...@berkeley.edu wrote: I’ve created a new Excel file reading package, and in Excel you can have cells that have a time, but no date part. I’m trying to figure out in what type I should return those values… I could certainly just format them as strings for now, I guess… I myself don’t have a use for it, I just want to make sure the package does something meaningful if it comes across a file with such a cell. From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On Behalf Of Jacob Quinn Sent: Friday, March 13, 2015 2:27 PM To: julia-users@googlegroups.com Subject: Re: [julia-users] Time type There isn't, mainly because there hasn't seemed to be much demand/use for it (plus the ever constrained dev effort required). I'm pretty sure I mocked something up at one point, but don't know if I have any code still lying around. Happy to help push this if there's enough interest. -Jacob On Fri, Mar 13, 2015 at 7:06 AM, David Anthoff anth...@berkeley.edu wrote: Hi, is there a Time datatype, analogous to the Date type?
I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Thanks, David
Re: [julia-users] Time type
Go for it Stefan; in the mean time, I'll try to get your TimeZones.jl running :) -Jacob On Fri, Mar 13, 2015 at 7:45 AM, Stefan Karpinski ste...@karpinski.org wrote: I had need for this recently and would support having it. I can also take a crack at coding it up if Jacob or someone else doesn't beat me to it. On Fri, Mar 13, 2015 at 9:34 AM, David Anthoff anth...@berkeley.edu wrote: I’ve created a new Excel file reading package, and in Excel you can have cells that have a time, but no date part. I’m trying to figure out in what type I should return those values… I could certainly just format them as strings for now, I guess… I myself don’t have a use for it, I just want to make sure the package does something meaningful if it comes across a file with such a cell. *From:* julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] *On Behalf Of *Jacob Quinn *Sent:* Friday, March 13, 2015 2:27 PM *To:* julia-users@googlegroups.com *Subject:* Re: [julia-users] Time type There isn't, mainly because there hasn't seem to be much demand/use for it (plus the ever constrained dev effort required). I'm pretty sure I mocked something up at one point, but don't know if I have any code still lying around. Happy to help push this if there's enough interest. -Jacob On Fri, Mar 13, 2015 at 7:06 AM, David Anthoff anth...@berkeley.edu wrote: Hi, is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Thanks, David
Re: [julia-users] Time type
Ok, but won't happen until tomorrow at the earliest. On Fri, Mar 13, 2015 at 9:56 AM, Jacob Quinn quinn.jac...@gmail.com wrote: Go for it Stefan; in the mean time, I'll try to get your TimeZones.jl running :) -Jacob On Fri, Mar 13, 2015 at 7:45 AM, Stefan Karpinski ste...@karpinski.org wrote: I had need for this recently and would support having it. I can also take a crack at coding it up if Jacob or someone else doesn't beat me to it. On Fri, Mar 13, 2015 at 9:34 AM, David Anthoff anth...@berkeley.edu wrote: I’ve created a new Excel file reading package, and in Excel you can have cells that have a time, but no date part. I’m trying to figure out in what type I should return those values… I could certainly just format them as strings for now, I guess… I myself don’t have a use for it, I just want to make sure the package does something meaningful if it comes across a file with such a cell. *From:* julia-users@googlegroups.com [mailto: julia-users@googlegroups.com] *On Behalf Of *Jacob Quinn *Sent:* Friday, March 13, 2015 2:27 PM *To:* julia-users@googlegroups.com *Subject:* Re: [julia-users] Time type There isn't, mainly because there hasn't seem to be much demand/use for it (plus the ever constrained dev effort required). I'm pretty sure I mocked something up at one point, but don't know if I have any code still lying around. Happy to help push this if there's enough interest. -Jacob On Fri, Mar 13, 2015 at 7:06 AM, David Anthoff anth...@berkeley.edu wrote: Hi, is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Thanks, David
Re: [julia-users] Swapping two columns (or rows) of an array efficiently
On Friday, March 13, 2015 at 03:14 -0700, Ján Dolinský wrote: Apparently it was 0.4 ... I tried your f2 on Julia v0.3.6 and it takes forever. f3 is however a blast! Here are my timings on Julia v0.3.6:

@time f1(X, 1, 5)
elapsed time: 2.210965858 seconds (1600177296 bytes allocated, 65.31% gc time)

@time f2(X, 1, 5)
elapsed time: 53.146697892 seconds (22368945936 bytes allocated, 41.76% gc time)

@time f3(X, 1, 5)
elapsed time: 0.142597211 seconds (80 bytes allocated)

I assume function sub() in v0.4 is substantially different. Yes, that relies on the new array views in 0.4. Now you'll have a reason to update when the release is out! Regards Thanks, Jan On Friday, March 13, 2015 at 10:35:45 UTC+1, Ján Dolinský wrote: Hi Milan, Did you run your benchmarks on 0.4? Thanks, Jan On Thursday, March 12, 2015 at 19:19:08 UTC+1, Milan Bouchet-Valat wrote: On Thursday, March 12, 2015 at 11:01 -0500, Tim Holy wrote: This is something that many people (understandably) have a hard time appreciating, so I think this post should be framed and put up on the julia wall. We go to considerable lengths to try to make code work efficiently in the general case (check out subarray.jl and subarray2.jl in master some time...), but sometimes there's no competing with a hand-rolled version for a particular case. Folks should not be shy to implement such tricks in their own code. Though with the new array views in 0.4, the vectorized version should be more efficient than in 0.3.
I've tried it, and indeed it looks like unrolling is not really needed, though it's still faster and uses less RAM:

X = rand(100_000, 5)

function f1(X, i, j)
    for _ in 1:1000
        X[:, i], X[:, j] = X[:, j], X[:, i]
    end
end

function f2(X, i, j)
    for _ in 1:1000
        a = sub(X, :, i)
        b = sub(X, :, j)
        a[:], b[:] = b, a
    end
end

function f3(X, i, j)
    for _ in 1:1000
        @inbounds for k in 1:size(X, 1)
            X[k, i], X[k, j] = X[k, j], X[k, i]
        end
    end
end

julia> f1(X, 1, 5); f2(X, 1, 5); f3(X, 1, 5);

julia> @time f1(X, 1, 5)
elapsed time: 1.027090951 seconds (1526 MB allocated, 3.63% gc time in 69 pauses with 0 full sweep)

julia> @time f2(X, 1, 5)
elapsed time: 0.172375013 seconds (390 kB allocated)

julia> @time f3(X, 1, 5)
elapsed time: 0.155069259 seconds (80 bytes allocated)

Regards --Tim On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote: As a general rule, with Julia one needs to unlearn the instinct (from Matlab or Python) that efficiency == clever use of library functions, which turns all optimization questions into "is there a built-in function for X?" (and if the answer is no you are out of luck). Loops are fast, and you can easily beat general-purpose library functions with your own special-purpose code.
Re: [julia-users] Generic functions within Julia closures
Closures have always worked, they just have some performance issues still. On Fri, Mar 13, 2015 at 9:05 AM, Matteo Fasiolo matteo.fasi...@gmail.com wrote: Hi All, This (2012) discussion https://gist.github.com/dmbates/3939427 seems to suggest that generic functions cannot be used within closures in Julia. But this seems to work now:

function creator(y)
    function power(x)
        return x .^ y
    end
    function myNorm(x)
        return sum( power(x) )
    end
    return power, myNorm
end

julia> (a, b) = creator(2.)
(power,myNorm)

julia> a(2)
4.0

julia> b([1. 2.])
5.0

I use generic functions because they support keyword optional arguments and I want to do things such as:

function a(y, extra...)
    function b(x, extra...; z = true)
        something
    end
end

which seems to work in terms of not mixing up the ... with the optional arguments. Notice that I am mainly an R user, currently experimenting with Julia, so my approach might be completely wrong. [1]: https://gist.github.com/dmbates/3939427
[julia-users] Generic functions within Julia closures
Hi All, This (2012) discussion https://gist.github.com/dmbates/3939427 seems to suggest that generic functions cannot be used within closures in Julia. But this seems to work now:

function creator(y)
    function power(x)
        return x .^ y
    end
    function myNorm(x)
        return sum( power(x) )
    end
    return power, myNorm
end

julia> (a, b) = creator(2.)
(power,myNorm)

julia> a(2)
4.0

julia> b([1. 2.])
5.0

I use generic functions because they support keyword optional arguments and I want to do things such as:

function a(y, extra...)
    function b(x, extra...; z = true)
        something
    end
end

which seems to work in terms of not mixing up the ... with the optional arguments. Notice that I am mainly an R user, currently experimenting with Julia, so my approach might be completely wrong. [1]: https://gist.github.com/dmbates/3939427
Re: [julia-users] Time type
I had need for this recently and would support having it. I can also take a crack at coding it up if Jacob or someone else doesn't beat me to it. On Fri, Mar 13, 2015 at 9:34 AM, David Anthoff anth...@berkeley.edu wrote: I’ve created a new Excel file reading package, and in Excel you can have cells that have a time, but no date part. I’m trying to figure out in what type I should return those values… I could certainly just format them as strings for now, I guess… I myself don’t have a use for it, I just want to make sure the package does something meaningful if it comes across a file with such a cell. *From:* julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] *On Behalf Of *Jacob Quinn *Sent:* Friday, March 13, 2015 2:27 PM *To:* julia-users@googlegroups.com *Subject:* Re: [julia-users] Time type There isn't, mainly because there hasn't seem to be much demand/use for it (plus the ever constrained dev effort required). I'm pretty sure I mocked something up at one point, but don't know if I have any code still lying around. Happy to help push this if there's enough interest. -Jacob On Fri, Mar 13, 2015 at 7:06 AM, David Anthoff anth...@berkeley.edu wrote: Hi, is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Thanks, David
[julia-users] Re: question about module, using/import
You can check out the Requires.jl https://github.com/one-more-minute/Requires.jl package which has a very nice shorthand syntax for this sort of thing:

@require PyPlot begin
    # This block is only executed when PyPlot is loaded.
    # Inside this block you can either refer to PyPlot objects with their
    # fully qualified names (PyPlot.imshow), or simply call: using PyPlot
    # ... PyPlot glue code here
end

Matt On Friday, March 13, 2015 at 4:31:00 AM UTC-4, antony schutz wrote: Hi, thanks for your answer; please try this:

julia> Pkg.clone("https://github.com/DeVerMyst/MyModule.git")
INFO: Cloning MyModule from https://github.com/DeVerMyst/MyModule.git
INFO: Computing changes...

julia> exit()

(restarting Julia; Version 0.3.4 (2014-12-26 10:42 UTC), x86_64-apple-darwin13.4.0)

julia> using MyModule

julia> mymainfct(2,2,true)
ERROR: imshow not defined
 in mymainfct at /Users/antonyschutz/.julia/v0.3/MyModule/src/mymain.jl:7

julia> using PyPlot
INFO: Loading help data...

julia> mymainfct(2,2,true)
ERROR: imshow not defined
 in mymainfct at /Users/antonyschutz/.julia/v0.3/MyModule/src/mymain.jl:7

On Thursday, March 12, 2015 at 16:07:22 UTC+1, antony schutz wrote: Hello, I'm trying to generalize an algorithm for alpha users. The algorithm can draw plots but I don't want this to be mandatory, so in the module I don't import the library (for example, I don't call using PyPlot). I want the plot drawing to be an option that has to be enabled by the user.
Unfortunately, when I call using PyPlot and I am not working in the folder containing the module, the package is not recognized by the algorithm:

module mymodule
using needed_library
export needed_function
include(needed_files)
end

julia> using mymodule

julia> using PyPlot
INFO: Loading help data...
ERROR: figure not defined

I tried to define 2 modules with different names, but I can't load the second module (mymodulePyPlot) because the module is inside the folder mymodule and not in the folder mymodulePyPlot. Does somebody know a solution to this problem? Thanks in advance
Re: [julia-users] Swapping two columns (or rows) of an array efficiently
Indeed I do :). When is stable release 0.4 coming out? Regards, Jan On Friday, March 13, 2015 at 14:45:42 UTC+1, Milan Bouchet-Valat wrote: On Friday, March 13, 2015 at 03:14 -0700, Ján Dolinský wrote: Apparently it was 0.4 ... I tried your f2 on Julia v0.3.6 and it takes forever. f3 is however a blast! Here are my timings on Julia v0.3.6:

@time f1(X, 1, 5)
elapsed time: 2.210965858 seconds (1600177296 bytes allocated, 65.31% gc time)

@time f2(X, 1, 5)
elapsed time: 53.146697892 seconds (22368945936 bytes allocated, 41.76% gc time)

@time f3(X, 1, 5)
elapsed time: 0.142597211 seconds (80 bytes allocated)

I assume function sub() in v0.4 is substantially different. Yes, that relies on the new array views in 0.4. Now you'll have a reason to update when the release is out! Regards Thanks, Jan On Friday, March 13, 2015 at 10:35:45 UTC+1, Ján Dolinský wrote: Hi Milan, Did you run your benchmarks on 0.4? Thanks, Jan On Thursday, March 12, 2015 at 19:19:08 UTC+1, Milan Bouchet-Valat wrote: On Thursday, March 12, 2015 at 11:01 -0500, Tim Holy wrote: This is something that many people (understandably) have a hard time appreciating, so I think this post should be framed and put up on the julia wall. We go to considerable lengths to try to make code work efficiently in the general case (check out subarray.jl and subarray2.jl in master some time...), but sometimes there's no competing with a hand-rolled version for a particular case. Folks should not be shy to implement such tricks in their own code. Though with the new array views in 0.4, the vectorized version should be more efficient than in 0.3.
I've tried it, and indeed it looks like unrolling is not really needed, though it's still faster and uses less RAM:

X = rand(100_000, 5)

function f1(X, i, j)
    for _ in 1:1000
        X[:, i], X[:, j] = X[:, j], X[:, i]
    end
end

function f2(X, i, j)
    for _ in 1:1000
        a = sub(X, :, i)
        b = sub(X, :, j)
        a[:], b[:] = b, a
    end
end

function f3(X, i, j)
    for _ in 1:1000
        @inbounds for k in 1:size(X, 1)
            X[k, i], X[k, j] = X[k, j], X[k, i]
        end
    end
end

julia> f1(X, 1, 5); f2(X, 1, 5); f3(X, 1, 5);

julia> @time f1(X, 1, 5)
elapsed time: 1.027090951 seconds (1526 MB allocated, 3.63% gc time in 69 pauses with 0 full sweep)

julia> @time f2(X, 1, 5)
elapsed time: 0.172375013 seconds (390 kB allocated)

julia> @time f3(X, 1, 5)
elapsed time: 0.155069259 seconds (80 bytes allocated)

Regards --Tim On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote: As a general rule, with Julia one needs to unlearn the instinct (from Matlab or Python) that efficiency == clever use of library functions, which turns all optimization questions into "is there a built-in function for X?" (and if the answer is no you are out of luck). Loops are fast, and you can easily beat general-purpose library functions with your own special-purpose code.
[julia-users] isnull woes
So JavaCall has always had an isnull(::JavaObject) method, which is exported. Now that Base has Nullable in v0.4, it also has an isnull(::Nullable) method exported. So far so good. But I want to use Compat for JavaCall, primarily for the Int/int change. Now, Compat also has an isnull(::Nullable) method exported, but that is in the module Compat! So I can't figure out how to write the definition of isnull in JavaCall that'll work in all versions of Julia. Any help appreciated. Regards - Avik PS Maybe this is another reason why new functionality should NOT be backported, even in Compat?
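One pattern that usually resolves the Base side of this clash is to extend Base's generic function when it exists, and define a plain function otherwise. A sketch (the `ptr` field is an assumption about JavaCall's internals, not taken from its source; and if Compat defines its own isnull rather than extending Base's, a similarly guarded `import Compat: isnull` may also be needed):

```julia
# In JavaCall's top-level module code:
if isdefined(Base, :isnull)
    import Base: isnull        # on v0.4+, add a method to Base.isnull
end

# Either way, this defines/extends isnull for JavaObject.
isnull(obj::JavaObject) = obj.ptr == C_NULL

export isnull                  # exporting an imported Base function is allowed
```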
Re: [julia-users] Re: how to paste png into ipython julia notebook?
the src url of the <img ...> tag can itself be a base64 encoded image: http://en.wikipedia.org/wiki/Data_URI_scheme#HTML On Fri, Mar 13, 2015 at 11:50 AM Steven G. Johnson stevenj@gmail.com wrote: On Friday, March 13, 2015 at 11:44:57 AM UTC-4, Randy Zwitch wrote: You can also use plain HTML in a markdown cell. Yes, you can use <img ...> tags, but then the image data is not included in the notebook file itself, so unless the image is on a server somewhere that makes it more annoying to share the notebook (as you have to remember to copy the image file along with the notebook).
[julia-users] Re: how to paste png into ipython julia notebook?
True. I share my notebooks as GitHub repos, so I've always passed the images along. On Friday, March 13, 2015 at 11:50:32 AM UTC-4, Steven G. Johnson wrote: On Friday, March 13, 2015 at 11:44:57 AM UTC-4, Randy Zwitch wrote: You can also use plain HTML in a markdown cell. Yes, you can use <img ...> tags, but then the image data is not included in the notebook file itself, so unless the image is on a server somewhere that makes it more annoying to share the notebook (as you have to remember to copy the image file along with the notebook).
Re: [julia-users] Re: how to paste png into ipython julia notebook?
You can also base64 encode an image and paste the resulting string into a HTML img tag. Works well for smaller images, but might get unwieldy for larger ones.
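A sketch of building such a data URI in Julia (the filename is illustrative; the encoder was called `base64` in the 0.3 era and `base64encode` in later versions):

```julia
# Read the raw PNG bytes and embed them directly in an <img> tag.
png  = open(readbytes, "figure.png")   # readbytes is the 0.3-era API
html = "<img src=\"data:image/png;base64,$(base64(png))\" />"
```

Pasting the resulting HTML into a markdown cell keeps the image data inside the notebook file itself, which is the advantage over a plain src URL.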
Re: [julia-users] Re: Time type
Yes, but the representation is quite inefficient. This would be an efficient scalar type. On Fri, Mar 13, 2015 at 11:44 AM, Steven G. Johnson stevenj@gmail.com wrote: On Friday, March 13, 2015 at 9:06:28 AM UTC-4, David Anthoff wrote: is there a Time datatype, analogous to the Date type? I ran into a situation where I need to represent times (like 12:34 pm) that don’t have a date associated. I understand that in the case of dates that don’t have a time component I’d use Date (instead of DateTime), but I couldn’t find anything the other way around, for times that don’t have a date. Couldn't you represent this as a time interval, i.e. Dates.Hour(12) + Dates.Minute(34) ?
Re: [julia-users] Generic functions within Julia closures
At the time, method declarations evaluated to nothing. Because of this, we changed it so method declarations evaluate to the generic function that the method is being added to. On Fri, Mar 13, 2015 at 10:31 AM, Matteo Fasiolo matteo.fasi...@gmail.com wrote: Ok, but the closure (the first version) in the 2012 post was not working:

julia> typeof(Rrss)
Nothing

while now it does:

julia> typeof(gridfgen([1.; 3.], [.2; 4.]))
Function

or maybe I am missing something. On Friday, March 13, 2015 at 2:02:34 PM UTC, Stefan Karpinski wrote: Closures have always worked, they just have some performance issues still. On Fri, Mar 13, 2015 at 9:05 AM, Matteo Fasiolo matteo@gmail.com wrote: Hi All, This (2012) discussion https://gist.github.com/dmbates/3939427 seems to suggest that generic functions cannot be used within closures in Julia. But this seems to work now:

function creator(y)
    function power(x)
        return x .^ y
    end
    function myNorm(x)
        return sum( power(x) )
    end
    return power, myNorm
end

julia> (a, b) = creator(2.)
(power,myNorm)

julia> a(2)
4.0

julia> b([1. 2.])
5.0

I use generic functions because they support keyword optional arguments and I want to do things such as:

function a(y, extra...)
    function b(x, extra...; z = true)
        something
    end
end

which seems to work in terms of not mixing up the ... with the optional arguments. Notice that I am mainly an R user, currently experimenting with Julia, so my approach might be completely wrong. [1]: https://gist.github.com/dmbates/3939427
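The change described here is easy to see at the REPL: a method definition now evaluates to the generic function it creates or extends, rather than to nothing, which is what makes closures like the one in the 2012 gist return something useful:

```julia
julia> f(x) = x + 1          # the definition itself is a value now
f (generic function with 1 method)

julia> function outer()
           inner(x) = 2x     # evaluates to inner, so outer returns it
       end
outer (generic function with 1 method)

julia> outer()(21)
42
```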
Re: [julia-users] Parallel for-loops
On Friday, March 13, 2015 at 10:20:10 AM UTC-5, Pieter Barendrecht wrote: Overall, I'm a bit surprised that using more than 3 or 4 workers does not decrease the running time. Any ideas? I'm using Julia 0.3.6 on a 64bit Arch Linux system, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz. At four workers, you now have a process occupying every physical core (assuming the scheduler is doing what we want), plus your main coordinating process, which is also occupying one of those four cores but presumably not doing any simultaneous computation. Many workloads do not see significant acceleration from hyperthreading; if this is such a workload, adding more workers won't give you any more speedup, and as René mentions, overhead can start to dominate. Patrick