Re: [julia-users] Copy a BigFloat?
I suppose it's not too bad. my_nextfloat(x) = with_bigfloat_precision(() -> nextfloat(x), x.prec)
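To make the one-liner above concrete, a minimal sketch (Julia 0.3-era API; `with_bigfloat_precision` was later renamed, and `my_nextfloat` is the poster's own helper, not a Base function). The point is that the computation runs at the argument's own precision rather than the global default:

```julia
# Sketch (Julia 0.3-era API): compute nextfloat(x) at x's own precision,
# regardless of the current global BigFloat precision.
my_nextfloat(x::BigFloat) = with_bigfloat_precision(() -> nextfloat(x), x.prec)

x = with_bigfloat_precision(() -> big(1.0), 512)  # a 512-bit BigFloat
y = my_nextfloat(x)                               # result keeps 512 bits
```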
Re: [julia-users] constants environment
Works! I placed it in a freshly made juliarc.jl. Thanks!

On Sunday, September 14, 2014 10:35:58 AM UTC+10, Patrick O'Leary wrote:

On Saturday, September 13, 2014 11:06:25 AM UTC-5, Isaiah wrote: Macros are useful for this sort of thing.

I'm on an isexpr() evangelism mission. Also going to use if statements for additional exposition.

```Julia
using Base.Meta

macro allconst(args...)
    exp = Expr(:block)
    if isexpr(args[1], :block)
        for a in args[1].args
            if isexpr(a, :(=))
                push!(exp.args, Expr(:const, Base.esc(a)))
            end
        end
    end
    exp
end
```

```Julia
macro allconst(args...)
    exp = Expr(:block)
    args[1].head == :block || return exp
    for a in args[1].args
        a.head == :(=) && push!(exp.args, Expr(:const, Base.esc(a)))
    end
    exp
end

@allconst begin
    a = 1
    b = 2
end
```

(see the Metaprogramming section in the manual for more information)

On Sat, Sep 13, 2014 at 8:37 AM, Yakir Gagnon 12.y...@gmail.com wrote: I often define a bunch of constants at the beginning of a program. Wouldn't it be nice if we could start an environment of constants to avoid writing `const` before every one of the rows at the beginning of a program? Kind of like:

```Julia
begin const
    a = 1
    b = 2
    c = neverChange
end
```

instead of:

```Julia
const a = 1
const b = 2
const c = neverChange
```
[julia-users] Why does Pkg.add(NLopt) use sudo?
I was surprised to see sudo apt-get install libnlopt0 on installing NLopt. I'd much prefer a sandboxed install into the .julia directory. Is there a reason? Are there many other packages that require sudo for install? Thanks.
[julia-users] Re: Why does this function incur so much garbage collection (and how can I make faster) ?
Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab, and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia.

```Julia
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    deltas.d[:] = 0.
    rg = size(w,2)*size(d,2)
    for ti=1:size(w,3), ti2=1:size(d,3)
        Base.LinAlg.BLAS.axpy!(1, w[:,:,ti]'*d[:,:,ti2], range(1,rg),
                               deltas.d[:,:,ti+ti2-1], range(1,rg))
    end
    deltas.d
end
```

On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral

On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags. This function gets called many, many times, and according to profiling there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex!, and + in array.jl:

```Julia
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    deltas.d[:] = 0.
    for ti=1:size(w,3), ti2=1:size(d,3)
        deltas.d[:,:,ti+ti2-1] += w[:,:,ti]'*d[:,:,ti2]
    end
    deltas.d
end
```

Any advice would be much appreciated!
Best, Michael
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn't you need a gemm? Another way to avoid creating temporary arrays with indexing is to use SubArrays, which the linear algebra routines can work with. -viral

On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time. You could also potentially hoist the deltas.d out of the loop. Try something like:

```Julia
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    deltas.d[:] = 0.
    dd = deltas.d
    for ti=1:size(w,3), ti2=1:size(d,3)
        for i=1:size(w,1)
            for j=1:size(w,2)
                dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2]
            end
        end
    end
    deltas.d
end
```

-viral

On 14-Sep-2014, at 12:47 pm, Michael Oliver michael.d.oli...@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab, and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia.

```Julia
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    deltas.d[:] = 0.
    rg = size(w,2)*size(d,2)
    for ti=1:size(w,3), ti2=1:size(d,3)
        Base.LinAlg.BLAS.axpy!(1, w[:,:,ti]'*d[:,:,ti2], range(1,rg),
                               deltas.d[:,:,ti+ti2-1], range(1,rg))
    end
    deltas.d
end
```

On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral

On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags. This function gets called many, many times, and according to profiling there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex!, and + in array.jl:

```Julia
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    deltas.d[:] = 0.
    for ti=1:size(w,3), ti2=1:size(d,3)
        deltas.d[:,:,ti+ti2-1] += w[:,:,ti]'*d[:,:,ti2]
    end
    deltas.d
end
```

Any advice would be much appreciated! Best, Michael
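Picking up Viral's gemm question above, a hedged sketch of a third approach (0.3-era BLAS API; `errprop_gemm!` and the scratch buffer are illustrative, not from the thread): compute each slice product with `gemm!` into a preallocated buffer, so `+=` never allocates a result matrix. The `w[:,:,ti]` slices still copy; on 0.4 those could become views.

```julia
# Illustrative sketch: accumulate slice products via BLAS gemm! into a
# reused scratch buffer instead of allocating with `+=`.
function errprop_gemm!(w::Array{Float32,3}, d::Array{Float32,3}, deltas)
    dd = deltas.d
    fill!(dd, 0.0f0)
    tmp = Array(Float32, size(w,2), size(d,2))  # reused scratch buffer
    for ti = 1:size(w,3), ti2 = 1:size(d,3)
        # tmp = w[:,:,ti]' * d[:,:,ti2], written by BLAS directly into tmp
        Base.LinAlg.BLAS.gemm!('T', 'N', 1.0f0, w[:,:,ti], d[:,:,ti2], 0.0f0, tmp)
        k = ti + ti2 - 1
        for j = 1:size(tmp,2), i = 1:size(tmp,1)
            dd[i,j,k] += tmp[i,j]  # cheap in-place accumulation, no temporaries
        end
    end
    dd
end
```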
Re: [julia-users] dispatch based on expression head
Avoiding allocation during parsing could really be helpful. It would be possible to make the number of subexpressions a type parameter.

On Sun, Sep 14, 2014 at 12:52 AM, Jake Bolewski jakebolew...@gmail.com wrote: Performance would come from reducing the allocation of unnecessary Any[] arrays for a handful of expression nodes, which has some overhead. Serialization / deserialization of the AST could potentially be faster under a more compact representation, as we know that hitting the gc is a performance bottleneck when doing large amounts of (de)serialization. These two factors have some impact on startup time as well. One version of a function could handle all parametric expression types; it would just have to do runtime lookup similar to what you would have to do now, so `isa(arg, Expr) && (arg::Expr).head === :headB` would then just become `isa(arg, Expr{:headB})`. I wouldn't suspect this optimization to have a large impact on performance, but it could make things some % faster (you'd hope).

On Saturday, September 13, 2014 6:06:02 PM UTC-4, Tim Holy wrote: I wonder whether it would really have much of a performance advantage. Usually expressions are nested, and so an Expr{:headA} contains among its args an Expr{:headB}. But you don't know until runtime what type it is, so runtime lookup would have to be faster than `isa(arg, Expr) && (arg::Expr).head == :headB`. I don't actually know, but I would be a bit surprised if it would help that much. Before someone thinks about tackling this, it would make sense to mock it up; make a MyExpr{:headsym}, nest a bunch of them based on real expressions, and then see if you get any real benefit from having the head as a type parameter. The other negative is it would add to compilation time---currently one version of a function handles all expressions, but with this change you'd have to compile a version for each parametric expression type.
That means yet slower startup of any packages that do expression-parsing, and startup speed is already a pretty big problem. --Tim On Saturday, September 13, 2014 01:17:00 PM Kevin Squire wrote: While this would greatly affect Match.jl, it would be a very welcome change! Cheers, Kevin On Saturday, September 13, 2014, Leah Hanson astri...@gmail.com wrote: I would expect the Expr type to be abstract, with different concrete subtypes for each current value of head. Each value of head indicates a specific structure in args, and this can just be reflected in the definition of the subtypes. (Then you can dispatch on Expr type, use subtypes(Expr) to see all possible kinds of Expr, etc.) -- Leah On Sat, Sep 13, 2014 at 10:47 AM, Jake Bolewski jakebo...@gmail.com wrote: We've actually discussed changing our expression representation to use types instead of the more lisp-like symbols for distinguishing expression types. That would allow dispatch on expression types and be more compact. It would, however, break almost all macros that do any kind of expression inspection. Hmm, interesting. I guess the Expr type would then be Expr{:head} with getindex / setindex overloaded to manipulate the arguments? This would be a nice change as for many nodes you would not have to allocate an args array which could be a performance win (i guess the serialized ast's would be more compact as well). Can't comment on whether it would be enough of a win to justify such a massively breaking change. On Sat, Sep 13, 2014 at 2:48 AM, Gray Calhoun gcal...@iastate.edu wrote: On Wednesday, September 10, 2014 11:50:44 AM UTC-5, Steven G. Johnson wrote: On Wednesday, September 10, 2014 12:20:59 PM UTC-4, Gray Calhoun wrote: Are there better ways to do this in general? For this kind of expression-matching code, you may find the Match.jl package handy (https://github.com/kmsquire/Match.jl), to get ML- or Scala-like symbolic pattern-matching.
Thanks, that's pretty cool. For simple cases like the ones I'm using, do you know if there are advantages (or disadvantages) to using Match.jl, or should I just view it as nicer syntax? (Obviously, when things get more complicated Match.jl looks very appealing.)
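Tim's suggestion to mock up the head-as-type-parameter idea can be sketched in a few lines (0.3-era syntax; `MyExpr` and `kind` are hypothetical names for illustration, not a proposed Base API):

```julia
# Hypothetical mock-up of an Expr type whose head is a type parameter.
immutable MyExpr{head}
    args::Vector{Any}
end
MyExpr(head::Symbol, args...) = MyExpr{head}(Any[args...])

# Multiple dispatch replaces the runtime `isa(arg, Expr) && arg.head == :call` check:
kind(ex::MyExpr{:call})  = "call expression"
kind(ex::MyExpr{:block}) = "block expression"
kind{h}(ex::MyExpr{h})   = "expression with head $h"  # fallback for other heads
```

Nesting a bunch of these based on real expressions and timing dispatch against the `head ==` check would give the benchmark Tim asks for.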
[julia-users] Re: Why does Pkg.add(NLopt) use sudo?
It depends on whether or not the package's deps/build.jl script has any package manager providers listed for the relevant library. If you manually install libnlopt0 outside of Julia, then I believe BinDeps will see that and not ask for sudo during installation. If you'd rather compile nlopt from source, you should be able to edit ~/.julia/v0.3/NLopt/deps/build.jl and comment out the provides(AptGet, "libnlopt0", libnlopt) line. I'm not sure whether BinDeps is set up to fall back to building from source if you were to abort the apt-get installation.

On Saturday, September 13, 2014 11:32:47 PM UTC-7, Don MacMillen wrote: I was surprised to see sudo apt-get install libnlopt0 on installing NLopt. I'd much prefer a sandboxed install into the .julia directory. Is there a reason? Are there many other packages that require sudo for install? Thanks.
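For reference, a hedged sketch of what the relevant part of a BinDeps build.jl looks like (the URL and target names here are illustrative; check the actual ~/.julia/v0.3/NLopt/deps/build.jl). A dependency whose only providers are Sources/BuildProcess never invokes sudo; it is the AptGet provider line that triggers the apt-get prompt:

```julia
# Illustrative BinDeps build.jl fragment (names and URL are assumptions):
using BinDeps
@BinDeps.setup

libnlopt = library_dependency("libnlopt")

provides(Sources, URI("http://ab-initio.mit.edu/nlopt/nlopt-2.4.tar.gz"), libnlopt)
provides(BuildProcess, Autotools(libtarget = "libnlopt.la"), libnlopt)
# provides(AptGet, "libnlopt0", libnlopt)  # commenting this out avoids sudo apt-get

@BinDeps.install [:libnlopt => :libnlopt]
```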
[julia-users] type change while filling array with comprehension
I am a biologist with no knowledge in programming. I recently started to write small bits of code to analyze gene co-expression networks. By the way, congratulations to those who developed Julia and the many available packages. It's really great and fairly easy to make simple things in a matter of days with the intuitive Julia syntax. I am not familiar with type management in programming languages, but some behaviors are puzzling for a tenderfoot like myself. Here is a piece of stupid code just to illustrate my question (I'm using JuliaStudio):

```Julia
julia> A = [1,2]
2-element Array{Int64,1}:
 1
 2

julia> typeof([A[1]])
Array{Int64,1}

julia> typeof([A[i] for i=1])
Array{Any,1}
```

Of course int() is doing the job.

```Julia
julia> typeof(int([A[i] for i=1]))
Array{Int64,1}
```

Yet, I find this behavior strange, and using int(), float()... makes the code unnecessarily heavy. I looked through the posts on julia-users but could not find information about how to avoid type change during comprehension. Any advice?
[julia-users] Re: Julia strange behaviour with GC
Interesting. Then I'll restate things and try to answer your comments correctly. Sorry, that will be a long post again :/ but I really need that to show that the CLT partly fails in this case.

*1- Restating things: running the function a single time*

I know this is not what you are interested in. But I hope I'll be clearer there. Some functions can be used rarely but can take a substantial time in the whole calculation. From what I see, people just repeat the calculation *n* times, take the arithmetic mean and choose whichever gives the smallest number.

*Question:* which function of *f1* and *f2* is the fastest for a single calculation given that set of inputs on *that* computer?

*Method 1:* put @time in front of each of them and get timing results.

*Problem 1:* this is not the answer to the question above:
- Instead of trying to answer: *which function would always be faster on my computer with that set of inputs?*
- he answered: *which function was faster that time?*

*Problem 2:* Chances are high that *always* is too strong. *Generally* could be used instead.

*Method 2:* if you perform an operation n times, it will take about nμ seconds. If you want your code to run as fast as possible, that's probably what matters. So, if I repeat the measurement *n* times and divide the resulting execution time by *n*, I should get a good estimate of the expected execution time.

*Problem:* *generally* is a word so vague that it is not enough! What does it mean? Pick your poison:
- Is *f1* faster than *f2* in more than *X*% of the cases? (hint: the arithmetic mean tells us *nothing* about that)
- What is the most probable execution time? (hint: this is usually *not* the arithmetic mean, e.g. the geometric mean for the lognormal distribution)
- What's the range including *X*% of the execution times for *f1*/*f2*? (hint: its center is usually *not* the arithmetic mean)
- etc.

So not everyone wants the same thing.
Plus, if you calculate the arithmetic mean of some repetitions, you take into account effects that may slow down (GC, OS over time, etc.) or speed up (cache, buffers, etc.) the calculation and give a perfectly wrong estimate. The second thing is that the arithmetic mean is almost always not what you want, and is never enough on its own (at least provide the variance if you are sure the distribution is Gaussian). And as John emphasized, the mean or any other location estimator tells us nothing about the distribution spread.

*Method 3:* plot histograms/PDFs and let other people pick their own poison.

*Problem:* not a single number, but a complete distribution.

*Advantages:* everything is included and distribution metrics can be calculated from that; the spread is known; normality can actually be checked; if you observe so, you can tell if some function is *always* slower than another one, etc.

*2- How the CLT may sometimes partly fail*

Ok, let's state it now: the CLT is working quite well, it's just that some of our assumptions about it aren't. :) "In the world of technical computing, we are mostly interested in running the same computationally intensive functions over and over, and we usually only care about how long that will take" comes down to:

*Question:* which function of *f1* and *f2* is the best suited to put inside of a tight loop?

You appropriately and correctly invoked the CLT: the central limit theorem guarantees that, given a function that takes a mean time μ to run, the amount of time it takes to run that function n times is approximately normally distributed with mean nμ for sufficiently large n, regardless of the distribution of the individual function evaluation times, as long as it has finite variance. But let's split that a little bit and check it :)

*2.1- The central limit theorem*

The CLT is all about sums. If you draw *n* samples from the distribution *D* and perform a sum of them, you will get one number *N1*.
If you draw *n* new samples from the same distribution *D* and perform a sum of them, you will get a new number *N2*, etc. Let's say you performed that operation *m* times. Then you got *m* sums of *n* elements from distribution *D*: *N1...Nm*. The central limit theorem states that the distribution of the numbers *N1...Nm* should converge to a normal distribution as *n* goes to infinity. It can be justified by convolution: the distribution of the *N1...Nm* corresponds to the self-convolution of the initial distribution *D*, *n* times. And it so happens (and was proven) that any distribution with finite variance converges to a Gaussian as *n* tends to infinity. Even more interesting, if you divide these sums *N1...Nm* by *n*, you get a mean whose variance is *always* smaller than the variance of the initial distribution *D*. These two behaviors are the roots of the CLT.

*2.2- Wash it, repeat, rinse, n times*

You are proposing to calculate *N1* only, *not* *N1...Nm*, *not* the
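The arithmetic-vs-geometric-mean point above is easy to see in a small simulation (illustrative only; Distributions.jl is assumed to be installed, and the "timings" are synthetic lognormal draws, not real measurements):

```julia
# Synthetic lognormal "timings": the arithmetic mean is pulled up by the
# long right tail, while the geometric mean tracks the typical (median) value.
using Distributions
srand(1)
t = rand(LogNormal(0.0, 1.0), 10^5)

arith = mean(t)             # inflated by rare slow runs
geo   = exp(mean(log(t)))   # geometric mean = exp of the mean log-time
println(arith, "  ", geo, "  ", median(t))
```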
Re: [julia-users] type change while filling array with comprehension
On Sunday, September 14, 2014 at 06:25 -0700, Laurent Journot wrote: I am a biologist with no knowledge in programming. I recently started to write small bits of code to analyze gene co-expression networks. By the way, congratulations to those who developed Julia and the many available packages. It's really great and fairly easy to make simple things in a matter of days with the intuitive Julia syntax. I am not familiar with type management in programming languages, but some behaviors are puzzling for a tenderfoot like myself. Here is a piece of stupid code just to illustrate my question (I'm using JuliaStudio): julia> A=[1,2] 2-element Array{Int64,1}: 1 2 julia> typeof([A[1]]) Array{Int64,1} julia> typeof([A[i] for i=1]) Array{Any,1} Of course int() is doing the job. julia> typeof(int([A[i] for i=1])) Array{Int64,1} Yet, I find this behavior strange, and using int(), float()... makes the code unnecessarily heavy. I looked through the posts on julia-users but could not find information about how to avoid type change during comprehension. Any advice?

This issue usually happens when using comprehensions at the top-level, as opposed to putting them in functions. AFAIK, the problem is that Julia is not (yet?) able to detect that the code in the comprehension does not change the type of A. Type inference may get better in the future. You can declare A as being constant to work around this:

```Julia
julia> const A = [1,2]
2-element Array{Int64,1}:
 1
 2

julia> typeof([A[i] for i=1])
Array{Int64,1}
```

Also, instead of int([A[i] for i=1]), you can write Int[A[i] for i=1], to avoid converting the array after it has been created (and it's also slightly less typing). Regards
Re: [julia-users] Help needed with creating Julia package
On 14 September 2014 06:14, Tony Kelman t...@kelman.net wrote: We tend not to use tags. But there's no problem introducing a named tag to pin things. Any reason why not? Your code is on github but your releases aren't; it would be great if it was possible to track exactly which commits correspond to exactly which numbered release version. We have been using branches, not tags, for that. We like to only push fixes to release branches, whilst work continues in trunk, and make those minor point releases, 2.4.0, 2.4.1, 2.4.2, etc. I suppose we could tag all the releases in their branches too. The way it currently works is the user will specify --prefix. This is where libflint will be installed (in a subdirectory called lib). Obviously a default is chosen otherwise. Flint is at first built inside its own source tree. Flint knows where to find the file relative to this source tree. But when make install is issued, it will be moved across to --prefix. As flint is compiled, --prefix is used to generate an absolute path where the text file will be stored, and this is baked into the library, which then also looks in this absolute path. Of course the absolute path is computed at compile time to be some path relative to --prefix. It is passed to the relevant parts of flint via a -D define. None of this is going to work if you want the library to be usable on a different machine than it gets compiled on. That's a pretty big restriction to put on your users. A bit like using Gentoo - some people think it's fun out of masochism or something, but not many. Most users would rather save time and use binaries wherever possible (assuming the results are the same and you don't introduce any compatibility problems - if you use an automated build service and a package manager these are easy). We'd like to fix the problem. We just don't know how. At some point, the library has to read from a large text file. There must be a relocatable way of doing that.
Of course you can pass an explicit directory to flint's configure to tell it where to put this text file. Perhaps there is some canonical place. I don't know of a way to encode a path relative to where the library is installed. That would be very useful, if possible. But I'm not sure if it can be done. I'm actually not sure how fopen() with relative paths would work inside library code - it might depend on the current working directory of the calling process, which is also something you probably shouldn't be imposing restrictions on. This path is going to need to be runtime configurable, I don't see any other way around it. That should be possible. I'll add a function to set it at runtime. Unless you want to generate the file in such a way that it becomes valid C syntax assigning into one string per line or an array of integers per line, then use it during compilation as a .c or .h file. Since git is installed with Julia, I'd like to git clone flint as part of the install process, so that the user always has the source code of flint (and the license, etc). This isn't necessary. The flint source code is on github; it's pretty simple to document that and point people to the right place if they want to look at it. Keep in mind that not all Julia package users are going to understand or care about the source code of a C library. The Julia package existing and providing access to some of the functionality is enough for many users. Flint is GPL, not LGPL. There is no linking exemption for flint. On the other hand, the FSF seems to be pretty isolated in its position that dynamic linking to a library results in a derived work. Fortunately there is almost no FSF code in flint and it is all LGPL'd. No one else is going to care. I don't know if we've gotten a clear legal interpretation of exactly where Julia's ccall falls with respect to licensing. I don't think it's exactly the same thing as linking.
The Free Software Foundation explicitly says that bindings from a programming language interpreter, not just linking, constitute a derived work and therefore the combination becomes GPL'd. They are very explicit about this on their website. I also had someone write to me the other day and refuse to access the source code via Github because they refused to indemnify Github against all damages. It's not clear if I am required under the GPL to send them the source code some other way... We're also not yet fully in compliance with the GPL. Because it is an interpreted environment GPL v2 (which flint uses) requires a notice to be printed when Nemo starts up saying that the program comes with no warranty. I'll have to add that, if I can figure out how. We also need to either include the full source code of flint with the binary, or a written offer to supply it upon request. Here the FSF claims a link to a website is not really good enough because the URL might change. As a result of this, I am personally
[julia-users] Re: type change while filling array with comprehension
On Sunday, September 14, 2014 8:25:59 AM UTC-5, Laurent Journot wrote: I am not familiar with type management in programming languages, but some behaviors are puzzling for a tenderfoot like myself. Here is a piece of stupid code just to illustrate my question (I'm using JuliaStudio):

Not directly related to your question, but Julia Studio is going to pin you to a quite old version of Julia. I highly recommend you find a way to upgrade Julia.

julia> typeof([A[i] for i=1])
Array{Any,1}

Hey look, it's my second-oldest open issue, good ol' https://github.com/JuliaLang/julia/issues/524. The deal, summarized imprecisely, is that Julia has trouble with typing in global scopes, such as at the REPL (read-eval-print loop, i.e., the julia> prompt), to preserve interactivity. So the assumption is that even though A is reasonably typed now, it could suddenly change, in which case the array comprehension's type will be incorrect. You'll find things work better in function contexts. It is somewhat idiomatic to use typed array comprehensions, which are a bit cleaner:

julia> typeof(Int[A[i] for i=1])

should work fine for you. Thanks for giving Julia a shot! Feel free to ask if you have more questions.
[julia-users] Re: constants and containers
I may have missed something, but wouldn't

```Julia
immutable t
    x
    y
end

type u
    x
    y
end
```

work?

```Julia
julia> myvar = t(1,2)

julia> myvar.x = 5
ERROR: type t is immutable

julia> v = u(t(1,2), t(3,4))
u(t(1,2),t(3,4))

julia> v.x
t(1,2)

julia> v.x = t(5,6)
t(5,6)

julia> v.x.x = 42
ERROR: type t is immutable
```

If you really want to guarantee constant fields, you have to give them an immutable type.
Re: [julia-users] Lint.jl status update
This looks awesome. Regarding the Array parameter issue (which I'm really glad to see in the linter; this issue really tripped me up when learning Julia), if https://github.com/JuliaLang/julia/issues/6984 ever finds a resolution, it would be great to suggest that new syntax in the lint message. Then if the linter becomes common-place, beginners that have never heard of type variance will have a path to understanding.

On Sunday, September 14, 2014 1:38:50 AM UTC-4, Tony Fong wrote: That's a good question. They can be used together, obviously. I can easily speak for Lint. The key trade-off made in Lint is that it does not strive for very in-depth type analysis. The focus is finding dodgy AST, where it is located in the source file, and with a bit of explanation around issues. The analyses are done recursively in a very small neighborhood around each node in the AST, although the locality issue has improved somewhat with the new type-tracking ability. The type guessing and tracking could leverage TypeCheck.jl, only possible since about last week (with the new features), and it's a very exciting prospect. Lint already provides functionality to return an array of lint messages (from a file, a code snippet, or a module), so it could be used in IDE integration I suppose. Tony

On Sunday, September 14, 2014 10:08:09 AM UTC+7, Spencer Russell wrote: Any comments on how Lint.jl and @astrieanna's also-awesome TypeCheck.jl relate? Are you two working together, or are there different use cases for the two libraries? peace, s

On Sat, Sep 13, 2014 at 3:34 PM, Tony Fong tony.h...@gmail.com wrote: Fellow Julians, I think it is time to post an update on Lint.jl https://github.com/tonyhffong/Lint.jl, as it has improved quite a bit from the initial version I started about 3 months ago.

Notable new features:
- Local variable type tracking, which enables a range of features, such as
  - Variable type stability warning within a function scope
  - Incompatibility between type assertion and assignment
  - Omission of returning the constructed object in a type constructor
  - Check the call signature of a selected set of methods with collection (push!, append!, etc.)
- More function checks, such as
  - repeated arguments
  - wrong signatures, e.g. f( x::Array{Number,1} )
  - Misspelled constructor (calls new but the function name doesn't match the enclosing type)
- Ability to silence lint warning via lintpragma() function, e.g.
  - lintpragma( "Ignore unstable type variable [variable name]" )
  - lintpragma( "Ignore unused [variable name]" )

Also, there is now quite a range of test scripts showing sample code with lint problems, so it's easy to grep your own lint warnings in that folder and see a distilled version of the issue. Again, please let me know about gaps and false positives. Tony
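A minimal sketch of the lintpragma() feature described above (the pragma strings come from the feature list; the surrounding function and file name are illustrative, and exact message strings should be checked against Lint.jl's docs):

```julia
# Suppress a specific lint warning inside a function via lintpragma():
using Lint

function f(x)
    lintpragma("Ignore unused y")  # silence the unused-variable warning for y
    y = 0
    return x + 1
end

lintfile("mycode.jl")  # run lint over a file and print any messages
```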
Re: [julia-users] Re: self dot product of all columns in a matrix: Julia vs. Octave
I wonder if we should provide access to DSFMT's random array generation, so that one can use an array generator. The requirements are that one has to generate at least 384 random numbers at a time, and the size of the array must necessarily be even. We should not allow this with the global seed, and it can be through a randarray!() function. We can even avoid exporting this function by default, since there are lots of conditions it needs, but it gives really high performance.

Are the conditions needed limited to n ≥ 384 and n even? Why not provide it by default then, with a single if statement to check for the n ≥ 384 condition? The n even condition is not really a problem, as Julia does not allocate the exact amount of data needed. Even for a fixed-size array, adding 1 extra element (not user accessible) does not seem to be much of a drawback. -viral
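A sketch of what the proposed interface might look like (to be clear: `randarray!` does not exist; the name and both constraints come from the discussion above, and the actual DSFMT call is elided):

```julia
# Hypothetical interface only: fill a preallocated array using DSFMT's
# fast array generation, enforcing its constraints up front.
function randarray!(A::Vector{Float64})
    n = length(A)
    n >= 384  || error("DSFMT array generation requires at least 384 numbers")
    iseven(n) || error("DSFMT array generation requires an even array length")
    # ... fill A via DSFMT's array-fill routine, using a dedicated RNG state
    #     rather than the global seed ...
    return A
end
```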
Re: [julia-users] How to trace execution?
Yes, I'm not getting useful backtraces... that has several reasons: (1) I'm using OS X and LLVM 3.5 (my fault). But I'm also running on Linux, and backtraces are working fine there. (2) I'm running in parallel, and a failing @test or @assert often leads to a hang instead of an error (or backtrace). (3) I'm also running distributed via MPI, and may even have errors in my program leading to deadlocks. Currently I'm adding many info statements. I'm sure there must be a better way... Maybe in 0.4. -erik

On Sat, Sep 13, 2014 at 6:14 PM, Tim Holy tim.h...@gmail.com wrote: I take it you're not getting a useful backtrace from your error? That fact alone would be worth reporting, if you have a simple test case. If you run into trouble by not getting enough samples, you can always decrease the delay down to 10 microseconds or so. The default setting of 1 ms is designed to avoid any substantive performance impact, but for tracing you might prefer more samples. --Tim

On Saturday, September 13, 2014 02:38:53 PM Elliot Saba wrote: Unfortunately, `@profile` is the closest we have now. You could conceivably call `@profile` from within a `try` so that you can print out the profile results at the end. -E

On Sat, Sep 13, 2014 at 12:13 PM, Erik Schnetter schnet...@gmail.com wrote: I want to trace the execution of a Julia program to find out where an error occurs. Is there something like a @trace macro, similar to @time or @profile? -erik -- Erik Schnetter schnet...@cct.lsu.edu http://www.perimeterinstitute.ca/personal/eschnetter/
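The @profile-inside-try suggestion above, combined with Tim's shorter sampling delay, can be sketched like this (`main()` stands in for the failing program; the positional `Profile.init(n, delay)` form is the 0.3-era signature, so check `?Profile.init` for your version):

```julia
# Profile a failing program and dump the samples collected before the error.
Profile.init(10^6, 1e-5)  # bigger sample buffer, 10 µs delay for tracing

try
    @profile main()       # main() is a placeholder for the code under test
catch err
    Profile.print()       # show where time was spent up to the failure
    rethrow(err)
end
```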
Re: [julia-users] Help needed with creating Julia package
On Sunday, 14 September 2014 15:52:58 UTC+2, Bill Hart wrote: On 14 September 2014 06:14, Tony Kelman t...@kelman.net wrote: We tend not to use tags. But there's no problem introducing a named tag to pin things. Any reason why not? Your code is on github but your releases aren't, would be great if it was possible to track exactly which commits correspond to exactly which numbered release version. We have been using branches, not tags for that. We like to only push fixes to release branches, whilst work continues in trunk, and make those minor point releases, 2.4.0, 2.4.1. 2.4.2, etc. I suppose we could tag all the releases in their branches too. The way it currently works is the user will specify --prefix. This is where libflint will be installed (in a subdirectory called lib). Obviously a default is chosen otherwise. Flint is at first built inside its own source tree. Flint knows where to find the file relative to this source tree. But when make install is issued, it will be moved across to --prefix. As flint is compiled, --prefix is used to generate an absolute path where the text file will be stored, and this is baked into the library, which then also looks in this absolute path. Of course the absolute path is computed at compile time to be some path relative to --prefix. It is passed to the relevant parts of flint via a -D define. None of this is going to work if you want the library to be usable on a different machine than it gets compiled on. That's a pretty big restriction to put on your users. A bit like using Gentoo - some people think it's fun out of masochism or something, but not many. Most users would rather save time and use binaries wherever possible (assuming the results are the same and you don't introduce any compatibility problems - if you use an automated build service and a package manager these are easy). We'd like to fix the problem. We just don't know how. At some point, the library has to read from a large text file. 
There must be a relocatable way of doing that. Of course you can pass an explicit directory to flint's configure to tell it where to put this text file. Perhaps there is some canonical place. I don't know of a way to encode a path relative to where the library is installed. That would be very useful, if possible. But I'm not sure if it can be done. I'm actually not sure how fopen() with relative paths would work inside library code - it might depend on the current working directory of the calling process, which is also something you probably shouldn't be imposing restrictions on. This path is going to need to be runtime configurable, I don't see any other way around it. That should be possible. I'll add a function to set it at runtime. Unless you want to generate the file in such a way that it becomes valid C syntax assigning into one string per line or an array of integers per line, then use it during compilation as a .c or .h file. Since git is installed with Julia, I'd like to git clone flint as part of the install process, so that the user always has the source code of flint (and the license, etc). This isn't necessary. The flint source code is on github, it's pretty simple to document that and point people to the right place if they want to look at it. Keep in mind that not all Julia package users are going to understand or care about the source code of a C library. The Julia package existing and providing access to some of the functionality is enough for many users. Flint is GPL, not LGPL. There is no linking exemption for flint. On the other hand, the FSF seems to be pretty isolated in its position that dynamic linking to a library results in a derived work. Fortunately there is almost no FSF code in flint and it is all LGPL'd. No one else is going to care. I don't know if we've gotten a clear legal interpretation of exactly where Julia's ccall falls with respect to licensing. I don't think it's exactly the same thing as linking.
The Free Software Foundation explicitly says that bindings from a programming language interpreter, not just linking, constitute a derived work and therefore the combination becomes GPL'd. They are very explicit about this on their website. I also had someone write to me the other day and refuse to access the source code via Github because they refused to indemnify Github against all damages. It's not clear if I am required under the GPL to send them the source code some other way... We're also not yet fully in compliance with the GPL. Because it is an interpreted environment, GPL v2 (which flint uses) requires a notice to be printed when Nemo starts up saying that the program comes with no warranty. I'll have to add that, if I can figure out how. We also need to either include the full source code of flint with the binary, or a written offer to supply it upon request.
Re: [julia-users] Re: Change field value of a composite type when name of the field is in a variable
There is a shortcut you can use: inst.(symbol(varToChange)) = 20 However, note that this is not a good idea for performance-sensitive code. On Sun, Sep 14, 2014 at 10:37 AM, curiousle...@gmail.com wrote: Thank you, Don. That works great! I had tried setfield! earlier, but I was doing: setfield!(inst, varToChange, 20) and was (obviously) getting an error. I wonder why this is not automatically done by setfield! On Saturday, September 13, 2014 8:06:31 PM UTC-4, Don MacMillen wrote: setfield!(inst, symbol(varToChange), 20) On Saturday, September 13, 2014 4:47:46 PM UTC-7, curiou...@gmail.com wrote: Hi All, Suppose I have a composite type and an instance of it: type myType numLines::Int avgLength::Float64 end inst = myType(10, 8.5) I want to change, say, numLines of inst to 20. I know I can do inst.numLines = 20 However, suppose the field that has to be changed is determined by the program. Say, I have, varToChange = "numLines" How can I use *varToChange* to change the value of *numLines* in *inst*? Thank you.
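A minimal sketch of the setfield!/getfield approach from this thread (written in current Julia syntax for runnability: `type` later became `mutable struct` and `symbol()` became `Symbol()`; the type and names are the ones from the question):

```julia
# setfield! assigns a field selected by a Symbol, so a string-valued
# field name must be converted first -- this was the original error.
mutable struct MyType
    numLines::Int
    avgLength::Float64
end

inst = MyType(10, 8.5)
varToChange = "numLines"

setfield!(inst, Symbol(varToChange), 20)
@assert inst.numLines == 20

# getfield is the symmetric read by field name
@assert getfield(inst, :avgLength) == 8.5
```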
[julia-users] Re: Change field value of a composite type when name of the field is in a variable
On Saturday, September 13, 2014 6:47:46 PM UTC-5, curiou...@gmail.com wrote: However, suppose the field that has to be changed is determined by the program. Say, I have, varToChange = "numLines" How can I use *varToChange* to change the value of *numLines* in *inst*? Here are a couple of alternatives. Depending on the source of your numLines, you can assign the symbol directly, rather than via a string and a call to symbol(), combining it with either Don's or Isaiah's syntaxes: varToChange = :numLines If this is the sort of thing you find you are doing often, a composite type may not be the correct data structure for your application. Consider a Dict: inst = Dict{String, Any}() inst["numLines"] = 10 inst["avgLength"] = 8.5 inst[varToChange] = 20 Patrick
Re: [julia-users] Re: Why does Pkg.add(NLopt) use sudo?
Thanks Tony. Don On Sep 14, 2014 3:27 AM, Tony Kelman t...@kelman.net wrote: It depends on whether or not the package's deps/build.jl script has any package manager providers listed for the relevant library. If you manually install libnlopt0 outside of Julia then I believe BinDeps will see that and not ask for sudo during installation. If you'd rather compile nlopt from source, you should be able to edit ~/.julia/v0.3/NLopt/deps/build.jl and comment out the provides(AptGet, "libnlopt0", libnlopt) line. I'm not sure whether BinDeps is set up to fall back to building from source if you were to abort the apt-get installation. On Saturday, September 13, 2014 11:32:47 PM UTC-7, Don MacMillen wrote: I was surprised to see sudo apt-get install libnlopt0 on installing NLopt. I'd much prefer a sandboxed install into the .julia directory. Is there a reason? Are there many other packages that require sudo for install? Thanks.
[julia-users] Is this expected behavior from nb_available?
julia> r,_ = redirect_stdout()
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting))
julia> nb_available(r)
0
julia> println("foo")
julia> nb_available(r)
0
julia> println("foo")
julia> nb_available(r)
0
julia> readline(r)
"foo\n"
julia> nb_available(r)
4
Even if there's data available, nb_available returns 0. From the name, I would have expected this function to return the number of bytes available. However, since there's no documentation, either in help(nb_available) or in the code itself, I can't tell if this is a bug or if this behavior is by design.
Re: [julia-users] Help needed with creating Julia package
Of course you can pass an explicit directory to flint's configure to tell it where to put this text file. Perhaps there is some canonical place. Julia has settled on some good default choices for where to put packages by now, the problem is users are free to override the default locations and can have good reasons for wanting to do so. The Free Software Foundation explicitly says that bindings from a programming language interpreter, not just linking, constitute a derived work and therefore the combination becomes GPL'd. They are very explicit about this on their website. I also had someone write to me the other day and refuse to access the source code via Github because they refused to indemnify Github against all damages. It's not clear if I am required under the GPL to send them the source code some other way... Oh dear. Well what I've been doing when building binaries is including the Readme and the License files as part of the packaging, so those will always get installed along with the dll's. We also need to either include the full source code of flint with the binary, or a written offer to supply it upon request. Here the FSF claims a link to a website is not really good enough because the URL might change. As a result of this, I am personally very reluctant to not just supply the source code for flint. I also want people to have the git repository automatically so they can contribute to it if they want. Okay. It's not the end of the world to checkout the source code even when downloading a binary. Though keep in mind that for Julia 0.4, the plan is to transition Julia's package manager from relying on shelling out to command-line git to using ccall bindings to libgit2, so it won't necessarily be a safe assumption to make that you can always shell out to git. That's still a ways out though, and you can always put a try-catch around the checkout since it's not an absolute requirement. We're also not yet fully in compliance with the GPL. 
Because it is an interpreted environment GPL v2 (which flint uses) requires a notice to be printed when Nemo starts up saying that the program comes with no warranty. I'll have to add that, if I can figure out how. That should be pretty easy with a println in your Nemo.jl module file, should come up any time anyone calls using Nemo. You can also write a function called __init__() that gets called on package startup. Not sure if/where in the docs that's listed. Unfortunately it doesn't even work on my machine. It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? Might not be compatible with what was used to build Julia. Or there could be some issue due to having gmp and mpir both loaded within the same process? We've seen sort-of-similar issues with blas and lapack and various packages, though not so much on Windows. The problem must be what libtool is doing on mingw64. Make install doesn't even copy the generated dll across to the install directory, so you have to do this manually. Hm, yeah, libtool can be very picky, especially when it comes to dll's. I think I've used the subtle and quick to anger quote on this list before when talking about libtool... I've found configuring with lt_cv_deplibs_check_method=pass_all can sometimes help. I also can't build flint against the Julia mpir and mpfr since Julia doesn't seem to distribute the gmp.h and mpfr.h files, and flint requires these when building. Oh, right, damn. Sorry I forgot about that! That is an issue, maybe we should open a separate discussion on that general topic. 
It has come up before that various packages would like to link against the same libraries as Julia (in the Blas and Lapack cases we're building with an incompatible ABI which makes this actually not a good idea in most cases, but for GMP and MPFR I think we're configuring them in the default way), so installing the corresponding headers would actually be necessary. Even though I'm not entirely sure what Nemo or Flint do or whether I would personally have a use for them, I have some strange desire to see more Julia packages be painless to install cross-platform and want to help here. Let me know how the runtime configuration of the text file location goes, then I can mock up what BinDeps would look like. It should simplify your deps/build.jl script by automating the standard download, extract, configure, make steps. There are also some Julia idioms like joinpath(a,b) that would be cleaner than what you're doing now with if statements to switch between forward slashes and backslashes, and things like cd(newdir) do ... end that work in a nicer way including returning to the old directory even on
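The two path idioms Tony mentions can be sketched in isolation (a toy example using a temporary directory, not Nemo's actual build script):

```julia
# joinpath inserts the platform's separator, so no manual switching
# between forward slashes and backslashes is needed.
dir = mktempdir()
src = joinpath(dir, "src")
mkdir(src)

# cd(f, dir) runs f inside dir and restores the previous working
# directory afterwards, even if f throws.
before = pwd()
try
    cd(src) do
        error("simulated build failure")
    end
catch
end
@assert pwd() == before
@assert isdir(src)
```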
[julia-users] Re: Change field value of a composite type when name of the field is in a variable
Thank you Isaiah and Patrick. The source is a Dict{ASCIIString, Any}. From your answers I realized I can instead use Dict{Symbol, Any} as the source and then use either Don's or Isaiah's syntax. Yes, I was thinking that maybe I should use a Dict. But then I would have to use the Dict{ASCIIString, Any} type. I was not sure (because I don't know about these things) whether that would affect the performance adversely (because of the Any). While Dict{ASCIIString, Any} is a source that is used to change *inst*, *inst* is used multiple times in the program. Once its value is changed, its type is known completely when it is used those multiple times. On Sunday, September 14, 2014 11:09:56 AM UTC-4, Patrick O'Leary wrote: On Saturday, September 13, 2014 6:47:46 PM UTC-5, curiou...@gmail.com wrote: However, suppose the field that has to be changed is determined by the program. Say, I have, varToChange = "numLines" How can I use *varToChange* to change the value of *numLines* in *inst*? Here are a couple of alternatives. Depending on the source of your numLines, you can assign the symbol directly, rather than via a string and a call to symbol(), combining it with either Don's or Isaiah's syntaxes: varToChange = :numLines If this is the sort of thing you find you are doing often, a composite type may not be the correct data structure for your application. Consider a Dict: inst = Dict{String, Any}() inst["numLines"] = 10 inst["avgLength"] = 8.5 inst[varToChange] = 20 Patrick
Re: [julia-users] Re: dispatch based on expression head
Hi Gray, Sorry, I got sidetracked with the rest of the discussion and forgot to come back to this until now. On Fri, Sep 12, 2014 at 5:48 PM, Gray Calhoun gcalh...@iastate.edu wrote: Thanks, that's pretty cool. For simple cases like I'm using, do you know if there are advantages (or disadvantages) to using Match.jl, or should I just view it as a nicer syntax? (Obviously, when things get more complicated Match.jl looks very appealing). After compilation, Match.jl is basically a bunch of nested if statements. For simple things, where you're always checking against the same type (e.g., Expr), there may be a slight compilation overhead, but no execution overhead. When things get more complicated, the main potential for slowdown (I think) is when matching against arrays. 1. the nested ifs contain explicit array size/bounds checks, in order to avoid bounds errors and try/next blocks 2. SubArrays are used to match parts of multidimensional arrays, and SubArrays are currently a little slow in Julia Also, as with any Julia code, wrapping in a function will greatly improve type inference, and hence performance. Finally, matching against Expr is quite functional, but slightly annoying: Exprs are constructed with Expr(:+, 1,2,3) but the actual Expr object contains the 1,2,3 in an array, and has an additional type field. So Expr rewriting currently looks something like Expr(:+, [a,b,c], _) = Expr(:+, c,b,a) I've thought about special casing Expr specifically to avoid this, but haven't gotten around to it. I used Match.jl quite extensively to match and rewrite Exprs when wrapping VideoIO. A clean Expr match example is https://github.com/kmsquire/VideoIO.jl/blob/master/util/wrap_libav_split.jl#L215-L226. You can search in the rest of that file for examples of Expr rewriting. Hope this is useful. Cheers! Kevin
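Without Match.jl, the structure Kevin describes can be seen directly: an Expr is just a head Symbol plus an args array, which is why pattern-matching an Expr means matching against that array (a small hand-rolled illustration, not Match.jl syntax):

```julia
ex = Expr(:call, :+, 1, 2, 3)   # what :(1 + 2 + 3) parses to
@assert ex.head == :call
@assert ex.args == [:+, 1, 2, 3]

# Rewriting an Expr is just rebuilding it with modified args:
rev = Expr(ex.head, ex.args[1], reverse(ex.args[2:end])...)
@assert rev == :(3 + 2 + 1)
```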
[julia-users] Bug in ccall with pass float array?
Hi all, I am porting the OpenCV API to Julia. I already have some progress; maybe I will publish it later. During the porting I encountered a problem, and I think it may be a bug. Suppose we have a C function like this:
```
void foo(int *a, float *b)
{
    cout << "Int array: " << a[0] << endl;
    cout << "Float array: " << b[0] << endl;
}
```
In Julia, I wrote a wrapper function like this:
```
function bar(a::Array{Int, 1}, b::Array{Float64, 1})
    ccall((:foo, "../libfoo"), Void, (Ptr{Int}, Ptr{Float64}), a, b)
end
```
I call the wrapper function in Julia:
```
bar([1], [2.5])
```
The Int array could be passed to the C function, but the Float64 array could not. It is always ZERO. Do I miss anything? My machine is 32-bit Linux, GCC 4.8.2; I didn't test on other platforms. Regards, Sun
Re: [julia-users] Bug in ccall with pass float array?
A float in C matches either a Cfloat or Float32 (these are equivalent) in Julia. On Sunday, September 14, 2014, Boxiang Sun daetalu...@gmail.com wrote: Hi all, I am porting the OpenCV API to Julia. I already have some progress; maybe I will publish it later. During the porting I encountered a problem, and I think it may be a bug. Suppose we have a C function like this: ``` void foo(int *a, float *b) { cout << "Int array: " << a[0] << endl; cout << "Float array: " << b[0] << endl; } ``` In Julia, I wrote a wrapper function like this: ``` function bar(a::Array{Int, 1}, b::Array{Float64, 1}) ccall((:foo, "../libfoo"), Void, (Ptr{Int}, Ptr{Float64}), a, b) end ``` I call the wrapper function in Julia: ``` bar([1], [2.5]) ``` The Int array could be passed to the C function, but the Float64 array could not. It is always ZERO. Do I miss anything? My machine is 32-bit Linux, GCC 4.8.2; I didn't test on other platforms. Regards, Sun
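To make the mismatch concrete: Cfloat aliases Float32 (C `float`, 4 bytes) and Cdouble aliases Float64 (C `double`, 8 bytes), so a Float64 array handed to a `float*` gets reinterpreted byte-for-byte and reads back as garbage or zero. A sketch of the corrected wrapper is shown in a comment, since libfoo from the original post is not built here:

```julia
# C `float` is 4 bytes, matching Julia's Float32 / Cfloat:
@assert Cfloat === Float32
@assert Cdouble === Float64
@assert sizeof(Float32[2.5f0]) == 4
@assert sizeof(Float64[2.5]) == 8

# Corrected wrapper signature (not run here -- ../libfoo is the
# hypothetical library from the original post):
# function bar(a::Vector{Cint}, b::Vector{Cfloat})
#     ccall((:foo, "../libfoo"), Void, (Ptr{Cint}, Ptr{Cfloat}), a, b)
# end
# bar(Cint[1], Cfloat[2.5])
```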
Re: [julia-users] Is this expected behavior from nb_available?
There are two answers here: one is that writing the stream does not imply the data is immediately readable, or will even be split up the same way when received. The second is that Julia does not start consuming the data until you start reading it. This is important because you might want to pass this handle to an external process, and not have Julia consume part of the data in the meantime. On Sunday, September 14, 2014, Dan Luu dan...@gmail.com wrote: julia> r,_ = redirect_stdout() (Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting)) julia> nb_available(r) 0 julia> println("foo") julia> nb_available(r) 0 julia> println("foo") julia> nb_available(r) 0 julia> readline(r) "foo\n" julia> nb_available(r) 4 Even if there's data available, nb_available returns 0. From the name, I would have expected this function to return the number of bytes available. However, since there's no documentation, either in help(nb_available) or in the code itself, I can't tell if this is a bug or if this behavior is by design.
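For contrast, once bytes have already been read into a Julia-side buffer, the available count is reported immediately (a sketch using an IOBuffer, where the data is buffered in-process from the start; nb_available from this thread was later renamed bytesavailable, which is the name used below):

```julia
# An IOBuffer's contents live in the process already, so the
# available-byte count reflects them without any read first.
buf = IOBuffer("foo\nfoo\n")
@assert bytesavailable(buf) == 8

readline(buf)                    # consumes the first "foo\n"
@assert bytesavailable(buf) == 4
```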
Re: [julia-users] Bug in ccall with pass float array?
Thanks! On Monday, September 15, 2014 at 12:13:36 AM UTC+8, Jameson wrote: A float in C matches either a Cfloat or Float32 (these are equivalent) in Julia. On Sunday, September 14, 2014, Boxiang Sun daeta...@gmail.com wrote: Hi all, I am porting the OpenCV API to Julia. I already have some progress; maybe I will publish it later. During the porting I encountered a problem, and I think it may be a bug. Suppose we have a C function like this: ``` void foo(int *a, float *b) { cout << "Int array: " << a[0] << endl; cout << "Float array: " << b[0] << endl; } ``` In Julia, I wrote a wrapper function like this: ``` function bar(a::Array{Int, 1}, b::Array{Float64, 1}) ccall((:foo, "../libfoo"), Void, (Ptr{Int}, Ptr{Float64}), a, b) end ``` I call the wrapper function in Julia: ``` bar([1], [2.5]) ``` The Int array could be passed to the C function, but the Float64 array could not. It is always ZERO. Do I miss anything? My machine is 32-bit Linux, GCC 4.8.2; I didn't test on other platforms. Regards, Sun
Re: [julia-users] Is this expected behavior from nb_available?
Thanks! A related question I have is: is there an existing function that acts like recv/recvmsg/recvmmsg in C (with O_NONBLOCK), which returns whatever's available? I was hoping readavailable would do that, but it blocks if nothing's available, which is why I'm checking nb_available. I can see why you'd warn people that the data may not be split the same way, but in this case I'm ok with receiving partial frames, for whatever the definition of a frame is. On Sun, Sep 14, 2014 at 11:17 AM, Jameson Nash vtjn...@gmail.com wrote: There are two answers here: one is that writing the stream does not imply the data is immediately readable, or will even be split up the same way when received. The second is that Julia does not start consuming the data until you start reading it. This is important because you might want to pass this handle to an external process, and not have Julia consume part of the data in the meantime. On Sunday, September 14, 2014, Dan Luu dan...@gmail.com wrote: julia> r,_ = redirect_stdout() (Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting)) julia> nb_available(r) 0 julia> println("foo") julia> nb_available(r) 0 julia> println("foo") julia> nb_available(r) 0 julia> readline(r) "foo\n" julia> nb_available(r) 4 Even if there's data available, nb_available returns 0. From the name, I would have expected this function to return the number of bytes available. However, since there's no documentation, either in help(nb_available) or in the code itself, I can't tell if this is a bug or if this behavior is by design.
Re: [julia-users] Help needed with creating Julia package
On 14 September 2014 17:46, Tony Kelman t...@kelman.net wrote: Unfortunately it doesn't even work on my machine. It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? I've just updated the msys2 packages with pacman. Recently I updated just some of the packages and not others, and I thought this might have introduced some incompatibilities. I noticed that if I build MPIR even with msys2 and not via Julia, the resulting dll doesn't work any more. After updating msys2 fully to msys 2.0.1, it still doesn't work. So it looks to me like some new issues have been introduced by the more recent msys or something. Their gas also barfs on one of the assembly files which we now have to patch. This issue affects GMP too, not just MPIR since we use the same code for that file. Might not be compatible with what was used to build Julia. Or there could be some issue due to having gmp and mpir both loaded within the same process? Just to reiterate, it was working fine before. I even went back to the precise configure invocation I used before in my bash history to ensure I was building MPIR the same way as before, and I checked which alpha version of MPIR I used. Admittedly I was using Julia 0.2.1 before, not Julia 0.3, which I have now upgraded to. I can try downloading Julia 0.2 for Windows again and see if that fixes the issue I guess. We've seen sort-of-similar issues with blas and lapack and various packages, though not so much on Windows. The problem must be what libtool is doing on mingw64. Make install doesn't even copy the generated dll across to the install directory, so you have to do this manually. Hm, yeah, libtool can be very picky, especially when it comes to dll's. 
I think I've used the subtle and quick to anger quote on this list before when talking about libtool... I've found configuring with lt_cv_deplibs_check_method=pass_all can sometimes help. I also can't build flint against the Julia mpir and mpfr since Julia doesn't seem to distribute the gmp.h and mpfr.h files, and flint requires these when building. Oh, right, damn. Sorry I forgot about that! That is an issue, maybe we should open a separate discussion on that general topic. It has come up before that various packages would like to link against the same libraries as Julia (in the Blas and Lapack cases we're building with an incompatible ABI which makes this actually not a good idea in most cases, but for GMP and MPFR I think we're configuring them in the default way), so installing the corresponding headers would actually be necessary. Even though I'm not entirely sure what Nemo or Flint do or whether I would personally have a use for them, I have some strange desire to see more Julia packages be painless to install cross-platform and want to help here. I can understand that. Let me know how the runtime configuration of the text file location goes, It won't happen for a while. I'm afraid I'm currently 14 days behind schedule at work. I've spent days trying to get this to work on Windows and have just run out of time to keep working on it. I will have to come back to it when I catch up with everything else that has been piling up. Perhaps I can spend some more time on it next weekend. then I can mock up what BinDeps would look like. It should simplify your deps/build.jl script by automating the standard download, extract, configure, make steps. There are also some Julia idioms like joinpath(a,b) that would be cleaner than what you're doing now with if statements to switch between forward slashes and backslashes, and things like cd(newdir) do ... end that work in a nicer way including returning to the old directory even on failure. 
Thanks, I will try to incorporate these suggestions in a later incarnation of the code. Bill.
Re: [julia-users] Help needed with creating Julia package
I checked that the MPIR dll's produced with the updated msys2 also don't work with Julia 0.2. I can't think of many other variables here. It has to be msys2 related. I could try not patching the assembly file it barfs on, but have it built from C fallback code. But I know for a fact that file is not being used in the functions I'm calling that cause it to segfault. I can try uninstalling msys2 completely and reinstalling it and mingw64 and see if I get any joy. On Sunday, 14 September 2014 19:05:55 UTC+2, Bill Hart wrote: On 14 September 2014 17:46, Tony Kelman t...@kelman.net wrote: Unfortunately it doesn't even work on my machine. It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? I've just updated the msys2 packages with pacman. Recently I updated just some of the packages and not others, and I thought this might have introduced some incompatibilities. I noticed that if I build MPIR even with msys2 and not via Julia, the resulting dll doesn't work any more. After updating msys2 fully to msys 2.0.1, it still doesn't work. So it looks to me like some new issues have been introduced by the more recent msys or something. Their gas also barfs on one of the assembly files which we now have to patch. This issue affects GMP too, not just MPIR since we use the same code for that file. Might not be compatible with what was used to build Julia. Or there could be some issue due to having gmp and mpir both loaded within the same process? Just to reiterate, it was working fine before. I even went back to the precise configure invocation I used before in my bash history to ensure I was building MPIR the same way as before, and I checked which alpha version of MPIR I used. 
Admittedly I was using Julia 0.2.1 before, not Julia 0.3, which I have now upgraded to. I can try downloading Julia 0.2 for Windows again and see if that fixes the issue I guess. We've seen sort-of-similar issues with blas and lapack and various packages, though not so much on Windows. The problem must be what libtool is doing on mingw64. Make install doesn't even copy the generated dll across to the install directory, so you have to do this manually. Hm, yeah, libtool can be very picky, especially when it comes to dll's. I think I've used the subtle and quick to anger quote on this list before when talking about libtool... I've found configuring with lt_cv_deplibs_check_method=pass_all can sometimes help. I also can't build flint against the Julia mpir and mpfr since Julia doesn't seem to distribute the gmp.h and mpfr.h files, and flint requires these when building. Oh, right, damn. Sorry I forgot about that! That is an issue, maybe we should open a separate discussion on that general topic. It has come up before that various packages would like to link against the same libraries as Julia (in the Blas and Lapack cases we're building with an incompatible ABI which makes this actually not a good idea in most cases, but for GMP and MPFR I think we're configuring them in the default way), so installing the corresponding headers would actually be necessary. Even though I'm not entirely sure what Nemo or Flint do or whether I would personally have a use for them, I have some strange desire to see more Julia packages be painless to install cross-platform and want to help here. I can understand that. Let me know how the runtime configuration of the text file location goes, It won't happen for a while. I'm afraid I'm currently 14 days behind schedule at work. I've spent days trying to get this to work on Windows and have just run out of time to keep working on it. I will have to come back to it when I catch up with everything else that has been piling up. 
Perhaps I can spend some more time on it next weekend. then I can mock up what BinDeps would look like. It should simplify your deps/build.jl script by automating the standard download, extract, configure, make steps. There are also some Julia idioms like joinpath(a,b) that would be cleaner than what you're doing now with if statements to switch between forward slashes and backslashes, and things like cd(newdir) do ... end that work in a nicer way including returning to the old directory even on failure. Thanks, I will try to incorporate these suggestions in a later incarnation of the code. Bill.
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
I was using axpy to replace the += only and doing the matrix multiply in the argument to axpy. But you're right, gemm! is actually what I should be using (I'm just starting to learn the BLAS library names). Using gemm! the code is now 1.68x faster than my Matlab code (I mean a whole epoch of backprop training)! And down to 40% gc time. My goal of 2x speed up is in sight! I'll look into SubArrays next. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), w[:,:,ti], d[:,:,ti2], one(Float32), deltas.d[:,:,ti+ti2-1]) end deltas.d end On Sunday, September 14, 2014 2:18:07 AM UTC-7, Viral Shah wrote: Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn't you need a gemm? Another way to avoid creating temporary arrays with indexing is to use SubArrays, which the linear algebra routines can work with. -viral On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time. You could also potentially hoist the deltas.d out of the loop. Try something like: function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. dd = deltas.d for ti=1:size(w,3), ti2 = 1:size(d,3) for i=1:size(w,1) for j=1:size(w,2) dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2] end end end deltas.d end -viral On 14-Sep-2014, at 12:47 pm, Michael Oliver michael@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below).
I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. rg =size(w,2)*size(d,2); for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.axpy!(1,w[:,:,ti]'*d[:,:,ti2],range(1,rg),deltas.d[:,:,ti+ti2-1],range(1,rg)) end deltas.d end On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags. This function gets called many many times and according to profiling, there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex! and array.jl + function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) deltas.d[:,:,ti+ti2-1] += w[:,:,ti]'*d[:,:,ti2]; end deltas.d end Any advice would be much appreciated! Best, Michael
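Following up on the SubArray suggestion from this thread: slice indexing like deltas.d[:,:,k] makes a copy, so handing it to a BLAS routine as the output argument accumulates into the copy rather than the destination. Writing through a view avoids both the copy and that pitfall. A sketch in current Julia syntax, where view() plays the role SubArrays/sub() played in 0.3, checked against the naive vectorized version from the original post:

```julia
using LinearAlgebra

# Accumulate w[:,:,ti]' * d[:,:,ti2] into dd in place; the view lets
# gemm! write directly into the destination array.
function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, dd::Array{Float32,3})
    fill!(dd, 0f0)
    for ti in 1:size(w, 3), ti2 in 1:size(d, 3)
        BLAS.gemm!('T', 'N', 1f0, w[:, :, ti], d[:, :, ti2],
                   1f0, view(dd, :, :, ti + ti2 - 1))
    end
    return dd
end

w  = rand(Float32, 3, 2, 2)
d  = rand(Float32, 3, 4, 2)
dd = zeros(Float32, 2, 4, 3)
errprop!(w, d, dd)

# Reference: the original allocating formulation.
ref = zeros(Float32, 2, 4, 3)
for ti in 1:2, ti2 in 1:2
    ref[:, :, ti+ti2-1] += w[:, :, ti]' * d[:, :, ti2]
end
@assert dd ≈ ref
```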
Re: [julia-users] Re: ANN: ApproxFun v0.0.3 with general linear PDE solving on rectangles
I'll add to the partial list, just in case it is useful: a) Algorithm for the convolution of Chebyshev series b) Bivariate rootfinding c) Linearity detection of operators (closely related to (5)) d) Automatic (though a little rough) approximation of functions with singularities e) Remez, cf, least squares, Padé polynomial approximation f) Vector calculus g) Field of values algorithm It may be good to have 3), 4), 6), 7), b). Not sure if Julia offers any advantage for 2), 5), 8), c). On Thursday, 11 September 2014 19:43:27 UTC-4, Sheehan Olver wrote: Chebfun is a lot more full featured, and ApproxFun is _very_ rough around the edges. ApproxFun will probably end up a very different animal than Chebfun: right now the goal is to tackle PDEs on a broader class of domains, something I think is beyond the scope of Chebfun due to issues with Matlab's speed, memory management, etc. Here’s a partial list of features in Chebfun not in ApproxFun: 1) Automatic edge detection and domain splitting 2) Support for delta functions 3) Built-in time stepping (pde15s) 4) Eigenvalue problems 5) Automatic nonlinear ODE solver 6) Operator exponential 7) Smarter constructor for determining convergence 8) Automatic differentiation I have no concrete plans at the moment of adding these features, though eigenvalue problems and operator exponentials will likely find their way in at some point. Sheehan On 12 Sep 2014, at 12:14 am, Steven G. Johnson steve...@gmail.com wrote: This is great! At this point, what are the major differences in functionality between ApproxFun and Chebfun?
Re: [julia-users] ANN: ApproxFun v0.0.3 with general linear PDE solving on rectangles
Sheehan, I notice that ApproxFun handles 1D and 2D domains. Do you plan to extend it to 3D or 4D as well? Would that be complicated? If so, is this about software engineering, or about the numerical analysis behind the package? -erik On Wed, Sep 10, 2014 at 6:22 PM, Sheehan Olver dlfivefi...@gmail.com wrote: This is to announce a new version of ApproxFun (https://github.com/dlfivefifty/ApproxFun.jl), a package for approximating functions. The biggest new feature is support for PDE solving. The following lines solve the Helmholtz equation u_xx + u_yy + 100 u = 0 with the solution held to be one on the boundary: d=Interval()⊗Interval() # the domain to solve is a rectangle u=[dirichlet(d),lap(d)+100I]\ones(4) # first 4 entries are boundary conditions, further entries are assumed zero contour(u) # contour plot of the solution, requires Gadfly PDE solving is based on a recent preprint with Alex Townsend (http://arxiv.org/abs/1409.2789). Only splitting rank 2 PDEs are implemented at the moment. Examples included are: examples/RectPDE Examples.ipynb: Poisson equation, Wave equation, linear KdV, semiclassical Schrodinger equation with a potential, and convection/convection-diffusion equations. examples/Wave and Klein–Gordon equation on a square.ipynb: On-the-fly 3D simulation of time-evolution PDEs on a square. Requires GLPlot.jl (https://github.com/SimonDanisch/GLPlot.jl). examples/Manipulate Helmholtz.ipynb: On-the-fly variation of Helmholtz frequency. Requires Interact.jl (https://github.com/JuliaLang/Interact.jl) Another new feature is faster root finding, thanks to Alex. -- Erik Schnetter schnet...@cct.lsu.edu http://www.perimeterinstitute.ca/personal/eschnetter/
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
With subArrays the GC time should go to zero, and the arrays are contiguous, so I think it should just work out. This should get replaced with array views in 0.4, and you won’t even have to explicitly create subArrays - but that is what you need to do for now. I think you should reach your 2x goal with this. Also, there is a significant improvement to GC in the works, which should greatly help - but the best way to beat GC is to avoid allocating in the first place! You may see higher speeds by using MKL, depending on your matrix sizes. -viral On 15-Sep-2014, at 12:38 am, Michael Oliver michael.d.oli...@gmail.com wrote: I was using axpy to replace the += only and doing the matrix multiply in the argument to axpy. But you're right gemm! is actually what I should be using (I'm just starting to learn the BLAS library names). Using gemm! the code is now 1.68x faster than my Matlab code (I mean a whole epoch of backprop training)! And down to 40% gc time. My goal of 2x speed up is in sight! I'll look into subArrays next. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), w[:,:,ti], d[:,:,ti2], one(Float32), deltas.d[:,:,ti+ti2-1]) end deltas.d end On Sunday, September 14, 2014 2:18:07 AM UTC-7, Viral Shah wrote: Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn’t you need a gemm? Another way to avoid creating temporary arrays with indexing is to use subArrays, which the linear algebra routines can work with. -viral On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time.
You could also potentially hoist the deltas.d out of the loop. Try something like: function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. dd = deltas.d for ti=1:size(w,3), ti2 = 1:size(d,3) for i=1:size(w,1) for j=1:size(w,2) dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2] end end end deltas.d end -viral On 14-Sep-2014, at 12:47 pm, Michael Oliver michael@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. rg =size(w,2)*size(d,2); for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.axpy!(1,w[:,:,ti]'*d[:,:,ti2],range(1,rg),deltas.d[:,:,ti+ti2-1],range(1,rg)) end deltas.d end On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags.
This function gets called many many times and according to profiling, there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex! and array.jl + function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) deltas.d[:,:,ti+ti2-1] += w[:,:,ti]'*d[:,:,ti2]; end deltas.d end Any advice would be much appreciated! Best, Michael
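The devectorized "for i / for j" sketch suggested in this thread multiplies matching entries, which is not the same thing as the matrix product w[:,:,ti]'*d[:,:,ti2]. A completed, allocation-free version might look like the sketch below; the function name `errprop_loops!` is hypothetical, and a plain output array `dd` stands in for the thread's `deltas.d` field:

```julia
# Fully devectorized update: three explicit loops plus the inner k-sum
# that the matrix product w[:,:,ti]' * d[:,:,ti2] implies. No slices
# are taken, so no temporary matrices are allocated.
function errprop_loops!(w::Array{Float32,3}, d::Array{Float32,3}, dd::Array{Float32,3})
    fill!(dd, zero(Float32))
    for ti = 1:size(w,3), ti2 = 1:size(d,3)
        for j = 1:size(d,2), i = 1:size(w,2)
            s = zero(Float32)
            for k = 1:size(w,1)
                s += w[k,i,ti] * d[k,j,ti2]   # (w')[i,k] * d[k,j]
            end
            dd[i,j,ti+ti2-1] += s
        end
    end
    dd
end
```

Whether this beats the BLAS gemm! route depends on matrix size; for large matrices BLAS usually wins, as the thread observes.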
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
I don't think this version does what you want it to do. deltas.d[:,:,ti+ti2-1] makes a copy, so deltas.d will be unmodified by gemm!. You can use sub as Viral mentions or you could try ArrayViews.jl. Another thing you could consider is to use a Vector{Matrix{Float32}} instead of Array{Float32,3}. It can be slightly unintuitive but if you index a Vector{Matrix{Float32}} no copy is made. Best regards, Andreas Noack 2014-09-14 15:08 GMT-04:00 Michael Oliver michael.d.oli...@gmail.com: I was using axpy to replace the += only and doing the matrix multiply in the argument to axpy. But you're right gemm! is actually what I should be using (I'm just starting to learn the BLAS library names). Using gemm! the code is now 1.68x faster than my Matlab code (I mean a whole epoch of backprop training)! And down to 40% gc time. My goal of 2x speed up is in sight! I'll look into subArrays next. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), w[:,:,ti], d[:,:,ti2], one(Float32), deltas.d[:,:,ti+ti2-1]) end deltas.d end On Sunday, September 14, 2014 2:18:07 AM UTC-7, Viral Shah wrote: Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn’t you need a gemm? Another way to avoid creating temporary arrays with indexing is to use subArrays, which the linear algebra routines can work with. -viral On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time. You could also potentially hoist the deltas.d out of the loop.
Try something like: function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. dd = deltas.d for ti=1:size(w,3), ti2 = 1:size(d,3) for i=1:size(w,1) for j=1:size(w,2) dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2] end end end deltas.d end -viral On 14-Sep-2014, at 12:47 pm, Michael Oliver michael@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. rg =size(w,2)*size(d,2); for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.axpy!(1,w[:,:,ti]'*d[:,:,ti2],range(1,rg),deltas.d[:,:,ti+ti2-1],range(1,rg)) end deltas.d end On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags.
This function gets called many many times and according to profiling, there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex! and array.jl + function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) deltas.d[:,:,ti+ti2-1] += w[:,:,ti]'*d[:,:,ti2]; end deltas.d end Any advice would be much appreciated! Best, Michael
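Andreas's copy-versus-alias point can be checked with a small sketch (toy arrays, not the thread's data): slicing an `Array{Float32,3}` with `A[:,:,k]` allocates a copy, while indexing a `Vector` of matrices hands back the stored matrix itself.

```julia
# A[:,:,k] on a 3-d array returns a fresh copy, so mutating the
# result does not touch A (and is invisible to an in-place gemm!).
A = zeros(Float32, 2, 2, 3)
B = A[:, :, 1]                # copy
B[1, 1] = 1.0f0
@assert A[1, 1, 1] == 0.0f0   # original untouched

# V[k] on a Vector{Matrix{Float32}} returns the stored matrix itself.
V = [zeros(Float32, 2, 2) for k = 1:3]
M = V[1]                      # no copy: M aliases the stored matrix
M[1, 1] = 1.0f0
@assert V[1][1, 1] == 1.0f0   # mutation is visible through V
```

This is why restructuring the weights and errors as vectors of matrices, as suggested, avoids the per-iteration allocations entirely.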
[julia-users] slow julia version of c code
Hi, I'm trying to port some C++ code to Julia for approximating functions on a sparse grid and have run into a strange 100x slowdown that I can't work out. https://gist.github.com/Zac12345/3da7be1fe99681a5bd14 shows the Julia code and https://github.com/Zac12345/Sparse has the whole module (though building the shared library can be a faff) Though the library uses multi-threading, this tends to only give a 6-7x speedup. Is there anything obvious I'm missing out on here? ps: profiling shows most of the time is spent (obviously) in the innermost loop - line 39 of the gist - but a simple comparison of the Julia/C++ basis functions shows the Julia version to be considerably faster! many thanks
[julia-users] macro expansion for @until
New Julia user here :) Following the scipy/julia tutorial by D. Sanders and playing around with Macros http://nbviewer.ipython.org/github/dpsanders/scipy_2014_julia/blob/master/Metaprogramming.ipynb He has in there an example of macros. I was playing around with that and I ran into a case where I don't understand why the expanded code ends up introducing what seems to be a local variable. And this depends solely on whether `expr2` contains `i += 1` or `i = i + 1`. Can someone take the time to explain why in one case the macro expansion introduces a local variable? And how would one get around that in this case? Thanks, -z In [111]: macro until(expr1, expr2) quote #:( while !($expr1) # code interpolation $expr2 end #) end end In [122]: expr1 = quote i = 0 @until i==10 begin print(i) i += 1 end end; expr2 = quote i = 0 @until i==10 begin print(i) i = i + 1 end end; In [123]: eval(expr1) 0123456789 In [124]: eval(expr2) i not defined while loading In[124], in expression starting on line 1 in anonymous at In[122]:4 In [125]: macroexpand(expr1) Out[125]: quote # In[122], line 3: i = 0 # line 4: begin # In[111], line 4: while !(i == 10) # line 5: begin # In[122], line 5: print(i) # line 6: i += 1 end end end end In [126]: macroexpand(expr2) Out[126]: quote # In[122], line 11: i = 0 # line 12: begin # In[111], line 4: while !(#3677#i == 10) # line 5: begin # In[122], line 13: print(#3677#i) # line 14: #3677#i = #3677#i + 1 end end end end
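The macroexpand output above points at macro hygiene: the hygiene pass treats the plain assignment `i = i + 1` as introducing a macro-local variable and gensym-renames it (the `#3677#i` form), while in this Julia version the `i += 1` spelling happens to escape that renaming. The usual fix is to `esc` the interpolated expressions so they are resolved in the caller's scope. A hedged sketch (not from the tutorial; the helper `count_to_ten` is hypothetical):

```julia
# Hygiene-safe @until: esc() marks the condition and body as belonging
# to the caller's scope, so assignments inside them are not renamed.
macro until(condition, body)
    quote
        while !($(esc(condition)))
            $(esc(body))
        end
    end
end

function count_to_ten()
    i = 0
    @until i == 10 begin
        i = i + 1    # now mutates the caller's i; no gensym renaming
    end
    i
end
```

With this definition both the `i += 1` and the `i = i + 1` forms of the body behave the same way.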
Re: [julia-users] Help needed with creating Julia package
I traced through the precise calls to libflint and libgmp that make it segfault and wrote a few test programs from the msys2 command line, taking Julia right out of the loop. At first I thought that it was only segfaulting when gmp was called from flint. But in fact I can make it segfault calling gmp directly from the program, though only in slightly strange circumstances. If I take the integer a = 123456789, I can compute the square and cube of this without problems. But if I try to compute the fourth power it segfaults. If I call gmp_printf it segfaults. And I checked that the mpz_t I'm trying to print contains valid data and that accessing it doesn't cause a segfault. In fact, the MPIR test suite fails most of its tests. In particular I notice that C only functions pass their tests, but assembly ones don't. I've got a feeling that msys have changed how it reports itself in a way that autotools hasn't kept up with, causing it to compile in linux assembly functions instead of Windows ones. Given that patching the linux assembly file and not the Windows one causes msys2 to not barf on that file, I'd say that's a certainty. Now I just have to hack autotools one more time to recognise msys2 and I should be ok. This is why I don't use autotools in flint. Bill. On Sunday, 14 September 2014 19:21:53 UTC+2, Bill Hart wrote: I checked that the MPIR dll's produced with the updated msys2 also don't work with Julia 0.2. I can't think of many other variables here. It has to be msys2 related. I could try not patching the assembly file it barfs on, but have it built from C fallback code. But I know for a fact that file is not being used in the functions I'm calling that cause it to segfault. I can try uninstalling msys2 completely and reinstalling it and mingw64 and see if I get any joy. On Sunday, 14 September 2014 19:05:55 UTC+2, Bill Hart wrote: On 14 September 2014 17:46, Tony Kelman t...@kelman.net wrote: Unfortunately it doesn't even work on my machine. 
It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? I've just updated the msys2 packages with pacman. Recently I updated just some of the packages and not others, and I thought this might have introduced some incompatibilities. I noticed that if I build MPIR even with msys2 and not via Julia, the resulting dll doesn't work any more. After updating msys2 fully to msys 2.0.1, it still doesn't work. So it looks to me like some new issues have been introduced by the more recent msys or something. Their gas also barfs on one of the assembly files which we now have to patch. This issue affects GMP too, not just MPIR since we use the same code for that file. Might not be compatible with what was used to build Julia. Or there could be some issue due to having gmp and mpir both loaded within the same process? Just to reiterate, it was working fine before. I even went back to the precise configure invocation I used before in my bash history to ensure I was building MPIR the same way as before, and I checked which alpha version of MPIR I used. Admittedly I was using Julia 0.2.1 before, not Julia 0.3, which I have now upgraded to. I can try downloading Julia 0.2 for Windows again and see if that fixes the issue I guess. We've seen sort-of-similar issues with blas and lapack and various packages, though not so much on Windows. The problem must be what libtool is doing on mingw64. Make install doesn't even copy the generated dll across to the install directory, so you have to do this manually. Hm, yeah, libtool can be very picky, especially when it comes to dll's. I think I've used the subtle and quick to anger quote on this list before when talking about libtool... 
I've found configuring with lt_cv_deplibs_check_method=pass_all can sometimes help. I also can't build flint against the Julia mpir and mpfr since Julia doesn't seem to distribute the gmp.h and mpfr.h files, and flint requires these when building. Oh, right, damn. Sorry I forgot about that! That is an issue, maybe we should open a separate discussion on that general topic. It has come up before that various packages would like to link against the same libraries as Julia (in the Blas and Lapack cases we're building with an incompatible ABI which makes this actually not a good idea in most cases, but for GMP and MPFR I think we're configuring them in the default way), so installing the corresponding headers would actually be necessary. Even though I'm not entirely sure what Nemo or Flint do or whether I would personally have a use for them, I have some strange desire to see more Julia
Re: [julia-users] slow julia version of c code
On Sunday, 14 September 2014 at 12:30 -0700, Zac wrote: Hi, I'm trying to port some C++ code to Julia for approximating functions on a sparse grid and have run into a strange 100x slowdown that I can't work out. https://gist.github.com/Zac12345/3da7be1fe99681a5bd14 shows the Julia code and https://github.com/Zac12345/Sparse has the whole module (though building the shared library can be a faff) Though the library uses multi-threading, this tends to only give a 6-7x speedup. Is there anything obvious I'm missing out on here? ps: profiling shows most of the time is spent (obviously) in the innermost loop - line 39 of the gist - but a simple comparison of the Julia/C++ basis functions shows the Julia version to be considerably faster! I've just looked quickly at the code, but I've found two places where you are making copies of arrays, which hurts performance. @inbounds w[1] = A[1] fill!(dA,w[1]) Aold += dA The last line is equivalent to Aold = Aold + dA, so it replaces Aold with the result of the addition. Given the code above, you can just do Aold += A[1]. @inbounds w[G.lvl_l[q]:G.lvl_l[q+1]-1]=A[G.lvl_l[q]:G.lvl_l[q+1]-1]-Aold[G.lvl_l[q]:G.lvl_l[q+1]-1] Here a copy of the extracted slice is taken. See the other thread "Why does this function incur so much garbage collection (and how can I make faster)?". You should be able to use ArrayViews, SubArrays, or refactor your code. Aold += dA Same as the first issue. Better to write this as a loop. Regards
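The `Aold += dA` remark can be made concrete with a toy sketch (hypothetical variables, not the gist's):

```julia
# Aold += dA rebinds Aold to a freshly allocated Aold + dA on every
# call; an explicit elementwise loop mutates Aold with no allocation.
Aold = zeros(3)
dA = ones(3)
for i = 1:length(Aold)
    Aold[i] += dA[i]    # in-place update, nothing allocated
end
```

In a hot inner loop this difference is exactly the kind of per-iteration garbage the profiler was pointing at.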
Re: [julia-users] Help needed with creating Julia package
That was the problem. I've added hack number 40001 to the MPIR autotools to fix this issue. Nemo now passes its tests on Windows 64 for the first time ever. Bill. On Sunday, 14 September 2014 21:40:45 UTC+2, Bill Hart wrote: I traced through the precise calls to libflint and libgmp that make it segfault and wrote a few test programs from the msys2 command line, taking Julia right out of the loop. At first I thought that it was only segfaulting when gmp was called from flint. But in fact I can make it segfault calling gmp directly from the program, though only in slightly strange circumstances. If I take the integer a = 123456789, I can compute the square and cube of this without problems. But if I try to compute the fourth power it segfaults. If I call gmp_printf it segfaults. And I checked that the mpz_t I'm trying to print contains valid data and that accessing it doesn't cause a segfault. In fact, the MPIR test suite fails most of its tests. In particular I notice that C only functions pass their tests, but assembly ones don't. I've got a feeling that msys have changed how it reports itself in a way that autotools hasn't kept up with, causing it to compile in linux assembly functions instead of Windows ones. Given that patching the linux assembly file and not the Windows one causes msys2 to not barf on that file, I'd say that's a certainty. Now I just have to hack autotools one more time to recognise msys2 and I should be ok. This is why I don't use autotools in flint. Bill. On Sunday, 14 September 2014 19:21:53 UTC+2, Bill Hart wrote: I checked that the MPIR dll's produced with the updated msys2 also don't work with Julia 0.2. I can't think of many other variables here. It has to be msys2 related. I could try not patching the assembly file it barfs on, but have it built from C fallback code. But I know for a fact that file is not being used in the functions I'm calling that cause it to segfault.
I can try uninstalling msys2 completely and reinstalling it and mingw64 and see if I get any joy. On Sunday, 14 September 2014 19:05:55 UTC+2, Bill Hart wrote: On 14 September 2014 17:46, Tony Kelman t...@kelman.net wrote: Unfortunately it doesn't even work on my machine. It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? I've just updated the msys2 packages with pacman. Recently I updated just some of the packages and not others, and I thought this might have introduced some incompatibilities. I noticed that if I build MPIR even with msys2 and not via Julia, the resulting dll doesn't work any more. After updating msys2 fully to msys 2.0.1, it still doesn't work. So it looks to me like some new issues have been introduced by the more recent msys or something. Their gas also barfs on one of the assembly files which we now have to patch. This issue affects GMP too, not just MPIR since we use the same code for that file. Might not be compatible with what was used to build Julia. Or there could be some issue due to having gmp and mpir both loaded within the same process? Just to reiterate, it was working fine before. I even went back to the precise configure invocation I used before in my bash history to ensure I was building MPIR the same way as before, and I checked which alpha version of MPIR I used. Admittedly I was using Julia 0.2.1 before, not Julia 0.3, which I have now upgraded to. I can try downloading Julia 0.2 for Windows again and see if that fixes the issue I guess. We've seen sort-of-similar issues with blas and lapack and various packages, though not so much on Windows. The problem must be what libtool is doing on mingw64. 
Make install doesn't even copy the generated dll across to the install directory, so you have to do this manually. Hm, yeah, libtool can be very picky, especially when it comes to dll's. I think I've used the subtle and quick to anger quote on this list before when talking about libtool... I've found configuring with lt_cv_deplibs_check_method=pass_all can sometimes help. I also can't build flint against the Julia mpir and mpfr since Julia doesn't seem to distribute the gmp.h and mpfr.h files, and flint requires these when building. Oh, right, damn. Sorry I forgot about that! That is an issue, maybe we should open a separate discussion on that general topic. It has come up before that various packages would like to link against the same libraries as Julia (in the Blas and Lapack cases we're building with an incompatible ABI which makes this actually not a good idea in most cases, but for GMP and MPFR I think we're configuring them in
[julia-users] Re: Control system library for Julia?
Hello again Uwe! It's fun running into someone I know on a language geek forum :) I'm helping one of our bachelor's students implement an LQR controller on our carousel in Freiburg. It's an ugly hack, but I'm calling an Octave script to recompute the feedback gains online. Octave wraps slicot, so perhaps wrapping slicot is the way to go for some functions, if the licenses are compatible. Personally, I have a burning desire for a better language we can actually do control in (rust?). I doubt Julia qualifies due to the garbage collection, but does anyone know if Julia has some sort of way to JIT Julia expressions to code that does not have any garbage collection? If so, is there a way to export them as object files and link against them from C? Then you'd still have to write glue code in a systems language, but at least the implementation of the controller wouldn't have to cross a language boundary... Cheers, Andrew On Thursday, February 20, 2014 10:56:20 PM UTC+1, Uwe Fechner wrote: Hello, I could not find any control system library for Julia yet. Would that make sense? There is a control system library available for Python: http://www.cds.caltech.edu/~murray/wiki/index.php/Python-control Perhaps this could be used as a starting point? I think that implementing this in Julia should be easier and faster than in Python. Any comments? Should I open a feature request? Uwe Fechner, TU Delft, The Netherlands
Re: [julia-users] Lint.jl status update
I've definitely been meaning to work on integrating this with Sublime-IJulia. Hopefully in the next week or so. -Jacob On Sun, Sep 14, 2014 at 10:19 AM, Adam Smith swiss.army.engin...@gmail.com wrote: This looks awesome. Regarding the Array parameter issue (which I'm really glad to see in the linter; this issue really tripped me up when learning Julia), if https://github.com/JuliaLang/julia/issues/6984 ever finds a resolution, it would be great to suggest that new syntax in the lint message. Then if the linter becomes common-place, beginners that have never heard of type variance will have a path to understanding. On Sunday, September 14, 2014 1:38:50 AM UTC-4, Tony Fong wrote: That's a good question. They can be used together, obviously. I can easily speak for Lint. The key trade-off made in Lint is that it does not strive for very in-depth type analysis. The focus is finding dodgy AST, where it is located in the source file, and with a bit of explanation around issues. The analyses are done recursively in a very small neighborhood around each node in the AST, although the locality issue has improved somewhat with the new type-tracking ability. The type guessing and tracking could leverage Typecheck.jl, only possible since about last week (with the new features), and it's a very exciting prospect. Lint already provides functionality to return an array of lint messages (from a file, a code snippet, or a module), so it could be used in IDE integration I suppose. Tony On Sunday, September 14, 2014 10:08:09 AM UTC+7, Spencer Russell wrote: Any comments on how Lint.jl and @astrieanna's also-awesome TypeCheck.jl relate? Are you two working together, or are there different use cases for the two libraries? peace, s On Sat, Sep 13, 2014 at 3:34 PM, Tony Fong tony.h...@gmail.com wrote: Fellow Julians, I think it is time to post an update on Lint.jl https://github.com/tonyhffong/Lint.jl, as it has improved quite a bit from the initial version I started about 3 months ago.
Notable new features - Local variable type tracking, which enables a range of features, such as - Variable type stability warning within a function scope. - Incompatibility between type assertion and assignment - Omission of returning the constructed object in a type constructor - Check the call signature of a selected set of methods involving collections (push!, append!, etc.) - More function checks, such as - repeated arguments - wrong signatures, e.g. f( x::Array{Number,1} ) - Misspelled constructor (calls new but the function name doesn't match the enclosing type) - Ability to silence lint warning via lintpragma() function, e.g. - lintpragma( Ignore unstable type variable [variable name] ) - lintpragma( Ignore Unused [variable name] ) Also, there is now quite a range of test scripts showing sample codes with lint problems, so it's easy to grep your own lint warnings in that folder and see a distilled version of the issue. Again, please let me know about gaps and false positives. Tony
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
You are 100% right Andreas. Using ArrayViews seems to work and now the code is 2.5x faster than my Matlab code! Any obvious problems with this? function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), view(w,:,:,ti), view(d,:,:,ti2), one(Float32), view(deltas.d,:,:,ti+ti2-1)) end deltas.d end Still around 30% gc time, so I'll have to do some more profiling to determine the source. On Sunday, September 14, 2014 12:29:11 PM UTC-7, Andreas Noack wrote: I don't think this version does what you want it to do. deltas.d[:,:,ti+ti2-1] makes a copy, so deltas.d will be unmodified by gemm!. You can use sub as Viral mentions or you could try ArrayViews.jl. Another thing you could consider is to use a Vector{Matrix{Float32}} instead of Array{Float32,3}. It can be slightly unintuitive but if you index a Vector{Matrix{Float32}} no copy is made. Best regards, Andreas Noack 2014-09-14 15:08 GMT-04:00 Michael Oliver michael@gmail.com: I was using axpy to replace the += only and doing the matrix multiply in the argument to axpy. But you're right gemm! is actually what I should be using (I'm just starting to learn the BLAS library names). Using gemm! the code is now 1.68x faster than my Matlab code (I mean a whole epoch of backprop training)! And down to 40% gc time. My goal of 2x speed up is in sight! I'll look into subArrays next. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), w[:,:,ti], d[:,:,ti2], one(Float32), deltas.d[:,:,ti+ti2-1]) end deltas.d end On Sunday, September 14, 2014 2:18:07 AM UTC-7, Viral Shah wrote: Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn’t you need a gemm?
Another way to avoid creating temporary arrays with indexing is to use SubArrays, which the linear algebra routines can work with. -viral On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time. You could also potentially hoist the deltas.d out of the loop. Try something like: function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. dd = deltas.d for ti=1:size(w,3), ti2 = 1:size(d,3) for i=1:size(w,1) for j=1:size(w,2) dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2] end end end deltas.d end -viral On 14-Sep-2014, at 12:47 pm, Michael Oliver michael@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. rg = size(w,2)*size(d,2); for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.axpy!(1,w[:,:,ti]'*d[:,:,ti2],range(1,rg),deltas.d[:,:,ti+ti2-1],range(1,rg)) end deltas.d end On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. 
-viral On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays. w and d are collections of weights and errors respectively for different time lags. This function gets called many many times and according to profiling, there is a lot of garbage collection being induced by the fourth line, specifically within multidimensional.jl getindex and setindex! and array.jl.
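The copy-versus-view distinction this thread keeps circling back to can be seen directly. A minimal sketch (using `view`, which ArrayViews.jl provided at the time and which lives in Base since Julia 0.4):

```julia
# Slicing with A[:,:,k] allocates a copy, so in-place updates on the
# slice never reach A; view(A, :, :, k) shares A's memory, so they do.
A = zeros(Float32, 2, 2, 3)

c = A[:, :, 1]          # copy
c[1, 1] = 1f0
println(A[1, 1, 1])     # still 0.0 -- the copy did not modify A

s = view(A, :, :, 2)    # view into A
s[1, 1] = 1f0
println(A[1, 1, 2])     # 1.0 -- the view wrote through to A
```

This is exactly why Andreas warns below that passing `deltas.d[:,:,ti+ti2-1]` to `gemm!` silently does nothing to `deltas.d`.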
Re: [julia-users] slow julia version of c code
Thanks for the reply, I was under the misapprehension that the arrayviews improvements were already committed to 0.4. Unfortunately cleaning up those lines doesn't give great gains.
[julia-users] Re: constants and containers
My original intention was to ask if there was any way we could declare a const array whose elements are also constants. Since the following is possible: julia const a = [1,2,3] 3-element Array{Int64,1}: 1 2 3 julia a[1] = 2 2 and it would be useful to have arrays that are as constant as a variable can be, without the need of declaring a new immutable type. For instance, can we have an immutable Dict, whose fields AND their values are const? Const arrays would be nice though. On Monday, September 15, 2014 12:09:25 AM UTC+10, gael@gmail.com wrote: I may have missed something but wouldn't immutable t x y end type u x y end work? julia myvar = t(1,2) julia myvar.x=5 ERROR: type t is immutable julia v = u(t(1,2), t(3,4)) u(t(1,2),t(3,4)) julia v.x t(1,2) v.x=t(5,6) t(5,6) v.x.x=42 ERROR: type t is immutable If you really want to guarantee constant fields, you have to type them to some constant type.
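A short sketch of the behavior being discussed: `const` constrains the binding of the name, not the contents of the container, so a `const` array's elements stay freely mutable, whereas a tuple (or an immutable type, as suggested above) gives genuinely frozen values:

```julia
# const fixes the name `a` to this particular array (and its type),
# but the elements remain mutable.
const a = [1, 2, 3]
a[1] = 99            # allowed -- only rebinding `a` itself is restricted
println(a[1])        # 99

# A tuple is immutable: `t[1] = 99` would throw an error.
const t = (1, 2, 3)
println(t[1])        # 1
```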
Re: [julia-users] slow julia version of c code
Sadly I've already used these!
Re: [julia-users] slow julia version of c code
Have you looked at the output of code_typed? Maybe you could paste it in a gist; it's really hard to give any more advice without being able to run the code directly. On Sunday, September 14, 2014 6:36:21 PM UTC-4, Zac wrote: Sadly I've already used these!
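For reference, a minimal `code_typed` call looks like this (toy function for illustration; in real code the thing to look for is variables inferred as `Any` or `Union` types):

```julia
# code_typed returns the type-inferred code for the given argument
# types; loosely-typed variables in its output flag type instability.
f(x) = x + one(x)
ct = code_typed(f, (Int,))
println(length(ct))   # one matching method instance
```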
Re: [julia-users] macro expansion for @until
You need to call esc on the expressions to provide macro hygiene. Sorry for the brief reply, but let me know if that helps. Btw, did you mean to include output from macroexpand above? If so, it doesn't seem to have gotten included. On Sun, Sep 14, 2014 at 9:28 PM, Zouhair Mahboubi zouhair.mahbo...@gmail.com wrote: New Julia user here :) Following the scipy/julia tutorial by D. Sanders and playing around with Macros http://nbviewer.ipython.org/github/dpsanders/scipy_2014_julia/blob/master/Metaprogramming.ipynb He has in there an example of macros. I was playing around with that and I ran into a case where I don't understand why the expanded code ends up introducing what seems to be a local variable. And this depends solely on whether `expr2` contains `i += 1` or `i = i + 1`. Can someone take the time to explain why in one case the macro expansion introduces a local variable? And how would one get around that in this case? Thanks, -z In [111]: macro until(expr1, expr2) quote #:( while !($expr1) # code interpolation $expr2 end #) end end In [122]: expr1 = quote i = 0 @until i==10 begin print(i) i += 1 end end; expr2 = quote i = 0 @until i==10 begin print(i) i = i + 1 end end; In [123]: eval(expr1) 0123456789 In [124]: eval(expr2) i not defined while loading In[124], in expression starting on line 1 in anonymous at In[122]:4 In [125]: macroexpand(expr1) Out[125]: quote # In[122], line 3: i = 0 # line 4: begin # In[111], line 4: while !(i == 10) # line 5: begin # In[122], line 5: print(i) # line 6: i += 1 end end end end In [126]: macroexpand(expr2) Out[126]: quote # In[122], line 11: i = 0 # line 12: begin # In[111], line 4: while !(#3677#i == 10) # line 5: begin # In[122], line 13: print(#3677#i) # line 14: #3677#i = #3677#i + 1 end end end end
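Following the esc advice above, a sketch of the macro with the escapes applied, so the caller's `i` (rather than a hygienic gensym like `#3677#i`) is the variable the loop reads and writes:

```julia
# esc() marks the interpolated expressions as belonging to the caller's
# scope, so macro hygiene does not rename the variables inside them.
macro until(cond, body)
    quote
        while !($(esc(cond)))
            $(esc(body))
        end
    end
end

# Inside a function so the loop variable is an ordinary local.
function count_to(n)
    i = 0
    @until i == n begin
        i += 1
    end
    return i
end

println(count_to(10))   # 10
```

With `esc`, both `i += 1` and `i = i + 1` forms of the body work, because neither is rewritten to a gensym.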
Re: [julia-users] Lint.jl status update
We can get really nice integration for Lint.jl and Light Table – I've actually already got some of the GUI parts worked out, so it won't be crazy difficult to do. Quick question: How's Lint.jl's support for modules? For LT it's pretty essential that the API looks like (block of code as text) + (file/line info) + (module) = (warnings/errors). Of course, it's fine if I can wrap Lint.jl's existing functionality to have that, but AFAICT it currently only works in terms of whole files. On Sunday, 14 September 2014 01:12:49 UTC-4, Viral Shah wrote: I wonder if these can be integrated into LightTable and IJulia, so that they always automatically are running in the background on all code one writes. -viral On Sunday, September 14, 2014 8:38:09 AM UTC+5:30, Spencer Russell wrote: Any comments on how Lint.jl and @astrieanna's also-awesome TypeCheck.jl relate? Are you two working together, or are there different use cases for the two libraries? peace, s On Sat, Sep 13, 2014 at 3:34 PM, Tony Fong tony.h...@gmail.com wrote: Fellow Julians, I think it is time to post an update on Lint.jl https://github.com/tonyhffong/Lint.jl, as it has improved quite a bit from the initial version I started about 3 months ago. Notable new features - Local variable type tracking, which enables a range of features, such as - Variable type stability warning within a function scope. - Incompatibility between type assertion and assignment - Omission of returning the constructed object in a type constructor - Check the call signature of a selected set of methods with collection (push!, append!, etc.) - More function checks, such as - repeated arguments - wrong signatures, e.g. f( x::Array{Number,1} ) - Misspelled constructor (calls new but the function name doesn't match the enclosing type) - Ability to silence lint warning via lintpragma() function, e.g. 
- lintpragma( Ignore unstable type variable [variable name] ) - lintpragma( Ignore Unused [variable name] ) Also, there is now quite a range of test scripts showing sample code with lint problems, so it's easy to grep your own lint warnings in that folder and see a distilled version of the issue. Again, please let me know about gaps and false positives. Tony
[julia-users] Re: macro expansion for @until
On Sunday, September 14, 2014 at 14:28:47 UTC-5, Zouhair Mahboubi wrote: New Julia user here :) Following the scipy/julia tutorial by D. Sanders and playing around with Macros http://nbviewer.ipython.org/github/dpsanders/scipy_2014_julia/blob/master/Metaprogramming.ipynb I'm glad you find the tutorial useful! But it's certainly far from perfect and needs some modifications, which I will be happy to implement if you send a Pull Request! He has in there an example of macros. I was playing around with that and I ran into a case where I don't understand why the expanded code ends up introducing what seems to be a local variable. And this depends solely on whether `expr2` contains `i += 1` or `i = i + 1`. Can someone take the time to explain why in one case the macro expansion introduces a local variable? And how would one get around that in this case? Thanks, -z In [111]: macro until(expr1, expr2) quote #:( while !($expr1) # code interpolation $expr2 end #) end end In [122]: expr1 = quote i = 0 @until i==10 begin print(i) i += 1 end end; expr2 = quote i = 0 @until i==10 begin print(i) i = i + 1 end end; In [123]: eval(expr1) 0123456789 In [124]: eval(expr2) i not defined while loading In[124], in expression starting on line 1 in anonymous at In[122]:4 In [125]: macroexpand(expr1)
Re: [julia-users] Why does this function incur so much garbage collection (and how can I make faster) ?
No, I think it looks right. There is a small bit of allocation when you create the view, which might be the reason for the gc time. Try profiling and see where the time is spent. If most of the time is spent within gemm! then there is probably not much more to optimize. Kind regards, Andreas Noack 2014-09-14 17:11 GMT-04:00 Michael Oliver michael.d.oli...@gmail.com: You are 100% right Andreas. Using ArrayViews seems to work and now the code is 2.5x faster than my Matlab code! Any obvious problems with this? function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), view(w,:,:,ti), view(d,:,:,ti2), one(Float32), view(deltas.d,:,:,ti+ti2-1)) end deltas.d end Still around 30% gc time, so I'll have to do some more profiling to determine the source. On Sunday, September 14, 2014 12:29:11 PM UTC-7, Andreas Noack wrote: I don't think this version does what you want it to do. deltas.d[:,:,ti+ti2-1] makes a copy so deltas.d will be unmodified by gemm!. You can use sub as Viral mentions or you could try ArrayViews.jl. Another thing you could consider is to use a Vector{Matrix{Float32}} instead of Array{Float32,3}. It can be slightly unintuitive but if you index a Vector{Matrix{Float32}} no copy is made. Kind regards, Andreas Noack 2014-09-14 15:08 GMT-04:00 Michael Oliver michael@gmail.com: I was using axpy to replace the += only and doing the matrix multiply in the argument to axpy. But you're right gemm! is actually what I should be using (I'm just starting to learn the BLAS library names). Using gemm! the code is now 1.68x faster than my Matlab code (I mean a whole epoch of backprop training)! And down to 40% gc time. My goal of 2x speed up is in sight! I'll look into SubArrays next. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. 
for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.gemm!('T', 'N', one(Float32), w[:,:,ti], d[:,:,ti2], one(Float32), deltas.d[:,:,ti+ti2-1]) end deltas.d end On Sunday, September 14, 2014 2:18:07 AM UTC-7, Viral Shah wrote: Oh never mind - I see that you have a matrix multiply there that benefits from calling BLAS. If it is a matrix multiply, how come you can get away with axpy? Shouldn't you need a gemm? Another way to avoid creating temporary arrays with indexing is to use SubArrays, which the linear algebra routines can work with. -viral On 14-Sep-2014, at 2:43 pm, Viral Shah vi...@mayin.org wrote: That is great! However, by devectorizing, I meant writing the loop statement itself as two more loops, so that you end up with 3 nested loops effectively. You basically do not want all those w[:,:,ti] calls that create matrices every time. You could also potentially hoist the deltas.d out of the loop. Try something like: function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. dd = deltas.d for ti=1:size(w,3), ti2 = 1:size(d,3) for i=1:size(w,1) for j=1:size(w,2) dd[i,j,ti+ti2-1] += w[i,j,ti]'*d[i,j,ti2] end end end deltas.d end -viral On 14-Sep-2014, at 12:47 pm, Michael Oliver michael@gmail.com wrote: Thanks Viral for the quick reply, that's good to know. I was able to squeeze a little more performance out with axpy (see below). I tried devectorizing the inner loop, but it was much slower, I believe because it was no longer taking full advantage of MKL for the matrix multiply. So far I've got the code running at 1.4x what I had in Matlab and according to @time I still have 44.41% gc time. So 0.4 can't come soon enough! Great work guys, I'm really enjoying learning Julia. function errprop!(w::Array{Float32,3}, d::Array{Float32,3}, deltas) deltas.d[:] = 0. 
rg = size(w,2)*size(d,2); for ti=1:size(w,3), ti2 = 1:size(d,3) Base.LinAlg.BLAS.axpy!(1,w[:,:,ti]'*d[:,:,ti2],range(1,rg),deltas.d[:,:,ti+ti2-1],range(1,rg)) end deltas.d end On Saturday, September 13, 2014 10:10:25 PM UTC-7, Viral Shah wrote: The garbage is generated from the indexing operations. In 0.4, we should have array views that should solve this problem. For now, you can either manually devectorize the inner loop, or use the @devectorize macros in the Devectorize package, if they work out in this case. -viral On Sunday, September 14, 2014 10:34:45 AM UTC+5:30, Michael Oliver wrote: Hi all, I've implemented a time delay neural network module and have been trying to optimize it now. This function is for propagating the error backwards through the network. The deltas.d is just a container for holding the errors so I can do things in place and don't have to keep initializing arrays.
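As a sanity check of the call shape settled on in this thread: `gemm!('T', 'N', α, A, B, β, C)` computes `C = α*A'*B + β*C` fully in place. A tiny sketch with plain matrices (in the thread the arguments are views into the 3-d arrays; `Base.LinAlg.BLAS` in 2014-era Julia is `LinearAlgebra.BLAS` from 0.7 onward):

```julia
using LinearAlgebra  # Base.LinAlg in the Julia of this thread

w   = ones(Float32, 2, 2)
d   = ones(Float32, 2, 2)
out = zeros(Float32, 2, 2)

# out = 1 * w' * d + 1 * out, with no temporary arrays allocated.
BLAS.gemm!('T', 'N', one(Float32), w, d, one(Float32), out)
println(out)   # every entry is 2.0: each dot product sums two 1.0s
```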
Re: [julia-users] Help needed with creating Julia package
I've compiled many many dozens of libraries using MinGW-w64 on MSYS, and cross-compiling from Cygwin and various Linux distributions, and not encountered nearly as many problems as you seem to be. Pretending to be autotools but hacking it up yourself is the worst possible thing you could do - you're completely nonstandard and nonfunctional in cross-compilation for example, and you have as many or more dependencies - requiring bash for example is often not acceptable on FreeBSD and similar systems. Part of the problem is the way MSYS2 identifies itself changes depending on what environment variables you have set (or which bat file you start it with). The MSYS uname is unrecognized by autotools because MSYS2 is new, and you almost never want to actually be compiling with the MSYS host, since that causes your application to link to the msys-2.0.dll posix compatibility layer. It's like using the standard gcc in Cygwin, any users would need Cygwin and the associated GPL posix runtime library to use the compiled code. The MINGW64 uname is also new and unrecognized, and totally nonstandard. This is why you should be querying the COMPILER for information about what system the generated code is supposed to run on, NOT the build environment. The accepted GNU triples for MinGW are: i686-pc-mingw32: MinGW.org 32 bit compiler, outdated and obsolete, don't use this i686-w64-mingw32: 32 bit compiler from MinGW-w64 project, use this for 32 bit host x86_64-w64-mingw32: 64 bit compiler from MinGW-w64 project, use this for 64 bit host Yes the names are confusing and make no sense, but this is the GNU standard that every other conforming project in the world is using. LLVM decided to go out on a limb and rearrange some of these names recently, adding to the confusion, but GCC, autotools, and MinGW are unlikely to follow their lead. On Sunday, September 14, 2014 1:21:42 PM UTC-7, Bill Hart wrote: That was the problem. 
I've added hack number 40001 to the MPIR autotools to fix this issue. Nemo now passes its tests on Windows 64 for the first time ever. Bill. On Sunday, 14 September 2014 21:40:45 UTC+2, Bill Hart wrote: I traced through the precise calls to libflint and libgmp that make it segfault and wrote a few test programs from the msys2 command line, taking Julia right out of the loop. At first I thought that it was only segfaulting when gmp was called from flint. But in fact I can make it segfault calling gmp directly from the program, though only in slightly strange circumstances. If I take the integer a = 123456789, I can compute the square and cube of this without problems. But if I try to compute the fourth power it segfaults. If I call gmp_printf it segfaults. And I checked that the mpz_t I'm trying to print contains valid data and that accessing it doesn't cause a segfault. In fact, the MPIR test suite fails most of its tests. In particular I notice that C only functions pass their tests, but assembly ones don't. I've got a feeling that msys2 has changed how it reports itself in a way that autotools hasn't kept up with, causing it to compile in Linux assembly functions instead of Windows ones. Given that patching the Linux assembly file and not the Windows one causes msys2 to not barf on that file, I'd say that's a certainty. Now I just have to hack autotools one more time to recognise msys2 and I should be ok. This is why I don't use autotools in flint. Bill. On Sunday, 14 September 2014 19:21:53 UTC+2, Bill Hart wrote: I checked that the MPIR dll's produced with the updated msys2 also don't work with Julia 0.2. I can't think of many other variables here. It has to be msys2 related. I could try not patching the assembly file it barfs on, but have it built from C fallback code. But I know for a fact that file is not being used in the functions I'm calling that cause it to segfault. 
I can try uninstalling msys2 completely and reinstalling it and mingw64 and see if I get any joy. On Sunday, 14 September 2014 19:05:55 UTC+2, Bill Hart wrote: On 14 September 2014 17:46, Tony Kelman to...@kelman.net wrote: Unfortunately it doesn't even work on my machine. It seems ok for some calls into the dll, but as soon as I try to say print something using a function in the MPIR dll it segfaults. I suppose it must be linked against subtly different Microsoft dll's than Julia and some kind of conflict results. Or maybe different GCC dll's. Exactly which version of MinGW-w64 are you using? I've just updated the msys2 packages with pacman. Recently I updated just some of the packages and not others, and I thought this might have introduced some incompatibilities. I noticed that if I build MPIR even with msys2 and not via Julia, the resulting dll doesn't work any more. After updating msys2 fully to msys 2.0.1, it still doesn't work. So it looks to me like some new issues have been
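The compiler-driven dispatch Tony describes can be sketched in shell: `gcc -dumpmachine` prints the compiler's configured GNU target triple, which is what a build system should branch on instead of `uname` (the fallback value below is only for illustration on machines without gcc):

```shell
#!/bin/sh
# Ask the compiler, not the build environment, what the generated
# code will run on.
triple=$(gcc -dumpmachine 2>/dev/null || echo x86_64-w64-mingw32)

case "$triple" in
  i686-w64-mingw32)   echo "32-bit Windows target (MinGW-w64)" ;;
  x86_64-w64-mingw32) echo "64-bit Windows target (MinGW-w64)" ;;
  i686-pc-mingw32)    echo "obsolete MinGW.org toolchain" ;;
  *)                  echo "other target: $triple" ;;
esac
```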