Matt Godbolt's Compiler Explorer supports Mir libraries
Hi all, Compiler Explorer [1] is an interactive compiler. The left-hand pane shows the editable code; the right pane shows the assembly output of compiling that code with a given compiler and settings.

HowTo:
1. Open the site [1].
2. Press the 'Libraries' button in the right pane and pick the required libraries along with all their dependencies. For example, mir-algorithm (trunk) and its dependency mir-core (trunk).
3. Pick the LDC compiler.
4. Use LDC's -mtriple= or -mcpu= flags to pick the target you want. LDC can cross-compile for ARM CPUs.
5. Add compiler flags like -O -release -boundscheck=off -mcpu=native.
6. Paste your code into the left pane.
7. Enjoy!

[1] https://d.godbolt.org/
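As a quick sanity check of the steps above, something like the following (a hypothetical snippet, not from the original post) can be pasted into the left pane; with -O -release -mcpu=native you can watch how LDC vectorizes the loop, and once the Mir libraries are enabled you can `import mir.ndslice;` as well:

```d
// Paste into the left pane; compile with: -O -release -mcpu=native
double sum(double[] a)
{
    double s = 0;
    foreach (x; a)
        s += x;
    return s;
}
```

Changing -mcpu= (or -mtriple=) and recompiling shows how the generated assembly changes for each target.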
Re: DIP 1030-- Named Arguments--Formal Assessment
On Monday, 7 December 2020 at 12:13:35 UTC, zoujiaqing wrote: On Thursday, 17 September 2020 at 12:58:06 UTC, Mike Parker wrote: [...] Very practical features, similar to trailing closures, are expected. Implementing the DIP has been added to the list of possible GSoC 2021 projects. Hopefully D is selected this year. https://github.com/dlang/projects/issues/76 It will also make porting Python libraries to D easier, as it minimizes coding differences. Kind regards, Andre
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote: On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: [snip] "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations. Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway. It is an optimization, maybe also for accuracy, dunno. So, instead of ending up with a transform from coordinate system A to B, you also get the transform from B to A for cheap. This may matter when the next step is to go from B to C... And so on...
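A minimal sketch of the technique described in the thread (a hypothetical 2x2 example, not code from any of the posts): each time a transform with a known inverse is applied on the left of A, its inverse is composed on the right of A⁻¹, so both directions stay available without ever calling a matrix-inversion routine:

```d
// Hypothetical 2x2 row-major matrices stored as [a, b, c, d] == [[a, b], [c, d]]
double[4] mul(double[4] x, double[4] y)
{
    return [x[0] * y[0] + x[1] * y[2], x[0] * y[1] + x[1] * y[3],
            x[2] * y[0] + x[3] * y[2], x[2] * y[1] + x[3] * y[3]];
}

void main()
{
    double[4] a    = [1, 0, 0, 1]; // forward transform, starts as identity
    double[4] ainv = [1, 0, 0, 1]; // its inverse, maintained alongside

    // apply a scale S = diag(2, 4); its inverse diag(0.5, 0.25) is known up front
    double[4] s    = [2, 0, 0, 4];
    double[4] sinv = [0.5, 0, 0, 0.25];
    a    = mul(s, a);       // A' = S * A
    ainv = mul(ainv, sinv); // A'^-1 = A^-1 * S^-1

    // the pair stays consistent: A * A^-1 == I, no inversion routine needed
    assert(mul(a, ainv) == [1.0, 0.0, 0.0, 1.0]);
}
```

For elementary transforms (scales, rotations, translations) the inverse of each step is known in closed form, which is what makes the B-to-A direction essentially free.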
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: [snip] "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations. Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote: [snip] Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is as if there's no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D. "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote: [snip] Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is as if there's no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D. "no need to calculate inverse matrix" What? Since when?
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer wrote: On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote: I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have a quite universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead. I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design. For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance - it wasn't obvious to me. The ndslice tensor type uses exactly one iterator. However, the iterator is generic, and lazy iterators may contain any number of other iterators and pointers.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote: I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have a quite universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead. I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design. For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance - it wasn't obvious to me. I think literature that discusses various design choices and approaches would be useful and informative. There is plenty of literature on creating tree structures, linked lists, stacks, queues, hash tables, and so forth, but virtually nothing on tensor data structures. It isn't as if implementing a linked list is any more complex than a tensor. I just think it's a bit strange that there is so little on the topic, given the widespread use of tensors in computational science.
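By analogy (plain Phobos ranges rather than mir itself, so a sketch of the idea only): the design 9il describes is the same one D's lazy ranges use, where the user-provided kernel is baked into the iterator's type so the compiler can inline it with no call overhead and no intermediate allocation:

```d
import std.range : iota;
import std.algorithm : map, sum;

void main()
{
    // iota is a lazy index iterator; map fuses the user's kernel into the
    // resulting range's type at compile time - nothing is materialized
    auto t = iota(9).map!(i => i * i);
    assert(t.sum == 204); // 0 + 1 + 4 + 9 + 16 + 25 + 36 + 49 + 64
}
```

In ndslice the same composition happens in N dimensions: the single generic iterator of a Slice may itself wrap other iterators and pointers, which is how one lazy tensor can be built from others without extra arrays.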
Re: DIP 1030-- Named Arguments--Formal Assessment
On Thursday, 17 September 2020 at 12:58:06 UTC, Mike Parker wrote: DIP 1030, "Named Arguments", has been accepted. During the assessment, Walter and Atila had a discussion regarding this particular criticism: https://forum.dlang.org/post/mailman.1117.1581368593.31109.digitalmar...@puremagic.com "Named arguments breaks this very important pattern: auto wrapper(alias origFun)(Parameters!origFun args) { // special sauce return origFun(args); }" They say that, though it's true that `Parameters!func` will not work in a wrapper, it "doesn't really work now": default arguments and storage classes must be accounted for. This can be done with string mixins, or using a technique referred to by Jean-Louis Leroy as "refraction", both of which are clumsy. So they decided that a new `std.traits` template and a corresponding `__traits` option are needed which expand into the exact function signature of another function. They also acknowledge that when an API's parameter names change, code depending on the old parameter names will break. Struct literals have the same problem and no one complains (the same is true for C99). And in any case, when such a change occurs, it's a hard failure as any code using named arguments with the old parameter names will fail to compile, making it easy to see how to resolve the issue. Given this, they find the benefits of the feature outweigh the potential for such breakage. Very practical features, similar to trailing closures, are expected.
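A minimal sketch of the call syntax the accepted DIP describes (hypothetical function and parameter names, and it requires a compiler that implements the DIP):

```d
// Hypothetical function; DIP 1030 lets call-site arguments be labeled
// with the parameter names
int area(int width, int height)
{
    return width * height;
}

void main()
{
    // named arguments may also be reordered at the call site
    assert(area(height: 3, width: 4) == 12);
}
```

This is also where the breakage discussed above would surface: renaming `width` or `height` in the API makes the labeled call fail to compile, which is a hard, easily diagnosed error rather than a silent behavior change.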
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote: On Sunday, 6 December 2020 at 17:30:13 UTC, data pulverizer wrote: On Saturday, 5 December 2020 at 07:44:33 UTC, 9il wrote: sweep_ndslice uses (2*N - 1) arrays to index U, which allows LDC to unroll the loop. I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have a quite universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead. Agreed. As a matter of fact, the simplest convolutions of tensors are out of date. It is as if there's no need to calculate an inverse matrix. Mir is useful work for its author, of course, but in practice it is almost never used. Everyone who needs something fast for their own tasks has to implement the same things again in D.
Re: DConf Online Video & Slide Links
On Monday, 7 December 2020 at 10:29:24 UTC, Mike Parker wrote: After a brief respite, I've gotten back to work. I've just updated the DConf Online site with links to all the slides (including Mathis Beer's) and prerecorded videos. https://dconf.org/2020/online/index.html
DConf Online Video & Slide Links
After a brief respite, I've gotten back to work. I've just updated the DConf Online site with links to all the slides (including Mathis Beer's) and prerecorded videos. In the coming days, I'll add the slide links to the video descriptions on YouTube and chop up the Q & A livestreams into individual videos for a Q & A playlist. The full streams won't go anywhere, though.
Re: BeerConf Mid-December Edition
On Monday, 7 December 2020 at 08:35:35 UTC, Iain Buclaw wrote: So one more time for 2020, grab your best-loved beverages and revered D topics, and join us December 19-20th to celebrate all that we've collectively achieved this year, before finally banishing 2020 into history's dustbin (and sanitizing it twice for good measure). Woohoo! I think I'll exchange the cider for actual beer this time.
BeerConf Mid-December Edition
Happy Monday everyone, This month, as the last weekend falls on Boxing Day, we'll be signing off the year a week earlier than usual. So one more time for 2020, grab your best-loved beverages and revered D topics, and join us December 19-20th to celebrate all that we've collectively achieved this year, before finally banishing 2020 into history's dustbin (and sanitizing it twice for good measure). As always, a link will be posted to the stream on Saturday. Stay safe! Iain.