On Monday, 16 November 2015 at 22:45:35 UTC, Jack Stouffer wrote:
docs: http://dtest.thecybershadow.net/results/bac6211c1d73b2cf62bc18c9844c8c82c92c21e1/5c6071ca953cf113febd8786b4b68916cbb2cdaf/

I have already posted some things on GitHub, so if you will indulge me singing the praises of this addition for a moment:

Something dawned on me while I was scanning the numpy documentation, looking for anything obvious that this PR missed: D really made the right bet when it put a lot of range code in the standard library. To make numpy useful at all, its authors had to recreate parts of the Python standard library in C so that those functions could take advantage of the type information numpy arrays provide, whereas all of std.algorithm and std.range work with this with no problem.

For example, to create a zero-initialized n-d array in numpy, you have to use the numpy function np.zeros((10, 5)), but with this you can write repeat(0, 50).sliced(10, 5) with no performance penalty. To take the sum along an axis in numpy, you can use Python's built-in sum like sum(array[:, 0]), but it's slow, so you have to remember the special case that you need array[:, 0].sum() instead to take advantage of numpy's types. With this it's just array[0..$, 0].sum, with the same function you always use, in the way you always use it. There are a ton of these special cases, and the consequence is that it's very hard to mix numpy and non-numpy code without incurring penalties or a lot of pain.
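To make that concrete, here is a minimal sketch of both examples side by side (assuming the std.experimental.ndslice module name and API from this PR):

import std.experimental.ndslice : sliced; // module name assumed from the PR
import std.range : repeat;
import std.algorithm.iteration : sum;

void main()
{
    // zero-initialized 10x5 view, built from a plain Phobos range
    auto zeros = repeat(0, 50).sliced(10, 5);

    // sum of the first column with the ordinary std.algorithm sum --
    // no special member function to remember
    auto colSum = zeros[0 .. $, 0].sum;
    assert(colSum == 0);
}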

Also, this is a great showcase of how powerful D templates are: this is entirely a library solution, whereas numpy is mountains of C code glued to Python. So I think this can be more powerful than numpy, because it fits naturally into the rest of your code base and works efficiently with any range-based code you have already written.
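As a quick illustration of that composability (again assuming the ndslice API from this PR), existing range algorithms apply to a slice directly, since a 2-d slice is just a range of 1-d row slices:

import std.experimental.ndslice : sliced;
import std.range : iota;
import std.algorithm.iteration : map, sum;

void main()
{
    // a lazy 3x4 matrix over iota -- no allocation, no C glue
    auto matrix = iota(12).sliced(3, 4);

    // per-row sums with plain std.algorithm, nothing library-specific
    auto rowSums = matrix.map!(row => row.sum);
    assert(rowSums.sum == 66); // 0 + 1 + ... + 11
}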
