Re: Beginner's Comparison Benchmark
On Tuesday, 5 May 2020 at 20:07:54 UTC, RegeleIONESCU wrote: [...] Python should be ruled out, this is not its war :) I have done benchmarks against NumPy if you are interested: https://github.com/tastyminerals/mir_benchmarks
Re: Multiplying transposed matrices in mir
On Monday, 20 April 2020 at 02:50:29 UTC, 9il wrote: On Monday, 20 April 2020 at 02:42:33 UTC, 9il wrote: On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote: On Sunday, 19 April 2020 at 20:06:23 UTC, jmh530 wrote: [...] Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated. At the same time, the SliceKind doesn't matter for assignment operations:

auto b = a.slice;    // copy a to b
b[] *= a.transposed; // works well

BTW, for the following operation

auto b = a * a.transposed.slice;

`b` isn't allocated either because `*` is lazy. So, the assignment operations are preferable anyway. Interesting, thanks for the examples.
Re: Multiplying transposed matrices in mir
On Sunday, 19 April 2020 at 21:27:43 UTC, jmh530 wrote: On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote: [snip] Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated. a.transposed is just a view of the original matrix. Even when I tried to do a raw for loop I ran into issues because modifying the original a in any way caused all the calculations to be wrong. Honestly, it's kind of rare that I would do an element-wise multiplication of a matrix and its transpose. It is. I was trying to calculate the covariance matrix of some dataset X which would be XX^T.
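Since the covariance use case comes up here, a short NumPy sketch (mine, not from the thread) of how XX^T relates to the covariance matrix: the rows of X (the variables) must be centered first, and the scaled product then matches what np.cov computes.

```python
import numpy as np

# X holds 3 variables (rows) observed 5 times (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))

# Center each variable, then form Xc @ Xc.T / (n - 1):
# this is the sample covariance matrix that np.cov returns.
Xc = X - X.mean(axis=1, keepdims=True)
cov_manual = Xc @ Xc.T / (X.shape[1] - 1)

assert np.allclose(cov_manual, np.cov(X))
```

Without the centering step, the raw XX^T product only equals the covariance matrix when every variable already has zero mean.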
Re: Multiplying transposed matrices in mir
On Sunday, 19 April 2020 at 20:06:23 UTC, jmh530 wrote: On Sunday, 19 April 2020 at 19:20:28 UTC, p.shkadzko wrote: [snip] Well no, "assumeContiguous" reverts the result of "transposed" and it's "a * a". I would expect it to stay transposed, as NumPy does: "assert np.all(np.ascontiguousarray(a.T) == a.T)". Ah, you're right. I use it in other places where it hasn't been an issue. I can do it with an allocation (below) using the built-in syntax, but I'm not sure how doable it is without an allocation (Ilya would know better than me).

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck;

void main()
{
    auto a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 2.1].sliced(3, 3);
    auto b = a * a.transposed.slice;
}

Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated.
Re: Multiplying transposed matrices in mir
On Sunday, 19 April 2020 at 19:13:14 UTC, p.shkadzko wrote: On Sunday, 19 April 2020 at 18:59:00 UTC, jmh530 wrote: On Sunday, 19 April 2020 at 17:55:06 UTC, p.shkadzko wrote: [snip] So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())". There is elementwise multiplication of two matrices of the same size, and then there is matrix multiplication. Two different things. You had initially said you were using an m x n matrix to do the calculation. Elementwise multiplication only works for matrices of the same size, which is only true in your transpose case when they are square. The mtimes function is like dot or @ in Python and does real matrix multiplication, which works for generic m x n matrices. If you want elementwise multiplication of a square matrix and its transpose in mir, then I believe you need to call assumeContiguous after transposed. "assumeContiguous" that's what I was looking for. Thanks! Well no, "assumeContiguous" reverts the result of "transposed" and it's "a * a". I would expect it to stay transposed, as NumPy does: "assert np.all(np.ascontiguousarray(a.T) == a.T)".
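As a side note, the NumPy function referenced here is actually spelled np.ascontiguousarray. A quick sketch (mine) of the behavior being asked for: a C-contiguous copy that keeps the transposed values rather than reverting them.

```python
import numpy as np

a = np.arange(9.0).reshape(3, 3)
t = a.T                      # a strided view, not C-contiguous
c = np.ascontiguousarray(t)  # copies into C-contiguous memory

assert not t.flags["C_CONTIGUOUS"]
assert c.flags["C_CONTIGUOUS"]
assert (c == t).all()        # values stay transposed
```

This is the contract p.shkadzko expected from assumeContiguous: fix the memory layout, keep the logical element order.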
Re: Multiplying transposed matrices in mir
On Sunday, 19 April 2020 at 18:59:00 UTC, jmh530 wrote: On Sunday, 19 April 2020 at 17:55:06 UTC, p.shkadzko wrote: [snip] So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())". There is elementwise multiplication of two matrices of the same size, and then there is matrix multiplication. Two different things. You had initially said you were using an m x n matrix to do the calculation. Elementwise multiplication only works for matrices of the same size, which is only true in your transpose case when they are square. The mtimes function is like dot or @ in Python and does real matrix multiplication, which works for generic m x n matrices. If you want elementwise multiplication of a square matrix and its transpose in mir, then I believe you need to call assumeContiguous after transposed. "assumeContiguous" that's what I was looking for. Thanks!
Re: Multiplying transposed matrices in mir
On Sunday, 19 April 2020 at 17:22:12 UTC, jmh530 wrote: On Sunday, 19 April 2020 at 17:07:36 UTC, p.shkadzko wrote: I'd like to calculate XX^T where X is some [m x n] matrix.

// create a 3 x 3 matrix
Slice!(double*, 2LU) a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 2.1].sliced(3, 3);
auto b = a * a.transposed; // error

Looks like it is not possible due to "incompatible types for (a) * (transposed(a)): Slice!(double*, 2LU, cast(mir_slice_kind)2) and Slice!(double*, 2LU, cast(mir_slice_kind)0)". I'd like to understand why, and how this operation should be performed in mir. Also, what does the last number "0" or "2" mean in the type definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"? 2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I don't have the greatest understanding of the difference. Try the mtimes function in lubeck. Ah, I see. There are docs on the internal representations of Slices but nothing about the rationale. It would be nice to have that, since it is pretty much the core of Slice. "a.mtimes(a.transposed);" works, but the results are different from what NumPy gives. For example:

a = np.array([[1, 2], [3, 4]])
a * a.transpose()  # [[1, 6], [6, 16]]

Slice!(int*, 2LU) a = [1, 2, 3, 4].sliced(2, 2);
writeln(a.mtimes(a.transposed)); // [[5, 11], [11, 25]]

So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())".
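The distinction discussed above can be checked directly in NumPy; this sketch (mine) reproduces both results quoted in the post:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

elementwise = a * a.T  # Hadamard product with the transpose
matmul = a @ a.T       # true matrix multiplication (what mtimes/dot do)

assert (elementwise == np.array([[1, 6], [6, 16]])).all()
assert (matmul == np.array([[5, 11], [11, 25]])).all()
```

The two operations only even have compatible shapes for square matrices; matrix multiplication is the one that generalizes to [m x n] @ [n x m].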
Multiplying transposed matrices in mir
I'd like to calculate XX^T where X is some [m x n] matrix.

// create a 3 x 3 matrix
Slice!(double*, 2LU) a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 2.1].sliced(3, 3);
auto b = a * a.transposed; // error

Looks like it is not possible due to "incompatible types for (a) * (transposed(a)): Slice!(double*, 2LU, cast(mir_slice_kind)2) and Slice!(double*, 2LU, cast(mir_slice_kind)0)". I'd like to understand why, and how this operation should be performed in mir. Also, what does the last number "0" or "2" mean in the type definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"?
Re: .get refuses to work on associative array
On Wednesday, 15 April 2020 at 22:09:32 UTC, H. S. Teoh wrote: On Wed, Apr 15, 2020 at 09:46:58PM +, p.shkadzko via Digitalmars-d-learn wrote: [...] Are you sure the error is on the line you indicated? The error message claims that your argument types are (double[string], string, string), but your code clearly has argument types (double[string], string, double). Are you sure dub is compiling the source file(s) you think it's compiling? Which source file(s) are shown by `dub -v`? T I should stop programming at night. Indeed, it was the incorrect .get("a", "NULL") instead of .get("a", 0.0), sigh. Sorry guys.
.get refuses to work on associative array
I am quite confused by the following exception during dub build:

dub build --single demo.d --compiler=ldc2 --force
Performing "debug" build using ldc2 for x86_64.
demo ~master: building configuration "application"...
demo.d(221,20): Error: template object.get cannot deduce function from argument types !()(double[string], string, string), candidates are:
C:\ldc2-1.20.0-windows-x64\bin\..\import\object.d(2645,10): get(K, V)(inout(V[K]) aa, K key, lazy inout(V) defaultValue)
C:\ldc2-1.20.0-windows-x64\bin\..\import\object.d(2652,10): get(K, V)(inout(V[K])* aa, K key, lazy inout(V) defaultValue)

The code that causes it:

"""
void main(string[] args)
{
    double[string] scores = calculateScores("test.txt");
    double score = scores.get("hello", 0.0); // <-- exception
}
"""

It works if I just do "double score = scores["hello"];". Both dmd and ldc2 throw this exception. Is it a bug?
Re: How to correctly import tsv-utilites functions?
On Tuesday, 14 April 2020 at 20:05:28 UTC, Steven Schveighoffer wrote: On 4/14/20 3:34 PM, p.shkadzko wrote: [...] What about using dependency tsv-utils:common ? Looks like tsv-utils is a collection of subpackages, and the main package just serves as a namespace. -Steve Yes, it works! Thank you.
How to correctly import tsv-utilites functions?
I need to use the "bufferedByLine" function from the "tsv-utilities" package, located at: https://github.com/eBay/tsv-utils/blob/master/common/src/tsv_utils/common/utils.d I have the following dub config:

/+ dub.sdl:
name "demo"
dependency "tsv-utils" version="~>1.6.0"
dflags-ldc "-mcpu=native"
targetType "executable"
+/

Now, in the script file I am trying to import tsv_utils.common.utils: bufferedByLine; and this fails with "Error: module utils is in file 'tsv_utils\common\utils.d' which cannot be read". tsv-utils does not have a "source" dir, which I guess is needed for the imports to work correctly. Maybe it's not supposed to be used this way?
Re: Linear array to matrix
On Sunday, 5 April 2020 at 18:58:17 UTC, p.shkadzko wrote: On Saturday, 4 April 2020 at 09:25:14 UTC, Giovanni Di Maria wrote: [...] Why not use "chunks" from std.range?

import std.array : array;
import std.range : chunks;

void main()
{
    int[] arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120];
    auto matrix1 = arr.chunks(3).chunks(4);                  // no allocation
    int[][][] matrix2 = arr.chunks(3).array.chunks(4).array;
}

But keep in mind that using an array of arrays is not efficient. For multidimensional arrays use Mir Slices. If you need more information on how to create matrices, see this article: https://tastyminerals.github.io/tasty-blog/random/2020/03/22/multidimensional_arrays_in_d.html It should be just one call to chunks --> arr.chunks(3); otherwise you'll get two nested levels of chunking while you need only one. Sorry for the confusion.
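For comparison, the corrected single-level chunking corresponds to this Python sketch (mine): one pass over the flat array yields the 4 x 3 matrix directly.

```python
arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]

# One level of chunking is enough: 4 rows of 3 columns.
matrix = [arr[i:i + 3] for i in range(0, len(arr), 3)]

assert matrix == [[10, 20, 30], [40, 50, 60], [70, 80, 90], [100, 110, 120]]
```

Chunking twice would wrap the rows in an extra list level, which is the mistake being corrected above.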
Re: Linear array to matrix
On Saturday, 4 April 2020 at 09:25:14 UTC, Giovanni Di Maria wrote: Hi. Is there a built-in function (no code, only a built-in function) that transforms a linear array into a matrix? For example: From [10,20,30,40,50,60,70,80,90,100,110,120]; To [[10,20,30], [40,50,60], [70,80,90], [100,110,120]]; Thank you very much. Cheers. Giovanni Why not use "chunks" from std.range?

import std.array : array;
import std.range : chunks;

void main()
{
    int[] arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120];
    auto matrix1 = arr.chunks(3).chunks(4);                  // no allocation
    int[][][] matrix2 = arr.chunks(3).array.chunks(4).array;
}

But keep in mind that using an array of arrays is not efficient. For multidimensional arrays use Mir Slices. If you need more information on how to create matrices, see this article: https://tastyminerals.github.io/tasty-blog/random/2020/03/22/multidimensional_arrays_in_d.html
Re: Blog post about multidimensional arrays in D
On Friday, 27 March 2020 at 13:10:00 UTC, jmh530 wrote: On Friday, 27 March 2020 at 10:57:10 UTC, p.shkadzko wrote: I decided to write a small blog post about multidimensional arrays in D covering what I have learnt so far. It should serve as a brief introduction to Mir slices and how to do basic manipulations with them. It started as a small file with snippets for personal use but then kind of escalated into an idea for a blog post. However, given the limited amount of time I spent in the Mir docs and their conciseness, it would be great if anyone had a second look and told me what is wrong or missing, because I have a feeling a lot of things might be. It would be a great opportunity for me to learn and also improve or rewrite some parts. All is here: https://github.com/tastyminerals/tasty-blog/blob/master/_posts/2020-03-22-multidimensional_arrays_in_d.md Thanks for doing this. A small typo on this line: a.byDim1; I think there would be a lot of value in doing another blog post to cover some more advanced topics. For instance, mir supports three different SliceKinds, and the documentation explaining the difference has never been very clear. I don't really feel like I've ever had a clear understanding of the low-level differences between them. The pack/ipack/unpack functions are also pretty hard to understand from the documentation. I agree. I was planning to do several follow-ups after this first brief overview. For example, it looks like "byDim" alone deserves a separate post. The goal was just to show people who know nothing or little about D and Mir that Mir exists and is usable. Because what I am lacking is not the API docs but introductory examples of how to do mundane tasks like creating matrices and reshaping. Treat the first post as such, and if you have suggestions on what is redundant or good to have, I shall update it accordingly.
Re: Blog post about multidimensional arrays in D
On Friday, 27 March 2020 at 11:19:06 UTC, WebFreak001 wrote: On Friday, 27 March 2020 at 10:57:10 UTC, p.shkadzko wrote: [...] I don't really know mir myself, but for the start of the content: [...] Ok, looks like I need to reread the slices topic. It always confused me especially when it comes to function parameters but I just swallowed it and continued on. Thank you.
Blog post about multidimensional arrays in D
I decided to write a small blog post about multidimensional arrays in D covering what I have learnt so far. It should serve as a brief introduction to Mir slices and how to do basic manipulations with them. It started as a small file with snippets for personal use but then kind of escalated into an idea for a blog post. However, given the limited amount of time I spent in the Mir docs and their conciseness, it would be great if anyone had a second look and told me what is wrong or missing, because I have a feeling a lot of things might be. It would be a great opportunity for me to learn and also improve or rewrite some parts. All is here: https://github.com/tastyminerals/tasty-blog/blob/master/_posts/2020-03-22-multidimensional_arrays_in_d.md
How to sort 2D Slice along 0 axis in mir.ndslice ?
I need to reproduce numpy sort for a 2D array.

--
import numpy as np
a = [[1, -1, 3, 2], [0, -2, 3, 1]]
b = np.sort(a)
b
# array([[-1, 1, 2, 3],
#        [-2, 0, 1, 3]])
--

Numpy sorted the array along the last axis, which visually looks like each row of elements was sorted individually. Going through http://docs.algorithm.dlang.io/latest/mir_ndslice_sorting.html I couldn't find an analogous operation. So, attempting to do the same in mir.ndslice results in the following:

---
import mir.ndslice;
auto m = [1, -1, 3, 2, 0, -2, 3, 1].sliced(4, 2);
// [[1, -1], [3, 2], [0, -2], [3, 1]]
m.sort;
// [[-2, -1], [0, 1], [1, 2], [3, 3]]
---

It basically flattened the 2D slice, sorted it, and reshaped it back into 2D with elements moved across rows. Trying to do something like m.map!(a => a.sort); won't work because "a" is an int "1" and not a slice of two ints "[1, -1]". You can do it with a foreach loop, but then you'll have to allocate new elements. How do you do it in-place with mir?
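To pin down the target behavior, here is a NumPy sketch (mine) showing both the desired row-wise sort and the flatten-sort-reshape result that mir's plain sort produced:

```python
import numpy as np

a = np.array([[1, -1, 3, 2], [0, -2, 3, 1]])

# np.sort defaults to axis=-1: each row is sorted independently.
b = np.sort(a)
assert (b == np.array([[-1, 1, 2, 3], [-2, 0, 1, 3]])).all()

# Sorting the flattened data and reshaping reproduces the mir
# result quoted above (same 8 elements, laid out as 4 x 2).
flat = np.sort(a, axis=None).reshape(4, 2)
assert (flat == np.array([[-2, -1], [0, 1], [1, 2], [3, 3]])).all()
```

The difference between the two results is exactly the difference between sorting along an axis and sorting the underlying 1D storage.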
Re: Improving dot product for standard multidimensional D arrays
On Tuesday, 3 March 2020 at 10:25:27 UTC, maarten van damme wrote: It is difficult to write an efficient matrix-matrix multiplication in any language. If you want a fair comparison, implement your naive method in Python and compare those timings. On Tue, 3 Mar 2020 at 04:20, 9il via Digitalmars-d-learn <digitalmars-d-learn@puremagic.com> wrote: On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote: > [...] Matrix multiplication is about cache-friendly blocking. https://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf The `mir-blas` package can be used for matrix operations on ndslice; `cblas` - if you want to work with your own matrix type. Yeah, got it. After some reading, I understand that's not trivial once bigger matrices are involved.
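A minimal Python sketch (mine, illustrative only, and far simpler than the Goto paper's algorithm) of the blocking idea: multiply tile by tile so each tile is reused while it is cache-resident, instead of streaming whole rows and columns.

```python
import numpy as np

def blocked_matmul(A, B, bs=2):
    """Tiled matrix multiply: accumulate bs-by-bs block products so
    each tile of A and B is reused while it sits in cache."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0).reshape(4, 4)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

In Python this is only a demonstration of the loop structure; the actual speedup comes from applying the same tiling in a compiled language (or just calling a BLAS, as 9il suggests).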
Re: Improving dot product for standard multidimensional D arrays
On Monday, 2 March 2020 at 20:56:50 UTC, jmh530 wrote: On Monday, 2 March 2020 at 20:22:55 UTC, p.shkadzko wrote: [snip] Interesting growth of processing time. Could it be GC?

+------------------+-------------+
| matrixDotProduct | time (sec.) |
+------------------+-------------+
| 2x[100 x 100]    |        0.01 |
| 2x[1000 x 1000]  |        2.21 |
| 2x[1500 x 1000]  |         5.6 |
| 2x[1500 x 1500]  |        9.28 |
| 2x[2000 x 2000]  |       44.59 |
| 2x[2100 x 2100]  |       55.13 |
+------------------+-------------+

Your matrixDotProduct creates a new Matrix and then returns it. When you look at the Matrix struct, it is basically building upon D's GC-backed slices. So yes, you are using the GC here. You could try creating the output matrices outside of the matrixDotProduct function and then passing them by pointer or reference into the function if you want to profile just the calculation. I tried using ref (pointer to struct) but it only made things slower by 0.5 s. I am not passing the result matrix to "toIdx" anymore; that is not necessary since we just need the column count. This didn't change anything though. Here is how the code looks now.

*
pragma(inline)
static int toIdx(int matrixCols, in int i, in int j)
{
    return matrixCols * i + j;
}

Matrix!T matrixDotProduct(T)(Matrix!T m1, Matrix!T m2, ref Matrix!T initM)
in
{
    assert(m1.cols == m2.rows);
}
do
{
    /// This implementation requires opIndex in the Matrix struct.
    for (int i; i < m1.rows; ++i)
    {
        for (int j; j < m2.cols; ++j)
        {
            for (int k; k < m2.rows; ++k)
            {
                initM.data[toIdx(initM.cols, i, j)] += m1[i, k] * m2[k, j];
            }
        }
    }
    return initM;
}

void main()
{
    Matrix!double initMatrix = Matrix!double(m1.rows, m2.cols);
    auto e = matrixDotProduct!double(m1, m2, initMatrix).to2D;
}
*

I tried disabling the GC via GC.disable; GC.enable; before and after the 3 loops in matrixDotProduct to see what happens. But nothing changed :(
Re: Improving dot product for standard multidimensional D arrays
On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote: Hello again, Thanks to the previous thread on multidimensional arrays, I managed to play around with pure D matrix representations and even benchmark a little against numpy: [...] Interesting growth of processing time. Could it be GC?

+------------------+-------------+
| matrixDotProduct | time (sec.) |
+------------------+-------------+
| 2x[100 x 100]    |        0.01 |
| 2x[1000 x 1000]  |        2.21 |
| 2x[1500 x 1000]  |         5.6 |
| 2x[1500 x 1500]  |        9.28 |
| 2x[2000 x 2000]  |       44.59 |
| 2x[2100 x 2100]  |       55.13 |
+------------------+-------------+
Re: Improving dot product for standard multidimensional D arrays
On Monday, 2 March 2020 at 15:00:56 UTC, jmh530 wrote: On Monday, 2 March 2020 at 13:35:15 UTC, p.shkadzko wrote: [snip] Thanks. I don't have time right now to review this thoroughly. My recollection is that the dot product of two matrices is actually matrix multiplication, correct? It generally makes sense to defer to other people's implementation of this. I recommend trying lubeck's version against numpy. It uses a blas/lapack implementation. mir-glas, I believe, also has a version. Also, I'm not sure if the fastmath attribute would do anything here, but it's something worth looking into. Yes, it is a sum of multiplications between elements of two matrices, or a scalar product in the case of vectors. This is not the simple element-wise multiplication that I did in earlier benchmarks. I tested @fastmath and @optmath for the toIdx function and that didn't change anything.
Re: Improving dot product for standard multidimensional D arrays
On Monday, 2 March 2020 at 11:33:25 UTC, jmh530 wrote: On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote: Hello again, [snip] What compiler did you use and what flags? Ah yes, sorry. I used the latest ldc2 (1.20.0-x64) for Windows. Dflags -mcpu=native and "inline", "optimize", "releaseMode". Here is the dub.json of the project:

{
    "name": "app",
    "targetType": "executable",
    "dependencies": {
        "mir": "~>3.2.0"
    },
    "dflags-ldc": ["-mcpu=native"],
    "buildTypes": {
        "release": {
            "buildOptions": ["releaseMode", "inline", "optimize"],
            "dflags": ["-boundscheck=off"]
        },
        "tests": {
            "buildOptions": ["unittests"]
        }
    }
}
Improving dot product for standard multidimensional D arrays
Hello again, Thanks to the previous thread on multidimensional arrays, I managed to play around with pure D matrix representations and even benchmark a little against numpy:

+-------------------------------------------------------------+------------+----------+
| benchmark                                                   | time (sec) | vs Numpy |
+-------------------------------------------------------------+------------+----------+
| Sum of two [5000, 6000] int array of arrays                 | ~0.28      | x4.5     |
| Multiplication of two [5000, 6000] double array of arrays   | ~0.3       | x2.6     |
| Sum of two [5000, 6000] int struct matrices                 | ~0.039     | x0.6     |
| Multiplication of two [5000, 6000] double struct matrices   | ~0.135     | x1.2     |
| L2 norm of [5000, 6000] double struct matrix                | ~0.015     | x15      |
| Sort of [5000, 6000] double struct matrix (axis=-1)         | ~2.435     | x1.9     |
| Dot product of [500, 600]&[600, 500] double struct matrices | ~0.172     | --       |
+-------------------------------------------------------------+------------+----------+

However, there is one benchmark I am trying to make at least a little comparable. That is the dot product of two struct matrices. Concretely, a [A x B] @ [B x A] = [A, A] operation. There is a dotProduct function in std.numeric, but it works with 1D ranges only. After it was clear that arrays of arrays are not very good for representing multidimensional data, I used a struct to represent a multidimensional array like so:

**
struct Matrix(T)
{
    T[] data; // keep our data as a 1D array and reshape to 2D when needed
    int rows;
    int cols;
    // allow Matrix[] instead of Matrix.data[]
    alias data this;

    this(int rows, int cols)
    {
        this.data = new T[rows * cols];
        this.rows = rows;
        this.cols = cols;
    }

    this(int rows, int cols, T[] data)
    {
        assert(data.length == rows * cols);
        this.data = data;
        this.rows = rows;
        this.cols = cols;
    }

    T[][] to2D()
    {
        return this.data.chunks(this.cols).array;
    }

    /// Allow 2D element indexing, e.g. Matrix[row, col]
    T opIndex(in int r, in int c)
    {
        return this.data[toIdx(this, r, c)];
    }
}

pragma(inline)
static int toIdx(T)(Matrix!T m, in int i, in int j)
{
    return m.cols * i + j;
}
**

And here is the dot product function:

**
Matrix!T matrixDotProduct(T)(Matrix!T m1, Matrix!T m2)
in
{
    assert(m1.cols == m2.rows);
}
do
{
    Matrix!T m3 = Matrix!T(m1.rows, m2.cols);
    for (int i; i < m1.rows; ++i)
    {
        for (int j; j < m2.cols; ++j)
        {
            for (int k; k < m2.rows; ++k)
            {
                m3.data[toIdx(m3, i, j)] += m1[i, k] * m2[k, j];
            }
        }
    }
    return m3;
}
**

However, attempting to run matrixDotProduct on two 5000x6000 struct Matrices took ~20 min, while 500x600 took only 0.172 sec. And I wondered if there is something really wrong with the matrixDotProduct function. I can see that accessing the appropriate array member in Matrix.data is costly due to the toIdx operation, but I can hardly explain why it gets so much more costly. Maybe there is a better way to do it after all?
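To separate algorithmic correctness from performance, here is the same triple loop transcribed to Python (a sketch of mine, using NumPy only to verify the result). The loop is correct; the ~20 min runtime comes from the O(n^3) work combined with cache-unfriendly access to m2, not from a logic bug.

```python
import numpy as np

def naive_matmul(m1, m2):
    # Same loop order as the D matrixDotProduct above: i, j, k.
    rows, inner = m1.shape
    inner2, cols = m2.shape
    assert inner == inner2
    m3 = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                # m2[k, j] walks down a column: a large stride on
                # every k step, which is what thrashes the cache.
                m3[i, j] += m1[i, k] * m2[k, j]
    return m3

a = np.arange(6.0).reshape(2, 3)
assert np.allclose(naive_matmul(a, a.T), a @ a.T)
```

Note also that going from 500x600 to 5000x6000 multiplies the flop count by 1000, so even a perfectly cache-friendly version of this loop would take minutes rather than seconds.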
Re: How to sum multidimensional arrays?
On Friday, 28 February 2020 at 16:51:10 UTC, AB wrote: On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote: [...] Your example with a minimal 2D array.

module test2;

import std.random : Xorshift, unpredictableSeed, uniform;
import std.range : generate, take, chunks;
import std.array : array;
import std.stdio : writeln;

struct Matrix(T)
{
    int rows;
    T[] data;
    alias data this;

    int cols() { return cast(int) data.length / rows; }
    this(int r, int c) { data = new T[r * c]; rows = r; }
    this(int r, int c, T[] d) { assert(r * c == d.length); data = d; rows = r; }
    auto opIndex(int r, int c) { return data[rows * c + r]; }
}

Can you please explain what is the purpose of "alias data this" in your Matrix struct? As I remember, "alias this" is used for implicit type conversions, but I don't see where "data" is converted.
Re: How to sum multidimensional arrays?
On Thursday, 27 February 2020 at 15:48:53 UTC, bachmeier wrote: On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote: [...] This works but it does not look very efficient considering we flatten and then call array twice. It will get even worse with 3D arrays. Is there a better way without relying on mir.ndslice? Is there a reason you can't create a struct around a double[] like this?

struct Matrix { double[] data; }

Then to add Matrix A to Matrix B, you use A.data[] + B.data[]. But since I'm not sure what exactly you're doing, maybe that won't work. Right! Ok, here is how I do it.

```
struct Matrix(T)
{
    T[] elems;
    int cols;

    T[][] to2D()
    {
        return elems.chunks(cols).array;
    }
}
```

and the Matrix summing and random array generator functions

```
auto matrixSum(Matrix!int m1, Matrix!int m2)
{
    Matrix!int m3;
    m3.cols = m1.cols;
    m3.elems.length = m1.elems.length;
    m3.elems[] = m1.elems[] + m2.elems[];
    return m3.to2D;
}

static T[] rndArr(T)(in T max, in int elems)
{
    Xorshift rnd;
    return generate(() => uniform(0, max, rnd)).take(elems).array;
}
```

Then we do the following

```
auto m1 = Matrix!int(rndArr!int(10, 5000 * 6000), 6000);
auto m2 = Matrix!int(rndArr!int(10, 5000 * 6000), 6000);
auto m3 = matrixSum(m1, m2);
```

And it works effortlessly! The sum of two 5000 x 6000 int arrays takes just 0.105 sec! (on a Windows machine, though, with a weaker CPU). I bet using mir.ndslice instead of D arrays would be even faster.
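The trick this struct relies on, storing the matrix as one flat buffer and adding the buffers elementwise, can be sanity-checked in NumPy (a sketch of mine):

```python
import numpy as np

m1 = np.arange(6).reshape(2, 3)
m2 = np.arange(6).reshape(2, 3) * 10

# Adding the flat 1D buffers and reshaping gives the same result as
# a 2D elementwise sum: the idea behind m3.elems[] = m1.elems[] + m2.elems[].
flat_sum = (m1.ravel() + m2.ravel()).reshape(m1.shape)
assert (flat_sum == m1 + m2).all()
```

Because elementwise addition never mixes values across rows, a single contiguous pass over the flat storage is both correct and cache-friendly, which is why the struct version beats the array-of-arrays version by such a wide margin.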
Re: How to sum multidimensional arrays?
On Thursday, 27 February 2020 at 16:31:07 UTC, 9il wrote: On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote: Is there a better way without relying on mir.ndslice? ndslice Poker Face

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
+/
import mir.ndslice;
import mir.random: threadLocal;
import mir.random.variable: uniformVar;
import mir.random.algorithm: randomSlice;
import mir.random.engine.xorshift;

void main()
{
    Slice!(int*, 2) m1 = threadLocal!Xorshift.randomSlice(uniformVar!int(0, 10), [2, 3]);
    Slice!(int*, 2) m2 = threadLocal!Xorshift.randomSlice(uniformVar!int(0, 10), [2, 3]);
    Slice!(int*, 2) c = slice(m1 + m2);
}

Yes, mir.ndslice is a straightforward choice for multidimensional arrays. I shall do some benchmarks with it next. But first, I'll try to do it with standard D ops and see what the rough difference is against numpy's C.
Re: How to sum multidimensional arrays?
On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote: This works but it does not look very efficient considering we flatten and then call array twice. It will get even worse with 3D arrays. And yes, benchmarks show that summing 2D arrays like in the example above is significantly slower than in numpy. But that is to be expected... I guess.

D     -- sum of two 5000 x 6000 2D arrays: 3.4 sec.
numpy -- sum of two 5000 x 6000 2D arrays: 0.0367800739913946 sec.
How to sum multidimensional arrays?
I'd like to sum 2D arrays. Let's create 2 random 2D arrays and sum them.

```
import std.random : Xorshift, unpredictableSeed, uniform;
import std.range : generate, take, chunks;
import std.array : array;

static T[][] rndMatrix(T)(T max, in int rows, in int cols)
{
    Xorshift rnd;
    rnd.seed(unpredictableSeed);
    const amount = rows * cols;
    return generate(() => uniform(0, max, rnd)).take(amount).array.chunks(cols).array;
}

void main()
{
    int[][] m1 = rndMatrix(10, 2, 3);
    int[][] m2 = rndMatrix(10, 2, 3);
    auto c = m1[] + m2[];
}
```

This won't work because the compiler will throw "Error: array operation m1[] + m2[] without destination memory not allowed". Looking at https://forum.dlang.org/thread/wnjepbggivhutgbyj...@forum.dlang.org, I modified the code to:

```
void main()
{
    int[][] m1 = rndMatrix(10, 2, 3);
    int[][] m2 = rndMatrix(10, 2, 3);
    int[][] c;
    c.length = m1[0].length;
    c[1].length = m1[1].length;
    c[] = m1[] + m2[];
}
```

Well, then I am getting "/dlang/dmd/linux/bin64/../../src/druntime/import/core/internal/array/operations.d(165): Error: static assert: "Binary + not supported for types int[] and int[]."". Right, then I am trying the following:

```
void main()
{
    int[][] m1 = rndMatrix(10, 2, 3);
    int[][] m2 = rndMatrix(10, 2, 3);
    auto c = zip(m1[], m2[]).map!((a, b) => a + b);
}
```

Doesn't work either, because "Error: template D main.__lambda1 cannot deduce function from argument types !()(Tuple!(int[], int[])), candidates are: onlineapp.d(21): __lambda1 (...)". So, I have to flatten first, then zip + sum, and then reshape back to the original:

```
auto c = zip(m1.joiner, m2.joiner).map!(t => t[0] + t[1]).array.chunks(3).array;
```

This works but it does not look very efficient considering we flatten and then call array twice. It will get even worse with 3D arrays. Is there a better way without relying on mir.ndslice?
Re: books for learning D
On Wednesday, 29 January 2020 at 08:56:26 UTC, rumbu wrote: On Wednesday, 29 January 2020 at 08:40:48 UTC, p.shkadzko wrote: Has anyone read "d programming language tutorial: A Step By Step Appoach: Learn d programming language Fast"? https://www.goodreads.com/book/show/38328553-d-programming-language-tutorial?from_search=true&qid=G9QIeXioOJ&rank=3 Beware, this is a scam. This guy has hundreds of "books". These books are promoted on various forums for download. Of course, you must enter your CC to "prove your identity". Uh, ok, thanks for the information.
Re: books for learning D
Has anyone read "d programming language tutorial: A Step By Step Appoach: Learn d programming language Fast"? https://www.goodreads.com/book/show/38328553-d-programming-language-tutorial?from_search=true&qid=G9QIeXioOJ&rank=3
Re: How to create meson.build with external libs?
On Tuesday, 14 January 2020 at 20:14:30 UTC, p.shkadzko wrote: On Tuesday, 14 January 2020 at 15:15:09 UTC, Rasmus Thomsen wrote: On Tuesday, 14 January 2020 at 09:54:18 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 21:15:51 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 20:53:43 UTC, p.shkadzko wrote: [...] It's very odd to me that Manjaro's pkg-config doesn't include that pkg-config path by default and also doesn't include that library path by default; every distro I've used so far did that. I added /usr/local/include/d/cblas/cblas to $PATH but that didn't help :( Can you show us your meson.build? You need to set `dependencies:` properly to include all dependencies so ninja includes the right dirs and links the required libraries. You can look at other projects using D for that; e.g. I do it like this: https://github.com/Cogitri/corecollector/blob/master/source/corectl/meson.build Here is my meson.build file:

---
project('demo_proj', 'd',
    version : '0.1',
    default_options : ['warning_level=3']
)

mir_alg = dependency('mir-algorithm', method: 'pkg-config')
lubeck = dependency('lubeck', method: 'pkg-config')
required_deps = [mir_alg, lubeck]

ed = executable('demo_proj', 'app.d',
    dependencies: required_deps,
    install : true)
---

As I mentioned earlier, cblas.d is installed in /usr/local/include/d/cblas/cblas/cblas.d. If I replace lubeck with mir-lapack (which I also installed into /usr/local/lib), repeat "meson build && cd build && ninja" and call ./demo_proj, I get this:

./demo_proj: error while loading shared libraries: subprojects/mir-algorithm/libmir-algorithm.so.3.4.0: cannot open shared object file: No such file or directory

But libmir-algorithm.so.3.4.0 is installed in /usr/local/lib and it is in $PATH :(
Re: How to create meson.build with external libs?
On Tuesday, 14 January 2020 at 15:15:09 UTC, Rasmus Thomsen wrote: On Tuesday, 14 January 2020 at 09:54:18 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 21:15:51 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 20:53:43 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 19:56:35 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 17:14:29 UTC, p.shkadzko wrote: [...] I had to set PKG_CONFIG_PATH to "/usr/local/lib/pkgconfig". For some reason the Manjaro distro doesn't have it set by default. After setting the pkgconfig path, lubeck is found and everything works. It's very odd to me that Manjaro's pkg-config doesn't include that pkg-config path by default and also doesn't include that library path by default; every distro I've used so far did that. I added /usr/local/include/d/cblas/cblas to $PATH but that didn't help :( Can you show us your meson.build? You need to set `dependencies:` properly to include all dependencies so ninja includes the right dirs and links the required libraries. You can look at other projects using D for that; e.g. I do it like this: https://github.com/Cogitri/corecollector/blob/master/source/corectl/meson.build Here is my meson.build file:

---
project('demo_proj', 'd',
    version : '0.1',
    default_options : ['warning_level=3']
)

mir_alg = dependency('mir-algorithm', method: 'pkg-config')
lubeck = dependency('lubeck', method: 'pkg-config')
required_deps = [mir_alg, lubeck]

ed = executable('demo_proj', 'app.d',
    dependencies: required_deps,
    install : true)
---

As I mentioned earlier, cblas.d is installed in /usr/local/include/d/cblas/cblas/cblas.d
Re: How to create meson.build with external libs?
On Tuesday, 14 January 2020 at 11:26:30 UTC, Andre Pany wrote: On Tuesday, 14 January 2020 at 09:54:18 UTC, p.shkadzko wrote: [...] May I ask whether you have tried to use Dub, or is something blocking you from using it? Kind regards André I tested dub and it fetched and compiled mir.ndslice and lubeck without issues. Then somebody mentioned meson and that it gives you more control over the build and faster builds. I also saw that the mir library uses meson, so I decided to give it a try.
Re: How to create meson.build with external libs?
On Monday, 13 January 2020 at 21:15:51 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 20:53:43 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 19:56:35 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 17:14:29 UTC, p.shkadzko wrote: [...] I had to set PKG_CONFIG_PATH to "/usr/local/lib/pkgconfig". For some reason the Manjaro distro doesn't set it by default. After setting the pkgconfig path, lubeck is found and everything works. After I ran "meson build" I got the following output: --- The Meson build system Version: 0.52.1 Source dir: /home/tastyminerals/dev/test_proj Build dir: /home/tastyminerals/dev/test_proj/build Build type: native build Project name: demo_proj Project version: 0.1 D compiler for the host machine: ldc2 (llvm 1.18.0 "LDC - the LLVM D compiler (1.18.0):") D linker for the host machine: GNU ld.gold 2.33.1 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.6.3) Run-time dependency mir-algorithm found: YES 3.4.0 Run-time dependency lubeck found: YES 1.0.0 Build targets in project: 1 Found ninja-1.9.0 at /usr/bin/ninja I then cd to the build/ dir and run "ninja" [2/2] Linking target test_proj. But when I try to run the compiled app.d file as demo_run, I get: ./demo_run: error while loading shared libraries: libmir-algorithm.so: cannot open shared object file: No such file or directory I thought that if meson builds and finds the external libs successfully and ninja links everything, all should be fine. I don't understand. Why can't the compiled file find the external library? $LD_LIBRARY_PATH was not set to "/usr/local/lib" where libmir-algorithm.so is located.
I had to set it explicitly and recompile the project but then I get another error message: ./demo_proj: error while loading shared libraries: subprojects/mir-algorithm/libmir-algorithm.so.3.4.0: cannot open shared object file: No such file or directory (ಠ_ಠ) My test app.d didn't have any imports for mir.ndslice and lubeck. After I imported them, ninja threw the following error. --- [1/2] Compiling D object 'demo_proj@exe/app.d.o'. FAILED: demo_proj@exe/app.d.o ldc2 -I=demo_proj@exe -I=. -I=.. -I/usr/local/include/d/lubeck -I/usr/local/include/d/mir-algorithm -I/usr/local/include/d/mir-core -I/usr/local/include/d/mir-blas -I/usr/local/include/d/mir-lapack -I/usr/local/include/d/mir-random -enable-color -wi -dw -g -d-debug -of='demo_proj@exe/app.d.o' -c ../app.d /usr/local/include/d/lubeck/lubeck.d(8): Error: module cblas is in file 'cblas.d' which cannot be read import path[0] = demo_proj@exe import path[1] = . import path[2] = .. import path[3] = /usr/local/include/d/lubeck import path[4] = /usr/local/include/d/mir-algorithm import path[5] = /usr/local/include/d/mir-core import path[6] = /usr/local/include/d/mir-blas import path[7] = /usr/local/include/d/mir-lapack import path[8] = /usr/local/include/d/mir-random import path[9] = /usr/include/dlang/ldc ninja: build stopped: subcommand failed. cblas.d is located in /usr/local/include/d/cblas/cblas/cblas.d I added /usr/local/include/d/cblas/cblas to $PATH but that didn't help :(
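Regarding the import error at the end: $PATH has no effect on D import lookup; ldc needs an `-I` flag pointing at the directory that contains cblas.d. If cblas ships no pkg-config file, one workaround is to pass that directory to the compiler directly. A sketch against the meson.build from this thread (the path is the one from the error message; whether your cblas layout matches is an assumption):

```meson
# Sketch: hand ldc the import dir for cblas.d via d_args,
# since there is no cblas dependency() to declare.
ed = executable('demo_proj', 'app.d',
    dependencies: required_deps,
    d_args: ['-I=/usr/local/include/d/cblas/cblas'],
    install : true)
```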
Re: How to create meson.build with external libs?
On Monday, 13 January 2020 at 20:53:43 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 19:56:35 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 17:14:29 UTC, p.shkadzko wrote: [...] I had to set PKG_CONFIG_PATH to "/usr/local/lib/pkgconfig". For some reason the Manjaro distro doesn't set it by default. After setting the pkgconfig path, lubeck is found and everything works. After I ran "meson build" I got the following output: --- The Meson build system Version: 0.52.1 Source dir: /home/tastyminerals/dev/test_proj Build dir: /home/tastyminerals/dev/test_proj/build Build type: native build Project name: mir_quickstart Project version: 0.1 D compiler for the host machine: ldc2 (llvm 1.18.0 "LDC - the LLVM D compiler (1.18.0):") D linker for the host machine: GNU ld.gold 2.33.1 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.6.3) Run-time dependency mir-algorithm found: YES 3.4.0 Run-time dependency lubeck found: YES 1.0.0 Build targets in project: 1 Found ninja-1.9.0 at /usr/bin/ninja I then cd to the build/ dir and run "ninja" [2/2] Linking target test_proj. But when I try to run the compiled app.d file as demo_run, I get: ./demo_run: error while loading shared libraries: libmir-algorithm.so: cannot open shared object file: No such file or directory I thought that if meson builds and finds the external libs successfully and ninja links everything, all should be fine. I don't understand. Why can't the compiled file find the external library? $LD_LIBRARY_PATH was not set to "/usr/local/lib" where libmir-algorithm.so is located. I had to set it explicitly and recompile the project but then I get another error message: ./demo_proj: error while loading shared libraries: subprojects/mir-algorithm/libmir-algorithm.so.3.4.0: cannot open shared object file: No such file or directory (ಠ_ಠ)
Re: How to create meson.build with external libs?
On Monday, 13 January 2020 at 19:56:35 UTC, p.shkadzko wrote: On Monday, 13 January 2020 at 17:14:29 UTC, p.shkadzko wrote: On Sunday, 12 January 2020 at 22:12:14 UTC, Rasmus Thomsen wrote: On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote: [...] Unlike dub, meson will _not_ auto-download required software for you. You have two ways to go forward with this: [...] I followed step 1, namely git cloned lubeck and installed it with meson into /usr/local; however, I still get the same error. Could I have missed anything? I had to set PKG_CONFIG_PATH to "/usr/local/lib/pkgconfig". For some reason the Manjaro distro doesn't set it by default. After setting the pkgconfig path, lubeck is found and everything works. After I ran "meson build" I got the following output: --- The Meson build system Version: 0.52.1 Source dir: /home/tastyminerals/dev/test_proj Build dir: /home/tastyminerals/dev/test_proj/build Build type: native build Project name: mir_quickstart Project version: 0.1 D compiler for the host machine: ldc2 (llvm 1.18.0 "LDC - the LLVM D compiler (1.18.0):") D linker for the host machine: GNU ld.gold 2.33.1 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.6.3) Run-time dependency mir-algorithm found: YES 3.4.0 Run-time dependency lubeck found: YES 1.0.0 Build targets in project: 1 Found ninja-1.9.0 at /usr/bin/ninja I then cd to the build/ dir and run "ninja" [2/2] Linking target test_proj. But when I try to run the compiled app.d file as demo_run, I get: ./demo_run: error while loading shared libraries: libmir-algorithm.so: cannot open shared object file: No such file or directory I thought that if meson builds and finds the external libs successfully and ninja links everything, all should be fine. I don't understand. Why can't the compiled file find the external library?
Re: Compilation error: undefined reference to 'cblas_dgemv' / 'cblas_dger' / 'cblas_dgemm'
On Sunday, 12 January 2020 at 13:07:33 UTC, dnsmt wrote: On Saturday, 11 January 2020 at 16:45:22 UTC, p.shkadzko wrote: I am trying to run example code from https://tour.dlang.org/tour/en/dub/lubeck ... This is Linux Manjaro with openblas package installed. The Lubeck library depends on CBLAS, but the openblas package in the Arch repository is compiled without CBLAS. You can see that here (note the NO_CBLAS=1 parameter): https://git.archlinux.org/svntogit/community.git/tree/trunk/PKGBUILD?h=packages/openblas Try installing the cblas package: https://www.archlinux.org/packages/extra/x86_64/cblas/ Yeah, that's what I thought too, but I have CBLAS installed as a distro package. I also have OpenBLAS installed instead of BLAS because BLAS was removed since it conflicts with OpenBLAS ¯\_(ツ)_/¯
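One way to settle who is right here is to inspect the installed library's dynamic symbol table. A diagnostic sketch; the library path is an assumption, adjust it for your system:

```shell
# Check whether the installed OpenBLAS actually exports the CBLAS
# interface (the Arch/Manjaro openblas package is built with NO_CBLAS=1,
# so the cblas_* symbols may be absent).
lib=/usr/lib/libopenblas.so
if [ -e "$lib" ] && nm -D "$lib" 2>/dev/null | grep -q cblas_dgemm; then
    msg="CBLAS symbols present in $lib"
else
    msg="CBLAS symbols missing; link the separate cblas library instead"
fi
echo "$msg"
```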
Re: How to create meson.build with external libs?
On Monday, 13 January 2020 at 17:14:29 UTC, p.shkadzko wrote: On Sunday, 12 January 2020 at 22:12:14 UTC, Rasmus Thomsen wrote: On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote: [...] Unlike dub, meson will _not_ auto-download required software for you. You have two ways to go forward with this: [...] I followed step 1, namely git cloned lubeck and installed it with meson into /usr/local; however, I still get the same error. Could I have missed anything? I had to set PKG_CONFIG_PATH to "/usr/local/lib/pkgconfig". For some reason the Manjaro distro doesn't set it by default. After setting the pkgconfig path, lubeck is found and everything works.
Re: How to create meson.build with external libs?
On Sunday, 12 January 2020 at 22:12:14 UTC, Rasmus Thomsen wrote: On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote: [...] Unlike dub, meson will _not_ auto-download required software for you. You have two ways to go forward with this: [...] I followed step 1, namely git cloned lubeck and installed it with meson into /usr/local; however, I still get the same error. Could I have missed anything?
Re: How to create meson.build with external libs?
On Sunday, 12 January 2020 at 22:12:14 UTC, Rasmus Thomsen wrote: On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote: [...] Unlike dub, meson will _not_ auto-download required software for you. You have two ways to go forward with this: [...] Thanks! I shall try it out.
Re: How to create meson.build with external libs?
On Sunday, 12 January 2020 at 22:12:14 UTC, Rasmus Thomsen wrote: On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote: What do I need to do in order to build the project with "lubeck" dependency in meson? Unlike dub, meson will _not_ auto-download required software for you. You have two ways to go forward with this: 1 (IMHO the better way, especially if you ever want a distro to package your thing): Install lubeck via `git clone https://github.com/kaleidicassociates/lubeck && cd lubeck && meson build && ninja -C build install`. This will install lubeck to your system (by default into `/usr/local`; you can set a different prefix by passing `--prefix` to meson). This will generate a so-called pkg-config (`.pc`) file: https://github.com/kaleidicassociates/lubeck/blob/master/meson.build#L49 which meson will discover. 2 (The probably easier way in the short term): Install lubeck via meson, then discover the dependency as specified here: https://mesonbuild.com/Dependencies.html#dependency-method Why do you think 1 is the better way? I feel like it is a lot of manual work for just one dependency. Also, it is not a good idea to pollute your /usr/local with non-distro packages.
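Following way 1, the later posts in this thread show the step that trips people up on Manjaro: pkg-config does not search /usr/local/lib/pkgconfig by default. A shell sketch of the check (paths as in the thread):

```shell
# After "ninja -C build install" into /usr/local, make sure pkg-config
# actually searches the prefix's pkgconfig dir (Manjaro does not by default):
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}

# Sanity check: prints the installed version if lubeck.pc was found.
pkg-config --modversion lubeck || echo "lubeck.pc not found"
```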
How to create meson.build with external libs?
Ok, I am trying out meson and am struggling with the meson.build file. I looked up the examples page: https://github.com/mesonbuild/meson/tree/master/test%20cases/d which has a lot of examples but not one that shows you how to build your project with some external dependency :) Let's say we have a simple dir "myproj" with "meson.build" in it and some source files like "app.d" and "helper_functions.d". ~/myproj app.d helper_functions.d meson.build "helper_functions.d" uses, let's say, the lubeck library which according to https://forum.dlang.org/thread/nghoprwkihazjikyh...@forum.dlang.org is supported by meson. Here is my meson.build: --- project('demo', 'd', version : '0.1', default_options : ['warning_level=3'] ) lubeck = dependency('lubeck', version: '>=1.1.7') ed = executable('mir_quickstart', 'app.d', dependencies: lubeck, install : true) However, when I try to build it I get the following error: - $ meson build The Meson build system Version: 0.52.1 Source dir: /home/user/dev/github/demo Build dir: /home/user/dev/github/demo/build Build type: native build Project name: demo Project version: 0.1 D compiler for the host machine: ldc2 (llvm 1.18.0 "LDC - the LLVM D compiler (1.18.0):") D linker for the host machine: GNU ld.gold 2.33.1 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.6.3) Found CMake: /usr/bin/cmake (3.16.2) Run-time dependency lubeck found: NO (tried pkgconfig and cmake) meson.build:8:0: ERROR: Dependency "lubeck" not found, tried pkgconfig and cmake A full log can be found at /home/user/dev/github/demo/build/meson-l - What do I need to do in order to build the project with the "lubeck" dependency in meson?
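For completeness: besides installing lubeck system-wide so pkg-config can find it, meson can also fall back to a bundled subproject when the system dependency is missing. A sketch only; it assumes a lubeck copy exists under subprojects/ and that its meson.build exposes a dependency variable named `lubeck_dep` (both names are assumptions, check the upstream meson.build):

```meson
# Use the system lubeck if pkg-config finds it,
# otherwise build the bundled subprojects/lubeck copy.
lubeck = dependency('lubeck', version : '>=1.1.7',
    fallback : ['lubeck', 'lubeck_dep'])
```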
Compilation error: undefined reference to 'cblas_dgemv' / 'cblas_dger' / 'cblas_dgemm'
I am trying to run example code from https://tour.dlang.org/tour/en/dub/lubeck example.d: --- /+dub.sdl: dependency "lubeck" version="~>1.1" +/ import lubeck: mtimes; import mir.algorithm.iteration: each; import mir.ndslice; import std.stdio: writeln; void main() { auto n = 5; // Magic Square auto matrix = n.magic.as!double.slice; // [1 1 1 1 1] auto vec = 1.repeat(n).as!double.slice; // Uses CBLAS for multiplication matrix.mtimes(vec).writeln; "-".writeln; matrix.mtimes(matrix).byDim!0.each!writeln; } --- I try to compile it via: --- dub build --compiler="ldc" --single example.d -v --- And get the error below: --- Linking... ldc -of.dub/build/application-debug-linux.posix-x86_64-ldc_2088-588551E77C1C779CBF3BECA58D19211B/matrix_dot .dub/build/application-debug-linux.posix-x86_64-ldc_2088-588551E77C1C779CBF3BECA58D19211B/matrix_dot.o ../../../../../.dub/packages/lubeck-1.1.7/lubeck/.dub/build/library-debug-linux.posix-x86_64-ldc_2088-09723F6E7A90ABDEB9EDD35B9DC7E7CE/liblubeck.a ../../../../../.dub/packages/mir-lapack-1.2.1/mir-lapack/.dub/build/library-debug-linux.posix-x86_64-ldc_2088-69D3CA650230A9C73F912A6D1AB44EE1/libmir-lapack.a ../../../../../.dub/packages/mir-blas-1.1.9/mir-blas/.dub/build/library-debug-linux.posix-x86_64-ldc_2088-9B27CD5D2C27A78F627CDC352C376E52/libmir-blas.a ../../../../../.dub/packages/mir-algorithm-3.7.13/mir-algorithm/.dub/build/default-debug-linux.posix-x86_64-ldc_2088-255337055B86988DA608A6F2BE06058A/libmir-algorithm.a ../../../../../.dub/packages/mir-core-1.0.2/mir-core/.dub/build/library-debug-linux.posix-x86_64-ldc_2088-554F8271B5559801959D6785138A5389/libmir-core.a
-L--no-as-needed -L-lopenblas -g /home/tastyminerals/dev/github/mir_quickstart/source/benchmarks/../../../../../.dub/packages/mir-blas-1.1.9/mir-blas/source/mir/blas.d:305: error: undefined reference to 'cblas_dgemv' /home/tastyminerals/dev/github/mir_quickstart/source/benchmarks/../../../../../.dub/packages/mir-blas-1.1.9/mir-blas/source/mir/blas.d:210: error: undefined reference to 'cblas_dger' /home/tastyminerals/dev/github/mir_quickstart/source/benchmarks/../../../../../.dub/packages/mir-blas-1.1.9/mir-blas/source/mir/blas.d:385: error: undefined reference to 'cblas_dgemm' collect2: error: ld returned 1 exit status Error: /usr/bin/gcc failed with status: 1 --- This is Linux Manjaro with openblas package installed.
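Since every missing symbol is a cblas_* function, the linker is finding OpenBLAS but not a CBLAS interface (Arch's openblas is built with NO_CBLAS=1, as noted in a later reply). One hedged workaround, assuming the standalone Arch cblas package (which provides libcblas.so) is installed, is to link it explicitly in the embedded dub recipe:

```d
/+dub.sdl:
dependency "lubeck" version="~>1.1"
// assumption: the separate "cblas" distro package (libcblas.so) is installed;
// "libs" makes dub pass -lcblas to the linker
libs "cblas"
+/
```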
Re: What kind of Editor, IDE you are using and which one do you like for D language?
On Sunday, 29 December 2019 at 14:41:46 UTC, Russel Winder wrote: The more the D community advertise that IDEs are for wimps, the less likelihood that people will come to D usage. That is so. And yet I can't use Java or Scala without an IDE, and I've tried. I believe the same is true for C++.
Re: What kind of Editor, IDE you are using and which one do you like for D language?
On Sunday, 22 December 2019 at 17:20:51 UTC, BoQsc wrote: There are lots of editors/IDE's that support D language: https://wiki.dlang.org/Editors What kind of editor/IDE are you using and which one do you like the most? I tried almost all of the ones that support D, including Dlang IDE, Dexed, Poseidon, and Zeus, but VS Code is currently the best among the fat IDEs. I used VS Code for a while but eventually found it cumbersome and taking up too much space. Just look at those lovely GBs scattered around your system! I then switched entirely to Vim and have never been happier. P.S. I find it quite satisfying that D does not really need an IDE; you will be fine even with nano.