Re: Problem Computing Dot Product with mir

2021-02-22 Thread 9il via Digitalmars-d-learn

On Tuesday, 23 February 2021 at 03:48:15 UTC, Max Haughton wrote:

On Monday, 22 February 2021 at 07:14:26 UTC, 9il wrote:
On Sunday, 21 February 2021 at 16:18:05 UTC, Kyle Ingraham 
wrote:
I am trying to convert sRGB pixel values to XYZ with mir 
using the following guide: 
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html


[...]


mir-glas is a deprecated experimental project. It is worth using 
mir-blas or lubeck instead. There is also a naming issue: in 
classic BLAS naming, `dot` refers to a function that accepts two 
1D vectors.


Deprecated as in formally dead or postponed?


Postponed until someone wishes to invest in a BLAS library in D.


Re: Problem Computing Dot Product with mir

2021-02-21 Thread 9il via Digitalmars-d-learn

On Sunday, 21 February 2021 at 16:18:05 UTC, Kyle Ingraham wrote:
I am trying to convert sRGB pixel values to XYZ with mir using 
the following guide: 
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html


[...]


mir-glas is a deprecated experimental project. It is worth using 
mir-blas or lubeck instead. There is also a naming issue: in 
classic BLAS naming, `dot` refers to a function that accepts two 
1D vectors.


Re: Using mir to work with matrices

2021-01-29 Thread 9il via Digitalmars-d-learn

On Friday, 29 January 2021 at 15:35:49 UTC, drug wrote:
By the way, is there a plan to implement some sort of static slice 
where the lengths of the dimensions are known at compile time? 
Compiler help is very useful.


No. BLAS/LAPACK APIs can't use compile-time information. User 
matrix loops can be optimized by the compiler using constants and 
without introducing new types. If you need a stack-allocated 
matrix, then a 1D stack-allocated array can be used:


import mir.ndslice.slice;

double[12] payload;
auto matrix = payload[].sliced(3, 4);



Re: Using mir to work with matrices

2021-01-29 Thread 9il via Digitalmars-d-learn

On Tuesday, 26 January 2021 at 14:43:08 UTC, drug wrote:
It is not easy to understand which mir library one should use to 
work with matrices. mir-glas turns out to be unsupported now, so I 
am trying to use mir-blas. I need to reimplement my Kalman filter 
version to use higher-dimension matrices than 4x4, plus the 
Kronecker product. Is mir-blas recommended for working with 
matrices?


Yes, it is a wrapper around common BLAS libraries such as 
OpenBLAS or Intel MKL.




Re: mir.algebraic: Visitor cannot be called

2020-12-09 Thread 9il via Digitalmars-d-learn

On Wednesday, 9 December 2020 at 14:34:18 UTC, Andre Pany wrote:

Hi,

I want to port some Python code and try to keep the D code as 
similar as possible.

I thought I could have a mir variant which stores either class A 
or B, and call a method at runtime like this:

```
/+ dub.sdl:
name "app"
dependency "mir-core" version="1.1.51"
+/

import std.stdio: writeln;
import mir.algebraic;

class A {
void foo(int i){writeln("A.foo");}
}

class B {
void foo(int i, string s){writeln("B.foo");}
}

void main() {
Variant!(A,B) v = new A();
v.foo(3);
}
```

But it fails with:
Error: static assert:  "Algebraic!(A, B): the visitor cann't be 
caled with arguments (B, int)"


The error message seems strange. Is the behavior I want somehow 
possible?
(At runtime I know whether I have an object of A or B and will 
only call

with the correct method signature).

Kind regards
André


For .member access, mir.algebraic checks at compile time that all 
underlying types (except typeof(null)) can be called with the 
provided arguments. It is a kind of API protection.


Alternatives:

With compile-time known type
```
v.get!A.foo(3); // will throw if it isn't A
v.trustedGet!A.foo(3); // will assert if it isn't A
```

Without compile-time known type
```
v.tryGetMember!"foo"(3); // will throw if it isn't A
v.optionalGetMember!"foo"(3); // will return a null Nullable!void 
if it isn't A

```

tryGetMember and optionalGetMember are alternative visitor 
handlers in mir.algebraic.
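
Putting the pieces together, a runnable sketch of the original 
program using tryGetMember (assembled from the snippets above; you 
may need a newer mir-core than the version in the question):

```
/+ dub.sdl:
name "app"
dependency "mir-core" version="~>1.1.51"
+/
import std.stdio: writeln;
import mir.algebraic;

class A { void foo(int i) { writeln("A.foo"); } }
class B { void foo(int i, string s) { writeln("B.foo"); } }

void main() {
    Variant!(A, B) v = new A();
    // dispatches to A.foo at runtime;
    // throws if v holds a B, because B.foo can't be called with a single int
    v.tryGetMember!"foo"(3);
}
```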


Kind regards,
Ilya



Re: mir.ndslice : multi dimensional associative array - Example/Docs

2020-10-26 Thread 9il via Digitalmars-d-learn

On Monday, 26 October 2020 at 14:31:00 UTC, Vino wrote:

Hi All,

  Is it possible to create a multi dimensional associative 
array using mir.ndslice,

  if yes,
   (1): request you to point me to some example / docs
   (2): below is an example multi dimensional associative array 
using the core d module, and

how can we implement the same using mir.ndslice?
   (3): What are the pros and cons of using mir.ndslice over 
the core d module.


import std.stdio;
void main () {
   string[int][string] aa;
   aa["Name"] = [1: "test01", 2:"test02"];
   aa["Pool"] = [1: "Development", 2:"Quality"];
   foreach(i; aa["Pool"].byValue) { writeln(i); } 
}

From,
Vino.B


No. ndslice provides rectangular arrays. An associative array (as 
it is defined in D) can't really be 2D; instead, it is an AA of 
AAs like in your example. The real 2D analog of associative arrays 
is a DataFrame, which is a postponed WIP ndslice feature.


Ilya


Re: Help on asdf json module

2020-10-25 Thread 9il via Digitalmars-d-learn

On Sunday, 25 October 2020 at 06:05:27 UTC, Vino wrote:

Hi All,

   Currently we are testing various JSON modules such as 
std.json, std_data_json, vibe.data.json and asdf. The code below 
works perfectly when using std_data_json or vibe.data.json, but 
it does not work as expected when we use asdf and throws the 
error below, hence we request your help on the same.


[...]


Hi Vino,

byElement should be used here for ASDF.

foreach(j; jv["items"].byElement)

http://asdf.libmir.org/asdf_asdf.html#.Asdf.byElement
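
For completeness, a minimal runnable sketch (the asdf version 
number is illustrative):

```
/+dub.sdl:
dependency "asdf" version="~>0.7.0"
+/
import asdf;
import std.stdio: writeln;

void main()
{
    auto jv = parseJson(`{"items": [1, 2, 3]}`);
    foreach (j; jv["items"].byElement)
        writeln(j); // prints the three elements one per line
}
```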


Re: How to work with and load/save sparse compressed matrices? (numir, mir.sparse)

2020-10-04 Thread 9il via Digitalmars-d-learn
On Tuesday, 29 September 2020 at 04:52:11 UTC, Shaleen Chhabra 
wrote:
I wish to use load / save for sparse compressed matrices using 
mir.


import mir.sparse;
auto sp = sparse!double(5, 8);
auto crs = sp.compress;


How can I save/load sparse compressed arrays in `npz` format?
(format: ``csc``, ``csr``, ``bsr``, ``dia`` or ``coo``)


Mir doesn't have I/O support for sparse tensors for now.


how can i again decompress the compressed sparse array to dense?


// 
http://mir-algorithm.libmir.org/mir_ndslice_allocation.html#.slice

import mir.ndslice.allocation: slice;

// for DOK:
auto dense = slice(sp);

For crs you may need to iterate with byCoordinateValue
http://mir.libmir.org/mir_sparse.html#.byCoordinateValue
and initialize the dense slice.

To estimate the row length, one may need to iterate over all rows 
and get the maximum of the last element indices.
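
A rough sketch of both paths (the `mir` package version is 
illustrative, and the element layout of byCoordinateValue — an 
index array plus a value — is an assumption; check the links 
above):

```
/+dub.sdl:
dependency "mir" version="~>3.2.0"
+/
import mir.ndslice.allocation: slice;
import mir.sparse;

void main()
{
    auto sp = sparse!double(5, 8);
    sp[2, 3] = 1.5;
    sp[4, 7] = 2.0;

    // DOK: direct conversion to a dense slice
    auto dense = slice(sp);
    assert(dense[2, 3] == 1.5);

    // manual conversion by iterating coordinate/value pairs
    auto dense2 = slice!double([5, 8], 0.0);
    foreach (e; sp.byCoordinateValue)
        dense2[e.index[0], e.index[1]] = e.value; // assumed element layout
    assert(dense2 == dense);
}
```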




Thanks
Shaleen





Re: Red-Black Gauss-seidel with mir

2020-09-14 Thread 9il via Digitalmars-d-learn

On Monday, 14 September 2020 at 09:50:16 UTC, Christoph wrote:

Hi Ilya,

On Sunday, 13 September 2020 at 19:29:31 UTC, 9il wrote:

[...]


I have tested it with dmd and ldc and called them just with
$ dub build --compiler=ldc(dmd)
with no more configurations in the dub.json file.

[...]



For release performance, it should be built in release mode
```
dub build --build=release --compiler=ldc2
```
I expect it will speed up the slow version a few times.

Also, the slow version has a few times more memory accesses than 
the fast version and the Python code. The improvement would make 
it look more like the C code and require inner loops.


Your fast version looks good to me. If it is correct, it is very 
good.


Re: Red-Black Gauss-seidel with mir

2020-09-13 Thread 9il via Digitalmars-d-learn

On Sunday, 13 September 2020 at 14:48:30 UTC, Christoph wrote:

Hi all,

I am trying to implement a sweep method for a 2D Red-black 
Gauss-Seidel Solver with the help of mir and its slices.

The fastest Version I discovered so far looks like this:
```
void sweep(T, size_t Dim : 2, Color color)(in Slice!(T*, 2) F, 
Slice!(T*, 2) U, T h2)

{
const auto m = F.shape[0];
const auto n = F.shape[1];
auto UF = U.field;
auto FF = F.field;

[...]


Hi Christoph,

More details are required. What compiler and command line have 
been used?

The full source of the benchmark would be helpful.

Kind regards,
Ilya


Re: How to use libmir --> mir-algorithm, numir, mir-random?

2020-09-02 Thread 9il via Digitalmars-d-learn
On Wednesday, 2 September 2020 at 07:01:48 UTC, Shaleen Chhabra 
wrote:

Hi,

The libmir libraries can be found here: 
https://github.com/libmir


I wish to use mir-algorithm and numir so that i can directly 
use .npy format from python and perform the required analysis.


I checked out latest commits of each of the libraries mentioned 
--> mir-algorithm, mir-random and numir.


But they don't seem to build together. What are the correct 
dependencies for each library?


You can just import numir; it will automatically include 
mir-algorithm, mir-core, and mir-random.


https://github.com/libmir/numir/blob/master/dub.json#L9

TASK: how can I read/write mir.ndslice matrices, and in what 
preferred format? An example would be good. I also wish to 
read/write in .npy format; how can I do this?


import std.stdio;
import mir.ndslice;

void main() {
 auto mat = [[1, 2, 3],
 [4, 5, 6],
 [7, 8, 9]].fuse;

 writefln("%(%(%d %)\n%)", mat);
 writeln();

 writefln("[%(%(%d %)\n %)]", mat);
 writeln();

 writefln("[%([%(%d %)]%|\n %)]", mat);
 writeln();
}

See also https://dlang.org/phobos/std_format.html



Re: How to sort a multidimensional ndslice?

2020-08-19 Thread 9il via Digitalmars-d-learn

On Tuesday, 18 August 2020 at 13:07:56 UTC, Arredondo wrote:

On Tuesday, 18 August 2020 at 04:07:56 UTC, 9il wrote:

To reorder the columns data according to precomputed index:
auto index = a.byDim!1.map!sum.slice;


Hello Ilya, thanks for the answer!

Unfortunately I can't use it because I don't have (and can't 
define) a sorting index for my columns. I only have a predicate 
`larger(c1, c2)` that compares two columns to decide which one 
is "larger".


Cheers!
Armando.


This should work, but it reallocates the data.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import std.stdio;

import mir.array.allocation;
import mir.ndslice;
import mir.ndslice.sorting;

void main() {
auto a = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;

writeln(a);
auto b = a.byDim!1.array;
b.sort!larger;
auto c = b.fuse!1;
writeln(c);
}

auto larger(C)(C u, C v) {
import mir.math.sum : sum;
return sum(u) > sum(v);
}



Re: How to sort a multidimensional ndslice?

2020-08-17 Thread 9il via Digitalmars-d-learn

The following code just sorts each row:

--
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;
import mir.ndslice.sorting;
import mir.algorithm.iteration: each;

void main() {
// fuse, not sliced if you use an array of arrays for argument
auto a = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;

// sort each row in place
a.byDim!0.each!(sort!larger);

import std.stdio;
writeln(a);
  // [[3, 2, 1, -1], [3, 1, 0, -2]]
}

auto larger(C)(C u, C v) {
return u > v;
}
--



To reorder the columns data according to precomputed index:

--
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;
import mir.series;
import mir.ndslice.sorting;
import mir.algorithm.iteration: each;
import mir.math.sum;

void main() {
// fuse, not sliced if you use an array of arrays for argument
auto a = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;

// make an index
auto index = a.byDim!1.map!sum.slice;

auto s = index.series(a.transposed);
auto indexBuffer = uninitSlice!int(s.length);
auto dataBuffer = uninitSlice!int(s.length);
sort!larger(s, indexBuffer, dataBuffer);

import std.stdio;
writeln(a);
   /// [[3, 2, 1, -1], [3, 1, 0, -2]]
}

auto larger(C)(C u, C v) {
return u > v;
}

--


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:37:23 UTC, jmh530 wrote:

On Wednesday, 15 July 2020 at 11:26:19 UTC, 9il wrote:

[snip]


@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.


I hadn't realized that @fmamath was the problem, rather than 
@fastmath overall. @fmamath is used on many mir.math.stat 
functions, though admittedly not in the accumulators.


Ah, no, my bad! You wrote @fmamath; I read it as @fastmath. 
@fmamath is OK here.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:23:00 UTC, jmh530 wrote:

On Wednesday, 15 July 2020 at 05:57:56 UTC, tastyminerals wrote:

[snip]

Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)


I changed the dflags-ldc to "-mcpu=native -O" and compiled with 
`dub run --compiler=ldc2`. I got similar results to yours for 
both in the initial run.


I changed sd to

@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.



Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:34:59 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

[...]


Good to know. So, it's fine to use it with sum!"fast" but 
better avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. 
Mir algorithms are more precise by default than the algorithms 
you have provided.


Right. Is this why standardDeviation is significantly slower?


Yes. It allows you to pick a summation option; you can try options 
other than the default in benchmarks.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast" but 
better avoid it for general purposes.


They both are more precise by default.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast" but 
better avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. Mir 
algorithms are more precise by default than the algorithms you 
have provided.


Re: Question about publishing a useful function I have written

2020-07-14 Thread 9il via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 21:58:49 UTC, Cecil Ward wrote:



Does anyone know if this has already been published by someone 
else?




https://github.com/libmir/mir-core/blob/master/source/mir/utility.d#L29

We test LDC and DMD. CI needs an update to actually test 
with GDC.


Re: misc questions about a DUB package

2020-07-14 Thread 9il via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 20:56:06 UTC, DanielG wrote:
I have some D-wrapped C libraries I'm considering publishing to 
DUB, mainly for my own use but also for anybody else who might 
benefit. I've never done this before so I have some questions:


- Should there be any obvious relationship between the DUB 
package version and the version of the C library? What are the 
best practices for connecting the two, if at all?


No. Usually, a DUB package supports a range of C library versions 
or just a fixed set of C APIs. The versioning behavior of the DUB 
package is up to you. Usually, the D API changes more frequently 
than the underlying C library.


- I'm very much about the "full fat" D experience, so I 
presently have no intention of designing my D wrappers for 
betterC/the-runtime-is-lava usage. That being the case, is 
there any compelling reason to avoid initializing the C library 
in a 'shared static this()' method? (ie automatically)


Yes. For large projects and services, it may be required to 
initialize a shared library with a function call instead of at 
startup. Furthermore, a missing dependency may not be a fatal 
issue for a service. On the other hand, it all depends on your use 
case.


Re: D Mir: standard deviation speed

2020-07-14 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or may 
not behave like "fast".


`mean` is a summation algorithm too


Re: D Mir: standard deviation speed

2020-07-14 Thread 9il via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or may 
not behave like "fast".


What is the best CI to test with (almost) latest GDC?

2020-07-14 Thread 9il via Digitalmars-d-learn
For now, Mir doesn't really support GDC, but we want to. Is there 
a clear way to get a specific version of GDC? Is there a 
table of GDC compilers with corresponding DMD FE versions? 
dlang.org refers to a deprecated page, which is weird.


real.mant_dig on windows?

2020-06-22 Thread 9il via Digitalmars-d-learn

Should it always be 53, or can it be 64, and when?

Thank you


Re: Mir Slice Column or Row Major

2020-05-27 Thread 9il via Digitalmars-d-learn

On Wednesday, 27 May 2020 at 16:53:37 UTC, jmh530 wrote:

On Wednesday, 27 May 2020 at 16:07:58 UTC, welkam wrote:
On Wednesday, 27 May 2020 at 01:31:23 UTC, data pulverizer 
wrote:

column major


Cute puppies die when people access their arrays in column 
major.


Not always true...many languages support column-major order 
(Fortran, most obviously). The Eigen C++ library allows the 
user to specify row major or column major. I had brought this 
up with Ilya early on in mir and he thought it would increase 
complexity to allow both and could also require more memory. So 
mir is row major.


Actually, it is a question of notation. For example, mir-lapack 
uses ndslice as column-major Fortran arrays. This may cause some 
headaches because the data needs to be transposed in one's mind. 
We can think of ndslice as column-major nd-arrays with the 
reversed order of indexing.
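
For example, a minimal sketch of viewing Fortran-ordered data that 
way (nothing here is mir-lapack specific):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;

void main()
{
    // column-major (Fortran) storage of the 2x3 matrix
    // [[1, 2, 3],
    //  [4, 5, 6]]
    auto fortranData = [1.0, 4, 2, 5, 3, 6];

    // wrap it as a 3x2 row-major slice and reverse the order of indexing:
    // no data is copied, only the indexing is transposed
    auto m = fortranData.sliced(3, 2).transposed;
    assert(m[0, 1] == 2);
    assert(m[1, 2] == 6);
}
```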


The current template looks like

Slice(Iterator, size_t N = 1, SliceKind kind = Contiguous)

If we add a special column-major notation, then it will look like

Slice(Iterator, size_t N = 1, SliceKind kind = Contiguous, 
PayloadOrder = RowMajor)


A PR that adds this feature will be accepted.



Re: Mir Slice.shape is not consistent with the actual array shape

2020-05-24 Thread 9il via Digitalmars-d-learn

On Sunday, 24 May 2020 at 14:17:33 UTC, Pavel Shkadzko wrote:

I am confused by the return value of Mir shape.
Consider the following example.

///
import std.stdio;
import std.conv;
import std.array: array;
import std.range: chunks;
import mir.ndslice;

int[] getShape(T : int)(T obj, int[] dims = null)
{
return dims;
}

// return arr shape
int[] getShape(T)(T obj, int[] dims = null)
{
dims ~= obj.length.to!int;
return getShape!(typeof(obj[0]))(obj[0], dims);
}

void main() {
int[] arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
15, 16];

int[][][] a = arr.chunks(4).array.chunks(2).array;

writeln(arr);
writeln(arr.shape);

auto arrSlice = arr.sliced;
writeln(arrSlice);
writeln(arrSlice.shape);

}
///

[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 
16]]]

[2, 2, 4] <-- correct shape
[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 
16]]]

[2] <-- which shape is that?

I would expect sliced to create a Slice with the same dims. 
Well, sliced returns a shell over the array, but why does it 
return its own shape instead of the shape of the array it 
provides view into? This makes it even more confusing once you 
print both representations.

What's the rationale here?


BTW, the code example above doesn't compile.

OT:
Instead of

int[] arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
15, 16];

int[][][] a = arr.chunks(4).array.chunks(2).array;


you can generate the same common D array using Mir:

auto a = [2, 2, 4].iota!int(1).ndarray;



Re: Mir Slice.shape is not consistent with the actual array shape

2020-05-24 Thread 9il via Digitalmars-d-learn

On Sunday, 24 May 2020 at 14:17:33 UTC, Pavel Shkadzko wrote:

I am confused by the return value of Mir shape.
Consider the following example.

[...]


`sliced` returns a view of the array data: a 1-dimensional slice 
whose elements are the common D arrays. Try to use `fuse` instead 
of `sliced`.


Re: How to flatten N-dimensional array?

2020-05-24 Thread 9il via Digitalmars-d-learn

On Saturday, 23 May 2020 at 18:15:32 UTC, Pavel Shkadzko wrote:
I have tried to implement a simple flatten function for 
multidimensional arrays with recursive templates but got stuck. 
Then I googled a little and stumped into complex 
https://rosettacode.org/wiki/Flatten_a_list#D implementation 
which requires your arrays to be either TreeList or Algebraic. 
That is, it does not work out-of-the-box on something like 
int[][][].


I'd like to clarify a couple of questions first.

How come Phobos doesn't have "flatten" function for arrays?

Is there an implementation of flatten that works out-of-the-box 
on N-dim arrays outside of Phobos? Excluding "flattened" from 
mir.ndslice since it works on Slices.


If the common nd-array isn't jagged (i.e. it is a parallelotope), 
you can use the fuse function.


--
/+dub.sdl:
dependency "mir-algorithm" version="~>3.8.12"
+/
import std.stdio: writeln;
import mir.ndslice;

void main() {
auto arr =
[[[0, 1, 2, 3, 4],
  [5, 6, 7, 8, 9]],
 [[10, 11, 12, 13, 14],
  [15, 16, 17, 18, 19]],
 [[20, 21, 22, 23, 24],
  [25, 26, 27, 28, 29]]];
auto flatten = arr.fuse.field;

static assert(is(typeof(flatten) == int[]));
assert(flatten == 30.iota);
}
--

It performs exactly one allocation.


Re: Cross product in lib mir or lubeck

2020-05-01 Thread 9il via Digitalmars-d-learn

On Friday, 1 May 2020 at 11:29:55 UTC, Erdem wrote:

Hi,

I am looking for cross product function in libmir or lubeck. 
But I couldn't find it.

Does anyone know if it exists or not?

Erdem


Hi,

Libmir doesn't provide a cross-product function.
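
A 3D cross product is easy to hand-roll on top of ndslice, though; 
a minimal sketch (the `cross` helper below is hypothetical, not 
part of Libmir):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;

// hypothetical helper: cross product of two 3-element 1D slices
auto cross(T)(Slice!(T*, 1) u, Slice!(T*, 1) v)
{
    assert(u.length == 3 && v.length == 3);
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ].sliced;
}

void main()
{
    auto a = [1.0, 0, 0].sliced;
    auto b = [0.0, 1, 0].sliced;
    assert(cross(a, b) == [0.0, 0, 1]);
}
```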

Ilya


Re: Why libmir has to add its own algorithm functions

2020-05-01 Thread 9il via Digitalmars-d-learn

On Friday, 1 May 2020 at 11:31:29 UTC, Erdem wrote:

As can be seen in the link below :

http://mir-algorithm.libmir.org/mir_algorithm_iteration.html

Libmir provides almost the same functions as std. What is the 
benefit of doing that? Wouldn't it be better not to duplicate std 
stuff?


Erdem


Some benefits:

1. The Mir API can handle elementwise ndslices (multidimensional 
random-access ranges created using the Slice type). The Phobos API 
handles them as plain random-access ranges. For example, Mir's 
`map` applied to a matrix returns a matrix, while Phobos returns a 
lazy range or fails to compile, depending on the lambda (see the 
sketch after this list).


2. Mir iteration API (each, all, any, and others) can handle 
multiple arguments at once without zipping them. It is critical 
for multidimensional performance.


3. Mir `zip` operation supports elementwise access by reference. 
It is critical in some cases.


4. some @nogc/nothrow fixes, some optimizations for move 
semantics, and BetterC code.


5. Mir string lambdas have fused-multiply-add transformations 
enabled by default.


 ...  and more.
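
A minimal sketch of the first point (the version number is 
illustrative):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;

void main()
{
    auto m = [[1, 2], [3, 4]].fuse; // Slice!(int*, 2), a 2x2 matrix

    // Mir's map is applied elementwise and preserves the 2D shape
    auto doubled = m.map!(a => a * 2);
    assert(doubled == [[2, 4], [6, 8]]);

    // std.algorithm.map would treat m as a range of rows and return
    // a plain lazy range, losing the matrix shape.
}
```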

Sure, it would be better not to duplicate similar APIs.

Originally ndslice was in std.experimental. However, it was 
impractical to maintain it in std, and I moved it to a dub 
package. Mir aims not to use std, except maybe std.traits and 
std.meta. We don't care about name conflicts with std.


Ilya


Re: What is the best way to refer to itself when obtaining Substring of a literal?

2020-04-24 Thread 9il via Digitalmars-d-learn

On Saturday, 25 April 2020 at 01:32:54 UTC, 9il wrote:

On Friday, 24 April 2020 at 22:24:34 UTC, Marcone wrote:

I don't want to use lambda.
I don't want create variable.

What is the best way to refer to itself when obtaining 
Substring withou using lambda and without create variable?



example:

writeln("Hello Word!"[x.indexOf(" "), $]);


no way


alias Seq = AliasSeq!("Hello Word!"); // it isn't a variable, 
lambda or enum

writeln(Seq[0][Seq[0].indexOf(" ") .. $]);

looks weird anyway


Re: What is the best way to refer to itself when obtaining Substring of a literal?

2020-04-24 Thread 9il via Digitalmars-d-learn

On Friday, 24 April 2020 at 22:24:34 UTC, Marcone wrote:

I don't want to use lambda.
I don't want create variable.

What is the best way to refer to itself when obtaining 
Substring withou using lambda and without create variable?



example:

writeln("Hello Word!"[x.indexOf(" "), $]);


no way


Re: Multiplying transposed matrices in mir

2020-04-19 Thread 9il via Digitalmars-d-learn

On Monday, 20 April 2020 at 02:42:33 UTC, 9il wrote:

On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:

On Sunday, 19 April 2020 at 20:06:23 UTC, jmh530 wrote:

[...]


Thanks. I somehow missed the whole point of "a * a.transposed" 
not working because "a.transposed" is not allocated.


At the same time, the SliceKind doesn't matter for assignment 
operations:


auto b = a.slice; // copy a to b
b[] *= a.transposed; // works well


BTW for the following operation

auto b = a * a.transposed.slice;

`b` isn't allocated either, because `*` is lazy.


auto b = a.slice; // copy a to b
b[] *= a.transposed; // works well


So, the assignment operations are preferable anyway.


Re: Multiplying transposed matrices in mir

2020-04-19 Thread 9il via Digitalmars-d-learn

On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:

On Sunday, 19 April 2020 at 20:06:23 UTC, jmh530 wrote:

On Sunday, 19 April 2020 at 19:20:28 UTC, p.shkadzko wrote:

[...]


Ah, you're right. I use it in other places where it hasn't 
been an issue.


I can do it with an allocation (below) using the built-in 
syntax, but not sure how do-able it is without an allocation 
(Ilya would know better than me).


/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck;

void main() {
auto a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 
2.1].sliced(3, 3);

auto b = a * a.transposed.slice;
}


Thanks. I somehow missed the whole point of "a * a.transposed" 
not working because "a.transposed" is not allocated.


At the same time, the SliceKind doesn't matter for assignment 
operations:


auto b = a.slice; // copy a to b
b[] *= a.transposed; // works well


Re: mir: How to change iterator?

2020-04-19 Thread 9il via Digitalmars-d-learn

On Sunday, 19 April 2020 at 22:07:30 UTC, jmh530 wrote:

On Thursday, 16 April 2020 at 20:59:36 UTC, jmh530 wrote:

[snip]

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) 
x, Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


This is really what I was looking for (need to make allocation, 
unfortunately)


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y.slice);
}


Using two arguments, Iterator1 and Iterator2, works without allocation:

/+dub.sdl: dependency "mir-algorithm" version="~>3.7.28" +/
import mir.ndslice;

void foo(Iterator1, Iterator2, SliceKind kind)
(Slice!(Iterator1, 1, kind) x, Slice!(Iterator2, 1, kind) y)
{
import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}



Re: mir: How to change iterator?

2020-04-18 Thread 9il via Digitalmars-d-learn

On Sunday, 19 April 2020 at 02:56:30 UTC, 9il wrote:

On Friday, 17 April 2020 at 08:40:36 UTC, WebFreak001 wrote:

On Tuesday, 14 April 2020 at 20:24:05 UTC, jmh530 wrote:

[...]


Use std.algorithm: equal for range comparison with approxEqual as 
your comparator:


assert(equal!approxEqual(y, [2.5, 2.5].sliced(2)));


or mir.algorithm.iteration: each that can work with nd-ranges.


EDIT:

 or mir.algorithm.iteration: equal that can work with nd-ranges.



Re: mir: How to change iterator?

2020-04-18 Thread 9il via Digitalmars-d-learn

On Friday, 17 April 2020 at 08:40:36 UTC, WebFreak001 wrote:

On Tuesday, 14 April 2020 at 20:24:05 UTC, jmh530 wrote:

[...]


Use std.algorithm: equal for range comparison with approxEqual as 
your comparator:


assert(equal!approxEqual(y, [2.5, 2.5].sliced(2)));


or mir.algorithm.iteration: each that can work with nd-ranges.


Re: Linear array to matrix

2020-04-04 Thread 9il via Digitalmars-d-learn
On Saturday, 4 April 2020 at 09:25:14 UTC, Giovanni Di Maria 
wrote:

Hi.
Is there a Built-in function (no code, only a built-in function)
that transform a linear array to a Matrix?

For example:

From

[10,20,30,40,50,60,70,80,90,100,110,120];


To

[
[10,20,30],
[40,50,60],
[70,80,90],
[100,110,120]
];

Thank You very much
Cheers.
Giovanni


You may want to look into the mir-algorithm package, which 
supports rectangular multidimensional arrays like NumPy.


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.27"
+/

// http://mir-algorithm.libmir.org/mir_ndslice.html

import mir.ndslice;

void main()
{
//
auto intArray = [10,20,30,40,50,60,70,80,90,100,110,120];
auto intMatrix = intArray.sliced(4, 3);

static assert(is(typeof(intMatrix) == Slice!(int*, 2)));

// lazy matrix
auto lazyMatrix = iota!int([4, 3]/*shape*/, 10/*start*/, 
10/*stride*/);

assert(intMatrix == lazyMatrix);
//or
foreach(i; 0 .. intMatrix.length)
foreach(j; 0 .. intMatrix.length!1)
assert(intMatrix[i, j] == lazyMatrix[i, j]);

}



Re: string to char* in betterC

2020-03-11 Thread 9il via Digitalmars-d-learn

On Wednesday, 11 March 2020 at 16:10:48 UTC, 9il wrote:

On Wednesday, 11 March 2020 at 16:07:06 UTC, Abby wrote:
What is the proper way to get a char* from a string, to be used 
in C functions? toStringz returns:


/usr/include/dmd/phobos/std/array.d(965,49): Error: TypeInfo 
cannot be used with -betterC


and I think string.ptr is not safe because it's not zero 
terminated. So what should I do? realloc each string with \0?


Thank you for your help


3. You can use mir-algorithm for simplicity and speed


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.format;
import core.stdc.stdio;

void main() {
printf("some_string %s", (stringBuf() << "other_string" << 
"\0" << getData).ptr);

}


stringBuf() uses the stack if the inner string fits into it, so it 
is much faster than malloc/free. However, in this case the C 
function should return pointer ownership to the caller.


Re: string to char* in betterC

2020-03-11 Thread 9il via Digitalmars-d-learn

On Wednesday, 11 March 2020 at 16:07:06 UTC, Abby wrote:
What is the proper way to get a char* from a string, to be used 
in C functions? toStringz returns:


/usr/include/dmd/phobos/std/array.d(965,49): Error: TypeInfo 
cannot be used with -betterC


and I think string.ptr is not safe because it's not zero 
terminated. So what should I do? realloc each string with \0?


Thank you for your help


1. Yes.
2. Compile-time known strings and constants always contain a trailing zero.

static immutable str = "some text"; // contains \0 after the data.
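
A minimal betterC sketch of point 2:

```
// string literals and static immutable strings are zero-terminated,
// so .ptr can be passed to C functions directly
import core.stdc.stdio: printf;

extern(C) void main() // compile with -betterC
{
    static immutable str = "some text"; // contains \0 after the data
    printf("%s\n", str.ptr);
    printf("%s\n", "a literal".ptr); // literals are zero-terminated too
}
```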



Re: How to sort 2D Slice along 0 axis in mir.ndslice ?

2020-03-11 Thread 9il via Digitalmars-d-learn

On Wednesday, 11 March 2020 at 00:24:13 UTC, jmh530 wrote:

On Tuesday, 10 March 2020 at 23:31:55 UTC, p.shkadzko wrote:

[snip]


Below does the same thing as the numpy version.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice.sorting : sort;
import mir.ndslice.topology : byDim;
import mir.ndslice.slice : sliced;

void main() {
auto m = [1, -1, 3, 2, 0, -2, 3, 1].sliced(2, 4);
m.byDim!0.each!(a => a.sort);
}


Almost the same, just fixed import for `each` and a bit polished

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice;
import mir.ndslice.sorting;
import mir.algorithm.iteration: each;

void main() {
auto m = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;
m.byDim!0.each!sort;

import std.stdio;
m.byDim!0.each!writeln;
}



Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread 9il via Digitalmars-d-learn

On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:

Hello again,

Thanks to previous thread on multidimensional arrays, I managed 
to play around with pure D matrix representations and even 
benchmark a little against numpy:


[...]


Matrix multiplication is about cache-friendly blocking.
https://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf

The `mir-blas` package can be used for matrix operations on 
ndslices. Use `cblas` if you want to work with your own matrix 
type.
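
For example, a minimal mir-blas sketch (the version numbers are 
illustrative, and it links against the system BLAS):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
dependency "mir-blas" version="~>1.1.9"
+/
import mir.ndslice;
import mir.blas: gemm;

void main()
{
    auto a = [[1.0, 2], [3, 4]].fuse;
    auto b = [[5.0, 6], [7, 8]].fuse;
    auto c = slice!double(2, 2);

    gemm(1.0, a, b, 0.0, c); // c = 1.0 * (a x b) + 0.0 * c
    assert(c == [[19.0, 22], [43, 50]]);
}
```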


Re: Strange counter-performance in an alternative `decimalLength9` function

2020-02-28 Thread 9il via Digitalmars-d-learn

On Friday, 28 February 2020 at 10:11:23 UTC, Bruce Carneal wrote:

On Friday, 28 February 2020 at 06:50:55 UTC, 9il wrote:
On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. 
wrote:
So after reading the translation of RYU I was interested too 
see if the decimalLength() function can be written to be 
faster, as it cascades up to 8 CMP.


[...]


bsr can be done in one or two CPU operations, quite quick. But 
core.bitop.bsr wouldn't be inlined. Instead, mir-core 
(mir.bitop: ctlz) or the LDC intrinsic llvm_ctlz can be used 
to get code with inlining.


That's surprising.  I just got ldc to inline core.bitop.bsr on 
run.dlang.io using ldc -O3 -mcpu=native. (not sure what the 
target CPU is)


Ah, my bad. It fails to inline with LDC <= 1.14
https://d.godbolt.org/z/iz9p-6

Under what conditions should I be guarding against an inlining 
failure?


Mark it with `pragma(inline, true)`. LDC also has cross-module 
inlining for non-templated functions.
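
For example (the helper below is made up, just to show the 
attribute placement):

```
pragma(inline, true)
int bitLength(uint x)
{
    import core.bitop: bsr;
    // bsr(0) is undefined, so guard the zero case
    return x ? bsr(x) + 1 : 0;
}
```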


Re: Strange counter-performance in an alternative `decimalLength9` function

2020-02-27 Thread 9il via Digitalmars-d-learn

On Wednesday, 26 February 2020 at 00:50:35 UTC, Basile B. wrote:
So after reading the translation of RYU I was interested too 
see if the decimalLength() function can be written to be 
faster, as it cascades up to 8 CMP.


[...]


bsr can be done in one or two CPU operations, quite quick. But 
core.bitop.bsr wouldn't be inlined. Instead, mir-core (mir.bitop: 
ctlz) or the LDC intrinsic llvm_ctlz can be used to get code 
with inlining.


Re: How to sum multidimensional arrays?

2020-02-27 Thread 9il via Digitalmars-d-learn

On Thursday, 27 February 2020 at 23:15:28 UTC, p.shkadzko wrote:

And it works effortlessly!
Sum of two 5000 x 6000 int arrays is just 0.105 sec! (on a 
Windows machine though but with weaker CPU).


I bet using mir.ndslice instead of D arrays would be even 
faster.


Yes, the output of the following benchmark shows that Mir is 43% 
faster.
However, when I checked the assembler output, both Mir and 
Std (really LDC in both cases) generate almost the same, best 
possible loops with AVX instructions for the summation.


On the other hand, Mir is faster because it generates random 
matrices faster and uses uninitialized memory for the summation 
target.


Output:
```
std: 426 ms, 432 μs, and 1 hnsec |10
mir: 297 ms, 694 μs, and 3 hnsecs |10
```

Run command:

`dub --build=release --single --compiler=ldc2 test.d`

Note that the -mcpu=native flag is passed to LDC via the dflags in the dub.sdl header.

Source:
```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
dflags "-mcpu=native" platform="ldc"
+/

int val;

void testStd()
{
pragma(inline, false);
static struct Matrix(T)
{
import std.range;
T[] elems;
int cols;

T[][] to2D()
{
return elems.chunks(cols).array;
}
}

static auto matrixSum(Matrix!int m1, Matrix!int m2)
{
Matrix!int m3;
m3.cols = m1.cols;
m3.elems.length = m1.elems.length;
m3.elems[] = m1.elems[] + m2.elems[];
return m3.to2D;
}

static T[] rndArr(T)(in T max, in int elems)
{
import std.random;
import std.range;
Xorshift rnd;
return generate(() => uniform(0, max, 
rnd)).take(elems).array;

}
auto m1 = Matrix!int(rndArr!int(10, 5000 * 6000), 6000);
auto m2 = Matrix!int(rndArr!int(10, 5000 * 6000), 6000);
auto m3 = matrixSum(m1, m2);
val = m3[$-1][$-1];
}

void testMir()
{
pragma(inline, false);
import mir.ndslice;
import mir.random: threadLocal;
import mir.random.variable: uniformVar;
import mir.random.algorithm: randomSlice;
import mir.random.engine.xorshift;

auto m1 = threadLocal!Xorshift.randomSlice(uniformVar!int(0, 
10), [5000, 6000]);
auto m2 = threadLocal!Xorshift.randomSlice(uniformVar!int(0, 
10), [5000, 6000]);

auto m3 = slice(m1 + m2);
val = m3[$-1][$-1];
}

void main()
{
import std.datetime.stopwatch;
import std.stdio;
import core.memory;
GC.disable;
StopWatch clock;
clock.reset;
clock.start;
testStd;
clock.stop;
writeln("std: ", clock.peek, " |", val);
clock.reset;
clock.start;
testMir;
clock.stop;
writeln("mir: ", clock.peek, " |", val);
}
```


Re: How to sum multidimensional arrays?

2020-02-27 Thread 9il via Digitalmars-d-learn

On Thursday, 27 February 2020 at 16:31:49 UTC, jmh530 wrote:

On Thursday, 27 February 2020 at 15:28:01 UTC, p.shkadzko wrote:
On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko 
wrote:
This works but it does not look very efficient, considering we 
flatten and then call array twice. It will get even worse 
with 3D arrays.


And yes, benchmarks show that summing 2D arrays like in the 
example above is significantly slower than in numpy. But that 
is to be expected... I guess.


D -- sum of two 5000 x 6000 2D arrays: 3.4 sec.
numpy -- sum of two 5000 x 6000 2D arrays: 0.0367800739913946 
sec.


What's the performance of mir like?

The code below seems to work without issue.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
+/
import std.stdio : writeln;
import mir.random : Random, unpredictableSeed;
import mir.random.variable: UniformVariable;
import mir.random.algorithm: randomSlice;

auto rndMatrix(T)(T max, in int rows, in int cols)
{
auto gen = Random(unpredictableSeed);
auto rv = UniformVariable!T(0.0, max);
return randomSlice(gen, rv, rows, cols);
}

void main() {
auto m1 = rndMatrix(10.0, 2, 3);
auto m2 = rndMatrix(10.0, 2, 3);
auto m3 = m1 + m2;

writeln(m1);
writeln(m2);
writeln(m3);
}


The same as numpy for large matrices, because the cost is memory 
access. Mir+LDC will be faster for small matrices because it will 
flatten the inner loop and use SIMD instructions.


A few performance nitpicks for your example, to make the 
benchmark fair against the test:

1. Random (the default) is slower than Xorshift.
2. double is twice as large as int and requires twice as much 
memory, so it would be about twice as slow as int for large matrices.


Check the prev. post, we posted at almost the same time ;)
https://forum.dlang.org/post/izoflhyerkiladngy...@forum.dlang.org


Re: How to sum multidimensional arrays?

2020-02-27 Thread 9il via Digitalmars-d-learn

On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote:

Is there a better way without relying on mir.ndslice?


ndslice Poker Face

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
+/
import mir.ndslice;
import mir.random: threadLocal;
import mir.random.variable: uniformVar;
import mir.random.algorithm: randomSlice;
import mir.random.engine.xorshift;

void main() {
Slice!(int*, 2) m1 = 
threadLocal!Xorshift.randomSlice(uniformVar!int(0, 10), [2, 3]);
Slice!(int*, 2) m2 = 
threadLocal!Xorshift.randomSlice(uniformVar!int(0, 10), [2, 3]);

Slice!(int*, 2) c = slice(m1 + m2);
}



Re: 2D matrix operation (subtraction)

2020-02-22 Thread 9il via Digitalmars-d-learn

On Friday, 21 February 2020 at 13:42:24 UTC, Andre Pany wrote:
Mir is great, and I am actually trying to rewrite some Python 
Pandas DataFrame index logic.


Maybe mir.series [1] can work for you.

Series!(Key*, Value*) is a pair of two 1D ndslices; they are 
sorted according to the first ndslice (the keys). Series has 
`get` methods.


Series!(Key*, Value*, 2) is a pair of a 1D ndslice (keys) and a 2D 
ndslice (the values matrix).


Series has slicing primitives.

Keys correspond to the first dimension.

http://mir-algorithm.libmir.org/mir_series.html#Series
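
A minimal sketch (the version number is illustrative, and `get` 
with a default value is assumed from the description above):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;
import mir.series;

void main()
{
    auto keys   = [1, 3, 5].sliced;
    auto values = [10.0, 30, 50].sliced;
    auto s = series(keys, values); // keys must be sorted

    assert(s.index == [1, 3, 5]);
    assert(s.data == [10.0, 30, 50]);
    assert(s.get(3, double.nan) == 30); // lookup by key with a default
}
```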


Re: How to create meson.build with external libs?

2020-01-15 Thread 9il via Digitalmars-d-learn

On Sunday, 12 January 2020 at 22:00:33 UTC, p.shkadzko wrote:
Ok, I am trying out meson and struggling with the meson.build 
file. I looked up the examples page: 
https://github.com/mesonbuild/meson/tree/master/test%20cases/d 
which has a lot of examples but not the one that shows you how 
to build your project with some external dependency :)


Let's say we have a simple dir "myproj" with "meson.build" in 
it and some source files like "app.d" and "helper_functions.d".


~/myproj
   app.d
   helper_functions.d
   meson.build

"helper_functions.d" uses let's say lubeck library which 
according to 
https://forum.dlang.org/thread/nghoprwkihazjikyh...@forum.dlang.org is supported by meson.


Here is my meson.build:
---
project('demo', 'd',
  version : '0.1',
  default_options : ['warning_level=3']
  )

lubeck = dependency('lubeck', version: '>=1.1.7')
ed = executable('mir_quickstart', 'app.d', dependencies: 
lubeck, install : true)



However, when I try to build it I get the following error:
-
$ meson build
The Meson build system
Version: 0.52.1
Source dir: /home/user/dev/github/demo
Build dir: /home/user/dev/github/demo/build
Build type: native build
Project name: demo
Project version: 0.1
D compiler for the host machine: ldc2 (llvm 1.18.0 "LDC - the 
LLVM D compiler (1.18.0):")

D linker for the host machine: GNU ld.gold 2.33.1
Host machine cpu family: x86_64
Host machine cpu: x86_64
Found pkg-config: /usr/bin/pkg-config (1.6.3)
Found CMake: /usr/bin/cmake (3.16.2)
Run-time dependency lubeck found: NO (tried pkgconfig and cmake)

meson.build:8:0: ERROR: Dependency "lubeck" not found, tried 
pkgconfig and cmake


A full log can be found at 
/home/user/dev/github/demo/build/meson-l

-

What do I need to do in order to build the project with 
"lubeck" dependency in meson?


Seems like you have missed the *.wrap files in the subprojects 
folder. It's a bad idea to install D meson libs into the system, 
as you will probably want to control versions easily.


You need all *.wrap files recursively.
Check this 
https://github.com/kaleidicassociates/lubeck/tree/master/subprojects

and add a wrap file for Lubeck.
You can specify tags instead of the master branches - this is 
recommended for stable work.


Ilya


Re: CI: Why Travis & Circle

2019-11-19 Thread 9il via Digitalmars-d-learn

On Thursday, 14 November 2019 at 13:47:32 UTC, jmh530 wrote:
I'm curious what the typical motivation is for using both 
Travis CI and Circle CI in a project is.


Thanks.


Circle CI is more flexible but with quite limited free resources.


Re: Is there any writeln like functions without GC?

2019-11-13 Thread 9il via Digitalmars-d-learn

On Sunday, 10 November 2019 at 07:57:38 UTC, dangbinghoo wrote:

On Sunday, 3 November 2019 at 05:46:53 UTC, 9il wrote:

On Thursday, 31 October 2019 at 03:56:56 UTC, lili wrote:

Hi:
   why writeln need GC?


See also Mir's @nogc formatting module

https://github.com/libmir/mir-runtime/blob/master/source/mir/format.d


hi, is mir right now fully implemented using betterC?

thanks!
--
binghoo


Nope, but you can write a lot of things that do not require 
linking with DRuntime.
The betterC flag also means that you can't use DRuntime during 
compilation, which makes D generics almost useless.


Re: Is there any writeln like functions without GC?

2019-11-02 Thread 9il via Digitalmars-d-learn

On Thursday, 31 October 2019 at 03:56:56 UTC, lili wrote:

Hi:
   why writeln need GC?


See also Mir's @nogc formatting module

https://github.com/libmir/mir-runtime/blob/master/source/mir/format.d


Re: Ranges to deal with corner cases and "random access"

2019-10-05 Thread 9il via Digitalmars-d-learn

On Saturday, 5 October 2019 at 00:38:06 UTC, Brett wrote:
Typically a lot of algorithms have corner cases such as 
referencing elements that end up out of bounds at the start or 
end (k-c or k+c).


[...]


mir-algorithm package provides lazy padding and concatenation 
routines


http://mir-algorithm.libmir.org/mir_ndslice_concatenation.html

It may be slightly more complex than you expect, as the library 
was created for multidimensional random-access ranges 
(ndslices).


The lazy `Concatenation` structure can be eagerly evaluated into a 
memory-allocated ndslice, or lazily iterated using `opIndex` for 
random access or input-range primitives for sequential access. It 
doesn't provide a backward range primitive, but you are welcome to 
open a PR to add one if required.
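
For instance, a rough 1D sketch of both modes (the version number 
is illustrative, and the eager `.slice` call on a concatenation is 
assumed from the description above):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;
import mir.ndslice.concatenation;

void main()
{
    auto a = [0, 1, 2].sliced;
    auto b = [3, 4].sliced;

    auto lazyCat = concatenation(a, b); // lazy, nothing is copied yet
    assert(lazyCat[3] == 3);            // random access via opIndex

    auto eager = lazyCat.slice;         // eager evaluation into allocated memory
    assert(eager == [0, 1, 2, 3, 4]);
}
```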


Best,
Ilya


Re: do mir modules run in parallell

2019-10-05 Thread 9il via Digitalmars-d-learn

On Saturday, 5 October 2019 at 14:51:03 UTC, David wrote:

On Saturday, 5 October 2019 at 04:38:34 UTC, 9il wrote:

On Friday, 4 October 2019 at 20:32:59 UTC, David wrote:

Hi

I am wondering if MIR modules run in parallel by default or 
if I can enforce it by a compiler flag?


Thanks
David


Hey David,

Do you mean unittests run in parallel or mir algorithms 
themselves run in parallel?


Ilya


Hi Ilya

Thanks for coming back on this and sorry for not being precise. 
I am wondering about the Mir algorithms.


David


mir-blas, mir-lapack, and lubeck parallelism depend on system 
BLAS/LAPACK library (OpenBLAS, Intel MKL, or Accelerate Framework 
for macos).


mir-optim by default single thread but can use TaskPool from D 
standard library as well as user-defined thread pools.


mir-random was created for multithread programs, check the 
documentation for a particular engine. The general idea is that 
each thread has its own engine.


Other libraries are single thread but can be used in multithread 
programs with Phobos threads or other thread libraries.


Best,
Ilya


Re: do mir modules run in parallell

2019-10-04 Thread 9il via Digitalmars-d-learn

On Friday, 4 October 2019 at 20:32:59 UTC, David wrote:

Hi

I am wondering if MIR modules run in parallel by default or if 
I can enforce it by a compiler flag?


Thanks
David


Hey David,

Do you mean unittests run in parallel or mir algorithms 
themselves run in parallel?


Ilya


Re: Finding Max Value of Column in Multi-Dimesional Array

2019-07-04 Thread 9il via Digitalmars-d-learn

On Friday, 5 July 2019 at 00:54:15 UTC, Samir wrote:
Is there a cleaner way of finding the maximum value of say the 
third column in a multi-dimensional array than this?

int[][] p = [[1,2,3,4], [9,0,5,4], [0,6,2,1]];
writeln([p[0][2], p[1][2], p[2][2]].max);

I've tried the following
writeln([0, 1, 2].map!(p[a][2]).max);

but get an "Error: undefined identifier a" error.

I know there doesn't seem to be much of a difference between 
two examples but my real-world array is more complex which is 
why I'm looking for a more scalable option.


Thanks
Samir


Hi Samir,

You may want to take a look into mir-algorithm [1] library.
It contains ndsilce package [2] to work with multidimensional 
data.


The following example can be run online [3]:


--

/+dub.sdl:
dependency "mir-algorithm" version="~>3.4.4"
+/

import mir.algorithm.iteration: reduce;
import mir.ndslice: fuse, map, byDim;
import mir.utility: max;

import std.stdio: writeln;


void main()
{
// create 2D matrix type of Slice!(int*, 2);
auto matrix = [[1,2,3,4], [9,0,5,4], [0,6,2,1]].fuse;

matrix
.byDim!1 // by columns
.map!(c => int.min.reduce!max(c))
.writeln; // [9, 6, 5, 4]
}

--

1. https://github.com/libmir/mir-algorithm
2. http://mir-algorithm.libmir.org/mir_ndslice.html
3. https://run.dlang.io/is/OW6zvF


Re: What external libraries are available

2019-06-05 Thread 9il via Digitalmars-d-learn

On Wednesday, 5 June 2019 at 01:20:46 UTC, Mike Brockus wrote:

If you never heard about Meson before:
https://mesonbuild.com/

Hay there I was just wondering, what is the D equivalent to C++ 
Boost and or Poco libraries?


Just wondering because I would like to start playing with other 
developers' libraries and use them in a collection of examples 
for the library. The examples will be published to a GitHub 
repository as a showcase so the library developers can show a 
link to new potential users that seek examples.


What I mean by Boost or Poco equivalent is a grouping of fully 
functional packages/modules.  It’s ok if you recommend a single 
library and it is a plus if Meson build is apart of the library.


I am aware of Mir Libraries being a group of libraries and 
incorporates the use of Meson, however I normally like to see 
what my options are to better plan my applications.


https://github.com/libmir/mir-algorithm
https://github.com/libmir/mir-core
https://github.com/libmir/mir-optim
https://github.com/libmir/mir-random
https://github.com/libmir/mir-runtime (experimental)

All of them come with Meson and are used in production daily.



Re: Reuse/reset dynamic rectangular array?

2019-05-27 Thread 9il via Digitalmars-d-learn

On Saturday, 25 May 2019 at 16:17:40 UTC, Robert M. Münch wrote:

On 2019-05-25 14:28:24 +, Robert M. Münch said:

How can I reset a rectangualr array without having to loop 
through it?


int[][] myRectData = new int[][](10,10);

myRectData.length = 0;
myRectData[].length = 0;
myRectData[][].length = 0;  

They all give: slice expression .. is not a modifiable lvalue.


My question was imprecise: I want to keep the first dimension 
and only reset the arrays of the 2nd dimension, so that I can 
append stuff again.


myRectData[] = null;


Re: Example of append & rectangualar arrays?

2019-05-27 Thread 9il via Digitalmars-d-learn

On Saturday, 25 May 2019 at 14:17:43 UTC, Robert M. Münch wrote:
Does anyone have an example of using Appender with a rectangular 
array?


Appender!(T[][]) can append rows of type T[]. It does not check 
their lengths; the T[][] is an array of arrays, not a matrix.


To append columns one needs an array of Appenders, 
Appender!(T[])[].


T[][] can be converted to Slice!(T*, 2) (ndslice matrix) using 
the mir.ndslice.fuse module [1].


Then the matrix can be transposed. Zero cost transposition can be 
found in the second example at [1].


`ndarray` function can be used [2] to convert matrix back to an 
array of array.


[1] http://mir-algorithm.libmir.org/mir_ndslice_fuse.html#.fuse
[2] http://mir-algorithm.libmir.org/mir_ndslice_allocation.html#ndarray
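
Putting those steps together, a minimal sketch (the version number 
is illustrative):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.ndslice;
import std.array: appender;

void main()
{
    auto rows = appender!(int[][]);
    rows.put([1, 2, 3]); // append rows of type int[]
    rows.put([4, 5, 6]);

    auto matrix = rows.data.fuse;     // int[][] -> Slice!(int*, 2)
    auto cols = matrix.transposed;    // zero-cost transposition
    auto backToArrays = cols.ndarray; // Slice -> int[][]
    assert(backToArrays == [[1, 4], [2, 5], [3, 6]]);
}
```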



Re: Impose structure on array

2019-05-26 Thread 9il via Digitalmars-d-learn

On Monday, 20 May 2019 at 12:09:02 UTC, Alex wrote:
given some array, is there some way to easily impose structure 
on that array at runtime?


void* data;

auto x = cast(byte[A,B,C])data;

X is then an AxBxC matrix.

I'm having to compute the index myself and it just seems 
unnecessary. A and B are not known at compile time though.


Obviously it should be just as efficient as computing the 
offset manually.


I could probably do this with a struct and override opIndex but 
I'm wondering what is already out there. If it's slower or 
consumes more memory than manual it's not worth it(since my 
code already calcs the index).


Slightly updated version of the prev. example.
```
import mir.ndslice;
byte[] data;
...
// the `.canonical` call is optional
auto tensor = data.sliced(A, B, C).canonical;
...
byte elem = tensor[i, j, k];
auto matrix = tensor[i, j];
auto otherKindOfMatrix = tensor[0..$, i, j];
```

It is as efficient as handwritten code.


Re: Memory management by interfacing C/C++

2019-04-29 Thread 9il via Digitalmars-d-learn
On Saturday, 27 April 2019 at 22:25:58 UTC, Ferhat Kurtulmuş 
wrote:

Hi,

I am wrapping some C++ code for my personal project (opencvd), 
and I am creating so many array pointers at cpp side and 
containing them in structs. I want to learn if I am leaking 
memory like crazy, although I am not facing crashes so far. Is 
GC of D handling things for me? Here is an example:


```
//declaration in d
struct IntVector {
int* val;
int length;
}

// in cpp
typedef struct IntVector {
int* val;
int length;
} IntVector;

// cpp function returning a struct containing an array pointer 
allocated with "new" op.

IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2D sd){
std::vector iv;
sd->getLeadingEdgeList(iv);

int *cintv = new int[iv.size()]; // I don't call delete 
anywhere?


for(size_t i=0; i < iv.size(); i++){
cintv[i] = iv[i];
}
IntVector ret = {cintv, (int)iv.size()};
return ret;
};

// call extern c function in d:
extern (C) IntVector Subdiv2D_GetLeadingEdgeList(Subdiv2d sd);

int[] getLeadingEdgeList(){
IntVector intv = Subdiv2D_GetLeadingEdgeList(this);
int[] ret = intv.val[0..intv.length]; // just D magic. 
Still no delete anywhere!

return ret;
}
```

The question is now: what will happen to "int *cintv", which is 
allocated with the new operator in the C++ code? I have a lot of 
similar code in the project, but I have not encountered any 
problems so far, even in looped video processing. Is the GC of D 
doing the deallocation automagically?

https://github.com/aferust/opencvd


Hello Ferhat,

You can use RCArray!T or Slice!(RCI!T) [1, 2] as common 
thread-safe @nogc types for D and C++ code.

See also integration C++ example [3] and C++ headers [4].


RCArray (fixed length)
[1] http://mir-algorithm.libmir.org/mir_rc_array.html
RCSlice (allows to get subslices)
[2] 
http://mir-algorithm.libmir.org/mir_ndslice_allocation.html#rcslice

C++ integration example
[3] 
https://github.com/libmir/mir-algorithm/tree/master/cpp_example

C++ headers
[4] 
https://github.com/libmir/mir-algorithm/tree/master/include/mir
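
On the D side, a minimal RCArray sketch (the version number and 
the `rcarray` constructor helper are assumptions; see [1]):

```
/+dub.sdl:
dependency "mir-algorithm" version="~>3.9.24"
+/
import mir.rc.array;

void main() // @nogc-compatible
{
    auto arr = rcarray!int(1, 2, 3); // thread-safe reference counting, no GC
    assert(arr.length == 3);

    auto view = arr; // shares ownership, bumps the reference counter
    assert(view[1] == 2);
} // memory is released when the last reference goes out of scope
```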




Re: Unexpected behaviour in associative array

2019-04-20 Thread 9il via Digitalmars-d-learn

On Saturday, 20 April 2019 at 22:16:22 UTC, Arredondo wrote:

On Saturday, 20 April 2019 at 14:24:34 UTC, 9il wrote:

On Friday, 19 April 2019 at 12:37:10 UTC, Arredondo wrote:

Slice!(Contiguous, [2], byte*) payload;


BTW, any reason not to use the new version of ndslice?

For new API it would be:

Slice!(byte*, 2, Contiguous)

or just

Slice!(byte*, 2)


I think this new ndslice API is newer than my code. I might 
consider upgrading though, maybe in the new version 
Slice.field() is const, so I can use my preferred 
implementation of toHash()?


In the latest release you can do

yourSlice.lightConst.field

lightConst converts from const slice to slice of const.

I will add const and immutable field to the next major release.

You can file an issue in case you also need other 
functionality.


Best,
Ilya


Re: Unexpected behaviour in associative array

2019-04-20 Thread 9il via Digitalmars-d-learn

On Friday, 19 April 2019 at 12:37:10 UTC, Arredondo wrote:

Slice!(Contiguous, [2], byte*) payload;


BTW, any reason not to use the new version of ndslice?

For new API it would be:

Slice!(byte*, 2, Contiguous)

or just

Slice!(byte*, 2)



Re: Phobos in BetterC

2019-03-09 Thread 9il via Digitalmars-d-learn

On Saturday, 9 March 2019 at 19:40:27 UTC, Sebastiaan Koppe wrote:

On Saturday, 9 March 2019 at 17:14:37 UTC, 9il wrote:
It was fixed to be used in BetterC. If it still does not work 
you can open an issue and ping me (@9il).


That is awesome. I suppose support for betterC is only from v3 
upwards?


Yes. However, it is hard to test. Also, BetterC works better in 
LDC. I have never seen a real production betterC program compiled 
with DMD.


Re: Phobos in BetterC

2019-03-09 Thread 9il via Digitalmars-d-learn

On Friday, 8 March 2019 at 09:24:25 UTC, Vasyl Teliman wrote:
I've tried to use Mallocator in BetterC but it seems it's not 
available there:


https://run.dlang.io/is/pp3HDq

This produces a linker error.

I'm wondering why Mallocator is not available in this mode (it 
would be intuitive to assume that it's working). Also I would 
like to know what parts of Phobos are available there (e.g. 
std.traits, std.typecons...).


Thanks in advance.


Try this package
https://github.com/dlang-community/stdx-allocator

(v3.0.2)

It was fixed to be used in BetterC. If it still does not work you 
can open an issue and ping me (@9il).
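
For reference, a hedged betterC sketch (assuming the stdx-allocator v3 
package keeps the std.experimental.allocator layout under the 
stdx.allocator package):

```d
/+ dub.sdl:
name "app"
dependency "stdx-allocator" version="~>3.0.2"
dflags "-betterC"
+/
import stdx.allocator.mallocator: Mallocator;

extern(C) int main()
{
    auto buf = Mallocator.instance.allocate(64); // plain malloc under the hood
    if (buf is null) return 1;
    scope(exit) Mallocator.instance.deallocate(buf);
    return 0;
}
```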


Best,
Ilya


Re: mir.ndslice: assign a vector to a matrix row

2018-12-28 Thread 9il via Digitalmars-d-learn

On Friday, 28 December 2018 at 08:09:09 UTC, 9il wrote:

On Thursday, 27 December 2018 at 21:17:48 UTC, David wrote:

On Wednesday, 26 December 2018 at 18:59:25 UTC, 9il wrote:

On Saturday, 15 December 2018 at 19:04:37 UTC, David wrote:

[...]


matrix[2][] = vector;

Or

matrix[2,0..$] = vector;


great many thanks!! Is there any logic why getting a row works 
by


auto row = matrix[0];

but assigning to a row works (only) by the two variant you 
posted?


This gets a slice of the row; it does not copy the data. So 
row[i] is matrix[0, i], the same number in RAM.


auto row = matrix[0];

This case gets a slice of a row, it does not copy the data.

If you wish to copy data you need to use a slice on the right 
side:




EDIT: a slice on the LEFT side


Re: mir.ndslice: assign a vector to a matrix row

2018-12-28 Thread 9il via Digitalmars-d-learn

On Thursday, 27 December 2018 at 21:17:48 UTC, David wrote:

On Wednesday, 26 December 2018 at 18:59:25 UTC, 9il wrote:

On Saturday, 15 December 2018 at 19:04:37 UTC, David wrote:

Hi

I am wondering if it is possible to assign a vector to a row 
of a matrix?


 main.d ==
import mir.ndslice;

void main() {

  auto matrix = slice!double(3, 4);
  matrix[] = 0;
  matrix.diagonal[] = 1;

  auto row = matrix[0];
  row[3] = 4;
  assert(matrix[0, 3] == 4);

  // assign it to rows of a matrix?
  auto vector = sliced!(double)([10, 11, 12, 13]);

  // ??? Here I would like to assign the vector to the last 
(but it is not working)

  // matrix[2] = vector;
}


So I am wondering what the correct way is to do such an 
assignment without looping?


matrix[2][] = vector;

Or

matrix[2,0..$] = vector;


great many thanks!! Is there any logic why getting a row works 
by


auto row = matrix[0];

but assigning to a row works (only) by the two variant you 
posted?


This gets a slice of the row; it does not copy the data. So 
row[i] is matrix[0, i], the same number in RAM.


auto row = matrix[0];

This case gets a slice of a row, it does not copy the data.

If you wish to copy data you need to use a slice on the right 
side:


row[] = matrix[0];

or

auto row = matrix[0].slice; // 'slice' allocates new data

For columns:

col[] = matrix[0 .. $, 0];



Re: mir.ndslice: assign a vector to a matrix row

2018-12-26 Thread 9il via Digitalmars-d-learn

On Saturday, 15 December 2018 at 19:04:37 UTC, David wrote:

Hi

I am wondering if it is possible to assign a vector to a row of 
a matrix?


 main.d ==
import mir.ndslice;

void main() {

  auto matrix = slice!double(3, 4);
  matrix[] = 0;
  matrix.diagonal[] = 1;

  auto row = matrix[0];
  row[3] = 4;
  assert(matrix[0, 3] == 4);

  // assign it to rows of a matrix?
  auto vector = sliced!(double)([10, 11, 12, 13]);

  // ??? Here I would like to assign the vector to the last 
(but it is not working)

  // matrix[2] = vector;
}


So I am wondering what the correct way is to do such an 
assignment without looping?


matrix[2][] = vector;

Or

matrix[2,0..$] = vector;
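
Putting the thread together, a self-contained example (assuming 
mir-algorithm v3):

```d
import mir.ndslice;

void main()
{
    auto matrix = slice!double(3, 4);
    matrix[] = 0;
    matrix.diagonal[] = 1;

    auto vector = [10.0, 11, 12, 13].sliced;

    matrix[2][] = vector;        // copy the vector into the last row
    assert(matrix[2, 3] == 13);

    matrix[2, 0 .. $] = vector;  // equivalent spelling
}
```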


Copyright for reworked Phobos code in Mir

2018-12-26 Thread 9il via Digitalmars-d-learn

Hi folks,

I am slightly confused by the copyright mess in some of the Mir modules. 
As you may know, some of them contain reworked Phobos functions. Plus, 
I am not sure that I understand the meaning of Copyright in a context 
where both Phobos and Mir are Boost licensed.


For example, I am currently creating mir.numeric, which will contain 
findRoot and findLocalMin reworked from Phobos, plus other stuff. 
And findLocalMin in Phobos is my work.


std.numeric contains:
Copyright: Copyright Andrei Alexandrescu 2008 - 2009.

What copyright line should mir.numeric contain?

Another example is that sometimes I write a new implementation 
but use Phobos unittests.


There was an inverse precedent - Mersenne Twister: the Mir version 
was backported to Phobos.


Best,
Ilya


Re: updated mir interface

2018-11-07 Thread 9il via Digitalmars-d-learn

On Wednesday, 7 November 2018 at 19:09:50 UTC, Alex wrote:
Ok... sorry for being persistent, but there is still something 
strange. Having dependencies as you had,


[...]


Well, fixed in v2.1.3


Re: updated mir interface

2018-11-07 Thread 9il via Digitalmars-d-learn

On Wednesday, 7 November 2018 at 14:46:17 UTC, Alex wrote:

On Wednesday, 7 November 2018 at 14:07:32 UTC, 9il wrote:

This is a regression. It is fixed in mir-random v2.1.2.


Thanks. But I have another one:

´´´
import mir.random.algorithm;
import std.experimental.all;

void main()
{
S[] arr;
arr.length = 42;
arr.each!((i, ref el) => el.i = i);
auto res = rne.sample(arr.map!((ref el) => el.i), 1);
}

struct S { size_t i; }
´´´

It does not depend on whether I use (ref el) or just el... And it 
should not depend on that ;)


I have updated template constraints.
http://docs.random.dlang.io/latest/mir_random_algorithm.html#.sample

The problem looks like Phobos' map does not define all the 
required primitives, like popFrontExactly. I suggest using Mir 
instead of Phobos if possible:



https://run.dlang.io/is/NBTfwF

/+dub.sdl:
dependency "mir-algorithm" version="~>3.0.3"
dependency "mir-random" version="~>2.1.1"
+/

import mir.random.algorithm;
import mir.algorithm.iteration;
import mir.ndslice: sliced, iota, map, member;

void main()
{
S[] arr;
arr.length = 42;
// using each
arr.length.iota.each!((i, ref el) => el.i = i)(arr);

// or using each and member
arr.length.iota.each!"b = a"(arr.member!"i");

// or using assign
arr.member!"i"[] = arr.length.iota;

auto res0 = rne.sample(arr.map!((ref el) => el.i), 1);
// or using member
auto res1 = rne.sample(arr.member!"i", 1);

}

struct S { size_t i; }




Re: updated mir interface

2018-11-07 Thread 9il via Digitalmars-d-learn

On Wednesday, 7 November 2018 at 09:33:32 UTC, Alex wrote:

I'm referring to the example
http://docs.random.dlang.io/latest/mir_random_algorithm.html#.sample

[...]


This is a regression. It is fixed in mir-random v2.1.2.


Re: Can I create static c callable library?

2018-09-25 Thread 9il via Digitalmars-d-learn

On Tuesday, 25 September 2018 at 11:03:11 UTC, John Burton wrote:

I need to write a library to statically link into a c program.
Can I write this library in D?
Will I be able to use proper D abilities like the GC? Obviously the 
public interface will need to consist of basic C-callable functions...


If 'main' is a C program, will this work?


Yes, for example https://github.com/libmir/mir-optim
It has a *.cpp example.
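
For the GC part of the question, a hedged sketch of a C-callable D 
interface; the mylib_* names are made up, and the C side has to call the 
init/term functions once so the D runtime (and therefore the GC) is 
running:

```d
import core.runtime : Runtime;

extern(C) int mylib_init() { return Runtime.initialize() ? 0 : 1; } // start D runtime + GC
extern(C) int mylib_term() { return Runtime.terminate()  ? 0 : 1; } // shut it down

extern(C) size_t mylib_count_even(const(int)* values, size_t n)
{
    import std.algorithm.iteration : filter;
    import std.array : array;
    // the GC allocation below is fine because the runtime is initialized
    auto evens = values[0 .. n].filter!(x => x % 2 == 0).array;
    return evens.length;
}
```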


Re: Making mir.random.ndvariable.multivariateNormalVar create bigger data sets than 2

2018-09-10 Thread 9il via Digitalmars-d-learn

On Tuesday, 27 February 2018 at 09:23:49 UTC, kerdemdemir wrote:

I need a classifier in my project.
Since it is, I believe, the easiest to implement, I am trying to 
implement logistic regression.


[...]


Mir Random v1.0.0 has new `range` overloads that can work with 
NdRandomVariable.

Example: https://run.dlang.io/is/jte3gx


Re: How to use math functions in dcompute?

2018-08-27 Thread 9il via Digitalmars-d-learn

On Monday, 27 August 2018 at 09:57:18 UTC, Sobaya wrote:

On Monday, 27 August 2018 at 09:41:34 UTC, 9il wrote:

On Monday, 27 August 2018 at 08:25:14 UTC, Sobaya wrote:

I'm using dcompute(https://github.com/libmir/dcompute).

During development, I need to use math functions such as sqrt in 
a @compute function.


But LDC says "can only call functions from other @compute 
modules in @compute code", so can't I call any math functions 
with dcompute?


Is there any way to use predefined math functions in dcompute?

Thanks.


You may want to try ldc.intrinsics / mir.math.common


Do you mean llvm_sqrt in ldc.intrinsics?

These functions are also not @compute code, so they cause the 
same error.


Ah, interesting


Re: How to use math functions in dcompute?

2018-08-27 Thread 9il via Digitalmars-d-learn

On Monday, 27 August 2018 at 08:25:14 UTC, Sobaya wrote:

I'm using dcompute(https://github.com/libmir/dcompute).

During development, I need to use math functions such as sqrt in 
a @compute function.


But LDC says "can only call functions from other @compute 
modules in @compute code", so can't I call any math functions 
with dcompute?


Is there any way to use predefined math functions in dcompute?

Thanks.


You may want to try ldc.intrinsics / mir.math.common


Re: Error calling geqrs function from lubeck package.

2018-07-05 Thread 9il via Digitalmars-d-learn

On Tuesday, 17 April 2018 at 03:26:25 UTC, Jamie wrote:
I'm attempting to use the lubeck package, as described here 
https://forum.dlang.org/post/axacgiisczwvygyef...@forum.dlang.org


I have lubeck, mir-algorithm, mir-blas, mir-lapack downloaded 
and accessible by the compiler, and I have installed 
liblapack-dev and libblas-dev.


When I attempt to run the example for geqrs, 
https://github.com/libmir/mir-lapack/blob/master/examples/geqrs/source/app.d , I get the following error


undefined reference to 'dgeqrs_'



dgeqrs does not exist in many LAPACK implementations. It is 
quite a new addition to LAPACK. -- Ilya




Re: Help using lubeck on Windows

2018-07-03 Thread 9il via Digitalmars-d-learn

On Wednesday, 4 July 2018 at 00:23:36 UTC, 9il wrote:

On Sunday, 25 February 2018 at 14:26:24 UTC, Arredondo wrote:
On Friday, 23 February 2018 at 18:29:09 UTC, Ilya Yaroshenko 
wrote:

[...]


It is not working my friend. I've been at this for nearly two 
full days now. All the .lib/.a files I have tried for BLAS and 
LAPACK just fail to link, including those from openblas.net.

rdmd insists on:

Error 42: Symbol Undefined _cblas_dgemm
Error 42: Symbol Undefined _cblas_dger
Error: linker exited with status 2

Am I missing something?
Thank you.


CBLAS. Lubeck uses its API. Intel MKL does have it. Just pick the 
required libs (there are multiple variants, plus core and thread 
libs).


OpenBLAS also has the CBLAS API, but it may need to be explicitly 
included in the project. See its command-line config param.
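
A hypothetical dub.sdl fragment for a project that already depends on 
lubeck; the library name and search path depend entirely on how your 
OpenBLAS binary was built (it must export the CBLAS symbols):

```sdl
// illustrative only; adjust names/paths to your OpenBLAS build
libs "libopenblas"                          // import library next to the DLL
lflags "/LIBPATH:C:\\tools\\OpenBLAS\\lib"  // MSVC-style linker search path
```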


Re: Help using lubeck on Windows

2018-07-03 Thread 9il via Digitalmars-d-learn

On Sunday, 25 February 2018 at 14:26:24 UTC, Arredondo wrote:
On Friday, 23 February 2018 at 18:29:09 UTC, Ilya Yaroshenko 
wrote:
openblas.net contains a precompiled openblas library for 
Windows. It may not be optimised well for exactly your CPU, but 
it is fast enough to start. Put the library files into your 
project and add the openblas library to your project's dub 
configuration. The .dll files are dynamic; you also need a 
.lib/.a to link with.


OpenBLAS contains both the cblas and lapack APIs by default.

We definitely need to add an example for Windows.

Best
Ilya


It is not working my friend. I've been at this for nearly two 
full days now. All the .lib/.a files I have tried for BLAS and 
LAPACK just fail to link, including those from openblas.net.

rdmd insists on:

Error 42: Symbol Undefined _cblas_dgemm
Error 42: Symbol Undefined _cblas_dger
Error: linker exited with status 2

Am I missing something?
Thank you.


CBLAS. Lubeck uses its API. Intel MKL does have it. Just pick the 
required libs (there are multiple variants, plus core and thread libs).


Re: Docs for subpackages?

2018-06-13 Thread 9il via Digitalmars-d-learn

On Wednesday, 13 June 2018 at 14:56:10 UTC, 9il wrote:

Hi,

I am trying to build the docs for a large project that is split 
into a dozen sub-packages.

How can I do it using dub without writing my own doc scripts?
--combined does not help here.

Best regards,
Ilya


UPDATE: --combined works, but DDOX fails

std.json.JSONException@std/json.d(1394): Got JSON of type 
undefined, expected string.


4   ddox0x000107c667fa const 
pure @safe void 
vibe.data.json.Json.checkType!(immutable(char)[]).checkType(immutable(char)[]) + 278
5   ddox0x000107c666cb inout 
@property @trusted inout(immutable(char)[]) 
vibe.data.json.Json.get!(immutable(char)[]).get() + 31
6   ddox0x000107dba2bc 
ddox.entities.Declaration 
ddox.parsers.jsonparser.Parser.parseDecl(vibe.data.json.Json, 
ddox.entities.Entity) + 72
7   ddox0x000107dba106 int 
ddox.parsers.jsonparser.Parser.parseDeclList(vibe.data.json.Json, 
ddox.entities.Entity).__foreachbody3(ref vibe.data.json.Json) + 90
8   ddox0x000107fe1fd7 int 
vibe.data.json.Json.opApply(scope int delegate(ref 
vibe.data.json.Json)) + 159
9   ddox0x000107dba09a 
ddox.entities.Declaration[] 
ddox.parsers.jsonparser.Parser.parseDeclList(vibe.data.json.Json, 
ddox.entities.Entity) + 78
10  ddox0x000107dbb024 
ddox.entities.CompositeTypeDeclaration 
ddox.parsers.jsonparser.Parser.parseCompositeDecl(vibe.data.json.Json, ddox.entities.Entity) + 712
11  ddox0x000107dba520 
ddox.entities.Declaration 
ddox.parsers.jsonparser.Parser.parseDecl(vibe.data.json.Json, 
ddox.entities.Entity) + 684
12  ddox0x000107dba106 int 
ddox.parsers.jsonparser.Parser.parseDeclList(vibe.data.json.Json, 
ddox.entities.Entity).__foreachbody3(ref vibe.data.json.Json) + 90
13  ddox0x000107fe1fd7 int 
vibe.data.json.Json.opApply(scope int delegate(ref 
vibe.data.json.Json)) + 159
14  ddox0x000107dba09a 
ddox.entities.Declaration[] 
ddox.parsers.jsonparser.Parser.parseDeclList(vibe.data.json.Json, 
ddox.entities.Entity) + 78
15  ddox0x000107db8ef3 void 
ddox.parsers.jsonparser.Parser.parseModuleDecls(vibe.data.json.Json, ddox.entities.Package) + 583
16  ddox0x000107db8c9f int 
ddox.parsers.jsonparser.parseJsonDocs(vibe.data.json.Json, 
ddox.entities.Package).__foreachbody3(ref vibe.data.json.Json) + 
91
17  ddox0x000107fe1fd7 int 
vibe.data.json.Json.opApply(scope int delegate(ref 
vibe.data.json.Json)) + 159
18  ddox0x000107db8c2d 
ddox.entities.Package 
ddox.parsers.jsonparser.parseJsonDocs(vibe.data.json.Json, 
ddox.entities.Package) + 89
19  ddox0x000107d9d53c 
ddox.entities.Package ddox.main.parseDocFile(immutable(char)[], 
ddox.settings.DdoxSettings) + 168
20  ddox0x000107d9c1ad int 
ddox.main.setupGeneratorInput(ref immutable(char)[][], out 
ddox.settings.GeneratorSettings, out ddox.entities.Package) + 777
21  ddox0x000107d9bab6 int 
ddox.main.cmdGenerateHtml(immutable(char)[][]) + 42
22  ddox0x000107d9b945 int 
ddox.main.ddoxMain(immutable(char)[][]) + 201
23  ddox0x000107c617bf _Dmain 
+ 31
24  ddox0x00010800d037 void 
rt.dmain2._d_run_main(int, char**, extern (C) int 
function(char[][])*).runAll().__lambda1() + 39
25  ddox0x00010800cec7 void 
rt.dmain2._d_run_main(int, char**, extern (C) int 
function(char[][])*).tryExec(scope void delegate()) + 31
26  ddox0x00010800cfa2 void 
rt.dmain2._d_run_main(int, char**, extern (C) int 
function(char[][])*).runAll() + 138
27  ddox0x00010800cec7 void 
rt.dmain2._d_run_main(int, char**, extern (C) int 
function(char[][])*).tryExec(scope void delegate()) + 31
28  ddox0x00010800ce35 
_d_run_main + 485
29  ddox0x000107c617e9 main + 
33
30  libdyld.dylib   0x7fff7d479014 start 
+ 0

31  ??? 0x0004 0x0 + 4


Docs for subpackages?

2018-06-13 Thread 9il via Digitalmars-d-learn

Hi,

I am trying to build the docs for a large project that is split into 
a dozen sub-packages.

How can I do it using dub without writing my own doc scripts?
--combined does not help here.

Best regards,
Ilya


Re: How/where to hack DMD to generate docs for string mixed members.

2018-04-16 Thread 9il via Digitalmars-d-learn

On Sunday, 15 April 2018 at 08:17:21 UTC, Jonathan M Davis wrote:
On Sunday, April 15, 2018 07:59:17 Stefan Koch via 
Digitalmars-d-learn wrote:

On Sunday, 15 April 2018 at 05:20:31 UTC, 9il wrote:
> Hey,
>
> How/where to hack DMD to generate docs for string mixed 
> members?

>
> struct S
> {
>
> mixin("
>
>  ///
>  auto bar() {}
>
> ");
>
> }
>
> Best regards,
> Ilya Yaroshenko

hmm you should be able to see docs for string mixins, if not. 
try using -vcg-ast and try to run ddoc on the cg file


AFAIK, it's never worked to see any ddoc from string mixins. 
Certainly, I'm quite sure that it didn't used to work, so if it 
does now, something changed within the last couple of years.


The closest that I'm aware of is that putting /// on a template 
mixin works so that you can do something like


class MyException : Exception
{
///
mixin basicExceptionCtors;
}

and have the ddoc within the template mixin show up.

- Jonathan M Davis


Mixin templates work. The problem is that the use case for the 
library (you know it) I am working on looks like this:

--

struct S
{
mixin(WithGetters!("private", // or WithGettersAndConstructor
Date, "startDate",
Date, "endDate",
DayCount, "dayCount",
double, "yearFraction",
double, "spread",
Calculation, "calculation",
));
}

--

It should define members and getters and maybe one or more 
constructors.
So mixin strings will be here anyway either in the struct or in a 
mixin template.




Re: How/where to hack DMD to generate docs for string mixed members.

2018-04-16 Thread 9il via Digitalmars-d-learn

On Sunday, 15 April 2018 at 07:59:17 UTC, Stefan Koch wrote:

On Sunday, 15 April 2018 at 05:20:31 UTC, 9il wrote:

Hey,

How/where to hack DMD to generate docs for string mixed 
members?


struct S
{
mixin("
 ///
 auto bar() {}
");
}

Best regards,
Ilya Yaroshenko


hmm you should be able to see docs for string mixins, if not.
try using -vcg-ast and try to run ddoc on the cg file


-vcg-ast does not help:

---
import object;
struct S
{
mixin("\x0a ///\x0a auto bar() {}\x0a");
}
RTInfo!(S)
{
enum typeof(null) RTInfo = null;

}
---

Is it a bug, or is it possible to improve it?


How/where to hack DMD to generate docs for string mixed members.

2018-04-14 Thread 9il via Digitalmars-d-learn

Hey,

How/where to hack DMD to generate docs for string mixed members?

struct S
{
mixin("
 ///
 auto bar() {}
");
}

Best regards,
Ilya Yaroshenko



Re: .sort vs sort(): std.algorithm not up to the task?

2017-06-08 Thread 9il via Digitalmars-d-learn

On Thursday, 8 June 2017 at 01:57:47 UTC, Andrew Edwards wrote:
Ranges may be finite or infinite but, while the destination may 
be unreachable, we can definitely tell how far we've traveled. 
So why doesn't this work?


import std.traits;
import std.range;

void main()
{
string[string] aa;

// what others have referred to as
// standard sort works but is deprecated
//auto keys = aa.keys.sort;

// Error: cannot infer argument types, expected 1 argument, 
not 2

import std.algorithm: sort;
auto keys = aa.keys.sort();

// this works but why should I have to?
//import std.array: array;
//auto keys = aa.keys.sort().array;

foreach (i, v; keys){}
}

If I hand you a chihuahua for grooming, why am I getting back a 
pit bull? I simply want a groomed chihuahua. Why do I need to 
consult a wizard to get back a groomed chihuahua?


You may want to slice the chihuahua first, pass it to 
mir.ndslice.sort [1], and get back your groomed, sliced chihuahua.


[1] http://docs.algorithm.dlang.io/latest/mir_ndslice_sorting.html#.sort.sort
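
In case a concrete snippet helps, a small sketch of the ndslice route 
(sort works in place and returns the same slice, so no extra array is 
allocated):

```d
import mir.ndslice.slice: sliced;
import mir.ndslice.sorting: sort;

void main()
{
    string[string] aa = ["b": "2", "a": "1", "c": "3"];

    // aa.keys allocates a string[]; sliced wraps it without copying,
    // and sort orders it in place.
    auto keys = aa.keys.sliced.sort;

    foreach (k; keys)   // a 1D slice iterates like a range
        assert(k in aa);
}
```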


Re: Guide - Migrating from std.experimental.ndslice to mir-algorithm

2017-06-02 Thread 9il via Digitalmars-d-learn

On Friday, 2 June 2017 at 16:08:20 UTC, Zz wrote:

Hi,

Just tried migrating from std.experimental.ndslice to 
mir-algorithm.


Is there a guide on how migrate old code?

I used the following imports before and used them with ndslice.

import std.experimental.ndslice;
import std.algorithm : each, max, sort;
import std.range : iota, repeat;

simplified example of how it was used.
auto a = cr.iota.sliced(r, c);
auto b = a.reshape(c, r).transposed!1;

auto c = a.reversed!1;
auto d = a.reshape(c, r).transposed!1.reversed!1;

auto f = new int[cr].sliced(r, c);
auto h = f.transposed(1);

How can I do the following in mir-algorithm?

Note: I will be going through the documentation.

Zz


Hello Zz,

std.experimental.ndslice -> mir.ndslice

std.range : iota, repeat -> mir.ndslice.topology: iota, repeat;
std.algorithm : each; -> mir.ndslice.algorithm: each;
std.algorithm : max; -> mir.utility: max;
std.algorithm : sort; -> mir.ndslice.sorting: sort;


Note that Mir functions have different semantics compared with 
Phobos!
For example, each iterates over the deepest elements, so it should 
be combined with `pack` to iterate over rows instead of elements.


Ndslices work with Phobos functions, but it is suggested to use 
the Mir analogs when they exist.


// Mir's iota! It is already 2D ndslice :-)
auto a = [r, c].iota;

auto b = a
   // returns flattened iota, a has Contiguous kind,
   // so the result type would be equal to `iota(r*c)`
.flattened
// convert 1D iota ndslice to 2D iota ndslice
.sliced(c, r)
// It is required to use transposed
// Convert ndslice kind from Contiguous to Universal.
.universal
// Transpose the Universal ndslice
.transposed;

auto c = a.universal.reversed!1;
auto d = a.flattened.sliced(c, 
r).universal.transposed!1.reversed!1; // see also `rotated`


auto f = slice!int(c, r); // new int[cr].sliced(r, c); works too.
auto h = f.universal.transposed(1);

---
Mir ndslices have three kinds: 
http://docs.algorithm.dlang.io/latest/mir_ndslice_slice.html#.SliceKind


If you have any questions feel free to ask at the Gitter:
https://gitter.im/libmir/public

Best,
Ilya




Re: templatized delegate

2017-05-23 Thread 9il via Digitalmars-d-learn

On Tuesday, 23 May 2017 at 10:30:56 UTC, Alex wrote:

On Monday, 22 May 2017 at 21:44:17 UTC, ag0aep6g wrote:
With that kind of variadics, you're not dealing with a 
template. A (run-time) variadic delegate is an actual 
delegate, i.e. a value that can be passed around. But the 
variadic stuff is a bit weird to use, and probably affects 
performance.


By the way, I'm not even sure, if variadics work in my case. I 
have a strange struct of a random generator, which cannot be 
copied, and I have no idea how to pass it to a variadic 
function:


import std.stdio;
import mir.random;

void main()
{
Random rndGen = Random(unpredictableSeed);
fun(rndGen);
}

void fun(...)
{

}

Yields "... is not copyable because it is annotated with 
@disable" :)


1. Pass its pointer
2. Use variadic template with auto ref:

```
void foo(T...)(auto ref T tup)
{
}
```
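
For the non-copyable generator from the question, the call site could 
look like this sketch; args[0] binds by reference, so the @disable'd copy 
never happens:

```d
import mir.random;

auto fun(T...)(auto ref T args)
{
    // args[0] is the caller's generator, by reference
    return args[0](); // draw one raw number from the engine
}

void main()
{
    Random rndGen = Random(unpredictableSeed);
    auto n = fun(rndGen);
}
```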



Re: "Rolling Hash computation" or "Content Defined Chunking"

2017-05-09 Thread 9il via Digitalmars-d-learn

On Tuesday, 9 May 2017 at 18:17:45 UTC, notna wrote:


I hoped there may already be something in Mir or Weka.io or 
somewhere else... Will read the Golang, C and C++ source and 
see if my Dlang is good enough for ranges and the like magic...


Hello notna,

You may want to open a PR to mir-algorithm. I will help to make 
the code idiomatic.


https://github.com/libmir/mir-algorithm

Thanks,
Ilya



Re: ndslice summary please

2017-04-13 Thread 9il via Digitalmars-d-learn
On Thursday, 13 April 2017 at 15:22:46 UTC, Martin Tschierschke 
wrote:

On Thursday, 13 April 2017 at 08:47:16 UTC, Ali Çehreli wrote:
I haven't played with ndslice nor followed its deprecation 
discussions. Could someone summarize it for us please. Also, 
is it still used outside Phobos or is Ilya or someone else 
rewriting it?


Ali


We should additionally mention that the naming of ndslice is 
derived from ndarray, which comes from N-dimensional Array. No one 
searching for N-dimensional Array OR Matrix will easily find 
ndslice without this info. Just try a google search: "dlang n 
dimesional array"

Regards mt.


... plus link in the spec
https://github.com/dlang/dlang.org/pull/1634


Re: ndslice summary please

2017-04-13 Thread 9il via Digitalmars-d-learn
On Thursday, 13 April 2017 at 15:22:46 UTC, Martin Tschierschke 
wrote:

On Thursday, 13 April 2017 at 08:47:16 UTC, Ali Çehreli wrote:
I haven't played with ndslice nor followed its deprecation 
discussions. Could someone summarize it for us please. Also, 
is it still used outside Phobos or is Ilya or someone else 
rewriting it?


Ali


We should additionally mention that the naming of ndslice is 
derived from ndarray, which comes from N-dimensional Array. No one 
searching for N-dimensional Array OR Matrix will easily find 
ndslice without this info. Just try a google search: "dlang n 
dimesional array"

Regards mt.


The first link for me in incognito mode is:
https://wiki.dlang.org/Dense_multidimensional_arrays

It contained an example for std.experimental.ndslice. But thanks for 
the idea. It is reworked for Mir now.


Thanks,
Ilya


Re: ndslice summary please

2017-04-13 Thread 9il via Digitalmars-d-learn

On Thursday, 13 April 2017 at 08:47:16 UTC, Ali Çehreli wrote:
I haven't played with ndslice nor followed its deprecation 
discussions. Could someone summarize it for us please. Also, is 
it still used outside Phobos or is Ilya or someone else 
rewriting it?


Ali


The reasons to use mir-algorithm instead of std.range, 
std.algorithm, std.functional (when applicable):


1. It allows you to easily construct one- and multidimensional random 
access ranges. You may compare the `bitwise` implementation in 
mir-algorithm and Phobos: Mir's version is a few times smaller and does 
not have Phobos bugs like the non-mutable `front`. See also `bitpack`.

2. Mir devs care a lot about BetterC.
3. Slice is a universal, full-featured, multidimensional random 
access range. All RARs can be expressed through the generic Slice 
struct.

4. It is faster to compile and generates less template bloat.
For example:

slice.map!fun1.map!fun2

is the same as

slice.map!(pipe!(fun1, fun2))

`map` and `pipe` are from mir-algorithm.
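
A small sketch of point 4 (assuming the mir.functional / 
mir.ndslice.topology module layout): composing with `pipe` instantiates 
one `map` instead of two.

```d
import mir.functional: pipe;
import mir.ndslice.topology: iota, map;

void main()
{
    static long addOne(long x) { return x + 1; }
    static long twice(long x)  { return x * 2; }

    // one template instance doing both steps, instead of map!addOne.map!twice
    auto a = iota(4).map!(pipe!(addOne, twice));
    assert(a == [2, 4, 6, 8]);
}
```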



Re: ndslice summary please

2017-04-13 Thread 9il via Digitalmars-d-learn

On Thursday, 13 April 2017 at 08:47:16 UTC, Ali Çehreli wrote:
I haven't played with ndslice nor followed its deprecation 
discussions. Could someone summarize it for us please. Also, is 
it still used outside Phobos or is Ilya or someone else 
rewriting it?


Ali


Hello Ali,

ndslice was removed from Phobos because it is too hard to maintain 
Phobos and Mir at the same time. It is better to have dub 
packages instead of a big Phobos, IMHO.


ndslice was completely rewritten and extended in the mir-algorithm 
package [0, 1] since ndslice was deprecated in Phobos. Its API 
is stable enough and includes all kinds of tensors. The new ndslice 
is used in two of Tamediadigital's projects [3, 4]. See also its 
README for more details.


The old ndslice (like the one in Phobos) is located in the parent Mir 
package [2].

The last Mir version with old ndslice is v0.22.1.

[0] http://docs.algorithm.dlang.io
[1] https://github.com/libmir/mir-algorithm
[2] https://github.com/libmir/mir
[3] https://github.com/tamediadigital/lincount
[4] https://github.com/tamediadigital/hll-d

If you have any questions about ndslice I would be happy to 
answer.


Best regards,
Ilya



Re: Why doesn't this chain of ndslices work?

2016-05-15 Thread 9il via Digitalmars-d-learn

On Saturday, 14 May 2016 at 21:59:48 UTC, Stiff wrote:

Here's the code that doesn't compile:

import std.stdio, std.experimental.ndslice, std.range, 
std.algorithm;


[...]


Coming soon 
https://github.com/libmir/mir/issues/213#issuecomment-219271447 
--Ilya


Re: foreach(i,ref val; ndim_arr)??

2016-05-10 Thread 9il via Digitalmars-d-learn

On Tuesday, 10 May 2016 at 15:18:50 UTC, ZombineDev wrote:

On Tuesday, 10 May 2016 at 10:21:30 UTC, ZombineDev wrote:

(You can try it at: https://dpaste.dzfl.pl/c0327f067fca)

import std.array : array;
import std.experimental.ndslice : byElement, indexSlice, sliced;
import std.range : iota, lockstep, zip;
import std.stdio : writefln;

void main()
{
// needs .array for ref (lvalue) access
// (iota offers only rvalues)
auto slice = iota(2 * 3 * 4).array.sliced(2, 3, 4);

auto indexed_range = lockstep(
slice.shape.indexSlice.byElement(),
slice.byElement()
);

writefln("%s", slice);

foreach (idx, ref elem; indexed_range)
writefln("Element at %s = %s", idx, ++elem);
}



The code above is slow because it calculates the index using multiple 
assembler ops.


Following would be faster:

for(auto elems = slice.byElement; !elems.empty; elems.popFront)
{
size_t[2] index = elems.index;
elems.front = index[0] * 10 + index[1] * 3;
}

For really fast code just use multiple foreach loops:

foreach(i; 0 .. matrix.length)
{
auto row = matrix[i]; // optional
foreach(j; 0 .. matrix.length!1)
{
...
}
}


Re: foreach(i,ref val; ndim_arr)??

2016-05-10 Thread 9il via Digitalmars-d-learn

On Monday, 9 May 2016 at 18:50:32 UTC, Jay Norwood wrote:
I noticed some discussion of Cartesian indexes in Julia, where 
the index is a tuple, along with some discussion of optimizing 
the index created for cache efficiency.  I could find 
foreach(ref val, m.byElement()), but didn't find an example 
that returned a tuple index.   Is that supported?


http://julialang.org/blog/2016/02/iteration

http://julialang.org/blog/2016/03/arrays-iteration


This example is form documentation:
http://dlang.org/phobos/std_experimental_ndslice_selection.html#.byElement

import std.experimental.ndslice.slice;
auto slice = new long[20].sliced(5, 4);

for(auto elems = slice.byElement; !elems.empty; elems.popFront)
{
size_t[2] index = elems.index;
elems.front = index[0] * 10 + index[1] * 3;
}
assert(slice ==
[[ 0,  3,  6,  9],
 [10, 13, 16, 19],
 [20, 23, 26, 29],
 [30, 33, 36, 39],
 [40, 43, 46, 49]]);

ndslice does not have opApply, because it overrides range 
primitives.
Iteration with byElement may be a little bit slower than 
iteration with common foreach loops. A method, say, `forElement`, 
for `foreach` loops may be added, though.


Best regards,
Ilya



Re: Issue with 2.071: Regression or valid error?

2016-04-08 Thread 9il via Digitalmars-d-learn
On Thursday, 7 April 2016 at 15:55:16 UTC, Steven Schveighoffer 
wrote:

On 4/6/16 11:10 AM, Andre wrote:

[...]


Just FYI, you don't need a semicolon there.


[...]


Wow, totally agree with you. Compiler shouldn't make you jump 
through this hoop:


void foo(Cat cat)
{
   Animal a = cat;
   a.create();
}

Please file a bug report, not sure why this happened.

-Steve


Why is this a bug? Private methods are not virtual, are they? 
--Ilya