Re: alias this - am I using it wrong?

2021-08-25 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 25 August 2021 at 12:23:06 UTC, Adam D Ruppe wrote:

[snip]


That's a lot about alias this that I didn't know. Thanks.


Re: How to get element type of a slice?

2021-08-18 Thread jmh530 via Digitalmars-d-learn
On Tuesday, 17 August 2021 at 14:40:20 UTC, Ferhat Kurtulmuş 
wrote:

[snip]

Very informative, thanks. My code is lying here[1]. I want my 
struct to accept 2d static arrays, random access ranges, and 
"std.container.Array". I could achieve it with its present 
form, and I will probably slightly modify it based on your 
comments.


[1]: 
https://github.com/aferust/earcut-d/blob/master/source/earcutd.d#L34


If it would only accept dynamic arrays, you could use something 
like below


```d
import std.traits: isDynamicArray;

template DynamicArrayOf(T : U[], U)
    if (isDynamicArray!T)
{
    alias DynamicArrayOf = U;
}

struct Point {}

void main()
{
    static assert(is(DynamicArrayOf!(int[]) == int));
    static assert(is(DynamicArrayOf!(Point[]) == Point));
}
```
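Worth noting: for covering general random access ranges as well as dynamic arrays, Phobos already ships an element-type extractor. A small sketch using `std.range.primitives.ElementType`:

```d
import std.range.primitives: ElementType;

struct Point {}

void main()
{
    // ElementType works for dynamic arrays and for input ranges alike
    static assert(is(ElementType!(int[]) == int));
    static assert(is(ElementType!(Point[]) == Point));
}
```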


Re: equivalent of std.functional.partial for templates?

2021-08-11 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 11 August 2021 at 14:08:59 UTC, Paul Backus wrote:

[snip]

Should have read further -- this does not work with template 
functions due to [issue 1807][1]. My mistake.


[1]: https://issues.dlang.org/show_bug.cgi?id=1807


Looks like that strengthens the case for moving forward with 
DIP1023 (or something more general).


Re: Integer programming in D?

2021-07-19 Thread jmh530 via Digitalmars-d-learn

On Monday, 19 July 2021 at 12:39:41 UTC, Arredondo wrote:
Is there an integer linear programming/discrete optimization 
library for D? an equivalent to the JuMP library for Julia for 
instance. Doesn't have to be too big, I really only need to 
solve a few smallish binary linear systems, but taking a quick 
look at code.dlang I did not immediately find anything.


Cheers!
Arredondo.


glpk can handle mixed integer programming problems. Since it is a 
C library, it would be pretty easy to call from D, but I don't see 
a binding or anything available. I would try to call it with dpp 
and, if that doesn't work, then something else like dstep.


There is probably scope for building a wrapper on top of it that 
makes for a more D-like interface.
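For reference, a hand-written `extern(C)` binding for a few glpk entry points is only a handful of lines. This is an untested sketch: the declarations are assumed to match glpk's `glpk.h`, and you would link with `-L-lglpk`:

```d
// Minimal hand-written declarations for a few glpk C functions
// (assumed to match glpk.h; link with -L-lglpk).
extern (C) nothrow @nogc
{
    struct glp_prob; // opaque handle

    glp_prob* glp_create_prob();
    void glp_delete_prob(glp_prob* P);
    int glp_add_cols(glp_prob* P, int ncs);
    void glp_set_col_kind(glp_prob* P, int j, int kind); // integer/binary columns
    int glp_intopt(glp_prob* P, const(void)* parm);      // MIP solver entry point
}

void main()
{
    auto prob = glp_create_prob();
    scope (exit) glp_delete_prob(prob);
    // ... populate rows/columns/objective, then call glp_intopt ...
}
```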


Re: assert(false) and GC

2021-07-08 Thread jmh530 via Digitalmars-d-learn

On Thursday, 8 July 2021 at 18:11:50 UTC, DLearner wrote:

Hi

Please confirm that:
`
   assert(false, __FUNCTION__ ~ "This is an error message");
`

Will _not_ trigger GC issues, as the text is entirely known at 
compile time.


Best regards


Consider the example below: only z generates an error. x works 
because concatenating two compile-time constants is resolved at 
compile time (string literal concatenation, a feature inherited 
from C [1]), while z concatenates a runtime value and so must 
allocate.


```d
@nogc void main() {
    string x = __FUNCTION__ ~ "This is an error message"; // folded at compile time, OK
    string y = "This is an error message";
    string z = __FUNCTION__ ~ y; // error: runtime concatenation allocates with the GC
}
```

[1] 
https://en.wikipedia.org/wiki/String_literal#String_literal_concatenation
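One way to keep such messages `@nogc`-safe even when building them from several pieces is to force the concatenation to happen at compile time, e.g. with an `enum` manifest constant. A small sketch:

```d
@nogc void main()
{
    // Both operands are compile-time constants, so the concatenation
    // is folded into a single literal; nothing is allocated at run time.
    enum string msg = __FUNCTION__ ~ ": this is an error message";
    assert(msg.length > 0);
}
```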


Re: float price; if (price == float.nan) { // initialized } else { // uninitialized } ... valid ?

2021-06-30 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 30 June 2021 at 04:17:19 UTC, someone wrote:
On Wednesday, 30 June 2021 at 03:55:05 UTC, Vladimir Panteleev 
wrote:



If you want to give any type a "null" value, you could use

[`std.typecons.Nullable`](https://dlang.org/library/std/typecons/nullable.html).

At LEAST for some things with currency types like prices which 
cannot be zero because 0 makes no sense for a price:

[snip]


You've never given something away for free?


Re: Financial Library

2021-06-13 Thread jmh530 via Digitalmars-d-learn

On Sunday, 13 June 2021 at 22:32:16 UTC, Bastiaan Veelo wrote:

On Sunday, 13 June 2021 at 12:46:29 UTC, Financial Wiz wrote:
What are some of the best Financial Libraries for D? I would 
like to be able to aggregate as much accurate information as 
possible.


Thanks.


I am not into financials, but these libs show up in a search: 
https://code.dlang.org/search?q=Decimal. Perhaps you know some 
other relevant terms to search for.


[snip]


Assignment contracts also make it more difficult for people who 
work for financial firms to work on open-source projects that are 
directly related to finance.




Re: dual-context deprecation

2021-05-17 Thread jmh530 via Digitalmars-d-learn
On Monday, 17 May 2021 at 14:35:51 UTC, Steven Schveighoffer 
wrote:

[snip]
The feature is deprecated in its current form. The issue as I 
understand it (i.e. very little) is that compilers other than 
DMD could not use this same way to implement dual contexts, and 
so they could not have the feature. This means that valid code 
in DMD would not compile on GDC or LDC.


The way forward was to deprecate the mechanism used to 
implement it for DMD, and at some point tackle it in a 
backend-agnostic way.


Personally, I don't know why we can't fix it so that it's 
portable, but I understand so little about compilers that I've 
pretty much stayed out of it. The feature is very much needed.


-Steve


That's a good summary. Thanks.


Re: dual-context deprecation

2021-05-17 Thread jmh530 via Digitalmars-d-learn

On Monday, 17 May 2021 at 13:51:32 UTC, Paul Backus wrote:

[snip]

See this issue for context:

https://issues.dlang.org/show_bug.cgi?id=5710


Thanks. Lots of details there that I don't follow all of.

I mentioned in the deprecation PR [1] that it was not listed in 
the list of deprecated features.


[1] https://github.com/dlang/dmd/pull/9702


dual-context deprecation

2021-05-17 Thread jmh530 via Digitalmars-d-learn
The code below (simplified from my actual problem) generates a 
warning that member function b "requires a dual-context, which is 
deprecated".


However when I look at the list of deprecated features [1], I'm 
not seeing which one this is referring to. Is it a valid 
deprecation?


I could only find this [2] reference to dual contexts, which 
suggests that the problem relates to passing aliases into member 
functions. Moving it to a free function fixes the problem. 
Alternately, I could make the alias part of Foo's type. My use 
case is just a little easier structured like this, but I get 
that there are workarounds.


My bigger question is why it isn't listed more than anything; 
i.e., should I file something in Bugzilla?


```d
struct Foo
{
    double a;

    this(double x)
    {
        this.a = x;
    }

    double b(alias inverse)()
    {
        return inverse(a);
    }
}

void main()
{
    auto foo = Foo(2.0);
    auto x = foo.b!(a => (10.0 ^^ a))();
}
```
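One workaround that avoids the dual context entirely is to move `b` out of the struct and take `Foo` as a parameter; UFCS keeps the call site unchanged. A sketch:

```d
struct Foo
{
    double a;
}

// A free function has only a single context (the alias),
// so no dual-context deprecation warning is triggered.
double b(alias inverse)(Foo f)
{
    return inverse(f.a);
}

void main()
{
    auto foo = Foo(2.0);
    auto x = foo.b!(a => 10.0 ^^ a)(); // UFCS: reads the same as before
    assert(x == 100.0);
}
```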

[1] https://dlang.org/deprecate.html
[2] 
https://forum.dlang.org/thread/mkeumwltwiimkrelg...@forum.dlang.org


Re: Since dmd 2.096.0: import `x.t` is used as a type

2021-05-03 Thread jmh530 via Digitalmars-d-learn

On Sunday, 2 May 2021 at 18:36:25 UTC, Basile B. wrote:

[snip]

BTW during the PR review the problem you encounter [was 
anticipated](https://github.com/dlang/dmd/pull/12178#issuecomment-773886263) so I guess you're stuck with [the author's answer](https://github.com/dlang/dmd/pull/12178#issuecomment-773902749), i.e. "this worked because of a special case".

[snip]


"this worked because of a special case" or "I'm sure they won't 
mind a bit of change" are not exactly the most fulfilling 
arguments to me. Don't we have transition switches for a reason?


The other solution is to keep the special case and then add 
additional logic to handle when the thing being looked up is not 
public (or whatever).


Re: mir - Help on how to transform multidimentional arrays.

2021-04-29 Thread jmh530 via Digitalmars-d-learn

On Thursday, 29 April 2021 at 15:56:48 UTC, jmh530 wrote:



What you're basically asking for the first one is to convert 
from row major to column major. There doesn't seem to be a 
specific function for that, but you can piece it together. The 
second one is just applying allReversed to the result of that. 
So we have:



[snip]

Got an extra `universal` import there.


Re: mir - Help on how to transform multidimentional arrays.

2021-04-29 Thread jmh530 via Digitalmars-d-learn

On Thursday, 29 April 2021 at 15:26:15 UTC, Newbie wrote:

[snip]

Forgot to add the the first array was created using the 
following code.

auto base = iota(2, 5, 3);


What you're basically asking for the first one is to convert from 
row major to column major. There doesn't seem to be a specific 
function for that, but you can piece it together. The second one 
is just applying allReversed to the result of that. So we have:


```d
/+dub.sdl:
dependency "mir-algorithm" version="~>3.10.25"
+/

import std.stdio: writeln;
import mir.ndslice.topology: iota, flattened, reshape, universal;
import mir.ndslice.dynamic: transposed, allReversed;

void main() {
    auto x = iota(2, 5, 3);
    int err;
    auto y = x.flattened.reshape([3, 5, 2], err).transposed!(1, 2);

    auto z = y.allReversed;

    writeln(x);
    writeln(y);
    writeln(z);
}
```


Re: DIP1000 and immutable

2021-04-27 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 27 April 2021 at 14:44:48 UTC, Adam D. Ruppe wrote:

On Tuesday, 27 April 2021 at 14:28:12 UTC, jmh530 wrote:

However, should it ever matter if you escape an immutable?


Your example is a pretty clear case of use-after-free if gloin 
actually did escape the reference and kept it after main 
returned.


I tried basically the same thing in Rust and it doesn't 
generate errors (their borrow checker should be assuming scope 
by default).


That means it treats gloin as if it is scope, so it isn't the 
same as your D code since the gloin there is NOT borrowing.


Hmmm, good points. Thanks.


DIP1000 and immutable

2021-04-27 Thread jmh530 via Digitalmars-d-learn

What is the motivation for DIP1000 also applying to immutable?

For instance, in the code (compiled with -dip1000), adapted from 
the spec [1], you get the same errors with immutable function 
parameters as you would with mutable ones. However, should it 
ever matter if you escape an immutable?


```d
@safe:

void thorin(scope immutable(int)*) {}
void gloin(immutable(int)*) {}

immutable(int)* balin(scope immutable(int)* q)
{
    thorin(q);
    gloin(q); // error, gloin() escapes q
    return q; // error, cannot return 'scope' q
}

void main() {
    immutable(int) x = 2;
    immutable(int)* ptrx = &x;
    immutable(int)* ptrz = balin(ptrx);
}
```

I tried basically the same thing in Rust and it doesn't generate 
errors (their borrow checker should be assuming scope by default).


```rust
#![allow(unused_variables)]

fn gloin(x: &i32) {
}

fn ballin(x: &i32) -> &i32 {
    gloin(x);
    return x;
}

fn main() {
    let x = 2;
    let ptrx = &x;
    let ptrz = ballin(ptrx);
}
```

[1] https://dlang.org/spec/function.html#scope-parameters


Re: How to delete dynamic array ?

2021-03-17 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 17 March 2021 at 16:32:28 UTC, Ali Çehreli wrote:

On 3/17/21 3:54 AM, jmh530 wrote:

On Tuesday, 16 March 2021 at 23:49:00 UTC, H. S. Teoh wrote:



double[] data;
data = cast(double[]) malloc(n * double.sizeof)[0 .. n];



This is one of those things that is not explained well enough.


I have something here:


http://ddili.org/ders/d.en/pointers.html#ix_pointers.slice%20from%20pointer

Ali


That's a little advanced, I think. And you also have
http://ddili.org/ders/d.en/slices.html
saying that slices are just another name for dynamic arrays.


Re: How to delete dynamic array ?

2021-03-17 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 17 March 2021 at 16:20:06 UTC, Steven Schveighoffer 
wrote:

[snip]

I've had online battles about this terminology, and people 
asked me to change my array article to disavow this 
distinction, but I'm not going to change it. It's so much 
easier to understand.


-Steve


I'll be on your side on that one.


Re: How to delete dynamic array ?

2021-03-17 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 17 March 2021 at 14:30:26 UTC, Guillaume Piolat 
wrote:

On Wednesday, 17 March 2021 at 10:54:10 UTC, jmh530 wrote:


This is one of those things that is not explained well enough.


Yes.
I made this article to clear up that point: 
https://p0nce.github.io/d-idioms/#Slices-.capacity,-the-mysterious-property


"That a slice own or not its memory is purely derived from the 
pointed area."


could perhaps better be said

"A slice is managed by the GC when the memory it points to is 
in GC memory"?


I probably skimmed over the link when I originally read it 
without really understanding it. I'm able to understand it now.


I think the underlying issue that needs to get explained better 
is that when you do

```d
int[] x = [1, 2, 3];
```

the result is always a GC-allocated dynamic array. However, z 
below

```d
int[3] y = [1, 2, 3];
int[] z = y[];
```

does not touch the GC at all. For a long time, I operated under 
the assumption that dynamic arrays and slices are the same thing 
and that dynamic arrays are always GC-allocated. z is obviously a 
slice of y, but it is also a dynamic array in the sense that you 
can append to it and get an array with one more member than y 
(except in @nogc code). However, when appending to z, what's 
really happening is that the GC allocates a new block of memory, 
copies over the original contents of y, and then copies in the 
new value. So it really becomes a new kind of thing (even if the 
type is unchanged).


One takeaway is that there is no issue with a function like below

```d
@nogc void foo(T)(T[] x) {}
```

so long as you don't actually need the GC within the function. A 
static array can be passed in just using a slice.
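The reallocation behavior described above can be observed directly; a small sketch:

```d
void main()
{
    int[3] y = [1, 2, 3];
    int[] z = y[];           // slice of stack memory; no GC involved yet
    assert(z.ptr == &y[0]);

    z ~= 4;                  // capacity is 0, so this allocates a fresh GC block
    assert(z == [1, 2, 3, 4]);
    assert(z.ptr != &y[0]);  // z now points into GC memory
    assert(y == [1, 2, 3]);  // the static array is untouched
}
```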


Re: How to delete dynamic array ?

2021-03-17 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 16 March 2021 at 23:49:00 UTC, H. S. Teoh wrote:

[snip]

Note that T[] is just a slice, not the dynamic array itself. 
The dynamic array is allocated and managed by the GC when you 
append stuff to it, or when you create a new array with `new` 
or an array literal.


None of the latter, however, precludes you from using T[] for 
memory that you manage yourself. For example, you could do this:


double[] data;
data = cast(double[]) malloc(n * double.sizeof)[0 .. n];

Now you have a slice to memory you allocated yourself, and you 
have to manage its lifetime manually.  When you're done with it:


free(data.ptr);
data = []; // null out dangling pointer, just in case

The GC does not get involved unless you actually allocate from 
it. As long as .ptr does not point to GC-managed memory, the GC 
will not care about it. (Be aware, though, that the ~ and ~= 
operators may allocate from the GC, so you will have to refrain 
from using them. @nogc may help in this regard.)



T


This is one of those things that is not explained well enough.


Re: Why am I getting a dividing by zero error message

2021-01-28 Thread jmh530 via Digitalmars-d-learn
On Thursday, 28 January 2021 at 18:37:37 UTC, Ruby The Roobster 
wrote:

Here is the output/input of the program:
Type in  data for an egg:
Width: 3
Hight: 2

[...]


It might help to break this out into smaller functions. That may 
make it easier to follow what is happening.


Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn

On Friday, 15 January 2021 at 16:22:59 UTC, IGotD- wrote:

[snip]

Are we talking about the same things here? You mentioned DMD 
but I was talking about programs compiled with DMD (or GDC, 
LDC), not the nature of the DMD compiler in particular.


Bump the pointer and never return any memory might be acceptable 
for short-lived programs but totally unacceptable for long-running 
programs, like the browser you are using right now.


Just to clarify, in a program that is made in D with the 
default options, will there be absolutely no memory reclamation?


You are talking about different things.

DMD, as a program, uses the bump the pointer allocation strategy.

If you compile a D program with DMD that uses `new` or appends to 
a dynamic array (or whatever else allocates), then it is using the 
GC to do that. You can also use malloc or your own custom 
strategy. The GC will reclaim memory, but there is no guarantee 
that malloc or a custom allocation strategy will.


Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn
On Friday, 15 January 2021 at 15:36:37 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 15 January 2021 at 15:20:05 UTC, jmh530 wrote:
Hypothetically, would it be possible for users to supply their 
own garbage collector that uses write barriers?


Yes. You could translate Google Chrome's Oilpan to D. It uses 
library smart pointers for dirty-marking. But it requires you 
to write a virtual function that points out what should be 
traced (actually does the tracing for the outgoing pointers 
from that object):


The library smart pointers would make it difficult to interact 
with existing D GC code though.


Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
On Thursday, 14 January 2021 at 18:51:16 UTC, Ola Fosheim 
Grøstad wrote:
One can follow the same kind of reasoning for D. It makes no 
sense for people who want to stay high level and do batch 
programming. Which is why this disconnect exists in the 
community... I think.


The reasoning of why we do not implement write barriers is that 
it will hurt low level programming. But I feel like if we drew 
a Venn diagram of people who rely on GC and those who do a lot 
of writes through a pointer we would get almost no overlap. In 
other words if D compiler had a switch that turned on write 
barriers and better GC I think many people would use it and 
find the trade offs acceptable.


Hypothetically, would it be possible for users to supply their 
own garbage collector that uses write barriers?


Re: C++ or D?

2020-12-30 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 30 December 2020 at 19:51:07 UTC, Ola Fosheim 
Grøstad wrote:

[snip]

Sort of, in C++ it would be something like this

template <typename T, template <typename> class OuterName>
void myfunction(OuterName<T> x){ stuff(); }

[snip]


You mean like this

```d
struct Foo(T)
{
    T x;
}

void foo(T : Foo!V, V)(T x) {
    import std.stdio: writeln;
    writeln("here");
}

void main() {
    Foo!int x;
    foo(x);
}
```


Re: Running unit tests from DUB single file packages

2020-12-22 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 22 December 2020 at 15:06:09 UTC, drug wrote:

[snip]


But what do you mean exactly by "work with dependency"? As I 
understand, `dub test` does not run unit tests in dependencies 
and single file packages work with dependencies in general. Do 
you mean something else? I'm finishing the new PR to fix #2051 
finally and I'd like to know if there is something else I 
should include in it.


https://github.com/dlang/dub/pull/2064


Thanks. It looks like your UT with taggedalgebraic does exactly 
what I was looking for.


My problem is that run.dlang.org will skip unittests when you 
have dependencies. I had made some progress on fixing this a few 
months ago [1], but put it on the back burner when I ran into 
similar issues that the OP was dealing with. The problem 
ultimately came down to dub test not working with --single, which 
it looks like this latest PR will fix for good.


[1] https://github.com/dlang-tour/core-exec/pull/56


Re: Running unit tests from DUB single file packages

2020-12-21 Thread jmh530 via Digitalmars-d-learn

On Monday, 21 December 2020 at 11:31:49 UTC, drug wrote:

[snip]
Unfortunately I'm very busy. But I check it again and it turns 
out that the fix does not resolve the problem completely. This 
PR just remove the single file from testing so currently dub 
does not run unit tests in the single file package at all. The 
first variant (https://github.com/dlang/dub/pull/2050) fixes 
the issue indeed. I need to reevaluate these PRs and close the 
issue. I'll do it later.


Thanks for taking a look.


Re: Running unit tests from DUB single file packages

2020-12-20 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 2 December 2020 at 12:51:11 UTC, drug wrote:

[snip]


Thanks! Let's see if it gets merged or if a slightly more 
involved

solution is needed.



Remake it - https://github.com/dlang/dub/pull/2052
This has more chances to be merged


Looks like this got merged and will be part of the newest 
version, which is great news. Have you checked that it works with 
dependencies?


Re: Running unit tests from DUB single file packages

2020-12-01 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 1 December 2020 at 14:15:22 UTC, Johannes Loher wrote:

[snip]

The point of using DUB (and the single file package format) is 
easy access to libraries from the DUB registry. If I didn't 
want to use a dependency, I would not be using DUB at all. That 
said, leaving out the dependency does not solve the issue, it 
also occurs with the following source file:




Thanks. The reason I was asking was because if you've ever tried 
run.dlang.org with dependencies and unit tests, then you'll 
notice that the unittests are skipped, which is basically the 
same issue you are having. If you remove the dependencies, then 
it works. So I was thinking that whatever they used to get 
run.dlang.org working without dependencies might help you. I had 
hoped to try to get run.dlang.org working with dependencies and 
unittests, but haven't found the time to get a solution. Maybe 
this PR might improve matters...





Re: Running unit tests from DUB single file packages

2020-12-01 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 1 December 2020 at 11:40:38 UTC, Johannes Loher wrote:

[snip]

Any hints on how to execute unit tests from single file DUB 
packages? Is it even possible at the moment? Thanks in advance 
for any help!



[1] https://adventofcode.com/


Have you tried it without the imports?


Re: Running unit tests from DUB single file packages

2020-12-01 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 1 December 2020 at 13:52:35 UTC, jmh530 wrote:
On Tuesday, 1 December 2020 at 11:40:38 UTC, Johannes Loher 
wrote:

[snip]

Any hints on how to execute unit tests from single file DUB 
packages? Is it even possible at the moment? Thanks in advance 
for any help!



[1] https://adventofcode.com/


Have you tried it without the imports?


Or rather, without the dependency.


Re: lambdas with types

2020-11-20 Thread jmh530 via Digitalmars-d-learn

On Friday, 20 November 2020 at 14:57:42 UTC, H. S. Teoh wrote:
On Fri, Nov 20, 2020 at 02:47:52PM +, Paul Backus via 
Digitalmars-d-learn wrote: [...]
In this specific case, you could also make `foo` a type-safe 
variadic function [1], which would eliminate the need for 
`allSatisfy`:


void foo(double[] args...)
{
// ...
}

[...]

Yes, and this will also eliminate the template bloat associated 
with .foo, which would have been instantiated once per call 
with a different number of arguments.  But of course, this only 
works if all arguments are of the same type, and if the 
function body does not depend on accessing the number of 
arguments at compile-time.



T


Thanks all.

The template conditions I'm working on are complicated enough 
that this approach might work for some but not all. However, if I 
split out the function I'm working on into a separate one, then I 
might be able to take advantage of that.


lambdas with types

2020-11-20 Thread jmh530 via Digitalmars-d-learn
Doing something like below fails because I don't seem to be able 
to make a templated lambda that just takes types. Is the only way 
to do something similar to create a separate function to handle 
the condition, or is there some other way to do something with 
similar flexibility?


```d
import std.stdio: writeln;
import std.meta: allSatisfy;

void foo(Args...)(Args args)
    if (allSatisfy!(x => is(x == double), Args))
{
    writeln("works");
}

void main() {
    foo(1.0, 2.0);
}
```
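A common workaround is to name the condition as a template instead of writing it as a lambda; a sketch:

```d
import std.meta: allSatisfy;

// A named template can be passed to allSatisfy, unlike a lambda.
enum isDouble(T) = is(T == double);

void foo(Args...)(Args args)
    if (allSatisfy!(isDouble, Args))
{
    import std.stdio: writeln;
    writeln("works");
}

void main()
{
    foo(1.0, 2.0);
    // foo(1.0, "a"); // would fail the constraint
}
```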


Re: enum and const or immutable ‘variable’ whose value is known at compile time

2020-09-17 Thread jmh530 via Digitalmars-d-learn

On Thursday, 17 September 2020 at 10:53:48 UTC, Mike Parker wrote:

[snip]

I can attest that in the 17 years I've been hanging around 
here, the fact that enum is used to indicate a manifest 
constant has not been a serious source of WTF posts. So I think 
"pretty much everyone coming to D" have decided it's either 
perfectly fine or perfectly tolerable. It's the sort of thing 
that may not be obvious, but once you figure you absorb it and 
get down to coding. I know some people would prefer it were 
something else and some don't care. I'm squarely in the camp 
that thinks it makes perfect sense and it would be silly to 
create a new keyword for it.


A talk at dconf 2019 provided an alternative approach to using 
enum for manifest constants:


http://dconf.org/2019/talks/marques.html


Re: How to use libmir --> mir-algorithm, numir, mir-random?

2020-09-09 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 9 September 2020 at 15:30:33 UTC, Shaleen Chhabra 
wrote:

[snip]

Hi, I updated my dmd version to dmd-2.093.1
Now it throws a conflict error between

1. function mir.ndslice.topology.iota!(long, 1LU).iota at 
mir/ndslice/topology.d(630) conflicts with function 
std.range.iota!int.iota at 
/home/shaleen/.dvm/compilers/dmd-2.093.1/linux/bin/../../src/phobos/std/range/package.d


2. template mir.ndslice.topology.map(fun...) if (fun.length) at 
mir/ndslice/topology.d(2565) conflicts with template 
std.algorithm.iteration.map(fun...) if (fun.length >= 1) at 
/home/shaleen/.dvm/compilers/dmd-2.093.1/linux/bin/../../src/phobos/std/algorithm/iteration.d(482)


Below would generate the same error for iota. There are iota 
functions in std.range and mir.ndslice.topology and the compiler 
does not know which one to use. You can use one or the other or 
use static imports.


In the future, it will be a little easier to identify the issues 
if you post the code as well. You can also start with simpler 
examples and work your way to larger ones.


```d
/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
import std.range;
import mir.ndslice.topology;

void main()
{
auto x = iota(5);
}
```
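Either approach resolves the ambiguity; a sketch of the static-import variant:

```d
/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
static import std.range;            // phobos iota must now be qualified
import mir.ndslice.topology: iota;  // mir's iota is used unqualified

void main()
{
    auto x = iota(5);               // mir.ndslice.topology.iota
    auto y = std.range.iota(5);     // std.range.iota, explicitly
}
```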


Re: How does D's templated functions implementation differ from generics in C#/Java?

2020-08-07 Thread jmh530 via Digitalmars-d-learn

On Friday, 7 August 2020 at 21:39:44 UTC, H. S. Teoh wrote:

[snip]


"Furthermore, it can dispatch to a type-erased implementation ala 
Java -- at your choice;"


This is interesting. Would you just cast to Object?


Re: Template constraint on alias template parameter.

2020-08-06 Thread jmh530 via Digitalmars-d-learn

On Thursday, 6 August 2020 at 18:09:50 UTC, ag0aep6g wrote:

[snip]

`is(...)` only works on types. You're looking for 
`__traits(isSame, T, Foo)`.


For `is(T!U == Foo!U, U)` to work, the compiler would have to 
guess U. If the first guess doesn't work, it would have to 
guess again, and again, and again, until it finds a U that does 
work. Could take forever.


Thanks for the explanation!


Re: Template constraint on alias template parameter.

2020-08-06 Thread jmh530 via Digitalmars-d-learn

On Thursday, 6 August 2020 at 16:01:35 UTC, jmh530 wrote:

[snip]


It seems that T here is the uninstantiated template Foo, which 
can only be instantiated with actual types. Something like below 
works and might work for me.


```d
template test(alias T)
    if (__traits(isTemplate, T))
{
    void test(U)(U x)
        if (is(T!U : Foo!U))
    {
        import std.stdio: writeln;
        writeln("there");
    }
}
```


Template constraint on alias template parameter.

2020-08-06 Thread jmh530 via Digitalmars-d-learn
The code below compiles, but I want to put an additional 
constraint on the `test` function is only called with a Foo 
struct.


I tried things like is(T == Foo) and is(T : Foo), but those don't 
work. However, something like is(T!int : Foo!int) works, but 
is(T!U == Foo!U, U) doesn't. Any idea why is(T!U == Foo!U, U) 
doesn't work?


```d
struct Foo(T)
{
    T x;
}

void test(alias T)()
    if (__traits(isTemplate, T))
{
    import std.stdio: writeln;
    writeln("there");
}

void main()
{
    test!Foo();
}
```



Re: 2-D array initialization

2020-08-02 Thread jmh530 via Digitalmars-d-learn

On Sunday, 2 August 2020 at 19:19:51 UTC, Andy Balba wrote:



I'm not a gitHub fan, but I like the mir functions; and it 
looks like I have to download mir before using it.
mir has quite a few .d files..Is there a quick way to download 
it ?


dub [1] is now packaged with dmd, which is the easiest way to use 
it, by far.


You can also play around with it at run.dlang.org (though it has 
some limitations).


I encourage you to get familiar with git and github, but if you 
want to avoid downloading files one-by-one from the website, 
there should be a big green button on the front page that says 
"Code". If you click on that, there is button for downloading a 
zip file.



[1] https://dub.pm/getting_started


Re: 2-D array initialization

2020-07-31 Thread jmh530 via Digitalmars-d-learn

On Friday, 31 July 2020 at 23:42:45 UTC, Andy Balba wrote:

ubyte[3][4] c ;

How does one initialize c in D ?  none of the statements below 
works


 c = cast(ubyte) [ [5, 5, 5], [15, 15,15], [25, 25,25], [35, 
35,35]  ];


c[0] = ubyte[3] [5, 5, 5]   ;  c[1] = ubyte[3] [15, 15,15] ;
c[2] = ubyte[3] [25, 25,25] ;  c[3] = ubyte[3] [35, 35,35] ;

for (int i= 0; i<3; i++) for (int j= 0; i<4; j++) c[i][j]= 
cast(ubyte)(10*i +j) ;


Below is for a dynamic array. You can also try mir 
(https://github.com/libmir/mir-algorithm).


```d
import std.stdio: writeln;

void main()
{
    auto c = cast(ubyte[][]) [[5, 5, 5], [15, 15, 15], [25, 25, 25], [35, 35, 35]];

    writeln(c);
}
```
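For the static array in the original question, no cast is needed at all, since integer literals that fit in a ubyte convert implicitly; a sketch:

```d
void main()
{
    // Static 2-D array initialization from a nested literal
    ubyte[3][4] c = [[5, 5, 5], [15, 15, 15], [25, 25, 25], [35, 35, 35]];
    assert(c[1][0] == 15);
    assert(c[3][2] == 35);
}
```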


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:41:35 UTC, 9il wrote:

[snip]

Ah, no, my bad! You write @fmamath, I have read it as 
@fastmath. @fmamath is OK here.


I've mixed up @fastmath and @fmamath as well. No worries.


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:26:19 UTC, 9il wrote:

[snip]


@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.


I hadn't realized that @fmamath was the problem, rather than 
@fastmath overall. @fmamath is used on many mir.math.stat 
functions, though admittedly not in the accumulators.


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 05:57:56 UTC, tastyminerals wrote:

[snip]

Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)


I changed the dflags-ldc to "-mcpu=native -O" and compiled with 
`dub run --compiler=ldc2`. I got similar results to yours for 
both in the initial run.


I changed sd to

```d
@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)
{
    pragma(inline, false);
    if (flatMatrix.empty)
        return 0.0;
    double n = cast(double) flatMatrix.length;
    double mu = flatMatrix.mean;
    return (flatMatrix.map!(a => (a - mu) ^^ 2).sum!"precise" / n).sqrt;
}
```

and got

std of [10, 10] matrix 0.0016321
std of [20, 20] matrix 0.0069788
std of [300, 300] matrix 2.42063
std of [60, 60] matrix 0.0828711
std of [600, 600] matrix 9.72251
std of [800, 800] matrix 18.1356

And the biggest change by far was the sum!"precise" instead of 
sum!"fast".


When I ran your benchStd function with
ans = matrix.flattened.standardDeviation!(double, "online", 
"fast");

I got
std of [10, 10] matrix 1e-07
std of [20, 20] matrix 0
std of [300, 300] matrix 0
std of [60, 60] matrix 1e-07
std of [600, 600] matrix 0
std of [800, 800] matrix 0

I got the same result with Summator.naive. That almost seems too 
low.


The default is Summator.appropriate, which is resolved to 
Summator.pairwise in this case. It is faster than 
Summator.precise, but still slower than Summator.naive or 
Summator.fast. Your welfordSD should line up with Summator.naive.


When I change that to
ans = matrix.flattened.standardDeviation!(double, "online", 
"precise");

I get
Running .\mir_benchmarks_2.exe
std of [10, 10] matrix 0.0031737
std of [20, 20] matrix 0.0153603
std of [300, 300] matrix 4.15738
std of [60, 60] matrix 0.171211
std of [600, 600] matrix 17.7443
std of [800, 800] matrix 34.2592

I also tried changing your welfordSD function based on the stuff 
I mentioned above, but it did not make a large difference.


Re: D Mir: standard deviation speed

2020-07-14 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:
I am trying to implement standard deviation calculation in Mir 
for benchmark purposes.
I have two implementations. One is the straightforward std = 
sqrt(mean(abs(x - x.mean())**2)) and the other follows 
Welford's algorithm for computing variance (as described here: 
https://www.johndcook.com/blog/standard_deviation/).


However, although the first implementation should be less 
efficient / slower, the benchmarking results show a startling 
difference in its favour. I'd like to understand if I am doing 
something wrong and would appreciate some explanation.


# Naive std
import std.math : abs;
import mir.ndslice;
import mir.math.common : pow, sqrt, fastmath;
import mir.math.sum : sum;
import mir.math.stat : mean;

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)
{
pragma(inline, false);
if (flatMatrix.empty)
return 0.0;
double n = cast(double) flatMatrix.length;
double mu = flatMatrix.mean;
return (flatMatrix.map!(a => (a - mu).abs ^^ 2).sum!"fast" 
/ n).sqrt;

}


# std with Welford's variance
@fastmath double sdWelford(T)(Slice!(T*, 1) flatMatrix)
{
pragma(inline, false);
if (flatMatrix.empty)
return 0.0;

double m0 = 0.0;
double m1 = 0.0;
double s0 = 0.0;
double s1 = 0.0;
double n = 0.0;
foreach (x; flatMatrix.field)
{
++n;
m1 = m0 + (x - m0) / n;
s1 = s0 + (x - m0) * (x - m1);
m0 = m1;
s0 = s1;
}
// switch to n - 1 for sample variance
return (s1 / n).sqrt;
}

Benchmarking:

Naive std (1k loops):
  std of [60, 60] matrix 0.001727
  std of [300, 300] matrix 0.043452
  std of [600, 600] matrix 0.182177
  std of [800, 800] matrix 0.345367

std with Welford's variance (1k loops):
  std of [60, 60] matrix 0.0225476
  std of [300, 300] matrix 0.534528
  std of [600, 600] matrix 2.0714
  std of [800, 800] matrix 3.60142


It would be helpful to provide a link.

You should only need one accumulator each for the mean and the 
centered sum of squares. See the Python example under the Welford 
section here:

https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance

Using the extra accumulators may have broken the optimization somehow.

variance and standardDeviation were recently added to 
mir.math.stat. They have the option to switch between Welford's 
algorithm and the others. What you call as the naive algorithm, 
is VarianceAlgo.twoPass and the Welford algorithm can be toggled 
with VarianceAlgo.online, which is the default option. It also 
would be interesting if you re-did the analysis with the built-in 
mir functions.


There are some other small differences between your 
implementation and the one in mir, beyond the issue discussed 
above. You take the absolute value before the square root and 
force the use of sum!"fast". Another difference is 
VarianceAlgo.online in mir is using a precise calculation of the 
mean rather than the fast update that Welford uses. This may have 
a modest impact on performance, but should provide more accurate 
results.
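
For comparison, here is a sketch (my own illustration following the 
Wikipedia pseudocode, not mir's implementation) of Welford's algorithm 
with a single accumulator each for the mean and the centered sum of 
squares:

```d
import std.math : sqrt;

// Sketch of single-accumulator Welford (population SD); an
// illustration of the Wikipedia pseudocode, not mir's code.
double sdWelfordSingle(const(double)[] xs)
{
    double mean = 0.0;
    double m2 = 0.0;   // centered sum of squares
    double n = 0.0;
    foreach (x; xs)
    {
        ++n;
        immutable delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }
    return n > 0 ? sqrt(m2 / n) : 0.0; // use n - 1 for the sample SD
}

unittest
{
    import std.math : isClose;
    // mean of [1, 2, 3, 4] is 2.5; population variance is 5 / 4 = 1.25
    assert(sdWelfordSingle([1.0, 2.0, 3.0, 4.0]).isClose(sqrt(1.25)));
}
```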


Re: Upcoming refraction module in bolts [was: DUB project type support for Emacs Projectile]

2020-06-15 Thread jmh530 via Digitalmars-d-learn

On Monday, 15 June 2020 at 17:32:26 UTC, Jean-Louis Leroy wrote:

[snip]


Thanks, cool.




Re: DUB project type support for Emacs Projectile

2020-06-15 Thread jmh530 via Digitalmars-d-learn

On Monday, 15 June 2020 at 13:17:11 UTC, Jean-Louis Leroy wrote:

[snip]

Nah, I saw it. Well. My take on it has been ready for months 
but I had to wait for my employer's permission to publish it. 
They are very open-source friendly, and as a consequence there 
is a glut of requests for open-sourcing personal projects. I 
guess I am going to cancel my request...




Ah. I suppose that depends on implementation/performance/feature 
differences...


On the bright side, I just got authorized to contribute my work 
on function refraction (currently part of openmethods) to 
bolts. You can see it here: 
https://github.com/aliak00/bolts/pull/10


I saw when you mentioned it earlier. Though it hasn't been 
something I've needed as yet, it's good to know that it's there.


This allows the function mixins to work when they are in 
different modules, right? I don't see a test for that, but it 
might be useful to include such an example (I'm pretty sure 
Atila's tardy makes use of a similar functionality when they are 
in different modules).


It's interesting that many of the examples for refract are like 
refract!(F, "F") or refract!(answer, "answer"). Would something 
like
Function refract(alias fun, string localSymbol = 
__traits(identifier, fun))()

work for you?
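
As a sanity check that the suggested signature is even expressible, 
here is a toy sketch (the names are made up; this is not the bolts 
API):

```d
// Toy sketch: the string parameter defaults to the aliased symbol's
// own name via __traits(identifier, ...).
string localName(alias fun, string localSymbol = __traits(identifier, fun))()
{
    return localSymbol;
}

int answer() { return 42; }

void main()
{
    static assert(localName!answer() == "answer");           // default used
    static assert(localName!(answer, "theAnswer")() == "theAnswer");
}
```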


Re: DUB project type support for Emacs Projectile

2020-06-14 Thread jmh530 via Digitalmars-d-learn

On Sunday, 14 June 2020 at 17:19:05 UTC, Jean-Louis Leroy wrote:

[snip]


In case you missed it, I thought you would find this interesting
https://forum.dlang.org/thread/dytpsnkqnmgzniiwk...@forum.dlang.org


Re: Why is there no range iteration with index by the language?

2020-06-10 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 10 June 2020 at 00:53:30 UTC, Seb wrote:

[snip]

Anyhow, I would be highly in favor of DMD doing this. It's one 
of those many things that I have on my list for D3 or a D fork.


Chapel supports zippered iteration [1]. From the discussion here, 
it sounds very much like the implementation is similar to what D 
does with tuples. It probably would be pretty trivial with first 
class tuples.


[1] 
https://chapel-lang.org/docs/language/spec/statements.html#zipper-iteration
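
In Phobos today, the closest analogue is std.range.enumerate (or an 
explicit zip with iota); a small sketch:

```d
import std.range : enumerate, iota, zip;
import std.algorithm.iteration : map;

void main()
{
    auto r = iota(3).map!(a => a * 10); // a non-array input range

    // enumerate pairs each element with a running index
    foreach (i, e; r.enumerate)
        assert(e == i * 10);

    // zippered form: walk an index range and the data range in lockstep
    foreach (t; zip(iota(size_t.max), r))
        assert(t[1] == t[0] * 10);
}
```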


Re: Mixin and imports

2020-06-08 Thread jmh530 via Digitalmars-d-learn

On Monday, 8 June 2020 at 14:27:26 UTC, data pulverizer wrote:

[snip]

Out of curiosity, what does the "." in front of `foo` mean? I've 
seen it in some D compiler code on GitHub and have no idea what 
it does. I tried Googling it to no avail. It doesn't have 
anything to do with UFCS, does it?


Thanks


ag0aep6g provided the link to it, but it's one of those things 
that has been difficult for me to understand as well. I believe 
the original code had `foo` in a template. So in that case it was 
necessary. I'm not sure if it still is in my simplified version.
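
For the record, the leading dot is the module scope operator: `.foo` 
looks `foo` up at module scope, bypassing any more local symbol of the 
same name. A minimal illustration:

```d
int foo(int x) { return x + 1; }       // module-level foo

struct S
{
    int foo(int x) { return x + 100; } // member foo shadows the above

    int viaModule(int x)
    {
        return .foo(x); // leading dot: use the module-level foo
    }
}

void main()
{
    S s;
    assert(s.viaModule(1) == 2);   // module-level foo
    assert(s.foo(1) == 101);       // member foo
}
```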


Re: Mixin and imports

2020-06-08 Thread jmh530 via Digitalmars-d-learn

On Monday, 8 June 2020 at 12:20:46 UTC, Adam D. Ruppe wrote:

[snip]

Why do you even want foo!"fabs"? Usually when I see people 
having this problem it is actually a misunderstanding of what 
is possible with the foo!fabs style - which is better in 
basically every way and can be used in most the same places.


So what's your bigger goal?


There were some other functions in the module that allow the use 
of function!"thinginquotes". However, most of those functions are 
using the "thinginquotes" to avoid writing 
function!(SomeEnum.thinginquotes). That really isn't the thing to 
fix in this case. So I think it makes sense for me to give up 
what I was trying to do.


Re: Mixin and imports

2020-06-08 Thread jmh530 via Digitalmars-d-learn

On Monday, 8 June 2020 at 10:28:39 UTC, Paul Backus wrote:

[snip]


Thanks for that suggestion. That works for me.

Unfortunately, it's probably not worth the extra effort versus 
just doing foo!fabs in my case.


Re: Mixin and imports

2020-06-08 Thread jmh530 via Digitalmars-d-learn

On Monday, 8 June 2020 at 04:13:08 UTC, Mike Parker wrote:

[snip]


The problem isn't the mixin. It's the template. Templates take 
the scope of their declaration, not their instantiation. So the 
mixin is getting the template's scope.


Anyway, this appears to work:

`double z = foo!"std.math.fabs"(x);`


Thanks, that makes sense.

However, I get the same error with the code below. Am I doing 
something wrong?


double foo(alias f)(double x) {
return f(x);
}

template foo(string f)
{
mixin("alias foo = .foo!(" ~ f ~ ");");
}

void main() {
static import std.math;
double x = 2.0;
double y = foo!(std.math.fabs)(x);
double z = foo!"std.math.fabs"(x);
}


Mixin and imports

2020-06-07 Thread jmh530 via Digitalmars-d-learn
In the code below, foo!fabs compiles without issue, but 
foo!"fabs" does not because the import is not available in the 
string mixin. If I move the import to the top of the module, then 
it works. However, if I then move foo to another module, it will 
no longer compile, since the import and the mixin are in 
different modules.


Is there any way I can make sure the mixin knows about the import?

I am just using fabs as an example. Ideally, I would want to use 
any function from any other module. I can figure out how to do 
it, for instance, if the only functions that were allowed were 
those in one module, like std.math.


```
double foo(alias f)(double x) {
return f(x);
}

template foo(string f) {
mixin("alias foo = .foo!(" ~ f ~ ");");
}

void main() {
import std.math: fabs;

double x = 2.0;
double y = foo!fabs(x);
double z = foo!"fabs"(x);
}
```


Re: Mir Slice Column or Row Major

2020-05-27 Thread jmh530 via Digitalmars-d-learn

On Thursday, 28 May 2020 at 00:51:50 UTC, 9il wrote:

snip
Actually it is a question of notation. For example, mir-lapack 
uses ndslice as column-major Fortran arrays. This may cause 
some headaches because the data needs to be transposed in mind. 
We can think about ndslice as about column-major nd-arrays with 
the reversed order of indexing.


The current template looks like

Slice(Iterator, size_t N = 1, SliceKind kind = 1)

If we add a special column-major notation, then it will look 
like


Slice(Iterator, size_t N = 1, SliceKind kind = Contiguous, 
PayloadOrder = RowMajor)


A PR that adds this feature will be accepted.


Oh, that is news to me. I was under the impression that such a PR 
would not be accepted. The prototype you have is exactly what I 
had been thinking of (that's what Eigen does).


Unfortunately, I don’t think I have the time to ensure everything 
works properly with column major. I think my time right now is 
better spent on other mir stuff, but it’s good to know that the 
only obstacle is someone putting the work in.




Re: Mir Slice Column or Row Major

2020-05-27 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 27 May 2020 at 16:07:58 UTC, welkam wrote:
On Wednesday, 27 May 2020 at 01:31:23 UTC, data pulverizer 
wrote:

column major


Cute puppies die when people access their arrays in column 
major.


Not always true...many languages support column-major order 
(Fortran, most obviously). The Eigen C++ library allows the user 
to specify row major or column major. I had brought this up with 
Ilya early on in mir and he thought it would increase complexity 
to allow both and could also require more memory. So mir is row 
major.


Re: [GTK-D] dub run leads to lld-link: error: could not open libcmt.lib: no such file or directory

2020-05-26 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 26 May 2020 at 15:18:42 UTC, jmh530 wrote:

On Tuesday, 26 May 2020 at 15:16:25 UTC, jmh530 wrote:

[snip]
Another short-term fix might be to try compiling with the -m32 
dflag (need to put in your dub.sdl/json).




Sorry, easier is
dub test --arch=x86


You may also have to make sure that bin64 is in the path.

https://dlang.org/changelog/2.091.0.html#windows


Re: [GTK-D] dub run leads to lld-link: error: could not open libcmt.lib: no such file or directory

2020-05-26 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 13 May 2020 at 15:26:48 UTC, BoQsc wrote:

[snip]

Linking...
lld-link: error: could not open libcmt.lib: no such file or 
directory
lld-link: error: could not open OLDNAMES.lib: no such file or 
directory

Error: linker exited with status 1
C:\D\dmd2\windows\bin\dmd.exe failed with exit code 1.


I just ran into this issue as well. I haven't had a chance to fix 
it on my end, but this is what I've found.


This line
Performing "debug" build using C:\D\dmd2\windows\bin\dmd.exe for 
x86_64.

means that it is compiling a 64-bit program on Windows.

On Windows, if you are trying to compile a 64-bit program, dmd 
will try to link with lld if it cannot find a Microsoft linker 
[1]. The failure is likely due to your Microsoft linker (or lld) 
either not being installed properly, being the wrong version, or 
being configured improperly. If you don't have Visual Studio 
Community installed, installing it might be a first step. Another 
short-term fix might be to try compiling with the -m32 dflag (you 
need to put it in your dub.sdl/json).


[1] https://dlang.org/dmd-windows.html#linking
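
For reference, a hypothetical dub.sdl fragment for the -m32 workaround 
mentioned above (in dub.json, the equivalent is a "dflags" array):

```sdl
// hypothetical dub.sdl fragment: pass -m32 so dmd uses the 32-bit
// toolchain (OPTLINK) instead of the 64-bit Microsoft linker
dflags "-m32"
```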


Re: [GTK-D] dub run leads to lld-link: error: could not open libcmt.lib: no such file or directory

2020-05-26 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 26 May 2020 at 15:16:25 UTC, jmh530 wrote:

[snip]
Another short-term fix might be to try compiling with the -m32 
dflag (need to put in your dub.sdl/json).




Sorry, easier is
dub test --arch=x86



Re: Static assert triggered in struct constructor that shouldn't be called

2020-05-24 Thread jmh530 via Digitalmars-d-learn

On Sunday, 24 May 2020 at 21:43:34 UTC, H. S. Teoh wrote:
On Sun, May 24, 2020 at 09:34:53PM +, jmh530 via 
Digitalmars-d-learn wrote:
The following code results in the static assert in the 
constructor being triggered, even though I would have thought 
no constructor would have been called. I know that there is an 
easy fix for this (move the static if outside the 
constructor), but it still seems like it doesn't make sense.

[...]

The problem is that static assert triggers when the function is 
compiled (not when it's called), and since your ctor is not a 
template function, it will always be compiled. Hence the static 
assert will always trigger.



T


Thanks. Makes sense.
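
For completeness, the easy fix mentioned in the original question 
(moving the static if outside the constructor, so the constructor is 
only compiled for Foo.A) looks like this sketch:

```d
enum Foo { A, B }

struct Bar(Foo foo)
{
    static if (foo == Foo.A)
    {
        float x = 0.5;
        long y = 1;

        // only compiled when foo == Foo.A, so no static assert is needed
        this(long exp, float x)
        {
            this.y = exp;
            this.x = x;
        }
    }
    else static if (foo == Foo.B)
    {
        int p = 1;
    }
}

void main()
{
    Bar!(Foo.B) x;                  // fine: Foo.B has no custom ctor
    auto y = Bar!(Foo.A)(2, 1.5f);  // ctor exists only for Foo.A
}
```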


Static assert triggered in struct constructor that shouldn't be called

2020-05-24 Thread jmh530 via Digitalmars-d-learn
The following code results in the static assert in the 
constructor being triggered, even though I would have thought no 
constructor would have been called. I know that there is an easy 
fix for this (move the static if outside the constructor), but it 
still seems like it doesn't make sense.


enum Foo
{
A,
B,
}

struct Bar(Foo foo)
{
static if (foo == Foo.A)
{
float x = 0.5;
long y = 1;
}
else static if (foo == Foo.B)
{
int p = 1;
}

this(long exp, float x)
{
static if (foo == Foo.A) {
this.y = exp;
this.x = x;
} else {
static assert(0, "Not implemented");
}
}
}

void main()
{
Bar!(Foo.B) x;
}


Re: Mir Slice.shape is not consistent with the actual array shape

2020-05-24 Thread jmh530 via Digitalmars-d-learn

On Sunday, 24 May 2020 at 14:21:26 UTC, Pavel Shkadzko wrote:

[snip]

Sorry for the typo. It should be "auto arrSlice = a.sliced;"


Try using fuse

/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
import std.stdio;
import std.conv;
import std.array: array;
import std.range: chunks;
import mir.ndslice;

int[] getShape(T : int)(T obj, int[] dims = null)
{
return dims;
}

// return arr shape
int[] getShape(T)(T obj, int[] dims = null)
{
dims ~= obj.length.to!int;
return getShape!(typeof(obj[0]))(obj[0], dims);
}

void main() {
int[] arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
15, 16];

int[][][] a = arr.chunks(4).array.chunks(2).array;

int err;
writeln(arr);
writeln(a.shape(err));

auto aSlice = a.fuse;
writeln(aSlice);
writeln(aSlice.shape);

}


Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 17:59:30 UTC, Paul Backus wrote:

On Thursday, 7 May 2020 at 15:00:18 UTC, jmh530 wrote:

Does foo!y0(rt) generate the same code as foo(rt, y0)?

How is the code generated by foo(rt, x0) different from 
foo(rt,y0)?


You can look at the generated code using the Compiler Explorer 
at d.godbolt.org. Here's a link to your example, compiled with 
ldc, with optimizations enabled:


https://d.godbolt.org/z/x5K7P6

As you can see, the non-static-if version has a runtime 
comparison and a conditional jump, and the static-if version 
does not. However, it doesn't make a difference in the end, 
because the calls have been optimized out, leaving an empty 
main function.


Thanks for that. I forgot how much nicer godbolt is for looking 
at assembly than run.dlang.org. Or maybe it's just that the 
optimized assembly for ldc looks a lot simpler than dmd's?


I eventually played around with it for a bit and ended up with 
what's below. When compiled with ldc -O -release, some of the 
functions have code that is generated a little differently than 
the one above (the assembly looks a little prettier when using 
ints than doubles). It looks like there is a little bit of 
template bloat too, in that it generates something like four 
different versions of the run-time function that all basically do 
the same thing (with some slight differences I don't really 
understand). Anyway, I think that's the first time I've ever used 
__traits compiles with a static if, and I don't think I've ever 
used a template alias bool before, but I thought it was kind of 
cool.



int foo(alias bool rtct)(int[] x) {
static if (__traits(compiles, {static if (rtct) { enum val = 
rtct;}})) {

static if (rtct) {
return bar(x) / cast(int) x.length;
} else {
return bar(x) / cast(int) (x.length - 1);
}
} else {
if (rtct)
return bar(x) / cast(int) x.length;
else
return bar(x) / cast(int) (x.length - 1);
}
}

int foo(int[] x, bool rtct) {
if (rtct)
return foo!true(x);
else
return foo!false(x);
}

int bar(int[] x) {
return x[0] + x[1];
}


void main() {
import std.stdio: writeln;

int[] a = [1, 2, 3, 4, 5];
bool x0 = true;
bool x1 = false;
int result0 = foo(a, x0);
int result1 = foo(a, x1);
int result2 = foo!x0(a);
int result3 = foo!x1(a);

enum y0 = true;
enum y1 = false;
int result0_ = foo(a, y0);
int result1_ = foo(a, y1);
int result2_ = foo!y0(a);
int result3_ = foo!y1(a);
}


Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 15:34:21 UTC, ag0aep6g wrote:

[snip]

The `static if` is guaranteed to be evaluated during 
compilation. That means, `foo!y0` effectively becomes this:


auto foo(int rt) { return rt + 1; }

There is no such guarantee for `foo(rt, y0)`. It doesn't matter 
that y0 is an enum.


But a half-decent optimizer will have no problem replacing all 
your calls with their results. Compared with LDC and GDC, DMD 
has a poor optimizer, but even DMD turns this:


int main() {
int rt = 3;
bool x0 = true;
bool x1 = false;
enum y0 = true;
enum y1 = false;
return
foo(rt, x0) +
foo(rt, x1) +
foo!y0(rt) +
foo!y1(rt) +
foo(rt, y0) +
foo(rt, y1);
}

into this:

int main() { return 21; }


Thanks for the reply.

The particular use case I'm thinking of is more like below where 
some function bar (that returns a T) is called before the return. 
Not sure if that matters or not for inlining the results.


T foo(T[] x, bool y) {
if (y)
return bar(x) / x.length;
else
return bar(x) / (x.length - 1);
}


Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 15:29:01 UTC, H. S. Teoh wrote:

[snip]


You explained things very well, thanks.


CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn
I am curious how CTFE and static ifs interact. In particular, 
whether an enum bool passed as a template parameter or a run-time 
one will turn an if statement into something like a static if 
statement (perhaps after the compiler optimizes other code away). 
In the code below, I have a function that takes a bool as a 
template parameter and another that has it as a run-time 
parameter. In the first, I just pass a compile-time bool (y0) 
into it. In the second I have run-time (x0) and compile-time (y0) 
versions.


Does foo!y0(rt) generate the same code as foo(rt, y0)?

How is the code generated by foo(rt, x0) different from 
foo(rt,y0)?


auto foo(bool rtct)(int rt) {
static if (rtct)
return rt + 1;
else
return rt;
}

auto foo(int rt, bool rtct) {
if (rtct == true)
return rt + 1;
else
return rt;
}

void main() {
int rt = 3;
bool x0 = true;
bool x1 = false;
assert(foo(rt, x0) == 4);
assert(foo(rt, x1) == 3);

enum y0 = true;
enum y1 = false;
assert(foo!y0(rt) == 4);
assert(foo!y1(rt) == 3);
assert(foo(rt, y0) == 4);
assert(foo(rt, y1) == 3);
}




Re: Python eval() equivalent in Dlang working in Runtime?

2020-05-01 Thread jmh530 via Digitalmars-d-learn

On Friday, 1 May 2020 at 15:42:54 UTC, Baby Beaker wrote:

There is a Python eval() equivalent in Dlang working in Runtime?


You might find arsd's script.d interesting [1], but it's more 
like a blend between D and javascript.

[1]https://github.com/adamdruppe/arsd/blob/d0aec8e606a90c005b9cac6fcfb2047fb61b38fa/script.d


Re: Implicit Function Template Instantiation (IFTI) Question

2020-04-27 Thread jmh530 via Digitalmars-d-learn
On Monday, 27 April 2020 at 17:40:06 UTC, Steven Schveighoffer 
wrote:

[snip]


Thanks for that. Very detailed.

In terms of a use case, we just added a center function to mir 
[1]. It can take an alias to a function. I wanted to add a check 
that the arity of the function was 1, but it turned out that I 
couldn't do that for mean [2] because it has a similar structure 
to what I posted, and arity relies on isCallable, which depends 
on isFunction.



[1] http://mir-algorithm.libmir.org/mir_math_stat.html#.center
[2] http://mir-algorithm.libmir.org/mir_math_stat.html#.mean



Implicit Function Template Instantiation (IFTI) Question

2020-04-27 Thread jmh530 via Digitalmars-d-learn
When using a template with multiple functions within it, is it 
possible to access the underlying functions directly? I'm not 
sure if I am missing something, but what works when the functions 
are named differently from the headline template doesn't work 
when the functions are named the same.


import std.stdio: writeln;
import std.traits: isFunction;

template foo(T) {
void foo(U)(U x) {
writeln("here0");
}

void foo(U, V)(U x, V y) {
writeln("there0");
}
}

template bar(T) {
void baz(U)(U x) {
writeln("here1");
}

void baz(U, V)(U x, V y) {
writeln("there1");
}
}

void foobar(T)(T x) {}

void main() {
foo!int.foo!(float, double)(1f, 2.0); //Error: template 
foo(U)(U x) does not have property foo
writeln(isFunction!(foo!int)); //prints false, as expected 
b/c not smart enough to look through
writeln(isFunction!(foo!int.foo!float)); //Error: template 
identifier foo is not a member of template 
onlineapp.foo!int.foo(U)(U x)

writeln(isFunction!(foo!int.foo!(float, double))); //ditto

bar!int.baz!(float, double)(1f, 2.0); //prints there1
writeln(isFunction!(bar!int.baz!(float, double))); //prints 
true


writeln(isFunction!(foobar!int)); //prints true
}




Attribute inference within template functions

2020-04-22 Thread jmh530 via Digitalmars-d-learn
I was trying to write a function that has different behavior 
depending on whether it is called from @nogc code or not. 
However, I am noticing that this does not seem possible because 
of the timing of attribute inference.


If I call getFunctionAttributes within foo below, the attributes 
come back as @system, but when foo is called from main they are 
correctly inferred. It is as if the attribute inference happens 
after getFunctionAttributes is called.


Is there any way to get the correct function attributes within a 
template function?


auto foo(T)(T x) {
pragma(msg, __traits(getFunctionAttributes, foo!T));
pragma(msg, __traits(getFunctionAttributes, foo!int));
return x;
}

void main() {
auto x = foo(1);
pragma(msg, __traits(getFunctionAttributes, foo!int));
}


Re: Multiplying transposed matrices in mir

2020-04-20 Thread jmh530 via Digitalmars-d-learn

On Monday, 20 April 2020 at 19:06:53 UTC, p.shkadzko wrote:

[snip]
It is. I was trying to calculate the covariance matrix of some 
dataset X which would be XX^T.


Incorrect. The covariance matrix is calculated with matrix 
multiplication, not element-wise multiplication. For instance, I 
often work with time series data that is T×N where T > N. You 
couldn't do that calculation with element-wise multiplication in 
that case.


Try using Lubeck's covariance function or checking your results 
with the covariance function in other languages.
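
To illustrate the difference: for a non-square 2×3 matrix, the 
element-wise product with its transpose cannot even be formed, but the 
matrix product XX^T is fine. A sketch using lubeck's mtimes (centering 
and scaling, which a covariance estimate would add, are omitted for 
brevity):

```d
/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck : mtimes;

void main()
{
    // 2 x 3 data matrix: a * a.transposed would be a shape mismatch,
    // but the matrix product a.mtimes(a.transposed) is 2 x 2.
    auto a = [1.0, 2, 3,
              4, 5, 6].sliced(2, 3);
    auto g = a.mtimes(a.transposed);
    assert(g.length!0 == 2 && g.length!1 == 2);
    // a covariance estimate would additionally center each column
    // and divide by (number of observations - 1)
}
```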




Re: mir: How to change iterator?

2020-04-20 Thread jmh530 via Digitalmars-d-learn

On Monday, 20 April 2020 at 00:27:40 UTC, 9il wrote:

[snip]

Using two arguments Iterator1, Iterator2 works without 
allocation


/+dub.sdl: dependency "mir-algorithm" version="~>3.7.28" +/
import mir.ndslice;

void foo(Iterator1, Iterator2, SliceKind kind)
(Slice!(Iterator1, 1, kind) x, Slice!(Iterator2, 1, kind) y)
{
import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


Thanks, but I was thinking about the situation where someone else 
has written the function and didn't allow for multiple iterators 
for whatever reason.


Re: mir: How to change iterator?

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Thursday, 16 April 2020 at 20:59:36 UTC, jmh530 wrote:

[snip]

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


This is really what I was looking for (it needs to make an 
allocation, unfortunately)


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y.slice);
}


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:

[snip]

Thanks. I somehow missed the whole point of "a * a.transposed" 
not working because "a.transposed" is not allocated.


a.transposed is just a view of the original matrix. Even when I 
tried to do a raw for loop I ran into issues because modifying 
the original a in any way caused all the calculations to be wrong.


Honestly, it's kind of rare that I would do an element-wise 
multiplication of a matrix and its transpose.


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 19:20:28 UTC, p.shkadzko wrote:

[snip]
well no, "assumeContiguous" reverts the results of the 
"transposed" and it's "a * a".
I would expect it to stay transposed as NumPy does "assert 
np.all(np.ascontiguous(a.T) == a.T)".


Ah, you're right. I use it in other places where it hasn't been 
an issue.


I can do it with an allocation (below) using the built-in syntax, 
but not sure how do-able it is without an allocation (Ilya would 
know better than me).


/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck;

void main() {
auto a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 
2.1].sliced(3, 3);

auto b = a * a.transposed.slice;
}


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 17:55:06 UTC, p.shkadzko wrote:

snip

So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())".


There are elementwise operation on two matrices of the same size 
and then there is matrix multiplication. Two different things. 
You had initially said using an mxn matrix to do the calculation. 
Elementwise multiplication only works for matrices of the same 
size, which is only true in your transpose case when they are 
square. The mtimes function is like dot or @ in python and does 
real matrix multiplication, which works for generic mxn matrices. 
If you want elementwise multiplication of a square matrix and 
its transpose in mir, then I believe you need to call 
assumeContiguous after transposed.


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 17:07:36 UTC, p.shkadzko wrote:

I'd like to calculate XX^T where X is some [m x n] matrix.

// create a 3 x 3 matrix
Slice!(double*, 2LU) a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 
2.1].sliced(3, 3);

auto b = a * a.transposed; // error

Looks like it is not possible due to "incompatible types for 
(a) * (transposed(a)): Slice!(double*, 2LU, 
cast(mir_slice_kind)2) and Slice!(double*, 2LU, 
cast(mir_slice_kind)0)"


I'd like to understand why and how should this operation be 
performed in mir.
Also, what does the last number "0" or "2" means in the type 
definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"?


2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I 
don’t have the greatest understanding of the difference.


Try the mtimes function in lubeck.
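
A small sketch of how the kinds typically arise in practice (my 
reading of them, so treat it as a rough guide): contiguous means dense 
row-major data with no stride bookkeeping, canonical means the 
innermost dimension is still contiguous, and universal means every 
dimension carries its own stride:

```d
/+dub.sdl: dependency "mir-algorithm" version="~>3.7.28" +/
import mir.ndslice;

void main()
{
    auto a = [1.0, 2, 3, 4].sliced(2, 2); // contiguous (default, kind 2)
    auto c = a.canonical;                 // canonical (kind 1)
    auto u = a.transposed;                // universal (kind 0): all strided

    static assert(is(typeof(a) == Slice!(double*, 2, SliceKind.contiguous)));
    static assert(is(typeof(c) == Slice!(double*, 2, SliceKind.canonical)));
    static assert(is(typeof(u) == Slice!(double*, 2, SliceKind.universal)));
}
```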


Re: mir: How to change iterator?

2020-04-16 Thread jmh530 via Digitalmars-d-learn

On Thursday, 16 April 2020 at 19:59:57 UTC, Basile B. wrote:

[snip]

And remove the extra assert() BTW... I don't know why this is 
accepted.


Thanks, I hadn't realized that about approxEqual. I think that 
resolves my near-term issue, though I would need to play around 
with things a little more to be 100% sure.


That being said, I'm still unsure of what I would need to do to 
get the following code to compile.


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


mir: How to change iterator?

2020-04-14 Thread jmh530 via Digitalmars-d-learn
In the code below, I multiply some slice by 5 and then check 
whether it equals another slice. This fails for mir's approxEqual 
because the two are not the same types (yes, I know that isClose 
in std.math works). I was trying to convert the y variable below 
to have the same double* iterator as the term on the right, but 
without much success. I tried std.conv.to and the as, slice, and 
sliced functions in mir.


I figure I am missing something basic, but I can't quite figure 
it out...



/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.math.common: approxEqual;
import mir.ndslice.slice : sliced;

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;

assert(approxEqual(y, [2.5, 2.5].sliced(2)));
}


Re: @safe function with __gshared as default parameter value

2020-04-08 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 8 April 2020 at 19:29:17 UTC, Anonymouse wrote:

[snip]

It works with `ref int` too.


```
__gshared int gshared = 42;

void foo(ref int i = gshared) @safe
{
++i;
}
void main()
{
assert(gshared == 42);
foo();
assert(gshared == 43);
}
```


Well that definitely shouldn't happen. I would file a bug report.


Re: @safe function with __gshared as default parameter value

2020-04-08 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 8 April 2020 at 18:50:16 UTC, data pulverizer wrote:

On Wednesday, 8 April 2020 at 16:53:05 UTC, Anonymouse wrote:

```
import std.stdio;

@safe:

__gshared int gshared = 42;

void foo(int i = gshared)
{
writeln(i);
}

void main()
{
foo();
}
```

This currently works; `foo` is `@safe` and prints the value of 
`gshared`. Changing the call in main to `foo(gshared)` errors.


Should it work, and can I expect it to keep working?


According to the manual it shouldn't work at all 
https://dlang.org/spec/function.html#function-safety where it 
says Safe Functions: "Cannot access __gshared variables.", I 
don't know why calling as `foo()` works.


You still wouldn't be able to manipulate gshared within the 
function. Though it may still be a problem for @safe...


import std.stdio;

__gshared int gshared = 42;

@safe void foo(int i = gshared)
{
i++;
writeln(i);
}

void main()
{
writeln(gshared);
foo();
writeln(gshared);
gshared++;
writeln(gshared);
foo();
writeln(gshared);
}


Re: Blog post about multidimensional arrays in D

2020-03-27 Thread jmh530 via Digitalmars-d-learn

On Friday, 27 March 2020 at 10:57:10 UTC, p.shkadzko wrote:
I decided to write a small blog post about multidimensional 
arrays in D on what I learnt so far. It should serve as a brief 
introduction to Mir slices and how to do basic manipulations 
with them. It started with a small file with snippets for 
personal use but then kind of escalated into an idea of a blog 
post.


However, given the limited about of time I spent in Mir docs 
and their conciseness, it would be great if anyone had a second 
look and tell me what is wrong or missing because I have a 
feeling a lot of things might. It would be a great opportunity 
for me to learn and also improve it or rewrite some parts.


All is here: 
https://github.com/tastyminerals/tasty-blog/blob/master/_posts/2020-03-22-multidimensional_arrays_in_d.md


Thanks for doing this.

A small typo on this line
a.byDim1;

I think there would be a lot of value in doing another blogpost 
to cover some more advanced topics. For instance, mir supports 
three different SliceKinds and the documentation explaining the 
difference has never been very clear. I don't really feel like 
I've ever had a clear understanding of the low-level differences 
between them. The pack/ipack/unpack functions are also pretty 
hard to understand from the documentation.


Re: dub libs from home directory on windows

2020-03-18 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 18 March 2020 at 15:10:52 UTC, Виталий Фадеев wrote:

On Wednesday, 18 March 2020 at 13:52:20 UTC, Abby wrote:


I cannot build my app, so I was wondering if there is some 
clever way to solve this without hardcoded path to my profile 
name.


Thank you very much for your help.


I see, you want without hardcoded path...


I usually use something like ./folder/file.extension to avoid a 
hardcoded path.


I also recommend taking a look at some other dub files to get a 
sense of how others do it.


Re: How to sort 2D Slice along 0 axis in mir.ndslice ?

2020-03-11 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 11 March 2020 at 06:12:55 UTC, 9il wrote:

[snip]

Almost the same, just fixed import for `each` and a bit polished

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice;
import mir.ndslice.sorting;
import mir.algorithm.iteration: each;

void main() {
auto m = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;
m.byDim!0.each!sort;

import std.stdio;
m.byDim!0.each!writeln;
}


Doh on the 'each' import.

Also, I don't think I had used fuse before. That's definitely 
helpful.


Re: How to sort 2D Slice along 0 axis in mir.ndslice ?

2020-03-10 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 10 March 2020 at 23:31:55 UTC, p.shkadzko wrote:

[snip]


Below does the same thing as the numpy version.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice.sorting : sort;
import mir.ndslice.topology : byDim;
import mir.ndslice.slice : sliced;

void main() {
auto m = [1, -1, 3, 2, 0, -2, 3, 1].sliced(2, 4);
m.byDim!0.each!(a => a.sort);
}


Re: Improving dot product for standard multidimensional D arrays

2020-03-03 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 3 March 2020 at 10:25:27 UTC, maarten van damme wrote:
It is difficult to write an efficient matrix-matrix 
multiplication in any language. If you want a fair comparison, 
implement your naive method in Python and compare those timings.

[snip]


And of course there's going to be a big slowdown in using native 
python. Numpy basically calls blas in the background. A naive C 
implementation might be another comparison.
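To make that comparison concrete, a naive baseline in plain D (no mir, no BLAS) might look like the sketch below. This is not the thread's code, just the textbook triple loop; real benchmarks should still defer to a BLAS-backed routine such as lubeck's mtimes.

```d
// Naive triple-loop matrix multiplication over GC-backed jagged arrays.
double[][] matMul(const double[][] a, const double[][] b)
{
    auto n = a.length, m = b[0].length, k = b.length;
    auto c = new double[][](n, m);
    foreach (i; 0 .. n)
        foreach (j; 0 .. m)
        {
            double s = 0;
            foreach (p; 0 .. k)
                s += a[i][p] * b[p][j];
            c[i][j] = s;
        }
    return c;
}

void main()
{
    auto a = [[1.0, 2.0], [3.0, 4.0]];
    auto b = [[5.0, 6.0], [7.0, 8.0]];
    assert(matMul(a, b) == [[19.0, 22.0], [43.0, 50.0]]);
}
```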


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 20:22:55 UTC, p.shkadzko wrote:

[snip]

Interesting growth of processing time. Could it be GC?

+--+-+
| matrixDotProduct | time (sec.) |
+--+-+
| 2x[100 x 100]|0.01 |
| 2x[1000 x 1000]  |2.21 |
| 2x[1500 x 1000]  | 5.6 |
| 2x[1500 x 1500]  |9.28 |
| 2x[2000 x 2000]  |   44.59 |
| 2x[2100 x 2100]  |   55.13 |
+--+-+


Your matrixDotProduct creates a new Matrix and then returns it. 
When you look at the Matrix struct, it is basically building upon 
D's GC-backed slices. So yes, you are using the GC here.


You could try creating the output matrices outside of the 
matrixDotProduct function and then pass them by pointer or 
reference into the function if you want to profile just the 
calculation.


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 18:17:05 UTC, p.shkadzko wrote:

[snip]
I tested @fastmath and @optmath for the toIdx function and that 
didn't change anything.


@optmath is from mir, correct? I believe it implies @fastmath. 
The latest code in mir doesn't have it doing anything else at 
least.


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 13:35:15 UTC, p.shkadzko wrote:

[snip]


Thanks. I don't have time right now to review this thoroughly. My 
recollection is that the dot product of two matrices is actually 
matrix multiplication, correct? It generally makes sense to defer 
to other people's implementation of this. I recommend trying 
lubeck's version against numpy. It uses a blas/lapack 
implementation. mir-glas, I believe, also has a version.


Also, I'm not sure if the fastmath attribute would do anything 
here, but something worth looking into.




Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:

Hello again,

[snip]



What compiler did you use and what flags?


Re: How to sum multidimensional arrays?

2020-02-27 Thread jmh530 via Digitalmars-d-learn

On Thursday, 27 February 2020 at 16:39:15 UTC, 9il wrote:

[snip]
A few performance nitpicks for your example, to be fair when 
benchmarking against the test:

1. Random (the default) is slower than Xorshift.
2. double is twice as large as int and requires twice as much 
memory, so it would be about twice as slow as int for large matrices.


Check the prev. post, we posted at almost the same time ;)
https://forum.dlang.org/post/izoflhyerkiladngy...@forum.dlang.org


Those differences largely came from a lack of attention to 
detail. I didn't notice the Xorshift until after I posted. I used 
double because it's such a force of habit for me to use 
continuous distributions.


I came across this in the documentation.
UniformVariable!T uniformVariable(T = double)(in T a, in T b)
if(isIntegral!T)
and did a double-take until I read the note associated with it in 
the source.


Re: How to sum multidimensional arrays?

2020-02-27 Thread jmh530 via Digitalmars-d-learn

On Thursday, 27 February 2020 at 15:28:01 UTC, p.shkadzko wrote:

On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote:
This works but it does not look very efficient considering we 
flatten and then call array twice. It will get even worse 
with 3D arrays.


And yes, benchmarks show that summing 2D arrays like in the 
example above is significantly slower than in numpy. But that 
is to be expected... I guess.


D -- sum of two 5000 x 6000 2D arrays: 3.4 sec.
numpy -- sum of two 5000 x 6000 2D arrays: 0.0367800739913946 
sec.


What's the performance of mir like?

The code below seems to work without issue.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
+/
import std.stdio : writeln;
import mir.random : Random, unpredictableSeed;
import mir.random.variable: UniformVariable;
import mir.random.algorithm: randomSlice;

auto rndMatrix(T)(T max, in int rows, in int cols)
{
auto gen = Random(unpredictableSeed);
auto rv = UniformVariable!T(0.0, max);
return randomSlice(gen, rv, rows, cols);
}

void main() {
auto m1 = rndMatrix(10.0, 2, 3);
auto m2 = rndMatrix(10.0, 2, 3);
auto m3 = m1 + m2;

writeln(m1);
writeln(m2);
writeln(m3);
}


Re: 2D matrix operation (subtraction)

2020-02-25 Thread jmh530 via Digitalmars-d-learn

On Saturday, 22 February 2020 at 08:29:32 UTC, 9il wrote:

[snip]

Maybe mir.series [1] can work for you.


I had a few other thoughts after looking at septc's solution of 
using

y[0..$, 0] *= 100;
to do the calculation.

1) There is probably scope for an additional select function to 
handle the use case of choosing a specific row/column. For 
instance, what if instead of

y[0..$, 0]
you want
y[0..$, b, 0..$]
for some arbitrary b. I think you would need to do something like
y.select!1(b, b + 1);
which doesn't have the best API, IMO, because you have to repeat 
b. Maybe just an overload for select that only takes one input 
instead of two?


2) The select series of functions does not seem to work as easily 
as array indexing does. When I tried to use the 
select/selectFront functions to do what he is doing, I had to 
something like

auto z = y.selectFront!1(1);
z[] *= 100;
This would adjust y as expected (not z). However, I couldn't 
figure out how to combine these together to one line. For 
instance, I couldn't do

y.selectFront!1(1) *= 100;
or
auto y = x.selectFront!1(1).each!(a => a * 100);
though something like
y[0..$, 0].each!"a *= 100";
works without issue.

It got a little frustrating to combine those with any kind of 
iteration. TBH, I think more than the select functions, the 
functionality I would probably be looking for is more what I was 
doing with byDim!1[0] in the prior post.


I could imagine some very simple version looking like below
auto selectDim(size_t dim, T)(T x, size_t a, size_t b) {
return byDim!dim[a .. b];
}
with a corresponding version
auto selectDim(size_t dim, T)(T x, size_t a) {
return byDim!dim[a .. (a + 1)];
}
This simple version would only work with one dimension, even 
though byDim can handle multiple.


Re: 2D matrix operation (subtraction)

2020-02-21 Thread jmh530 via Digitalmars-d-learn

On Friday, 21 February 2020 at 14:43:37 UTC, jmh530 wrote:

[snip]


Actually, I kind of prefer the relevant line as
x.byDim!1[0].each!"a -= 2";
which makes it a little clearer that you can easily change [0] to 
[1] to apply each to the second column instead.


Re: 2D matrix operation (subtraction)

2020-02-21 Thread jmh530 via Digitalmars-d-learn

On Friday, 21 February 2020 at 11:53:02 UTC, Ali Çehreli wrote:

[snip]
auto byColumn(R)(R range, size_t n) {
  return Column!R(range, n);
}


mir has byDim for something similar (numir also has alongDim).

This is how you would do it:

import mir.ndslice;

void main() {
auto x = [0.0, 1.4, 1.0, 5.2, 2.0, 0.8].sliced(3, 2);
x.byDim!1.front.each!"a -= 2";
}

My recollection is that it is a little bit trickier if you want 
to subtract a vector from each column of a matrix (the sweep 
function in R).
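As a hedged, untested sketch of that sweep-style operation (it assumes mir's slice op-assign `col[] -= v` accepts another slice, and iterates columns via byDim!1):

```d
/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
+/
import mir.ndslice;

void main()
{
    // 3x2 matrix; subtract the vector v from each column.
    auto x = [1.0, 2.0,
              3.0, 4.0,
              5.0, 6.0].sliced(3, 2);
    auto v = [10.0, 20.0, 30.0].sliced(3);

    foreach (col; x.byDim!1)
        col[] -= v;
}
```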


Re: matrix operations

2019-11-27 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 27 November 2019 at 16:16:04 UTC, René Heldmaier 
wrote:

Hi,

I'm looking for some basic matrix/vector operations and other 
numeric stuff.


I spent quite a lot time in reading through the mir 
documentation, but i kinda miss the bigger picture. I'm not a 
Python user btw. (I know C,C++,C#,Matlab..).


I have also looked at the documentation of the lubeck package.

What i have seen right now reminds me of the saying "Real 
programmers can write FORTRAN in any language".


Is there a type to do matrix operations with nice syntax (e.g. 
using * operator for multiplication)?


Matrix/vector operations can be done with lubeck, which itself is 
built upon mir. mtimes is the one for matrix multiplication.


I would not bank on any changes in operator overloading (e.g. 
allowing an operator for matrix multiplication) any time soon.


Re: CI: Why Travis & Circle

2019-11-16 Thread jmh530 via Digitalmars-d-learn
On Saturday, 16 November 2019 at 09:07:45 UTC, Petar Kirov 
[ZombineDev] wrote:

[snip]

Most likely the reason is parallelism. Every CI service offers 
a limited amount of agents that can run in parallel, which 
limits the number of test matrix combinations that you can run 
in a reasonable amount of time. For example, many of the major 
D projects are tested across different OSes and several 
versions of D compilers. Additionally some CIs are faster than 
others. In my experience CircleCI is faster than TravisCI by a 
large margin.




Thank you for the very insightful answer.


Re: CI: Why Travis & Circle

2019-11-14 Thread jmh530 via Digitalmars-d-learn

On Thursday, 14 November 2019 at 17:06:36 UTC, Andre Pany wrote:

[snip]

With the public availability of Github Actions I highly 
recommend it if you have open source project on Github. If is 
free and works well with D and Dub.


Kind regards
Andre


I'm not that familiar with Github Actions, but I should get more 
familiar with it.


But my broader question is why both? Don't they both do largely 
the same things?


I was motivated to ask this by looking at the mir repositories, 
which have both.

https://github.com/libmir/mir


CI: Why Travis & Circle

2019-11-14 Thread jmh530 via Digitalmars-d-learn
I'm curious what the typical motivation is for using both Travis 
CI and Circle CI in a project is.


Thanks.


Re: Running unittests of a module with -betterC

2019-10-30 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 30 October 2019 at 18:45:50 UTC, Jacob Carlborg 
wrote:

On 2019-10-30 16:09, jmh530 wrote:

I feel like this should be added into the compiler so that it 
just works.


This will only run the unit tests in the current modules. The 
standard way of running the unit tests will run the unit tests 
in all modules.


That's a fair point, but the broader point I was trying to make 
was that anything that makes unittests easier to use in betterC 
code is a good thing.


It seems as if there are three underlying issues here that need 
to be addressed to improve the usefulness of unittests in betterC 
code: 1) a way to gather the unittests from all modules (your 
point), 2) fixing -main for betterC, 3) a way to ensure that said 
unittests are called.


The first suggests to me that it would not be such a bad thing to 
generate ModuleInfo when -unittest is called with -betterC or at 
least just the ModuleInfo needed to aggregate the unittests from 
different modules. This functionality might need to be opt-in.


The second is pretty obvious. dmd -main -betterC is inserting a D 
main function instead of a C one. I submitted a bug report

https://issues.dlang.org/show_bug.cgi?id=20340
as this should be pretty easy to fix.

The final point depends on the two above being resolved. If dmd 
-unittest -main -betterC is called, then the compiler would be 
creating the main function so it can insert any code needed to 
run the unittests (assuming issue 1 above is resolved). By 
contrast, if just dmd -unittest -betterC is called and the user 
has created their own main, then it would be like having to run a 
shared module constructor, which is disabled in betterC. Again, I 
would assume that the benefits would outweigh the costs in 
allowing something like this on an opt-in basis, but the 
available options would be to either a) use -main or b) create a 
mixin that generates the needed unittest code so that people can 
insert it at the top of their main function on their own.
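For option (b), a hypothetical sketch (untested, and assuming `__traits(getUnitTests, ...)` keeps working per-module without ModuleInfo under -betterC -unittest) might look like:

```d
// Hypothetical betterC unittest runner: the user supplies their own
// extern(C) main, and this block invokes every unittest declared in
// the current module. mixin(__MODULE__) resolves the module symbol.
version (unittest)
extern(C) int main()
{
    static foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
        test();
    return 0;
}
```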






Re: Running unittests of a module with -betterC

2019-10-30 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 30 October 2019 at 15:09:40 UTC, jmh530 wrote:

[snip]

I feel like this should be added into the compiler so that it 
just works.


Hmm, maybe only when compiled with -main, but I don't think 
there's a version for that.

