Re: How to rebind the default tkd GUI keybinds?

2020-10-17 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 11 October 2020 at 18:51:17 UTC, tastyminerals wrote:
Tk default keys are somewhat different from what we are used to 
for selecting, copying and pasting text. So, any Tk-based 
GUI app starts with writing a rebinding function for the 
"ctrl+a", "ctrl+c", "ctrl+x" and "ctrl+v" keys, at least. I did 
it when writing Tkinter-based apps in Python. Today I am trying 
out tkd and want to do the same. However, it doesn't work :(


For example:

private void selectText(CommandArgs args) {
this._clientId.selectText;
}

this._loginFrame = new Frame(2, ReliefStyle.groove);
this._clientId = new Entry(loginFrame).grid(1, 0);
this._clientId.bind("<Control-a>", &selectText);

It works if I bind a different key combination instead. But how 
do I overwrite the actual "<Control-a>" key in tkd?


So, this is tricky even in Python Tkinter, but possible.
In tkd this is not possible, unfortunately.


Re: How to add an additional config dir in DUB under source?

2020-10-13 Thread tastyminerals via Digitalmars-d-learn

On Tuesday, 13 October 2020 at 05:13:18 UTC, Mike Parker wrote:

On Monday, 12 October 2020 at 22:31:53 UTC, tastyminerals wrote:


[...]


This:

readText("conf.toml");

[...]


Thanks. I remembered that I read about them in Ali's book but 
never actually used them.




How to add an additional config dir in DUB under source?

2020-10-12 Thread tastyminerals via Digitalmars-d-learn

I have the following project structure:

source/
  media/
icon.png
  config/
conf.toml

In order to access "icon.png" without explicitly providing the 
path I added in dub.json


"stringImportPaths": [
"source/media",
"source/config"
]

It works for "icon.png" but doesn't work for "conf.toml". The 
"icon.png" can be accessed and the code below works:


addEntry(new EmbeddedPng!("quit.png"));

but std.file: readText refuses to see "conf.toml":

readText("conf.toml");

file.d(371): conf.toml: No such file or directory

I wonder why and what am I doing wrong?
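A likely explanation (my assumption, for illustration): 
"stringImportPaths" only affects the compile-time import("...") 
expression, which is also how tkd's EmbeddedPng works, while 
std.file.readText resolves paths at run time relative to the 
current working directory. A minimal sketch:

```d
// Sketch (assumption): stringImportPaths feed the compile-time
// import("...") expression, not run-time file IO like readText.
import std.stdio : writeln;

void main()
{
    // Embedded into the binary at compile time; the file name is
    // resolved against the paths listed in "stringImportPaths":
    enum conf = import("conf.toml");
    writeln(conf);
}
```

Run-time readText would need the real path (e.g. relative to the 
working directory), regardless of any string import settings.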



Re: vibe.d / experience / feedback

2020-10-11 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 11 October 2020 at 11:56:29 UTC, Robert M. Münch wrote:
On 6 Oct 2020 at 10:07:56 CEST, "ddcovery" 
 wrote:


I found myself in a similar situation recently, and I can't 
help but ask you: What technology do you use regularly?


Hi, well we use a couple of different things. Scripting 
languages, C, Lua, ..



What drives/draws you to try dlang/vibe.d?


A prototype we wanted to build while evaluating D as our next 
tech stack foundation.


Do you have other alternatives to dlang/vibe.d for your 
project?


Yes. We are currently looking into Go as well.

In my case we usually work in Node+js/ts (previously 
Scala+Play) and I wanted to jump to something really 
performant for a new project without losing code 
expressiveness and development speed. Dlang seemed a good 
alternative (I like it much more than Go or Rust).


Well, for us it's getting more and more clear that the decision 
about what to use in the future will be based on fewer and fewer 
technical aspects.


The interesting thing about Go is that their main focus is 
thinking from an enterprise perspective, not only a technical 
one. So, their focus is getting stuff done, keeping 
maintainability in big, constantly changing teams and stripping 
away everything that reduces productivity in such an 
environment... I don't know of any other language which puts all 
these non-technical aspects at the top of the agenda.


Viele Grüsse.


And I feel like you guys will just pick Go because it will get 
stuff done.



I am in a philosophical mood today so here it goes...

When I first started learning about the D ecosystem, vibe 
frequently popped up as one of the popular frameworks available 
for the language AND also a reason for people to jump in and try 
out D. However, as time goes by, I also pick up many complaints 
about vibe, its performance and ease of use compared to 
competitors. This post just solidifies that impression. Bad 
documentation is the worst thing that can happen to a project 
which gets promoted as one of the gems of the language 
ecosystem, and it actually hurts the language's image much more 
than it does good. Sigh... I will never advise vibe to anyone 
because I know that better alternatives exist. People will use 
Go, Python, Ruby, Rust, whatever has better docs, to get things 
running fast and not risk wasting time.


Sadly, this is how some languages grow and some don't. And it's 
not all about corporate support, hype, GC or random luck; it's 
about cases like the above.


How to rebind the default tkd GUI keybinds?

2020-10-11 Thread tastyminerals via Digitalmars-d-learn
Tk default keys are somewhat different from what we are used to 
for selecting, copying and pasting text. So, any Tk-based GUI 
app starts with writing a rebinding function for the "ctrl+a", 
"ctrl+c", "ctrl+x" and "ctrl+v" keys, at least. I did it when 
writing Tkinter-based apps in Python. Today I am trying out tkd 
and want to do the same. However, it doesn't work :(


For example:

private void selectText(CommandArgs args) {
this._clientId.selectText;
}

this._loginFrame = new Frame(2, ReliefStyle.groove);
this._clientId = new Entry(loginFrame).grid(1, 0);
this._clientId.bind("<Control-a>", &selectText);

It works if I bind a different key combination instead. But how 
do I overwrite the actual "<Control-a>" key in tkd?





Re: How to hide a function return type in order to wrap several functions into an associated array?

2020-10-03 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 27 September 2020 at 20:03:21 UTC, Paul Backus wrote:
On Sunday, 27 September 2020 at 18:54:11 UTC, tastyminerals 
wrote:

[...]


You can use an Algebraic [1] or SumType [2] for this:

alias Feature = SumType!(ulong, double, bool);

Feature numberOfPunctChars(string text)
{
// ...
return Feature(cnt);
}

Feature ratioOfDigitsToChars(string text)
{
// ...
return Feature(ratio);
}

Feature hasUnbalancedParens(string text)
{
// ...
return Feature(!isBalanced);
}

[1] 
http://dpldocs.info/experimental-docs/std.variant.Algebraic.html

[2] https://code.dlang.org/packages/sumtype


Nice, thanks. Never used it, shall take a look.
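To connect this back to the original question: once all feature 
functions share the SumType return type, they fit into one 
associative array. A sketch under those assumptions (std.sumtype 
in newer compilers, the dub "sumtype" package linked above 
otherwise; only one hypothetical feature function shown):

```d
// Sketch: a common SumType return type gives all feature functions
// the same signature, so they can live in one associative array.
import std.sumtype; // older compilers: import sumtype; (dub package)

alias Feature = SumType!(ulong, double, bool);

Feature numberOfPunctChars(string text)
{
    import std.algorithm : count, filter;
    import std.uni : isPunctuation;
    return Feature(cast(ulong) text.filter!(c => c.isPunctuation).count);
}

void main()
{
    // Every function now has the type: Feature function(string)
    Feature function(string)[string] allFuns = [
        "numberOfPunctChars": &numberOfPunctChars,
        // ... the other feature functions go here ...
    ];
    auto result = allFuns["numberOfPunctChars"]("hello, world!");
}
```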


Re: How to hide a function return type in order to wrap several functions into an associated array?

2020-10-03 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 27 September 2020 at 22:55:14 UTC, Ali Çehreli wrote:

On 9/27/20 11:54 AM, tastyminerals wrote:

> [...]

[...]


Thank you. Quite an inspirational example with delegates.


How to hide a function return type in order to wrap several functions into an associated array?

2020-09-27 Thread tastyminerals via Digitalmars-d-learn
This is rather a generic implementation question not necessarily 
related to D but I'd like to get some opinions.
I have a collection of functions that all have the same input, a 
string. The output however is different and depending on what the 
function does it can be ulong, double or bool. The problem is 
that for each line of text I'd like to apply all these functions, 
collect the results and write them into some file. For example,


auto numberOfPunctChars(string text)
{
const ulong cnt = text.filter!(c => c.isPunctuation).count;
return Feature!ulong("numberOfPunctChars", cnt);
}


auto ratioOfDigitsToChars(string text)
{
const double digits = numberOfDigitChars(text).val.to!double;
const double alphas = numberOfAlphaChars(text).val.to!double;
const double ratio = digits / (alphas > 0 ? alphas : digits);
return Feature!double("ratioOfDigitsToChars", ratio);
}

auto hasUnbalancedParens(string text)
{
const bool isBalanced = balancedParens(text, '(', ')') && 
balancedParens(text, '[', ']');

return Feature!bool("hasUnbalancedParens", !isBalanced);
}

As you can see, I created a templated Feature struct. This does 
not help much because I also want to create an associative array 
of ["functionName": <function>]. How can I define such an array 
when "Feature!T function(string)[string] allFuns" requires 
defining T beforehand and using auto is not possible?


I was thinking of having a Feature struct with 3 fields of 
ulong, double and bool members, but then each Feature init would 
look ugly imho: "Feature("name", null, 1.5, null)". There should 
be another way.




Re: How to use std.net.curl with specific curl query?

2020-09-03 Thread tastyminerals via Digitalmars-d-learn
On Thursday, 3 September 2020 at 11:14:14 UTC, tastyminerals 
wrote:
I have a specific curl query that I want to use in a D script 
via std.net.curl.


Here is the query:

curl -v -X POST \
--data-urlencode "username=u...@gmail.net" \
--data-urlencode "password=12345" \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Accept: application/json" \
-u "client_name:CLIENT_PASS" \
"https://some.i.net/oauth/token?grant_type=password"

The std.net.curl post documentation says that it needs a URL 
and a key:value map as arguments. However, what should be the 
key and what should be the value given the above query? There 
are two "--data-urlencode" parameters so the map cannot have 
two identical keys. Unfortunately the documentation is lacking 
both information and examples. Can somebody help me out here 
please?


In addition, the current "post" ddoc example fails to run 
throwing "std.net.curl.CurlException@std/net/curl.d(4402): 
Couldn't resolve host name on handle 55DF372ABBC0"


Figured it out, just needed to read further docs.

auto http = HTTP("https://some.i.net/oauth/token?grant_type=password");

auto data = "username=u...@gmail.net&password=12345";
http.setPostData(data, "application/x-www-form-urlencoded");
http.addRequestHeader("Content-Type", "application/x-www-form-urlencoded");
http.addRequestHeader("Accept", "application/json");
http.setAuthentication("client_name", "CLIENT_PASS");
http.perform();
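To actually read the token response, one can hook onReceive 
before calling perform. A sketch using the same placeholder 
endpoint and credentials as above (untested against a real 
server):

```d
import std.net.curl : HTTP;
import std.stdio : writeln;

void main()
{
    auto http = HTTP("https://some.i.net/oauth/token?grant_type=password");
    http.setPostData("username=u...@gmail.net&password=12345",
            "application/x-www-form-urlencoded");
    http.addRequestHeader("Accept", "application/json");
    http.setAuthentication("client_name", "CLIENT_PASS");

    // Collect the response body; onReceive is called with raw chunks
    // and must return how many bytes were consumed.
    char[] response;
    http.onReceive = (ubyte[] data) {
        response ~= cast(char[]) data;
        return data.length;
    };
    http.perform();
    writeln(response); // the JSON token payload, if the call succeeds
}
```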



How to use std.net.curl with specific curl query?

2020-09-03 Thread tastyminerals via Digitalmars-d-learn
I have a specific curl query that I want to use in a D script via 
std.net.curl.


Here is the query:

curl -v -X POST \
--data-urlencode "username=u...@gmail.net" \
--data-urlencode "password=12345" \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Accept: application/json" \
-u "client_name:CLIENT_PASS" \
"https://some.i.net/oauth/token?grant_type=password"

The std.net.curl post documentation says that it needs a URL and 
a key:value map as arguments. However, what should be the key and 
what should be the value given the above query? There are two 
"--data-urlencode" parameters so the map cannot have two 
identical keys. Unfortunately the documentation is lacking both 
information and examples. Can somebody help me out here please?


In addition, the current "post" ddoc example fails to run 
throwing "std.net.curl.CurlException@std/net/curl.d(4402): 
Couldn't resolve host name on handle 55DF372ABBC0"


Re: 2-D array initialization

2020-08-04 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 2 August 2020 at 19:19:51 UTC, Andy Balba wrote:

On Sunday, 2 August 2020 at 06:37:06 UTC, tastyminerals wrote:

You haven't said anything about efficiency, but if you care and 
your arrays are rather big, you'd better go with 
https://github.com/libmir/mir-algorithm as mentioned above. It 
might be a little finicky at the start, but this post: 
https://tastyminerals.github.io/tasty-blog/dlang/2020/03/22/multidimensional_arrays_in_d.html should get you up to speed.



Keep in mind that std.array.staticArray is not efficient for  
large arrays.


If you want to stick to standard D, I would not initialize a 2D 
array because it is just cumbersome, but rather use a 1D array 
and transform it into a 2D view on demand via the ".chunks" 
method. Here is an example.


import std.range;
import std.array;

void main() {
int[] arr = 20.iota.array;
auto arr2dView = arr.chunks(5);
}

Should give you

┌  ┐
│ 0  1  2  3  4│
│ 5  6  7  8  9│
│10 11 12 13 14│
│15 16 17 18 19│
└  ┘

whenever you need to access its elements as arr.chunks(5)[1][1 
.. 3] --> [6, 7].


@ tastyminerals  Thanks for your help on this. These comments, 
combined with the others, are making my climb of the D learning 
curve much quicker.


I'm not a GitHub fan, but I like the mir functions; and it 
looks like I have to download mir before using it.
mir has quite a few .d files. Is there a quick way to download 
it?


mir is a D package (akin to a Python pip package). You can 
easily include it in your program by adding the following code 
at the top of your file:


/+ dub.sdl:
name "my_script"
dependency "mir-algorithm" version="~>3.9.12"
+/

And then just run your script with "dub my_script.d"; dub will 
fetch the necessary dependencies, compile and run the file. 
However, it will not generate a compiled binary of your 
my_script.d; for that, you'd better set up a dub project. Here, 
see how to do it: 
https://tastyminerals.github.io/tasty-blog/dlang/2020/03/01/how_to_use_external_libraries_in_d_project.html
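For illustration, a complete single-file script with the 
embedded dub.sdl header described above (a sketch; assumes dub 
can fetch mir-algorithm):

```d
/+ dub.sdl:
name "my_script"
dependency "mir-algorithm" version="~>3.9.12"
+/
// Run with: dub my_script.d
import std.stdio : writeln;
import mir.ndslice : iota;

void main()
{
    auto m = iota(2, 3); // lazy 2x3 matrix filled with 0 .. 5
    writeln(m);
}
```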




Re: 2-D array initialization

2020-08-02 Thread tastyminerals via Digitalmars-d-learn

On Sunday, 2 August 2020 at 02:00:46 UTC, Andy Balba wrote:

On Saturday, 1 August 2020 at 22:00:43 UTC, Ali Çehreli wrote:

On 8/1/20 12:57 PM, Andy Balba wrote:

> On Saturday, 1 August 2020 at 00:08:33 UTC, MoonlightSentinel
wrote:
>> On Friday, 31 July 2020 at 23:42:45 UTC, Andy Balba wrote:
>>> How does one initialize c in D ?
>>
>> ubyte[3][4] c = [ [5, 5, 5], [15, 15,15], [25, 25,25], [35,
35,35]  ];

> I'm a D newbie. moving over from C/C++, and I'm really
finding it hard
> to adjusting to D syntax, which I find somewhat cryptic
compared to C/C++.

That's surprising to me. I came from C++03 years ago but 
everything in D was much better for me. :)

I wanted to respond to your question yesterday and started 
typing some code but then I decided to ask first: Do you 
really need a static array? Otherwise, the following is a 
quite usable 2D array:


  ubyte[][] c = [ [5, 5, 5], [15, 15,15], [25, 25,25], [35, 
35,35]  ];


However, that's quite different from a ubyte[3][4] static 
array because 'c' above can be represented like the following 
graph in memory. (Sorry if this is already known to you.)


c.ptr --> | .ptr | .ptr | .ptr | .ptr |
             |      |      |      |
             |      |      |      +--> | 35 | 35 | 35 |
            etc.   etc.   +--> | 25 | 25 | 25 |

In other words, each element is reached through 2 dereferences 
in memory.


On the other hand, a static array consists of nothing but the 
elements in memory. So, a ubyte[3][4] would be the following 
elements in memory:


  | 5 | 5 | ... | 35 | 35 |

One big difference is that static arrays are value types, 
meaning that all elements are copied e.g. as arguments during 
function calls. On the other hand, slices are copied just as 
fat pointers (ptr+length pair), hence have reference semantics.


Here are some ways of initializing a static array. This one is 
the most natural one:


  ubyte[3][4] c = [ [5, 5, 5], [15, 15,15], [25, 25,25], [35, 
35,35]  ];


Yes, that works! :) Why did you need to cast to begin with? One 
reason may be that you had a value that could not fit in a 
ubyte, so the compiler did not agree. (?)


This one casts a 1D array as the desired type:

  ubyte[3][4] c = *cast(ubyte[3][4]*)(cast(ubyte[])[ 5, 5, 5, 
15, 15, 15, 25, 25, 25, 35, 35, 35 ]).ptr;


The inner cast is required because 5 etc. are ints by-default.

There is std.array.staticArray as well but I haven't used it.

Ali


Although not detailed in my original question, in my actual app
I have array ubyte [1000][3] Big which consists of research 
data I obtained,
 and from which I want to randomly select 4 observations to 
construct

ubyte c[ ][ ].

i.e. construct c= [ Big[r1][3], Big[r2][3], Big[r3][3], 
Big[r4][3] ]

where r1, r2, r3 and r4 are 4 random integers in 0..1001

Being a D newbie, my naive way of doing this was to declare c 
using:
ubyte[3][4] c= [ Big[r1][3], Big[r2][3], Big[r3][3], Big[r4][3] 
]


Obviously, I want to learn how to this the smart D way, but I 
not smart enough at this point.
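A minimal sketch of the random-row selection described above 
(untested; it assumes the data is laid out as 1000 observations 
of 3 values, which in D's right-to-left type syntax is 
ubyte[3][1000], and the name Big is taken from the question):

```d
// Pick 4 random observations (rows) from Big into a ubyte[3][4].
import std.random : uniform;

void main()
{
    ubyte[3][1000] Big; // research data: 1000 observations of 3 values
    ubyte[3][4] c;
    foreach (i; 0 .. 4)
    {
        const r = uniform(0, 1000); // random index in [0, 1000)
        c[i] = Big[r];              // static arrays copy by value
    }
}
```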


You haven't said anything about efficiency, but if you care and 
your arrays are rather big, you'd better go with 
https://github.com/libmir/mir-algorithm as mentioned above. It 
might be a little finicky at the start, but this post: 
https://tastyminerals.github.io/tasty-blog/dlang/2020/03/22/multidimensional_arrays_in_d.html should get you up to speed.



Keep in mind that std.array.staticArray is not efficient for 
large arrays.


If you want to stick to standard D, I would not initialize a 2D 
array because it is just cumbersome, but rather use a 1D array 
and transform it into a 2D view on demand via the ".chunks" 
method. Here is an example.


import std.range;
import std.array;

void main() {
int[] arr = 20.iota.array;
auto arr2dView = arr.chunks(5);
}

Should give you

┌  ┐
│ 0  1  2  3  4│
│ 5  6  7  8  9│
│10 11 12 13 14│
│15 16 17 18 19│
└  ┘

whenever you need to access its elements as arr.chunks(5)[1][1 .. 
3] --> [6, 7].




Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:51:31 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 07:34:59 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

[...]


Good to know. So, it's fine to use it with sum!"fast" but 
better avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. Mir 
algorithms are more precise by default than the algorithms you 
have provided.


Right. Is this why standardDeviation is significantly slower?


Yes. It allows you to pick a summation option; you can try 
options other than the default in benchmarks.


Indeed, I played around with VarianceAlgo and Summation options, 
and they impact the end result a lot.


ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, 
Summation.appropriate);

std of [300, 300] matrix 0.375903
std of [60, 60] matrix 0.0156448
std of [600, 600] matrix 1.54429
std of [800, 800] matrix 3.03954

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, 
Summation.appropriate);

std of [300, 300] matrix 1.12404
std of [60, 60] matrix 0.041968
std of [600, 600] matrix 5.01617
std of [800, 800] matrix 8.64363


The Summation.fast option behaves strangely though. I wonder 
what happened here?


ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, 
Summation.fast);

std of [300, 300] matrix 1e-06
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.2e-06
std of [800, 800] matrix 9e-07

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, 
Summation.fast);

std of [300, 300] matrix 9e-07
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.1e-06
std of [800, 800] matrix 1e-06


Re: Contributing to D wiki

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 16:04:56 UTC, aberba wrote:
So I'm looking to make changes to the D wiki but I'm not sure 
who to talk to about such changes.


Currently: Move all other IDEs low-quality down (maybe to 
Others) and focus on just the few that really works (IntelliJ, 
Visual Studio Code and Visual Studio). Instead of many options 
that don't work, why not focus on they few that works?


The D wiki is badly outdated. This is not a fact but a gut 
feeling after reading through some of its pages. I was wondering 
who owns it myself but never actually dared to just go and 
update it. I just had a feeling it's abandoned. On the other 
hand, why would it be?


Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:
On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals 
wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast" but 
better avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. Mir 
algorithms are more precise by default than the algorithms you 
have provided.


Right. Is this why standardDeviation is significantly slower?




Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or may 
not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast" but better 
avoid it for general purposes.




Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 19:36:21 UTC, jmh530 wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

  [...]


It would be helpful to provide a link.

You should only need one accumulator for mean and centered sum 
of squares. See the python example under the Welford example

https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
This may have broken optimization somehow.

variance and standardDeviation were recently added to 
mir.math.stat. They have the option to switch between Welford's 
algorithm and the others. What you call the naive algorithm is 
VarianceAlgo.twoPass, and the Welford algorithm can be toggled 
with VarianceAlgo.online, which is the default option. It also 
would be interesting if you re-did the analysis with the 
built-in mir functions.


There are some other small differences between your 
implementation and the one in mir, beyond the issue discussed 
above. You take the absolute value before the square root and 
force the use of sum!"fast". Another difference is 
VarianceAlgo.online in mir is using a precise calculation of 
the mean rather than the fast update that Welford uses. This 
may have a modest impact on performance, but should provide 
more accurate results.


Ok, the wiki page looks more informative, I shall look into my 
Welford implementation.


I've just used standardDeviation from Mir and it showed even 
worse results than both of the examples above.


Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)



D Mir: standard deviation speed

2020-07-14 Thread tastyminerals via Digitalmars-d-learn
I am trying to implement standard deviation calculation in Mir 
for benchmark purposes.
I have two implementations. One is the straightforward std = 
sqrt(mean(abs(x - x.mean())**2)) and the other follows Welford's 
algorithm for computing variance (as described here: 
https://www.johndcook.com/blog/standard_deviation/).


However, although the first implementation should be less 
efficient / slower, the benchmarking results show a startling 
difference in its favour. I'd like to understand if I am doing 
something wrong and would appreciate some explanation.


# Naive std
import std.math : abs;
import mir.ndslice;
import mir.math.common : pow, sqrt, fastmath;
import mir.math.sum : sum;
import mir.math.stat : mean;

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)
{
pragma(inline, false);
if (flatMatrix.empty)
return 0.0;
double n = cast(double) flatMatrix.length;
double mu = flatMatrix.mean;
return (flatMatrix.map!(a => (a - mu).abs ^^ 2).sum!"fast" / 
n).sqrt;

}


# std with Welford's variance
@fastmath double sdWelford(T)(Slice!(T*, 1) flatMatrix)
{
pragma(inline, false);
if (flatMatrix.empty)
return 0.0;

double m0 = 0.0;
double m1 = 0.0;
double s0 = 0.0;
double s1 = 0.0;
double n = 0.0;
foreach (x; flatMatrix.field)
{
++n;
m1 = m0 + (x - m0) / n;
s1 = s0 + (x - m0) * (x - m1);
m0 = m1;
s0 = s1;
}
// switch to n - 1 for sample variance
return (s1 / n).sqrt;
}

Benchmarking:

Naive std (1k loops):
  std of [60, 60] matrix 0.001727
  std of [300, 300] matrix 0.043452
  std of [600, 600] matrix 0.182177
  std of [800, 800] matrix 0.345367

std with Welford's variance (1k loops):
  std of [60, 60] matrix 0.0225476
  std of [300, 300] matrix 0.534528
  std of [600, 600] matrix 2.0714
  std of [800, 800] matrix 3.60142



Re: Why is this allowed

2020-06-30 Thread tastyminerals via Digitalmars-d-learn

On Tuesday, 30 June 2020 at 16:22:57 UTC, JN wrote:
Spent some time debugging because I didn't notice it at first, 
essentially something like this:


int[3] foo = [1, 2, 3];
foo = 5;
writeln(foo);   // 5, 5, 5

Why does such code compile? I don't think this should be 
permitted, because it's easy to make a mistake (when you wanted 
foo[index] but forgot the []). If someone wants to assign a 
value to every element they could do foo[] = 5; instead which 
is explicit.


Ouch, that is very nasty. Thanks for posting. This is a good 
example of D gotchas.


Why infinite loops are faster than finite loops?

2020-06-20 Thread tastyminerals via Digitalmars-d-learn
I am not sure whether this is a question about D or a more 
general one. I have watched this nice presentation "Speed Is 
Found In The Minds of People" by Andrei: 
https://www.youtube.com/watch?v=FJJTYQYB1JQ&t=2596 and at 43:20 
he says that "push_heap" is slow because of structured, finite 
for loops (throughout the presentation Andrei shows algorithm 
examples with infinite loops). I wonder why that is? Is it 
because the finite loop needs to keep track of the number of 
iterations it performs? Wouldn't the compiler optimize it better 
than the infinite one, because it knows the number of iterations 
the for loop needs?


Using autowrap to build a D library for Python

2020-06-16 Thread tastyminerals via Digitalmars-d-learn

I am trying out autowrap to build a D library for Python.

After I ran "dub build" which builds mylib.so and try to import 
it in Python interpreter, I get:


"ImportError: dynamic module does not define module export 
function (PyInit_libautowrap_mylib)"


Which means the library was built for some other Python 
version, which is strange because in dub.sdl I specifically set


subConfiguration "autowrap:python" "python36"

And I attempt to import libautowrap_mylib from the Python 3.6.9 
interpreter.


The dub.sdl contains the following deps:

dependency "autowrap:python" version="~>0.5.2"
dependency "mylib" version="~>1.0.0"
subConfiguration "autowrap:python" "python36"
targetType "dynamicLibrary"






Re: Looking for a Code Review of a Bioinformatics POC

2020-06-11 Thread tastyminerals via Digitalmars-d-learn

On Thursday, 11 June 2020 at 21:54:31 UTC, duck_tape wrote:

On Thursday, 11 June 2020 at 20:24:37 UTC, tastyminerals wrote:
Mir Slices instead of standard D arrays are faster. Although 
looking at your code I don't see where you could plug them in. 
Just keep it in mind.


Thanks for taking a look! What is it about Mir Slices that 
makes them faster? I hadn't seen the Mir package before but it 
looks very useful and intriguing.


Mir is fine-tuned for LLVM, pointer magic and SIMD optimizations.


Re: Looking for a Code Review of a Bioinformatics POC

2020-06-11 Thread tastyminerals via Digitalmars-d-learn

On Thursday, 11 June 2020 at 16:13:34 UTC, duck_tape wrote:
Hi! I'm new to dlang but loving it so far! One of my favorite 
first things to implement in a new language is an interval 
library. In this case I want to submit to a benchmark repo: 
https://github.com/lh3/biofast


If anyone is willing to take a look and give some feedback I'd 
be very appreciative! Specifically if you have any performance 
improvement ideas: https://github.com/sstadick/dgranges/pull/1


Currently my D version is a few seconds slower than the Crystal 
version, putting it very solid in third place overall. I'm not 
really sure where it's falling behind crystal since `-release` 
removes bounds checking. I have not looked at the assembly 
between the two, but I suspect that Crystal inlines the 
callback and D does not.


I also think there is room for improvement in the IO, as I'm 
just using the defaults.


Add to your dub.json the following:

"""
"buildTypes": {
    "release": {
        "buildOptions": [
            "releaseMode",
            "inline",
            "optimize"
        ],
        "dflags": [
            "-boundscheck=off"
        ]
    }
}
"""

dub build --compiler=ldc2 --build=release

Mir Slices instead of standard D arrays are faster. Although 
looking at your code I don't see where you could plug them in. 
Just keep it in mind.


Re: Looking for a Code Review of a Bioinformatics POC

2020-06-11 Thread tastyminerals via Digitalmars-d-learn

On Thursday, 11 June 2020 at 16:13:34 UTC, duck_tape wrote:
Hi! I'm new to dlang but loving it so far! One of my favorite 
first things to implement in a new language is an interval 
library. In this case I want to submit to a benchmark repo: 
https://github.com/lh3/biofast


If anyone is willing to take a look and give some feedback I'd 
be very appreciative! Specifically if you have any performance 
improvement ideas: https://github.com/sstadick/dgranges/pull/1


Currently my D version is a few seconds slower than the Crystal 
version, putting it very solid in third place overall. I'm not 
really sure where it's falling behind crystal since `-release` 
removes bounds checking. I have not looked at the assembly 
between the two, but I suspect that Crystal inlines the 
callback and D does not.


I also think there is room for improvement in the IO, as I'm 
just using the defaults.



Move as much code as possible to compile time.
Do not allocate inside loops.
Keep GC collections away from performance-critical parts with 
the GC.disable switch.


Also dflags-ldc "-mcpu=native" in dub.json might give you some 
edge.
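The GC.disable suggestion above can be sketched like this 
(a minimal illustration, not taken from the benchmark code):

```d
// Keep GC collections out of a hot loop, re-enabling them afterwards.
import core.memory : GC;

void main()
{
    GC.disable();             // no collections can run in the hot loop
    scope (exit) GC.enable(); // re-enable even if an exception is thrown

    foreach (i; 0 .. 1_000_000)
    {
        // ... performance-critical work; allocations are still
        // possible here, they just won't trigger a collection ...
    }

    GC.collect(); // optionally collect once the hot path is done
}
```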





Re: Metaprogramming with D

2020-06-09 Thread tastyminerals via Digitalmars-d-learn

On Monday, 8 June 2020 at 14:41:55 UTC, Jan Hönig wrote:

On Sunday, 7 June 2020 at 00:45:37 UTC, Ali Çehreli wrote:


  dmd -mixin=<filename> ...


thanks for the tip!




  writeln(q{
  void foo() {
  }
});


What is the name of this `q` thing?
How do I find it? Are there any recent tutorials on it?


Ali's online book consolidates a lot of D language knowledge like 
this. I forgot about token string literals myself but then 
remembered it was in his book.
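For reference, the q{...} syntax is a token string literal; it 
is typically combined with mixin for compile-time code 
generation. A small sketch:

```d
// Token strings (q{...}) hold D tokens as a string; feeding one to
// mixin generates the code at compile time.
import std.stdio : writeln;

enum code = q{
    int twice(int x) { return 2 * x; }
};

mixin(code); // declares twice() at module scope

void main()
{
    writeln(twice(21)); // 42
}
```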