Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:51:31 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 07:34:59 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

[...]


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. 
Mir algorithms are more precise by default than the 
algorithms you have provided.


Right. Is this why standardDeviation is significantly slower?


Yes. It allows you to pick a summation option; you can try 
others than the default in benchmarks.


Indeed, I played around with VarianceAlgo and Summation options, 
and they impact the end result a lot.


ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, 
Summation.appropriate);

std of [300, 300] matrix 0.375903
std of [60, 60] matrix 0.0156448
std of [600, 600] matrix 1.54429
std of [800, 800] matrix 3.03954

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, 
Summation.appropriate);

std of [300, 300] matrix 1.12404
std of [60, 60] matrix 0.041968
std of [600, 600] matrix 5.01617
std of [800, 800] matrix 8.64363


The Summation.fast option behaves strangely, though. I wonder 
what happened here?


ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, 
Summation.fast);

std of [300, 300] matrix 1e-06
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.2e-06
std of [800, 800] matrix 9e-07

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, 
Summation.fast);

std of [300, 300] matrix 9e-07
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.1e-06
std of [800, 800] matrix 1e-06


Re: getopt: How does arraySep work?

2020-07-15 Thread Jon Degenhardt via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:12:35 UTC, Andre Pany wrote:

On Tuesday, 14 July 2020 at 15:48:59 UTC, Andre Pany wrote:
On Tuesday, 14 July 2020 at 14:33:47 UTC, Steven Schveighoffer 
wrote:

On 7/14/20 10:22 AM, Steven Schveighoffer wrote:
The documentation needs updating, it should say "parameters 
are added sequentially" or something like that, instead of 
"separation by whitespace".


https://github.com/dlang/phobos/pull/7557

-Steve


Thanks for the answer and the PR. Unfortunately my goal here 
is to simulate a partner tool written in C/C++ which supports 
this behavior. I will also create an enhancement issue for 
supporting this behavior.


Kind regards
Andre


Enhancement issue:
https://issues.dlang.org/show_bug.cgi?id=21045

Kind regards
André


An enhancement is likely to hit some corner cases involving 
list termination that require choices that are not fully 
generic: any time a legal list value looks like a legal option, 
there is ambiguity. Perhaps the most important case is 
single-digit numeric options like '-1', '-2'. These are legal 
short-form options, and there are programs that use them. They 
are also fairly common numeric values to include in 
command-line inputs.
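The ambiguity described above can be sketched in a few lines. This is an illustrative toy (Python for brevity, not getopt's actual logic); the function name and rule are hypothetical:

```python
# Sketch: deciding whether "-1" is an option or a list value.
# If single-digit short options are allowed, "-1" is ambiguous;
# disallowing them (the choice described above) resolves it.
def classify(arg, allow_digit_options):
    looks_like_option = arg.startswith("-") and len(arg) > 1
    is_negative_number = looks_like_option and arg[1:].isdigit()
    if is_negative_number and not allow_digit_options:
        return "value"  # treat "-1" as a numeric list value
    return "option" if looks_like_option else "value"

print(classify("-1", allow_digit_options=True))   # option
print(classify("-1", allow_digit_options=False))  # value
```

The same token is classified differently depending on the policy, which is exactly why the choice is not fully generic.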


I ran into a couple of cases like this with a getopt cover I 
wrote. The cover supports run-time processing of command 
arguments in the order entered on the command line rather than 
the compile-time getopt() call order. Since it was only for my 
stuff, not Phobos, it was an easy choice: disallow single-digit 
short options. But a Phobos enhancement might make other choices.


IIRC, a characteristic of the current getopt implementation is 
that it does not have run-time knowledge of all the valid 
options, so the set of ambiguous entries is larger than just the 
limited set of options specified in the program: essentially, 
anything that looks syntactically like an option.


Doesn't mean an enhancement can't be built, just that there 
might be some constraints to be aware of.


--Jon




Re: misc questions about a DUB package

2020-07-15 Thread DanielG via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:49:09 UTC, Jacob Carlborg wrote:

The plus sign and anything after is ignored by Dub.


Good to know, thank you.





Re: Contributing to D wiki

2020-07-15 Thread H. S. Teoh via Digitalmars-d-learn
On Wed, Jul 15, 2020 at 09:27:22PM +, tastyminerals via Digitalmars-d-learn 
wrote:
[...]
> D wiki is badly outdated. This is not a fact but a gut feeling after
> reading through some of its pages. I was wondering who's owning it
> myself but never actually dared to just go and update.

Why not?  It's a *wiki*.  Wikis are intended for the user community
(i.e. you & me) to go and edit.  That's the whole point of a wiki.  If
that wasn't the intention we wouldn't have set up a wiki in the first
place.


> I just had a feeling it's abandoned.  On the other hand why would it
> be?

It's only as abandoned as this community abandons it. If people 
start updating it and fixing things when they see them, then it 
will not be abandoned.


T

-- 
There are 10 kinds of people in the world: those who can count in binary, and 
those who can't.


Re: Contributing to D wiki

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 16:04:56 UTC, aberba wrote:
So I'm looking to make changes to the D wiki but I'm not sure 
who to talk to about such changes.


Currently: move the low-quality IDE entries down (maybe to 
Others) and focus on just the few that really work (IntelliJ, 
Visual Studio Code and Visual Studio). Instead of many options 
that don't work, why not focus on the few that work?


D wiki is badly outdated. This is not a fact but a gut feeling 
after reading through some of its pages. I was wondering who's 
owning it myself but never actually dared to just go and update. 
I just had a feeling it's abandoned. On the other hand why would 
it be?


Re: Contributing to D wiki

2020-07-15 Thread H. S. Teoh via Digitalmars-d-learn
On Wed, Jul 15, 2020 at 04:04:56PM +, aberba via Digitalmars-d-learn wrote:
> So I'm looking to make changes to the D wiki but I'm not sure who to
> talk to about such changes.
> 
> Currently: move the low-quality IDE entries down (maybe to Others) and
> focus on just the few that really work (IntelliJ, Visual Studio Code
> and Visual Studio). Instead of many options that don't work, why not
> focus on the few that work?
[...]

Why not just go ahead and take charge of that part of the wiki?  It's a
wiki after all.


T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius


Re: Vibe.d and NodeJs with Express

2020-07-15 Thread aberba via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 04:19:57 UTC, bauss wrote:

On Sunday, 12 July 2020 at 19:16:32 UTC, aberba wrote:
3) packages, now it might be better though. But I've always 
felt that there's not a lot of people using D for complete web 
dev projects...



I implement most things I need myself;


That's when you have the time and a compelling reason to do 
that. Most of the time I don't. And dub packages mostly work 
fine... which is why I like it when there are several options 
to choose from.


[OT] Re: D Mir: standard deviation speed

2020-07-15 Thread Schrom, Brian T via Digitalmars-d-learn
> I've mixed up @fastmath and @fmamath as well. No worries.
Seems like this might be a good opportunity to change one of
the names to be more descriptive and more distinctive.

FWIW, I read halfway through the thread before I caught on to
the distinction.  I can imagine making that same mistake in the
future.


Contributing to D wiki

2020-07-15 Thread aberba via Digitalmars-d-learn
So I'm looking to make changes to the D wiki but I'm not sure who 
to talk to about such changes.


Currently: move the low-quality IDE entries down (maybe to 
Others) and focus on just the few that really work (IntelliJ, 
Visual Studio Code and Visual Studio). Instead of many options 
that don't work, why not focus on the few that work?





Re: Forcing inline functions (again) - groan

2020-07-15 Thread kinke via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 13:38:34 UTC, Cecil Ward wrote:

I recently noticed
pragma(inline, true)
which looks extremely useful. A couple of questions:

1. Is this cross-compiler compatible?


Works for LDC and DMD, not sure about GDC, but if it doesn't 
support it, it's definitely on Iain's list.


2. Can I declare a function in one module and have it _inlined_ 
in another module at the call site?


For LDC, this works in all cases (i.e., also if compiling 
multiple object files in a single cmdline) since v1.22.


While you cannot force LLVM to actually inline, I haven't come 
across a case yet where it doesn't.


Re: Choosing a non-default linker for dmd (via dub)

2020-07-15 Thread kinke via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:38:47 UTC, Jacob Carlborg wrote:
There's an environment variable "CC" that can be used to select 
which C compiler is used. Is there any equivalence for 
selecting the linker, "LD" perhaps?


You normally just add -fuse-ld=gold to the C compiler cmdline, 
e.g., via -Xcc=-fuse-ld=gold in the DMD cmdline.


Re: Forcing inline functions (again) - groan

2020-07-15 Thread Stefan Koch via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 13:38:34 UTC, Cecil Ward wrote:

I recently noticed
pragma(inline, true)
which looks extremely useful. A couple of questions:

1. Is this cross-compiler compatible?

2. Can I declare a function in one module and have it _inlined_ 
in another module at the call site?


I’m looking to write functions that expand to approximately one 
or even zero machine instructions; having the overhead of a 
function call would be disastrous, and in some cases would make 
having the function pointless due to the slowdown.


pragma(inline, true) will work for DMD.
It used to fail if it couldn't inline; now it just generates a 
warning, so with -w it will still fail.

AFAIK other compilers cannot warn if the inlining fails, but I 
might be wrong.
And LDC/GDC should be able to inline most code which makes 
sense to inline.


Forcing inline functions (again) - groan

2020-07-15 Thread Cecil Ward via Digitalmars-d-learn

I recently noticed
pragma(inline, true)
which looks extremely useful. A couple of questions:

1. Is this cross-compiler compatible?

2. Can I declare a function in one module and have it _inlined_ 
in another module at the call site?


I’m looking to write functions that expand to approximately one 
or even zero machine instructions; having the overhead of a 
function call would be disastrous, and in some cases would make 
having the function pointless due to the slowdown.


Re: Question about publishing a useful function I have written

2020-07-15 Thread Andre Pany via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 09:31:27 UTC, Cecil Ward wrote:

On Tuesday, 14 July 2020 at 23:10:28 UTC, Max Haughton wrote:

On Tuesday, 14 July 2020 at 21:58:49 UTC, Cecil Ward wrote:
I have written something which may or may not be novel and 
I’m wondering about how to distribute it to as many users as 
possible, hoping others will find it useful. What’s the best 
way to publish a D routine ?


[...]


GitHub is the best place to publish code. Does GDC actually 
use the optimization? I tried something like that before but I 
couldn't seem to get it to work properly.



I just tried an experiment. It seems that in release mode 
assert()s compile to absolutely nothing at all, and so the 
_conditions_ in the asserts are never established for the 
optimizer. Later generated code then does not have the benefit 
of the asserted truth conditions. So in release mode, without 
these truth conditions being established, the generated code 
(apart from the asserts’ own code) can be _worse than in debug 
mode_, which seems bizarre, but it’s true.


for example
assert( x < 100 );
…
if ( x==200 )  // <— evaluates to false _at compile time_
 {
 // no code generated for this block in debug mode,
 // but is generated in release mode
 }
…
if ( x < 100 ) // <— no code generated for if-test as cond 
== true at compile-time


This is by intention. While exceptions are used for external 
resources like user input, the file system, the network, etc., 
you use asserts to validate your code.
When you publish your code to the world, the assertions have 
"done" their work and there is no need to include them in a 
release build.


Cases where you need asserts in release builds are an indicator 
that you should have used exceptions instead.


Kind regards
Andre


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:41:35 UTC, 9il wrote:

[snip]

Ah, no, my bad! You write @fmamath, I have read it as 
@fastmath. @fmamath is OK here.


I've mixed up @fastmath and @fmamath as well. No worries.


Re: Question about publishing a useful function I have written

2020-07-15 Thread Jacob Carlborg via Digitalmars-d-learn

On 2020-07-14 23:58, Cecil Ward wrote:

What’s the best way to publish a D routine ?


As others have already said, on GitHub. Then as a Dub package as well [1].

[1] https://code.dlang.org

--
/Jacob Carlborg


Re: misc questions about a DUB package

2020-07-15 Thread Jacob Carlborg via Digitalmars-d-learn

On 2020-07-15 04:20, 9il wrote:

No. Usually, a DUB package supports a range of C library versions or 
just a fixed set of C APIs. The version behavior of the dub package is 
up to you. Usually, the D API changes more frequently than the 
underlying C library.
If you support a specific version of the C API, I recommend adding the 
version of the C library as metadata:


0.0.1+5.3

In the above example "0.0.1" would be the version of the Dub package 
and "5.3" would be the version of the C API. The plus sign and anything 
after it is ignored by Dub. See the semantic versioning docs for more 
information:


https://semver.org
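The rule is easy to demonstrate. A toy sketch (Python, illustrative only, not Dub's actual implementation): SemVer treats everything after '+' as build metadata, ignored for version precedence, which is why Dub ignores the "+5.3" in "0.0.1+5.3".

```python
# SemVer build metadata: "core+build" -> the '+' part carries
# information but is ignored when comparing versions.
def split_semver(version):
    core, _, build = version.partition("+")
    return core, build or None

print(split_semver("0.0.1+5.3"))  # ('0.0.1', '5.3')
print(split_semver("1.2.0"))      # ('1.2.0', None)
```

So "0.0.1+5.3" and "0.0.1+5.4" would be the same version for dependency resolution purposes.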

--
/Jacob Carlborg


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:37:23 UTC, jmh530 wrote:

On Wednesday, 15 July 2020 at 11:26:19 UTC, 9il wrote:

[snip]


@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.


I hadn't realized that @fmamath was the problem, rather than 
@fastmath overall. @fmamath is used on many mir.math.stat 
functions, though admittedly not in the accumulators.


Ah, no, my bad! You write @fmamath, I have read it as @fastmath. 
@fmamath is OK here.


Re: Bind C++ class to DLang : undefined reference to `Canvas::Foo()'

2020-07-15 Thread Jacob Carlborg via Digitalmars-d-learn

On 2020-07-14 05:33, Boris Carvajal wrote:

Can you try passing -D_GLIBCXX_USE_CXX11_ABI=0 to g++ and 
-version=_GLIBCXX_USE_CXX98_ABI to dmd.


That comes from: https://dlang.org/changelog/2.088.0.html#std_string

C++11 ABI is currently not supported.


Based on previous messages and the usage of Clang, I think zoujiaqing 
is using a Mac. On Mac, libc++ is used, so the above might not apply.


--
/Jacob Carlborg


Re: Choosing a non-default linker for dmd (via dub)

2020-07-15 Thread Jacob Carlborg via Digitalmars-d-learn

On 2020-07-12 18:36, Per Nordlöw wrote:

The line

dflags "-linker=gold" platform="linux-ldc" # use GNU gold linker

in dub.sdl

enables me to change linker for LDC.

Is it possible to choose a specific linker for DMD as well in a similar 
way?

I only find the flag `-L` that sets flags but no linker executable.

One way is to link

     /usr/bin/ld

to either

     /usr/bin/ld.gold

or

     /usr/bin/ld.lld

but it would be nice to be able to do this from the dub.sdl or the 
command-line call to dub.


There's an environment variable "CC" that can be used to select which C 
compiler is used. Is there any equivalence for selecting the linker, 
"LD" perhaps?


--
/Jacob Carlborg


Re: Error: `std.uni.isUpper` conflicts with `std.ascii.isUpper`

2020-07-15 Thread WebFreak001 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:25:34 UTC, aberba wrote:

On Wednesday, 15 July 2020 at 07:01:34 UTC, WebFreak001 wrote:

On Tuesday, 14 July 2020 at 20:37:53 UTC, Marcone wrote:

[...]


Additionally to the other answers telling you how to fix it, 
it's important to know why it happens in the first place:


[...]


Without reading this very explanation, how would one know while 
reading the docs?


in that explanation I broadly covered:

https://dlang.org/spec/module.html#name_lookup
https://dlang.org/spec/module.html#selective_imports
https://dlang.org/phobos/std_ascii.html
https://dlang.org/phobos/std_ascii.html#isUpper
https://dlang.org/phobos/std_uni.html
https://dlang.org/phobos/std_uni.html#isUpper
https://dlang.org/phobos/std_uni.html#Character


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:26:19 UTC, 9il wrote:

[snip]


@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.


I hadn't realized that @fmamath was the problem, rather than 
@fastmath overall. @fmamath is used on many mir.math.stat 
functions, though admittedly not in the accumulators.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 11:23:00 UTC, jmh530 wrote:

On Wednesday, 15 July 2020 at 05:57:56 UTC, tastyminerals wrote:

[snip]

Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)


I changed the dflags-ldc to "-mcpu=native -O" and compiled with 
`dub run --compiler=ldc2`. I got similar results as yours for 
both in the initial run.


I changed sd to

@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)


@fastmath violates all summation algorithms except `"fast"`.
The same bug is in the original author's post.



Re: Error: `std.uni.isUpper` conflicts with `std.ascii.isUpper`

2020-07-15 Thread aberba via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:01:34 UTC, WebFreak001 wrote:

On Tuesday, 14 July 2020 at 20:37:53 UTC, Marcone wrote:

[...]


Additionally to the other answers telling you how to fix it, 
it's important to know why it happens in the first place:


[...]


Without reading this very explanation, how would one know while 
reading the docs?


Re: D Mir: standard deviation speed

2020-07-15 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 05:57:56 UTC, tastyminerals wrote:

[snip]

Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)


I changed the dflags-ldc to "-mcpu=native -O" and compiled with 
`dub run --compiler=ldc2`. I got similar results as yours for 
both in the initial run.


I changed sd to

@fmamath private double sd(T)(Slice!(T*, 1) flatMatrix)
{
pragma(inline, false);
if (flatMatrix.empty)
return 0.0;
double n = cast(double) flatMatrix.length;
double mu = flatMatrix.mean;
return (flatMatrix.map!(a => (a - mu) ^^ 2)
.sum!"precise" / n).sqrt;
}

and got

std of [10, 10] matrix 0.0016321
std of [20, 20] matrix 0.0069788
std of [300, 300] matrix 2.42063
std of [60, 60] matrix 0.0828711
std of [600, 600] matrix 9.72251
std of [800, 800] matrix 18.1356

And the biggest change by far was the sum!"precise" instead of 
sum!"fast".


When I ran your benchStd function with
ans = matrix.flattened.standardDeviation!(double, "online", 
"fast");

I got
std of [10, 10] matrix 1e-07
std of [20, 20] matrix 0
std of [300, 300] matrix 0
std of [60, 60] matrix 1e-07
std of [600, 600] matrix 0
std of [800, 800] matrix 0

I got the same result with Summation.naive. That almost seems 
too low.


The default is Summation.appropriate, which is resolved to 
Summation.pairwise in this case. It is faster than 
Summation.precise, but still slower than Summation.naive or 
Summation.fast. Your welfordSD should line up with 
Summation.naive.
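The precision/speed trade-off between the summation options above can be shown with a tiny language-neutral sketch (Python here for brevity; math.fsum stands in for a "precise" summator, plain accumulation for a naive/fast one):

```python
import math

# Naive left-to-right accumulation can lose small terms entirely,
# while an exactly rounded "precise" sum keeps them.
data = [1e16, 1.0, -1e16] * 1000  # big pairs cancel; the 1.0s should survive

naive = 0.0
for x in data:
    naive += x            # each 1.0 is absorbed by the huge neighbor

precise = math.fsum(data)  # exactly rounded summation

print(naive)    # 0.0
print(precise)  # 1000.0
```

Pairwise summation sits between these two extremes: cheaper than fully precise accumulation, but with much better error growth than naive accumulation.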


When I change that to
ans = matrix.flattened.standardDeviation!(double, "online", 
"precise");

I get
Running .\mir_benchmarks_2.exe
std of [10, 10] matrix 0.0031737
std of [20, 20] matrix 0.0153603
std of [300, 300] matrix 4.15738
std of [60, 60] matrix 0.171211
std of [600, 600] matrix 17.7443
std of [800, 800] matrix 34.2592

I also tried changing your welfordSD function based on the stuff 
I mentioned above, but it did not make a large difference.


Re: Question about publishing a useful function I have written

2020-07-15 Thread Cecil Ward via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 02:25:42 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 21:58:49 UTC, Cecil Ward wrote:



Does anyone know if this has already been published by someone 
else?




https://github.com/libmir/mir-core/blob/master/source/mir/utility.d#L29

We test LDC and DMD. CI needs an update to be actually tested 
with GDC.


Brilliant. Many thanks.


Re: Question about publishing a useful function I have written

2020-07-15 Thread Cecil Ward via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 23:10:28 UTC, Max Haughton wrote:

On Tuesday, 14 July 2020 at 21:58:49 UTC, Cecil Ward wrote:
I have written something which may or may not be novel and I’m 
wondering about how to distribute it to as many users as 
possible, hoping others will find it useful. What’s the best 
way to publish a D routine ?


[...]


GitHub is the best place to publish code. Does GDC actually use 
the optimization? I tried something like that before but I 
couldn't seem to get it to work properly.



I just tried an experiment. It seems that in release mode 
assert()s compile to absolutely nothing at all, and so the 
_conditions_ in the asserts are never established for the 
optimizer. Later generated code then does not have the benefit 
of the asserted truth conditions. So in release mode, without 
these truth conditions being established, the generated code 
(apart from the asserts’ own code) can be _worse than in debug 
mode_, which seems bizarre, but it’s true.


for example
assert( x < 100 );
…
if ( x==200 )  // <— evaluates to false _at compile time_
 {
 // no code generated for this block in debug mode,
 // but is generated in release mode
 }
…
if ( x < 100 ) // <— no code generated for if-test as cond == 
true at compile-time


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:34:59 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

[...]


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. 
Mir algorithms are more precise by default than the algorithms 
you have provided.


Right. Is this why standardDeviation is significantly slower?


Yes. It allows you to pick a summation option; you can try 
others than the default in benchmarks.


Re: Using_string_mixins_for_logging error

2020-07-15 Thread Vitalii via Digitalmars-d-learn

Many thanks!



Re: Using_string_mixins_for_logging error

2020-07-15 Thread WebFreak001 via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 07:07:55 UTC, Vitalii wrote:

Hello everyone!

I try to compile this recipe with dmd (2.089.0), ldc2 (1.18.0):
https://wiki.dlang.org/Using_string_mixins_for_logging
but get the same error:

mixin_log.d(64): Error: basic type expected, not __FUNCTION__
mixin_log.d(64): Error: declaration expected, not __FUNCTION__

in that part of code:

mixin template set_func_name(string fn)
{
enum __FUNCTION__ = fn;
}

There is no doubt that this recipe once worked. How to fix it 
now?


that wiki entry is from 2012. __FUNCTION__, __PRETTY_FUNCTION__ 
and __MODULE__ were introduced in 2013, making that old 
__FUNCTION__ definition code obsolete.


In fact all of the code on that page can now be more easily 
written as:



import std.datetime.systime : Clock;
import std.stdio : writeln;

enum LogLevel { INFO, WARN, ERROR }

template log(LogLevel level)
{
    void log(Args...)(Args args, string fn = __FUNCTION__,
            string file = __FILE__, size_t line = __LINE__)
    {
        writeln(Clock.currTime(), " [", level, "] ", file, '(',
                line, "): ", fn, ": ", args);
    }
}

alias info = log!(LogLevel.INFO);
alias warn = log!(LogLevel.WARN);
alias error = log!(LogLevel.ERROR);


no longer requiring any mixins or boilerplate code in your 
calling functions.


---

void main(string[] args)
{
info("hello", "world");
warn("i am ", "number ", 1);
error(true);
log!(LogLevel.INFO)("manual call");
}





2020-Jul-15 08:12:00.5900346 [INFO] onlineapp.d(21): 
onlineapp.main: helloworld
2020-Jul-15 08:12:00.5901559 [WARN] onlineapp.d(22): 
onlineapp.main: i am number 1
2020-Jul-15 08:12:00.5901745 [ERROR] onlineapp.d(23): 
onlineapp.main: true


---

This uses default values after variadic parameters, which is 
allowed since 2.079.0, and also uses a nested template to allow 
for the aliases with a default LogLevel.


Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals 
wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:
On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals 
wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. 
Mir algorithms are more precise by default than the algorithms 
you have provided.


Right. Is this why standardDeviation is significantly slower?




Re: getopt: How does arraySep work?

2020-07-15 Thread Andre Pany via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 15:48:59 UTC, Andre Pany wrote:
On Tuesday, 14 July 2020 at 14:33:47 UTC, Steven Schveighoffer 
wrote:

On 7/14/20 10:22 AM, Steven Schveighoffer wrote:
The documentation needs updating, it should say "parameters 
are added sequentially" or something like that, instead of 
"separation by whitespace".


https://github.com/dlang/phobos/pull/7557

-Steve


Thanks for the answer and the PR. Unfortunately my goal here is 
to simulate a partner tool written in C/C++ which supports 
this behavior. I will also create an enhancement issue for 
supporting this behavior.


Kind regards
Andre


Enhancement issue:
https://issues.dlang.org/show_bug.cgi?id=21045

Kind regards
André


Using_string_mixins_for_logging error

2020-07-15 Thread Vitalii via Digitalmars-d-learn

Hello everyone!

I try to compile this recipe with dmd (2.089.0), ldc2 (1.18.0):
https://wiki.dlang.org/Using_string_mixins_for_logging
but get the same error:

mixin_log.d(64): Error: basic type expected, not __FUNCTION__
mixin_log.d(64): Error: declaration expected, not __FUNCTION__

in that part of code:

mixin template set_func_name(string fn)
{
enum __FUNCTION__ = fn;
}

There is no doubt that this recipe once worked. How to fix it now?


Re: Error: `std.uni.isUpper` conflicts with `std.ascii.isUpper`

2020-07-15 Thread WebFreak001 via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 20:37:53 UTC, Marcone wrote:

import std: isUpper, writeln;

void main(){
writeln(isUpper('A'));
}

Why I get this error? How can I use isUpper()?


Additionally to the other answers telling you how to fix it, it's 
important to know why it happens in the first place:


std.ascii defines an isUpper function which only works on ASCII 
characters (byte value < 128), which means it's a really simple 
and very fast check, basically the same as `c >= 'A' && c <= 'Z'` 
but usually more readable. The other functions in std.ascii work 
the same way and are all just very convenient helpers for these 
simple checks. You should only really use this for file formats 
and internal strings, never for anything the user could input 
(preferably not even for validation in English-only applications).


std.uni defines an isUpper function as specified by Unicode 6.2 
(the whole std.uni module is for these standardized Unicode 
classifications and transformations, but NOT for UTF 
encoding/decoding; that's what std.utf is for), which means it 
works for a lot more languages than just the basic English 
alphabet in ASCII, at the cost of being slower.
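The distinction carries over to other languages, too. A quick illustrative sketch (Python, where str.isupper plays the role of the Unicode-aware check; the ascii_is_upper helper is hypothetical):

```python
# A fast ASCII-range check vs. a Unicode-aware classification --
# the same split as std.ascii.isUpper vs. std.uni.isUpper.
def ascii_is_upper(c):
    return "A" <= c <= "Z"  # simple and fast, but ASCII only

print(ascii_is_upper("A"))  # True
print(ascii_is_upper("Ä"))  # False -- outside ASCII
print("Ä".isupper())        # True -- Unicode-aware check
```

The ASCII check silently rejects anything outside the English alphabet, which is exactly why it should not be used on user input.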


If you have a normal import like `import std;` you can specify 
"more specific" imports using `import std.uni : isUpper;` or 
`import std.ascii : isUpper;` which will not clash with your 
first import, as long as you don't try to import both in the same 
scope. Basically:



import std;
import std.uni : isUpper;

bool someUserInputTest(string x) { return x[0].isUpper; }


Note that none of D's std.uni functions support using any locale, 
so if you want to do anything more than the default 
language-neutral Unicode behavior you will have to use some 
library or the OS APIs. Note also that D is missing some Unicode 
properties, like a check for characters which aren't lowercase or 
UPPERCASE but Titlecase (like Dz [U+01F2 LATIN CAPITAL LETTER D 
WITH SMALL LETTER Z]), which you could however get from OS APIs.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.


They both are more precise by default.


Re: D Mir: standard deviation speed

2020-07-15 Thread 9il via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:

On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals wrote:

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or 
may not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.


They both are more precise by default.


This was a reply to your other post in the thread, sorry. Mir 
algorithms are more precise by default than the algorithms you 
have provided.


Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

@fastmath private double sd0(T)(Slice!(T*, 1) flatMatrix)


@fastmath shouldn't really be used with summation algorithms 
except the `"fast"` version of them. Otherwise, they may or may 
not behave like "fast".


Good to know. So, it's fine to use it with sum!"fast", but 
better to avoid it for general purposes.




Re: D Mir: standard deviation speed

2020-07-15 Thread tastyminerals via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 19:36:21 UTC, jmh530 wrote:

On Tuesday, 14 July 2020 at 19:04:45 UTC, tastyminerals wrote:

  [...]


It would be helpful to provide a link.

You should only need one accumulator each for the mean and the 
centered sum of squares. See the Python example under the 
Welford section at
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
Having two may have broken optimization somehow.

variance and standardDeviation were recently added to 
mir.math.stat. They have the option to switch between Welford's 
algorithm and the others. What you call the naive algorithm 
is VarianceAlgo.twoPass, and the Welford algorithm can be 
toggled with VarianceAlgo.online, which is the default option. 
It also would be interesting if you re-did the analysis with 
the built-in mir functions.


There are some other small differences between your 
implementation and the one in mir, beyond the issue discussed 
above. You take the absolute value before the square root and 
force the use of sum!"fast". Another difference is that 
VarianceAlgo.online in mir uses a precise calculation of the 
mean rather than the fast update that Welford uses. This may 
have a modest impact on performance, but should provide more 
accurate results.


Ok, the wiki page looks more informative, I shall look into my 
Welford implementation.


I've just used standardDeviation from Mir and it showed even 
worse results than both of the examples above.


Here is a (WIP) project as of now.
Line 160 in 
https://github.com/tastyminerals/mir_benchmarks_2/blob/master/source/basic_ops.d


std of [60, 60] matrix 0.0389492 (> 0.001727)
std of [300, 300] matrix 1.03592 (> 0.043452)
std of [600, 600] matrix 4.2875 (> 0.182177)
std of [800, 800] matrix 7.9415 (> 0.345367)