Re: mmap file performance

2024-04-24 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 15 April 2024 at 16:13:41 UTC, Andy Valencia wrote:
On Monday, 15 April 2024 at 08:05:25 UTC, Patrick Schluter 
wrote:
The setup of a memory mapped file is relatively costly. For 
smaller files it is a net loss and read/write beats it hands 
down.


Interestingly, this performance deficit is present even when 
run against the largest conveniently available file on my 
system--libQt6WebEngineCore.so.6.4.2 at 148 megs.  But since 
this reproduces in its C counterpart, it is not at all a 
reflection of D.


As you say, truly random access might play to mmap's strengths.


Indeed, my statement concerning file size is misleading. What 
matters is the number of operations performed on the file; bigger 
files simply tend to see more of them.
I have measurements from our system (Linux servers), where we have 
big index files representing a ternary tree that are generally 
memory mapped. These files are several hundred megabytes big 
and the access is almost random. They still grow, but the 
growing parts are not memory mapped; they are accessed with 
pread() and pwrite() calls. For reads of exactly 64 bytes (the 
size of one record), access via pread() takes exactly twice as 
long as a memory copy from the mapping.
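
For illustration, a sketch of the two access paths for one record 
(Posix; assumes the fd is already open and the file already 
mapped, names made up):

```d
import core.sys.posix.sys.types : off_t;
import core.sys.posix.unistd : pread;

enum recordSize = 64;

// one record via a syscall per access
void readViaPread(int fd, size_t i, ref ubyte[recordSize] rec)
{
    pread(fd, rec.ptr, recordSize, cast(off_t)(i * recordSize));
}

// one record via the mapping: no syscall, just pointer arithmetic
const(ubyte)[] readViaMmap(const(ubyte)[] mapped, size_t i)
{
    return mapped[i * recordSize .. (i + 1) * recordSize];
}
```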




My real point is that, whichever API I use, coding in D was far 
less tedious; I like the resulting code, and it showed no 
meaningful performance cost.





Re: mmap file performance

2024-04-15 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 11 April 2024 at 00:24:44 UTC, Andy Valencia wrote:
I wrote a "count newlines" based on mapped files.  It used 
about twice the CPU of the version which just read 1 meg at a 
time.  I thought something was amiss (needless slice 
indirection or something), so I wrote the code in C.  It had 
the same CPU usage as the D version.  So...mapped files, not so 
much.  Not D's fault.  And writing it in C made me realize how 
much easier it is to code in D!


[...]


The setup of a memory-mapped file is relatively costly. For 
smaller files it is a net loss, and read/write beats it hands 
down. Furthermore, sequential access is not the best way to 
exploit the advantages of mmap. Full random access is the strong 
suit of mmap, as it replaces kernel syscalls (lseek, read, write 
or pread, pwrite) with user-land processing.
You could try the MAP_POPULATE option of mmap: it enables 
read-ahead on the file, which may help sequential code.
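
A minimal sketch of that, assuming Linux (MAP_POPULATE is 
Linux-specific, not Posix) and eliding all error handling:

```d
import core.sys.linux.sys.mman : MAP_POPULATE;
import core.sys.posix.fcntl : O_RDONLY, open;
import core.sys.posix.sys.mman : MAP_FAILED, MAP_PRIVATE, PROT_READ, mmap;
import core.sys.posix.unistd : close;

const(ubyte)[] mapFile(const(char)* path, size_t len)
{
    int fd = open(path, O_RDONLY);
    // MAP_POPULATE pre-faults the pages (read-ahead), which is what
    // may help a purely sequential scan
    void* p = mmap(null, len, PROT_READ, MAP_PRIVATE | MAP_POPULATE, fd, 0);
    close(fd); // the mapping stays valid after the close
    return p is MAP_FAILED ? null : (cast(const(ubyte)*) p)[0 .. len];
}
```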


Re: Safer Linux Kernel Modules Using the D Programming Language

2023-01-09 Thread Patrick Schluter via Digitalmars-d-announce
On Monday, 9 January 2023 at 09:08:59 UTC, areYouSureAboutThat 
wrote:

On Monday, 9 January 2023 at 03:54:32 UTC, Walter Bright wrote:


Yes, as long as you don't make any mistakes. A table saw won't 
cut your fingers off if you never make a mistake, too.




And yet, people keep using them (table saws).

Don't underestimate the level of risk humans are happily 
willing to accept, in exchange for some personal benefit.


and people literally kill themselves by overestimating their 
skills

https://youtu.be/wzosDKcXQ0I?t=441



Re: Idiomatic D using GC as a library writer

2022-12-05 Thread Patrick Schluter via Digitalmars-d-learn

On Sunday, 4 December 2022 at 23:37:39 UTC, Ali Çehreli wrote:

On 12/4/22 15:25, Adam D Ruppe wrote:

> which would trigger the write barrier. The thread isn't
> allowed to complete this operation until the GC is done.

According to my limited understanding of write barriers, the 
thread moving to 800 could continue because order of memory 
operations may have been satisfied. What I don't see is, what 
would the GC thread be waiting for about the write to 800?


I'm not a specialist, but I have the impression that GC write 
barriers and CPU memory-ordering write barriers are two different 
things that confusingly share the same term for two completely 
different concepts.
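
To illustrate the two meanings (a purely conceptual sketch; D's 
current GC doesn't use write barriers at all, and the card-marking 
names below are made up):

```d
// 1) CPU memory-ordering barrier: a fence instruction that constrains
//    the order in which stores become visible to other cores.
import core.atomic : atomicFence;

void orderingBarrier()
{
    atomicFence(); // executes in nanoseconds, no cooperation needed
}

// 2) GC write barrier: bookkeeping attached to pointer stores so a
//    concurrent collector notices mutations, e.g. card marking.
void gcStore(void** slot, void* value, ubyte[] cardTable, size_t card)
{
    *slot = value;
    cardTable[card] = 1; // mark the containing region as dirty
}
```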




Would the GC be leaving behind writes to every page it scans, 
which have barriers around so that the other thread can't 
continue? But then the GC's write would finish and the other 
thread's write would finish.


Ok, here is the question: Is there a very long standing partial 
write that the GC can perform like: "I write to 0x42, but I 
will finish it 2 seconds later. So, all other writes should 
wait?"


> The GC finishes its work and releases the barriers.

So, it really is explicit acquisition and releasing of these 
barriers... I think this is provided by the CPU, not the OS. 
How many explicit write barriers are there?


Ali





Re: Float rounding (in JSON)

2022-10-14 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 13 October 2022 at 19:27:22 UTC, Steven 
Schveighoffer wrote:

On 10/13/22 3:00 PM, Sergey wrote:

[...]


It doesn't look really that far off. You can't expect floating 
point parsing to be exact, as floating point does not perfectly 
represent decimal numbers, especially when you get down to the 
least significant bits.


[...]
To me it looks like there is a conversion to `real` (80-bit 
floats) somewhere in the D code, while the other languages stay 
in `double` mode everywhere. Maybe forcing `double` by disabling 
x87 on the D side would yield the same results as the other 
languages?




Re: Replacing tango.text.Ascii.isearch

2022-10-13 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 13 October 2022 at 08:27:17 UTC, bauss wrote:
On Wednesday, 5 October 2022 at 17:29:25 UTC, Steven 
Schveighoffer wrote:

On 10/5/22 12:59 PM, torhu wrote:
I need a case-insensitive check to see if a string contains 
another string for a "quick filter" feature. It should 
preferably be perceived as instant by the user, and needs to 
check a few thousand strings in typical cases. Is a regex the 
best option, or what would you suggest?


https://dlang.org/phobos/std_uni.html#asLowerCase

```d
bool isearch(S1, S2)(S1 haystack, S2 needle)
{
    import std.uni;
    import std.algorithm;
    return haystack.asLowerCase.canFind(needle.asLowerCase);
}
```

untested.

-Steve


This doesn't actually work properly in all languages. It will 
probably work in most, but it's not entirely correct.


Ex. Turkish will not work with it properly.


Greek will also be problematic: two different lowercase sigmas, 
but only one uppercase. Other languages may cause issues too, 
e.g. German, where ß normally uppercases to SS (or not), but not 
the other way round; at that point we have already arrived in 
Unicode land and its normalization conundrum.
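
A quick illustration of the sigma problem with std.uni (assuming 
its default, locale-independent case mapping):

```d
import std.uni : toLower, toUpper;

void main()
{
    // two lowercase sigmas map to the single uppercase sigma...
    assert("σ".toUpper == "Σ");
    assert("ς".toUpper == "Σ");
    // ...so the round trip cannot be the identity for both of them
    assert("ς".toUpper.toLower != "ς");
}
```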





Re: Programs in D are huge

2022-08-19 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 18 August 2022 at 17:15:12 UTC, rikki cattermole 
wrote:


On 19/08/2022 4:56 AM, IGotD- wrote:
BetterC means no arrays or strings library and usually in 
terminal tools you need to process text. Full D is wonderful 
for such task but betterC would be limited unless you want to 
write your own array and string functionality.


Unicode support in Full D isn't complete.

There is nothing in phobos to even change case correctly!

Both are limited if you care about certain stuff like non-latin 
based languages like Turkic.


A general toupper/tolower for Unicode is doomed to fail. As 
already mentioned Turkish has its specificity, but other 
languages also have traps. In Greek toupper/tolower are not 
reversible i.e. `x.toupper.tolower == x` is not guaranteed . Some 
languages have 1 codepoint input and 2 codepoints as result 
(German ß becomes SS in most cases, capital ẞ is not the right 
choice in most cases).

etc. etc.


Re: A look inside "filter" function defintion

2022-08-09 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 2 August 2022 at 12:39:41 UTC, pascal111 wrote:

On Tuesday, 2 August 2022 at 04:06:30 UTC, frame wrote:

On Monday, 1 August 2022 at 23:35:13 UTC, pascal111 wrote:
This is the definition of the "filter" function, and I think it 
calls itself within its own definition. I'm wondering how it 
works?


It's a template that defines the function, using what the spec 
calls "Eponymous Templates":

https://dlang.org/spec/template.html#implicit_template_properties

A template generates code, it cannot be called, only 
instantiated.


The common syntax is just a shortcut for using it. Otherwise 
you would need to write `filter!(a => a > 0).filter([1, -1, 2, 
0, -3])`. Like UFCS, some magic the compiler does for you.


Instantiation seems somewhat complicated to me. I read "If a 
template contains members whose name is the same as the 
template identifier then these members are assumed to be 
referred to in a template instantiation:" in the provided link, 
but I'm still stuck. Do you have a down-to-earth example for 
beginners to understand this concept?


A template is conceptually like a parameterized macro in C, and an 
instantiation is like a use of that macro in your C program. 
The fundamental difference is that the template is syntactically 
and semantically part of the language, whereas in C the 
preprocessor is just textual replacement done before the 
compilation proper. This meant there were things you couldn't 
do in the preprocessor (like `#if sizeof(int)==4`) and 
(horrible) things that never should have been possible (I used to 
run the C preprocessor over other languages like AutoLISP and 
dBase III).
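
A down-to-earth sketch of the eponymous mechanism (made-up names, 
nothing from Phobos): an instantiation `foo!(...)` resolves to the 
member of `foo` that is itself named `foo`.

```d
template isSmall(T)
{
    enum isSmall = T.sizeof <= 4; // eponymous member: a value
}

template twice(alias fun)
{
    auto twice(T)(T x)            // eponymous member: a function
    {
        return fun(fun(x));
    }
}

void main()
{
    static assert(isSmall!int);           // really reads isSmall!int.isSmall
    assert(twice!(x => x + 1)(40) == 42); // the instantiation is callable
}
```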





Re: Make shared static this() encoding table compilable

2022-03-17 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 17 March 2022 at 12:19:36 UTC, Patrick Schluter 
wrote:
On Thursday, 17 March 2022 at 12:11:19 UTC, Patrick Schluter 
wrote:
On Thursday, 17 March 2022 at 11:36:40 UTC, Patrick Schluter 
wrote:



[...]

Something akin to
```d
auto lookup(ushort key)
{
  return cp949[key-0x8141];
}

[...]


Takes 165 ms to compile with dmd 2.094.2 -O on [godbolt] with 
the whole table generated from the Unicode link.


[godbolt]: https://godbolt.org/z/hEzP7rKnn]


Oops, remove the ] at the end of the link to [godbolt].

[godbolt]: https://godbolt.org/z/hEzP7rKnn


Re: Make shared static this() encoding table compilable

2022-03-17 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 17 March 2022 at 12:11:19 UTC, Patrick Schluter 
wrote:
On Thursday, 17 March 2022 at 11:36:40 UTC, Patrick Schluter 
wrote:



[...]

Something akin to
```d
auto lookup(ushort key)
{
  return cp949[key-0x8141];
}

[...]


Takes 165 ms to compile with dmd 2.094.2 -O on [godbolt] with the 
whole table generated from the Unicode link.


[godbolt]: https://godbolt.org/z/hEzP7rKnn]


Re: Make shared static this() encoding table compilable

2022-03-17 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 17 March 2022 at 11:36:40 UTC, Patrick Schluter 
wrote:

On Monday, 14 March 2022 at 09:40:00 UTC, zhad3 wrote:
Hey everyone, I am in need of some help. I have written this 
Windows CP949 encoding table 
https://github.com/zhad3/zencoding/blob/main/windows949/source/zencoding/windows949/table.d which is used to convert CP949 to UTF-16.


After some research about how to initialize immutable 
associative arrays people suggested using `shared static 
this()`. So far this worked for me, but I recently discovered 
that DMD cannot compile this in release mode with 
optimizations.


`dub build --build=release`  or `dmd` with `-release -O` fails:

```
code  windows949
function  
zencoding.windows949.fromWindows949!(immutable(ubyte)[]).fromWindows949

code  table
function  zencoding.windows949.table._sharedStaticCtor_L29_C1
dmd failed with exit code -11.
```

I usually compile my projects using LDC where this works fine, 
but I don't want to force others to use LDC because of this 
one problem.


Hence I'd like to ask on how to change the code so that it 
compiles on DMD in release mode (with optimizations). I 
thought about having a computational algorithm instead of an 
encoding table but sadly I could not find any references in 
that regard. Apparently encoding tables seem to be the 
standard.


Why not use a simple static array (not an associative array), 
where the values are indexed by `key - min(keys)`? Even with 
the holes in the keys (i.e. keys that have no corresponding 
values) it will be smaller than the constructed associative 
array, and the lookup is also faster.

Something akin to
```d
auto lookup(ushort key)
{
  return cp949[key-0x8141];
}

immutable ushort[0xFDFE-0x8141+1] cp949 = [
0x8141-0x8141: 0xAC02,
0x8142-0x8141: 0xAC03,
0x8143-0x8141: 0xAC05,
0x8144-0x8141: 0xAC06,
0x8145-0x8141: 0xAC0B,
0x8146-0x8141: 0xAC0C,
0x8147-0x8141: 0xAC0D,
0x8148-0x8141: 0xAC0E,
0x8149-0x8141: 0xAC0F,
0x814A-0x8141: 0xAC18,
0x814B-0x8141: 0xAC1E,
0x814C-0x8141: 0xAC1F,
0x814D-0x8141: 0xAC21,
0x814E-0x8141: 0xAC22,
0x814F-0x8141: 0xAC23,
0x8150-0x8141: 0xAC25,
0x8151-0x8141: 0xAC26,
0x8152-0x8141: 0xAC27,
0x8153-0x8141: 0xAC28,
0x8154-0x8141: 0xAC29,
0x8155-0x8141: 0xAC2A,
0x8156-0x8141: 0xAC2B,
0x8157-0x8141: 0xAC2E,
0x8158-0x8141: 0xAC32,
0x8159-0x8141: 0xAC33,
0x815A-0x8141: 0xAC34,
0x8161-0x8141: 0xAC35,
0x8162-0x8141: 0xAC36,
0x8163-0x8141: 0xAC37,
...
0xFDFA-0x8141: 0x72A7,
0xFDFB-0x8141: 0x79A7,
0xFDFC-0x8141: 0x7A00,
0xFDFD-0x8141: 0x7FB2,
0xFDFE-0x8141: 0x8A70,
];
```


Re: Make shared static this() encoding table compilable

2022-03-17 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 14 March 2022 at 09:40:00 UTC, zhad3 wrote:
Hey everyone, I am in need of some help. I have written this 
Windows CP949 encoding table 
https://github.com/zhad3/zencoding/blob/main/windows949/source/zencoding/windows949/table.d which is used to convert CP949 to UTF-16.


After some research about how to initialize immutable 
associative arrays people suggested using `shared static 
this()`. So far this worked for me, but I recently discovered 
that DMD cannot compile this in release mode with optimizations.


`dub build --build=release`  or `dmd` with `-release -O` fails:

```
code  windows949
function  
zencoding.windows949.fromWindows949!(immutable(ubyte)[]).fromWindows949

code  table
function  zencoding.windows949.table._sharedStaticCtor_L29_C1
dmd failed with exit code -11.
```

I usually compile my projects using LDC where this works fine, 
but I don't want to force others to use LDC because of this one 
problem.


Hence I'd like to ask on how to change the code so that it 
compiles on DMD in release mode (with optimizations). I thought 
about having a computational algorithm instead of an encoding 
table but sadly I could not find any references in that regard. 
Apparently encoding tables seem to be the standard.


Why not use a simple static array (not an associative array), 
where the values are indexed by `key - min(keys)`? Even with the 
holes in the keys (i.e. keys that have no corresponding 
values) it will be smaller than the constructed associative 
array, and the lookup is also faster.


Re: Teaching D at a Russian University

2022-02-20 Thread Patrick Schluter via Digitalmars-d-announce

On Sunday, 20 February 2022 at 11:35:59 UTC, Mike Parker wrote:
On Sunday, 20 February 2022 at 11:04:45 UTC, Patrick Schluter 
wrote:


I read that "for" as an equivalent of "because" was indeed 
almost extinct but was more or less resurrected by Tolkien, as 
he used it throughout The Lord of the Rings and The Hobbit:
https://english.stackexchange.com/questions/566024/the-meaning-of-word-for-at-the-beginning-of-sentence


Yes, the Tolkienesque way of using "for" at the beginning of a 
sentence is rarely used anymore. But it is still sometimes used 
in modern writing to join two independent clauses together in a 
single sentence, usually for flavor.


The funny thing is that, as a learner of English as a third 
language (I grew up bilingual in French and German), the 
Tolkienesque "for" never registered as odd. It was only when a 
colleague, who happened to be a native English speaker, remarked 
on it in one of my emails at work that I learnt about it.


Re: Teaching D at a Russian University

2022-02-20 Thread Patrick Schluter via Digitalmars-d-announce

On Sunday, 20 February 2022 at 03:44:42 UTC, Paul Backus wrote:

On Saturday, 19 February 2022 at 20:26:45 UTC, Elronnd wrote:

On Saturday, 19 February 2022 at 17:33:07 UTC, matheus wrote:
By the way English isn't my first language but I think there 
is a small typo:


"In D, such nuances are fewer, for header files are not 
required."


I think it's missing the word "example":

"In D, such nuances are fewer, for example header files are 
not required."


I think it is fine as is.


Yes, this is a perfectly correct use of "for" as a coordinating 
conjunction. [1] It may come across as a bit formal or 
old-fashioned, though—in normal speech, you'd usually use 
"since".


[1] https://writing.wisc.edu/handbook/grammarpunct/coordconj/


I read that "for" as an equivalent of "because" was indeed 
almost extinct but was more or less resurrected by Tolkien, as he 
used it throughout The Lord of the Rings and The Hobbit:
https://english.stackexchange.com/questions/566024/the-meaning-of-word-for-at-the-beginning-of-sentence





Re: ldc executable crashes with this code

2022-02-04 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 3 February 2022 at 02:01:34 UTC, forkit wrote:

On Thursday, 3 February 2022 at 01:57:12 UTC, H. S. Teoh wrote:




would be nice if the compiler told me something though :-(

i.e. "hey, dude, you really wanna to that?"


It would be nice if programmers (C or D) learnt that a typecast 
means "shut up, compiler, I know what I'm doing". You explicitly 
instructed the compiler not to complain.


Remove the typecast and the compiler will emit an error.

That's the reason why typecasts are to be avoided as much as 
possible. They are often a code smell.


Re: gdc or ldc for faster programs?

2022-01-31 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 25 January 2022 at 22:41:35 UTC, Elronnd wrote:

On Tuesday, 25 January 2022 at 22:33:37 UTC, H. S. Teoh wrote:
interesting because idivl is known to be one of the slower 
instructions, but gdc nevertheless considered it not 
worthwhile to replace it, whereas ldc seems obsessed about 
avoid idivl at all costs.


Interesting indeed.  Two remarks:

1. Actual performance cost of div depends a lot on hardware.  
IIRC on my old intel laptop it's like 40-60 cycles; on my newer 
amd chip it's more like 20; on my mac it's ~10.  GCC may be 
assuming newer hardware than llvm.  Could be worth popping on a 
-march=native -mtune=native.  Also could depend on how many 
ports can do divs; i.e. how many of them you can have running 
at a time.


2. LLVM is more aggressive wrt certain optimizations than gcc, 
by default.  Though I don't know how relevant that is at -O3.


-O3 often chooses longer code and unrolls more aggressively, 
inducing higher miss rates in the instruction caches.

-O2 can beat -O3 in some cases where code size matters.


Re: DMD now incorporates a disassembler

2022-01-10 Thread Patrick Schluter via Digitalmars-d-announce

On Sunday, 9 January 2022 at 06:04:25 UTC, max haughton wrote:

On Sunday, 9 January 2022 at 02:58:43 UTC, Walter Bright wrote:


I've never seen one. What's the switch for gcc to do the same 
thing?




For GCC/Clang you'd want -S (and then -masm=intel to make the 
output ~~beautiful to nobody but the blind~~ readable).


I prefer -save-temps -fverbose-asm, which generates supplemental 
.i and .s files without changing the .o file.


Re: How to print unicode characters (no library)?

2021-12-28 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 27 December 2021 at 07:12:24 UTC, rempas wrote:


I don't understand that. Based on your calculations, the 
results should have been different. Also, how are the numbers 
fixed? Like you said, the amount of bytes per character is 
not always the same in each encoding. And even if it were 
fixed, that would mean 2 bytes for each UTF-16 character and 
4 bytes for each UTF-32 character, so the numbers still don't 
make sense to me. Shouldn't the "length" property then be the 
same for every encoding, or at least for UTF-16 and UTF-32? 
So are the sizes of every character fixed or not?




Your string is represented by 8 codepoints. The number of 
code units needed to represent them in memory depends on the 
encoding. D supports working with 3 different encodings (the 
Unicode standard defines more than these 3):


string  utf8s  = "Hello 😂\n";
wstring utf16s = "Hello 😂\n"w;
dstring utf32s = "Hello 😂\n"d;

Here is the canonical Unicode representation of your string:

   H      e      l      l      o     (sp)    😂     \n
U+0048 U+0065 U+006C U+006C U+006F U+0020 U+1F602 U+000A

Let's see how these 3 variables are represented in memory:

utf8s : 48 65 6C 6C 6F 20 F0 9F 98 82 0A
11 char in memory using 11 bytes

utf16s: 0048 0065 006C 006C 006F 0020 D83D DE02 000A
9 wchar in memory using 18 bytes

utf32s: 00000048 00000065 0000006C 0000006C 0000006F 00000020 0001F602 0000000A

8 dchar in memory using 32 bytes

As you can see, the most compact form is generally UTF-8, which 
is why it is the preferred encoding for Unicode.
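
A quick check of those counts in code (a tiny sketch; note the 😂 
that the archive dropped from the declarations above):

```d
import std.stdio;

void main()
{
    string  utf8s  = "Hello 😂\n";
    wstring utf16s = "Hello 😂\n"w;
    dstring utf32s = "Hello 😂\n"d;

    writeln(utf8s.length);  // 11 (code units of 1 byte  -> 11 bytes)
    writeln(utf16s.length); //  9 (code units of 2 bytes -> 18 bytes)
    writeln(utf32s.length); //  8 (code units of 4 bytes -> 32 bytes)
}
```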


UTF-16 is supported for legacy reasons: it is used in the 
Windows API and also internally in Java.

UTF-32 has one advantage in that it has a 1-to-1 mapping between 
codepoints and array indexes. In practice that is not much of an 
advantage, as codepoints and characters are disjoint concepts. 
UTF-32 uses a lot of memory for practically no benefit (when you 
read in the forum about D's big auto-decoding mistake, it is 
linked to this).


Re: GDC has just landed v2.098.0-beta.1 into GCC

2021-12-03 Thread Patrick Schluter via Digitalmars-d-announce

On Friday, 3 December 2021 at 18:22:36 UTC, Iain Buclaw wrote:
On Friday, 3 December 2021 at 13:48:48 UTC, Patrick Schluter 
wrote:
On Tuesday, 30 November 2021 at 19:37:34 UTC, Iain Buclaw 
wrote:


Hi, just a little question that annoys me in my project, which 
is mainly written in C and clashes with the D code I'm 
slowly integrating into it.
I generate the makefile dependencies with the -MMD option of 
gcc, and that option generates .d files (which are not D 
language files); this is annoying, as I had to rename my D 
files with a .D extension.
Is there a way to force gcc to use another extension? Has this 
extension clash been solved somehow, since the man page of gcc 
10.2 lists .d as the extension for D language files?


Yes, with -MF to specify the output dependency file, e.g. `gcc -MMD -MF main.dep -c main.c`.


Thanks


Re: GDC has just landed v2.098.0-beta.1 into GCC

2021-12-03 Thread Patrick Schluter via Digitalmars-d-announce

On Tuesday, 30 November 2021 at 19:37:34 UTC, Iain Buclaw wrote:

Hi, just a little question that annoys me in my project, which is 
mainly written in C and clashes with the D code I'm slowly 
integrating into it.
I generate the makefile dependencies with the -MMD option of gcc, 
and that option generates .d files (which are not D language 
files); this is annoying, as I had to rename my D files with a .D 
extension.
Is there a way to force gcc to use another extension? Has this 
extension clash been solved somehow, since the man page of gcc 
10.2 lists .d as the extension for D language files?





Re: Wrong result with enum

2021-11-11 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 11 November 2021 at 05:37:05 UTC, Salih Dincer wrote:

Is this an issue, or do you need to cast?

```d
enum tLimit = 10_000;  // (1) true result
enum wLimit = 100_000; // (2) wrong result

void main()
{
  size_t subTest1 = tLimit;
  assert(subTest1 == tLimit);     /* no error */

  size_t subTest2 = wLimit;
  assert(subTest2 == wLimit);     /* no error */

  size_t gauss = (tLimit * (tLimit + 1)) / 2;
  assert(gauss == 50_005_000);    /* no error */

  gauss = (wLimit * (wLimit + 1)) / 2;
  assert(gauss == 5_000_050_000); /* failure */

  // Fleeting solution:
  enum size_t limit = 100_000;
  gauss = (limit * (limit + 1)) / 2;
  assert(gauss == 5_000_050_000); /* no error */
}

/* Small version:

void main() {
  enum t = 10_000;
  size_t a = t * t;
  assert(a == 100_000_000);    // no error

  enum w = 100_000;
  size_t b = w * w;
  assert(b == 10_000_000_000); // assert failure
}
*/
```


Integer overflow. By default an enum is typed as `int`, which is 
limited to 32 bits; `int.max` is 2_147_483_647, the biggest 
number representable in an int.


You can declare the enum to be of a bigger type, `enum : long { w 
= 100_000 }`, or you can use `std.bigint` if you don't know the 
maximum you will work with, or `std.experimental.checkedint`, 
which lets you choose the behaviour you want on overflow.
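
A minimal sketch of that first fix; with the enum typed as long, 
the multiplication itself is done in 64 bits and nothing 
overflows:

```d
enum : long { wLimit = 100_000 }

void main()
{
    long gauss = (wLimit * (wLimit + 1)) / 2;
    assert(gauss == 5_000_050_000); // passes: no 32-bit overflow anywhere
}
```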


Re: Wrong result with enum

2021-11-11 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 11 November 2021 at 12:05:19 UTC, Tejas wrote:
On Thursday, 11 November 2021 at 09:11:37 UTC, Salih Dincer 
wrote:
On Thursday, 11 November 2021 at 06:34:16 UTC, Stanislav 
Blinov wrote:
On Thursday, 11 November 2021 at 05:37:05 UTC, Salih Dincer 
wrote:

Is this an issue, or do you need to cast?

```d
enum tLimit = 10_000;  // (1) true result
enum wLimit = 100_000; // (2) wrong result
```


https://dlang.org/spec/enum.html#named_enums

Unless explicitly set, the default type is int. 10^10 is 
greater than int.max.

```d
  enum w = 100_000;
  size_t b = w * w;
  // size_t b = 10 * 10; // ???
  assert(b == 10_000_000_000); // Assert Failure
```
The w!(int) is not greater than the b!(size_t)...


Are you on 32-bit OS? I believe `size_t` is 32 bits on 32 bit 
OS and 64 on a 64-bit OS


That's not the issue with his code. The 32-bit overflow already 
happens during the `w * w` multiplication; the wrong result is 
then assigned to the `size_t`.


`cast(size_t)w * w` or the declaration `enum : size_t { w = 
100_000 }` would change that.





Re: writef, compile-checked format, pointer

2021-08-09 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 9 August 2021 at 19:38:28 UTC, novice2 wrote:

format!"fmt"() and writef!"fmt"() templates
with compile-time checked format string
not accept %X for pointers,

but format() and writef() accept it

https://run.dlang.io/is/aQ05Ux
```d
void main() {
    import std.stdio: writefln;
    int x;
    writefln("%X", &x);   // ok
    writefln!"%s"(&x);    // ok
    //writefln!"%X"(&x);  // compile error
}
```

is this intentional?


Yes. %X is for formatting integers. Runtime evaluation of a format 
string does not allow for type checking; when using the template, 
the evaluation can be thorough and the types can be checked 
properly. You have 2 solutions for your problem: either a typecast


writefln!"%X"(cast(size_t) &x);

or the generic format specifier, which deduces the format from 
the type of the passed argument:


writefln!"%s"(&x);




Re: Is returning void functions inside void functions a feature or an artifact?

2021-08-03 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 2 August 2021 at 14:46:36 UTC, jfondren wrote:

On Monday, 2 August 2021 at 14:31:45 UTC, Rekel wrote:

[...]


I don't know where you can find this in the docs, but what 
doesn't seem trivial about it? The type of the expression 
`print()` is void. That's the type that `doSomething` returns. 
That's the type of the expression that `doSomething` does 
return and the type of the expression following a `return` 
keyword in `doSomething`. Rather than a rule expressly 
permitting this, I would expect to find to either nothing (it's 
permitted because it makes sense) or a rule against it (it's 
expressly forbidden because it has to be to not work, because 
it makes sense).


C, C++, Rust, and Zig are all fine with this. Nim doesn't like 
it.


Wow. Just discovered that C accepts it. After 35 years of daily 
use of C, there are still things to discover.


Re: issue with static foreach

2021-07-22 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 22 July 2021 at 03:43:44 UTC, someone wrote:

Now, if I uncomment those two innocuous commented lines for the 
if (true == true) block:


```d
labelSwitch: switch (lstrExchangeID) {

static foreach (sstrExchangeID; gstrExchangeIDs) {

   mixin(r"case r"d, `"`, sstrExchangeID, `"`, r"d : "d);
   mixin(r"classTickerCustom"d, sstrExchangeID, r" lobjTicker"d, sstrExchangeID, r" = new classTickerCustom"d, sstrExchangeID, r"(lstrSymbolID);"d);

   mixin(r"if (true == true) {"d);
   mixin(r"pobjTickersCustom"d, sstrExchangeID, r" ~= lobjTicker"d, sstrExchangeID, r";"d);
   mixin(r"pobjTickersCommon ~= cast(classTickerCommon) lobjTicker"d, sstrExchangeID, r";"d);

   mixin(r"}"d);
   mixin(r"break labelSwitch;"d);

}

default :

   break;

}
```


What an unreadable mess. Sorry.

I would have done something like that:


```d
import std.format : format;

mixin(format!
`case r"%1$s"d :
    classTickerCustom%1$s lobjTicker%1$s = new classTickerCustom%1$s(lstrSymbolID);

    if (true == true) {
        pobjTickersCustom%1$s ~= lobjTicker%1$s;
        pobjTickersCommon ~= cast(classTickerCommon) lobjTicker%1$s;
    }
    break labelSwitch;`(sstrExchangeID)
);
```

That's easier to edit imho.



Re: wanting to try a GUI toolkit: needing some advice on which one to choose

2021-06-03 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 1 June 2021 at 20:56:05 UTC, someone wrote:
On Tuesday, 1 June 2021 at 16:20:19 UTC, Ola Fosheim Grøstad 
wrote:



[...]


I wasn't considering/referring to content in the browser, this 
is an entirely different arena.


[...]


Thank you! I can only agree.


Re: Recommendations on avoiding range pipeline type hell

2021-05-16 Thread Patrick Schluter via Digitalmars-d-learn

On Sunday, 16 May 2021 at 09:55:31 UTC, Chris Piker wrote:

On Sunday, 16 May 2021 at 09:17:47 UTC, Jordan Wilson wrote:


Another example:
```d
auto r = [iota(1,10).map!(a => a.to!int),iota(1,10).map!(a => 
a.to!int)];

# compile error
```

Hi Jordan

Nice succinct example.  Thanks for looking at the code :)

So, honest question.  Does it strike you as odd that the exact 
same range definition is considered to be two different types?


Even in C
```
typedef struct {
    int a;
} type1;
```
and
```
typedef struct {
    int a;
} type2;
```

are two different types. The compiler will give an error if you 
pass one to a function expecting the other:


```
void fun(type1 v)
{
}

type2 x;

fun(x);  // gives an error
```
See https://godbolt.org/z/eWenEW6q1
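
The same thing happens with the range pipeline: every lambda 
literal is a distinct symbol, so two textually identical `map` 
calls instantiate two distinct types. A small sketch:

```d
import std.algorithm : map;
import std.range : iota;

void main()
{
    auto a = iota(1, 10).map!(x => 2 * x);
    auto b = iota(1, 10).map!(x => 2 * x); // copy-pasted, character for character

    // each lambda is its own template argument, hence its own type
    static assert(!is(typeof(a) == typeof(b)));
}
```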


Maybe that's eminently reasonable to those with deep knowledge, 
but it seems crazy to a new D programmer.  It breaks a general 
assumption about programming when copying and pasting a 
definition yields two things that aren't the same type. (except 
in rare cases like SQL where null != null.)






On a side note, I appreciate that `.array` solves the problem, 
but I'm writing pipelines that are supposed to work on 
arbitrarily long data sets (> 1.4 TB is not uncommon).





Re: Shutdown signals

2021-05-11 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 11 May 2021 at 06:44:57 UTC, Tim wrote:

On Monday, 10 May 2021 at 23:55:18 UTC, Adam D. Ruppe wrote:

[...]


I don't know why I didn't find that. I was searching for the 
full name, maybe too specific? Thanks anyways, this is super 
helpful. I wish it was documented better though :(


So why use sigaction and not signal? From what I can tell 
signal is the C way of doing things


Use `sigaction()`; `signal()` has problems. This Stack Overflow 
question [1] explains the details.


[1]: 
https://stackoverflow.com/questions/231912/what-is-the-difference-between-sigaction-and-signal
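
A minimal Posix sketch of the sigaction() way, using druntime's 
core.sys.posix.signal bindings (error checking elided):

```d
import core.sys.posix.signal;

__gshared bool stop;

extern (C) void onInt(int) nothrow @nogc
{
    stop = true; // only async-signal-safe work belongs in a handler
}

void install()
{
    sigaction_t sa;
    sa.sa_handler = &onInt;   // unlike signal(), stays installed reliably
    sigemptyset(&sa.sa_mask); // don't block extra signals in the handler
    sa.sa_flags = SA_RESTART; // restart interrupted syscalls
    sigaction(SIGINT, &sa, null);
}
```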


Re: News Roundup on the D Blog

2021-03-26 Thread Patrick Schluter via Digitalmars-d-announce

On Friday, 26 March 2021 at 10:21:01 UTC, drug wrote:

On 3/26/21 12:52 PM, Martin Tschierschke wrote:

The view reader comments are all negative about D.


What exactly? Tango vs Phobos? GC? Or something reasonable?


No, just the typical know-it-all w.nkers the heise forum is 
full of.


Re: How to delete dynamic array ?

2021-03-18 Thread Patrick Schluter via Digitalmars-d-learn
On Wednesday, 17 March 2021 at 16:20:06 UTC, Steven Schveighoffer 
wrote:


It's important to understand that [] is just a practical syntax 
for a fat pointer.


Thinking of [] as just a fancy pointer helps, imho, to clarify 
that the nature of the pointed-to memory is independent of the 
pointer itself.
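
A tiny demonstration of that view:

```d
void main()
{
    int[] a = [1, 2, 3];
    assert(a.length == 3 && a.ptr !is null); // a is just {length, ptr}

    int[] b = a[1 .. 3]; // a second fat pointer into the *same* memory
    b[0] = 42;
    assert(a[1] == 42);  // the write is visible through the first slice
}
```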


Re: Endianness - How to test code for portability

2021-03-13 Thread Patrick Schluter via Digitalmars-d-learn

On Friday, 12 March 2021 at 05:53:40 UTC, Preetpal wrote:
In the portability section of the language spec, they talk 
about endianness 
(https://dlang.org/spec/portability.html#endianness)  which 
refers "to the order in which multibyte types are stored." IMO 
if you wanted to actually be sure your code is portable across 
both big endian and little endian systems, you should actually 
run your code on both types of systems and test if there any 
issues.


The problem is that I am not aware of any big-endian systems 
that you can actually test on and if there is any D lang 
compiler support for any of these systems if they exist.


This is not an important issue to me but I was just curious to 
see if anyone actually tests for portability issues related to 
endianness by compiling their D Lang code for a big endian 
architecture and actually running it on that system.


Actual big-endian systems? Not many around anymore:
- SPARC: almost dead
- IBM z/System: still around and not going away, but a D 
implementation is not very likely, as it adds the extra 
difficulty of being EBCDIC rather than ASCII

- AVR32: doesn't look very lively
- Freescale ColdFire (the successor of the 68K): also on a 
descending path

- OpenRISC: superseded by RISC-V

Some CPUs can do both but are generally used in little-endian 
mode (ARM, POWER), or are also obsolete (Alpha, IA64).


While from an intellectual perspective endianness support is a 
good thing, from a purely pragmatic view it is a solved issue. 
Little endian has won, definitively (except on the wire, in the 
TCP/IP headers).
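
And when you do want endian-portable code without a big-endian 
box to test on, pinning the byte order explicitly with 
std.bitmanip removes the host dependency altogether; a sketch:

```d
import std.bitmanip : bigEndianToNative, nativeToBigEndian;

void main()
{
    uint x = 0xDEADBEEF;

    // explicit big-endian (network) byte order: same bytes on any host
    ubyte[4] wire = nativeToBigEndian(x);
    assert(wire == [0xDE, 0xAD, 0xBE, 0xEF]);
    assert(bigEndianToNative!uint(wire) == x);
}
```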


Re: I learned something new in D this week! (anonymous class rundown)

2021-02-19 Thread Patrick Schluter via Digitalmars-d-announce
On Thursday, 18 February 2021 at 04:31:39 UTC, Adam D. Ruppe 
wrote:
Many of you know I've been around D for a long time now and 
picked up a lot of random tricks over the years, so it isn't 
every day I learn about a new old feature in the language's 
basic syntax.


Would you like to know more?

http://dpldocs.info/this-week-in-d/Blog.Posted_2021_02_15.html

I also showed how you can use anonymous classes in betterC 
without any crazy hacks btw!



Lately most my blog posts have just been quick updates on what 
features are coming in my libraries, but I still write these 
more general-knowledge kind of tips from time to time. (and 
sometimes the lib entries can be generally educational too like 
my little attempt at demystifying fibers a couple months ago: 
http://dpldocs.info/experimental-docs/arsd.fibersocket.html#conceptual-overview )


DWT users knew about anonymous classes, as they are used a lot 
there. Of course, as SWT is a Java-based library, D had to have 
the feature to ease the porting.
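
For anyone who hasn't met them, a minimal sketch of the syntax 
(made-up interface; like in Java, the body can even capture 
enclosing locals):

```d
interface Listener
{
    void onEvent(string what);
}

void main()
{
    int hits;
    Listener l = new class Listener
    {
        void onEvent(string what) { ++hits; } // captures the local `hits`
    };
    l.onEvent("click");
    assert(hits == 1);
}
```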


Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1

2021-01-06 Thread Patrick Schluter via Digitalmars-d-announce

On Wednesday, 6 January 2021 at 14:03:14 UTC, Mathias LANG wrote:

On Wednesday, 6 January 2021 at 13:48:52 UTC, angel wrote:
On Wednesday, 6 January 2021 at 09:24:28 UTC, Mike Parker 
wrote:



The Feedback Thread is here:
https://forum.dlang.org/post/qglydztoqxhhcurvb...@forum.dlang.org


Why not "int[auto] arr = [1, 2, 3]" ?
IMHO auto keyword is less ambiguous than $.


Someone else could misunderstand `auto` to mean partial type 
deduction on associative array, e.g. `int[auto] arr = ["Hello": 
ubyte(1), "World": ubyte(2)];`.
Personally, I think `$` is very natural here, but I also didn't 
consider `auto` before.


`$` is very much appropriate imho, as it already implies the 
length of an array; `auto` suggests a type (or storage class) and 
has barely any link to arrays.
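
As a side note, the library can already approximate what the DIP 
asks for, via std.array.staticArray (length inferred from the 
literal, result a true static array):

```d
import std.array : staticArray;

void main()
{
    auto arr = [1, 2, 3].staticArray;
    static assert(is(typeof(arr) == int[3])); // not int[]
}
```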




Re: where is the memory corruption?

2020-12-10 Thread Patrick Schluter via Digitalmars-d-learn

On Wednesday, 9 December 2020 at 21:28:04 UTC, Paul Backus wrote:

On Wednesday, 9 December 2020 at 21:21:58 UTC, ag0aep6g wrote:


D's wchar is not C's wchar_t. D's wchar is 16 bits wide. The 
width of C's wchar_t is implementation-defined. In your case 
it's probably 32 bits.


In D, C's wchar_t is available as `core.stdc.stddef.wchar_t`.

http://dpldocs.info/experimental-docs/core.stdc.stddef.wchar_t.1.html


Don't use wchar_t in C. Its size varies with the implementation: 
on Posix machines (Linux, BSD, etc.) it's 32-bit UTF-32; on 
Windows it's 16-bit UTF-16.
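
In D the difference is easy to absorb at compile time; a tiny 
sketch:

```d
import core.stdc.stddef : wchar_t;

// pick the D character type that matches the C library's wchar_t
static if (wchar_t.sizeof == 4)
    alias NativeWide = dchar; // Posix: UTF-32
else
    alias NativeWide = wchar; // Windows: UTF-16

static assert(wchar_t.sizeof == NativeWide.sizeof);
```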





Re: Return values from auto function

2020-11-07 Thread Patrick Schluter via Digitalmars-d-learn

On Saturday, 7 November 2020 at 15:49:13 UTC, James Blachly wrote:


```
retval = i > 0 ? Success!int(i) : Failure("Sorry");
```

casting each to `Result` compiles, but is verbose:

```
return i > 0 ? cast(Result) Success!int(i) : cast(Result) 
Failure("Sorry");

```

** Could someone more knowledgeable than me explain why 
implicit conversion does not happen with the ternary op, but 
works fine with if/else? Presumably, it is because the op 
returns a single type and implicit conversion is performed 
after computing the expression's return type? If this somehow 
worked, it would make the SumType package much more ergonomic **


It's just that the ternary operator requires the same type in 
both branches. It was already so in C.


return i > 0 ? (retval = Success!int(i)) : (retval = Failure("Sorry"));


should work.


Re: why `top` report is not consistent with the memory freed by core.stdc.stdlib : free?

2020-11-06 Thread Patrick Schluter via Digitalmars-d-learn

On Friday, 6 November 2020 at 06:17:42 UTC, mw wrote:

Hi,

I'm trying this:

https://wiki.dlang.org/Memory_Management#Explicit_Class_Instance_Allocation

using core.stdc.stdlib : malloc and free to manually manage 
memory, I tested two scenarios:


-- malloc & free
-- malloc only

and I use Linux command `top` to check the memory used by the 
program, there is no difference in this two scenarios.


I also tried to use `new` to allocate the objects, and 
GC.free(). The memory number reported by `top` is much less 
than those reported by using core.stdc.stdlib : malloc and free.



I'm wondering why? shouldn't core.stdc.stdlib : malloc and free 
be more raw (low-level) than new & GC.free()? why `top` shows 
stdlib free() is not quite working?




On Linux, stdlib free() normally does not give memory back to the 
system; it stays with the process. top only shows the virtual 
memory granted to that process: when you malloc, VIRT goes up 
(and RES may too), but they only go down if the release is 
explicitly requested.





Re: Getting Qte5 to work

2020-10-28 Thread Patrick Schluter via Digitalmars-d-learn

On Wednesday, 28 October 2020 at 06:52:35 UTC, evilrat wrote:


Just an advice, Qte5 isn't well maintained, the other 
alternatives such as 'dlangui' also seems abandoned, so 
basically the only maintained UI library here is gtk-d, but 
there was recently a nice tutorial series written about it.


DWT is also still active. The look is a little outdated, as it 
is SWT-3 based, but it works just fine.





Re: LDC 1.24.0-beta1

2020-10-26 Thread Patrick Schluter via Digitalmars-d-announce

On Saturday, 24 October 2020 at 00:00:02 UTC, starcanopy wrote:

On Friday, 23 October 2020 at 22:48:33 UTC, Imperatorn wrote:

On Friday, 23 October 2020 at 20:21:39 UTC, aberba wrote:

On Friday, 23 October 2020 at 18:01:19 UTC, Kagamin wrote:

[...]


Not saying Kinke SHOULD do it. Was rather disagreeing with 
the idea that "developers" don't use installers. And that's a 
shortcoming with the LDC project...no straightforward way to 
set it up on Windows using an installer. If visuald supports 
LDC, why not point people to it.


[...]


I agree with this. Not providing an installer gives the 
message that you're not that interested in people using it.


That's an exaggeration. Every release is accompanied by 
binaries that one may easily retrieve. Setting up the 
dependencies is only done once, and if you're a Windows 
developer, such an environment most likely exists, and you'll 
likely only have to add the bin to your path. It's my 
understanding that there are few people regularly working on 
LDC; allocating (voluntary!) manpower to a nice but 
non-essential component doesn't seem wise.


You underestimate how spoiled Windows developers are. Even these 
simple steps are completely out of character for most software on 
the platform. 20 years ago it wasn't a problem; on Windows 10 
it's a whole other story. How many clicks to get to the dialog 
that sets PATH? On NT4 it was 2 clicks; on Windows 10 I still 
haven't figured out how to do it without searching like a madman.


To make it short: the Windows platform is getting more and more 
hostile to manual tuning.


Re: Why was new(size_t s) { } deprecated in favor of an external allocator?

2020-10-15 Thread Patrick Schluter via Digitalmars-d-learn

On Wednesday, 14 October 2020 at 20:32:51 UTC, Max Haughton wrote:

On Wednesday, 14 October 2020 at 20:27:10 UTC, Jack wrote:

What was the reasoning behind this decision?


Andrei's std::allocator talk from a few years ago at cppcon 
covers this (amongst other things)


Yes, and what did he say?
You seriously can't expect people to search for a random talk 
from a random event in a random year.


Re: Why is BOM required to use unicode in tokens?

2020-09-18 Thread Patrick Schluter via Digitalmars-d-learn
On Wednesday, 16 September 2020 at 00:22:15 UTC, Steven 
Schveighoffer wrote:

On 9/15/20 8:10 PM, James Blachly wrote:

On 9/15/20 10:59 AM, Steven Schveighoffer wrote:

[...]


Steve: It sounds as if the spec is correct but the glyph 
(codepoint?) range is outdated. If this is the case, it would 
be a worthwhile update. Do you really think it would be 
rejected out of hand?




I don't really know the answer, as I'm not a unicode expert.

Someone should verify that the character you want to use for a 
symbol name is actually considered a letter or not. Using 
phobos to prove this is kind of self-defeating, as I'm pretty 
sure it would be in league with DMD if there is a bug.


I checked, it's not a letter. None of the math symbols are.



But if it's not a letter, then it would take more than just 
updating the range. It would be a change in the philosophy of 
what constitutes an identifier name.







Re: Symmetry Investments and the D Language Foundation are Hiring

2020-09-02 Thread Patrick Schluter via Digitalmars-d-announce
On Wednesday, 2 September 2020 at 12:50:35 UTC, Steven 
Schveighoffer wrote:

On 9/1/20 2:38 PM, Patrick Schluter wrote:
On Tuesday, 1 September 2020 at 13:28:07 UTC, Steven 
Schveighoffer wrote:

On 9/1/20 5:38 AM, Stefan Koch wrote:

[...]


I have to agree with Jacob -- what common situation is 
changing the timestamps of your files but not the data?



git checkout branch
git checkout -


Is that a part of normal development process? Typically when I 
want incremental building, I'm editing a file, then rebuilding.


I mean, you check out a different branch, but you don't want to 
rebuild everything? I would. And with D, where there are so 
many templates, almost everything is going to need rebuilding 
anyway. This update to dub might replace a build-time problem 
with a build inconsistency problem (hopefully linker error, but 
possibly code generation differences).


Yes, it happens from time to time, and with makefile-based builds 
it causes recompilation of unchanged files. But granted, it could 
be just me.




Re: Symmetry Investments and the D Language Foundation are Hiring

2020-09-01 Thread Patrick Schluter via Digitalmars-d-announce
On Tuesday, 1 September 2020 at 13:28:07 UTC, Steven 
Schveighoffer wrote:

On 9/1/20 5:38 AM, Stefan Koch wrote:
On Tuesday, 1 September 2020 at 09:09:36 UTC, Jacob Carlborg 
wrote:
BTW, is timestamps vs SHA-1 hashing really the most pressing 
issue with Dub?




We think that not recompiling certain modules which have not 
changed will improve our build times.
And the task proposed is actually something that can go in 
without too much struggle.

Whereas deeper issues in dub likely take much longer.


I have to agree with Jacob -- what common situation is changing 
the timestamps of your files but not the data?



git checkout branch
git checkout -





Re: Post: Why no one is using your D library

2020-07-02 Thread Patrick Schluter via Digitalmars-d-announce

On Thursday, 2 July 2020 at 14:56:09 UTC, aberba wrote:

Why no one is using your D library

So I decided to write a little something special. Its my love 
letter to D folks.


https://aberba.vercel.app/2020/why-no-one-is-using-your-d-library/


Thank you. Really good, and I hope devs here will follow your 
advice. It's needed.


Re: Generating struct .init at run time?

2020-07-02 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 2 July 2020 at 07:51:29 UTC, Ali Çehreli wrote:
Normally, struct .init values are known at compile time. 
Unfortunately, they add to binary size:


[...]
memset() is the function you want. The initializer is an image 
generated in the data segment (or in a read-only segment) that 
gets copied to the variable by an internal call to memcpy(). The 
same happens in C, except that compilers are often clever enough 
to replace the copy with a memset().
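
A sketch of the distinction, assuming a struct whose desired init 
state happens to be all-zero bytes (if any field had a non-zero 
default, like float's NaN, you would need the real .init image 
instead):

```d
import core.stdc.string : memset;

struct S
{
    int a;
    float b = 0; // forced to 0; float's default init would be NaN
}

void reset(ref S s) nothrow @nogc
{
    // cheap re-initialization: valid only because S.init is all zeros
    memset(&s, 0, S.sizeof);
}

void main()
{
    auto s = S(42, 3.14f);
    reset(s);
    assert(s == S.init);
}
```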






Re: Talk by Herb Sutter: Bridge to NewThingia

2020-06-29 Thread Patrick Schluter via Digitalmars-d-announce

On Monday, 29 June 2020 at 16:47:27 UTC, jmh530 wrote:

On Monday, 29 June 2020 at 15:44:38 UTC, Patrick Schluter wrote:

[snip]

And that is completely wrong headed.


+1

As much as I'm sympathetic to the arguments for a slim standard 
library, the amount of problems I've had in a corporate setting 
trying to get libraries installed behind firewalls/proxies 
makes me glad for the larger one. Until a few years ago, I had 
to manually download every R library and every single one of 
their dependencies and manage them myself. It's one reason I 
really like run.dlang.org. I've never had a problem with it.


Of course, I see nothing wrong with a high bar for entry into 
the standard library or the sort of promotion/relegation-type 
approach I've heard on the forums.


All packages I install on our Linux servers I have to compile 
from source. All the default paths used by the packages are 
read-only, so everything has to be built with a prefix, including 
all dependencies, on machines with an outdated gcc (4.4.7). In 
that configuration it is impossible to build a recent dmd.
Fortunately I managed to get alternative proxy access that 
allowed me to download the bootstrap D compiler.





Re: Talk by Herb Sutter: Bridge to NewThingia

2020-06-29 Thread Patrick Schluter via Digitalmars-d-announce

On Monday, 29 June 2020 at 12:17:57 UTC, Russel Winder wrote:
On Mon, 2020-06-29 at 10:31 +, IGotD- via 
Digitalmars-d-announce wrote:

Another rant…

…batteries included standard libraries are a thing of the 1990s 
and earlier. They are a reflection of pre-Internet thinking. 
You got a language distribution, and everything else was home 
grown.


Now we have the Internet you can get libraries via download. 
Languages come with the minimal library needed to function and 
all else is gettable. Go, Rust, Python, and other languages 
have this (even though Python still has a batteries included 
standard library). C++ has moved on to this idea; Conan (or 
other system) hasn't really caught on in C++ circles. Ditto 
Fortran, still using pre-2000 thinking.


And that is completely wrong-headed. The Internet is not always 
directly accessible. A lot of companies restrict Internet access 
for their security-sensitive servers, intranets, etc. University 
people often have no clue what is standard in the corporate or 
public-office world. To give an example from the EU Commission: a 
good portion of our servers are isolated on an intranet with very 
restricted Internet access via proxies, for which access has to 
be requested from the IT service. Our intranet is deployed over 
different sites in Europe, but the traffic is not routed over the 
Internet; it goes over a specialized network reserved for public 
institutions in Europe. The few bridges to the Internet in that 
network are surveilled like Fort Knox. There are also special 
rooms throughout our premises that are not even connected to the 
intranet. Building software for these special machines has become 
a real challenge nowadays.
These security measures are not even the strictest I've seen or 
heard of here in Luxembourg, with all its banking companies.


No, the Internet is not always easy-peasy, and having a language 
that can live alone and provide quite a lot of features without 
always calling home is a good thing.
That's why I ranted, several versions ago, when Visual Studio 
was nearly forced upon the user. Visual Studio and all Microsoft 
stuff are extremely difficult to install over a poor Internet 
connection (the setup didn't even accept a proxy).


Sorry, my rant.





Re: "if not" condition check (for data validation)

2020-06-18 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 18 June 2020 at 13:58:33 UTC, Dukc wrote:

On Thursday, 18 June 2020 at 13:57:39 UTC, Dukc wrote:

if (not!(abra && cadabra)) ...

if (not(abra && cadabra)) ...


Which is quite a complicated way to write

if (!(abra && cadabra)) ...



Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Patrick Schluter via Digitalmars-d-announce

On Wednesday, 27 May 2020 at 10:46:11 UTC, Walter Bright wrote:

On 5/27/2020 2:34 AM, Bastiaan Veelo wrote:

On Wednesday, 27 May 2020 at 09:09:58 UTC, Walter Bright wrote:

On 5/26/2020 11:20 PM, Bruce Carneal wrote:
I'm not at all concerned with legacy non-compiling code of 
this nature.


Apparently you agree it is not an actual problem.


Really? I don't know if you really missed the point being 
made, or you're being provocative. Both seem unlikely to me.


His argument was:

"Currently a machine checked @safe function calling an 
unannotated extern C routine will error out during compilation. 
This is great as the C routine was not machine checked, and 
generally can not be checked.  Post 1028, IIUC, the compilation 
will go through without complaint.  This seems quite clear.  
What am I missing?"


I replied that it was unlikely that such legacy code existed.

He replied that he was not concerned about it.

I.e. working legacy code is not going break.


The legacy code is not the issue; it never was.
It was always about unsafe code that will become @safe with that 
DIP.

Safe code is safe, and the DIP doesn't change that.
It's all about UNSAFE code being magically labelled SAFE by the 
compiler while still being UNSAFE in reality.





Re: DIP1028 - Rationale for accepting as is

2020-05-26 Thread Patrick Schluter via Digitalmars-d-announce

On Tuesday, 26 May 2020 at 03:37:29 UTC, Walter Bright wrote:

On 5/25/2020 7:04 PM, Johannes Loher wrote:

[..]


Do you honestly think option 1 is better?


Yes, for reasons I carefully laid out.


which fails to convince anyone because the reasoning is flawed.



> no clues whatsoever

He can look at unattributed declarations.

The whole debate boils down to: is greenwashing better, more 
honest, more debuggable than leaving things unattributed? No, 
on all three counts.


Unattended automatic greenwashing by the compiler is WORSE!


Re: DIP1028 - Rationale for accepting as is

2020-05-24 Thread Patrick Schluter via Digitalmars-d-announce

On Sunday, 24 May 2020 at 03:28:25 UTC, Walter Bright wrote:

I'd like to emphasize:

1. It is not possible for the compiler to check any 
declarations where the implementation is not available. Not in 
D, not in any language. Declaring a declaration safe does not 
make it safe.


2. If un-annotated declarations cause a compile time error, it 
is highly likely the programmer will resort to "greenwashing" - 
just slapping @safe on it. I've greenwashed code. Atila has. 
Bruce Eckel has. We've all done it. Sometimes even for good 
reasons.


3. Un-annotated declarations are easily detectable in a code 
review.


4. Greenwashing is not easily detectable in a code review.

5. Greenwashing doesn't fix anything. The code is not safer. 
It's an illusion, not a guarantee.


6. If someone cares to annotate declarations, it means he has 
at least thought about it, because he doesn't need to. Hence 
it's more likely to be correct than when greenwashed.


7. D should *not* make it worthwhile for people to greenwash 
code.


It is, in a not-at-all obvious way, safer for C declarations to 
default to being safe.


Apparently, you're of the opinion it's better the compiler does 
the greenwashing. Got it!




String interpolation

2020-05-21 Thread Patrick Schluter via Digitalmars-d-learn

https://forum.dlang.org/post/prlulfqvxrgrdzxot...@forum.dlang.org

On Tuesday, 10 November 2015 at 11:22:56 UTC, wobbles wrote:


int a = 1;
int b = 4;
writefln("The number %s is less than %s", a, b);


writeln("The number ",a, " is less than ",b);


Re: Compilation memory use

2020-05-05 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 4 May 2020 at 17:00:21 UTC, Anonymouse wrote:
TL;DR: Is there a way to tell what module or other section of a 
codebase is eating memory when compiling?


[...]


Maybe with the massif tool of valgrind (`valgrind --tool=massif ...`)?


Re: How does one read file line by line / upto a specific delimeter of an MmFile?

2020-03-16 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 16 March 2020 at 13:09:08 UTC, Adnan wrote:

On Sunday, 15 March 2020 at 00:37:35 UTC, H. S. Teoh wrote:
On Sat, Mar 14, 2020 at 10:37:37PM +, Adnan via 
Digitalmars-d-learn wrote:

[...]


That's because a memory-mapped file appears directly in your 
program's memory address space as if it was an array of bytes 
(ubyte[]).  No interpretation is imposed upon the contents.  
If you want lines out of it, try casting the memory to 
const(char)[] and using std.algorithm.splitter to get a range 
of lines. For example:


auto mmfile = new MmFile("myfile.txt");
auto data = cast(const(char)[]) mmfile[];
auto lines = data.splitter("\n");
foreach (line; lines) {
...
}


T


Would it be wasteful to cast the entire content into a const 
string? Can a memory mapped file be read with a buffer?


A string is the same thing as immutable(char)[]. It would make 
no difference to the example above.


Re: DConf 2020 Canceled

2020-03-12 Thread Patrick Schluter via Digitalmars-d-announce

On Wednesday, 11 March 2020 at 20:30:12 UTC, Anonymous wrote:
to all the people dogpiling the responses against Era's point 
of view:


the reason there is not more dissent, whether here or in other 
respectable forums (eg scientific research in general), is 
purely because of social mechanics (ostracization of 
dissenters) - not the inherent unassailable truthfulness of the 
apparent consensus point of view. when contrary information is 
personally and professionally radioactive, is it a wonder 
nobody wants to associate themselves with it?


but here, as in so many elsewheres, "this is not the place." 
I'm already pushing the boundary with this meta-post containing 
no specific assertions, and will almost certainly put Mike in 
the unfortunate position of having to put his foot down in this 
thread (sorry Mike).


I'm just pointing out that, anywhere that people's real life 
identities are tied to what they are saying, there will be an 
artificial consensus around safe, socially sanctioned 
viewpoints. so you all essentially get an unrestricted platform 
to say "lol we're so informed and naysayers are tinfoil-hat 
nutters," but if somebody made a good-faith effort to respond 
to any of your points, messages would start getting deleted and 
the thread would be locked. and far from exceptional, that 
happens EVERYWHERE.


I don't expect any of you /respectable, rational/ people to 
read it, but for the shy dissenters among us, here's a short 
little essay on the circularity of scientific peer review (I am 
not the author):


https://www.reddit.com/r/accountt1234/comments/5umtip/scientific_circular_reasoning/


What, you're saying continents can move, that there's no 
phlogiston and no ether, that dinosaurs did not gradually 
disappear, that washing one's hands could prevent childbed fever, 
and that stomach ulcers are of bacterial origin?

Heretic, to the pyre!
More seriously: these were all examples of career-killing 
"consensus scientific truths"™ that were slowly shown to be 
not that truthful (after a lot of funerals).
So a little bit of caution about the consensus opinion is 
required, especially when that consensus enables billion/trillion-sized 
industries (global warming, pharmacology, etc.).


Re: The Serpent Game Framework - Open Source!!

2020-02-29 Thread Patrick Schluter via Digitalmars-d-announce

On Saturday, 29 February 2020 at 07:18:26 UTC, aberba wrote:

On Thursday, 27 February 2020 at 22:29:41 UTC, aberba wrote:
There's this ongoing open source game framework by Ikey. I 
knew him to be a diehard C guru (from the Solus Project) but 
is now rocking D, hence Serpent.


[...]


Ikey did an interview with It's FOSS and he said something about 
why he uses D. It's interesting and funny as well.


Having done a lot of Go development, I started researching 
alternatives to C that were concurrency-aware, string-sane, and 
packed with a powerful cross-platform standard library. This is 
the part where everyone will automatically tell you to use Rust.


Unfortunately, I’m too stupid to use Rust because the syntax 
literally offends my eyes. I don’t get it, and I never will. 
Rust is a fantastic language and as academic endeavours go, 
highly successful. Unfortunately, I’m too practically minded 
and seek comfort in C-style languages, having lived in that 
world too long. So, D was the best candidate to tick all the 
boxes, whilst having C & C++ interoperability.


Pew! Pew!! Nailed it.

https://itsfoss.com/ikey-doherty-serpent-interview/


from the article

Unfortunately, I’m too stupid to use Rust because the syntax 
literally offends my eyes. I don’t get it, and I never will. Rust 
is a fantastic language and as academic endeavours go, highly 
successful. Unfortunately, I’m too practically minded and seek 
comfort in C-style languages, having lived in that world too 
long. So, D was the best candidate to tick all the boxes, whilst 
having C & C++ interoperability.



That's exactly my sentiment too.


Re: FeedSpot Recognizes the GtkDcoding Blog

2020-02-07 Thread Patrick Schluter via Digitalmars-d-announce

On Thursday, 6 February 2020 at 10:34:16 UTC, Ron Tarrant wrote:
On Tuesday, 4 February 2020 at 22:23:33 UTC, Bastiaan Veelo 
wrote:



Well done!

Bastiaan.


On Tuesday, 4 February 2020 at 19:11:48 UTC, M.M. wrote:


Congratulations!


Thanks, guys. I'm hoping this will help brighten the spotlight 
on the D language. TIOBE (https://archive.ph/E3Xu7) has D 
rising fast in popularity. If I can help in some small way to 
keep this momentum going, then I'm a cappy hamper.


These are exactly the things that have been a little bit missing 
in the D world: usage of it and advertisement of that usage.


Congrats for your blog.


Re: CT regex in AA at compile time

2020-01-07 Thread Patrick Schluter via Digitalmars-d-learn
On Tuesday, 7 January 2020 at 15:40:58 UTC, Taylor Hillegeist 
wrote:

I'm trying to trick the following code snippet into compilation.

enum TokenType{
//Terminal
Plus,
Minus,
LPer,
RPer,
Number,
}

static auto Regexes =[
  TokenType.Plus:   ctRegex!(`^ *\+`),
  TokenType.Minus:  ctRegex!(`^ *\-`),
  TokenType.LPer:   ctRegex!(`^ *\(`),
  TokenType.RPer:   ctRegex!(`^ *\)`),
  TokenType.Number: ctRegex!(`^ *[0-9]+(.[0-9]+)?`)
];

but I can't get it to work. it says its an Error: non-constant 
expression.


I imagine this has to do with the ctRegex template or 
something. maybe there is a better way? Does anyone know?


In that specific case: why don't you use an array indexed by 
TokenType? The TokenType values are consecutive integrals, so 
indexing is the fastest possible access method.
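
A minimal sketch of that approach, reusing the enum and patterns 
from the question. Building the engines once in a module 
constructor with regex() is my own assumption here; only the 
pattern strings need to exist at compile time:

import std.regex;
import std.traits : EnumMembers;

enum TokenType { Plus, Minus, LPer, RPer, Number }

// Patterns as compile-time constants, indexed by TokenType.
static immutable string[TokenType.max + 1] patterns = [
    TokenType.Plus:   `^ *\+`,
    TokenType.Minus:  `^ *\-`,
    TokenType.LPer:   `^ *\(`,
    TokenType.RPer:   `^ *\)`,
    TokenType.Number: `^ *[0-9]+(\.[0-9]+)?`,
];

// Compiled engines, built once at program start.
Regex!char[TokenType.max + 1] regexes;

static this()
{
    foreach (t; EnumMembers!TokenType)
        regexes[t] = regex(patterns[t]);
}

unittest
{
    // Enum values index straight into the array, no hashing involved.
    assert(!matchFirst(" + 3", regexes[TokenType.Plus]).empty);
}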


Re: What kind of Editor, IDE you are using and which one do you like for D language?

2019-12-30 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 30 December 2019 at 14:59:22 UTC, bachmeier wrote:

On Monday, 30 December 2019 at 06:43:03 UTC, H. S. Teoh wrote:


[...]


Another way in which the IDE is "heavy" is the amount of 
overhead for beginning/occasional users. I like that I can get 
someone started using D like this:


1. Open text editor
2. Type simple program
3. Compile by typing a few characters into a terminal/command 
prompt.


An IDE adds a crapload to the learning curve. It's terrible, 
because they need to memorize a bunch of steps when they use a 
GUI (click here -> type this thing in this box -> click here -> 
...)


Back when I was teaching intro econ courses, which are taken by 
nearly all students here, I'd sometimes be talking with 
students taking Java or C++ courses. One of the things that 
really sucked (beyond using Java for an intro programming 
class) was that they'd have to learn the IDE first. Not only 
were they hit with this as the simplest possible program:


public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World");
}
}

but before they even got there, the instructor went through an 
entire lecture teaching them about the IDE. That's an effective 
way to make students think programming is a mind-numbingly 
stupid task on par with reading the phone book.


Contrast that with students opening a text editor, typing 
`print "Hello World"` and then running the program.


IDE support should obviously be made available. I think it 
would be a mistake, however, to move away from the simplicity 
of being able to open a text editor, type in a few lines, and 
then compile and run in a terminal. It's not just beginners. 
This is quite handy for those who will occasionally work with D 
code. For someone in my position (academic research), beginners 
and occasional programmers represents most of the user base.


Good point. It also trains people to be unable to work without an 
IDE. I see it at work with some of the Java devs who aren't even 
able to invoke javac on a command line or set the Java path 
correctly. Why? Because the IDE shielded them from these easy 
things. It also has a corollary: they're not capable of 
implementing sometimes simple protocols or file processing 
without resorting to external libraries. A little bit like the 
people needing an is-even and an is-odd library in JavaScript.


Re: What kind of Editor, IDE you are using and which one do you like for D language?

2019-12-30 Thread Patrick Schluter via Digitalmars-d-learn

On Sunday, 29 December 2019 at 14:41:46 UTC, Russel Winder wrote:
On Sat, 2019-12-28 at 22:01 +, p.shkadzko via 
Digitalmars-d-learn

wrote:
[…]
p.s. I found it quite satisfying that D does not really need 
an IDE, you will be fine even with nano.




The fundamental issue with these all-batteries-included fancy 
IDEs (especially in Java) is that they tend to become 
dependencies of the projects themselves.


How many times have I seen, in my professional world, projects 
that required specific versions of Eclipse with specific versions 
of extensions and libraries?
At my work we have exactly that problem right now. One developer 
wrote one of the desktop apps and has since left the company. My 
colleagues in that department are now struggling to maintain the 
app, as it used some specific GUI libs linked to some Eclipse 
version and they are nowhere to be found. You may object that 
it's a problem of project management, and I would agree. It 
was a management error to let the developer choose the IDE 
solution in the first place. A more classical/portable approach 
would have been preferable.


Furthermore, it is extremely annoying that these IDEs change over 
time: all the fancy stuff goes stale and is replaced with other 
stuff that goes stale in turn.
Visual Studio is one of the worst offenders in that category. 
Every 5 years it changes so much that everything learnt before 
can be thrown away.
IDEs work well for scenarios that the developers of the IDE 
thought of. Anything a little bit different requires changes that 
are either impossible to model or require intimate knowledge of 
the inner workings of the IDE. Visual Studio comes to mind again 
as an example where that is horribly painful (and I won't even 
mention the difficulty of installing such behemoth programs on 
our corporate laptops, which sit behind stupid proxies and must 
follow annoying corporate policy rules).






Re: Blog series to teach and show off D's metaprogramming by creating a JSON serialiser

2019-11-04 Thread Patrick Schluter via Digitalmars-d-announce

On Sunday, 3 November 2019 at 21:35:18 UTC, JN wrote:

On Sunday, 3 November 2019 at 08:37:07 UTC, SealabJaster wrote:

On Sunday, 3 November 2019 at 08:35:42 UTC, SealabJaster wrote:
On Friday, 1 November 2019 at 21:14:56 UTC, SealabJaster 
wrote:

...


Sorry, seems it cut out the first half of that reply.

New posts are out, and I don't want to spam Announce with new 
threads, so I'm just replying to this one.


#1.1 
https://bradley.chatha.dev/Home/Blog?post=JsonSerialiser1_1

#2 https://bradley.chatha.dev/Home/Blog?post=JsonSerialiser2


"This often seems to confuse people at first, especially those 
coming from other languages"


I think what's confusing people is that enum (short for 
ENUMERATION) is suddenly used like a constant/alias.


I don't get why it confuses people.
In all the languages I know (C, C++, Java, Pascal, etc.), enums 
are used to associate compile-time symbols with quantities, i.e. 
to define constants.
When an enumeration consists of only one value, then the 
enumeration is that value itself.
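
A minimal illustration of that reading of enum in D:

// A one-member "enumeration" is just a named compile-time constant:
enum bufSize = 4096;              // manifest constant, no storage at runtime
enum Color { Red, Green, Blue }   // a classic multi-member enumeration
static assert(bufSize == 4096);   // usable wherever a compile-time value is needed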


Re: Silicon Valley C++ Meetup - August 28, 2019 - "C++ vs D: Let the Battle Commence"

2019-08-31 Thread Patrick Schluter via Digitalmars-d-announce

On Tuesday, 27 August 2019 at 19:23:41 UTC, Ali Çehreli wrote:
I will be presenting a comparison of D and C++. RSVP so that we 
know how much food to order:


  https://www.meetup.com/ACCU-Bay-Area/events/263679081/

It will not be streamed live but some people want to record it; 
so, it may appear on YouTube soon.


As always, I have way too many slides. :) The contents are

- Introduction
- Generative programming with D
- Thousand cuts of D
- C++ vs. D
- Soapboxing



Really a pity that the audio is really, really bad. It's 
interesting enough to put up with it, but I had to crank the knob 
up to 11 to be barely able to understand what you were saying.




Re: Template specialized functions creating runtime instructions?

2019-08-21 Thread Patrick Schluter via Digitalmars-d-learn

On Wednesday, 21 August 2019 at 00:11:23 UTC, ads wrote:

On Wednesday, 21 August 2019 at 00:04:37 UTC, H. S. Teoh wrote:
On Tue, Aug 20, 2019 at 11:48:04PM +, ads via 
Digitalmars-d-learn wrote: [...]
2) Deducing the string as you describe would require CTFE 
(compile-time function evaluation), which usually isn't done 
unless the result is *required* at compile-time.  The typical 
way to force this to happen is to store the result into an 
enum:


enum myStr = fizzbuzz!...(...);
writeln(myStr);

Since enums have to be known at compile-time, this forces CTFE 
evaluation of fizzbuzz, which is probably what you're looking 
for here.


T


Thank you for clearing those up. However even if I force CTFE 
(line 35), it doesn't seem to help much.


https://godbolt.org/z/MytoLF


It does.

on line 4113 you have that string

.L.str:
.asciz  
"Buzz\n49\nFizz\n47\n46\nFizzBuzz\n44\n43\nFizz\n41\nBuzz\nFizz\n38\n37\nFizz\nBuzz\n34\nFizz\n32\n31\nFizzBuzz\n29\n28\nFizz\n26\nBuzz\nFizz\n23\n22\nFizz\nBuzz\n19\nFizz\n17\n16\nFizzBuzz\n14\n13\nFizz\n11\nBuzz\nFizz\n8\n7\nFizz\nBuzz\n4\nFizz\n2\n1\n"


and all main() does is call writeln with that string

_Dmain:
pushrax
lea rsi, [rip + .L.str]
mov edi, 203
call@safe void 
std.stdio.writeln!(immutable(char)[]).writeln(immutable(char)[])@PLT

xor eax, eax
pop rcx
ret


You haven't instructed the linker to strip unused code, so the 
functions generated by the templates are still there.
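
For reference, a minimal sketch of the enum-forces-CTFE idiom 
discussed in this thread, assuming a hypothetical fizzbuzz that 
builds its output as one string:

import std.conv : to;

string fizzbuzz(int n)
{
    string s;
    foreach (i; 1 .. n + 1)
    {
        if (i % 15 == 0)     s ~= "FizzBuzz\n";
        else if (i % 3 == 0) s ~= "Fizz\n";
        else if (i % 5 == 0) s ~= "Buzz\n";
        else                 s ~= i.to!string ~ "\n";
    }
    return s;
}

enum result = fizzbuzz(50); // enum forces CTFE; the string is baked into the binary

void main()
{
    import std.stdio : writeln;
    writeln(result); // compiles down to one writeln of a string literal
}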


Re: How should I sort a doubly linked list the D way?

2019-08-14 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 13 August 2019 at 18:28:35 UTC, Ali Çehreli wrote:

On 08/13/2019 10:33 AM, Mirjam Akkersdijk wrote:
> On Tuesday, 13 August 2019 at 14:04:45 UTC, Sebastiaan Koppe
wrote:

>> Convert the nodes into an D array, sort the array with
nodes.sort!"a.x
>> < b.x" and then iterate the array and repair the next/prev
pointers.

If possible, I would go further and ditch the linked list 
altogether: Just append the nodes to an array and then sort the 
array. It has been shown in research, conference presentations, 
and in personal code to be the fastest option in most (or all) 
cases.


> doesn't the nature of the dynamic array slow it down a bit?

Default bounds checking is going to cost a tiny bit, which you 
can turn off after development with a compiler flag. (I still 
wouldn't.)


The only other option that would be faster is an array that's 
sitting on the stack, created with alloca. But it's only for 
cases where the thread will not run out of stack space and the 
result of the array is not going to be used.


> can't I define an array of fixed size, which is dependent on
the input
> of the function?

arr.length = number_of_elements;

All elements will be initialized to the element's default 
value, which happens to be null for pointers. (If we are back 
to linked list Node pointers.)


However, I wouldn't bother with setting length either as the 
cost of automatic array resizing is amortized, meaning that it 
won't hurt the O(1) algorithmic complexity in the general case. 
In the GC case that D uses, it will be even better: because if 
the GC knowns that the neighboring memory block is free, it 
will just add that to the dynamic array's capacity without 
moving elements to the new location.


Summary: Ditch the linked list and put the elements into an 
array. :)




There are several reasons why arrays are nowadays faster than 
doubly linked lists:
- pointer chasing is hard to parallelize and defeats prefetching. 
Each pointer load may cost the full latency to memory (hundreds 
of cycles), and on a multiprocessor machine it may also trigger a 
lot of coherency traffic.
- on 64-bit systems 2 pointers cost 16 bytes. If the payload is 
small, more memory goes into the pointers than into the data.
- when looping over an array, the out-of-order (OoO) machinery 
can parallelize execution beyond loop boundaries.
- reduced allocation, i.e. allocation is done in bulk => faster 
GC for D.


It is only when there are a lot of external references to the 
payload in the list that using an array may become too unwieldy, 
i.e. if moving an element in memory requires updating other 
pointers outside of the list.
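
A minimal sketch of the convert-sort-relink approach quoted 
above, with a hypothetical Node type:

import std.algorithm : sort;

struct Node
{
    int x;
    Node* prev, next;
}

Node* sortList(Node* head)
{
    // Collect the nodes into a dynamic array.
    Node*[] nodes;
    for (auto n = head; n !is null; n = n.next)
        nodes ~= n;
    if (nodes.length == 0) return null;

    nodes.sort!((a, b) => a.x < b.x);

    // Repair the prev/next links in the new order.
    foreach (i, n; nodes)
    {
        n.prev = i == 0 ? null : nodes[i - 1];
        n.next = i + 1 == nodes.length ? null : nodes[i + 1];
    }
    return nodes[0];
}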




Re: Question about ubyte x overflow, any safe way?

2019-08-05 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 5 August 2019 at 18:21:36 UTC, matheus wrote:

On Monday, 5 August 2019 at 01:41:06 UTC, Ali Çehreli wrote:

...
Two examples with foreach and ranges. The 'ubyte.max + 1' 
expression is int. The compiler casts to ubyte (because we 
typed ubyte) in the foreach and we cast to ubyte in the range:

...


Maybe it was a bad example of my part (Using for), and indeed 
using foreach would solve that specific issue, but what I'm 
really looking for if there is a flag or a way to check for 
overflow when assigning some variable.


ubyte u = 260;  // Here should be given some warning or throw 
exception.


It's ubyte, but it could be any other data type.



Yes, no question. It's checkedint that you should use. It was 
written exactly for that purpose.
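
For illustration, a minimal sketch with std.experimental.checkedint 
(the module's home at the time of writing); the Throw hook turns 
a silent wrap-around into an exception:

import std.experimental.checkedint;

void main()
{
    auto u = checked!Throw(ubyte(250));
    try
        u += 10;            // 260 does not fit in a ubyte
    catch (Exception e)
    {
        import std.stdio : writeln;
        writeln("overflow caught: ", e.msg);
    }
}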






DWT doesn't compile with dmd 2.87.0

2019-07-14 Thread Patrick Schluter via Digitalmars-d-dwt
DWT doesn't build anymore with the new compiler. Wasn't DWT 
supposed to be part of the compiler's build job so that 
regressions are caught in time?



dwt 1.0.1+swt-3.4.1: building configuration "windows-win32"...
C:\Users\Patri\AppData\Local\dub\packages\dwt-1.0.1_swt-3.4.1\dwt\org.eclipse.swt.win32.win32.x86\src\org\eclipse\swt\internal\gdip\Gdip.d(478,9):
 Deprecation: The delete keyword has been deprecated.  Use object.destroy() 
(and core.memory.GC.free() if applicable) instead.
C:\Users\Patri\AppData\Local\dub\packages\dwt-1.0.1_swt-3.4.1\dwt\org.eclipse.swt.win32.win32.x86\src\org\eclipse\swt\ole\win32\OleControlSite.d(886,43):
 Error: class `org.eclipse.swt.ole.win32.OleControlSite.OleControlSite` member 
AddRef is not accessible
C:\Users\Patri\AppData\Local\dub\packages\dwt-1.0.1_swt-3.4.1\dwt\org.eclipse.swt.win32.win32.x86\src\org\eclipse\swt\ole\win32\OleControlSite.d(906,43):
 Error: class `org.eclipse.swt.ole.win32.OleControlSite.OleControlSite` member 
AddRef is not accessible
C:\Users\Patri\AppData\Local\dub\packages\dwt-1.0.1_swt-3.4.1\dwt\org.eclipse.swt.win32.win32.x86\src\org\eclipse\swt\widgets\IME.d(506,29):
 Deprecation: The delete keyword has been deprecated.  Use object.destroy() 
(and core.memory.GC.free() if applicable) instead.
C:\D\dmd2\windows\bin\dmd.exe failed with exit code 1.


Re: Is there a way to slice non-array type in @safe?

2019-07-12 Thread Patrick Schluter via Digitalmars-d-learn
On Thursday, 11 July 2019 at 19:35:50 UTC, Stefanos Baziotis 
wrote:

On Thursday, 11 July 2019 at 18:46:57 UTC, Paul Backus wrote:


Casting from one type of pointer to another and slicing a 
pointer are both @system, by design.


Yes, I'm aware, there are no pointers in the code. The pointer 
was used
here because it was the only way to solve the problem (but not 
in @safe).


What's the actual problem you're trying to solve? There may be 
a different way to do it that's @safe.


I want to make an array of bytes that has the bytes of the 
value passed.
For example, if T = int, then I want an array of 4 bytes that 
has the 4
individual bytes of `s1` let's say. For long, an array of 8 
bytes etc.
Ideally, that would work with `ref` (i.e. the bytes of where 
the ref points to).


imho this cannot be @safe on first principles. You gain access 
to the machine representation of a variable, which means you 
bypass the "control" the compiler has over its data. The 
endianness issue alone is enough to give your program different 
behaviour on different implementations. While in practice big 
endian is nearly an extinct species, it is still enough to show 
why that operation is inherently @system and should not be 
considered @safe.
Of course, a @trusted function can be written to take care of 
that, but that is in fact exactly how it should be.
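
A minimal sketch of such a @trusted wrapper (the name is 
illustrative, not from the thread):

// Expose the raw bytes of a value behind one small, auditable function.
// The slice aliases `value`, so it must not outlive it.
ubyte[] bytesOf(T)(ref T value) @trusted
{
    return (cast(ubyte*) &value)[0 .. T.sizeof];
}

@safe unittest
{
    int s1 = 0x01020304;
    auto b = bytesOf(s1);
    assert(b.length == 4);  // byte order depends on the target's endianness
}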


Re: Why are immutable array literals heap allocated?

2019-07-07 Thread Patrick Schluter via Digitalmars-d-learn

On Saturday, 6 July 2019 at 09:56:57 UTC, ag0aep6g wrote:

On 06.07.19 01:12, Patrick Schluter wrote:

On Friday, 5 July 2019 at 23:08:04 UTC, Patrick Schluter wrote:
On Thursday, 4 July 2019 at 10:56:50 UTC, Nick Treleaven 
wrote:

immutable(int[]) f() @nogc {
    return [1,2];
}

[...]


and it cannot optimize it away because it doesn't know what 
the caller wants to do with it. The caller might, in another 
module, invoke it and modify the result; the compiler cannot 
tell. auto a = f(); a[0]++;


f returns immutable. typeof(a) is immutable(int[]). You can't 
do a[0]++.


You're right, I shouldn't post at 1 am.


Re: Why are immutable array literals heap allocated?

2019-07-05 Thread Patrick Schluter via Digitalmars-d-learn

On Friday, 5 July 2019 at 23:08:04 UTC, Patrick Schluter wrote:

On Thursday, 4 July 2019 at 10:56:50 UTC, Nick Treleaven wrote:

immutable(int[]) f() @nogc {
return [1,2];
}

onlineapp.d(2): Error: array literal in `@nogc` function 
`onlineapp.f` may cause a GC allocation


This makes dynamic array literals unusable with @nogc, and 
adds to GC pressure for no reason. What code would break if 
dmd used only static data for [1,2]?


int[] in D is not an array but a fat pointer. Once one realizes 
that, it becomes quite obvious why [1,2] was allocated. 
There is a static array [1,2] somewhere in the binary, but as it 
is assigned to a pointer to mutable data, the compiler has no 
choice but to allocate a mutable copy of that immutable array.


and it cannot optimize it away because it doesn't know what the 
caller wants to do with it. The caller might, in another module, 
invoke it and modify the result; the compiler cannot tell. 
auto a = f(); a[0]++;


Re: Why are immutable array literals heap allocated?

2019-07-05 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 4 July 2019 at 10:56:50 UTC, Nick Treleaven wrote:

immutable(int[]) f() @nogc {
return [1,2];
}

onlineapp.d(2): Error: array literal in `@nogc` function 
`onlineapp.f` may cause a GC allocation


This makes dynamic array literals unusable with @nogc, and adds 
to GC pressure for no reason. What code would break if dmd used 
only static data for [1,2]?


int[] in D is not an array but a fat pointer. Once one realizes 
that, it becomes quite obvious why [1,2] was allocated. There 
is a static array [1,2] somewhere in the binary, but as it is 
assigned to a pointer to mutable data, the compiler has no choice 
but to allocate a mutable copy of that immutable array.
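
A minimal sketch of a @nogc-friendly variant, assuming the data 
really can live in static storage:

immutable(int)[] f() @nogc
{
    static immutable int[2] data = [1, 2];
    return data[];   // no allocation: just a slice over static storage
}

void main()
{
    assert(f() == [1, 2]);
}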


Re: [OT] Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-27 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 21 May 2019 at 02:12:10 UTC, Les De Ridder wrote:

On Sunday, 19 May 2019 at 12:24:28 UTC, Patrick Schluter wrote:

On Saturday, 18 May 2019 at 21:05:13 UTC, Les De Ridder wrote:
On Saturday, 18 May 2019 at 20:34:33 UTC, Patrick Schluter 
wrote:
* hurrah for French keyboard which has a rarely used µ key, 
but none for Ç a frequent character of the language.





That's the lowercase ç. The uppercase Ç is not directly 
composable,


No, note that I said Caps Lock and not Shift. Using Caps Lock it 
outputs a 'Ç' for me (at least on X11 with the French layout).


Does not work on Windows: Caps Lock + ç gives 9 there. I tested 
also on my Linux Mint box and it outputs a lowercase ç with Caps 
Lock.






There are 2 other characters that are not available on the 
french keyboard: œ and Œ. Quite annoying if you sell beef 
(bœuf) and eggs (œufs) in the towns of Œutrange or Œting.


It seems those are indeed not on the French layout at all. 
Might I
suggest using the Belgian layout? It is AZERTY too and has both 
'œ'

and 'Œ'.


No, it hasn't.
I do indeed prefer the Belgian keyboard. It has more composable 
dead-key characters (accents, tildes), and the brackets [{]} and 
other programming characters < > | etc. are better placed than on 
the French keyboard.
Btw æ and Æ are missing as well, but there it's not very 
important, as there are really only very few words in French that 
use them: ex-æquo, curriculum vitæ, et cætera


[OT] Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-19 Thread Patrick Schluter via Digitalmars-d-learn

On Saturday, 18 May 2019 at 21:05:13 UTC, Les De Ridder wrote:
On Saturday, 18 May 2019 at 20:34:33 UTC, Patrick Schluter 
wrote:
* hurrah for French keyboard which has a rarely used µ key, 
but none for Ç a frequent character of the language.





That's the lowercase ç. The uppercase Ç is not directly 
composable, which is annoying, or to illustrate it in French: "Ça 
fait chier". I use Alt+1+2+8 on Windows, but most people do not 
know these ancient OEM-437-based character codes going back to 
the original IBM PC. The newer ANSI-based Alt+0+1+9+9 is one 
keypress longer and I would actually have to learn the code.


There are 2 other characters that are not available on the French 
keyboard: œ and Œ. Quite annoying if you sell beef (bœuf) and 
eggs (œufs) in the towns of Œutrange or Œting.


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-18 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 16 May 2019 at 15:19:03 UTC, Alex wrote:

1 - 17 ms, 553 ╬╝s, and 1 hnsec


That's µs* for microseconds.

* hurrah for the French keyboard, which has a rarely used µ key 
but none for Ç, a frequent character of the language.




WTH!! is there any way to just get a normal u rather than some 
fancy useless asci hieroglyphic? Why don't we have a fancy M? 
and an h?


What's an hnsec anyways?





Re: Compile time mapping

2019-05-12 Thread Patrick Schluter via Digitalmars-d-learn

On Saturday, 11 May 2019 at 15:48:44 UTC, Bogdan wrote:
What would be the most straight-forward way of mapping the 
members of an enum to the members of another enum (one-to-one 
mapping) at compile time?


Here is an example: an Initial enum that creates a Derived enum 
using the same element names but applying a transformation via a 
function foo(), plus adding some other enum elements to the 
Derived one that are not present in the Initial.

It's a little bit clumsy but works very well.
I use this at module level. This makes the Derived enum available 
at compile time so that it can be used to declare variables or 
functions at compile time.




mixin({
  string code = "enum Derived : ulong { " ~
                "init = 0,";  /* We set the dummy init value to 0 */

  static foreach (i; __traits(allMembers, Initial)) {
    code ~= i ~ " = foo(Initial." ~ i ~ "),";
  }
  code ~= "
    ALL   = Whatever,
    THING = 42,
  ";
  return code ~ "}";
}());
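
A self-contained sketch of the same idiom, with a hypothetical 
Initial enum, foo() transformation, and placeholder constants 
filled in (using `none` rather than `init`, which newer compilers 
reject as an enum member name):

enum Initial : ulong { A = 1, B = 2, C = 3 }

ulong foo(ulong v) { return v * 16; }   // any CTFE-able transformation
enum Whatever = 0xFFFF;

mixin({
    string code = "enum Derived : ulong { none = 0,";
    static foreach (i; __traits(allMembers, Initial))
        code ~= i ~ " = foo(Initial." ~ i ~ "),";
    code ~= "ALL = Whatever, THING = 42,";
    return code ~ "}";
}());

static assert(Derived.A == 16);     // A was transformed by foo()
static assert(Derived.THING == 42); // extra member, not in Initial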




Re: DMD different compiler behaviour on Linux and Windows

2019-04-25 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 25 April 2019 at 20:18:28 UTC, Zans wrote:

import std.stdio;

void main()
{
char[] mychars;
mychars ~= 'a';
long index = 0L;
writeln(mychars[index]);
}

Why would the code above compile perfectly on Linux (Ubuntu 
16.04), however it would produce the following error on Windows 
10:


source\app.d(8,21): Error: cannot implicitly convert expression 
index of type long to uint


On both operating systems DMD version is 2.085.0.


The issue here is not Windows vs Linux but 32 bits vs 64 bits.
On 32-bit architectures size_t is defined as uint; since long is 
64 bits wide, conversion from long to uint is a truncating cast, 
and those are not allowed implicitly in D.
It is unfortunate that the D compiler on Windows is still 
delivered by default as a 32-bit binary generating 32-bit code. I 
think the next release will start to deliver the compiler as a 
64-bit binary generating 64-bit code.
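
A minimal sketch of the portable fix: use size_t for the index so 
the same code compiles on both 32-bit and 64-bit targets.

import std.stdio;

void main()
{
    char[] mychars;
    mychars ~= 'a';
    size_t index = 0;   // size_t matches the platform's pointer width
    writeln(mychars[index]);
}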




Re: How to debug long-lived D program memory usage?

2019-04-21 Thread Patrick Schluter via Digitalmars-d-learn

On Thursday, 18 April 2019 at 12:00:10 UTC, ikod wrote:
On Wednesday, 17 April 2019 at 16:27:02 UTC, Adam D. Ruppe 
wrote:
D programs are a vital part of my home computer 
infrastructure. I run some 60 D processes at almost any 
time and have recently been running out of memory.


I usually run program under valgrind in this case. Though it 
will not help you to debug GC problems, but will cut off memory 
leaked malloc-s.


Even with valgrind --tool=massif?


Re: Any easy way to extract files to memory buffer?

2019-03-19 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 18 March 2019 at 23:40:02 UTC, Michelle Long wrote:

On Monday, 18 March 2019 at 23:01:27 UTC, H. S. Teoh wrote:
On Mon, Mar 18, 2019 at 10:38:17PM +, Michelle Long via 
Digitalmars-d-learn wrote:
On Monday, 18 March 2019 at 21:14:05 UTC, Vladimir Panteleev 
wrote:
> On Monday, 18 March 2019 at 21:09:55 UTC, Michelle Long 
> wrote:
> > Trying to speed up extracting some files that I first 
> > have to extract using the command line to files then read 
> > those in...
> > 
> > Not sure what is taking so long. I imagine windows caches 
> > the extraction so maybe it is pointless?

[...]

Why not just use std.mmfile to memory-map the file into memory 
directly? Let the OS take care of actually paging in the file 
data.



T


The files are on disk and there is an external program that 
read them and converts them and then writes the converted files 
to disk then my program reads. Ideally the conversion program 
would take memory instead of disk files but it doesn't.


The file that was written by the first program will be in the 
file cache. The mmap() syscall (and its Windows equivalent) at 
its core only gives access to the OS file cache. This means that 
std.mmfile is the way to go. There will be no reloading from disk 
if the file sizes are within reason.
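
A minimal sketch, assuming a hypothetical name for the 
converter's output file:

import std.mmfile;

void main()
{
    // The pages come straight from the OS file cache; no extra copy.
    scope mmf = new MmFile("converted.dat"); // hypothetical file name
    auto data = cast(const(ubyte)[]) mmf[];
    // ... process data in place ...
}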


Re: Should D file end with newline?

2019-02-15 Thread Patrick Schluter via Digitalmars-d-learn

On Wednesday, 13 February 2019 at 05:13:12 UTC, sarn wrote:
On Tuesday, 12 February 2019 at 20:03:09 UTC, Jonathan M Davis 
wrote:

So, I'd say that it's safe to say that dmd
The whole thing just seems like a weird requirement that 
really shouldn't be there,


Like I said in the first reply, FWIW, it's a POSIX requirement.

Turns out most tools don't care (and dmd is apparently one of 
them).  If you want an easy counterexample, try the wc command 
(it miscounts lines for non-compliant files).  I've never seen 
that break an actual build system, which is why I said you 
could mostly get away with it.  On the other hand, being 
POSIX-compliant always works.


it matters even less if text editors are automatically 
appending newlines to files if they aren't there whether they 
show them or not, since if that's the case, you'd have to 
really work at it to have files not ending with newlines 
anyway.


There are definitely broken text editors out there that won't 
add the newline (can't think of names).  Like Jacob Carlborg 
said, Github flags the files they generate.


hexdump shows a newline followed by a null character followed 
by a newline after the carriage return.


hexdump is printing little-endian 16b by default, so I think 
that's just two newlines followed by a padding byte from 
hexdump.
 Try using the -c or -b flag and you probably won't see any 
null byte.


Curiously, if I create a .cpp or .c file with vim and have it 
end with a curly brace, vim _does_ append a newline followed 
by a null character followed by a newline at the end of the 
file. So, I guess that vim looks at the extension and realizes 
that C/C++ has such a requirement and takes care of it for 
you, but it does not think that .d files need them and adds 
nothing extra for them. It doesn't add anything for a .txt 
file when I tried it either.


Are you sure?  vim is supposed to add the newline for all text 
files because that's POSIX.  It does on my (GNU/Linux) machine.


A lot of fgets()-based tools on Unix systems fail to read the 
last line if it doesn't end with a line feed character. 
Afaicr the glibc implementation does not have that problem, but a 
lot of other standard C libs do.
When we were still on Solaris we had to be very careful with 
that, as strange things could happen when using sed, awk, wc and 
a lot of other standard Unix commands.
Now that we have switched to Linux we don't have the issue 
anymore.


Re: Compiling to 68K processor (Maybe GDC?)

2019-01-20 Thread Patrick Schluter via Digitalmars-d-learn
On Sunday, 20 January 2019 at 09:27:33 UTC, Jonathan M Davis 
wrote:
On Saturday, January 19, 2019 10:45:41 AM MST Patrick Schluter 
via Digitalmars-d-learn wrote:

On Saturday, 19 January 2019 at 12:54:28 UTC, rikki cattermole

wrote:
> [...]

At least 68030 (or 68020+68851) would be necessary for proper 
segfault managing (MMU) and an OS that uses it. Afaict NULL 
pointer dereferencing must fault for D to be "usable". At 
least all code is written with that assumption.


For @safe to work properly, dereferencing null must be @safe, 
which means more or less means that either it results in a 
segfault, or the compiler has to add additional checks to 
ensure that null isn't dereferenced. The situation does get a 
bit more complicated in the details (e.g. calling a non-virtual 
member function on a null pointer or reference wouldn't 
segfault if the object's members are never actually accessed, 
and that's fine, because it doesn't violate @safe), but in 
general, either a segfault must occur, or the compiler has to 
add extra checks so that invalid memory is not accessed. At 
this point, AFAIK, all of the D compilers assume that 
dereferencing null will segfault, and they don't ever add 
additional checks. If an architecture does not segfault when 
dereferencing null, then it will need special handling by the 
compiler, and I don't think that ever happens right now. So, if 
D were compiled on such an architecture, @safe wouldn't provide 
the full guarantees that it's supposed to.




Ok, thanks for the explanation. That said, my statement that a 
PMMU is required for NULL-pointer segfaults is wrong. Even the 
68000 can fault on a NULL dereference, in user mode at least (the 
famous two-bomb bus errors on the Atari ST or guru meditations on 
the Amiga). In privileged mode, though, it's not the case, as 
there is memory at address 0 (the reset vector) that an OS might 
need to access.




Re: Compiling to 68K processor (Maybe GDC?)

2019-01-19 Thread Patrick Schluter via Digitalmars-d-learn
On Saturday, 19 January 2019 at 12:54:28 UTC, rikki cattermole 
wrote:

On 20/01/2019 1:38 AM, Edgar Vivar wrote:

Hi,

I have a project aiming to old 68K processor. While I don't 
think DMD would be able for this on the other hand I think GDC 
can, am I right?


If yes would be any restriction of features to be used? Or the 
compiler would be smart enough to handle this properly?


Edgar V.


Potentially.

D is designed to only work on 32bit+ architectures. The 68k 
series did have 32bit versions of them.


After a quick check it does look like LDC is out as LLVM has 
not yet got support for M68k target. Which is unfortunate 
because with the -betterC flag it could have pretty much out of 
the box worked. Even if you don't have most of D at your 
disposal e.g. classes and GC (but hey old cpu! can't expect 
that).


I have no idea about GDC, but the -betterC flag is pretty 
recent so its support may not be what you would consider first 
class there yet.


At least a 68030 (or 68020+68851) would be necessary for proper 
segfault handling (MMU) and an OS that uses it. Afaict NULL 
pointer dereferencing must fault for D to be "usable". At least 
all code is written with that assumption.


Re: Bitwise rotate of integral

2019-01-08 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 8 January 2019 at 12:35:16 UTC, H. S. Teoh wrote:
On Tue, Jan 08, 2019 at 09:15:09AM +, Patrick Schluter via 
Digitalmars-d-learn wrote:

On Monday, 7 January 2019 at 23:20:57 UTC, H. S. Teoh wrote:

[...]

> [...]
Are you sure it's dmd looking for the pattern? Playing with 
the godbolt link shows that dmd doesn't generate the rol code 
(gdc 4.8.2 doesn't either).


I vaguely remember a bug about this. There is definitely 
explicit checking for this in dmd; I don't remember if it was a 
bug in the pattern matching code itself, or some other problem, 
that made it fail. You may need to specify -O for the code to 
actually be active. Walter could point you to the actual 
function that does this optimization.



I did use the -O flag. The code generated did not use rol.


Re: signed nibble

2019-01-08 Thread Patrick Schluter via Digitalmars-d-learn
On Tuesday, 8 January 2019 at 10:32:25 UTC, Ola Fosheim Grøstad 
wrote:
On Tuesday, 8 January 2019 at 09:30:14 UTC, Patrick Schluter 
wrote:

[...]


Heh, I remember they had a friday-night trivia contest at the 
mid-90s students pub (for natural sciences) where one of the 
questions was the opcode for 6502 LDA (or was it NOP?), and I 
believe I got it right. The opcode for NOP is burned into my 
memory as $EA was used for erasing code during debugging in a 
monitor. And it was also the letters for the big game company 
Electronic Arts...


The cycle counts for 6502 are pretty easy though as they tend 
to be related to the addressing mode and most of them are in 
the range 1-5... No instruction for multiplication or 
division... Oh the fun...


2-7 cycles ;-)


Re: signed nibble

2019-01-08 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 21:46:21 UTC, H. S. Teoh wrote:
On Mon, Jan 07, 2019 at 08:41:32PM +, Patrick Schluter via 
Digitalmars-d-learn wrote:

On Monday, 7 January 2019 at 20:28:21 UTC, H. S. Teoh wrote:
> On Mon, Jan 07, 2019 at 08:06:17PM +0000, Patrick Schluter 
> via Digitalmars-d-learn wrote:

[...]
> > Up to 32 bit processors, shifting was more expensive than 
> > branching.
> 
> Really?  Haha, never knew that, even though I date all the 
> way back to writing assembly on 8-bit processors. :-D
> 
Most of my career was programming for 80186. Shifting by one 
was 2 cycles in register and 15 in memory. Shifting by 4, 9 
cycles for regs/21 for mem. And 80186 was a fast shifter 
compared to 8088/86 or 68000 (8+2n cycles).


I used to hack 6502 assembly code.


Yeah, that's also what I started with, on the Apple II in the 
early 80s. I was quite surprized that my 6502 knowledge came in 
very handy when we worked on dial-in modems in the late 90s as 
the Rockwell modems all used 6502 derived micro-controllers for 
them.


During the PC revolution I wrote an entire application in 8088 
assembly.  Used to know many of the opcodes and cycle counts by 
heart like you do, but it's all but a faint memory now.


I had to lookup the exact cycle counts ;-) . I remember the 
relative costs, more or less, but not the details anymore.




Re: Bitwise rotate of integral

2019-01-08 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 23:20:57 UTC, H. S. Teoh wrote:
On Mon, Jan 07, 2019 at 11:13:37PM +, Guillaume Piolat via 
Digitalmars-d-learn wrote:

On Monday, 7 January 2019 at 14:39:07 UTC, Per Nordlöw wrote:
> What's the preferred way of doing bitwise rotate of an 
> integral value in D?
> 
> Are there intrinsics for bitwise rotation available in LDC?


Turns out you don't need any:

https://d.godbolt.org/z/C_Sk_-

Generates ROL instruction.


There's a certain pattern that dmd looks for, that it 
transforms into a ROL instruction. Similarly for ROR.  Deviate 
too far from this pattern, though, and it might not recognize 
it as it should.  To be sure, always check the disassembly.


Are you sure it's dmd looking for the pattern? Playing with the 
godbolt link shows that dmd doesn't generate the rol code (gdc 
4.8.2 doesn't either).
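
For reference, a minimal sketch of the classic rotate pattern, 
which optimizing compilers may (or, as observed above, may not) 
turn into a single ROL instruction; always check the disassembly:

uint rol(uint x, uint n)
{
    // The & 31 masks keep both shift counts in range,
    // mirroring what the x86 hardware does anyway.
    return (x << (n & 31)) | (x >> ((32 - n) & 31));
}

unittest
{
    assert(rol(0x8000_0001, 1) == 0x0000_0003);
}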




Re: signed nibble

2019-01-07 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 20:28:21 UTC, H. S. Teoh wrote:
On Mon, Jan 07, 2019 at 08:06:17PM +, Patrick Schluter via 
Digitalmars-d-learn wrote:

On Monday, 7 January 2019 at 18:56:17 UTC, H. S. Teoh wrote:
> On Mon, Jan 07, 2019 at 06:42:13PM +0000, Patrick Schluter 
> via Digitalmars-d-learn wrote:

[...]

> > byte b = nibble | ((nibble & 0x40)?0xF0:0);
> 
> This is equivalent to doing a bit comparison (implied by the 
> ? operator).  You can do it without a branch:
> 
> 	cast(byte)(nibble << 4) >> 4
> 
> will use the natural sign extension of a (signed) byte to 
> "stretch" the upper bit.  It just takes 2-3 CPU instructions.
> 

Yeah, my bit-fiddle-fu goes back to pre-barrel-shifter days. 
Up to 32 bit processors, shifting was more expensive than 
branching.


Really?  Haha, never knew that, even though I date all the way 
back to writing assembly on 8-bit processors. :-D


Most of my career was programming for 80186. Shifting by one was 
2 cycles in register and 15 in memory. Shifting by 4, 9 cycles 
for regs/21 for mem. And 80186 was a fast shifter compared to 
8088/86 or 68000 (8+2n cycles).




Re: signed nibble

2019-01-07 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 18:56:17 UTC, H. S. Teoh wrote:
On Mon, Jan 07, 2019 at 06:42:13PM +, Patrick Schluter via 
Digitalmars-d-learn wrote:

On Monday, 7 January 2019 at 17:23:19 UTC, Michelle Long wrote:
> Is there any direct way to convert a signed nibble in to a 
> signed byte with the same absolute value? Obviously I can do 
> some bit comparisons but just curious if there is a very 
> quick way.


byte b = nibble | ((nibble & 0x40)?0xF0:0);


This is equivalent to doing a bit comparison (implied by the ? 
operator).  You can do it without a branch:


cast(byte)(nibble << 4) >> 4

will use the natural sign extension of a (signed) byte to 
"stretch" the upper bit.  It just takes 2-3 CPU instructions.




Yeah, my bit-fiddle-fu goes back to pre-barrel-shifter days. Up 
to 32 bit processors, shifting was more expensive than branching.




Re: signed nibble

2019-01-07 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 18:47:04 UTC, Adam D. Ruppe wrote:
On Monday, 7 January 2019 at 18:42:13 UTC, Patrick Schluter 
wrote:

byte b = nibble | ((nibble & 0x40)?0xF0:0);


don't you mean & 0x80 ?


He asked for signed nybble. So mine is wrong and yours also :-)

It's obviously 0x08 for the highest bit of the low nybble.

byte b = nibble | ((nibble & 0x08)?0xF0:0);
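
A self-contained sketch comparing the two approaches from this 
thread, with the casts needed to actually compile:

byte signExtendBranch(ubyte nibble)
{
    return cast(byte)(nibble | ((nibble & 0x08) ? 0xF0 : 0));
}

byte signExtendShift(ubyte nibble)
{
    // Branch-free: shift the nibble's sign bit into the byte's sign
    // position, then shift back arithmetically.
    return cast(byte)(cast(byte)(nibble << 4) >> 4);
}

unittest
{
    foreach (ubyte n; 0 .. 16)
        assert(signExtendBranch(n) == signExtendShift(n));
    assert(signExtendShift(0xF) == -1);
    assert(signExtendShift(0x7) == 7);
}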


Re: signed nibble

2019-01-07 Thread Patrick Schluter via Digitalmars-d-learn

On Monday, 7 January 2019 at 17:23:19 UTC, Michelle Long wrote:
Is there any direct way to convert a signed nibble in to a 
signed byte with the same absolute value? Obviously I can do 
some bit comparisons but just curious if there is a very quick 
way.


byte b = nibble | ((nibble & 0x40)?0xF0:0);


Re: Bug in shifting

2018-12-19 Thread Patrick Schluter via Digitalmars-d-learn
On Tuesday, 18 December 2018 at 20:33:43 UTC, Rainer Schuetze 
wrote:



On 14/12/2018 02:56, Steven Schveighoffer wrote:

On 12/13/18 7:16 PM, Michelle Long wrote:

byte x = 0xF;
ulong y = x >> 60;


Surely you meant x << 60? As x >> 60 is going to be 0, even 
with a ulong.


It doesn't work as intuitive as you'd expect:

void main()
{
    import std.stdio;   // for writeln
    int x = 256;
    int y = 36;
    int z = x >> y;
    writeln(z);
}

prints "16" without optimizations and "0" with optimizations. 
This happens for x86 architecture because the processor just 
uses the lower bits of the shift count. It is probably the 
reason why the language disallows shifting by more bits than 
the size of the operand.


Yes. On x86, shifting (x >> y) is in reality x >> (y & 0x1F) on 
32-bit operands and x >> (y & 0x3F) on 64-bit operands.


Re: D is helping from porch pirates

2018-12-19 Thread Patrick Schluter via Digitalmars-d-announce

On Wednesday, 19 December 2018 at 08:02:41 UTC, Piotrek wrote:

On Monday, 17 December 2018 at 23:13:18 UTC, Daniel Kozák wrote:

https://gma.abc/2zWvXCl


D supports the bright side of life ;) That's a good spirit. 
Thanks for sharing.


Cheers,
Piotrek


I found that approach more fun
https://www.youtube.com/watch?v=xoxhDk-hwuo


Re: Why does nobody seem to think that `null` is a serious problem in D?

2018-11-21 Thread Patrick Schluter via Digitalmars-d-learn

On Tuesday, 20 November 2018 at 23:14:27 UTC, Johan Engelen wrote:
On Tuesday, 20 November 2018 at 19:11:46 UTC, Steven 
Schveighoffer wrote:

On 11/20/18 1:04 PM, Johan Engelen wrote:


D does not make dereferencing on class objects explicit, 
which makes it harder to see where the dereference is 
happening.


Again, the terms are confusing. You just said the dereference 
happens at a.foo(), right? I would consider the dereference to 
happen when the object's data is used. i.e. when you read or 
write what the pointer points at.


But `a.foo()` is already using the object's data: it is 
accessing a function of the object and calling it. Whether it 
is a virtual function, or a final function, that shouldn't 
matter.


It matters a lot. A virtual function is reached through a table 
whose pointer is stored in the instance, so there is a 
dereference of the this pointer to get the address of the 
function.
For a final function, the address of the function is known at 
compile time and no dereferencing is necessary.


That is a thing a lot of people do not get: a member 
function and a plain function are basically the same thing. What 
distinguishes them is their mangled name. You can call a non-
virtual member function from an assembly source if you know the 
symbol name.
UFCS uses this fact, that member functions and plain functions 
are indistinguishable from an object-code point of view, to fake 
member functions.



There are different ways of implementing class function calls, 
but here often people seem to pin things down to one specific 
way. I feel I stand alone in the D community in treating the 
language in this abstract sense (like C and C++ do, other 
languages I don't know). It's similar to that people think that 
local variables and the function return address are put on a 
stack; even though that is just an implementation detail that 
is free to be changed (and does often change: local variables 
are regularly _not_ stored on the stack [*]).


Optimization isn't allowed to change behavior of a program, yet 
already simple dead-code-elimination would when null 
dereference is not treated as UB or when it is not guarded by a 
null check. Here is an example of code that also does what you 
call a "dereference" (read object data member):

```
class A {
int i;
final void foo() {
int a = i; // no crash with -O
}
}

void main() {
A a;
a.foo();  // dereference happens
}


No. There's no dereferencing. foo does nothing visible and can be 
replaced by a NOP. For the call, no dereferencing required.



```

When you don't call `a.foo()` a dereference, you basically say


Again, no dereferencing for a (final) function call. `a.foo()` is 
the same thing as `foo(a)` by reverse UFCS. The generated code is 
identical. It is only the compiler that will use different 
mangled names.
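
A minimal sketch of that point (the class and function names are 
made up for illustration):

class A
{
    int i;
    final int getFinal() { return i; }   // address known at compile time
    int getVirtual() { return i; }       // fetched through the vtable
}

int get(A a) { return a.i; }             // plain function, same calling shape

void main()
{
    auto a = new A;
    a.i = 42;
    assert(a.getFinal() == get(a));      // same machinery, different mangled names
}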


that `this` is allowed to be `null` inside a class member 
function. (and then it'd have to be normal to do `if (this) 
...` inside class member functions...)


These discussions are hard to do on a mailinglist, so I'll stop 
here. Until next time at DConf, I suppose... ;-)


-Johan

[*] intentionally didn't say where those local variables _are_ 
stored, so that people can solve that little puzzle for 
themselves ;-)





Re: Why is stdio ... stdio?

2018-11-10 Thread Patrick Schluter via Digitalmars-d-learn

On Saturday, 10 November 2018 at 18:47:19 UTC, Chris Katko wrote:

On Saturday, 10 November 2018 at 13:53:14 UTC, Kagamin wrote:

[...]


There is another possibility. Have the website run (fallible) 
heuristics to detect a snippet of code and automatically 
generate it. That would leave the mailing list people 
completely unchanged.


[...]


Simply adopting the markup convention used on Stack Overflow and 
Reddit, where text indented by 4 spaces is formatted as code, 
would already be a good step forward. I do it now even on 
newsgroups like comp.lang.c, the only newsgroup I still use via 
Thunderbird (yeah, for the D groups I prefer the web interface, 
which really is that good, contrary to every other web-based 
newsgroup reader I have ever seen).





[...]




Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Patrick Schluter via Digitalmars-d-announce

On Tuesday, 6 November 2018 at 21:52:30 UTC, H. S. Teoh wrote:
On Tue, Nov 06, 2018 at 07:44:41PM +, Atila Neves via 
Digitalmars-d-announce wrote:
On Tuesday, 6 November 2018 at 18:00:22 UTC, Vladimir 
Panteleev wrote:
> This is a tool + article I wrote in February, but never got 
> around to finishing / publishing until today.
> 
> https://blog.thecybershadow.net/2018/02/07/dmdprof/
> 
> Hopefully someone will find it useful.


Awesome, great work!

I really really hate waiting for the compiler.


OTOH, I really really hate that the compiler, in the name of 
faster compilation, eats up all available RAM and gets 
OOM-killed on a low memory system, so no amount of waiting will 
get me an executable.


Now that the compiler is completely in D, wouldn't it be a good 
idea to activate the GC in the compiler? I know that it requires 
some care when bootstrapping the compiler because of the 
dependencies on the D runtime, but the compiler would be an 
excellent showcase of the advantage of the GC (i.e. dumb, fast 
allocations as long as there's memory, and collection when no 
memory is left, which is miles better than getting OOM-killed).





Re: D Binding to GUI libraries

2018-10-21 Thread Patrick Schluter via Digitalmars-d

On Sunday, 21 October 2018 at 18:24:30 UTC, Jacob Carlborg wrote:

On 2018-10-21 19:29, Russel Winder wrote:

But who apart from Eclipse and JetBrains uses Java for desktop 
GUI

applications?


There's probably a ton of business/enterprise applications that 
are written in Java.


But I don't care for that, that's why I'm using D :)


I do not have Eclipse to check, but the JetBrains IDEs
(at least CLion, GoLand, IntelliJ IDEA, and PyCharm) ship 
Swing, SWT,

and JavaFX in their systems.


Not sure what you mean with "ship" here. Swing and JavaFX are 
shipped with Java.


Eclipse itself is built using SWT.

Swing, and I believe SWT, have somewhat old architectures for 
GUI
frameworks where GTK+, Qt, and wxWidgets have moved on. But 
this may

just be opinion rather than agreed "fact".


I haven't use these other frameworks so I don't know what's 
consider old architecture and modern architecture.



Apart from GtkD on GTK+ systems


Linux doesn't have a "native" GUI in the same sense as macOS 
and Windows.


, and dqml, QtE5, qtD, and dqt on Qt,
and wxD on wxWidgets. Qt and wxWidgets pride themselves on 
being able
to use native frameworks underneath – I have no personal 
evidence as I

only use GNOME, I am not a good data point.


Qt is not native, at least not on macOS. Are any of the Qt D 
bindings actually useful? wxD seems very old, D1 old, is that 
useable?


When I said that DWT is basically the only native D toolkit, I 
failed to also include: up to date (as in working with the 
latest compiler), working and cross-platform.


I like it and I'm looking forward to it getting beyond SWT 3.4.
I ported my Java SWT GUI program to D and it was a breeze to do. 
I didn't even need to change the structure of the app or the 
class hierarchy. Only the file and string handling had to change, 
and that in fact became much more readable and efficient.
There were some difficulties because of compiler issues in 
version 2.07x, but those were resolved and everything went 
smoothly after that.


Re: Shared - Another Thread

2018-10-18 Thread Patrick Schluter via Digitalmars-d
On Thursday, 18 October 2018 at 17:01:46 UTC, Stanislav Blinov 
wrote:

On Thursday, 18 October 2018 at 16:31:33 UTC, Vijay Nayar wrote:

Imagine a simple algorithm that does logic on very long 
numbers, split into bytes.  One multi-threaded implementation 
may use 4 threads.  The first operating on bytes 0, 4, 8, etc.

 The second operating on bytes 1, 5, 9, etc.

In this case, a mutex or lock isn't actually needed, because 
the algorithm itself assures that threads don't collide.


Yes, they do collide. You just turned your cache into a giant 
clusterf**k. Keyword: MESIF.


In that case, partitioning on cache-line boundaries is the least 
that has to be done.
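
A minimal sketch of such partitioning, under the assumption of a 
64-byte cache line; each thread gets a contiguous, cache-line-
sized slice instead of interleaved bytes:

enum cacheLine = 64;

// Split data into contiguous chunks whose sizes are multiples of
// the cache line, so no two threads ever write to the same line.
ubyte[][] partition(ubyte[] data, size_t nThreads)
{
    auto chunk = (data.length / nThreads + cacheLine - 1) / cacheLine * cacheLine;
    if (chunk == 0) chunk = cacheLine;
    ubyte[][] parts;
    for (size_t lo = 0; lo < data.length; lo += chunk)
    {
        auto hi = lo + chunk < data.length ? lo + chunk : data.length;
        parts ~= data[lo .. hi];
    }
    return parts;
}

unittest
{
    auto parts = partition(new ubyte[1000], 4);
    assert(parts.length == 4 && parts[0].length % cacheLine == 0);
}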


Re: Shared - Another Thread

2018-10-18 Thread Patrick Schluter via Digitalmars-d

On Thursday, 18 October 2018 at 16:24:39 UTC, Manu wrote:
On Thu., 18 Oct. 2018, 5:05 am Patrick Schluter via 
Digitalmars-d, < digitalmars-d@puremagic.com> wrote:


On Wednesday, 17 October 2018 at 22:56:26 UTC, H. S. Teoh 
wrote:
>> If something might be used by someone else it's better not 
>> to touch it, unless one has confirmation it is not used by 
>> someone else.

>>
>> This is what shared has to enforce.
>
> Yes.  But how can the compiler statically verify this?  
> Because if it cannot be statically verified, then somewhere 
> along the line we have to trust the programmer. Ergo, it's 
> programming by convention, and we all know how effective 
> that is.

>
and that is exactly what shared is currently doing. Adding the 
rw restriction at least adds a protection for inadvertantly 
changing a shared object, a thing that doesn't exist now.


What cracks me up with Manu's proposal is that it is its 
simplicity and lack of ambition that is criticized the most. 
shared is a clusterfuck, according to what I gathered from the 
forum, I never had yet to use it in my code. Manu's idea makes 
it a little less of a clusterfuck, and people attack the idea 
because it doesn't solve all and everything that's wrong with 
shared. Funny.




Elaborate on this... It's clearly over-ambitious if anything.
What issues am I failing to address? I'm creating a situation 
where using
shared has a meaning, is safe, and doesn't require any unsafe 
interactions,
no casts, etc, for users at any level above the bare metal 
tooling... How

would you improve on that proposition?


No, your proposition is not the issue here. The problem I see is 
the expectation people have of what shared is supposed to do. 
From reading about shared in this forum, I have the impression 
that people expect that just putting shared in front of a 
variable will solve all the concurrency problems in existence.
Your proposition doesn't try to address that utopian goal, and 
that is a good thing imo. Adding the restriction you propose 
makes explicit what was implied but not clearly stated until now.
I'm not good enough in D to add more than a meta-reflection on 
the subject, so I will not follow up on that. I often have the 
impression that a lot of things move slower than necessary 
because of a mentality where the perfect stands in the way of the 
good.


Re: Shared - Another Thread

2018-10-18 Thread Patrick Schluter via Digitalmars-d

On Wednesday, 17 October 2018 at 22:56:26 UTC, H. S. Teoh wrote:
If something might be used by someone else it's better not to 
touch it, unless one has confirmation it is not used by 
someone else.


This is what shared has to enforce.


Yes.  But how can the compiler statically verify this?  Because 
if it cannot be statically verified, then somewhere along the 
line we have to trust the programmer. Ergo, it's programming by 
convention, and we all know how effective that is.


and that is exactly what shared is currently doing. Adding the 
read/write restriction at least adds protection against 
inadvertently changing a shared object, a thing that doesn't 
exist now.


What cracks me up with Manu's proposal is that it is its 
simplicity and lack of ambition that are criticized the most. 
shared is a clusterfuck, according to what I have gathered from 
the forum; I have never yet had to use it in my own code. Manu's 
idea makes it a little less of a clusterfuck, and people attack 
the idea because it doesn't solve each and every thing that's 
wrong with shared. Funny.


Re: D Logic bug

2018-10-12 Thread Patrick Schluter via Digitalmars-d
On Friday, 12 October 2018 at 13:15:22 UTC, Steven Schveighoffer 
wrote:

On 10/12/18 6:06 AM, Kagamin wrote:
On Thursday, 11 October 2018 at 23:17:15 UTC, Jonathan Marler 
wrote:

[...]


That's https://issues.dlang.org/show_bug.cgi?id=14186


Wow, interesting that C precedence is different from C++ here.



It's C++ which is the abnormal one.



Re: D Logic bug

2018-10-12 Thread Patrick Schluter via Digitalmars-d
On Thursday, 11 October 2018 at 23:17:57 UTC, Jonathan M Davis 
wrote:
On Thursday, October 11, 2018 8:35:34 AM MDT James Japherson 
via Digitalmars-d wrote:


Certainly, major languages like C, C++, Java, and C# all do it 
the way that D does, and they all have the same kind of 
precedence for the ternary operator that D does.


No, the odd man out is C++. It's the only one with the precedence 
of the ternary equal to the assignments. All the other languages 
do it like C, i.e. with a higher precedence for ?:


C++ is the annoying one (as always) here.


Re: D Logic bug

2018-10-12 Thread Patrick Schluter via Digitalmars-d
On Thursday, 11 October 2018 at 23:17:15 UTC, Jonathan Marler 
wrote:
On Thursday, 11 October 2018 at 21:57:00 UTC, Jonathan M Davis 
wrote:
On Thursday, October 11, 2018 1:09:14 PM MDT Jonathan Marler 
via Digitalmars-d wrote:

On Thursday, 11 October 2018 at 14:35:34 UTC, James Japherson

wrote:
> [...]

In c++ the ternary operator is the second most lowest 
precedence operator, just above the comma.  You can see a 
table of each operator and their precendence here, I refer to 
it every so often: 
https://en.cppreference.com/w/cpp/language/operator_precedence


Learning that the ternary operator has such a low precedence 
is one of those things that all programmers eventually run 
into...welcome to the club :)


It looks like D has a similar table here 
(https://wiki.dlang.org/Operator_precedence).  However, it 
doesn't appear to have the ternary operator in there. On that 
note, D would take it's precedence order from C/C++ unless 
there's a VERY good reason to change it.


The operator precedence matches in D. Because in principle, C 
code should either be valid D code with the same semantics as 
it had in C, or it shouldn't compile as D code, changing 
operator precedence isn't something that D is going to do 
(though clearly, the ternary operator needs to be added to the 
table). It would be a disaster for porting code if we did.


- Jonathan M Davis


I had a look at the table again, looks like the ternary 
operator is on there, just called the "conditional operator".  
And to clarify, D's operator precedence is close to C/C++ but 
doesn't match exactly.


Please do not conflate C and C++. It is specifically in the 
precedence of the ternary that the two languages differ. It is 
C++, and only C++, which has the unconventional precedence where 
the ternary has the same priority as the assignment operators. 
ALL other C-derived languages give the ternary a higher 
precedence than the assignments.


Re: D Logic bug

2018-10-11 Thread Patrick Schluter via Digitalmars-d
On Thursday, 11 October 2018 at 14:35:34 UTC, James Japherson 
wrote:

Took me about an hour to track this one down!

A + (B == 0) ? 0 : C;

D is evaluating it as

(A + (B == 0)) ? 0 : C;


As it should.




The whole point of the parenthesis was to associate.

I usually explicitly associate precisely because of this!

A + ((B == 0) ? 0 : C);

In the ternary operator it should treat parenthesis directly to 
the left as the argument.


Of course, I doubt this will get fixed but it should be noted 
so other don't step in the same poo.


No. Except for assignment and the assignment operators, the 
ternary operator has the lowest precedence of any operator in D 
(and C, C++, Java, PHP, C#, etc.).
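
A quick check of that precedence, runnable as-is:

void main()
{
    int A = 1, B = 0, C = 5;
    // Without extra parentheses: parsed as (A + (B == 0)) ? 0 : C.
    assert((A + (B == 0) ? 0 : C) == 0);
    // With explicit grouping: A + ((B == 0) ? 0 : C).
    assert((A + ((B == 0) ? 0 : C)) == 1);
}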




