Re: LDC 1.38.0

2024-05-15 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, May 11, 2024 at 01:22:58AM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.38.0. Major changes:
> 
> - Based on D 2.108.1.
> - Support for LLVM 18; the prebuilt packages use v18.1.5.
> - Android: Switch to native ELF TLS, supported since API level 29 (Android
> v10), dropping our former custom TLS emulation (requiring a modified LLVM
> and a legacy ld.bfd linker). The prebuilt packages themselves require
> Android v10+ (armv7a) / v11+ (aarch64) too, and are built with NDK r26d.
> Shared druntime and Phobos libraries are now available
> (`-link-defaultlib-shared`), as on regular Linux.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.38.0
> 
> Thanks to all contributors & sponsors!

Thanks for continuing to bring us this awesome compiler!


--T


Re: Is D programming friendly for beginners?

2024-03-12 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Mar 12, 2024 at 08:40:49PM +, Meta via Digitalmars-d-announce wrote:
[...]
> I think it really depends on the person. My first language was C++, which
> was absolute hell to learn as a complete beginner to programming, but I
> really wanted to learn a language with low-level capabilities that could
> also do gamedev. Learning C++ as my first language was incredibly difficult,
> but it also made the programming parts of my CS degree a breeze - especially
> courses like machine level programming. Nobody else in the class even
> understood what a pointer was for the first couple weeks.

People who are more than casually interested in computers should
have at least some idea of what the underlying hardware is like.
Otherwise the programs they write will be pretty weird.
-- D. Knuth

;-)


T

-- 
Just because you can, doesn't mean you should.


Re: Is D programming friendly for beginners?

2024-03-12 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Mar 12, 2024 at 06:03:43PM +, Lance Bachmeier via 
Digitalmars-d-announce wrote:
> On Tuesday, 12 March 2024 at 17:03:42 UTC, Mike Shah wrote:
> 
> > As a note, the 'which language is best for CS 1' debate has long
> > been debated -- but at least in a school setting, I've found the
> > quality/enthusiasm/encouragement of the teacher to be the most
> > important aspect regardless of language choice.
> 
> As someone that's been teaching beginners to program at a university
> for a long time (but not in a CS department) I've come to see the
> choice of language as largely unimportant. You have to decide what you
> want to teach them and then eliminate the languages that aren't
> suitable. D is one of many languages that would work with the right
> content. Other languages, like C++, add unnecessary overhead and thus
> should not be used.
> 
> It's often said "X is a complicated language" but that's the wrong way
> to look at it. You're teaching a set of programming concepts, not a
> language.  The question is how well a particular language works for
> learning those concepts.

I don't know how CS programs are carried out these days, but back when I
was in university, the choice of language was largely irrelevant, because
the whole point of a programming course isn't to teach you a specific
language, but to teach you the *principles* that underlie programming in
general. There are really only a small handful of different paradigms
that you need to learn; once you've learned the principles behind them,
you can apply them to any language out there.  You wouldn't need anyone
to teach you a new language then; you could just learn it yourself by
applying those same principles.

The rest, as they say, is just details. ;-)


T

-- 
The irony is that Bill Gates claims to be making a stable operating system and 
Linus Torvalds claims to be trying to take over the world. -- Anonymous


Re: LDC 1.37.0

2024-03-05 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Mar 03, 2024 at 02:46:34PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.37.0. Major changes:
> 
> * Based on D 2.107.1.
> * Important fix wrt. if-statement elision on constant condition.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.37.0
> 
> Thanks to all contributors & sponsors!

Awesome, thanks to the LDC team for another fine compiler!


--T


Re: LDC 1.36.0

2024-01-06 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 06, 2024 at 06:03:54PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.36.0. Major changes:
> 
> * Based on D 2.106.1.
> * Support for LLVM 17; the prebuilt packages use v17.0.6.
> * New GDC-compatible CLI options `-fno-{exceptions,moduleinfo,rtti}` to
> selectively enable some `-betterC` effects.
> * Support for sample-based PGO via clang-compatible CLI option
> `-fprofile-sample-use` and `ldc-profgen` tool.
[...]

Awesome! Thanks to everyone involved in making this awesome compiler
available!


T

-- 
I've been around long enough to have seen an endless parade of magic new 
techniques du jour, most of which purport to remove the necessity of thought 
about your programming problem.  In the end they wind up contributing one or 
two pieces to the collective wisdom, and fade away in the rearview mirror. -- 
Walter Bright


Re: Release D 2.106.0

2023-12-04 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Dec 02, 2023 at 06:09:11PM +, Iain Buclaw via 
Digitalmars-d-announce wrote:
> Glad to announce D 2.106.0, ♥ to the 33 contributors.
> 
> This release comes with...
> 
> - In the D language, it is now possible to statically initialize AAs.
[...]

Finally!  Hooray!


T

-- 
That's not a bug; that's a feature!


Re: New DUB documentation

2023-11-22 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 22, 2023 at 09:58:51PM +, Vladimir Marchevsky via 
Digitalmars-d-announce wrote:
> On Wednesday, 22 November 2023 at 21:52:12 UTC, claptrap wrote:
> > A single table of contents type menu would be better IMO, a left
> > sidebar that gives links to all the pages.
> 
> Wouldn't it be too huge? 5 big separate sections, each has a list of
> articles, each article having a number of chapters, sub-chapters,
> sub-sub-chapters...

Could be optionally expanded depending on where you are in the
navigation.


T

-- 
Never criticize a man until you've walked a mile in his shoes. Then when you do 
criticize him, you'll be a mile away and he won't have his shoes.


Re: LDC 1.35.0

2023-10-16 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Oct 15, 2023 at 01:37:30PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.35.0. Major changes:
> 
> * Based on D 2.105.2+.
> * A few important ImportC fixes.
> * Fix GC2Stack optimization regression introduced in v1.24.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.35.0
> 
> Thanks to all contributors & sponsors!

Awesome!  Thanks to everyone involved in bringing us this awesome
compiler.


T

-- 
Javascript is what you use to allow third party programs you don't know 
anything about and doing you know not what to run on your computer. -- Charles 
Hixson


Re: LDC 1.32.0

2023-03-12 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Mar 12, 2023 at 04:11:21PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.32.0. Major changes:
[...]

Awesome!  Big thanks to all the LDC contributors for their hard work to
bring us this awesome compiler.


T

-- 
A bend in the road is not the end of the road unless you fail to make the turn. 
-- Brian White


Re: Hipreme Engine is fully ported to WebAssembly

2023-02-07 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Feb 03, 2023 at 01:41:35PM +, Hipreme via Digitalmars-d-announce 
wrote:
> This has been finished for quite a time but I was polishing some features.
> 
> I'll save this post to 4 things:
> 
> **Advertise that [Hipreme Engine](https://github.com/MrcSnm/HipremeEngine)
> is now supporting 5 platforms**, being those:
> 
> - Xbox Series
> - Windows
> - Linux
> - Android
> - Browser

This is awesome!  Now I definitely have to look into this when I get
around to my web project again.


[...]
> **2. D performance on web**:
> 
> Incredible. This game runs with 5% CPU usage on web, and it still has
> many optimization opportunities that I didn't run for, such as:
[...]

Nice!


[...]
> - What is the slowest part of D? The JS bridge. The bloat is all over
> there, executing code from the bridge is by a magnitude of 10x slower
> than a single side code. Thankfully my engine has already dealt with
> that by every OpenGL call being batched.
[...]

Yeah, just as I expected, the JS bridge is a bottleneck.  But as WASM
matures, hopefully we'll be able to bypass JS: partially at first, and
eventually completely.


T

-- 
Spaghetti code may be tangly, but lasagna code is just cheesy.


Re: WildCAD - a simple 2D drawing application

2023-01-30 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jan 31, 2023 at 11:25:02AM +1300, Richard (Rikki) Andrew Cattermole via 
Digitalmars-d-announce wrote:
> Like this? https://docs.webfreak.org/getting-started/first-steps/

Not bad!  Looks much more promising than the original page for sure.

Still ran into some issues though. The linked "recipes" page contains
tables that are far too wide for the width of the text block, forcing me
to scroll left and right to see the rest of the columns. This makes it
very painful to use.  The Build Settings section, for example, has a
table with overly-wide Name and Arguments columns, such that the
description is invisible. The table is so long that at first glance
there's not even a scrollbar, leaving me wondering what's up with all
that blank space in the page.  The description column is also far too
narrow for the amount of text it contains; this makes it needlessly
cramped and the table longer than necessary.

I realize this is a WIP, but still missing is a page that explains dub's
operational model: what it does exactly, and how this relates to the
basic config items.  The "DUB Reference" tab AFAIK still links to that
1-page infodump I referred to in my other post (or at least some version
of it), so the points I raised still apply to that page.  Needs a lot
more work before this is usable as the official reference.



T

-- 
Кто везде - тот нигде.


Re: WildCAD - a simple 2D drawing application

2023-01-30 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 30, 2023 at 08:39:33PM +, bachmeier via Digitalmars-d-announce 
wrote:
> On Monday, 30 January 2023 at 04:40:48 UTC, Siarhei Siamashka wrote:
> > On Monday, 30 January 2023 at 02:44:50 UTC, bachmeier wrote:
> > > if you put your code in directories that match the modules you
> > > want to import, there's no need for Dub and the corresponding
> > > poorly documented configuration.
> > 
> > What is poorly documented? Can you suggest some documentation
> > improvements?
> 
> That ship has sailed. I've given up on Dub because those who promote
> it aren't interested in criticism. I'll just link to [this
> page](https://dub.pm/package-format-json) and if anyone thinks that's
> acceptable documentation for someone new to the language, there's no
> point having a conversation about it.

+1.  What we need is a dub tutorial that leads you step-by-step how to
set up a project, what to put in the config file, explain what each
config item means (at least for the basic cases), and a FAQ explaining
commonly encountered issues and how to resolve them (or why a particular
use case is not possible).

Yes, all of this information is already there on the linked page. But a
newcomer (1) does not know what 80% of the items there even mean, (2)
does not understand how dub uses this information and what effect each
item has, and (3) does not know which 5 of the 100 or so pieces of
information on the page are relevant to him, right at this moment. The
linked page may arguably be useful as a reference for someone who
already knows dub well, but it's completely opaque to someone who
doesn't know what dub is.

Even as a reference, the information isn't organized in an easily
navigable way. It's basically one giant infodump of the absolute bare
minimum information you need to remind yourself how to do a particular
task, but to someone who doesn't already know dub, it looks disorganized
and thrown together in a completely arbitrary, opaque order.

For example, the very first section talks about "global scope". What's a
"global scope" and why do I need it?  Who knows, who cares, here are all
the settings that go into "global scope", you go figure it out yourself.
Immediately following is the section "sub packages". There is no
explanation of what's a "sub package" and why I might want one until the
second paragraph, which contains a single sentence that's supposed to
magically make me understand why I might want a sub package.  I
basically have to read through the whole thing, digest it, connect the
dotted lines myself, before I can even have an answer to the most basic
of questions, "what is this and why should I care about it?". In the
meantime there's all kinds of random statements aimed at experienced
users, recommendations for best practices without any context or
explanation of why they're recommended (you're expected to figure it out
yourself).

The paragraph before the last code block in this section ("Sub
packages") is a typical example: "It is also possible to define the sub
packages within the root package file, but note that it is generally
discouraged to put the source code of multiple sub packages into the
same source folder." In one sentence the document has managed to (1)
introduce a possible configuration without any explanation of why one
might want to do that and (2) contradict itself by saying this is
generally discouraged. Then (3) the remainder of the paragraph goes on
to completely negate the first part of the first sentence, complete with
a code example of, I guess, what not to do? Why is all of this even
here?!  If something is a discouraged practice, it shouldn't be smack in
my face right in the middle of the docs, occupying at least half of the
entire section(!).  First document what I *should* do, then in a
footnote or a separate page explain what to avoid.  And explain why I
might want a "sub package" instead of assuming I already know what it
is.

And on and on and on. The order of sections is, to put it mildly, hard
to follow.  As an infodump, it's not suitable material to introduce
someone to dub. There's no explanation of common use cases, zero
hand-holding, and poor or missing motivation for anything. There's no
explanation of how to handle common use cases that one might encounter.
Maybe the explanation *is* in there somewhere, but I've no idea where it
is and have to essentially read through the entire thing to find it (and
hope I don't miss it).

As reference material, it's incomplete and could use better navigation.
And clearer explanation and definitions of what each config item does,
instead of 1-sentence descriptions that the reader has to collect then
somehow piece together in his head and magically guess the intent of
each feature. Also ALL corner cases must be covered thoroughly (if it's
to be a reference, it better be thorough!), as well as conceivable use
cases that someone might want but isn't supported. (And an explanation
of why it isn't supported would also be nice.)


T

Re: D Language Foundation Monthly Meeting Summary for December 2022

2023-01-23 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 23, 2023 at 08:43:03PM +, Adam D Ruppe via 
Digitalmars-d-announce wrote:
> On Monday, 23 January 2023 at 20:06:46 UTC, H. S. Teoh wrote:
> > There should be a tool for auto-generating JS wrappers, perhaps even
> > HTML snippets, so that a user literally can just write:
> > 
> > import std; // OK, maybe import std.wasm or something
> > void main() { writeln("Hello, world!"); }
> > and get a webpage that prints that message in a browser window
> > without writing a single line of JS or HTML.
> 
> http://webassembly.arsdnet.net/
> 
> Paste in
> import std.stdio;
> void main() { writeln("hello world"); }
> 
> to the box on that page
> 
> and get
> http://webassembly.arsdnet.net/usertemp

Ahahahaha...  just like Adam to have already dunnit while I'm still
twiddling my fingers wondering how to go about doing it. :-D  Now all we
need is to package your little page up into a dub package or something
(personally I prefer just a tarball) and we're good to go. :-D


> Webassembly is a trash target but like been there done that.

Yeah TBH after dabbling with it a little I realized just how much it was
still dependent on JS to do the heavy lifting.  You can't even pass
strings across the JS/WASM boundary without truckloads of JS
boilerplate.  The C-like API isn't officially part of the WASM standard
yet, and they're still trying to figure out how GC might work. As far as
I'm concerned, it's still early adopter tech, not yet stable enough for
me to invest in.


> Of course there are some caveats in what works, there have been some
> contributions coming in from hipreme recently to extend it a lil.

Nice.  Can it handle WebGL yet?  I betcha that'd be the second question
a newbie to D would ask after asking about WASM. :-P


T

-- 
I see that you JS got Bach.


Re: D Language Foundation Monthly Meeting Summary for December 2022

2023-01-23 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 21, 2023 at 04:29:28AM +, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> __CTFE writeln__
> 
> Razvan next brought up [a PR to implement a `__ctfeWriteln`
> built-in](https://github.com/dlang/dmd/pull/12412). It was currently
> stalled and needed Walter's approval. Walter asked Razvan to email him
> about it. He subsequently approved it.

This may seem like a small item, but it's a landmark!!  The first PR for
this was submitted back in 2011 (https://github.com/dlang/dmd/pull/296),
superseded in 2012 (https://github.com/dlang/dmd/pull/692), revived
in 2016 (https://github.com/dlang/dmd/pull/6101), re-attempted in 2017
(https://github.com/dlang/dmd/pull/7082), submitted in its present form
in Apr 2021 (https://github.com/dlang/dmd/pull/12412), and finally
approved in Dec 2022.  This is monumental!

OTOH, it raises the question: is there any way to improve our present
process so that relatively small features like these don't take 11 years
to get implemented?


[...]
> ### Ali
> Ali reported that he had finished the new D program at work he had
> [told us about in the November
> meeting](https://forum.dlang.org/thread/citxnklerlvqmybyo...@forum.dlang.org).
> It had uncovered a performance issue with `std.file.dirEntries`. As
> for the program, he was happy with the end result.
> 
> He said he'd used `std.parallelism.parallel` again and speculated he's
> probably among the people who've used it most. He said it helps
> tremendously. It's very simple and everything becomes very fast.

Just wanted to chime in here to say that std.parallelism.parallel is
absolutely awesome, and I've been using it in a few of my projects for
what amounts to instant speed-up "for free".

The original design hit the jackpot in making it as easy as possible to turn
a regular foreach loop into a parallel loop: just add .parallel to your
aggregate. This makes it trivial to test the performance gains of
parallelizing any given foreach loop (with independent iterations, of
course). You didn't have to invest a ton of time writing code to
instantiate task managers, task pools, create threads, manage threads,
wait for them to finish, etc.. For highly-specific performance tweaks,
you'd probably want to do all that, but for one-off quick evaluations of
whether a parallel approach is even worth it in the first place, the
design of .parallel is exactly the thing needed. Once you've confirmed
it works, you can, if needed, invest more effort into managing task
pools, etc..  If not, you haven't wasted any effort except writing
`.parallel` -- it's basically zero cost.  And for script-like helper
utilities, .parallel is just the thing you need to get the job done in
the shortest amount of time possible.  No need for anything more
elaborate.
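
A minimal sketch of what this looks like in practice (a contrived
example; the point is that the parallel version differs from the serial
one only by the .parallel):

import std.math : log;
import std.parallelism : parallel;

void main() {
    auto results = new double[](1_000_000);

    // serial version: foreach (i, ref x; results) { ... }
    // parallel version: just append .parallel to the aggregate
    foreach (i, ref x; results.parallel) {
        x = log(i + 1.0);
    }
}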


[...]
> ### Walter
[...]
> He then said that he had noticed in discussions on HN and elsewhere a
> tectonic shift appears to be going on: C++ appears to be sinking.
> There seems to be a lot more negativity out there about it these days.
> He doesn't know how big this is, but it seems to be a major shift.
> People are realizing that there are intractable problems with C++,
> it's getting too complicated, they don't like the way code looks when
> writing C++, memory safety has come to the fore and C++ doesn't deal
> with it effectively, etc.

The inevitable is happening.  Has been happening, just on a smaller
scale.  But it will only grow.


[...]
> Robert thinks Rust has won that game. [...] Rust is also taking over
> some of the web world because it compiles easily to web assembly.

LDC already compiles to WASM.  It's a crucial first step.  But the
usability level of D in WASM is currently wayyy below what it would take
to win people over.  If we want to win this game, we need to get WASM
support to the point that you could in theory just recompile a D program
and have it work in WASM without any change.  Well, excepting, of
course, stuff that WASM fundamentally can't do.

Currently, you can compile individual functions, but you can't have
main(), you can't use Phobos, you can't use the GC, and you need to
write a lot of JS boilerplate to have your WASM D code interact with
anything outside its own little bubble.  Strictly speaking this isn't
D's problem, but that's cold comfort for anyone who wants to develop for
WASM in D.  Yeah, writing JS and HTML is part-and-parcel of targeting
WASM, but why can't we make our packaging better?  There should be a
tool for auto-generating JS wrappers, perhaps even HTML snippets, so
that a user literally can just write:

import std; // OK, maybe import std.wasm or something
void main() { writeln("Hello, world!"); }

and get a webpage that prints that message in a browser window without
writing a single line of JS or HTML.  All the JS boilerplate and HTML
tedium should be automatically taken care of, unless the user overrides
something.

Using WASM with D should be on the level of usability of appending
.parallel to your aggregate to 

Re: ThePath - Convenient lib to deal with paths and files. (Alpha version)

2023-01-17 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Jan 15, 2023 at 01:53:51PM +, Dmytro Katyukha via 
Digitalmars-d-announce wrote:
[...]
> Also, this lib contains function
> [createTempDirectory](https://github.com/katyukha/thepath/blob/master/source/thepath/utils.d),
> that, i think, would be nice to have it in Phobos.

Yes it would be nice.  But there may be security implications.  For
Posix, I see you use mkdtemp, which is secured by the OS / libc
implementor.  But for non-Posix, you used std.random; this is insecure
because std.random is not intended for cryptographic applications, and
anything not designed for crytographic security is vulnerable to
exploits.  Also, you need to be careful with the default permissions
with the temp directory is created; leaving it up to whatever's set in
the user's environment is generally unwise.


> So, the questions are:
> - Do it have sense to convert `Path` to a class? Or keep it as struct?

Struct.  In general, idiomatic D code prefers structs over classes. If
you're not using inheritance and runtime polymorphism, there's no need
to use classes.


> - Do it have sense to convert `Path` to template struct to make it
>   possible to work with other types of strings (except `string` type)?

IMO, this only introduces needless complexity.  For example std.regex
templatizes over char/wchar/dchar, but I've basically never needed to
use anything except the char instantiation.  This needless template
parametrization only adds to std.regex's slow compile times; in
retrospect it was IMO a mistake.  Regular D code should just use strings
(UTF-8) for everything, and convert to wstring at the OS boundary if
you're on Windows and need something to be in UTF-16.  And dstring is
essentially useless; I've not heard of anyone needing to use dstring in
the 10 or so years I've been using D.

Just use string, that's good enough.
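
For instance, something along these lines (a minimal sketch; the helper
name is made up, and it assumes the Win32 bindings shipped with
druntime):

version (Windows) {
    import core.sys.windows.winuser : MessageBoxW, MB_OK;
    import std.utf : toUTF16z;

    // keep UTF-8 strings internally; convert to UTF-16 only at the OS
    // boundary
    void showMessage(string msg) {
        MessageBoxW(null, msg.toUTF16z, "Info".toUTF16z, MB_OK);
    }
}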


> - What are the requirements to place 
> [createTempDirectory](https://github.com/katyukha/thepath/blob/master/source/thepath/utils.d#L11)
>   function in Phobos?

Use Phobos coding style, bring it up to Phobos coding standards.


> - What else could be changed to make it better?
[...]

Probably should always use the libc or OS function for creating a temp
directory; it's generally a bad idea to roll your own when it comes to
creating temporary files or directories, where there can be serious
security implications. Besides insecure random name generation, there
are also timing issues to consider, i.e., if an attacker could
predict the name, he could preemptively create the directory with the
wrong permissions between your call to std.file.exists and
std.file.mkdir, and exploit those permissions to manipulate the
behaviour of your program later.  You need to leverage OS APIs to
guarantee the atomicity of checking for existence and creating the
directory.
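
For the Posix case, the whole thing boils down to something like this (a
minimal sketch; the extern(C) prototype is declared inline here rather
than assuming a particular druntime module, and the helper name is made
up):

version (Posix) {
    import std.string : fromStringz;

    extern (C) char* mkdtemp(char* tmpl);

    string createTempDirSecurely() {
        // mkdtemp atomically creates the directory with mode 0700 and
        // replaces the trailing XXXXXX with a unique suffix
        char[] tmpl = "/tmp/myapp-XXXXXX\0".dup;
        auto p = mkdtemp(tmpl.ptr);
        if (p is null)
            throw new Exception("mkdtemp failed");
        return p.fromStringz.idup;
    }
}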


T

-- 
Ignorance is bliss... until you suffer the consequences!


Re: Good News: Almost all druntime supported on arsd webassembly

2023-01-06 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 06, 2023 at 12:52:43PM +, Hipreme via Digitalmars-d-announce 
wrote:
> Hello people. I have tried working again with adam's wasm minimal
> runtime, and yesterday I was able to make a great progress on it.
[...]
> All those tests are currently passing. That means we almost got all
> the common features from the D Runtime into Arsd custom runtime.
> Meaning that the only thing that would be missing right now being the
> WASI libc. But my engine is not going to use it entirely, only a
> subset of it. So, I would like to say that whoever wants to play with
> it now is able to do it so.
> 
> 
> That being said, I would carefully advice that while I bring those
> implementations, I didn't care about memory leaks, so, it is a runtime
> without GC: careless allocations. But! It is possible to port some
> programs specially if you're already doing house clearing yourself. As
> my engine does not leak memory in loop (as that would make it trigger
> the GC and thus make it slow), it is totally possible to use it.
[...]

This is awesome stuff, thanks for pushing ahead with it!!  Keep this up,
and I might actually decide to use your game engine for my projects. ;-)

The big question I have right now is, what's the status of interfacing
with web APIs such as WebGL?  How much JS glue is needed for that to
work?  My dream is for all of the JS boilerplate to be automated away,
so that I don't have to write a single line of JS for my D project to
work in WASM.


T

-- 
Those who don't understand Unix are condemned to reinvent it, poorly.


Re: Safer Linux Kernel Modules Using the D Programming Language

2023-01-06 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 06, 2023 at 04:07:12AM +, areYouSureAboutThat via 
Digitalmars-d-announce wrote:
[...]
> btw. Linus once said, more or less, that one reason he likes C so much is
> because when he is typing it, he can visualise what assembly will be
> produced (i.e. his mind is always in tune with the code the machine will
> actually run).

That has stopped being true for at least a decade or more. C was
designed to map well to the PDP-11's instruction set; modern CPUs are
completely different beasts, with out-of-order execution, cache
hierarchies, multiple cores, multiple threads per core, expanded
instruction sets, and microcode. Why do you think, for example, that the
kernel uses functions and intrinsics for certain CPU-specific
instructions?
Because nothing in C itself corresponds to them.  The closeness of C to
the CPU is only an illusion.


T

-- 
Don't get stuck in a closet---wear yourself out.


Re: Breaking news: std.uni changes!

2023-01-02 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jan 03, 2023 at 05:13:53PM +1300, Richard (Rikki) Andrew Cattermole via 
Digitalmars-d-announce wrote:
> On 03/01/2023 10:24 AM, Dukc wrote:
> > Other things coming to mind: Bidirectional grapheme iteration,
> > Word break and line break algorithms, lazy normalisation. Indeed,
> > lots of improvement potential.
> 
> I've done word break, "lazy" normalization (so can stop at any point),
> and lazy case insensitive comparison with normalization.
> 
> But: Bidirectional grapheme iteration makes my eye twitch lol.
> 
> My main concern for adding new features is increasing the size of
> Phobos binary for the tables. Most people don't need a lot of these
> optional algorithms, but they do need things like casing to work
> correctly (which makes increased size worth it).

Is there a way to make these tables pay-as-you-go? As in, if you never
call a function that depends on a table, it would not be pulled into the
binary?


T

-- 
They say that "guns don't kill people, people kill people." Well I think the 
gun helps. If you just stood there and yelled BANG, I don't think you'd kill 
too many people. -- Eddie Izzard, Dressed to Kill


Re: text based file formats

2022-12-20 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 20, 2022 at 07:46:36PM +, John Colvin via 
Digitalmars-d-announce wrote:
[...]
> > There's also my little experimental csv parser that was designed to
> > be as fast as possible:
> > 
> > https://github.com/quickfur/fastcsv
> > 
> > However it can only handle input that fits in memory (using std.mmfile
> > is one possible workaround), has a static limit on field sizes, and does
> > not do validation.
[...]
> We use this at work with some light tweaks, it’s done a lot work 

Wow, I never expected it to be actually useful. :-P  Good to know it's
worth something!


T

-- 
They say that "guns don't kill people, people kill people." Well I think the 
gun helps. If you just stood there and yelled BANG, I don't think you'd kill 
too many people. -- Eddie Izzard, Dressed to Kill


Re: text based file formats

2022-12-19 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Dec 19, 2022 at 04:16:57PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
> On 12/19/2022 4:35 AM, Adam D Ruppe wrote:
> > On Monday, 19 December 2022 at 09:55:47 UTC, Walter Bright wrote:
> > > Curious why CSV isn't in the list.
> > 
> > Maybe std.csv is already good enough?
> 
> LOL, learn something every day! I've even written my own, but it isn't very 
> good.

There's also my little experimental csv parser that was designed to be
as fast as possible:

https://github.com/quickfur/fastcsv

However it can only handle input that fits in memory (using std.mmfile
is one possible workaround), has a static limit on field sizes, and does
not do validation.


T

-- 
Debian GNU/Linux: Cray on your desktop.


Re: D Language Foundation October 2022 Quarterly Meeting Summary

2022-11-04 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Nov 04, 2022 at 03:57:05PM +, Bastiaan Veelo via 
Digitalmars-d-announce wrote:
> On Wednesday, 2 November 2022 at 18:20:42 UTC, H. S. Teoh wrote:
> > On Wed, Nov 02, 2022 at 06:11:12PM +, M. M. via
> > Digitalmars-d-announce wrote:
> > > Thank you to Martin Nowak for all his work as release manager. Happy to
> > > hear that someone like Ian took over.
> > 
> > I'm just curious why Martin stepped down. If he doesn't mind sharing
> > the reason.
> 
> From what I've heard, Martin started his own business, which takes up
> all his time.
> 
> Wishing you success, Martin!
[...]

+1, best wishes, Martin!


T

-- 
Obviously, some things aren't very obvious.


Re: D Language Foundation October 2022 Quarterly Meeting Summary

2022-11-02 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 02, 2022 at 06:11:12PM +, M. M. via Digitalmars-d-announce 
wrote:
> On Wednesday, 2 November 2022 at 04:42:06 UTC, Mike Parker wrote:
> > The D Language Foundation's October 2022 meeting was a quarterly,
> > meaning that several industry representatives attended. It took
> > place via Jitsi Meet on October 7, 2022, at 14:00 UTC. The following
> > people attended (those with DLF next to their names are either D
> > Language Foundation board members, paid employees, or affiliated
> > volunteers):
> > 
> > [...]
> 
> Thank you for the summary. It's very informative.
> 
> Thank you to Martin Nowak for all his work as release manager. Happy to
> hear that someone like Ian took over.

I'm just curious why Martin stepped down. If he doesn't mind sharing the
reason.


T

-- 
Knowledge is that area of ignorance that we arrange and classify. -- Ambrose 
Bierce


Re: GCC 12.2 Released (D v2.100.1)

2022-08-19 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 19, 2022 at 11:36:09AM +, Iain Buclaw via 
Digitalmars-d-announce wrote:
> Hi,
> 
> GCC version 12.2 has been released.
[...]
> - Updated the D front-end from v2.100.0-rc1 to v2.100.1.
[...]

:-O  TOTAL AWESOMENESS!!!  Now GDC is officially up-to-date with the
latest version of the language, and no longer has to play second fiddle
to LDC.  I might even start using GDC for my larger projects just to see
how it compares performance-wise to LDC.

Big thanks to Iain for his tireless work all these years to push D into
the GCC toolchain!!


T

-- 
Just because you survived after you did it, doesn't mean it wasn't stupid!


Re: More fun with toStringz and the GC

2022-08-06 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 05, 2022 at 10:14:24PM -0400, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 8/5/22 8:51 PM, Don Allen wrote:
> 
> > And this, from Section 32.2 of the Language Reference Manual:
> > 
> > If pointers to D garbage collector allocated memory are passed to C
> > functions, it's critical to ensure that the memory will not be
> > collected by the garbage collector before the C function is done
> > with it. This is accomplished by:
> > 
> >      Making a copy of the data using core.stdc.stdlib.malloc() and
> >  passing the copy instead.
> >      -->Leaving a pointer to it on the stack (as a parameter or
> > automatic variable), as the garbage collector will scan the stack.<--
> >      Leaving a pointer to it in the static data segment, as the garbage
> > collector will scan the static data segment.
> >      Registering the pointer with the garbage collector with the
> > std.gc.addRoot() or std.gc.addRange() calls.
> > 
> > I did what the documentation says and it does not work.
> 
> I know, I felt exactly the same way in my post on it:
> 
> https://forum.dlang.org/post/sial38$7v0$1...@digitalmars.com
> 
> I even issued a PR to remove the problematic recommendation:
> 
> https://github.com/dlang/dlang.org/pull/3102
> 
> But there was pushback to the point where it wasn't worth it. So I
> closed it.
[...]

IMO this PR should be revived. The one thing worse than no documentation
is misleading documentation.  This state of things should not be allowed
to continue.


T

-- 
Study gravitation, it's a field with a lot of potential.


Re: LDC 1.30.0

2022-07-20 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jul 20, 2022 at 06:43:05PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.30.0. Major changes:
> 
> - Based on D 2.100.1.
> - LLVM for prebuilt packages bumped to v14.0.3. All target architectures
> supported by LLVM are enabled now.
> - Dropped LDC ltsmaster (v0.17.x) as supported host compiler. Like DMD, the
> min D version for bootstrapping is v2.079 (or GDC v9.x) now.
> - Dropped support for LLVM < 9.
> - New LeakSanitizer support via `-fsanitize=leak`.
> - New prebuilt universal macOS package, runnable on both x86_64 and arm64,
> and enabling x86_64/arm64 macOS/iOS cross-compilation targets out of the
> box.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.30.0
> 
> Thanks to all contributors & sponsors!

Awesome stuff!  Thanks LDC team for all their hard work in continuing to
bring us a top-of-the-line D compiler!


--T


Re: A New Game Written in D

2022-05-18 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 19, 2022 at 07:07:45AM +1200, rikki cattermole via 
Digitalmars-d-announce wrote:
> 
> On 19/05/2022 7:03 AM, H. S. Teoh wrote:
> > We keep coming back to this impasse: write barriers.  It's high time
> > somebody implemented this in a dmd fork so that we can actually test
> > out more advanced GC designs.
> 
> No. Not dmd.
> 
> LDC or GDC.
> 
> DMD is not suitable for experimentation due to the situation with
> atomics.
> 
> Advanced GC's may need concurrent data structures like lock-free, and
> you cannot implement them with dmd due to the atomic instructions
> being 3 function calls deep. You get segfaults. Been there done that,
> what a waste of 7 months.

Sounds good, do it in ldc/gdc, then.  Nowadays I only ever use dmd when
I need quick turn-around time anyway.  In terms of codegen it's pretty
lackluster; for production builds my go-to is LDC.


T

-- 
Change is inevitable, except from a vending machine.


Re: A New Game Written in D

2022-05-18 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 19, 2022 at 06:18:58AM +1200, rikki cattermole via 
Digitalmars-d-announce wrote:
> On 19/05/2022 5:51 AM, Kenny Shields wrote:
> > Also, I know that D has some sort of interface for allowing custom
> > GC implementations -- has anyone ever made a functional alternative?
> > Something that takes the incremental approach that you mentioned as
> > opposed to the pause and scan method?
> 
> Severely doubtful, like pretty much all the advanced GC designs, it
> appears to require write barriers (book barely touches on it however).
> 
> https://www.amazon.com/Garbage-Collection-Handbook-Management-Algorithms/dp/1420082795

We keep coming back to this impasse: write barriers.  It's high time
somebody implemented this in a dmd fork so that we can actually test out
more advanced GC designs.


T

-- 
Life would be easier if I had the source code. -- YHL


Re: Library associative array project v0.0.1

2022-05-12 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 12, 2022 at 01:46:19PM -0400, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 5/12/22 1:15 PM, H. S. Teoh wrote:
> > On Wed, May 11, 2022 at 11:31:02AM -0400, Steven Schveighoffer via 
> > Digitalmars-d-announce wrote:
[...]
> > > 1. Proves that a library implementation is possible, also shows
> > > where shortcomings are.
> > 
> > What are the shortcomings that you found?
[...]
> I was surprised at how short the code is once you throw out all the
> TypeInfo BS that is currently in druntime.

Yeah, the TypeInfo stuff takes up a lot of code. And is hackish and
ugly.


> > > For the future:
> > > 
> > > 1. Implement all the things that AAs can do (which are possible,
> > > some are not).
> > 
> > Which things are not possible to do?
> 
> The whole thing how the compiler knows that an outer AA is being used
> to initialize an inner AA.
> 
> e.g. this works currently, but is impossible to hook for a library (I
> think):
> 
> ```d
> int[int][int] aa;
> aa[0][1] = 5;
> ```

I already saw this problem 8 years ago:

https://issues.dlang.org/show_bug.cgi?id=7753

Maybe it's time for us to write a DIP for this? ;-)


> There's also possible problems with qualifiers that are
> yet-to-be-discovered, but usually show up when the compiler is
> cheating.

Ugh. AA + qualifiers == one of the dark dirty corners of D that I don't
like looking at. It's a prime example of an issue that's papered over
half-heartedly with a half-solution that doesn't really solve the
problem:

1. In the bad ole days, AA's used to allow unqualified types for the
key. This wasn't a problem with POD or immutable types, but shows up
when you try to make an AA with, say, a char[] key type. You'd insert a
key into the AA, then accidentally overwrite the array contents, and
suddenly the key "vanishes" from the AA -- because the contents changed
without the AA knowing, so it's still sitting in the old slot with the
old hash value.

2. This problem was brought up, so somebody thought it was a good idea
to implicitly force the key type into const. So when you declared an AA
of type int[char[]] for example, it'd get implicitly converted to
int[const(char[])].  But this doesn't solve anything!  All the const
does is ensure the AA cannot modify the key. But the user still can!!
So the original problem still exists.
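
A minimal sketch of the stale-hash problem (using the const-qualified
key type described above):

void main() {
    int[const(char)[]] aa;   // what int[char[]] effectively becomes
    char[] key = "abc".dup;
    aa[key] = 42;

    key[0] = 'x';            // mutate the key behind the AA's back

    assert("abc" !in aa);    // the stored key no longer compares equal
    assert("xbc" !in aa);    // and the new contents hash to another slot
    assert(aa.length == 1);  // the entry is still there, just unreachable
}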

AA keys really should be immutable, i.e., they should not be modifiable
by *anyone*: not the AA code, not the caller, not anyone else. That's the
only way you can guarantee the consistency of the hash values stored in
the AA vs. the contents of the key.

OTOH, though, you *do* want to accept const-qualified versions of the
key *during lookup*. (It would be onerous to require .idup'ing a char[]
into a string just so you can do a lookup in the AA, for example.) This
gets a bit hairy, though: if the AA entry may not exist and may need to
be created, then the key must be immutable ('cos we'll potentially be
storing a reference to the key in the AA). But if it's a pure lookup
without new entry creation, then const is acceptable.


> > > 2. Look at alternatives to GC for allocation/deallocation.
> > > 3. Possible use with betterC?
> > [...]
> > 
> > Just use malloc/free?  Could have problems with dangling references
> > to buckets, though, if somebody retains the pointer returned by `key
> > in aa` expressions.  Or use ref-counting of some sort.  But hard to
> > do this without changing binary compatibility.
> 
> Yes, the lifetime issues are the real problem with not using the GC.
> Maybe you just avoid the `in` operator in those flavors?
> Instead you can use a match-style operation, something like:
> 
> ```d
> hash.match(k, (ref v) {
>   // work with v
> });
> ```
[...]

That's a great idea.  Should be `scope ref`, though, to avoid the
reference leaking out via a closure / global. ;-)
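
Something along these lines, say (a self-contained sketch using the
built-in AA purely for illustration; the real library's types and names
would differ):

void match(K, V)(V[K] aa, K key, scope void delegate(scope ref V) dg) {
    // scope on both the delegate and its ref parameter keeps the
    // reference from escaping into a closure or global
    if (auto p = key in aa)
        dg(*p);
}

void main() {
    int[string] counts = ["a": 1];
    counts.match("a", delegate(scope ref int v) { v += 1; });
    assert(counts["a"] == 2);
}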


T

-- 
English is useful because it is a mess. Since English is a mess, it maps well 
onto the problem space, which is also a mess, which we call reality. Similarly, 
Perl was designed to be a mess, though in the nicest of all possible ways. -- 
Larry Wall


Re: Library associative array project v0.0.1

2022-05-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 11, 2022 at 11:31:02AM -0400, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> I just spent a couple hours making a library AA solution that is
> binary compatible with druntime's builtin AA.

This is awesome!  Don't have time to look at it in detail right now, but
will definitely keep this in mind.


> The benefits:
> 
> 1. Proves that a library implementation is possible, also shows where
> shortcomings are.

What are the shortcomings that you found?


> 2. Usable at compile time to make an AA that can be used at runtime.

Awesome.


> 3. Much more approachable code than the AA runtime, does not require
> "faking" a typeinfo, dealing with typeinfo in general, or deal with
> magic compiler hooks. This gives a good base to start experimenting
> with.

Awesome.


> For the future:
> 
> 1. Implement all the things that AAs can do (which are possible, some
> are not).

Which things are not possible to do?


> 2. Look at alternatives to GC for allocation/deallocation.
> 3. Possible use with betterC?
[...]

Just use malloc/free?  Could have problems with dangling references to
buckets, though, if somebody retains the pointer returned by `key in
aa` expressions.  Or use ref-counting of some sort.  But hard to do this
without changing binary compatibility.


T

-- 
"You know, maybe we don't *need* enemies." "Yeah, best friends are about all I 
can take." -- Calvin & Hobbes


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 04:48:11PM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 16:37:15 UTC, H. S. Teoh wrote:
> > Why is memory protection the only way to implement write barriers in
> > D?
> 
> Well, it's the only way I know of without making it a major
> backwards-incompatible change. The main restriction in this area is
> that it must continue working with code written in other languages,
> and generally not affect the ABI drastically.

Ah, gotcha.  Yeah, I don't think such an approach would be fruitful (it
was worth a shot, though!).  If D were ever to get write barriers,
they'd have to be in some other form, probably more intrusive in terms
of backwards-compatibility and ABI.


T

-- 
Curiosity kills the cat. Moral: don't be the cat.


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:55:39AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Monday, 9 May 2022 at 00:25:43 UTC, H. S. Teoh wrote:
> > In the past, the argument was that write barriers represented an
> > unacceptable performance hit to D code.  But I don't think this has
> > ever actually been measured. (Or has it?)  Maybe somebody should
> > make a dmd fork that introduces write barriers, plus a generational
> > GC (even if it's a toy, proof-of-concept-only implementation) to see
> > if the performance hit is really as bad as believed to be.
> 
> Implementing write barriers in the compiler (by instrumenting code)
> means that you're no longer allowed to copy pointers to managed memory
> in non-D code. This is a stricter assumption that the current ones we
> have; for instance, copying a struct (which has indirections) with
> memcpy would be forbidden.

Hmm, true.  That puts a big damper on the possibilities... OTOH, if this
could be made an optional feature, then code that we know doesn't need
to, e.g., pass pointers to C code, could take advantage of possibly
better GC strategies.


T

-- 
English has the lovely word "defenestrate", meaning "to execute by throwing 
someone out a window", or more recently "to remove Windows from a computer and 
replace it with something useful". :-) -- John Cowan


Re: Release: serverino - please destroy it.

2022-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 05:52:30AM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> On Sunday, 8 May 2022 at 23:44:42 UTC, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what
> > generational GC is[1], are there any fundamental issues with D to
> > not use one?
> 
> I implemented one a long time ago. The only way to get write barriers
> with D is memory protection. It worked, but unfortunately the write
> barriers caused a severe performance penalty.

Why is memory protection the only way to implement write barriers in D?


> It's possible that it might be viable with more tweaking, or in
> certain applications where most of the heap is not written to; I did
> not experiment a lot with it.

Interesting data point, in any case.


T

-- 
The early bird gets the worm. Moral: ewww...


Re: Release: serverino - please destroy it.

2022-05-08 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 09, 2022 at 12:10:53PM +1200, rikki cattermole via 
Digitalmars-d-announce wrote:
> On 09/05/2022 11:44 AM, Ali Çehreli wrote:
> > While we are on topic :) and as I finally understood what
> > generational GC is[1], are there any fundamental issues with D to
> > not use one?
> 
> This is not a D issue, its an implementation one.
> 
> We don't have write barriers, that's it.
> 
> Make them opt-in and we can have more advanced GC's.
[...]

In the past, the argument was that write barriers represented an
unacceptable performance hit to D code.  But I don't think this has ever
actually been measured. (Or has it?)  Maybe somebody should make a dmd
fork that introduces write barriers, plus a generational GC (even if
it's a toy, proof-of-concept-only implementation) to see if the
performance hit is really as bad as believed to be.


T

-- 
The best way to destroy a cause is to defend it poorly.


Re: GCC 12.1 Released (D v2.100-rc.1)

2022-05-06 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, May 06, 2022 at 11:57:47AM +, Iain Buclaw via 
Digitalmars-d-announce wrote:
> Hi,
> 
> I am proud to announce another major GCC release, 12.1.
> 
> This year, the biggest change in the D front-end is the version bump from
> v2.076.1 to 
> **[v2.100.0-rc.1](https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=b4acfef1342097ceaf10fa935831f8edd7069431)**.
> For the full list of front-end changes, please read the [change log on
> dlang.org](https://dlang.org/changelog/2.100.0.html). As and when DMD
> releases new minor releases of v2.100.x, they will be backported into the
> next minor release of GCC.
[...]

This is AWESOME news!!! Finally, GDC will be able to compile the
up-to-date language. I will be seriously considering using gdc for my
latest projects again.

Huge thanks to Iain for all his hard work through all these years to
make this happen!


T

-- 
If lightning were to ever strike an orchestra, it'd always hit the conductor 
first.


Re: LDC 1.29.0

2022-04-08 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Apr 08, 2022 at 05:42:46AM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.29.0. Major changes:
> 
> * Based on D 2.099.1.
[...]
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.29.0
> 
> Thanks to all contributors & sponsors!

Thanks to the LDC team for continuing to bring us an awesome D compiler!


T

-- 
Tech-savvy: euphemism for nerdy.


Re: D Language Foundation Monthly Meeting Summary for March 2022

2022-04-04 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Apr 05, 2022 at 12:23:54AM +1200, rikki cattermole via 
Digitalmars-d-announce wrote:
[...]
> +1 infer everything!

I agree, in principle.  The ideal is 100% inference.  Unfortunately,
that's unlikely to be actually reachable. Nevertheless, we should
definitely move in the direction of more inference vs. less.


T

-- 
Life is complex. It consists of real and imaginary parts. -- YHL


Re: argparse version 0.7.0 - a CLI parsing library

2022-03-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Mar 18, 2022 at 09:30:57PM +, Adam Ruppe via Digitalmars-d-announce 
wrote:
[...]
> One approach you might consider is a hybrid too, where you have the
> big struct you build out of the individual udas.
> 
> So you work on the big one but you do getBig!decl and it loops through
> the members of the big struct and sees if the same-typed UDAs are on
> decl. If so, it loads them in, if not, it leaves default values.
[...]

Yeah, that's what I thought too. So you could have something like this:

import std.traits : hasUDA;

struct SmallUDA {}
struct AnotherSmallUDA {}

struct AggregateUDAs {
    bool hasSmall;
    bool hasAnotherSmall;

    static typeof(this) opCall(T)() {
        AggregateUDAs result;
        result.hasSmall = hasUDA!(T, SmallUDA);
        result.hasAnotherSmall = hasUDA!(T, AnotherSmallUDA);
        return result;
    }
}

void processUDAs(T)() {
    // Look ma! No need to sprinkle hasUDA everywhere
    enum aggreg = AggregateUDAs.opCall!T();
    ...
    static if (aggreg.hasSmall) { ... }
    ...
    static if (aggreg.hasAnotherSmall) { ... }
    ...
}

@SmallUDA
@AnotherSmallUDA
struct MyInputType { ... }

processUDAs!MyInputType();


T

-- 
Why did the mathematician reinvent the square wheel?  Because he wanted to 
drive smoothly over an inverted catenary road.


Re: argparse version 0.7.0 - a CLI parsing library

2022-03-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Mar 18, 2022 at 06:21:46PM +, Anonymouse via Digitalmars-d-announce 
wrote:
> On Thursday, 17 March 2022 at 19:07:28 UTC, H. S. Teoh wrote:
> > Using independent, orthogonal UDAs may make option specification
> > using your module easier to read. For example, from your docs:
[...]
> > It might also simplify your implementation by having more smaller,
> > independent pieces for each UDA instead of a single complex UDA that
> > handles everything.
> 
> I use UDAs extensively in my project and I've historically been doing
> the multiple-UDA approach you describe. Upon seeing argparse a few
> months back I started rewriting it to use a single UDA, and I found it
> allowed for a simpler implementation (and not the other way around).
> 
> The immediate gains boiled down to that I could now pass what is
> essentially a context struct around at CTFE instead of keeping track
> of multiple variables. Default values are also much easier to manage
> with much fewer `hasUDA`s sprinkled everywhere.

Hmm, interesting indeed!  I should experiment with both approaches to
do a fairer comparison.


T

-- 
Век живи - век учись. А дураком помрёшь.


Re: argparse version 0.7.0 - a CLI parsing library

2022-03-17 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Mar 14, 2022 at 03:06:44AM +, Andrey Zherikov via 
Digitalmars-d-announce wrote:
> Hi everyone,
> 
> I'd like to share that I've published a new version of
> [argparse](https://code.dlang.org/packages/argparse) library.  It's
> got some new features since my [first
> announcement](https://forum.dlang.org/post/zjljbdzfrtcxfiuzo...@forum.dlang.org)
> as well as some bug fixes:
> - Support of the usage without UDAs
> - Custom grouping of arguments on help screen
> - Mutually exclusive arguments
> - Mutually dependent arguments
> - Subcommands
[...]

Very comprehensive library!  Quite similar in concept to my own argument
parsing module that also uses a struct + UDAs for specifying options.
(Though your module is more advanced; mine doesn't handle things like
subcommands.)

I notice that some of your UDAs are pretty complicated, which makes me
wonder if you're aware that it's possible to attach multiple UDAs to a
single declaration.  For example, in my own argument parsing module, I
have a UDA @Alt for specifying an alternative name (usually a
single-character shorthand) to an option, as well as a UDA @Help for
attaching help text to an option.  Here's an example of both being used
for the same declaration:

struct Options {
@Alt("n") // accept `-n` in addition to `--name`
@Help("Name of the object to generate")
string name;
}

Using independent, orthogonal UDAs may make option specification using
your module easier to read. For example, from your docs:

struct T {
    @(NamedArgument
      .PreValidation!((string s) { return s.length > 1 && s[0] == '!'; })
      .Parse!((string s) { return s[1]; })
      .Validation!((char v) { return v >= '0' && v <= '9'; })
      .Action!((ref int a, char v) { a = v - '0'; })
    )
    int a;
}

could be rewritten with multiple UDAs as:

struct T {
    @NamedArgument
    @PreValidation!((string s) { return s.length > 1 && s[0] == '!'; })
    @Parse!((string s) { return s[1]; })
    @Validation!((char v) { return v >= '0' && v <= '9'; })
    @Action!((ref int a, char v) { a = v - '0'; })
    int a;
}

It might also simplify your implementation by having more smaller,
independent pieces for each UDA instead of a single complex UDA that
handles everything.

Also, some of your function literals could use shorthand syntax, e.g.:

.PreValidation!((string s) { return s.length > 1 && s[0] == '!'; })

could be written as:

.PreValidation!(s => s.length > 1 && s[0] == '!')


T

-- 
Tell me and I forget. Teach me and I remember. Involve me and I understand. -- 
Benjamin Franklin


Re: Release D 2.099.0

2022-03-09 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Mar 09, 2022 at 06:46:34PM +, Brian Callahan via 
Digitalmars-d-announce wrote:
[...]
> I'm happy to report that as of DMD 2.099.0, there are 0 lines of diff
> between upstream DMD and the OpenBSD package :)
[...]

Wonderful!


T

-- 
Государство делает вид, что платит нам зарплату, а мы делаем вид, что работаем.


Re: Added copy constructors to "Programming in D"

2022-02-09 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Feb 09, 2022 at 10:28:15AM -0800, Ali Çehreli via 
Digitalmars-d-announce wrote:
[...]
> - const is a promise
> 
> - immutable is a requirement
[...]

Strictly speaking, that's not really an accurate description. :-P  A
more accurate description would be:

- const: I cannot modify the data (but someone else might).

- immutable: I cannot modify the data, AND nobody else can either.

The best way I've found to understand the relationship between const,
immutable, and mutable in D is the following "type inheritance" diagram
(analogous to a class inheritance diagram):

          const
         /     \
  (mutable)   immutable

Const behaves like the "base class" (well, base type, sort of) that
either mutable or immutable can implicitly convert to. Mutable and
immutable, however, are mutually incompatible "derived classes" that
will not implicitly convert to each other. (Of course, the reality is
somewhat more complex than this, but this is a good, simple conceptual
starting point to understanding D's type system.)

Const means whoever holds the reference to the data cannot modify it. So
it's safe to hand them both mutable and immutable data.

Mutable means you are allowed to modify it, so obviously it's illegal to
pass in const or immutable.  Passing mutable to const is OK because the
recipient cannot modify it, even though the caller himself may (since he
holds a mutable reference to it).

Immutable means NOBODY can modify it, not even the caller. I.e.,
*nobody* holds a mutable reference to the data. So you cannot pass
mutable to immutable.  Obviously, it's safe to pass immutable to const
(the callee cannot modify it anyway, so we're OK).  But you cannot pass
const to immutable, because, as stated above, a const reference might be
pointing to mutable data: even though the holder of the reference cannot
himself modify it, it may have come from a mutable reference somewhere
else. Allowing it would break the rule that immutable means *nobody* has
a mutable reference to the data. So that's not allowed.

IOW, "downcasting" in the above "type hierarchy" is not allowed, in
general.
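
To make that concrete, here's a minimal sketch (my own illustration, not
from the book):

    void readOnly(const(int)[] a) {}     // "base type": accepts either below
    void frozen(immutable(int)[] a) {}   // demands truly immutable data

    void main() {
        int[] m = [1, 2, 3];
        immutable(int)[] i = [4, 5, 6];

        readOnly(m);    // OK: mutable -> const
        readOnly(i);    // OK: immutable -> const
        //frozen(m);    // error: mutable does not convert to immutable
        //m = i;        // error: immutable does not convert to mutable
    }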

However, there's a special case where mutable does implicitly convert to
immutable: this is if the mutable reference is unique, meaning that it's
the only reference that exists to that data. In such a case, it's OK to
convert that mutable reference to an immutable one, provided the mutable
reference immediately goes out of scope. After that point, nobody holds
a mutable reference to it anymore, so it fits into the definition of
immutable. This happens when a mutable reference is returned from a pure
function:

MyData createData() pure {
MyData result; // N.B.: mutable
return result;
// mutable reference goes out of scope
}

// OK: function is pure and reference to data is unique
immutable MyData data = createData();

The `pure` ensures that createData didn't cheat and store a mutable
reference to the data in some global variable, so the reference to
MyData that it returns is truly unique. So in this case we allow mutable
to implicitly convert to immutable.

There's also another situation where immutable is allowed to implicitly
convert to mutable: this is when the data is a by-value type containing
no indirections. Essentially, we're making a copy of the immutable data,
so it doesn't matter if we modify the copy, since we're not actually
modifying the original data.
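
For example:

    immutable int answer = 42;
    int copy = answer;  // OK: by-value copy, no indirections involved
    copy = 0;           // modifies only the copy, not `answer`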

//

Now, w.r.t. the original question of when we should use const vs.
immutable:

- For local variables, there's no practical difference between const and
  immutable, because by definition the current function holds the only
  reference to it, so there can't be any mutable reference to it. I
  would just use immutable in this case -- the compiler may be able to
  optimize the code better knowing that there can't be any mutable
  reference anywhere else (though in theory the compiler should have
  already figured this out, since it's a local variable).

- For function parameters, I would always use const over immutable,
  unless there was a reason I want to guarantee that nobody else holds a
  mutable reference to that argument (e.g., I'm storing the reference in
  a data structure that requires the data not to be mutated afterwards).
  Using const makes the function usable with both mutable and immutable
  arguments, which is more flexible when you don't need to guarantee
  that the data will never be changed by anybody (a small sketch follows
  below).

  Some people may prefer `in` instead of const for function parameters:
  it's more self-documenting, and if you use -preview=in, it means
  `const scope`, which adds an additional check that you don't
  accidentally leak reference to parameters past the scope of the
  function.
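
Here's a small sketch of the above advice (the function names are made
up, just for illustration):

    // const parameter: callable with char[], const(char)[] and string alike
    size_t countSpaces(const(char)[] s) {
        size_t n;
        foreach (c; s)
            if (c == ' ') ++n;
        return n;
    }

    // immutable parameter: only when you must guarantee nobody mutates the
    // data later, e.g. because you keep a reference to it around
    struct Registry {
        string[] keys;                          // string == immutable(char)[]
        void add(string key) { keys ~= key; }
    }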


T

-- 
Windows 95 was a joke, and Windows 98 was the punchline.


Re: On the D Blog: A Gas Dynamics Toolkit in D

2022-02-02 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Feb 02, 2022 at 11:40:09AM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
[...]
> D error messages can be bad. Especially when you are using lots of
> range wrappers. It all depends on what you use.
[...]

True. I've had my fair share of WAT moments with D error messages.

The worst IME are the ones coming from nested templates with lambdas,
like when you use lots of range wrappers like you said. No thanks to the
way dmd handles speculative compilation by gagging errors, a single typo
in a lambda causes an error that gets gagged, then no thanks to D's
equivalent of SFINAE the compiler then proceeds to attempt to
instantiate completely unintended, unrelated template overloads, going
deep into a rabbit hole that ultimately ends with an obscure error deep
inside a totally unrelated overload that doesn't give the slightest clue
as to what the real error is.

I've often had to resort to manually instantiating templates, or worse,
rewriting the lambda as an actual function and instantiating it
manually, in order to discern what the error is. There's of course
-verrors=spec, but that ungags ALL gagged errors, which in a non-trivial
program results in the actual error being drowned in an ocean of totally
unrelated errors.
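
To illustrate the workaround (a made-up example, but representative):
instead of burying the logic in a lambda deep inside a UFCS chain, pull
it out into a named function, so that any error inside it is reported
directly instead of being gagged and discarded along with the overload:

    import std.algorithm : filter, map;
    import std.array : array;

    // lambda version: if the lambda had a typo, the error would surface
    // somewhere else as an obscure "cannot deduce function from argument
    // types" message
    auto squaresOfEvens(int[] xs) {
        return xs.filter!(x => x % 2 == 0)
                 .map!(x => x * x)
                 .array;
    }

    // named-function version: the compiler points at the real culprit
    int square(int x) { return x * x; }
    auto squaresOfEvens2(int[] xs) {
        return xs.filter!(x => x % 2 == 0)
                 .map!square
                 .array;
    }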


> But C++ is a low bar ;)
[...]

On Wed, Feb 02, 2022 at 04:40:04PM +, Adam D Ruppe via 
Digitalmars-d-announce wrote:
[...]
> No incompatibility there: "better than C++" is a very low bar.

No argument there. :-P


T

-- 
Heads I win, tails you lose.


Re: On the D Blog: A Gas Dynamics Toolkit in D

2022-02-02 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Feb 02, 2022 at 08:14:32AM +, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> https://dlang.org/blog/2022/02/02/a-gas-dynamics-toolkit-in-d/
[...]

Favorite quote:

"Good error messages from the compiler. We often used to be
overwhelmed by the C++ template error messages that could run to
hundreds of lines.  The D compilers have been much nicer to us
and we have found the “did you mean” suggestions to be quite
useful."

Interesting that the author(s) found D error messages better than C++,
in spite of frequent complaints about error messages here in the forums.
:-P


T

-- 
Famous last words: I *think* this will work...


Re: D Language Quarterly Meeting Summary for January 2021

2022-01-24 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 24, 2022 at 10:56:57PM +, Elronnd via Digitalmars-d-announce 
wrote:
> On Friday, 21 January 2022 at 12:55:58 UTC, ag0aep6g wrote:
> > I still believe it should be fairly simple:
> > 
> > https://forum.dlang.org/post/ofc0lj$2u4h$1...@digitalmars.com
> 
> There is a simpler solution: put the context pointer in rax.  This is
> currently a caller-saved register, so there is no problem with
> clobbering it.  It is used for c-style variadics, but not d-style
> ones.  Since it is a register, nothing breaks when you have more
> parameters than fit in registers.  Any other problems?

This may work for x86, but does it work for other platforms? If not, it
won't fly on LDC/GDC.


T

-- 
Life is complex. It consists of real and imaginary parts. -- YHL


Re: D Language Quarterly Meeting Summary for January 2021

2022-01-22 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Jan 23, 2022 at 03:24:04AM +, Paul Backus via 
Digitalmars-d-announce wrote:
[...]
> The way I envision it, `std` would be the "rolling release" namespace
> that allows breaking changes, and if you wanted stability, you'd have
> to explicitly depend on `std.vN`. What we currently call `std` would
> be renamed to `std.v1`.

+1, this idea would work.


T

-- 
English is useful because it is a mess. Since English is a mess, it maps well 
onto the problem space, which is also a mess, which we call reality. Similarly, 
Perl was designed to be a mess, though in the nicest of all possible ways. -- 
Larry Wall


Re: D Language Quarterly Meeting Summary for January 2021

2022-01-22 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 22, 2022 at 05:09:51PM +, Alexandru Ermicioi via 
Digitalmars-d-announce wrote:
> On Saturday, 22 January 2022 at 05:43:55 UTC, Paul Backus wrote:
> > On Friday, 21 January 2022 at 12:33:25 UTC, Mike Parker wrote:
> > > ### Andrei
> > > Andrei brought up std.v2, but this is where memory fails me. What I
> > > do recall is that there was a bit of talk about the std.v2 namespace
> > > and how it will live alongside std, and this came up because Robert
> > > isn't convinced the planned approach is the right way to go about
> > > it. If Andrei or anyone else would like to say more about what was
> > > discussed, please post something below.
> > 
> > IMO having the `std` and `std.v2` namespaces exist alongside each other
> > *in the official D distribution* would be a mistake, and would make the
> > language significantly less approachable for new users.
> 
> Imho, current design where obsolete modules are moved from phobos to undead
> is a lot better. Just keep newest stuff in phobos and move old one in undead
> repo. Projects still relying on old  functionality, can easily just import
> old module from undead project and continue using old functionality, until
> they move to newest one.

Is undead versioned? If Phobos starts innovating again, we may need to
keep multiple old versions in undead for old codebases to continue
working.


T

-- 
Perhaps the most widespread illusion is that if we were in power we would 
behave very differently from those who now hold it---when, in truth, in order 
to get power we would have to become very much like them. -- Unknown


Re: Why I Like D

2022-01-14 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 14, 2022 at 09:18:23AM +, Paulo Pinto via 
Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
[...]
> > How is using D "losing autonomy"?  Unlike Java, D does not force you
> > to use anything. You can write all-out GC code, you can write @nogc
> > code (slap it on main() and your entire program will be guaranteed
> > to be GC-free -- statically verified by the compiler). You can write
> > functional-style code, and, thanks to metaprogramming, you can even
> > use more obscure paradigms like declarative programming.
[..]
> When languages are compared in grammar and semantics alone, you are
> fully correct.
> 
> Except we have this nasty thing called eco-system, where libraries,
> IDE tooling, OS, team mates, books, contractors,  are also part of
> the comparisasion.
[...]

That's outside of the domain of the language itself.  I'm not gonna
pretend we don't have ecosystem problems, but that's a social issue, not
a technical one.

Well OK, maybe IDE tooling is a technical issue too... but I write D
just fine in Vim. Unlike Java, using an IDE is not necessary to be
productive in D. You don't have to write aneurysm-inducing amounts of
factory classes and wrapper types just to express the simplest of
abstraction.  I see an IDE for D as something nice to have, not an
absolute essential.


> Naturally C# 10 was only an example among several possible ones, that
> have a flowershing ecosytem and keep getting the features only D could
> brag about when Andrei's book came out 10 years ago.

IMNSHO, D should forget all pretenses of being a stable language, and
continue to evolve as it did 5-10 years ago.  D3 should be a long-term
goal, not a taboo that nobody wants to talk about.  But hey, I'm not the
one making decisions here, and talk is cheap...


T

-- 
Give me some fresh salted fish, please.


Re: Why I Like D

2022-01-14 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 14, 2022 at 03:51:17AM +, forkit via Digitalmars-d-announce 
wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
> > 
> > How is using D "losing autonomy"?  Unlike Java, D does not force you
> > to use anything. You can write all-out GC code, you can write @nogc
> > code (slap it on main() and your entire program will be guaranteed
> > to be GC-free -- statically verified by the compiler). You can write
> > functional-style code, and, thanks to metaprogramming, you can even
> > use more obscure paradigms like declarative programming.
> > 
> 
> I'm talking about the 'perception of autonomy' - which will differ
> between people. Actual autonomy does not, and cannot, exist.
> 
> I agree, that if a C++ programmer wants the autonomy of chosing
> between GC or not, in their code, then they really don't have that
> autonomy in C++ (well, of course they do actually - but some hoops
> need to be jumped through).

IMO, 'autonomy' isn't the notion you're looking for.  The word I prefer
to use is *empowerment*.  A programming language should be a toolbox
filled with useful tools that you can use to solve your problem.  It
should not be a straitjacket that forces you to conform to what its
creators decided is good for you (e.g., Java), nor should it be a
minefield full of powerful but extremely dangerous explosives that you
have to be very careful not to touch in the wrong way (e.g., C++). It
should let YOU decide what's the best way to solve a problem -- and give
you the tools to help you on your way.

I mean, you *can* write functional-style code in C if you really, really
wanted to -- but you will face a lot of friction and it will be a
constant uphill battle. The result will be a huge unmaintainable mess.
With D, UFCS gets you 90% of the way there, and the syntax is even
pleasant to read.  Functional not your style? No problem, you can do OO
too. Or just plain ole imperative. Or all-out metaprogramming.  Or a
combination of all four -- the language lets you intermingle all of them
in the *same* piece of code.  I've yet to find another language that
actively *encourages* you to mix multiple paradigms together into a
seamless whole.

Furthermore, the language should empower you to do what it does -- for
example, user-defined types ought to be able to do everything built-in
types can.  Built-in stuff shouldn't have "magical properties" that
cannot be duplicated in a user-defined type.  The language shouldn't
hide magical properties behind a bunch of opaque, canned black-box
solutions that you're not allowed to look into.  The fact that D's GC is
written in D, for example, is a powerful example of not hiding things
behind opaque black-boxes. You can, in theory, write your own GC and use
that instead of the default one.

D doesn't completely meet my definition of empowerment, of course, but
it's pretty darned close -- closer than any other language I've used.
That's why I'm sticking with it, in spite of various flaws that I'm not
going to pretend don't exist.

As for why anyone would choose something over another -- who knows. My
own choices and preferences have proven to be very different from the
general population, so I'm not even gonna bother to guess how anyone
else thinks.


T

-- 
English is useful because it is a mess. Since English is a mess, it maps well 
onto the problem space, which is also a mess, which we call reality. Similarly, 
Perl was designed to be a mess, though in the nicest of all possible ways. -- 
Larry Wall


Re: Why I Like D

2022-01-14 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 14, 2022 at 06:20:58AM +, Araq via Digitalmars-d-announce wrote:
> On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
> > It takes 10x the effort to write a shell-script substitute in C++
> > because at every turn the language works against me -- I can't avoid
> > dealing with memory management issues at every turn -- should I use
> > malloc/free and fix leaks / dangling pointers myself? Should I use
> > std::auto_ptr? Should I use std::shared_ptr? Write my own refcounted
> > pointer for the 15th time?  Half my APIs would be cluttered with
> > memory management paraphernalia, and half my mental energy would be
> > spent fiddling with pointers instead of MAKING PROGRESS IN MY
> > PROBLEM DOMAIN.
> > 
> > With D, I can work at the high level and solve my problem long
> > before I even finish writing the same code in C++.
> 
> Well C++ ships with unique_ptr and shared_ptr, you don't have to roll
> your own. And you can use them and be assured that the performance
> profile of your program doesn't suddenly collapse when the data/heap
> grows too big as these tools assure independence of the heap size.

That's not entirely accurate. Using unique_ptr or shared_ptr does not
guarantee you won't get a long pause when the last reference to a large
object graph goes out of scope, for example, and a whole bunch of dtors
get called all at once. In code that's complex enough to warrant
shared_ptr, the point at which this happens is likely not predictable
(if it was, you wouldn't have needed to use shared_ptr).


> (What does D's GC assure you? That it won't run if you don't use it?
> That's such a low bar...)

When I'm writing a shell-script substitute, I DON'T WANT TO CARE about
memory management, that's the point. I want the GC to clean up after me,
no questions asked.  I don't want to spend any time thinking about
memory allocation issues.  If I need to manually manage memory, *then* I
manually manage memory and don't use the GC.  D gives me that choice.
C++ forces me to think about memory allocation WHETHER I WANT TO OR NOT.

And unique_ptr/shared_ptr doesn't help in this department, because their
use percolates through all of my APIs. I cannot pass a unique_ptr to an
API that receives only shared_ptr, and vice versa, without jumping
through hoops.  Having a GC lets me completely eliminate memory
management concerns from my APIs, resulting in cleaner APIs and less
time wasted fiddling with memory management.  It's a needless waste of
time.  WHEN performance demands it, THEN I can delve into the dirty
details of how to manually manage memory.  When performance doesn't
really matter, I don't care, and I don't *want* to care.


> Plus with D you cannot really work at the "high level" at all, it is
> full of friction. Is this data const? Or immutable? Is this @safe?
> @system? Should I use @nogc?

When I'm writing a shell-script substitute, I don't care about
const/immutable or @safe/@system.  Let all data be mutable for all I
care, it doesn't matter.  @nogc is a waste of time in shell-script
substitutes.  Just use templates and let the compiler figure out the
attributes for you.
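
(A quick illustration of what I mean by inference: a templated function
has its attributes deduced from its body, so the following compiles as
@safe pure nothrow @nogc without spelling out any of them:

    T twice(T)(T x) { return x + x; }

    @safe pure nothrow @nogc unittest {
        assert(twice(21) == 42);
    }

No annotations written, none needed.)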

When I'm designing something longer term, *then* I worry about
const/immutable/etc.. And honestly, I hardly ever bother with
const/immutable, because IME they just become a needless encumbrance past
the first few levels of abstraction.  They preclude useful things like
caching, lazy initialization, etc., and are not worth the effort except
for leaf-node types.  There's nothing wrong with mutable by default in
spite of what academic types tell you.


> Are exceptions still a good idea?

Of course it's a good idea.  Esp in a shell-script substitute, where I
don't want to waste my time worrying about checking error codes and all
of that nonsense. Just let it throw an exception and die when something
fails, that's good enough.

If exceptions ever become a problem, you're doing something wrong. Only
in rare cases do you actually need nothrow -- in hotspots identified by
a profiler where try-blocks actually make a material difference. 90% of
code doesn't need to worry about this.


> Should I use interfaces or inheritance?  Should I use class or struct?

For shell script substitutes?  Don't even bother with OO. Just use
structs and templates with attribute inference, job done.

Honestly, even for most serious programs I wouldn't bother with OO,
unless the problem domain actually maps well onto the OO paradigm. Most
problem domains are better handled with data-only types and external
operations on them.  Only for limited domains OO is actually useful.
Even many polymorphic data models are better handled in other ways than
OO (like ECS for runtime dynamic composition).


> Pointers or inout?

inout is a misfeature. Avoid it like the plague.

As for pointers vs. non-pointers: thanks to type inference and `.`
working for both pointers and non-pointers, most of the time you don't
even need to care.  I've written lots of code 

Re: Why I Like D

2022-01-13 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 14, 2022 at 01:19:01AM +, forkit via Digitalmars-d-announce 
wrote:
[...]
> C provides even greater autonomy over both C++ and D. And I'd argue,
> that's why C remains so useful, and so popular (for those problems
> where such a level of autonomy is needed).
> 
> By, 'autonomy', I mean a language provided means, for choosing what
> code can do, and how it does it.
[...]
> An aversion to losing that autonomy, I believe, is a very real reason
> as to why larger numbers of C++ programmers do not even consider
> switching to D.

How is using D "losing autonomy"?  Unlike Java, D does not force you to
use anything. You can write all-out GC code, you can write @nogc code
(slap it on main() and your entire program will be guaranteed to be
GC-free -- statically verified by the compiler). You can write
functional-style code, and, thanks to metaprogramming, you can even use
more obscure paradigms like declarative programming.

If anything, D makes it *easier* to have "autonomy", because its
metaprogramming capabilities let you do so without contorting syntax or
writing unmaintainable write-only code.  I can theoretically do
everything in C++ that I do in D, for example, but C++ requires that I
spend 5x the amount of effort to navigate its minefield of language
gotchas (and then 50x the effort to debug the resulting mess), and
afterwards I have to visit the optometrist due to staring at unreadable
syntax for extended periods of time.

In D, I get to choose how low-level I want to go -- if all I need is a
one-off shell script substitute, I can just allocate away and the GC
will worry about cleaning after me.  If I need to squeeze out more
performance, I run the profiler and identify GC hotspots and fix them
(or discover that the GC doesn't even affect performance, and redirect
my efforts elsewhere, where it actually matters more).  If that's not
enough, GC.disable and GC.collect lets me control how the GC behaves.
If that's still not enough, I slap @nogc on my inner loops and pull out
malloc/free.
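
The "pull out malloc/free" step isn't anything exotic, either.  A
contrived sketch, just to show the shape of it (the names are mine):

    import core.stdc.stdlib : calloc, free;

    @nogc nothrow
    double[] makeScratch(size_t n) {
        auto p = cast(double*) calloc(n, double.sizeof);
        return p is null ? null : p[0 .. n];    // manually-managed slice
    }

    @nogc nothrow
    void releaseScratch(double[] buf) {
        free(buf.ptr);
    }

No GC involvement anywhere, and it's still ordinary D code.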

In C++, I'm guaranteed that there is no GC -- even when having a GC
might actually help me achieve what I want.  In order to reap the
benefits of a GC in C++, I have to jump through *tons* of hoops --
install a 3rd party GC, carefully read the docs to avoid doing things
that might break it ('cos it's not language-supported), be excluded from
using 3rd party libraries that are not compatible with the GC, etc..
Definitely NOT worth the effort for one-off shell script replacements.
It takes 10x the effort to write a shell-script substitute in C++
because at every turn the language works against me -- I can't avoid
dealing with memory management issues at every turn -- should I use
malloc/free and fix leaks / dangling pointers myself? Should I use
std::auto_ptr? Should I use std::shared_ptr? Write my own refcounted
pointer for the 15th time?  Half my APIs would be cluttered with memory
management paraphernalia, and half my mental energy would be spent
fiddling with pointers instead of MAKING PROGRESS IN MY PROBLEM DOMAIN.

With D, I can work at the high level and solve my problem long before I
even finish writing the same code in C++.  And when I need to dig under
the hood, D doesn't stop me -- it's perfectly fine with malloc/free and
other such alternatives.  Even if I can't use parts of Phobos because of
GC dependence, D gives me the tools to roll my own easily. (It's not as
if I don't already have to do it myself in C++ anyway -- and D is a
nicer language for it; I can generally get it done faster in D.)

Rather than take away "autonomy", D empowers me to choose whether I want
to do things manually or use the premade high-level niceties the
language affords me. (*And* D lets me mix high-level and low-level code
in the same language. I can even drop down to asm{} blocks if that's
what it takes. Now *that's* empowerment.) With C++, I HAVE to do
everything manually. It's actually less choice than D affords me.


T

-- 
People tell me I'm stubborn, but I refuse to accept it!


Re: Why I Like D

2022-01-13 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 13, 2022 at 09:32:15PM +, Paul Backus via 
Digitalmars-d-announce wrote:
> On Wednesday, 12 January 2022 at 20:48:39 UTC, forkit wrote:
[...]
> > Programmers want the right of self-government, over their code.
> 
> Actually, I think *self*-government has very little to do with it.
[...]
> So, why do so many programmers reject D? Because there's something
> else they care about more than their own autonomy: other programmers'
> *lack* of autonomy. Or, as it's usually put, "the ecosystem."
[...]
> Suppose you've already decided that you don't want to use a GC, and
> you also don't want to write every part of your project from
> scratch--that is, you would like to depend on existing libraries.
> Where would you rather search for those libraries: code.dlang.org, or
> crates.io? Who would you want the authors of those libraries to be:
> self-governing, autonomous programmers, who are free to use GC as much
> or as little as they like; or programmers who have chosen to give up
> that autonomy and limit themselves to *never* using GC?

This reminds me of the Lisp Curse: the language is so powerful that
everyone can easily write their own [GUI toolkit] (insert favorite
example library here).  As a result, everyone invents their own
solution, all solving more-or-less the same problem, but just
differently enough to be incompatible with each other. And since they're
all DIY solutions, they each suffer from a different set of
shortcomings.  As a result, there's a proliferation of [GUI toolkits],
but none of them have a full feature set, most are in various states of
(in)completion, and all are incompatible with each other.

For the newcomer, there's a bewildering abundance of choices, but none
of them really solves his particular use-case (because none of the
preceding authors faced his specific problem).  As a result, his only
choices are to arbitrarily choose one solution and live with its
problems, or reinvent his own solution. (Or give up and go back to Java.
:-D)

Sounds familiar? :-P


T

-- 
Democracy: The triumph of popularity over principle. -- C.Bond


Re: LDC 1.28.1

2022-01-13 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 13, 2022 at 03:51:07PM +, kinke via Digitalmars-d-announce 
wrote:
> A new patch version was just released:
> 
> * Based on D 2.098.1+ (stable from 2 days ago).

Big thanks to the LDC team for continuing to deliver one of the best D
compilers around!


T

-- 
The government pretends to pay us a salary, and we pretend to work.


Re: Why I Like D

2022-01-13 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 13, 2022 at 10:21:12AM +, Stanislav Blinov via 
Digitalmars-d-announce wrote:
[...]
> Oh there is a psychological barrier for sure. On both sides of the,
> uh, "argument". I've said this before but I can repeat it again: time
> it. 4 milliseconds. That's how long a single GC.collect() takes on my
> machine.  That's a quarter of a frame. And that's a dry run. Doesn't
> matter if you can GC.disable or not, eventually you'll have to
> collect, so you're paying that cost (more, actually, since that's not
> going to be a dry run). If you can afford that - you can befriend the
> GC. If not - GC goes out the window.

?? That was exactly my point. If you can't afford it, you use @nogc.
That's what it's there for!

And no, if you don't GC-allocate, you won't eventually have to collect
'cos there'd be nothing to collect. Nobody says you HAVE to use the GC.
You use it when it fits your case; when it doesn't, you GC.disable or
write @nogc, and manage your own allocations, e.g., with an arena
allocator, etc..

Outside of your game loop you can still use GC allocations freely. You
just collect before entering the main loop, then GC.disable or just
enter @nogc code. You can even use GC memory to pre-allocate your arena
allocator buffers, then run your own allocator on top of that. E.g.,
allocate a 500MB buffer (or however big you need it to be) before the
main loop, then inside the main loop a per-frame arena allocator hands
out pointers into this buffer. At the end of the frame, reset the
pointer. That's a single-instruction collection.  After you exit your
main loop, call GC.collect to collect the buffer itself.
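
Roughly something like this (an untested sketch just to show the shape;
the sizes and names are arbitrary):

    import core.memory : GC;

    ubyte[] buffer;   // backing storage, GC-allocated once up front
    size_t  top;      // bump pointer, reset every frame

    void[] frameAlloc(size_t size) {
        assert(top + size <= buffer.length, "frame arena exhausted");
        auto p = buffer[top .. top + size];
        top += size;
        return p;
    }

    void main() {
        buffer = new ubyte[](64 * 1024 * 1024);  // pre-allocate up front
        GC.collect();       // collect once before entering the main loop
        GC.disable();       // no collections during gameplay
        foreach (frame; 0 .. 1000) {
            auto scratch = frameAlloc(4096);     // per-frame allocations...
            // ... render the frame ...
            top = 0;                             // ...all "freed" in one go
        }
        GC.enable();
        GC.collect();       // reclaim the buffer etc. afterwards
    }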

This isn't Java where every allocation must come from the GC. D lets you
work with raw pointers for a reason.


> In other words, it's only acceptable if you have natural pauses
> (loading screens, transitions, etc.) with limited resource consumption
> between them OR if you can afford to e.g. halve your FPS for a while.
> The alternative is to collect every frame, which means sacrificing a
> quarter of runtime. No, thanks.

Nobody says you HAVE to use the GC in your main loop.


> Thing is, "limited resource consumption" means you're preallocating
> anyway, at which point one has to question why use the GC in the first
> place.

You don't have to use the GC. You can malloc your preallocated buffers.
Or GC-allocate them but call GC.disable before entering your main loop.


> The majority of garbage created per frame can be trivially
> allocated from an arena and "deallocated" in one `mov` instruction (or
> a few of them). And things that can't be allocated in an arena, i.e.
> things with destructors - you *can't* reliably delegate to the GC
> anyway - which means your persistent state is more likely to be
> manually managed.
[...]

Of course. So don't use the GC for those things. That's all. The GC is
still useful for things outside the main loop, e.g., setup code, loading
resources in between levels, etc..  The good thing about D is that you
*can* make this choice.  It's not like Java where you're forced to use
the GC whether you like it or not.  There's no reason to clamor to
*remove* the GC from D, like some appear to be arguing for.


T

-- 
The only difference between male factor and malefactor is just a little 
emptiness inside.


Re: fixedstring: a @safe, @nogc string type

2022-01-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 12, 2022 at 07:55:41PM +, Moth via Digitalmars-d-announce wrote:
> On Tuesday, 11 January 2022 at 17:55:28 UTC, H. S. Teoh wrote:
[...]
> > One minor usability issue I found just glancing over the code: many
> > of your methods take char[] as argument. Generally, you want
> > const(char)[] instead, so that it will work with both char[] and
> > immutable(char)[]. No reason why you can't copy some immutable chars
> > into a FixedString, for example.
> 
> they should all already be `in char[]`? i've added a test to confirm
> it works with both `char[]` and `immutable(char)[]` and it compiles
> fine.
[...]

Oh you're right!  I totally missed that.  Sorry, my bad.


T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius


Re: Why I Like D

2022-01-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 12, 2022 at 05:42:46PM +, bachmeier via Digitalmars-d-announce 
wrote:
> On Wednesday, 12 January 2022 at 16:52:02 UTC, Arjan wrote:
[...]
> > I think it stems from experience from long ago when JAVA was HOT and
> > sold as the solution of all world problems, but failed to meet
> > expectations and was dismissed because they found is was the GC what
> > made it fail..

That was my perception of GC too, colored by the bad experiences of Java
from the 90's.  Ironically, Java's GC has since improved to be one of
the top world-class GC implementations, yet the opinions of those who
turned away from Java in the 90's have not caught up with today's
reality.


[...]
> I don't think they're necessarily wrong. If you don't want to deal
> with GC pauses, it may well be easier to use an approach that doesn't
> have them, in spite of what you have to give up. On the other hand,
> many of them have no idea what they're talking about. Like claims that
> a GC gets in your way if the language has one.

Depends on the language; some may indeed require GC use to write
anything meaningful at all, and some may have the GC running in the
background.  However, D's GC only ever triggers on allocations, and as
of a few releases ago, it doesn't even initialize itself until the first
allocation, meaning that it doesn't even use up *any* resources if you
don't actually use it (except for increasing executable size, if you
want to nitpick on that).  This must be one of the most non-intrusive GC
implementations I've ever seen.  Which makes me *really* incredulous
when the naysayers complain about it.


T

-- 
There are two ways to write error-free programs; only the third one works.


Re: Why I Like D

2022-01-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 12, 2022 at 11:14:54AM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
[...]
> Look at Sociomantic -- they still used the GC, just made sure to
> minimize the possibility of collections.
> 
> I wonder if there is just so much fear of the GC vs people who
> actually tried to use the GC and it failed to suit their needs. I've
> never been afraid of the GC in my projects, and it hasn't hurt me at
> all.
[...]

Like I said, my suspicion is that it's more of a knee-jerk reaction to
the word "GC" than anything actually founded in reality, like somebody
actually wrote a game in D and discovered the GC is a problem vs
somebody is *thinking* about writing a game in D, then thinks about the
GC, then balks because of the expectation that the GC is going to do
something bad like kill the hypothetical framerate or make the
not-yet-implemented animation jerky.

Those who actually wrote code and found GC performance problems would
have just slapped @nogc on their code or inserted GC.disable at the
beginning of the game loop and called it a day, instead of getting all
knotted up in the forums about why GC is bad in principle.


T

-- 
I'm still trying to find a pun for "punishment"...


Re: Why I Like D

2022-01-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 12, 2022 at 03:41:03PM +, Adam D Ruppe via 
Digitalmars-d-announce wrote:
> On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh wrote:
> > However it turns out that unless you are writing a computer
> > game, a high frequency trading system, a web server
> 
> Most computer games and web servers use GC too.
[...]

Depends on what kind of games, I guess. If you're writing a 60fps
real-time raytraced 3D FPS running at 2048x1152 resolution, then
*perhaps* you might not want a GC killing your framerate every so often.

(But even then, there's always GC.disable and @nogc... so it's not as if
you *can't* do it in D. It's more a psychological barrier triggered by
the word "GC" than anything else, IMNSHO.)


T

-- 
A mathematician is a device for turning coffee into theorems. -- P. Erdos


Re: Why I Like D

2022-01-12 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jan 11, 2022 at 06:37:47PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
> "Why I like D" is on the front page of HackerNews at the moment at number 11.
> 
> https://news.ycombinator.com/news

Favorite quote:

Some people may not consider the GC a feature, I certainly did
not at the beginning. I came from a hard-core game developer
mindset where you need to know the exact timing for every
operation in your critical path. I lived by quotes like: “the
programmer knows better how to manage memory” and “you cannot
have unexpected pauses for GC collection”.

However it turns out that unless you are writing a computer
game, a high frequency trading system, a web server, or anything
that really cares about sub-second latency, chances are that a
garbage collector is your best friend. It will remove the burden
of having to think about memory management at all and at the
same time guarantee that you won’t have any memory leaks in your
code.

*Flamesuit on.*


T

-- 
Insanity is doing the same thing over and over again and expecting different 
results.


Re: fixedstring: a @safe, @nogc string type

2022-01-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jan 11, 2022 at 11:16:13AM +, Moth via Digitalmars-d-announce wrote:
> On Tuesday, 11 January 2022 at 03:20:22 UTC, Salih Dincer wrote:
> > [snip]
> 
> glad to hear you're finding it useful! =]

One minor usability issue I found just glancing over the code: many of
your methods take char[] as argument. Generally, you want const(char)[]
instead, so that it will work with both char[] and immutable(char)[].
No reason why you can't copy some immutable chars into a FixedString,
for example.

Another potential issue is with the range interface. Your .popFront is
implemented by copying the entire buffer 1 char forwards, which can
easily become a hidden performance bottleneck. Iteration over a
FixedString currently is O(N^2), which is a problem if performance is
your concern.

Generally, I'd advise not conflating your containers with ranges over
your containers: I'd make .opSlice return a traditional D slice (i.e.,
const(char)[]) instead of a FixedString, and just require writing `[]`
when you need to iterate over the string as a range:

FixedString!64 mystr;
foreach (ch; mystr[]) { // <-- iterates over const(char)[]
...
}

This way, no redundant copying of data is done during iteration.

Another issue is the way concatenation is implemented. Since
FixedStrings have compile-time size, this potentially means every time
you concatenate a string in your code you get another instantiation of
FixedString. This can lead to a LOT of template bloat if you're not
careful, which may quickly outweigh any benefits you may have gained
from not using the built-in strings.


> hm, i'm not sure how i would go about fixing that double character
> issue. i know there's currently some wierdness with wchars / dchars
> equality that needs to be fixed [shouldn't be too much trouble, just
> need to set aside the time for it], but i think being able to tell how
> many chars there are in a glyph requires unicode awareness? i'll look
> into it.
[...]

Yes, you will require Unicode-awareness, and no, it will NOT be as
simple as you imagine.

First of all, you have the wide-character issue: if you're dealing with
anything outside of the ASCII range, you will need to deal with code
points (potentially wchar, dchar).  You can either take the lazy way out
(FixedString!(n, wchar), FixedString!(n, dchar)), but that will
exacerbate your template bloat very quickly. Plus, it wastes a lot of
memory, esp. if you start using dchar[] -- 4 bytes per character
potentially makes ASCII strings use up 4x more memory. (And even if you
decide using dchar[] isn't a concern, there's still the issue of
graphemes -- see below, which requires non-trivial decoding anyway.)

Or you can handle UTF-8, which is a better solution in terms of memory
usage. But then you will immediately run into the encoding/decoding
problem. Your .opSlice, for example, will not work correctly unless you
auto-decode. But that will be a performance hit -- this is one of the
design mistakes in hindsight that's still plaguing Phobos today. IMO the
better approach is to iterate over the string *without* decoding, but
just detecting codepoint boundaries.  Regardless, you will need *some*
way of iterating over code points instead of code units in order to deal
with this properly.
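
For instance, something along these lines (a rough sketch) walks over
code points without actually decoding them:

    import std.utf : stride;

    size_t countCodePoints(const(char)[] s) {
        size_t n, i;
        while (i < s.length) {
            i += stride(s, i);  // width in code units of the code point at i
            ++n;
        }
        return n;
    }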

But that's only the beginning of the story. In Unicode, a "code point"
is NOT what most people imagine a "character" is. For most European
languages this is the case, but once you go outside of that, you'll
start finding things like accented characters that are composed of
multiple code points.  In Unicode, that's called a Grapheme, and here's
the bad news: the length of a Grapheme is technically unbounded (even
though in practice it's usually 2 or occasionally 3 -- but you *will*
find more on rare occasions). And worst of all, determining the length
of a grapheme requires an expensive, non-trivial algorithm that will
KILL your performance if you blindly do it every time you traverse your
string.

And generally, you don't *want* to do grapheme segmentation anyway --
most code doesn't even care what the graphemes are, it just wants to
treat strings as opaque data that you may occasionally want to segment
into substrings (and substrings don't necessarily require grapheme
segmentation to compute, depending on what the final goal is). But
occasionally you *will* need grapheme segmentation (e.g., if you need to
know how many visual "characters" there are in a string); for that, you
will need std.uni. And no, it's not something you can implement
overnight.  It requires some heavy-duty lookup tables and a (very
careful!) implementation of TR29 (the Unicode text segmentation rules).
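
To make the distinction concrete (my example, not part of the original
discussion):

    import std.range : walkLength;
    import std.uni : byGrapheme;
    import std.utf : byCodeUnit;

    void main() {
        string s = "e\u0301";  // 'e' followed by a combining acute accent
        assert(s.byCodeUnit.walkLength == 3);  // 3 UTF-8 code units
        assert(s.walkLength == 2);             // 2 code points (auto-decoded)
        assert(s.byGrapheme.walkLength == 1);  // 1 visual "character"
    }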

Because of the foregoing, you have at least 4 different definitions of
the length of the string:

1. The number of code units it occupies, i.e., the number of chars /
wchars / dchars.

2. The number of code points it contains, which, in UTF-8, is a
non-trivial quantity that requires iterating over the entire string to
compute. Or 

Re: DMD now incorporates a disassembler

2022-01-08 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 08, 2022 at 08:29:20PM +, max haughton via 
Digitalmars-d-announce wrote:
> On Saturday, 8 January 2022 at 18:08:27 UTC, Steven Schveighoffer wrote:
> > On 1/8/22 12:23 PM, jmh530 wrote:
> > > On Friday, 7 January 2022 at 21:41:55 UTC, Walter Bright wrote:
> > > > Compile with -vasm to see it! Enjoy!
[...]
> > > Would make a nice project for someone to integrate this into
> > > run.dlang.org
> > 
> > Isn't there already an ASM button?
[...]
> Yup.

Better yet, the ASM button on run.dlang.org shows disassembly for all 3
compilers, not just dmd.


T

-- 
The early bird gets the worm. Moral: ewww...


Re: DMD now incorporates a disassembler

2022-01-07 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 08, 2022 at 07:39:54AM +0100, ag0aep6g via Digitalmars-d-announce 
wrote:
> On 07.01.22 22:41, Walter Bright wrote:
> > Compile with -vasm to see it! Enjoy!
> 
> With feature creep in full swing now, when can I expect to read my email
> with DMD?

You already can:

echo 'import std;void main(){execute("/usr/bin/mail");}' | dmd -run -

:-P


T

-- 
"Real programmers can write assembly code in any language. :-)" -- Larry Wall


Re: He Wrote a High-Frequency Trading Platform in D

2021-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Dec 11, 2021 at 01:58:02PM +, Mike Parker via 
Digitalmars-d-announce wrote:
> Georges Toutoungis shared his D user experience on the D blog. He went
> from being excited, to dismissive, to using D to implement an HFT and
> never looking back.
> 
> The blog:
> https://dlang.org/blog/2021/12/11/i-wrote-a-high-frequency-trading-platform-in-d/
> 
> Reddit:
> https://www.reddit.com/r/programming/comments/re075b/he_wrote_a_highfrequency_trading_platform_in_d/

+1, awesome.  We need more reports like this.


--T


Re: GDC has just landed v2.098.0-beta.1 into GCC

2021-11-30 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Nov 30, 2021 at 07:37:34PM +, Iain Buclaw via 
Digitalmars-d-announce wrote:
> Hi,
> 
> The latest version of the D language has [now
> landed](https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=5fee5ec362f7a243f459e6378fd49dfc89dc9fb5)
> in GCC.
[...]
> **Why specifically v2.098.0-beta.1?**  No real reason, other than it
> was just the last time that I had synchronized my [development
> branch](https://github.com/D-Programming-GDC/gcc/tree/ibuclaw/gdc)
> with upstream dmd.
[...]

Awesome!!  Big thanks for the hard work, Iain!  Finally, GDC is
up-to-date with the other two D compilers.  This is super great news.


T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about 
all those buttons marked "submit"??


Re: LDC 1.28.0

2021-10-19 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Oct 19, 2021 at 11:37:22PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.28 - some highlights:
> 
> * Based on D 2.098.0+ (yesterday's stable).
> * Dynamic casts across binary boundaries (DLLs etc.) now work.
> * Windows: `-dllimport=defaultLibsOnly` doesn't require
> `-linkonce-templates` anymore.
> * dcompute: Basic support for OpenCL image I/O.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.28.0
> 
> Thanks to all contributors & sponsors!

+1, awesome, thanks for the good work!


T

-- 
Caffeine underflow. Brain dumped.


Re: Beta 2.098.0

2021-10-05 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Oct 05, 2021 at 07:36:28PM +0200, ag0aep6g via Digitalmars-d-announce 
wrote:
[...]
> > > On Monday, 4 October 2021 at 22:40:19 UTC, Temtaime wrote:
> > > > What is really discourages me that persons like Walter instead
> > > > of making D great just do nothing helpful.
[...]
> It's absolutely true that many reported issues don't get fixed for
> *years*.  And that very much includes serious bugs. As far as I can
> tell, it's also true that Walter prioritizes new features instead
> (ImportC is the latest fad).
> 
> I sympathize with Temtaime. Their criticism wasn't sugar-coated, but
> it is constructive and it is valid in my opinion.

I don't agree with the tone of the criticism, but I do sympathize with
the sentiment.  The sad reality is that it's much more fun to write new
code than to debug old code.  Especially when you just had a cool idea
that feels like it would revolutionize everything.  And it very well
might do just that; but in the meantime, "boring" stuff like fixing bugs
in the current (probably hairy, messy, unclean) code gets neglected.

This is a particularly pronounced problem in groups consisting mostly of
experts or highly-experienced people.  Everybody wants to do the cool,
innovative stuff, nobody feels like doing the boring grunt work.  Worse
yet, in high-expertise areas like debugging the D compiler even those
who are willing to do the grunt work may not actually feel qualified
enough to do it.

But grunt work is just as necessary as the innovative, ground-breaking
stuff.  *Somebody* has to step up and be willing to do it.  It's a
thankless, unrewarding job, but a very necessary one.


T

-- 
Don't modify spaghetti code unless you can eat the consequences.


Re: Bison 3.8.1 released with D backend

2021-09-20 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Sep 15, 2021 at 01:24:25PM +, Carl Sturtivant via 
Digitalmars-d-announce wrote:
> 
> The D back-end for deterministic parsers contributed by Adela Vais is
> now available with the release of Bison 3.8.1 !
> 
> https://github.com/adelavais
> 
> See https://savannah.gnu.org/forum/forum.php?forum_id=10047 for details.

Great news!


T

-- 
Unix was not designed to stop people from doing stupid things, because that 
would also stop them from doing clever things. -- Doug Gwyn


Re: Surprise - New Post on the GtkD Coding Blog

2021-09-07 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Sep 07, 2021 at 12:29:14PM +, Dukc via Digitalmars-d-announce wrote:
> On Friday, 3 September 2021 at 18:52:13 UTC, Adam D Ruppe wrote:
> > (i loathe and despise wayland but ill try not to rant)
> 
> Have you written more about this on your blog? I have read more than
> one piece that wishes good riddance of X in favour of Wayland, I'd
> like to read something about the "but" side.

I for one will *not* be happy about X being dropped in favor of Wayland,
esp. since the latter is not (yet?) at feature parity (or better) with
X.


T

-- 
Right now I'm having amnesia and deja vu at the same time. I think I've 
forgotten this before.


Re: dmdtags 1.0.0: an accurate tag generator for D source code

2021-08-31 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 27, 2021 at 09:38:58PM +, Paul Backus via 
Digitalmars-d-announce wrote:
> `dmdtags` is a tags file generator for D source code that uses the DMD
> compiler frontend for accurate parsing.
> 
> This release supports 100%-accurate parsing of arbitrary D code
> (tested on DMD and Phobos sources), as well as the most commonly-used
> command line options, `-R`, `-o`, and `-a`. The generated tags file
> has been tested for compatibility with Vim and is compliant with the
> [POSIX standard for `ctags`][posix], so any editor with `ctags`
> support should be able to use it.

This is AWESOME!!!  Thanks a ton for this... I'll definitely be using
this in the near future!


[...]
> [`universal-ctags`][uctags], the current most-popular and
> best-maintained tags file generator, claims support for many
> programming languages, including D. However, its D parser is not
> well-maintained, and it often excludes large numbers of symbols from
> its output due to parsing failures.
> 
> Because `dmdtags` uses the DMD frontend for parsing, its results will
> always be accurate and up-to-date. For pure D projects, it can be used
> as a replacement for `universal-ctags`. For mixed-language projects,
> it can be used together with other tag generators with the `--append`
> option.

Is there any hope of merging this back to upstream ctags?

Regardless, this is awesome.


T

-- 
Long, long ago, the ancient Chinese invented a device that lets them see 
through walls. It was called the "window".


Re: trash-d: Replacement for rm that uses the trash bin

2021-08-24 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Aug 24, 2021 at 02:19:58AM +, rushsteve1 via Digitalmars-d-announce 
wrote:
> https://github.com/rushsteve1/trash-d
> 
> A near drop-in replacement for `rm` that uses the Freedesktop trash
> bin.  Started because an acquaintance `rm -rf`'d his music folder and
> I thought there had to be a better way.

Cool!


> It's pretty simple and only uses the D stdlib. Been working on it in
> my spare time for a bit over a week and I figure it's good enough to
> show people.

Very nice!


> I started this project in Bash originally but switched to D since I
> thought it would be a good way to learn some more (also Bash is
> scary). Ended up being a great choice!

Agreed, I wouldn't touch bash scripting with a 100-foot pole if I could
help it.  D is much better for this sort of thing. ;-)


> Pretty new to D so feedback welcome!

Welcome!


T

-- 
The early bird gets the worm. Moral: ewww...


Re: LDC 1.26.0

2021-04-28 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Apr 28, 2021 at 03:30:58PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.26:
[...]
> https://github.com/ldc-developers/ldc/releases/tag/v1.26.0
[...]

Awesome!  Thanks to all LDC contributors who made this release possible.
LDC has been my workhorse D compiler, and it's been awesome. Thanks for
all the hard work.


T

-- 
Too many people have open minds but closed eyes.


Re: Cross-compiler targeting macOS

2021-04-08 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Apr 08, 2021 at 10:23:27AM +0200, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2021-04-07 17:27, Guillaume Piolat wrote:
> 
> > Dumb question maybe but: in what use cases should this be used?
> 
> I don't know, ask H. S. Teoh :D.
> 
> I know some people have asked for it. I did it mostly because I knew
> how to do it and do it properly. I general I don't see the point to
> cross-compile (unless it's required, like mobile an embedded), because
> it seems like people want to use cross-compiling because they don't
> have the target system. But eventually you need to test the result and
> then you do need the target system to be able to run it.

That's the main reason for me, anyway, can't speak for others.  And yes,
ideally you'd want to own the target system as well so that you can test
it, but sometimes you just want to share a personal project with someone
running on say MacOS, and it doesn't seem to make sense to buy a Mac
just to be able to share that one program.  So cross-compiling would
be a much better solution.


> But perhaps if you target Windows you can then use Wine to run the
> executable.  Seem to be something similar for macOS [1]. But if you
> can run the result using Wine you should be able to run the compiler
> using Wine as well. Perhaps it's less of a hassle to cross-compile, I
> don't know.

IME, test results from Wine are not reliable. It's a good first pass to
make sure you didn't do anything obviously broken, but just because
something runs well in Wine does not guarantee it will run well on an
actual Windows box.

But still, even then I'd rather cross-compile, because then I can just
do everything on a single development machine instead of having to
install and maintain multiple development toolchains across different
machines. Otherwise it's just a lot of unnecessary hassle having to
sync source code between different development environments and switch
between computers just to build a set of release binaries, say.  On a
single development environment with cross-compilation, I can just setup
the build script to build all binaries for all platforms at once,
without any of these hassles.


> If you're targeting Linux on non-native architectures you can use
> qemu.  Seems pretty easy if you have a statically linked binary and
> use qemu user emulation.

For testing, yeah I'd do that. For builds, I'd rather centralize
everything on a single development environment.


> There's also free public CI services that target macOS, no need to
> cross-compile and it can run the code as well.

That's good to know.  Still, I'd rather keep things independent of a
network connection in case I ever find myself in a place without one.


> I did have a use case at my previous job. The production systems were
> running Linux but all developers were using macOS. We created a custom
> tool for the developers, which then needed to target macOS. It was a
> GUI application so Docker wasn't an option. We only had access to
> Linux CI runners so I used cross-compiling. It couldn't test the
> result, but at least it could build it and publish it. That's when I
> setup the first incarnation of this project [2]. In this new
> incarnation, I've fixed the main problem of the first incarnation:
> reproducibility.
> 
> [1] https://www.darlinghq.org
> [2] https://github.com/jacob-carlborg/docker-ldc-darwin
[...]

Thanks for this, it is very helpful.


T

-- 
Change is inevitable, except from a vending machine.


Re: Cross-compiler targeting macOS

2021-04-07 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Apr 07, 2021 at 12:24:40PM +, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> # Docker LDC Darwin
> 
> I would like to announce a new project I'm working on:
> docker-ldc-darwin [1]. The project consists of a Dockerfile for
> building a Docker image which has all the necessary tools to
> cross-compile D applications targeting macOS x86-64.
[...]

Thanks!!! This is what I've been looking for, for a long time!


T

-- 
Holding a grudge is like drinking poison and hoping the other person dies. -- 
seen on the 'Net


Re: On the D Blog--Symphony of Destruction: Structs, Classes, and the GC

2021-03-04 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Mar 04, 2021 at 11:42:58PM +, Dukc via Digitalmars-d-announce wrote:
> On Thursday, 4 March 2021 at 13:54:48 UTC, Mike Parker wrote:
[...]
> If an assert was failing, the program is going to terminate anyway, so
> InvalidMemoryOperationError is no problem. Well, it might obfuscate
> the underlying error if there is no stack trace, but banning
> `assert`ing in anything that could be called by a destructor sounds
> too drastic to me. Even the lowest level system code tends to contain
> asserts in D, at least in my codebase. If asserting is banned,
> destructors can do faily much nothing. I'd think it's much more
> practical to redefine the assert failure handler if
> InvalidMemoryOperationError due to a failed assert is a problem.

This is precisely why Walter (and others) have said that assert failures
should not throw anything, they should simply terminate (perhaps calling
a user-defined panic function right before aborting, if special handling
is needed).

That, or we take Mike's advice to pretend that class dtors don't exist.


T

-- 
The diminished 7th chord is the most flexible and fear-instilling chord. Use it 
often, use it unsparingly, to subdue your listeners into submission!


Re: LDC 1.25.0

2021-02-23 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Feb 23, 2021 at 07:32:13PM +, kinke via Digitalmars-d-announce 
wrote:
> On Tuesday, 23 February 2021 at 18:19:09 UTC, H. S. Teoh wrote:
> > Tested this on one of my projects yesterday. For -O3, it reduced
> > compile time by about ~26%.  For -O, it reduced compile time by
> > about 24%.  Not as much as I'd hoped, but still pretty big
> > reductions.
> 
> Thx for some numbers. [Note that -O == -O3 == -O4 == -O5, they are all
> the same (at least for now), contrary to what you might read
> somewhere.] A reduction by 25%, i.e., a 1.33x speed-up, for code that
> is guaranteed to be at least as fast as before (higher cross-module
> inlining potential) isn't too bad, aye? :)

Yeah actually it's pretty good.  It's only that my expectations were a
bit high when you reported 50+% reductions in compile times. :-)


> > For non-optimized builds, it reduced compile times by only 1-2%
> > (pretty insignificant).
> 
> I find it rather interesting that it isn't any slower. Compiling debug
> Phobos all-at-once took 67% longer on my box (and increased the static
> lib size by 76%). Without -O, I've only seen some improvements with
> `-unittest`.
[...]

Interesting indeed. I just did a quick test with -unittest, and got
these numbers:

-unittest:  15.9 sec
-unittest -linkonce-templates:  22.3 sec
-unittest -O:   54.4 sec
-unittest -O -linkonce-templates:   40.7 sec

Apparently with -unittest, -linkonce-templates *does* make the build
slower without -O.  But with -O, it does run faster.


T

-- 
It won't be covered in the book. The source code has to be useful for 
something, after all. -- Larry Wall


Re: LDC 1.25.0

2021-02-23 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Feb 21, 2021 at 06:26:38PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.25 - some highlights:
> 
> - Based on D 2.095.1.

Awesome!!  Thanks to everyone in the LDC team who made this release
possible.


[...]
> - New experimental template emission scheme for -linkonce-templates.
> This option can significantly accelerate compilation times for
> optimized builds (e.g., 56% faster on my box when compiling the
> optimized Phobos unittests).
[...]

Tested this on one of my projects yesterday. For -O3, it reduced compile
time by about ~26%.  For -O, it reduced compile time by about 24%.  Not
as much as I'd hoped, but still pretty big reductions.

For non-optimized builds, it reduced compile times by only 1-2% (pretty
insignificant).


T

-- 
The easy way is the wrong way, and the hard way is the stupid way. Pick one.


Re: Idioms for the D programming language

2021-02-11 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Feb 11, 2021 at 12:12:36PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
[...]
> https://p0nce.github.io/d-idioms/

Not bad, but it seems to be missing some of the newer idioms.
Like the templated static this() trick for transparently making
compile-time information available at runtime.  For example:

import std.regex;
void main(string[] args) {
foreach (arg; args) {
if (arg.match(Re!`some.*regex(pattern)+`)) {
... // do something
}
}
}

template Re(string reStr) {
Regex!char re;
static struct Impl {
static this() {
re = regex(reStr);
}
}
	Regex!char Re() { return re; }
}

The Re template captures the regex string at compile-time and injects it
into Impl's static this(), which compiles the regex at program startup
at runtime, then the eponymous function Re() simply returns the
precompiled value.

The result:
- Regexes are automatically picked up at compile-time;
- But expensive compile-time generation of the regex (which consumes
  lots of compiler memory and slows down compilation) is skipped;
- Compilation of the regex happens only once upon program startup and
  cached, and thereafter is a simple global variable lookup.
- Multiple occurrences of the Re template with the same regex is
  automatically merged (because it's the same instantiation of the
  template).

D features used:
- compile-time string parameters
- static this()
- eponymous templates
- reuse of template instantiations that have the same arguments


[...]
> https://www.reddit.com/r/programming/comments/lhssjp/idioms_for_the_d_programming_language/

There doesn't appear to be any discussion happening here.


T

-- 
If it breaks, you get to keep both pieces. -- Software disclaimer notice


Re: DIP 1034--Add a Bottom Type (reboot)--Formal Assessment Begins

2021-02-03 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Feb 03, 2021 at 09:20:57AM +, Mike Parker via 
Digitalmars-d-announce wrote:
> After a bit of delay, DIP 1034, "Add a Bottom Type (reboot)", is now in the
> hands of Walter and Atila for the Formal Assessment. We can expect to have a
> final decision or some other result by March 4.
> 
> You can find the final draft of DIP 1034 here:
> 
> https://github.com/dlang/DIPs/blob/1eb2f39bd5b6652a14ef5300062a1234ad00ceb1/DIPs/DIP1034.md

Too late now, but there's a typo in the last code example under section
"Flow analysis across functions": the return line should read:

return x != 0 ? 1024 / x : abort("calculation went awry.");

rather than:

return x != 0 ? 1024 / x : abort(0, "calculation went awry.");

(extraneous '0' first argument.)


T

-- 
There are four kinds of lies: lies, damn lies, and statistics.


Re: Please Congratulate My New Assistant

2021-01-25 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 25, 2021 at 08:03:53PM +, Paul Backus via 
Digitalmars-d-announce wrote:
> On Monday, 25 January 2021 at 12:48:48 UTC, Imperatorn wrote:
> > But, at the same time, I guess it could be a bit demoralizing you
> > know?
> 
> That's true. Sometimes, reality is demoralizing. That doesn't mean we
> should hide our heads in the sand and ignore it.
[...]

I think we're looking at this in the wrong way.  While we should
certainly work on fixing bugs and thereby improve the language, it's a
mistake to try to optimize the number of bugs as a metric.  Reducing the
number of bugs equates to improvements in the language *only when the
closed bugs correspond to changes that improve the language*.  Reducing
the number of bugs as a metric on its own does not necessarily equate to
language improvement.

For example, we could declare by pure fiat that starting from today, D
has zero bugs, and implement this by closing all bugs in bugzilla.  Does
that mean that D is now perfect?  Of course not.  All it means is that
we've buried our heads in the sand and pretended that there are no more
bugs.  The language, however, remains in exactly the same state as it
was yesterday, warts and all.  The act of closing the bugs *has not
improved the language by one bit*.

Now look at this another way.  Suppose we start with the current state
of the language, but with zero reported bugs.  Then we open the
floodgates for people to try out the language and report bugs.  Every
bug report, every issue, that gets filed tells us something that we
weren't aware of before: an area where the language could be improved.
IOW, every bug report is an *opportunity for improvement*.  If after we
opened the floodgates no reports are filed, that doesn't mean the
language is perfect; rather, it means the language is dead and nobody
cares enough about it to file bugs anymore. Or the language has reached
a dead-end and cannot be improved any further. The fact that people are
still filing bugs means (1) the language is still alive, and (2) there
is plenty of room for improvement. I.e., we're not at a dead-end.
There's plenty more to look forward to.

So don't look at the bug count as some kind of liability to rid
ourselves of by whatever means possible; rather, look at it as a sign of
life and the opportunity to grow.


T

-- 
MSDOS = MicroSoft's Denial Of Service


Re: I'm creating a game purely written in D with the arsd library

2021-01-02 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 02, 2021 at 09:05:03PM +, Murilo via Digitalmars-d-announce 
wrote:
> On Saturday, 2 January 2021 at 19:15:44 UTC, evilrat wrote:
> > On Saturday, 2 January 2021 at 19:10:59 UTC, Murilo wrote:
> > > I also don't want anyone stealing my idea.
> > 
> > Too late. You already posted it. Technically anyone could "steal" it
> > from now.
> 
> But they would have to write their own code, they can't copy paste my
> code.

Nope.  Reverse-engineering is a thing.  There's even tools out there to
automate this stuff.


T

-- 
Trying to define yourself is like trying to bite your own teeth. -- Alan Watts


Re: I'm creating a game purely written in D with the arsd library

2021-01-02 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 02, 2021 at 09:01:17PM +, Murilo via Digitalmars-d-announce 
wrote:
[...]
> It's because I don't want people to know the spoilers, so no one will
> see the source code.

IMO, that view is misguided, because as soon as some software runs on
the user's PC, it's already open to reverse-engineering. Given enough
time and effort, everything can be reverse-engineered.

The catch is, "given enough time and effort".  Meaning, it's *possible*
to reverse-engineer everything, but whether or not someone will actually
do it depends on whether they consider it worth their time and effort.
I'd surmise practically everyone will consider it not worth the effort.
By extension, given the source code, people might be curious to look at
a couple of pages of it, but I honestly doubt they'd have the motivation
to comb through every last page to ferret out any secrets you may have
hidden. (And if they actually did, then congratulations, you've gained a
dedicated follower. That's not a bad thing! You *want* users with that
level of dedication.)

Providing source code is mainly for convenience to people who might want
to compile it for platforms you do not have (thus spreading the word
about your program).


T

-- 
The two rules of success: 1. Don't tell everything you know. -- YHL


Re: Our community seems to have grown, so many people are joining the Facebook group

2020-12-29 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Dec 30, 2020 at 02:31:36AM +, Murilo via Digitalmars-d-announce 
wrote:
> On Tuesday, 29 December 2020 at 15:06:07 UTC, Ola Fosheim Grøstad wrote:
> > No, the OP clearly stated that he made the group "official". That is
> > a deliberate attempt to fracture.

No, that's reading more into it than the OP intended.


> I'm sorry you see it like this but my intention when I created the
> group was to expand Dlang by bringing it to places people couldn't
> find it yet. The whole point of the FB group is to aggregate people
> into our community, to bring more people to Dlang and make Dlang
> famous. My whole intention was to help our community grow, not
> fracture.

I applaud your efforts. Even though I would not participate in Facebook
for personal reasons, and it's probably not a good idea to present the
FB group as "official", it's nevertheless a fact that FB reaches a lot
more people than most other platforms.  So why not take advantage of it.

Let's not get up in arms about technicalities here.  The word is
spreading and that's a good thing, not something to argue over.


T

-- 
Political correctness: socially-sanctioned hypocrisy.


Re: Httparsed - fast native dlang HTTP 1.x message header parser

2020-12-14 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 15, 2020 at 12:11:44AM +, Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Monday, 14 December 2020 at 21:59:02 UTC, tchaloupka wrote:
> > * arsd's cgi.d - I haven't expected it to be so much slower than
> > vibe-d parser, it's almost 3 times slower, but on the other hand
> > it's super simple idiomatic D (again doesn't check or allow what RFC
> > says it should and many tests will fail)
> 
> yeah, I think I actually wrote that about eight years ago and then
> never revisited it actually git blame says "committed on Mar 24,
> 2012" so almost nine! And indeed, that git blame shows the bulk of it
> is still the initial commit, though a few `toLower`s got changed to
> `asLowerCase` a few years ago... so it used to be even worse! lol

Slow or not, cgi.d is totally awesome in my book, because recently it
saved my life.  While helping out someone, I threw together a little D
script to do what he wanted; only, I run Linux and he runs a Mac, and my
script is CLI-only while he's a non-poweruser and has no idea what to do
at the command prompt.  So naturally my thought was, let's give this a
web interface so that there's a fighting chance non-programmers would
know how to use it.  Being a program I wrote in literally 4 hours
(possibly less), I wasn't going to let it turn into a monster full of
hundreds of 3rd party dependencies, so I reached for my trusty solution:
arsd's cgi.d.

Just a single file, no network dependencies, no complicated builds, just
drop the file into my code, import it, and off I go.  Better yet, it
came with a built-in CLI request tester: perfect for local testing
without the hassle of needing to start/stop an entire web service just
to run a quick test; plus a compile-time switch to adapt it to any
common webserver interface you like: CGI, FastCGI, even standalone HTTP
server.  Problem solved in a couple o' hours, as opposed to who knows
how long it would have taken to engineer a "real" solution with vibe.d
or one of the other heavyweight "frameworks" out there.

It may not be the fastest web module in the D world, but it's certainly
danged convenient, does the necessary job with a minimum of fuss, easily
adaptable to a variety of common use cases, and best of all, requires
basically no dependencies beyond just dropping the file into your code.
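
For reference, the whole thing is about this much boilerplate (typed
from memory, so treat it as an untested sketch rather than gospel):

	import arsd.cgi;

	void hello(Cgi cgi) {
		cgi.setResponseContentType("text/plain");
		cgi.write("Hello, world!");
	}

	mixin GenericMain!hello;

Compile it with the switch matching your deployment (CGI, FastCGI,
embedded HTTP server), drop it behind your webserver, and that's
basically it.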

For that alone, I think Adam deserves a salute.

(But of course, if Adam improves cgi.d to be competitive with vibe.d,
then it could totally rock the D world! ;-))


T

-- 
Written on the window of a clothing store: No shirt, no shoes, no service.


Re: Release D 2.094.2

2020-11-23 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Nov 23, 2020 at 07:07:43PM +, Martin Nowak via 
Digitalmars-d-announce wrote:
> Glad to announce D 2.094.2, ♥ to the 10 contributors.
> 
> http://dlang.org/download.html
> 
> This point release fixes a few issues over 2.094.1, see the changelog
> for more details.
> 
> http://dlang.org/changelog/2.094.2.html
[...]

Unfortunately, the fix for bugzilla 21285 is incomplete. There's still a
failing case (see latest bugnote):

https://issues.dlang.org/show_bug.cgi?id=21285


T

-- 
Too many people have open minds but closed eyes.


Re: GCC 10.2.1 Released

2020-08-24 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Aug 24, 2020 at 09:24:23PM +, Iain Buclaw via 
Digitalmars-d-announce wrote:
[...]
> GCC 10.2 is a bug-fix release from the GCC 10 branch containing
> important fixes for regressions and serious bugs found in GCC 10.1.

Thanks for all of your efforts, Iain!!


[...]
> Also fixed is a compile-time performance bug when using `static
> foreach'.
[...]
> Compilation time has been reduced from around 40 to 0.08 seconds.
> Memory consumption is also reduced from 3.5GB to 55MB. (Thanks
> BorisCarvajal!)
[...]

Wow. That's a pretty major improvement!  Is this improvement upstreamed?

Just out of curiosity, which language version will the next GCC release
have?  Currently, my version of GDC gives __VERSION__ as 2.076, which is
pretty old (whereas LDC gives 2.093, basically on par with DMD).  Will
the next GDC major release have a significantly-updated language
version?

(I understand that the original plan was to get a foot in GCC's door
first, for bootstrapping reasons, then now that we have GDC in the
official GCC distribution, we can bootstrap to a much more up-to-date
front-end version.)


T

-- 
Never step over a puddle, always step around it. Chances are that whatever made 
it is still dripping.


Re: Reading IDX Files in D, an introduction to compile time programming

2020-08-21 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 21, 2020 at 01:18:30PM -0700, Ali Çehreli via 
Digitalmars-d-announce wrote:
[...]
> In my case I found a limitation: I cannot "iterate a directory" and
> import all file contents in there (the limitation is related to a C
> library function not having source code so it cannot be evaluated).

The actual limitation is that string imports do not allow reading
directory contents (the C function can be replaced if such were
allowed).  Generally, I don't expect directory traversal to ever be
allowed at compile-time, since it opens the door to a huge can o'
security worms. :-P


> So, I use a build step to generate the file that contains all files in
> my directory. So, I first import the file list then 'static foreach'
> that list to import and parse contents of other files.

Yeah, that's probably the simplest solution.
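
Something along these lines, I presume (file names hypothetical; compile
with -J<dir> so the string imports can find the files):

	import std.string : splitLines;

	// filelist.txt is written by the pre-build step, one file name
	// per line
	enum fileNames = import("filelist.txt").splitLines;

	static foreach (fname; fileNames)
	{
		// each file's contents can now be imported and parsed at
		// compile time
		pragma(msg, "embedding " ~ fname);
	}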


[...]
> Error: `fakePureCalloc` cannot be interpreted at compile time, because
> it has no available source code
> 
> (I think the error message is different in dmd 2.084, which my project
> currently uses.)
[...]

fakePureCalloc is a red herring; even if it weren't a problem, you'd
eventually run into the problem that you cannot do directory traversal
at compile-time.


T

-- 
Time flies like an arrow. Fruit flies like a banana.


Re: Reading IDX Files in D, an introduction to compile time programming

2020-08-21 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 21, 2020 at 08:54:14AM -0700, H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Fri, Aug 21, 2020 at 03:04:30PM +, data pulverizer via 
> Digitalmars-d-announce wrote:
> > I have written an article targeted at people new to D on
> > compile-time programming:
> > https://www.active-analytics.com/blog/reading-idx-files-in-d/
> [...]
> 
> CSS leakage into text in 2nd bullet point under "Introduction":
> "uspadding: 0.5em;s" should be "uses".
[...]

Anyway, besides that typo / formatting error, this is a very interesting
idea to generate type declarations at compile-time based on external
files.  I'm gonna hafta "steal" this idea for my own projects. ;-)


T

-- 
Dogs have owners ... cats have staff. -- Krista Casada


Re: Reading IDX Files in D, an introduction to compile time programming

2020-08-21 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 21, 2020 at 03:04:30PM +, data pulverizer via 
Digitalmars-d-announce wrote:
> I have written an article targeted at people new to D on compile-time
> programming:
> https://www.active-analytics.com/blog/reading-idx-files-in-d/
[...]

CSS leakage into text in 2nd bullet point under "Introduction":
"uspadding: 0.5em;s" should be "uses".


T

-- 
Shin: (n.) A device for finding furniture in the dark.


Re: Article: The surprising thing you can do in the D programming language

2020-08-20 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Aug 20, 2020 at 10:20:59AM +, user1234 via Digitalmars-d-announce 
wrote:
> On Thursday, 20 August 2020 at 10:12:22 UTC, aberba wrote:
> > Wrote something on OpenSource.com
> > 
> > https://opensource.com/article/20/8/nesting-d
> 
> I'm not sure. A few notes on nesting.
> 
> 1. context pointer can prevent inlining. you can nest static funcs,
> but then what is the point

The point is not to pollute module namespace with implementation details
that are only relevant to that one function.
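
E.g. (a trivial, hypothetical example):

	int sumOfSquares(int[] xs)
	{
		// implementation detail stays local to the function; making
		// it static means no context pointer is needed
		static int square(int x) { return x * x; }

		int total;
		foreach (x; xs)
			total += square(x);
		return total;
	}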

Also, if you're worried about inlining, use LDC instead of DMD. LDC is
well able to inline nested functions.  Don't make judgments based on the
DMD backend; it's well-known that its optimizer is ... sub-optimal...


> 2. bad complexity. you can make the nested funcs static and factor
> them out.  The code is more readable.
[...]

The point is not to pollute module namespace with functions that are
only relevant as implementation details of one function.


T

-- 
First Rule of History: History doesn't repeat itself -- historians merely 
repeat each other.


Re: LDC 1.23.0

2020-08-19 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Aug 19, 2020 at 05:45:46PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.23 - some highlights:
> 
> - Based on D 2.093.1+.
> - LLVM for prebuilt packages bumped to v10.0.1; min version raised to
>   6.0.
> - Cross-compiling to the iOS/x86_64 simulator now works out-of-the-box
>   with the prebuilt Mac package.
> - Windows: New -gdwarf command-line option for debugging with
>   gdb/lldb.
> - Fix linker errors for -betterC wrt. cleanups (structs with dtor,
>   `scope(exit)`).
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.23.0
[...]

Awesome!!! Big thanks to the LDC team and everyone involved in making
this possible. Going to upgrade right now!


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft system 
vulnerabilities cannot touch Linux---is priceless. -- Frustrated system 
administrator.


Re: The ABC's of Templates in D

2020-07-31 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jul 31, 2020 at 01:46:43PM +, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> If you've got a code base that uses templates in interesting ways,
> please get in touch! We do offer a bounty for guest posts, so you can
> help with a bit of PR and make a bit of cash at the same time.

Not sure how blog-worthy this is, but recently I was writing a utility
that used std.regex extensively, and I wanted to globally initialize all
regexes (for performance), but I didn't want to use ctRegex because of
onerous compile-time overhead.  So my initial solution was to create a
global struct `Re`, that declared all regexes as static fields and used
a static ctor to initialize them upon startup. Something like this:

struct Re {
static Regex!char pattern1;
static Regex!char pattern2;
... // etc.

static this() {
pattern1 = regex(`foo(\w+)bar`);
pattern2 = regex(`...`);
... // etc.
}
}

auto myFunc(string input) {
...
auto result = input.replaceAll(Re.pattern1, `blah $1 bleh`);
...
}

This worked, but was ugly because (1) there's too much boilerplate to
declare each regex and individually initialize them in the static ctor;
(2) the definition of each regex was far removed from its usage context,
so things like capture indices were hard to read (you had to look at two
places in the file at the same time to see the correspondence, like the
$1 in the above snippet).

Eventually, I came up with this little trick:

Regex!char staticRe(string reStr)()
{
static struct Impl
{
static Regex!char re;
static this()
{
re = regex(reStr);
}
}
return Impl.re;
}

auto myFunc(string input) {
...
		auto result = input.replaceAll(staticRe!`foo(\w+)bar`, `blah $1 bleh`);
...
}

This allowed the regex definition to be right where it's used, making
things like capture indices immediately obvious in the surrounding code.

Points of interest:

1) staticRe is a template function that takes its argument as a
   compile-time parameter, but at runtime, it simply returns a
   globally-initialized regex (so runtime overhead is basically nil at
   the caller's site, if the compiler inlines the call).

2) The regex is not initialized by ctRegex in order to avoid the
   compile-time overhead; instead, it's initialized at program startup
   time.

3) Basically, this is equivalent to a global variable initialized by a
   module static ctor, but since we can't inject global variables into
   module scope from a template function, we instead declare a wrapper
   struct inside the template function (which ensures a unique
   instantiation -- which also sidesteps the issue of generating unique
   global variable names at compile-time), with a static field that
   basically behaves like a global variable.  To ensure startup
   initialization, we use a struct static ctor, which essentially gets
   concatenated to the list of module-static ctors that are run before
   main() at runtime.

Well, OK, strictly speaking the regex is re-created per thread because
it's in TLS. But since this is a single-threaded utility, it's Good
Enough(tm). (I didn't want to deal with `shared` or __gshared issues
since I don't strictly need it. But in theory you could do that if you
needed to.)

//

Here's a related trick using the same principles that I posted a while
ago: a D equivalent of gettext that automatically extracts translatable
strings. Basically, something like this:

class Language { ... }
Language curLang = ...;

version(extractStrings) {
private int[string] translatableStrings;
string[] getTranslatableStrings() {
return translatableStrings.keys;
}
}

string gettext(string str)() {
version(extractStrings) {
static struct StrInjector {
static this() {
translatableStrings[str]++;
}
}
}
return curLang.translate(str);
}

...
auto myFunc() {
...
writeln(gettext!"Some translatable message");
...
}

The gettext function uses a static struct to inject a static ctor into
the program that inserts all translatable strings into a global AA.
Then, when compiled with -version=extractStrings, this will expose the
function getTranslatableStrings that returns a list of all translatable
strings.  Voila! No need for a separate utility to parse the source code
for translatable strings.

Re: Article: the feature that makes D my favorite programming language

2020-07-25 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jul 25, 2020 at 01:28:34PM +, Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Saturday, 25 July 2020 at 11:12:16 UTC, aberba wrote:
> > Oop! Chaining the writeln too could have increased the wow factor. I
> > didn't see that.
> 
> oh I hate it when people do that though, it just looks off to me at
> that point.

Me too.  It gives me the same creepie-feelies as when people write
writeln(x) as:

writeln = x;

Actually, D's lax syntax surrounding the = operator gives rise to the
following reverse-UFCS nastiness:

// Cover your eyes (unless you're reverse-Polish :-P)! and don't
// do this at home, it will corrupt your sense of good coding
// style!
import std;
void main() {
writeln = filter!(x => x % 3 == 1)
= map!(x => x*2)
= [ 1, 2, 3, 4, 5, 6 ];
}

// Output: [4, 10]


T

-- 
Winners never quit, quitters never win. But those who never quit AND never win 
are idiots.


Re: Article: the feature that makes D my favorite programming language

2020-07-24 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jul 24, 2020 at 08:34:17PM +, aberba via Digitalmars-d-announce 
wrote:
> Wrote something on the feature that makes D my favorite programming
> language
> 
> https://opensource.com/article/20/7/d-programming

Nitpick: evenNumbers doesn't need to return int[].  In fact, dropping
the .array makes it even better because it avoids an unnecessary
allocation when you're not going to store the array -- writeln is well
able to handle printing arbitrary ranges. Let the caller call .array
when he wishes the store the array; if it's transient, omitting .array
saves an allocation.


T

-- 
Nearly all men can stand adversity, but if you want to test a man's character, 
give him power. -- Abraham Lincoln


Re: Decimal string to floating point conversion with correct half-to-even rounding

2020-07-07 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jul 07, 2020 at 01:47:59PM -0400, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 7/7/20 1:37 PM, H. S. Teoh wrote:
> > cf. the repeated problems we had over the years with libcurl, zlib,
> > etc.
> 
> zlib is actually included copy-paste style in Phobos [1]. So it's
> interesting that you cite it as an example of causing problems because
> we don't include a copy of it.
[...]

Ah, haha, I think it was mainly libcurl that was causing problems.  Now
that I think about it again, I don't recall zlib causing problems,
probably because we *didn't* rely on it being available on the target
machine!


T

-- 
Dogs have owners ... cats have staff. -- Krista Casada


Re: Decimal string to floating point conversion with correct half-to-even rounding

2020-07-07 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jul 07, 2020 at 03:08:33PM +, Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Tuesday, 7 July 2020 at 13:00:04 UTC, Steven Schveighoffer wrote:
> > Doing that these days would be silly. You can depend on a specific
> > version of a repository without problems.
> 
> I always have problems when trying to do that. git submodules bring
> pretty consistent pain in my experience.

git submodules serves a very specific niche; using it for anything
outside of that most definitely brings in gigantic pain.

That very specific niche is this: there's an external repository R that
contains code you'd like to use, BUT that you'll never edit (unless you
wish to push it back upstream).  You check it out in some subdirectory
of your source tree, say ./someRepo/*, and add it as a submodule. This
records the path and URL in .gitmodules and pins the SHA hash of the
exact revision of the code in your own tree, meaning that you depend on
that exact version of R.  Occasionally, you want to update R to some
(presumably newer) version, so you do a `git submodule foreach git pull
...` to pull the revision you want, then commit the updated submodule
pointer so it refers to the new revision.

What you do *not* want to do is to edit the contents of the submodule,
because that will start creating diverging branches in the submodule,
which generally leads to a gigantic mess, like when somebody checks out
your code and tries to fetch submodules, they may not find the revision
being referred to (you haven't pushed the commits upstream, upstream
rejected it, etc).

Basically, git submodules let you refer to a specific commit in a
specific repo.  Don't expect it to do anything else for you, including
housekeeping that you *thought* it ought to do.

I've used git submodules quite happily for my own projects, where I want
to pull in code from another project but don't want to have to worry
about version compatibility and all of that dependency hell. Basically
you update a submodule to a new revision when and only when *you*
initiate it, and don't commit the submodule update until you've verified
that the new revision didn't break your code.  The submodule SHA ensures
that you'll get the exact version of the submodule that you last checked
in, not some random new version or some corrupted/edited version that
some unreliable network source has given you instead.


> But it probably isn't so bad if the submodule rarely changes.

Yeah, you do *not* want to use submodules if you're interested in
keeping up with the latest bleeding edge from upstream.  Well I mean you
*can*, but just don't expect it to automate anything for you. The onus
is upon you to test everything with the new revision before committing
it.

Oh, and another point: it's *probably* a good idea to git clone (i.e.
fork in github parlance) the submodule into a local copy, so that if the
network source vanishes into the ether, you aren't left with
uncompilable code.  Don't laugh, the way modern software development is
going, I will be SO not surprised when one day some obscure project that
everyone implicitly depends on suddenly vanishes into the ether and the
rest of the world collapses because everybody and his neighbour's dog
blindly assumed that "if it's on the network, it'll be there forever".


> Just for 100% control anyway nothing beats copy/paste. Then there's
> zero difference between you writing it yourself.

I highly recommend this approach when your dependency is small. Or if
you want to ensure no external dependencies.  There have been far, FAR
too many times IME in the past few years where I encountered a project
that was no longer compilable because one or more dependencies have
vanished into the ether.  Or the code no longer compiles with the
dependency because the latest version of said dependency has migrated to
a brand new codebase, and the old revision that the project depended on
is not compatible with the new version.  Or said project itself has
moved on and happily broke functionality I depended on.

These days, my policy is: download the danged source code for the
specific version of the specific project I'm depending on, AND download
the danged source code of the danged dependencies of that project, etc.,
and keep a local copy of the whole recursive dependency tree so that I
can ensure I can always build that specific version of that specific
project with that specific functionality that I'm using.  Trying to keep
up with projects that gratuitously break stuff, or abandoned projects
whose dependencies are no longer compatible with it, etc., is a hell I
wish to have no part in.  I've lost faith in the emperor's code reuse
clothes; copy-n-paste is what gives the real guarantees.

And I'm not alone in this -- I've noticed that quite a few open source
projects are distributing a copy of the sources of the librar{y,ies}
they depend on in their own source tree as a fallback, in case the
target system's version of that library doesn't exist, or is hard to
find, or is somehow incompatible with the version the project was
developed against.

Re: A security review of the D library Crypto

2020-07-04 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jul 01, 2020 at 07:19:11AM +, Cym13 via Digitalmars-d-announce 
wrote:
[...]
> https://breakpoint.purrfect.fr/article/review_crypto_d.html
[...]

Very interesting writeup indeed, thanks!


> Furthermore if you would like someone to have a look at your project
> to identify issues I am always glad to help free and open source
> projects that can't afford security review through traditional means
> so feel free to reach out.
[...]

I'm not the author, but I'm curious about the D implementation of Botan
(https://code.dlang.org/packages/botan) -- how is its security level?  I
glanced at it before and it seemed OK, but it'd be really nice to have a
3rd party opinion, esp. from someone who's skilled with cryptanalysis.


T

-- 
In theory, there is no difference between theory and practice.


Re: LDC 1.22.0

2020-06-16 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Jun 16, 2020 at 08:12:12PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce LDC 1.22 - some highlights:
[...]

Awesome!!  Thanks for continuing to bring us this awesome compiler!


T

-- 
Those who've learned LaTeX swear by it. Those who are learning LaTeX swear at 
it. -- Pete Bleackley


Re: Rationale for accepting DIP 1028 as is

2020-05-28 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 28, 2020 at 03:21:09AM -0600, Jonathan M Davis via 
Digitalmars-d-announce wrote:
[...]
> With the DIP in its current state, @safe becomes a lie.  The compiler
> no longer guarantees that @safe code is memory safe so long as it
> doesn't call any @trusted code where the programmer incorrectly marked
> it as @trusted. Instead, the compiler blindly treats non-extern(D)
> declarations as @safe and invisibly introduces memory safety bugs into
> @safe code.  Nothing about that is "OK."
[...]

I see it already.  The next time someone wants to make a codebase @safe
but the compiler complains about some violation, just add `extern(C)` to
the function and move on.
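
The greenwashing is trivially easy (hypothetical declaration, just to
spell out Jonathan's point):

	// No body, so nothing is actually verified -- yet with @safe as
	// the default, this declaration is treated as if it were @safe.
	extern(C) void totallyUnvetted(size_t length, int* ptr);

	void doStuff(int[] arr) @safe
	{
		// compiles without complaint, even though the callee could
		// do absolutely anything with that pointer
		totallyUnvetted(arr.length, &arr[0]);
	}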


T

-- 
Claiming that your operating system is the best in the world because more 
people use it is like saying McDonalds makes the best food in the world. -- 
Carl B. Constantine


Re: Work = Resources * Efficiency

2020-05-23 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, May 23, 2020 at 04:00:35AM -0400, Nick Sabalausky (Abscissa) via 
Digitalmars-d-announce wrote:
[...]
> "Efficiency": Merriam-Webster
> :
> 
> 1: the quality or degree of being efficient ("X is X! Yawn, tell me
> something that means something!")
> 
> 2.a: efficient operation (ehh? Ehhh!?!?!)
[...]

To understand recursion, you must first understand recursion. ;-)


T

-- 
What did the alien say to Schubert? "Take me to your lieder."


Re: DIP1028 - Rationale for accepting as is

2020-05-23 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, May 23, 2020 at 10:55:40AM +, Dukc via Digitalmars-d-announce wrote:
[...]
> When I look at my own code that uses the Nuklear GUI library, written in
> C, it's all `@system`. I have not had the time to make `@trusted`
> wrappers over the BindBC-nuklear API, so I did what tends to occur to
> us as the next best thing: resign and make the whole client code
> `@system`. Just making `@trusted` wrappers over BindBC-nuklear seemed
> to me as irresponsible use of the attribute. And reading this thread,
> it would seem like most of you would agree.
> 
> But when I think about it, what have I accomplished by avoiding that
> antipattern?  The only difference is that if my D code does something
> `@system`, it'll remain under the radar. So I'm worse off than had I
> submitted to the antipattern!
[...]

And this is precisely why I proposed that what we need is a way for the
compiler to mechanically check all code *except* certain specified
blackboxes that are skipped over.  Then you can have your calls to
unvetted C functions and still have the mechanical checks enabled for
the rest of your code.

This is also related to @trusted blocks inside a function, the intention
of which is to limit the @system code to as small a surface area as
possible while enabling @safe checks for the rest of the function.
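
E.g., something along these lines (hypothetical C declaration, just to
show the shape of it):

	extern(C) int some_c_call(int* p);	// the unvetted black box

	int useIt(ref int x) @safe
	{
		// confine the unverifiable part to the smallest possible
		// @trusted surface; the rest stays mechanically checked
		return () @trusted { return some_c_call(&x); }();
	}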


T

-- 
Gone Chopin. Bach in a minuet.


Re: DIP1028 - Rationale for accepting as is

2020-05-23 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, May 22, 2020 at 10:50:02PM -0700, Walter Bright via 
Digitalmars-d-announce wrote:
> On 5/22/2020 10:33 AM, rikki cattermole wrote:
> > To me at least, this butchers @safe/trusted/system into a system
> > that is near useless for guarantees for an entire program.
> 
> It never attempted to guarantee safety in code that was never compiled
> with a D compiler. It's impossible to do that. No language does that.

And therefore what we need is a way of indicating verifiability up to
things outside of our control. E.g., some kind of way to express that
the safety of a piece of code is keyed upon some external function or
delegate, thus enabling @safe checks for all code except calls into said
external function/delegate.

This would work out to be practically where we're at now, except that we
don't implicitly pretend external code is @safe where there is no
verification at all.


T

-- 
Designer clothes: how to cover less by paying more.

