Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce wrote:
> On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
>  wrote:
> >
> > On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce 
> > wrote:
[...]
> > > I couldn't possibly agree less; I think cool kids would design
> > > literally all computer software like a game engine, if they
> > > generally cared about fluid experience, perf, and battery life.
> > [...]
> >
> > Wait, wha...?!  Write game-engine-like code if you care about
> > *battery life*??  I mean... fluid experience, sure, perf, OK, but
> > *battery life*?!  Unless I've been living in the wrong universe all
> > this time, that's gotta be the most incredible statement ever.  I've
> > yet to see a fluid, high-perf game engine *not* drain my battery
> > like there's no tomorrow, and now you're telling me that I have to
> > write code like a game engine in order to extend battery life?
> 
> Yes. Efficiency == battery life. Game engines tend to be the most
> efficient software written these days.
>
> You don't have to run applications at an unbounded rate. I mean, games
> will run as fast as possible maximising device resources, but assuming
> it's not a game, then you only execute as much as required rather than
> trying to produce frames at the highest rate possible. Realtime
> software is responding to constantly changing simulation, but non-game
> software tends to only respond to input-driven entropy; if entropy
> rate is low, then exec-to-sleeping ratio heavily biases towards
> sleeping.
> 
> If you have a transformation to make, and you can do it in 1ms, or
> 100us, then you burn 10 times less energy doing it in 100us.
[...]

But isn't that just writing good code in general?  'cos when I think of
game engines, I think of framerate maximization, which equals maximum
battery drain because you're trying to do as much as possible in any
given time interval.

Moreover, I've noticed a recent trend of software trying to emulate
game-engine-like behaviour, e.g., smooth scrolling, animations, etc..
In the old days, GUI apps primarily responded to input events and
that was it -- click once, the code triggers once, does its job, and
goes back to sleep.  These days, though, apps seem to be bent on
animating *everything* and smoothing *everything*, so one click
translates to umpteen 60fps animation frames / smooth-scrolling frames
instead of only triggering once.

All of which *increases* battery drain rather than decreasing it.
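The old input-driven model can be sketched as follows (a hypothetical helper, purely illustrative; the names are mine):

```d
import core.thread : Thread;
import core.time : msecs;

/// Input-driven loop: handle an event when one arrives, sleep
/// otherwise.  Work done scales with input, not with wall time,
/// which is exactly where the battery savings come from.
int runEventDriven(int ticks, bool delegate() pollInput,
                   void delegate() handle)
{
    int handled = 0;
    foreach (i; 0 .. ticks)
    {
        if (pollInput())
        {
            handle();               // one click, one reaction
            ++handled;
        }
        else
            Thread.sleep(1.msecs);  // no input: no frames, no drain
    }
    return handled;
}
```

A 60fps animation loop, by contrast, wakes up and redraws every ~16ms whether or not anything happened.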

And this isn't just for mobile apps; even the pervasive desktop browser
nowadays seems bent on eating up as much CPU, memory, and disk as
physically possible -- everybody and their neighbour's dog wants ≥60fps
hourglass / spinner animations and smooth scrolling, eating up GBs of
memory, soaking up 99% CPU, and cluttering the disk with caches of
useless paraphernalia like spinner animations.

Such is the result of trying to emulate game-engine-like behaviour. And
now you're recommending that everyone should write code like a game
engine!

(Once, just out of curiosity (and no small amount of frustration), I
went into Firefox's about:config and turned off all smooth scrolling,
animation, etc., settings.  The web suddenly sped up by at least an
order of magnitude, probably more. Down with 60fps GUIs, I say.  Unless
you're making a game, you don't *need* 60fps. It's squandering resources
for trivialities where we should be leaving those extra CPU cycles for
actual, useful work instead, or *actually* saving battery life by not
trying to make everything emulate a ≥60fps game engine.)


T

-- 
Give me some fresh salted fish, please.


Re: D GUI Framework (responsive grid teaser)

2019-05-22 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce wrote:
> On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
> Digitalmars-d-announce  wrote:
[...]
> > But you shouldn't design a UI framework like a game engine.
> >
> > Especially not if you also want to run on embedded devices
> > addressing pixels over I2C.
> 
> I couldn't possibly agree less; I think cool kids would design
> literally all computer software like a game engine, if they generally
> cared about fluid experience, perf, and battery life.
[...]

Wait, wha...?!  Write game-engine-like code if you care about *battery
life*??  I mean... fluid experience, sure, perf, OK, but *battery
life*?!  Unless I've been living in the wrong universe all this time,
that's gotta be the most incredible statement ever.  I've yet to see a
fluid, high-perf game engine *not* drain my battery like there's no
tomorrow, and now you're telling me that I have to write code like a
game engine in order to extend battery life?

I think I need to sit down.


T

-- 
Never step over a puddle, always step around it. Chances are that whatever made 
it is still dripping.


Re: Phobos is now compiled with -preview=dip1000

2019-05-17 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 16, 2019 at 10:35:27AM +, Seb via Digitalmars-d-announce wrote:
> On Thursday, 16 May 2019 at 10:03:42 UTC, Kagamin wrote:
> > On Thursday, 16 May 2019 at 05:22:42 UTC, Seb wrote:
> > > Yes that sounds like the culprit. Btw as mentioned on DConf, the
> > > dip1000 switch contains a few other breaking changes which will
> > > make it even harder to adopt too.
> > 
> > Well, it's an inherent property of DIP1000 to not compile code that
> > previously compiled. Though safety of tupleof shouldn't depend on
> > DIP1000.
> 
> Well, here's the full discussion:
> 
> https://github.com/dlang/dmd/pull/8035

Finally got round to skimming through that discussion.

Looks like in this case, what we need is for toHash to be declared
@trusted when .tupleof includes private members (because toHash is not
supposed to modify any private members, and I assume hashing over a
private member shouldn't violate @safe -- right?).

Either that, or RedBlackTree needs to be changed so that @safe-ty
doesn't depend on random user types having private fields breaking
compilation.  It's pretty ridiculous, from a user's POV, for a standard
container to fail to compile just because the user had the audacity to
declare private members in his object! And the fact that this root
problem is masked under a totally obscure compile error only rubs salt
in the wound.


T

-- 
IBM = I Blame Microsoft


Re: Phobos is now compiled with -preview=dip1000

2019-05-15 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 15, 2019 at 05:53:17PM -0700, H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Wed, May 15, 2019 at 11:34:44AM -0700, Walter Bright via 
> Digitalmars-d-announce wrote:
> > On 5/15/2019 11:09 AM, H. S. Teoh wrote:
> > > *Why* putting 'private' on a field member makes toHash unsafe, is
> > > beyond my ability to comprehend.
> > 
> > That's because the reduced version isn't a reduced version. It
> > imports a vast amount of other code.
> 
> Alright, here's a TRULY reduced version:

Gah, so apparently .hashOf is a gigantic overload set of *21* different
overloads, so this is not really "truly" reduced. =-O

Anybody up for figuring out which overload(s) is/are getting called?
Betcha the problem is that -preview=dip1000 causes one of the overloads
to fail to compile, thus shuffling to a different overload that isn't
@safe.  I hate SFINAE.


T

-- 
Just because you survived after you did it, doesn't mean it wasn't stupid!


Re: Phobos is now compiled with -preview=dip1000

2019-05-15 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 15, 2019 at 11:34:44AM -0700, Walter Bright via 
Digitalmars-d-announce wrote:
> On 5/15/2019 11:09 AM, H. S. Teoh wrote:
> > *Why* putting 'private' on a field member makes toHash unsafe, is
> > beyond my ability to comprehend.
> 
> That's because the reduced version isn't a reduced version. It imports
> a vast amount of other code.

Alright, here's a TRULY reduced version:


    struct S {
        private int _x;
    }
    struct RedBlackTree
    {
        size_t toHash() nothrow @safe
        {
            return .hashOf(S.init);
        }
    }
    void main() { }


Compiling with -preview=dip1000 causes a compile error complaining that
toHash() is not @safe.  Removing 'private' makes it go away. Compiling
without -preview=dip1000 also makes it go away.

Now explain this one. :-D


T

-- 
A linguistics professor was lecturing to his class one day. "In
English," he said, "A double negative forms a positive. In some
languages, though, such as Russian, a double negative is still a
negative. However, there is no language wherein a double positive can
form a negative." A voice from the back of the room piped up, "Yeah,
yeah."


Re: Phobos is now compiled with -preview=dip1000

2019-05-15 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 15, 2019 at 11:09:01AM -0700, H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Wed, May 15, 2019 at 12:39:05AM -0700, Walter Bright via 
> Digitalmars-d-announce wrote:
> > https://github.com/dlang/phobos/pull/6931
> > 
> > This is a major milestone in improving the memory safety of D
> > programming.  Thanks to everyone who helped with this!
> > 
> > Time to start compiling your projects with DIP1000, too!
> 
> My very first attempt to compile my code with -preview=dip1000 led to
> a regression. :-(
[...]

Bugzilla issue:

https://issues.dlang.org/show_bug.cgi?id=19877


T

-- 
To err is human; to forgive is not our policy. -- Samuel Adler


Re: Phobos is now compiled with -preview=dip1000

2019-05-15 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, May 15, 2019 at 12:39:05AM -0700, Walter Bright via 
Digitalmars-d-announce wrote:
> https://github.com/dlang/phobos/pull/6931
> 
> This is a major milestone in improving the memory safety of D
> programming.  Thanks to everyone who helped with this!
> 
> Time to start compiling your projects with DIP1000, too!

My very first attempt to compile my code with -preview=dip1000 led to a
regression. :-(

Reduced code:
--
import std.container.rbtree;
alias Grid = RedBlackTree!(GridPoint);
struct GridPoint
{
    private string _srcStr;
    int opCmp(in GridPoint p) const { return 0; }
}
--

Compiler output (with -preview=dip1000):
--
/usr/src/d/phobos/std/container/rbtree.d(): Error: `@safe` function 
`std.container.rbtree.RedBlackTree!(GridPoint, "a < b", 
false).RedBlackTree.toHash` cannot call `@system` function 
`core.internal.hash.hashOf!(GridPoint).hashOf`
/usr/src/d/druntime/import/core/internal/hash.d(510):
`core.internal.hash.hashOf!(GridPoint).hashOf` is declared here
numid.d(3): Error: template instance 
`std.container.rbtree.RedBlackTree!(GridPoint, "a < b", false)` error 
instantiating
--

The culprit is the 'private' in GridPoint.  Removing 'private' gets rid
of the problem.

*Why* putting 'private' on a field member makes toHash unsafe, is beyond
my ability to comprehend.


T

-- 
Windows: the ultimate triumph of marketing over technology. -- Adrian von Bidder


Re: bool (was DConf 2019 AGM Livestream)

2019-05-13 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, May 13, 2019 at 07:16:04AM -0400, Andrei Alexandrescu via 
Digitalmars-d-announce wrote:
> On 5/12/19 11:46 PM, H. S. Teoh wrote:
> > On Sun, May 12, 2019 at 01:20:16PM +, Mike Franklin via 
> > Digitalmars-d-announce wrote:
> > [...]
> > > If anyone's looking for a challenge, I welcome them to propose a
> > > new `Bool` type (note the capital B) for inclusion in my new
> > > library.
> > [...]
> > 
> > As long as && and || continue to evaluate to a 1-bit integer, all
> > library efforts to implement Bool will be futile.
> 
> When writing std.typecons.Ternary I thought of overloading opBinary
> for | and & to take a lazy argument on the right. I forgot why I ended
> up not doing it (I think it was because of code generation issues).
> This is something that could be made to work.

The problem is that && and || cannot be overloaded (and for very good
reasons -- it would open the door to C++-style egregious operator
overload abuse), and the alternatives & and | have the wrong precedence,
so you wouldn't be able to write boolean expressions with Bool naturally
the way you could with the built-in bool type. It would not be a drop-in
replacement and you would have to rewrite every single boolean
expression to use Bool instead, and that with a high chance of
introducing subtle errors because of the different precedence of & and
|.
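To make this concrete, here is a minimal sketch of such a Bool (entirely hypothetical, not a real proposal) using lazy right-hand sides the way Andrei describes. It recovers short-circuiting, but the precedence problem remains: `b & x == y` still parses as `b & (x == y)`.

```d
/// Hypothetical library Bool.  Lazy rhs gives short-circuit '&'/'|',
/// but '&' and '|' keep their built-in (loose) precedence.
struct Bool
{
    bool value;

    Bool opBinary(string op : "&")(lazy Bool rhs) const
    {
        return value ? rhs : Bool(false);  // rhs untouched if false
    }

    Bool opBinary(string op : "|")(lazy Bool rhs) const
    {
        return value ? Bool(true) : rhs;   // rhs untouched if true
    }

    bool opCast(T : bool)() const { return value; }
}
```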

Boolean expressions and the associated boolean type are among the things
that should be baked into the language (it'd be a mess otherwise), but
that also means that if the language doesn't get it right, you have no
recourse.


T

-- 
Too many people have open minds but closed eyes.


Re: bool (was DConf 2019 AGM Livestream)

2019-05-12 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, May 12, 2019 at 01:20:16PM +, Mike Franklin via 
Digitalmars-d-announce wrote:
[...]
> If anyone's looking for a challenge, I welcome them to propose a new
> `Bool` type (note the capital B) for inclusion in my new library.
[...]

As long as && and || continue to evaluate to a 1-bit integer, all
library efforts to implement Bool will be futile.


T

-- 
Never ascribe to malice that which is adequately explained by incompetence. -- 
Napoleon Bonaparte


Re: utiliD: A library with absolutely no dependencies for bare-metal programming and bootstrapping other D libraries

2019-05-10 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, May 11, 2019 at 01:45:08AM +, Mike Franklin via 
Digitalmars-d-announce wrote:
[...]
> I think this thread is beginning to lose sight of the larger picture.
> What I'm trying to achieve is the opt-in continuum that Andrei
> mentioned elsewhere on this forum.  We can't do that with the way the
> compiler and runtime currently interact.  So, the first task, which
> I'm trying to get around to, is to convert runtime hooks to templates.
> Using the compile-time type information will allow us to avoid
> `TypeInfo`, therefore classes, therefore the entire D runtime.  We're
> now much closer to the opt-in continuum Andrei mentioned previously on
> this forum.  Now let's assume that's done...

Yes, that's definitely a direction we want to head in.  I think it will
be very beneficial.


> Those new templates will eventually call a very few functions from the
> C standard library, memcpy being one of them.  Because the runtime
> hooks are now templates, we have type information that we can use in
> the call to memcpy.  Therefore, I want to explore implementing `void
> memcpy(T)(ref T dst, const ref T src) @safe, nothrow, pure, @nogc`
> rather than `void* memcpy(void*, const void*, size_t)`  There are some
> issues here such as template bloat and compile times, but I want to
> explore it anyway.  I'm trying to imagine what memcpy in D would look
> like if we didn't have a C implementation narrowing our imagination.
> I don't know how that will turn out, but I want to
> explore it.

Put this way, I think that's a legitimate area to explore. But copying a
block of memory from one place to another is just that: copying a block
of memory from one place to another.  It boils down to how to copy N
bytes from A to B in the fastest way possible. For that, you just
reduce it to moving K words (the size of which depends only on the
target machine, not the incoming type) of memory from A to B, plus or
minus a few bytes at the end for non-aligned data. The type T only
matters if you need to do type-specific operations like call default
ctors / dtors, but at the memcpy level that should already have been
taken care of by higher-level code, and it isn't memcpy's concern what
ctors/dtors to invoke.

The one thing knowledge of T can provide is whether or not T[] can be
unaligned. If T.sizeof < machine word size, then you need extra code to
take care of the start/end of the block; otherwise, you can just go
straight to the main loop of copying K words from A to B. So that's one
small thing we can take advantage of. It could save a few cycles by
avoiding a branch hazard at the start/end of the copy, and making the
code smaller for inlining.

Anything else you optimize on copying K words from A to B would be
target-specific, like using vector ops, specialized CPU instructions,
and the like. But once you start getting into that, you start getting
into the realm of whether all the complex setup needed for, e.g., a
vector op is worth the trouble if T.sizeof is small. Perhaps here's
another area where knowledge of T can help (if T is small, just use a
naïve for-loop; if T is sufficiently large, it could be worth incurring
the overhead of setting up vector copy registers, etc., because it makes
copying the large body of T faster).

So potentially a D-based memcpy could have multiple concrete
implementations (copying strategies) that are statically chosen based on
the properties of T, like alignment and size.
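As a sketch of what "statically chosen" could look like (my own illustration, not Mike's code; it assumes the target tolerates unaligned word access, as x86 does):

```d
/// Typed copy that statically picks a strategy from T.sizeof:
/// small types get a byte loop, larger ones a word loop plus tail.
void copyT(T)(ref T dst, const ref T src) @trusted nothrow @nogc
{
    static if (T.sizeof <= size_t.sizeof)
    {
        // Small type: trivial loop; optimizers collapse it into a
        // single load/store pair.
        auto d = cast(ubyte*) &dst;
        auto s = cast(const(ubyte)*) &src;
        foreach (i; 0 .. T.sizeof)
            d[i] = s[i];
    }
    else
    {
        // Larger type: copy machine words, then the odd tail bytes.
        enum nwords = T.sizeof / size_t.sizeof;
        auto d = cast(size_t*) &dst;
        auto s = cast(const(size_t)*) &src;
        foreach (i; 0 .. nwords)
            d[i] = s[i];
        static if (T.sizeof % size_t.sizeof != 0)
        {
            auto db = cast(ubyte*) &dst;
            auto sb = cast(const(ubyte)*) &src;
            foreach (i; nwords * size_t.sizeof .. T.sizeof)
                db[i] = sb[i];
        }
    }
}
```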


[...]
> However, DMD won't do the right thing.

Honestly, at this point I don't even care.


> I guess others are thinking that we'd just re-implement `void*
> memcpy(void*, const void*, size_t)` in D and we'd throw in a runtime
> call to `memcpy([0], [0], T.sizeof())`.  That's
> ridiculous.  What I want to do is use the type information to generate
> an optimal implementation (considering size and alignment) that DMD
> will be forced to inline with `pragma(inline)`.

It could be possible to select multiple different memcpy implementations
by statically examining the properties of T.  I think that might be one
advantage D could have over just calling libc's memcpy.  But you have to
be very careful not to outdo the compiler's optimizer so that it doesn't
recognize it as memcpy and fails to apply what would otherwise be a
routine optimization pass.


> That implementation can also take into consideration target features
> such as SIMD.  I don't believe the code will be complex, and I expect
> it to perform at least as well as the C implementation.  My initial
> tests show that it will actually outperform the C implementation, but
> that could be a problem with my tests.  I'm still researching it.

Actually, if you want to compete with the C implementation, you might
find that things could get quite hairy. Maybe not with memcpy, but other
functions like memchr have very clever hacks to speed it up that you
probably wouldn't think of without reading C library source code. There
may also be subtle 

Re: utiliD: A library with absolutely no dependencies for bare-metal programming and bootstrapping other D libraries

2019-05-10 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, May 11, 2019 at 12:23:31AM +, Mike Franklin via 
Digitalmars-d-announce wrote:
[...]
> Also, take a look at this data:
> https://forum.dlang.org/post/jdfiqpronazgglrkm...@forum.dlang.org  Why
> is DMD making 48,000 runtime calls to memcpy to copy 8 bytes of data?
> Many of those calls should be inlined.  I see opportunity for
> improvement there.
[...]

When it comes to performance, I've essentially given up looking at DMD
output. DMD's inliner gives up far too easily, leading to a lot of calls
that aren't inlined when they really should be, and DMD's optimizer does
not have loop unrolling, which excludes a LOT of subsequent
optimizations that could have been applied.  I wouldn't base any
performance decisions on DMD output. If LDC or GDC produces non-optimal
code, then we have cause to do something. Otherwise, IMO we're just
uglifying D code and making it unmaintainable for no good reason.


T

-- 
Recently, our IT department hired a bug-fix engineer. He used to work for 
Volkswagen.


Re: utiliD: A library with absolutely no dependencies for bare-metal programming and bootstrapping other D libraries

2019-05-10 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, May 10, 2019 at 05:16:24PM +, Mike Franklin via 
Digitalmars-d-announce wrote:
[...]
> I've studied the ARM implementation of memcpy a little, and it's quite
> hard to follow.  I'd like for the D implementations to make such code
> easier to understand and maintain.
[...]

I'm not 100% sure it's a good idea to implement memcpy in D just to
prove that it can be done / just to say that we're independent of libc.
Libc implementations of fundamental operations, esp. memcpy, are usually
optimized to next week and back for the target architecture, taking
advantage of the target arch's quirks to maximize performance. Not to
mention that advanced compiler backends recognize calls to memcpy and
can optimize it in ways they can't optimize a generic D function they
fail to recognize as being equivalent to memcpy. I highly doubt a
generic D implementation could hope to beat that, and it's a little
unrealistic, given our current manpower situation, for us to be able to
optimize it for each target arch ourselves.


> On Friday, 10 May 2019 at 05:20:59 UTC, Eugene Wissner wrote:
[...]
> > Whereby I should say that tanya‘s range definitions differ from
> > Phobos.
[..]

I'm a bit uncomfortable with having multiple, incompatible range
definitions.  While it's arguable whether the Phobos definition is the
best, shouldn't we instead be focusing on improving the *standard*
definition of ranges, rather than balkanizing the situation by
introducing multiple, incompatible definitions just because?  It's one
thing for Andrei to propose a std.v2 that, ostensibly, might have a new,
hopefully improved, range API, deprecating the current definition; it's
another thing to have multiple alternative, competing definitions in
libraries that user code can choose from.  That would be essentially
inviting the Lisp Curse.


T

-- 
Life would be easier if I had the source code. -- YHL


Re: DConf 2019 Livestream

2019-05-09 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 09, 2019 at 01:54:31AM -0400, Nick Sabalausky (Abscissa) via 
Digitalmars-d-announce wrote:
[...]
> This sort of stuff happens literally EVERY year! At this point, you
> can pretty much guarantee that for any Dconf, Day 1's keynote doesn't
> get professionally livestreamed, if it's recorded at all. At the very
> LEAST, it makes us look bad.
> 
> Is there SOMETHING we can do about this moving forward? Maybe use
> Dconf/Dfoundation funds to hire a proven video crew not reliant on
> venue, or something...?

+1. This repeated unreliability of streaming/recording is embarrassing.
We should just use our own video crew next DConf. *After* testing
everything on-venue *before* the actual start of the conference, so that
any issues are noticed and addressed beforehand.


T

-- 
There is no gravity. The earth sucks.


Re: DMD metaprogramming enhancement

2019-04-25 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Apr 25, 2019 at 11:41:32PM +, Suleyman via Digitalmars-d-announce 
wrote:
> Hello everyone,
> 
> I am happy to announce that in the next DMD release you will be able
> to more freely enjoy your metaprograming experience now that a
> long-standing limitation has been lifted.
> 
> You can now instantiate local and member templates with local symbols.
[...]

That's very nice.  Which PR was it that implemented this?


T

-- 
Live for a century, learn for a century -- and you'll still die a fool. -- Russian proverb


Re: Beta 2.086.0

2019-04-20 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Apr 20, 2019 at 03:47:08PM +, Andre Pany via Digitalmars-d-announce 
wrote:
[...]
> Thank you so much. The -lowmem switch finally enables usage of D in
> CloudFoundry (your application is usually compiled on CloudFoundry and
> you very likely have a limit of 1024 MB).
[...]

Oh goodie!  Finally dmd will no longer be a laughing stock on low-memory
machines. Very glad to hear of -lowmem.


T

-- 
Winners never quit, quitters never win. But those who never quit AND never win 
are idiots.


Re: Phobos now compiling with -dip1000

2019-03-22 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Mar 23, 2019 at 12:01:49AM -0400, Nick Sabalausky (Abscissa) via 
Digitalmars-d-announce wrote:
> On 3/22/19 11:06 PM, Walter Bright wrote:
> > Many thanks to Sebastian Wilzbach, Nicholas Wilson, Mike Franklin,
> > and others!
> > 
> > It's been a long and often frustrating endeavor, but we made it and
> > I'm very pleased with the results.
[...]
> Ie DIP1000: "Scoped Pointers": "...provides a mechanism to guarantee
> that a reference cannot escape lexical scope" in large part to aid
> non-GC memory management.
> 
> With that aside, this does indeed sound like a great milestone (not
> that I doubted!). Kudos and congrats all around!

Does that mean -dip1000 will become the default compiler behaviour in
the near future?

Also, does it only apply to @safe code, so that I have to start
annotating stuff with @safe in order to benefit from it?


T

-- 
Meat: euphemism for dead animal. -- Flora


Re: DConf 2019 Schedule

2019-03-18 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Mar 18, 2019 at 11:41:14AM -0700, Walter Bright via 
Digitalmars-d-announce wrote:
> On 3/18/2019 8:55 AM, Robert M. Münch wrote:
> > Typo in Walter's abstract: ... "D supports a number of techniques
> > for allocating meory." => memory
> 
> I should stop trusting the cat to review my work.

At least it didn't come out as meowry... :D


T

-- 
Computerese Irregular Verb Conjugation: I have preferences.  You have biases.  
He/She has prejudices. -- Gene Wirchenko


Re: The D Programming Language has been accepted as a GSoC 2019 organization

2019-02-27 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Feb 27, 2019 at 01:04:42PM -0500, Nick Sabalausky (Abscissa) via 
Digitalmars-d-announce wrote:
[...]
> Although frankly, I have to admit, this whole "Fast code, fast" thing
> is complete and utter rubbish compared to the "Better C++" that we've
> now decided to be politically incorrect (very ironically, despite
> active promotion of "betterC").

That fast-fast-fast slogan makes me cringe every time I see it. I try
not to look at it every time I go to dlang.org, lest I throw up. If it
weren't for the fact that D is actually technically superior to many
other alternatives, I might have left D on that account alone; it's
*that* bad.

It surely can't have escaped the more observant among us the irony that
the flagship D compiler, dmd, is the antithesis of that slogan as far as
codegen quality is concerned.  Thankfully, ldc/gdc comes to the rescue
on the codegen front, otherwise this slogan would bear far more
resemblance to fast food than I find palatable -- fast food fast, who
cares if it's unhealthy and fattening AKA compile code fast, who cares
if it produces slow executables.

But since the PTBs have decreed it, and I really don't care enough about
marketing to want to push for a change, I just shut one eye and carry on
with the more important things in life. *shrug*


T

-- 
Without outlines, life would be pointless.


Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-25 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Feb 24, 2019 at 08:59:49PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
[...]
> An interesting manifestation of this uselessness in C++ is the notion
> of "logical const", where a supposedly "const" value is lazily set to
> a value upon first use. I.e. it isn't const, it's just pretend const.

I disagree.  Logical const means the outside world can't tell that the
object has changed, because only a constant value is ever seen from
outside.  This is the basis of lazy initialization (which is part of the
concept of lazy evaluation), an important feature of FP style code, and
something that D does not support.

D's relaxed version of purity -- a function is pure if the outside world
can't see any impure semantics -- makes it much more widely applicable
than a strict interpretation of purity as in a functional language.
Logical const is the same idea in the realm of mutability -- though I
don't necessarily agree with C++'s anemic implementation of it.  What D
could really benefit from is a statically-verifiable way of lazily
initializing something that is const to the outside world.
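The kind of thing meant by lazily initializing something that is const to the outside world, sketched (hypothetical; in today's D, get() cannot be marked const without casting):

```d
/// Logical const, sketched: the cached value mutates exactly once,
/// but callers only ever observe a single stable value.  D's const
/// cannot express this today -- get() would have to cast const away.
struct Lazy(T)
{
    private T delegate() compute;
    private T value;
    private bool done;

    this(T delegate() dg) { compute = dg; }

    ref T get()   // would be marked const under logical const
    {
        if (!done)
        {
            value = compute();
            done = true;
        }
        return value;
    }
}
```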

A derived problem with D's const is the inability to express a cached
const object.  You can't cache the object because it's const, which
greatly limits the scope of usability of const in large data structures.

The same limitation makes ref-counting such a huge challenge to
implement in D.  There is simply no way to associate a refcount with a
const object without jumping through hoops and/or treading into UB
territory by casting away const. There is no way to express that the
refcount is mutable but the rest of the object isn't. Well, you *can*
express this if you use circumlocutions like:

    struct RefCounted(T) {
        int refcount;
        const(T) payload;
    }

but that's worthless in generic code because const(RefCounted!T) !=
RefCounted!(const T). So you have to special-case every generic function
that needs to work with this type, and the special cases percolate
through the entire codebase, uglifying the code and forcing generic
functions that shouldn't need to know about RefCounted to have to know
about it so that they can work with it.
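A tiny sketch of that limitation (illustrative only): once the handle is const, the refcount freezes along with the payload.

```d
/// Naive refcounted wrapper: there's no way to say "the refcount is
/// mutable but the payload isn't", so const disables retain entirely.
struct RC(T)
{
    int refcount;
    T payload;

    void retain() { ++refcount; }   // mutates => uncallable on const
}
```

E.g. `const c = RC!int(1, 42); c.retain();` fails to compile, which is why real implementations end up casting away const.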


Because of these limitations, const is really only useful in low-level
modules of limited scope, in simple, self-contained data structures.
Higher-level, larger data structures are basically unusable with D's
const because lazy initialization and caching are not possible without
treading into UB territory by casting.  I'm not going to argue that
C++'s version of const is any better -- because non-enforceable const is
worthless, like you said -- but let's not kid ourselves that D's const
is that much better.  D's const is really only usable in very limited
situations, and there are many things for which it's unusable even
though logically it *could* have been applicable.


T

-- 
Two wrongs don't make a right; but three rights do make a left...


Re: intel-intrinsics v1.0.0

2019-02-14 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Feb 14, 2019 at 10:15:19PM +, Guillaume Piolat via 
Digitalmars-d-announce wrote:
[...]
> I think ispc is interesting, and a very D-ish thing to have would be
> an ispc-like compiler at CTFE that outputs LLVM IR (or assembly or
> intel-intrinsics). That would break the language boundary and allows
> inlining. Though probably we need newCTFE for this, as everything
> interesting seems to need newCTFE :) And it's a gigantic amount of
> work.

Much as I love the idea of generating D code at compile-time and look
forward to newCTFE, there comes a point when I'd really rather just run
the DSL through some kind of preprocessing (i.e., compile with ispc) as
part of the build, then link the result to the D code, rather than
trying to shoehorn everything into (new)CTFE.


T

-- 
You have to expect the unexpected. -- RL


Re: gtkDcoding Blog: Post #0009 - Boxes

2019-02-14 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Feb 14, 2019 at 04:33:55PM +, Dejan Lekic via 
Digitalmars-d-announce wrote:
[...]
> (no, not everyone uses news clients and threaded mode)...

They should. ;-)

Non-threaded mail/news clients are fundamentally b0rken. :-P


T

-- 
Caffeine underflow. Brain dumped.


Re: DCD 0.11.0 released

2019-02-11 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Feb 11, 2019 at 08:40:32PM +, notna via Digitalmars-d-announce 
wrote:
[...]
> Reported so many times & still there (Win10 here);
> 
> Installing DCD
> Downloading from 
> https://github.com/dlang-community/DCD/releases/download/v0.10.2/dcd-v0.10.2-windows-x86.zip
> to C:\Users\\AppData\Roaming\code-d\bin
> 
> Failed installing: std.net.curl.CurlException@std\net\curl.d(4340):
> Peer certificate cannot be authenticated with given CA certificates on
> handle

This seriously needs to be part of the CI so that we *know* for sure it
works.


T

-- 
Valentine's Day: an occasion for florists to reach into the wallets of nominal 
lovers in dire need of being reminded to profess their hypothetical love for 
their long-forgotten.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-02-08 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Feb 09, 2019 at 01:08:55AM +, bitwise via Digitalmars-d-announce 
wrote:
> On Saturday, 9 February 2019 at 00:04:20 UTC, Dennis wrote:
> > On Friday, 8 February 2019 at 23:58:49 UTC, H. S. Teoh wrote:
> > > Yep, the moral of the story is, if codegen quality is important to
> > > you, use ldc (and presumably gdc too) rather than dmd.
> > 
> > That's definitely true, but that leaves the question whether
> > lowering rvalue references to lambdas is acceptable. There's the
> > 'dmd for fast builds, gdc/ldc for fast code' motto, but if your
> > debug builds of your game make it run at 15 fps it becomes unusable.
> > I don't want the gap between dmd and compilers with modern back-ends
> > to widen.
> 
> Since the user doesn't explicitly place the lambda in their code,
> wouldn't it be justifiable for the compiler to take it back out again
> at a later step in compilation, even in debug mode?

Using lowering to lambdas as a way of defining semantics is not the same
thing as actually using lambdas to implement a feature in the compiler!

While it can be convenient to do the latter as a first stab, I'd expect
that the optimizer could make use of special knowledge available in the
compiler to implement this more efficiently. Since the compiler will
always use a fixed pattern for the lowering, the backend could detect
this pattern and optimize accordingly.  Or the compiler implementation
could lower it directly to something more efficient in the first place.
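For concreteness, the lowering under discussion looks roughly like this (an illustrative sketch only; the temporary's name and the exact form are up to the compiler, not taken verbatim from the DIP text):

```d
void fun(ref int x) { x += 1; }

void main()
{
    // Source as the user writes it:   fun(10);
    // The DIP defines it as if lowered to an immediately-called lambda
    // that materialises the rvalue into a named temporary:
    (){ int tmp = 10; fun(tmp); }();
}
```

Since the compiler always emits this same shape, a backend could pattern-match it, or the frontend could skip the lambda and emit the temporary directly.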


T

-- 
If you look at a thing nine hundred and ninety-nine times, you are perfectly 
safe; if you look at it the thousandth time, you are in frightful danger of 
seeing it for the first time. -- G. K. Chesterton


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-02-08 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Feb 09, 2019 at 12:04:20AM +, Dennis via Digitalmars-d-announce 
wrote:
> On Friday, 8 February 2019 at 23:58:49 UTC, H. S. Teoh wrote:
> > Yep, the moral of the story is, if codegen quality is important to
> > you, use ldc (and presumably gdc too) rather than dmd.
> 
> That's definitely true, but that leaves the question whether lowering
> rvalue references to lambdas is acceptable. There's the 'dmd for fast
> builds, gdc/ldc for fast code' motto, but if your debug builds of your
> game make it run at 15 fps it becomes unusable. I don't want the gap
> between dmd and compilers with modern back-ends to widen.

TBH, I've been finding that ldc compilation times aren't all that bad
compared to dmd.  It's definitely slightly slower, but it's not anywhere
near the gap between, say, dmd and g++.

Recently I've been quite tempted to replace dmd with ldc as my main D
compiler, esp. now that ldc releases are essentially on par with dmd
releases in terms of release schedule of a particular language version.
The slowdown in compilation times isn't enough to offset the benefits,
as long as you're not compiling with, say, -O3 which *would* make the
ldc optimizer run slower (but with the huge benefit of significantly
better codegen -- I've seen performance improvements of up to ~200% with
ldc -O3 vs. dmd -O -inline).

And template-heavy code is slow across all D compilers anyway, so the
relatively small compilation time difference between dmd and ldc doesn't
really matter that much anymore once you have a sufficiently large
codebase with heavy template use.


T

-- 
What doesn't kill me makes me stranger.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-02-08 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Feb 08, 2019 at 03:42:51PM -0800, H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Fri, Feb 08, 2019 at 11:34:47PM +, Dennis via Digitalmars-d-announce 
> wrote:
> > On Friday, 8 February 2019 at 23:02:34 UTC, Nicholas Wilson wrote:
> > > Immediately called lambdas are always inlined.
> > 
> > ```
> > extern(C) void main() {
> > int a = (() => 1)();
> > }
> > ```
[...]
> Does LDC/GDC inline it?
> 
> I no longer trust dmd for codegen quality. :-/
[...]

Just checked: LDC does inline it.  In fact, LDC compiles the whole thing
out and just has `ret` for main(). :-D  Forcing LDC not to elide the
whole thing by inserting a writeln(a) call reveals that the lambda is
indeed inlined.
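A minimal sketch of that check (inspect the generated asm with something like `ldc2 -O -output-s`; the flag spelling is an assumption):

```d
import std.stdio : writeln;

void main()
{
    int a = (() => 1)();
    writeln(a);  // keeps `a` observable, so LDC can't elide the whole body
}
```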

Yep, the moral of the story is, if codegen quality is important to you,
use ldc (and presumably gdc too) rather than dmd.


T

-- 
Freedom of speech: the whole world has no right *not* to hear my spouting off!


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-02-08 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Feb 08, 2019 at 11:34:47PM +, Dennis via Digitalmars-d-announce 
wrote:
> On Friday, 8 February 2019 at 23:02:34 UTC, Nicholas Wilson wrote:
> > Immediately called lambdas are always inlined.
> 
> ```
> extern(C) void main() {
> int a = (() => 1)();
> }
> ```
> 
> dmd -inline -O -release -betterC
> 
> asm:
> ```
> main:
>   push RBP
>   mov  RBP,RSP
>   call qword ptr pure nothrow @nogc @safe int onlineapp.main().__lambda1()@GOTPCREL[RIP]
>   xor  EAX,EAX
>   pop  RBP
>   ret
> ```
> 
> https://run.dlang.io/is/lZW9B6
> 
> Still a lambda call :/

Does LDC/GDC inline it?

I no longer trust dmd for codegen quality. :-/


T

-- 
Customer support: the art of getting your clients to pay for your own 
incompetence.


Re: NEW Milestone: 1500 packages at code.dlang.org

2019-02-07 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Feb 07, 2019 at 05:06:09PM +, Seb via Digitalmars-d-announce wrote:
> On Thursday, 7 February 2019 at 16:40:08 UTC, Anonymouse wrote:
> > 
> > What was the word on the autotester (or similar) testing popular
> > packages as part of the test suite?
> 
> This has been done for more than a year now for the ~50 most popular
> packages: https://buildkite.com/dlang
> 
> In my opinion this is one of the main reasons why the last releases
> were so successful (=almost no regressions).

That's awesome. This is the way to go.  Congrats to everyone who helped
pull this off.


T

-- 
Freedom of speech: the whole world has no right *not* to hear my spouting off!


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-31 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 31, 2019 at 10:26:39PM +, jmh530 via Digitalmars-d-announce 
wrote:
> On Thursday, 31 January 2019 at 21:57:21 UTC, Steven Schveighoffer wrote:
[...]
> > That being said, you can look at the fact that most people don't
> > even know about this problem, even seasoned veterans, as a sign that
> > it's really not a big problem.
> > 
> 
> The way you put it makes it sound like a bug...
> 
> I don't know if it helps, but below compiles without error.
> 
> struct Foo
> {
>    private int _x;
>    int* x() { return &_x; }
> }
> 
> struct Bar
> {
>    private Foo _y;
>    Foo* y() { return &_y; }
>    void y(Foo foo) { _y = foo; }
> }
> 
> void main() {
> Foo a = Foo(1);
> assert(*a.x == 1);
> *a.x *= 2;
> assert(*a.x == 2);
> 
> Bar b;
> b.y = Foo(1);
> assert(*b.y.x == 1);
> *b.y.x *= 2;
> assert(*b.y.x == 2);
> }

Why is it a problem that this code compiles without error?


T

-- 
Perhaps the most widespread illusion is that if we were in power we would 
behave very differently from those who now hold it---when, in truth, in order 
to get power we would have to become very much like them. -- Unknown


Re: 5 reasons the D programming language is a great choice for development

2019-01-30 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 08:34:58PM +, Simen Kjærås via 
Digitalmars-d-announce wrote:
> I found this article espousing D's strengths today:
> https://opensource.com/article/17/5/d-open-source-software-development

It appears to be written by our very own `aberba`, who also frequently
participates in these forums.

Good read!


T

-- 
Give a man a fish, and he eats once. Teach a man to fish, and he will sit 
forever.


Re: DIP 1017--Add Bottom Type--Formal Assessment

2019-01-30 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 30, 2019 at 02:05:37PM +, Mike Parker via 
Digitalmars-d-announce wrote:
> Given the nature of the feedback in both review rounds this DIP has
> gone through, Walter has decided to reject his own DIP. He still
> believes there is a benefit to adding a bottom type to the language,
> but this proposal is not the way to go about it. He hopes to revisit
> the issue in the future.
[...]

Hopefully next time the help of people more qualified in type theory,
like Timon, will be solicited, so that a more consistent, logically
sound solution can be proposed.


T

-- 
Life is too short to run proprietary software. -- Bdale Garbee


Re: The New Fundraising Campaign

2019-01-19 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 19, 2019 at 03:28:12PM +, user1234 via Digitalmars-d-announce 
wrote:
> On Saturday, 19 January 2019 at 14:14:32 UTC, H. S. Teoh wrote:
> > On Sat, Jan 19, 2019 at 08:17:30AM +, Anonymouse via
> > Digitalmars-d-announce wrote:
> > > On Saturday, 19 January 2019 at 06:43:34 UTC, H. S. Teoh wrote:
> > > > [...]
> > [...]
> > > For us on the browser pages don't always load, though.
> > 
> > That's a valid complaint.  It would serve us well if the Foundation
> > can pay for dedicated hardware for the forum, instead of the current
> > machine that seems to get overloaded every so often.
> > 
> > Or if the problem is software, pay for someone to fix it or replace
> > it with something that doesn't have this problem.
[...]
> Yeah, I think the main problem is the database locks.
> People discussed that previously.

Yeah I vaguely remember that.

I wonder if it's worth it to split the database into an active part (for
recent threads) and an archive part (for older threads that are unlikely
to change). Most of the lookups will be in the smaller active part,
which hopefully will be more performant, and old posts will be migrated
to the archive to maintain a maximum active size.

But I could be misunderstanding the problem.


T

-- 
The right half of the brain controls the left half of the body. This means that 
only left-handed people are in their right mind. -- Manoj Srivastava


Re: Musicpulator - Library for analyzing and manipulating music - 0.0.2

2019-01-19 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 19, 2019 at 10:35:52AM +, bauss via Digitalmars-d-announce 
wrote:
> Happy to announce the first version of Musicpulator.
> 
> An open-source library for analyzing and manipulating music.
> 
> As of now only manual analysis and manipulation is possible, but in
> future versions this will change.
> 
> Please see the README.md for examples as there are a lot!
> 
> Github: https://github.com/UndergroundRekordz/Musicpulator
> 
> DUB: https://code.dlang.org/packages/musicpulator

Interesting.

Is there a way to import music, say from XML, for analysis?  Or is only
internal analysis available currently?


T

-- 
One reason that few people are aware there are programs running the internet is 
that they never crash in any significant way: the free software underlying the 
internet is reliable to the point of invisibility. -- Glyn Moody, from the 
article "Giving it all away"


Re: The New Fundraising Campaign

2019-01-19 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 19, 2019 at 08:17:30AM +, Anonymouse via Digitalmars-d-announce 
wrote:
> On Saturday, 19 January 2019 at 06:43:34 UTC, H. S. Teoh wrote:
> > This forum is very functional.  I would participate less in a forum
> > that requires loading up a browser to use. But then again, maybe
> > people would be happier if I wasn't around to blab about vim and
> > symmetry and why dub sux, so perhaps that might be for the better.
> > :-P
[...]
> For us on the browser pages don't always load, though.

That's a valid complaint.  It would serve us well if the Foundation can
pay for dedicated hardware for the forum, instead of the current machine
that seems to get overloaded every so often.

Or if the problem is software, pay for someone to fix it or replace it
with something that doesn't have this problem.


T

-- 
All problems are easy in retrospect.


Re: The New Fundraising Campaign

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Sat, Jan 19, 2019 at 03:11:55AM +, bachmeier via Digitalmars-d-announce 
wrote:
> On Friday, 4 January 2019 at 10:30:07 UTC, Martin Tschierschke wrote:
> 
> > Cool, what a wonderful start to the year 2019!
> > A big thank you to all pushing the development of D with money and time!
> > What next Mike?
> 
> Hopefully a campaign to put together a working forum. Would you invest major
> resources in a language that doesn't even have a usable forum?

This forum is very functional.  I would participate less in a forum that
requires loading up a browser to use. But then again, maybe people would
be happier if I wasn't around to blab about vim and symmetry and why dub
sux, so perhaps that might be for the better. :-P


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft system 
vulnerabilities cannot touch Linux---is priceless. -- Frustrated system 
administrator.


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 09:41:14PM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2019-01-18 21:23, H. S. Teoh wrote:
> 
> > Haha, that's just an old example from back in the bad ole days where
> > NTP syncing is rare, and everyone's PC is slightly off anywhere from
> > seconds to minutes (or if it's really badly-managed, hours, or maybe
> > the wrong timezone or whatever).
> 
> I had one of those issues at work. One day when I came in to work it
> was suddenly not possible to SSH into a remote machine. It worked the
> day before. Turns out the ntpd daemon was not running on the remote
> machine (for some reason) and we're using Kerberos with SSH, that
> means if the clocks are too much out of sync it will not be able to
> login. That was a ... fun, debugging experience.
[...]

Ouch.  Ouch!  That must not have been a pleasant experience in any sense
of the word.  Knowing all too well how these things tend to go, the
errors you get from the SSH log probably were very unhelpful, mostly
stemming from C's bad ole practice of returning a generic unhelpful
"failed" error code for all failures indiscriminately.  I had to work on
SSH-based code recently, and it's just ... not a nice experience overall
due to the way the C code was written.


T

-- 
GEEK = Gatherer of Extremely Enlightening Knowledge


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 08:03:48PM +, Mark via Digitalmars-d-announce wrote:
[...]
> Why not do away with AliasSeq and use strings all the way?
> 
> string Constify(string type)
> {
> // can add input checks here
> return "const(" ~ type ~ ")";
> }
> 
> void main()
> {
> import std.algorithm : map;
> enum someTypes = ["int", "char", "bool"];
> enum constTypes = map!Constify(someTypes);
> mixin(constTypes[0] ~ "myConstInt = 42;"); // int myConstInt = 42;
> }
> 
> Represent types as strings, CTFE them as you see fit, and output a
> string that can then be mixin'ed to use the actual type. :)

That would work, but it would also suffer from all the same problems as
macro-based programming in C.  The compiler would be unable to detect
when you accidentally pasted type names together where you intended to
be separate, the strings may not actually represent real types, and
generating code from pasting / manipulating strings is very error-prone.
And you could write very unmaintainable code like pasting partial tokens
together as strings, etc., which makes it hard for anyone else
(including yourself after 3 months) to understand just what the code is
trying to do.

Generally, you want some level of syntactic / semantic enforcement by
the compiler when you manipulate lists (or whatever other structures) of
types.
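For contrast, the same transformation done on a type sequence keeps the compiler in the loop: a typo becomes a compile error rather than a silently mis-pasted string. A small sketch using std.meta:

```d
import std.meta : AliasSeq, staticMap;

// Compile-time "map": const-qualify each type in a sequence.
alias Constify(T) = const(T);

alias Types      = AliasSeq!(int, char, bool);
alias ConstTypes = staticMap!(Constify, Types);

static assert(is(ConstTypes[0] == const(int)));
// alias Bad = staticMap!(Constify, AliasSeq!(itn));  // typo: compile error

void main() {}
```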


T

-- 
INTEL = Only half of "intelligence".


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 08:03:09PM +, Neia Neutuladh via 
Digitalmars-d-announce wrote:
> On Fri, 18 Jan 2019 11:43:58 -0800, H. S. Teoh wrote:
> > (1) it often builds unnecessarily -- `touch source.d` and it
> > rebuilds source.d even though the contents haven't changed; and
> 
> Timestamp-based change detection is simple and cheap. If your
> filesystem supports a revision id for each file, that might work
> better, but I haven't heard of such a thing.

Barring OS/filesystem support, there's recent OS features like inotify
that lets a build daemon listen for changes to files within a
subdirectory. Tup, for example, uses this to make build times
proportional to the size of the changeset rather than the size of the
entire workspace.  I consider this an essential feature of a modern
build system.

Timestamp-based change detection also does needless work even when there
*is* a change.  For example, edit source.c, change a comment, and make
will recompile it all the way down -- .o file, .so file or executable,
all dependent targets, etc.. Whereas a content-based change detection
(e.g. md5 checksum based) will stop at the .o step because the comment
did not cause the .o file to change, so further actions like linking
into the executable are superfluous and can be elided.  For small
projects the difference is negligible, but for large-scale projects this
can mean the difference between a few seconds -- usable for high
productivity code-compile-test cycle -- and half an hour: completely
breaks the productivity cycle.
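A minimal sketch of the content-based check (the helper is hypothetical; a real build tool would persist the recorded hashes between runs):

```d
import std.digest.md : md5Of;
import std.file : read;

/// True if `path`'s current content hash differs from the one recorded
/// at the last successful build, i.e. dependents actually need rebuilding.
bool contentChanged(string path, ubyte[16] recordedHash)
{
    ubyte[16] current = md5Of(cast(const(ubyte)[]) read(path));
    return current != recordedHash;
}
```

A comment-only edit to a .c file then changes the source hash but not the .o hash, so linking and everything downstream is skipped.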


> If you're only dealing with a small number of small files,
> content-based change detection might be a reasonable option.

Content-based change detection is essential IMO. It's onerous if you use
the old scan-the-entire-source-tree model of change detection; it's
actually quite practical if you use a modern inotify- (or equivalent)
based system.


> > (2) it often fails to build necessary targets -- if for whatever
> > reason your system clock is out-of-sync or whatever, and a newer
> > version of source.d has an earlier date than a previously-built
> > object.
> 
> I'm curious what you're doing that you often have clock sync errors.

Haha, that's just an old example from back in the bad ole days where NTP
syncing is rare, and everyone's PC is slightly off anywhere from seconds
to minutes (or if it's really badly-managed, hours, or maybe the wrong
timezone or whatever).  The problem is most manifest when networked
filesystems are involved.

These days, clock sync isn't really a problem anymore, generally
speaking, but there's still something else about make that makes it fail
to pick up changes.  I still regularly have to `make clean;make`
makefile-based projects just to get the lousy system to pick up the
changes.  I don't have that problem with more modern build systems.
Probably it's an issue of undetected dependencies.


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't 
seem to remove the bugs on my system! -- Mike Dresser


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 06:59:59PM +, JN via Digitalmars-d-announce wrote:
[...]
> The trick with makefiles is that they work well for a single
> developer, or a single project, but become an issue when dealing with
> multiple libraries, each one coming with its own makefile (if you're
> lucky, if you're not, you have multiple CMake/SCons/etc. systems to
> deal with). Makefiles are very tricky to do crossplatform, especially
> on Windows, and usually they aren't enough, I've often seen people use
> bash/python/ruby scripts to drive the building process anyway.

Actually, the problems I had with makefiles come from within single
projects.  One of the most fundamental problems, which is also a core
design, of Make is that it's timestamp-based.  This means:

(1) it often builds unnecessarily -- `touch source.d` and it rebuilds
source.d even though the contents haven't changed; and

(2) it often fails to build necessary targets -- if for whatever reason
your system clock is out-of-sync or whatever, and a newer version of
source.d has an earlier date than a previously-built object.

Furthermore, makefiles generally do not have a global view of your
workspace, so builds are not reproducible (unless you go out of your way
to do it).  Running `make` after editing some source files does not
guarantee you'll end up with the same executables as if you checked in
your changes, did a fresh checkout, and ran `make`.  I've had horrible
all-nighters looking for heisenbugs that have no representation in the
source code, but are caused by make picking up stale object files from
who knows how many builds ago.  You end up having to `make clean; make`
every other build "just to be sure", which is really stupid in this day
and age.  (And even `make clean` does not guarantee you get a clean
workspace -- too many projects than I care to count exhibit this
problem.)

Then there's parallel building, which again requires explicit effort,
macro hell typical of tools from that era, etc..  I've already ranted
about this at great lengths before, so I'm not going to repeat them
again.  But make is currently near (if not at) the bottom of my list of
build tools for many, many reasons.


Ultimately, as I've already said elsewhere, what is needed is a
*standard tool-independent dependency graph declaration* attached to
every project, that captures the dependency graph of the project in a
way that any tool that understands the standard format can parse and act
on.  At the core of it, every build system out there is essentially just
an implementation of a directed acyclic graph walk. A standard problem
with standard algorithms to solve it.  But everybody rolls their own
implementation gratuitously incompatible with everything else, and so we
find ourselves today with multiple, incompatible build systems that, in
large-scale software, often has to somehow co-exist within the same
project.
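The core of it really is that small: a toy post-order walk over a hypothetical dependency map (names are illustrative, not any real tool's API):

```d
/// Build `target` after all of its dependencies, visiting each node once.
void build(string target, string[][string] deps, ref bool[string] done)
{
    if (target in done) return;
    foreach (dep; deps.get(target, null))
        build(dep, deps, done);
    done[target] = true;    // a real tool runs the build action here
}

void main()
{
    bool[string] done;
    auto deps = ["app": ["main.o"], "main.o": ["main.d"]];
    build("app", deps, done);
    assert("main.d" in done && "app" in done);
}
```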


> The big thing dub provides is package management. Having a package
> manager is an important thing for a language nowadays. Gone are the
> days of hunting for library source, figuring out where to put
> includes. Just add a line in your dub.json file and you have the
> library. Need to upgrade to newer version? Just change the version in
> dub.json file. Need to download the problem from scratch? No problem,
> dub can use the json file to download all the dependencies in proper
> versions.

Actually, I have the opposite problem.  All too often, my projects that
depend on some external library become uncompilable because said library
has upgraded from version X to version Z, and version X doesn't exist
anymore (the oldest version is now Y), or upstream made an incompatible
change, or the network is down and dub can't download the right version,
etc..

These days, I'm very inclined to just download the exact version of the
source code that I need, and include it as part of my source tree, just
so there will be no gratuitous breakage due to upstream changes, old
versions being no longer supported, or OS changes that break pre-shipped
.so files, and all of that nonsense.  Just compile the damn thing from
scratch from the exact version of the sources that you KNOW works --
sources that you have in hand RIGHT HERE instead of somewhere out there
in the nebulous "cloud" which happens to be unreachable right now,
because your network is down and in order to fix the network you need to
compile this tool that depends on said missing sources.

I understand it's convenient for the package manager to "automatically"
install dependencies for you, refresh to the latest version, and
what-not. But frankly, I find that the amount of effort it takes to
download the source code of some library and setup the include paths
manually is miniscule, compared to the dependency hell I have to deal
with in a system like dub.

These days I almost automatically write off 3rd party libraries that
have too many dependencies.  The best kind of 3rd party code is the
standalone kind, 

Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 12:06:54PM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 1/18/19 11:42 AM, Ron Tarrant wrote:
[...]
> > Just to set the record straight, I only had access to that Coleco
> > Adam for the few weeks I was in that Newfoundland outport. Within a
> > year, I too had my very own C-64 plugged into a monster Zenith
> > console job.  Remember those? I don't remember what I paid for a
> > used C-64, but the Zenith 26" was $5 at a garage sale up the street
> > and another $5 for delivery.
> 
> I had to use my parents' TV in the living room :) And I was made to
> learn typing before I could play games on it, so cruel...
[...]

Wow, what cruelty! ;-)  The Apple II was my first computer ever, and I
spent 2 years playing computer games on it until they were oozing out of
my ears.  Then I got so fed up with them that I decided I'm gonna write
my own.  So began my journey into BASIC, and then 6502 assembly, etc..

A long road later, I ended up here with D.


T

-- 
This is a tpyo.


Re: D-lighted, I'm Sure

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 02:29:14PM +, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> The blog:
> https://dlang.org/blog/2019/01/18/d-lighted-im-sure/
[...]

Very nice indeed!  Welcome aboard, Ron!

And wow... 6502?  That's what I grew up on too!  I used to remember most
of the opcodes by heart... though nowadays that memory has mostly faded
away.  The thought of it still evokes nostalgic feelings, though.

I'm also not a big fan of dub, but I'm in the minority around these
parts.  Having grown up on makefiles and dealt with them in a large
project at my day job, I've developed a great distaste for them, and
nowadays the standard build tool I reach for is SCons.  Though possibly
in the not-so-distant future I might start using something more scalable
like Tup, or Button, written by one of our very own D community members.
But for small projects, just plain ole dmd is Good Enough(tm) for me.

I won't bore you with my boring editor, vim (with no syntax highlighting
-- yes I've been told I'm crazy, and in fact I agree -- just plain ole
text, with little things like autoindenting, no fancy IDE features --
Linux is my IDE, the whole of it :-P).  Vim users seem to be out in force
around these parts for some reason, besides the people clamoring for a
"proper" IDE, but I suspect I'm the only one who deliberately turns
*off* syntax highlighting, and indeed, any sort of color output from dmd
or any other tools (I find it distracting). So don't pay too much heed
to what I say, at least on this subject. :-D


T

-- 
Живёшь только однажды.


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jan 18, 2019 at 11:23:11AM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2019-01-17 23:44, H. S. Teoh wrote:
> 
> > YES!  This is the way it should be.  Type-tuples become first class
> > citizens, and you can pass them around to functions and return them
> > from functions
> No no no, not only type-tuples, you want types to be first class
> citizens.  This makes it possible to store a type in a variable, pass
> it to and return from functions. Instead of a type-tuple, you want a
> regular array of types.  Then it would be possible to use the
> algorithms in std.algorithm to manipulate the arrays. I really hate
> that today one needs to resort to things like staticMap and
> staticIndexOf.

Yes, that would be the next level of symmetry. :-D  Types as first class
citizens would eliminate another level of distinctions that leads to the
necessity of staticMap, et al.  But it will also require changing the
language in a much more fundamental, invasive way.

So I'd say, let's take it one step at a time.  Start with first-class
type-tuples, then once that's ironed out and working well, take it to
the next level and have first-class types.  Trying to leap from here to
there in one shot is probably a little too ambitious, with too high a
chance of failure.


[...]
> It would be awesome to be able to do things like this:
> 
> type foo = int;
> 
> type bar(type t)
> {
> return t;
> }
> 
> auto u = [byte, short, int, long].map!(t => t.unsigned).array;
> assert(u == [ubyte, ushort, uint, ulong];
[...]

Yes this would be awesome.  But in order to avoid unmanageable
complexity of implementation, all of this would have to be compile-time
only constructs.
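Today's closest spelling of that example, for comparison, is already compile-time only, via std.meta and std.traits:

```d
import std.meta : AliasSeq, staticMap;
import std.traits : Unsigned;

alias Signed    = AliasSeq!(byte, short, int, long);
alias Unsigneds = staticMap!(Unsigned, Signed);

static assert(is(Unsigneds[0] == ubyte));
static assert(is(Unsigneds[3] == ulong));

void main() {}
```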


T

-- 
Your inconsistency is the only consistent thing about you! -- KD


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-18 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 17, 2019 at 05:32:52PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
> On 1/17/2019 11:31 AM, H. S. Teoh wrote:
> > [...]
> 
> Thanks for the thoughtful and well-written piece.
> 
> But there is a counterpoint: symmetry in mathematics is one thing, but
> symmetry in human intuition is not. Anytime one is dealing in human
> interfaces, one runs into this.  I certainly did with the way imports
> worked in D. The lookups worked exactly the same for any sort of
> symbol lookup. I thought it was great.
> 
> But I was unable to explain it to others. Nobody could understand it
> when I said imported symbol lookup worked exactly like any lookup in a
> name space.  They universally said it was "unintuitive", filed bug
> reports, etc.  Eventually, I had to give it up. Now import lookup
> follows special different rules, people are happy, and I learned
> (again) that symmetry doesn't always produce the best outcomes.

Alas, it's true, it's true, 100% symmetry is, in the general case,
impossible to achieve.  If we wanted 100% mathematical symmetry, one
could argue lambda calculus is the best programming language ever,
because it's Turing complete, the syntax is completely uniform with no
quirky exceptions, and the semantics are very clearly defined with no
ambiguity anywhere.  Unfortunately, these very characteristics are also
what makes lambda calculus impossible to work with for anything but the
most trivial of programs. It's completely unmaintainable, extremely hard
to read, and has non-trivial semantics that vary wildly from the
smallest changes to the code.

For a human-friendly programming language, any symmetry must necessarily
be based on human expectations.  Unfortunately, as you learned, human
intuition varies from person to person, and indeed, is often
inconsistent even with the same person, so trying to maximise symmetry
in a way that doesn't become "counterintuitive" is a pretty tall order.

As somebody (perhaps you) said once, in Boeing's experience with
designing intuitive UIs, they discovered that what people consider
"intuitive" is partly colored by their experience, and their experience
is in turn shaped by the UIs they interact with.  So it's a feedback
loop, which means what's "intuitive" is not some static set of rules
(even allowing for arbitrarily complex rules), but it's a *moving
target*, the hardest thing to design for.  What's considered "intuitive"
today may be considered "totally counter-intuitive" 10 years from now.

In the case of imports, I'd argue that the problem is with how people
understand the word "import".  From a compiler's POV, the simplest, most
straightforward (and most symmetric!) definition is "pull in the symbols
into the local scope".  Unfortunately, that's not the understanding most
programmers have.  Perhaps in an older, bygone era people might have
been more open to that sort of definition, but in this day and age of
encapsulation and modularity, "pull symbols into the local scope" does
not adequately capture people's expectations: it violates encapsulation,
in the sense that symbols from the imported module would shadow local
symbols, which goes against the expectation that the local module is an
encapsulated thing, inviolate from outside interference.  It breaks the
symmetry that, everywhere else, outside code cannot interfere with local
symbols.
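This plays out concretely in D: a module-scope declaration takes
precedence over an imported symbol, and the imported one remains
reachable only via explicit qualification.  A minimal two-module sketch
(the module names `lib` and `app` are made up for illustration):

```d
// lib.d
module lib;
int count = 42;

// app.d
module app;
import lib;

int count = 1; // local declaration wins over the imported lib.count

void main()
{
    assert(count == 1);      // resolves to the local symbol
    assert(lib.count == 42); // the import is still reachable, fully qualified
}
```

So the imported symbol is "second class" in exactly the sense described:
it never silently displaces a local name.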

Consequently, the expectation is that imported symbols are somehow
"second class" relative to local symbols -- imported symbols don't
shadow local symbols (unless you explicitly ask for it), and thus
encapsulation is preserved (in some sense).  So we have here a conflict
between different axes of symmetry: the symmetry of every module being
an inviolate, self-contained unit (encapsulation), and the symmetry of
having the same rules for symbol lookup no matter where the symbol came
from.  It's a toss-up which axis of symmetry one should strive for, and
which one should be compromised.

I'd say the general principle ought to be that the higher-level symmetry
(encapsulation of modules) should override the lower-level symmetry (the
mechanics of symbol lookup).  But this is easy to say because hindsight
is 20/20; it's not so simple at the time of decision-making because it's
not obvious which symmetries are in effect and what their relative
importance should be.  And there's always the bugbear that symmetry from
the implementor's (compiler writer's) POV does not necessarily translate
to symmetry from the user's (language user's) POV.

Still, I'd say that in a general sense, symmetry ought to be a
relatively high priority as far as designing language features or
adding/changing features are concerned.  Adding a new feature with
little regard for how it interacts with existing features, what new
corner cases it might introduce, etc., is generally a bad idea. Striving
for maximal symmetry should at least give you a ballpark idea for where
things should be headed, 

Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-17 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 17, 2019 at 10:20:24PM +, Stefan Koch via 
Digitalmars-d-announce wrote:
> On Thursday, 17 January 2019 at 19:31:24 UTC, H. S. Teoh wrote:
[...]
> > Coming back to the D example at the end, I totally agree with the
> > sentiment that D templates, in spite of their significant
> > improvements over C++ syntax, ultimately still follow the same
> > recursive model. Yes, you can use CTFE to achieve the same thing at
> > runtime, but it's not the same thing, and CTFE cannot manipulate
> > template argument lists (aka AliasSeq aka whatever it is you call
> > them).  This lack of symmetry percolates down the entire template
> > system, leading to the necessity of the hack that Bartosz refers to.
> > 
> > Had template argument lists / AliasSeq been symmetric w.r.t. runtime
> > list manipulation, we would've been able to write a foreach loop
> > that manipulates the AliasSeq in the most readable way without
> > needing to resort to hacks or recursive templates.
> > 
> For 2 years I have pondered this problem, and I did come up with a
> solution.  It's actually not that hard to have CTFE interact with
> type-tuples.  You can pass them as function parameters, or return them
> if you wish.  Of course a type-tuple returning ctfe function, is
> compile-time only.

YES!  This is the way it should be.  Type-tuples become first class
citizens, and you can pass them around to functions and return them from
functions, the only stipulation being that they can only exist at
compile-time, so it's an error to use them at runtime.

In other words, they become symmetric to other built-in language types,
and can be manipulated by conventional means, instead of being an
oddball exception with special-case behaviour that requires special-case
syntax dedicated to manipulating them.  Again, the root of the problem
is asymmetry, and the solution is to make it symmetric.
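To make the idea concrete, here is a purely hypothetical sketch of what
such a type function might look like.  Nothing in this snippet is real D
today: the `alias[]` parameter/return notation and the whole feature are
speculative, loosely modeled on the type-function proposal being
discussed.

```d
// HYPOTHETICAL syntax -- type functions do not exist in current D.
// A compile-time-only function that takes a type-tuple and returns
// a filtered type-tuple, written like ordinary runtime code:
alias[] withoutPointers(alias[] types)
{
    alias[] result;
    foreach (T; types)          // a plain loop, not a recursive template
        static if (!is(T == U*, U))
            result ~= T;        // append to the type list like any array
    return result;
}
```

The point is the symmetry: the same `foreach`/append idioms used on
runtime arrays would apply to type lists, with the only restriction
being that the function can execute at compile time only.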


> This solved one more problem that ctfe has:
> helper functions required for ctfe can only be omitted from the
> binary, if you use the trick of putting them into a module which is
> the import path but never explicitly given on the command line.

Exactly.  Yet another problem caused by the asymmetry of type-tuples
w.r.t. other built-in types, and naturally solved by making them
symmetric.


> newCTFE has the facility to be extended for this, and implementing
> type-functions is at least on my personal roadmap.

Awesome.


> At Dconf 2018 Andrei and Walter said, a DIP which is substantiated
> enough might make it.
> However due to lack of time, (and productivity-reducing internal
> changes) it will take some time until I can get started on this.
> 
> Also I plan for newCTFE to be in shape before I add type-manipulation
> abilities.

Yes, let's please get the Minimum Viable Product of newCTFE merged into
master first, before we expand the scope (and delay the schedule :-P)
yet again!


[...]
> P.S. There is one caveat: because of how type-functions work, you
> cannot create a non-anonymous symbol inside a type-function, because
> there is no way to infer a mangle.
>
> You can however create an anonymous symbol and alias it inside a
> template body, which gives it a mangle and it can behave like a
> regular symbol.

Interesting.  Is it possible to assign a "fake" mangle to type functions
that never actually gets emitted into the object code, but just enough
to make various internal compiler stuff that needs to know the mangle
work properly?


T

-- 
Why do conspiracy theories always come from the same people??


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-17 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 17, 2019 at 06:03:07PM +, Paul Backus via 
Digitalmars-d-announce wrote:
[...]
> [2]
> https://bartoszmilewski.com/2009/10/21/what-does-haskell-have-to-do-with-c/
[...]

Haha, seems D did better than C++ in this respect, but not quite at the
level of Haskell.

The C++ example of a template that takes templates and arguments and
declares another template is a perfect example of why C++ template
syntax is utterly horrible for doing these sorts of things.

Coming back to the D example at the end, I totally agree with the
sentiment that D templates, in spite of their significant improvements
over C++ syntax, ultimately still follow the same recursive model. Yes,
you can use CTFE to achieve the same thing at runtime, but it's not the
same thing, and CTFE cannot manipulate template argument lists (aka
AliasSeq aka whatever it is you call them).  This lack of symmetry
percolates down the entire template system, leading to the necessity of
the hack that Bartosz refers to.

Had template argument lists / AliasSeq been symmetric w.r.t. runtime
list manipulation, we would've been able to write a foreach loop that
manipulates the AliasSeq in the most readable way without needing to
resort to hacks or recursive templates.

//

Lately, as I've been pondering over these fundamental language design
issues, I've become more and more convinced that symmetry is the way to
go.  And by symmetry, I mean the mathematical sense of being "the same
under some given mapping (i.e., transformation or substitution)".

Why is C++ template syntax such a mess to work with?  Because it's a
separate set of syntax and idioms grafted onto the original core
language with little or no symmetry between them.  Where the core
language uses < and > as comparison operators, template syntax uses <
and > as delimiters. This asymmetry leads to all kinds of nastiness,
like earlier versions of C++ being unable to parse nested template
arguments such as `A<B<C>>` properly (the trailing `>>` gets wrongly
lexed as a right-shift operator). An intervening space is required to
work around this asymmetry.  This is just one trivial example.

A more fundamental example, which also afflicts D, is that the template
instantiation mechanism is inherently recursive rather than iterative,
so when you need to write a loop, you have to paraphrase it as a
recursive template. This is asymmetric with the runtime part of the
language, where constructs like `foreach` are readily available to
express the desired semantics.
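A small illustration of the contrast, computing the total size of a list
of types both ways (a simplified sketch; D's `static foreach` makes the
iterative form possible):

```d
// Recursive formulation: every "iteration" is a fresh template
// instantiation peeling one type off the front of the list.
template totalSize(Ts...)
{
    static if (Ts.length == 0)
        enum totalSize = 0;
    else
        enum totalSize = Ts[0].sizeof + totalSize!(Ts[1 .. $]);
}

// Iterative formulation: an ordinary-looking loop, run at compile time
// via a CTFE-evaluated function literal.
enum totalSizeLoop(Ts...) = () {
    size_t sum = 0;
    static foreach (T; Ts)
        sum += T.sizeof;
    return sum;
}();

static assert(totalSize!(int, short, double) == 14);
static assert(totalSizeLoop!(int, short, double) == 14);
```

Both compute the same thing, but the second reads like the runtime code
it mirrors, while the first forces the loop into recursion.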

On a different note, the same principle of symmetry applies to built-in
types vs. user-defined types. In TDPL Andrei alludes to programmers
disliking built-in types having "magic" behaviour that's different from
user-defined types.  Why the dislike? Because of asymmetry. Built-in
types have special behaviour that cannot be duplicated by user-defined
types, so when you want the special behaviour but built-in types don't
quite meet your needs, you find yourself without any recourse. It is
frustrating because the reasoning goes "if built-in type X can have
magic behaviour B, why can't user-defined type Y have behaviour B too?"
The desire for behaviour B to be possible both for built-in types and
user-defined types stems from the desire for symmetry.

Why is alias this so powerful?  Because it lets a new type Y behave as
if it were an existing type X -- it's symmetry.  Similarly, the Liskov
Substitution Principle is essentially a statement of symmetry in the
universe of OO polymorphism.

Why is the Unix "everything is a file" abstraction so useful? Because of
symmetry: whether it's a physical file, a network socket, or a pipe, it
exposes the same API. Code that works with the data doesn't have to care
what kind of object it is; it can simply use the API that is symmetric
across different types of objects.

Similarly, why are D ranges so powerful? Because they make containers,
data sources, data generators, etc., symmetric under the range API
operations.  It allows code to be decoupled from the details of the
concrete types, and focus directly on the problem domain.
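For instance, the same pipeline works unchanged on an in-memory array
and on a lazy generator, because both satisfy the range API:

```d
import std.algorithm : filter, sum;
import std.range : iota;

void main()
{
    int[] arr = [1, 2, 3, 4, 5, 6]; // a concrete container
    auto gen = iota(1, 7);          // a lazy generator, no storage at all

    // Identical code over both sources: the range API is the symmetry.
    assert(arr.filter!(x => x % 2 == 0).sum == 12);
    assert(gen.filter!(x => x % 2 == 0).sum == 12);
}
```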

Why does the GC simplify many programming tasks so much? Because it
makes every memory-allocated object symmetric w.r.t. memory management:
you stop worrying about whether something is stack-allocated or
heap-allocated, whether it has cycles, or whether somebody else still
holds a reference to it -- you focus on the problem domain and let the
GC do its job.

At a higher level: in the old days, programming languages used to
distinguish between functions and procedures (and maybe some languages
still do, but they seem rare these days). But eventually this
distinction was ditched in favor of things like returning `void` (C,
C++, Java, D), or some other equivalent construct. Why? So that instead
of having two similar but asymmetric units of code encapsulation,
everything is just a "function" (it just so happens some functions don't
return a meaningful value). IOW, introduce symmetry, get rid of the
asymmetry.


On the flip side, 

Re: My Meeting C++ Keynote video is now available

2019-01-17 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Jan 17, 2019 at 11:17:18AM +, Tony via Digitalmars-d-announce wrote:
> On Sunday, 13 January 2019 at 04:04:14 UTC, Walter Bright wrote:
> 
> > One major takeaway is that the bugs/line are the same regardless of
> > the language used. This means that languages that enable more
> > expression in fewer lines of code result in fewer bugs for the same
> > functionality.
> > 
> Is the data to support this conclusion freely available on the web
> somewhere?
> 
> My impression is that Python is considered the easiest language to
> use. If it has no more bugs per line than a statically typed program
> that seems to suggest that non-speed-critical work should be done in
> Python.

No, if the number of bugs is truly proportional to the number of lines,
then we should all ditch D and write APL instead.  :-P


T

-- 
Leather is waterproof.  Ever see a cow with an umbrella?


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-17 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 16, 2019 at 05:59:29PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
[...]
> Bartosz Milewski is a C++ programmer and a Haskell fan. He once gave a
> presentation at NWCPP where he wrote a few lines of Haskell code.
> Then, he showed the same code written using C++ template
> metaprogramming.
> 
> The Haskell bits in the C++ code were highlighted in red. It was like
> a sea of grass with a shrubbery here and there. Interestingly, by
> comparing the red dots in the C++ code with the Haskell code, you
> could understand what the C++ was doing. Without the red highlighting,
> it was a hopeless wall of < > :-)
[...]

I don't know Haskell, but I've worked with Scheme (another Lisp dialect
/ derivative) a little, and sometimes I feel like the core of my logic
is little bits of shrubbery lost in an ocean of parentheses. :-P


T

-- 
I don't trust computers, I've spent too long programming to think that they can 
get anything right. -- James Miller


Re: B Revzin - if const expr isn't broken (was Re: My Meeting C++ Keynote video is now available)

2019-01-16 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 16, 2019 at 11:43:19PM +, John Carter via 
Digitalmars-d-announce wrote:
[...]
> Given that I have probably written a lot more C++ code in my life than
> d...
> 
> ...I do find it remarkable that I can read the d code quite easily
> without reaching for the reference manual, but to make sense of his
> C++, it sends me trawling around cppreference.com

Yes, that's one of the outstanding qualities of D, and one that I was
immensely impressed with when I perused the Phobos source code for the
first time.  After having (tried to) read glibc's source code (if you
never have, I advise you not to unless you're a jaded, hardened,
hardcore C professional -- it's *not* for the faint of heart), it was
like a breath of fresh air.  D does have its warts and dark corners, but
I think on the readability front it has scored a home run compared to
the equivalent C/C++ code.


> I find Andrei's claim that checkint with a void hook reverts to int is
> amazing, and would love to verify that at the assembly level for both
> the C++ and d implementations.

This is actually quite trivial in D.  I'm too lazy to actually check the
checkedint source code, but I'd surmise it's something as simple as:

template CheckedInt(alias hook) {
    static if (is(hook == void))      // void hook: revert to a plain int
        alias CheckedInt = int;
    else {
        struct CheckedInt {
            ... // actual CheckedInt implementation here
        }
    }
}

or something along these lines.  Standard D practice.  (I daren't even
try to imagine what I'd have to do to make this work in C++. After
having worked with C++ for about 2 decades or so, I don't have many good
things to say about it, nor do I expect very much from it anymore.)


T

-- 
Windows 95 was a joke, and Windows 98 was the punchline.


Re: My Meeting C++ Keynote video is now available

2019-01-14 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 14, 2019 at 08:38:39PM +, Guillaume Piolat via 
Digitalmars-d-announce wrote:
[...]
> Other people often lack interest because of real or perceived template
> bloat, and it's critical.
> 
> - I think it's important to emphasize CTFE over template
> instantiations because (per Stefan's measurements) template
> instantiations are a lot slower and CTFE is already surprisingly
> faster than template meta-programming, and on the road to become even
> faster with the superbly needed newCTFE.

I think another angle of attack that AFAICT has been mostly overlooked
is for the compiler to recognize certain common template patterns, and
optimize away intermediate template instantiations that are not actually
necessary.  Not every template instantiation requires wholesale copying
of the AST.  I surmise certain patterns of templates could be profitably
turned into some kind of compile-time executed code.


T

-- 
Without effort, you can't even pull a fish out of the pond. (Russian proverb)


Re: My Meeting C++ Keynote video is now available

2019-01-14 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Jan 14, 2019 at 03:57:36PM +, Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Monday, 14 January 2019 at 14:56:00 UTC, bachmeier wrote:
> > Only a small sliver of programming involves anything where "overhead
> > of a runtime" is an issue. I hope you intend this comment as
> > pertaining to Better C usage.
> 
> Real D is the true better C. These improvements can improve in various
> situations.
> 
> That said though, I'd be against removing built-in classes and
> interfaces.  They are useful in a lot of places built in...

Yeah, much as I'm a big promoter of struct-based range-based template
style D code, classes and interfaces do still have their place.  When
you need runtime dynamic polymorphism, it just makes sense to use
classes and interfaces instead of trying to bandage your way around
structs and CT introspection.  I'm still searching for a theoretical
model that would bridge the gap between the two and make one unified
model, but for now, they each still have their place.


> though I kinda wish the runtime code was a bit thinner and lighter
> too.

Yeah, the whole thing about the monitor field IMO is an unnecessary
burden on a use case that isn't always needed.  If synchronized classes
or whatever needs it, then it should be an ABI specific to synchronized
classes.  Everybody else shouldn't need to pay tax on it if they never
actually need to use it.


T

-- 
Shin: (n.) A device for finding furniture in the dark.


Re: The New Fundraising Campaign

2019-01-07 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Jan 02, 2019 at 02:49:19PM +, Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Wednesday, 2 January 2019 at 11:11:31 UTC, Stefan Koch wrote:
> > On Wednesday, 2 January 2019 at 10:16:11 UTC, Martin Tschierschke wrote:
> > > 
> > > I would love to have a campaign to increase compilation speed for
> > > std.regex and std.format...
> > 
> > You could defer the generation of utf-tables to runtime, which
> > should yield some improvement. But I'll measure the reasons for
> > slowness again and post em.
> 
> We should just generate them in a helper program in the Phobos
> makefile.
> 
> Yeah, it is kinda embarrassing that we are using a C technique instead
> of D CTFE. But whatever, it is less embarrassing than these awful
> compile times in user code.

I don't perceive it as embarrassing at all. In my recent projects I've
resorted quite often to helper D programs that generate D code from
external input. It *could* be done via string imports, CTFE, and string
mixins, but that approach (1) makes compilation dog-slow, (2) means the
generated code exists only transiently inside the compiler, which (3)
makes it hard to debug (esp. if the codegen isn't your own code), and
(4) makes any compile errors necessarily obscure, because there isn't a
concrete file and line number to refer to; to get to the locus of the
problem, further effort is required to extract the generated code string
(after figuring out which string is the relevant one!) and then chase
down the line number.

Doing codegen as a separate step is so much better: (1) you get to see
the actual generated code, (2) learn how it works / self-correct by
studying how your (possibly incorrect) input / usage changes the code,
(3) have an actual file/line number that can be looked up at your
leisure, and (4) edit the generated code by hand if it really comes down
to that.

(Of course, this requires that you use a sane build system that doesn't
come with crippling operating assumptions or other arbitrary
restrictions that make this additional codegen step hard / unreliable /
impossible.)

None of this means that string mixins are no good... in fact I use them
quite a bit myself too.  But they are more suitable for small
code snippets to grease the main code, not for large scale, bulk codegen
from external data sources. I'd argue that std.uni tables really belong
to the latter category. In fact they *are* mostly generated statically,
but then they get wrapped inside templates, which arguably could be
avoided esp. since the compiler quickly becomes dog-slow with too many
templates.


T

-- 
Questions are the beginning of intelligence, but the fear of God is the 
beginning of wisdom.


Re: now it's possible! printing floating point numbers at compile-time

2018-12-30 Thread H. S. Teoh via Digitalmars-d-announce
On Sun, Dec 30, 2018 at 03:26:33PM +0200, ketmar via Digitalmars-d-announce 
wrote:
> Basile B. wrote:
> 
> > On Sunday, 30 December 2018 at 12:19:19 UTC, ketmar wrote:
> > > too bad that i didn't knew about Ryu back than.
> > 
> > It's very recent, announce on proggit is < 1 year.
> > 
> > It would be nice to have one to format in phobos. RYU or Grisu3
> > doesn't matter much as long as the two issues that are
> > 
> > - CTFE formatting of floats
> > - formatting is identical on all platforms
> actually, there is a 3rd issue, which is often overlooked: conversion
> from string to float. to get a perfect roundtrip, this one should be
> done right too.
[...]

Doesn't hex output format (e.g. std.format's %a and %A) already solve
this?  It basically outputs the exact bits in hex. No room for error
there.
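A quick illustration (the exact digits printed may vary with precision
flags, but the value is bit-exact):

```d
import std.stdio : writefln;

void main()
{
    // 1.5 has hex mantissa 0x1.8, exponent 0, so %a prints
    // something like 0x1.8p+0 -- an exact, lossless rendering.
    writefln("%a", 1.5);
}
```

Since every bit of the significand maps to a hex digit, reading the
string back reconstructs exactly the same double.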


T

-- 
It said to install Windows 2000 or better, so I installed Linux instead.


Re: Announcing Elembuf

2018-12-19 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Dec 19, 2018 at 11:56:44AM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 12/18/18 8:41 PM, H. S. Teoh wrote:
> > On Tue, Dec 18, 2018 at 01:56:18PM -0500, Steven Schveighoffer via 
> > Digitalmars-d-announce wrote:
[...]
> > > Although I haven't tested with network sockets, the circular
> > > buffer I implemented for iopipe
> > > (http://schveiguy.github.io/iopipe/iopipe/buffer/RingBuffer.html)
> > > didn't have any significant improvement over a buffer that moves
> > > the data still in the buffer.
> > [...]
> > 
> > Interesting. I wonder why that is. Perhaps with today's CPU cache
> > hierarchies and read prediction, a lot of the cost of moving the data is
> > amortized away.
> 
> I had expected *some* improvement, I even wrote a "grep-like" example
> that tries to keep a lot of data in the buffer such that moving the
> data will be an expensive copy. I got no measurable difference.
> 
> I would suspect due to that experience that any gains made in not
> copying would be dwarfed by the performance of network i/o vs. disk
> i/o.
[...]

Ahh, that makes sense.  Did you test async I/O?  Not that I expect any
difference there either if you're I/O-bound; but reducing CPU load in
that case frees it up for other tasks.  I don't know how easy it would
be to test this, but I'm curious about what results you might get if you
had a compute-intensive background task that you run while waiting for
async I/O, then measure how much of the computation went through while
running the grep-like part of the code with either the circular buffer
or the moving buffer when each async request comes back.

Though that seems like a rather contrived example, since normally you'd
just spawn a different thread and let the OS handle the async for you.


T

-- 
Once upon a time there lived a king, and with him there lived a flea.


Re: Blog post: What D got wrong

2018-12-18 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 18, 2018 at 06:53:02PM -0700, Jonathan M Davis via 
Digitalmars-d-announce wrote:
[...]
> I confess that I do tend to think about things from the standpoint of
> a library designer though, in part because I work on stuff like
> Phobos, but also because I tend to break up my programs into libraries
> as much as reasonably possible. In general, the more that's in a
> reusable, easily testable library the better. And with that approach,
> a lot less of the code for your programs is actually in the program
> itself, and the attributes tend to matter that much more.
[...]

My recent programming style has also become very library-like, often
with standalone library-style pieces of code budding off a messier,
experimental code in main() (and ultimately, if the project is
long-lasting, main() itself becomes stripped down to the bare
essentials, just a bunch of library components put together).  But I've
not felt a strong urge to deal with attributes in any detailed way;
mostly I just templatize everything and let the compiler do attribute
inference on my behalf. For the few cases where explicit attributes
matter, I still only use the bare minimum I can get away with, and
mostly just enforce template attributes using the unittest idiom rather
than bother with writing explicit attributes everywhere in the actual
code.


T

-- 
He who sacrifices functionality for ease of use, loses both and deserves 
neither. -- Slashdotter


Re: Announcing Elembuf

2018-12-18 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 18, 2018 at 01:56:18PM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 12/18/18 10:36 AM, H. S. Teoh wrote:
> > On Tue, Dec 18, 2018 at 08:00:48AM +, Cyroxin via 
> > Digitalmars-d-announce wrote:
> > > [...] While the focus of this library is in socket receival,
> > > reading from a file doesn't seem to be bad either.
> > [...]
> > 
> > Ahh, I see. I thought the intent was to read from a file locally. If
> > you're receiving data from a socket, having a circular buffer makes
> > a lot more sense.  Thanks for the clarification.  Of course, a
> > circular buffer works pretty well for reading local files too,
> > though I'd consider its primary intent would be better suited for
> > receiving data from the network.
> 
> Although I haven't tested with network sockets, the circular buffer I
> implemented for iopipe
> (http://schveiguy.github.io/iopipe/iopipe/buffer/RingBuffer.html)
> didn't have any significant improvement over a buffer that moves the
> data still in the buffer.
[...]

Interesting. I wonder why that is. Perhaps with today's CPU cache
hierarchies and read prediction, a lot of the cost of moving the data is
amortized away.


T

-- 
Look after your clothes while they're new, and your health while you're young. (Russian proverb)


Re: Blog post: What D got wrong

2018-12-18 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 18, 2018 at 08:17:28AM +, Russel Winder via 
Digitalmars-d-announce wrote:
> On Mon, 2018-12-17 at 12:16 -0800, Walter Bright via Digitalmars-d-announce
> wrote:
> > […]
> > 
> > Going pure, however, is much harder (at least for me) because I'm
> > not used to programming that way. Making a function pure often
> > requires reorganization of how a task is broken up into data
> > structures and functions.

Component-based programming helps a lot in this regard. It breaks the
problem down in a way that, when done correctly, captures the essence of
the algorithm in a way that's easily translated to pure code (esp. D's
expanded definition of purity).


[...]
> I can recommend a short period of working only with Haskell. And then
> a short period working only with Prolog. Experience with Java and
> Python people trying to get them to internalise the more declarative
> approach to software, shows that leaving their programming languages
> of choice behind for a while is important in improving their use of
> their languages of choice.
[...]

It's all about the mindset. Being forced to think about the problem from
a purely functional perspective gives you a radically different
perspective from the usual imperative paradigm, and IME often yields
insight into the essential structure of your programming problem that is
otherwise easily obscured by the imperative structure imposed upon it.


T

-- 
Klein bottle for rent ... inquire within. -- Stephen Mulraney


Re: Announcing Elembuf

2018-12-18 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 18, 2018 at 08:00:48AM +, Cyroxin via Digitalmars-d-announce 
wrote:
> [...] While the focus of this library is in socket receival, reading
> from a file doesn't seem to be bad either.
[...]

Ahh, I see. I thought the intent was to read from a file locally. If
you're receiving data from a socket, having a circular buffer makes a
lot more sense.  Thanks for the clarification.  Of course, a circular
buffer works pretty well for reading local files too, though I'd
consider its primary intent would be better suited for receiving data
from the network.


T

-- 
Doubtless it is a good thing to have an open mind, but a truly open mind should 
be open at both ends, like the food-pipe, with the capacity for excretion as 
well as absorption. -- Northrop Frye


Re: Announcing Elembuf

2018-12-17 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 18, 2018 at 01:13:32AM +, Cyroxin via Digitalmars-d-announce 
wrote:
[...]
> I would assume that there is much value in having a mapping that can
> be reused instead of having to remap files to the memory when a need
> arises to change source. While I cannot comment on the general
> efficiency between a mapped file and a circular buffer without
> benchmarks, this may be of use:
> https://en.wikipedia.org/wiki/Memory-mapped_file#Drawbacks

You have a good point that unmapping and remapping would be necessary
for large files in a 32-bit arch.


> An interesting fact I found out was that std.mmfile keeps a reference
> of the memory file handle, instead of relying on the system's handle
> closure after unmap. There seems to be quite a lot of globals, which
> is odd as Elembuf only has one.

I'm not sure I understand what you mean by "globals"; AFAICT MmFile just
has a bunch of member variables, most of which are only important on the
initial mapping and later unmapping.  Once you get a T[] out of MmFile,
there's little reason to use the MmFile object directly anymore until
you're done with the mapping.


> In std.mmfile OpSlice returns a void[] instead of a T[], making it
> difficult to work with as it requires a cast, there would also be a
> need to do costly conversions should "T.sizeof != void.sizeof" be
> true.

Are you sure? Casting void[] to T[] only needs to be done once, and the
only cost is recomputing .length. (Casting an array does *not* make a
copy of the elements or anything of that sort, btw.) Once you have a
T[], it's pointless to call Mmfile.opSlice again; just slice the T[]
directly.
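A small sketch of why the cast is cheap: only the slice's pointer and
length are reinterpreted, the elements themselves are untouched.

```d
void main()
{
    int[] original = [1, 2, 3, 4];
    void[] v = original;  // any array implicitly converts to void[]

    // One cast back: length is recomputed (16 bytes -> 4 ints),
    // but no element data is copied.
    int[] restored = cast(int[]) v;
    assert(restored.length == 4);
    assert(restored.ptr is original.ptr); // same memory, no copy
}
```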


> However, from purely a code perspective Elembuf attempts to have
> minimal runtime arguments and variables, with heavy reliance on
> compile time arguments. It also uses a newer system call for Linux
> (Glibc) that is currently not in druntime, the reason for this system
> call is that it allows for faster buffer construction. Read more about
> it here: https://dvdhrm.wordpress.com/2014/06/10/memfd_create2/

Hmm. Isn't that orthogonal to mmap(), though?  You could just map a
memfd descriptor using mmap() to achieve essentially equivalent
functionality.  Am I missing something obvious?


T

-- 
Which is worse: ignorance or apathy? Who knows? Who cares? -- Erich Schubert


Re: Announcing Elembuf

2018-12-17 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Dec 17, 2018 at 09:16:16PM +, Cyroxin via Digitalmars-d-announce 
wrote:
> Elembuf is a library that allows writing efficient parsers and
> readers. It looks as if it were just a regular T[], making it work
> well with libraries and easy to use with slicing. To avoid copying,
> the buffer can only be at maximum one page long.
[...]

What advantage does this have over using std.mmfile to mmap() the input
file into the process' address space, and just using it as an actual T[]
-- which the OS itself will manage the paging for, with basically no
extraneous copying except for what is strictly necessary to transfer it
to/from disk, and with no arbitrary restrictions?

(Or, if you don't like the fact that std.mmfile uses a class, calling
mmap() / the Windows equivalent directly, and taking a slice of the
result?)


T

-- 
My program has no bugs! Only undocumented features...


Re: Blog post: What D got wrong

2018-12-13 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Dec 13, 2018 at 10:29:10AM +, RazvanN via Digitalmars-d-announce 
wrote:
[...]
> D and Rust are competing to get the C/C++/Java/Python market share. In
> order to do that they should make it simple for developers to convert
> to the new language. Due to its design, Rust is insanely hard to
> master, which on the long run I think will kill the language despite
> of the advantages it offers.  On the other side, consider die hard C
> fans: they are willing to accept the possibility of a buffer overflow
> simply because they want more power. Do you honestly think that they
> will ever take D into account if @safe and immutable data will be the
> default?

Why not?  You can opt out. It's not as though you're forced to use
immutable everything and nothing but, like in a pure functional
language.  Just tack on @system or mutable when you need to.

Some people balk at the idea of `mutable` being sprinkled everywhere in
their code, but that's really just a minor syntactic issue. There's
already precedent for using `val` and `var` -- it couldn't get easier to
type than that. The syntax is not a real problem.


[...]
> > It would be if the change weren't accompanied by adding `impure` and
> > some sort of mutable auto. @system already exists. It's a question
> > of opting out (like with variable initialisation) instead of opting
> > in.
> 
> It still is, because the user is forced to work under certain
> conditions that some might not want.

No, there's always the option of opting out. There's no imposition. It's
not like Java where everything must be a class, no matter what. You can
write @system code or mutable variables to your heart's content.

The idea is to *default* to @safe so that when the programmer doesn't
really care either way, the default behaviour gives you memory safety.
Or default to immutable, so that unless the programmer consciously wants
to mutate state, he'll get the benefit of being warned about any
unintended mutation. Plus optimization benefits for variables that don't
need to be mutable.  But defaults are called defaults because they're
there to be overridden.
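Today's D can already approximate that policy per module: a leading `@safe:` makes everything below checked by default, and individual symbols opt out explicitly (a sketch):

```d
@safe:  // everything below is @safe unless it says otherwise

// Memory-safety checked by the compiler.
int sum(int[] xs)
{
    int total;
    foreach (x; xs) total += x;
    return total;
}

// Explicit, greppable opt-out for the rare low-level bit.
@system int* lowLevel(int* p)
{
    return p + 1;  // pointer arithmetic: only legal in @system code
}

void main()
{
    assert(sum([1, 2, 3]) == 6);
}
```

The point being that the default covers the common case, and the opt-out is a single attribute where you actually need it.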


T

-- 
LINUX = Lousy Interface for Nefarious Unix Xenophobes.


Re: Blog post: What D got wrong

2018-12-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Dec 12, 2018 at 02:10:31PM -0700, Jonathan M Davis via 
Digitalmars-d-announce wrote:
> On Wednesday, December 12, 2018 6:03:39 AM MST Kagamin via Digitalmars-d-
> announce wrote:
[...]
> > Imagine you have void delegate() prop() and use the property
> > without parentheses everywhere then suddenly m.prop() doesn't
> > call the delegate. So it's mostly for getters and should be used
> > only in edge cases, most code should be fine with optional parens.
> 
> Except that @property does not currently have any effect on this. The
> delegate case (or really, the case of callables in general) is one
> argument for keeping @property for using in that particular corner
> case, since without it, having property functions that return
> callables simply doesn't work, but @property has never been made to
> actually handle that case, so having property functions that return
> callables has never worked in D. It's certainly been discussed before,
> but the implementation has never been changed to make it work.

Yep. Basically, @property as currently implemented is useless, and I've
stopped bothering with it except where Phobos requires it.


> If/when we finally rework @property, that use case would be the number
> one reason to not simply get rid of @property, but until then, it
> doesn't actually fix that use case. As things stand, @property
> basically just serves as documentation of intent for the API and as a
> way to screw up type introspection by having the compiler lie about
> the type of the property.
[...]

Haha yeah, currently @property confers no real benefits and only comes
with bad (and probably unexpected) side-effects.  More confirmation that
it's a waste of time and not worth my attention.

If the delegate property thing is the only real use case for @property,
it seems quite out-of-proportion that an entire @-identifier in the
language is dedicated just for this purpose. One would've thought D
ought to be better designed than this...


T

-- 
Gone Chopin. Bach in a minuet.


Re: A brief survey of build tools, focused on D

2018-12-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Dec 12, 2018 at 02:52:09PM -0700, Jonathan M Davis via 
Digitalmars-d-announce wrote:
[...]
> I would think that to be fully flexible, dub would need to abstract
> things a bit more, maybe effectively using a plugin system for builds
> so that it's possible to have a dub project that uses dub for pulling
> in dependencies but which can use whatever build system works best for
> your project (with the current dub build system being the default).
> But of course, even if that is made to work well, it then introduces
> the problem of random dub projects then needing 3rd party build
> systems that you may or may not have (which is one of the things that
> dub's current build system mostly avoids).

And here is the crux of my rant about build systems (earlier in this
thread).  There is no *technical reason* why build systems should be
constricted in this way. Today's landscape of specific projects being
inextricably tied to a specific build system is completely the wrong
approach.

Projects should not be tied to a specific build system.  Instead,
whatever build tool the author uses to build the project should export a
universal description of how to build it, in a standard format that can
be imported by any other build system. This description should be a
fully general DAG, that specifies all inputs, all outputs (including
intermediate ones), and the actions required to get from input to
output.

Armed with this build description, any build system should be able to
import as a dependency any project built with any other build system,
and be able to successfully build said dependency without even knowing
what build system was originally used to build it or what build system
it is "intended" to be built with.  I should be able to import a Gradle
project, a dub project, and an SCons project as dependencies, and be
able to use make to build everything. And my downstream users ought to
be able to build my project with tup, or any other build tool they
choose, without needing to care that I used make to build my project.

Seriously, building a lousy software project is essentially traversing a
DAG of inputs and actions in topological order.  The algorithms have
been known since decades ago, if not longer, and there is absolutely no
valid reason why we cannot import arbitrary sub-DAGs and glue them to
the main DAG, and have everything work with no additional effort, regardless
of where said sub-DAGs came from.  It's just a bunch of nodes and
labelled edges, guys!  All the rest of the complications and build
system dependencies and walled gardens are extraneous and completely
unnecessary baggage imposed upon a straightforward DAG topological walk
that any CS grad could write in less than a day.  It's ridiculous.
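The core of such a build executor really is that small; a sketch of the post-order walk over a toy DAG:

```d
// A build is a topological walk of a DAG. Here the DAG is just
// "target -> direct inputs"; a depth-first post-order walk visits every
// input before the target that consumes it, which is exactly build order.
string[] buildOrder(string root, string[][string] deps)
{
    bool[string] visited;
    string[] order;

    void visit(string node)
    {
        if (node in visited) return;
        visited[node] = true;
        foreach (input; deps.get(node, null))
            visit(input);
        order ~= node;          // the node's action would run here
    }

    visit(root);
    return order;
}

void main()
{
    auto deps = [
        "app":    ["main.o", "util.o"],
        "main.o": ["main.d"],
        "util.o": ["util.d"],
    ];
    // Grafting in a sub-DAG exported by some other tool is just merging
    // its nodes and edges into this associative array.
    assert(buildOrder("app", deps) ==
           ["main.d", "main.o", "util.d", "util.o", "app"]);
}
```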


> On some level, dub is able to do as well as it does precisely because
> it's able to assume a bunch of stuff about D projects which is true
> the vast majority of the time, and the more it allows projects that
> don't work that way, the worse dub is going to work as a general tool,
> because it increasingly opens up problems with regards to whether you
> have the right tools or environment to build a particular project when
> using it as a dependency. However, if we don't figure out how to make
> it more flexible, then certain classes of projects really aren't going
> to work well with dub.  That's less of a problem if the project is not
> for a library (and thus does not need to be a dub package so that
> other packages can pull it in as a dependency) and if dub provides a
> good way to just make libraries available as dependencies rather than
> requiring that the ultimate target be built with dub, but even then, it
> doesn't solve the problem when the target _is_ a library (e.g. what if
> it were for wrapping a C or C++ library and needed to do a bunch of
> extra code steps for code generation and needed multiple build steps).

Well exactly, again, the monolithic approach to building software is the
wrong approach, and leads to arbitrary and needless limitations of this
sort.  DAG generation should be decoupled from build execution.  You can
use whatever tool or fancy algorithm you want to generate the lousy DAG,
but once generated, all you have to do is to export it in a standard
format, then any arbitrary number of build executors can read the
description and run it.

Again I say: projects should not be bound to this or that build system.
Instead, they should export a universal build description in a standard
format.  Whoever wants to depend on said projects can simply import the
build description and it will Just Work(tm). The build executor will
know exactly how to build the dependency independently of whatever fancy
tooling the upstream author may have used to generate the DAG.


> So, I don't know. Ultimately, what this seems to come down to is that
> all of the stuff that dub does to make things simple for the common
> case make it terrible for complex cases, but making it work well for
> 

Re: A brief survey of build tools, focused on D

2018-12-12 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Dec 12, 2018 at 10:38:55AM +0100, Sönke Ludwig via 
Digitalmars-d-announce wrote:
> Am 11.12.2018 um 20:46 schrieb H. S. Teoh:
> > [...]
> > Wait, what does --parallel do if it doesn't compile multiple files
> > at once?
> 
> It currently only works when building with `--build-mode=singleFile`,
> so compiling individual files in parallel instead of compiling chunks
> of files in parallel, which would be the ideal.

Ah, I see.  But that should be relatively easy to fix, right?


[...]
> There are the three directives sourcePaths, sourceFiles and
> excludedSourceFiles (the latter two supporting wildcard expressions)
> to control the list of files. Once an explicit sourcePaths directive
> is given, the folder that is possibly detected by default
> ("source"/"src") is also skipped. They are documented in the package
> format specs ([1], [2]).

Thanks for the info.


> > Also, you refer to "the output binary". Does that mean I cannot
> > generate multiple executables? 'cos that's a showstopper for me.
> 
> Compiling multiple executables currently either requires multiple
> invocations (e.g. with different configurations or sub packages
> specified), or a targetType "none" package that has one dependency per
> executable - the same configuration/architecture applies to all of
> them in that case. If they are actually build dependencies, a possible
> approach is to invoke dub recursively inside of a preBuildCommand.

Unfortunately, that is not a practical solution for me.  Many of my
projects have source files that are generated by utilities that are
themselves D code that needs to be compiled (and run) as part of the
build.  I suppose in theory I could separate them into subpackages, and
factor out the common code shared between these utilities and the main
executable(s), but that is far too much work for something that IMO
ought to be very simple -- since most of the utilities are single-file
drivers with a small number of imports of some shared modules. Creating
entire subpackages for each of them just seems excessive, esp. during
development where the set of utilities / generated files may change a
lot.  Creating/deleting a subpackage every time is just too much work
for little benefit.

Also, does dub correctly support the case where some .d files are
generated by said utilities (which would be dub subpackages, if we
hypothetically went with that setup), but the output may change
depending on the contents of some input data/config files? I.e., if I
change a data file and run dub again, it ought to re-run the codegen
tool and then recompile the main executable that contains the changed
code.  This is a pretty important use-case for me, since it's kinda the
whole point of having a codegen tool.

Compiling the same set of sources for multiple archs (with each arch
possibly entailing a separate list of source files) is kinda a special
case for my current Android project; generally I don't really need
support for this. But solid support for codegen that properly percolates
changes from input data down to recompiling executables is must-have for
me.  Not being able to do this in the most efficient way possible would
greatly hamper my productivity.


> But what I meant is that there is for example currently no way to
> customize the output binary base name ("targetName") and directory
> ("targetPath") depending on the build type.

But this shouldn't be difficult to support, right?  Though I don't
particularly need this feature -- for the time being.


[...]
> > Does dub support the following scenario?
[...]
> This will currently realistically require invoking an external tool
> such as make through a pre/post-build command (although it may
> actually be possible to hack this together using sub packages, build
> commands, and string import paths for the file dependencies). Most
> notably, there is a directive missing to specify arbitrary files as
> build dependencies.

I see.  I think this is a basic limitation of dub's design -- it assumes
a certain (common) compilation model of sources to (single) executable,
and everything else is only expressible in terms of larger abstractions
like subpackages.  It doesn't really match the way I work, which I guess
explains my continuing frustration with using it.  I think of my build
processes as a general graph of arbitrary input files being converted by
arbitrary operations (not just compilation) into arbitrary output files.
When I'm unable to express this in a simple way in my build spec, or
when I'm forced to use tedious workarounds to express what in my mind
ought to be something very simple, it distracts me from my focusing on
my problem domain, and results in a lot of lost time/energy and
frustration.


[...]
> BTW, my plan for the Android part of this was to add support for
> plugins (fetchable from the registry, see [3] for a draft) that handle
> the details in a centralized manner instead of having to put that
> knowledge into the build recipe of each 

Re: A brief survey of build tools, focused on D

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 11:26:45AM +0100, Sönke Ludwig via 
Digitalmars-d-announce wrote:
[...]
> The upgrade check has been disabled in one of the latest releases, so
> unless the dependencies haven't been resolved before, it will not
> access the network anymore. A notable exception are single-file
> packages, which don't have a dub.selections.json - we should probably
> do something about this, too, at some point.
> 
> I've also rewritten the dependency resolution a while ago and it
> usually is not noticeable anymore nowadays.
> 
> Then there was an issue where LDC was invoked far too frequently to
> determine whether it outputs COFF files or not, making it look like
> scanning the file system for changes took unacceptably long. This has
> also been fixed.

This is very encouraging to hear.  Thanks!


> The main open point right now AFAICS is to make --parallel work with
> the multiple-files-at-once build modes for machines that have enough
> RAM. This is rather simple, but someone has to do it. But apart from
> that, I think that the current state is relatively fine from a
> performance point of view.

Wait, what does --parallel do if it doesn't compile multiple files at
once?


> > Then it requires a specific source layout, with incomplete /
> > non-existent configuration options for alternatives.  Which makes it
> > unusable for existing code bases.  Unacceptable.
> 
> You can define arbitrary import/source directories and list (or
> delist) source files individually if you want. There are restrictions
> on the naming of the output binary, though, is that what you mean?

Is this documented? I couldn't find any info on it the last time I
looked.

Also, you refer to "the output binary". Does that mean I cannot
generate multiple executables? 'cos that's a showstopper for me.


> > Worst of all, it does not support custom build actions, which is a
> > requirement for many of my projects.  It does not support polyglot
> > projects. It either does not support explicit control over exact
> > build commands, or any such support is so poorly documented it might
> > as well not exist.  This is not only unacceptable, it is a
> > show-stopper.
> 
> Do you mean modifying the compiler invocations that DUB generates or
> adding custom commands (aka pre/post build/generate commands)?

Does dub support the following scenario?

- There's a bunch of .java files that have to be compiled with javac.
   - But some of the .java files are generated by an external tool, that
     must be run first, before the .java files are compiled.
- There's a bunch of .d files in two directories.
   - The second directory contains .d files that need to be compiled
     into multiple executables, and they must be compiled with a local
     (i.e., non-cross) compiler.
   - Some of the resulting executables must be run first in order to
     generate a few .d files in the first directory (in addition to
     what's already there).
   - After the .d files are generated, the first directory needs to be
     compiled TWICE: once with a cross-compiler (LDC, targeting
     Arm/Android), once with the local D compiler. The first compilation
     must link with cross-compilation Android runtime libraries, and the
     second compilation must link with local X11 libraries.
     - (And obviously, the build products must be put in separate
       subdirectories to prevent stomping over each other.)
- After the .java and .d files are compiled, a series of tools must be
  invoked to generate an .apk file, which also includes a bunch of
  non-code files in resource subdirectories.  Then, another tool must be
  run to align and sign the .apk file.

And here's a critical requirement: any time a file is changed (it can be
a .java file, a .d file, or one of the resources that they depend on),
all affected build products must be correctly updated. This must be done
as efficiently as possible, because it's part of my code-compile-test
cycle, and if it requires more than a few seconds or recompiling the
entire codebase, it's a no-go.

If dub can handle this, then I'm suitably impressed, and retract most of
my criticisms against it. ;-)


T

-- 
Study gravitation, it's a field with a lot of potential.


Re: A brief survey of build tools, focused on D

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 01:56:24PM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
[...]
> 1. When unittests are enabled, -allinst is enabled as well.
> 2. This means that all templates instantiated are included as if they
> were part of the local module.
> 3. This means that they are semantically analyzed, and if they import
> anything, all those imports are processed as well
> 4. Recurse on step 2.
> 
> Note that the reason allinst is used is because sometimes templates
> compile differently when unittests are enabled. In other words, you
> might for instance get a different struct layout for when unittests
> are enabled -- this prevents that (but only for templates of course).
> 
> The ultimate reason why the PR (which removed the -allinst flag for
> unittests) was failing was because of differences in compiler flags
> for different modules during unittests in Phobos. This caused symbol
> name mangling changes (IIRC, mostly surrounding dip1000 problems).
> 
> I really wish we could have followed through on that PR...
[...]

Argh.  Another badly needed fix stuck in PR limbo. :-( :-( :-(  Some
days, things like these really make me wish D3 was a thing.

Is there some way of recording this info somewhere, probably bugzilla I
guess, so that it will get addressed at *some point*, rather than
forgotten forever?  I was hoping this issue would be addressed within
the next few releases, but hope seems slim now. :-(


T

-- 
"The number you have dialed is imaginary. Please rotate your phone 90 degrees 
and try again."


Re: Blog post: What D got wrong

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 03:34:28PM +, Simen Kjærås via 
Digitalmars-d-announce wrote:
[...]
> I believe a reasonable case can be made for .! for UFCS - it's
> currently invalid syntax and will not compile, and ! is the symbol we
> already associate with template instantiation:
> 
> alias memberFunctions = __traits(allMembers, T)
> .!staticMap!Member
> .!Filter!(isSomeFunction);
[...]

+1.


T

-- 
Too many people have open minds but closed eyes.


Re: Blog post: What D got wrong

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 03:03:19PM +0100, Daniel Kozak via 
Digitalmars-d-announce wrote:
>On Tue, Dec 11, 2018 at 11:50 AM Atila Neves via Digitalmars-d-announce
><[1]digitalmars-d-announce@puremagic.com> wrote:
> 
>  A few things that have annoyed me about writing D lately:
> 
>  [2]https://atilanevesoncode.wordpress.com/2018/12/11/what-d-got-wrong/
> 
>Eponymous templates - workaround
>[3]https://run.dlang.io/is/qIvcVH
[...]

Clever!

Perhaps this should be proposed as the lowering in a DIP to improve
eponymous templates.


T

-- 
Тише едешь, дальше будешь.


Re: Blog post: What D got wrong

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 12:57:03PM +, Atila Neves via 
Digitalmars-d-announce wrote:
> On Tuesday, 11 December 2018 at 12:52:20 UTC, Adam D. Ruppe wrote:
> > On Tuesday, 11 December 2018 at 10:45:39 UTC, Atila Neves wrote:
> > > A few things that have annoyed me about writing D lately:
> > > 
> > > https://atilanevesoncode.wordpress.com/2018/12/11/what-d-got-wrong/
> > 
> > If @property worked for a thing to return a delegate, it would be
> > useful.
> > 
> > But no, we got worked up over syntax and forgot about semantics
> > :(
> 
> @property is useful for setters. Now, IMHO setters are a code stink
> anyway but sometimes they're the way to go. I have no idea what it's
> supposed to do for getters (nor am I interested in learning or
> retaining that information) and never slap the attribute on.

You don't need @property for setters. This works:

struct S {
int _x;
void func(int x) { _x = x; }
}
S s;
s.func = 1;	// equivalent to s.func(1)

Of course, it's generally not a good idea to call it `func` when the
intent is to emulate a member variable. :-D

I agree setters are a code stink, but only when they are trivial:

struct S {
private int _x;

// This is a code smell: just make _x public, dammit!
void x(int val) { _x = val; }
}

But they can be very useful for non-trivial use cases.  Recently I wrote
code that auto-generates a nice D API for setting GLSL inputs. So
instead of writing:

FVec position;
glUniform3fv(myshader.u_dirLightId_pos, 1, position[].ptr);

I write:

FVec position;
myshader.position = position; // much more readable and less error 
prone!

with myshader.position defined as a setter function that does that ugly
glUniform3fv call for me. Plus, I can hide away that ugly internal
detail of attribute position IDs and make it private, instead of
exposing it to the world and adding a needless GL dependency to client
code. (E.g., now I have the possibility of substituing a Direct3D
backend for the OpenGL just by emitting a different implementation for
myshader.position. The calling code doesn't need to be touched.)
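A self-contained sketch of the shape of such a generated setter (glUniform3fv is stubbed out here so the example runs anywhere, and all names are illustrative, not the actual generated code):

```d
float[3] uploaded;  // stand-in for GPU state, set by the stub below

// Stub with the same shape as the real glUniform3fv call.
void glUniform3fv(int location, int count, const(float)* value)
{
    uploaded[] = value[0 .. 3];
}

struct Shader
{
    private int u_dirLightId_pos;  // uniform location, hidden from clients

    // The generated setter: callers just write `shader.position = v;`
    void position(float[3] v)
    {
        glUniform3fv(u_dirLightId_pos, 1, v.ptr);
    }
}

void main()
{
    Shader myshader;
    myshader.position = [1f, 2f, 3f];   // reads like a member assignment
    assert(uploaded[] == [1f, 2f, 3f]);
}
```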


T

-- 
If you think you are too small to make a difference, try sleeping in a closed 
room with a mosquito. -- Jan van Steenbergen


Re: Blog post: What D got wrong

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 10:45:39AM +, Atila Neves via 
Digitalmars-d-announce wrote:
> A few things that have annoyed me about writing D lately:
> 
> https://atilanevesoncode.wordpress.com/2018/12/11/what-d-got-wrong/

About UFCS chains for templates: totally agree!  I found myself wishing
for exactly that, many times.  I'd even venture to say we should cook up
a DIP for it. To prevent confusion and potential ambiguity with
non-template UFCS chains, I propose using a separate operator, perhaps
`.!`:

alias blah = AliasSeq!(...)
.!firstTemplate!(...)
.!secondTemplate!(...)
...;

Template lambdas have also been an annoyance for me.  But again, there's
a need to distinguish between a template lambda and a non-template
lambda.

And yes, I've also run into the eponymous template renaming problem. But
I think it will be a pretty small and safe change to use `this` instead
of repeating the template name?  And while we're at it, might as well
use `this` for recursive templates too.  So we'd have something like:

template Eponymous(T...) {
static if (T.length == 0)
enum this = 1;
else
enum this = 1 + 2*this!(T[1 .. $]);
}

Now we can freely rename Eponymous without also having to rename the
occurrences of `this`, which in current syntax would also have to be
spelt out as `Eponymous`.

Though we probably have to write it as `This` instead, in order to
prevent ambiguity when working with class templates.
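For comparison, the workaround that exists in today's D (along the lines of the run.dlang.io link elsewhere in this thread) confines the repeated name to a single alias:

```d
// Current-D workaround: keep the scaffolding under a private name and
// repeat the template's name only in one final alias, so renaming the
// template touches just two lines.
template Length(T...)
{
    enum impl = T.length;   // arbitrary helper members can live here
    alias Length = impl;    // the single eponymous line
}

static assert(Length!(int, long, char) == 3);
```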

@property getters... I've pretty much given up on @property by this
point, except for the few places (primarily isInputRange and its ilk)
where @property is explicitly tested for. Optional parens and the
equivalence of `func(1)` vs. `func = 1` have made the distinction
between @property and non-@property more-or-less irrelevant except for
language lawyers and corner cases that nobody uses. Dmd's -property flag
is a flop that nobody uses anymore.  There have been a few efforts in
the past for reviving @property, but nothing has come of it, and in the
recent years nobody has even bothered to talk about it anymore.  So,
tl;dr: @property is moribund, if not completely dead.  As far as I'm
concerned, it belongs only in the history books of D now.

inout... it's still useful for reducing template bloat in certain cases.
But yeah, it has some not-very-pretty corner cases that I don't really
want to talk about right now.  But for the most part, the niche cases
for which it's intended still work pretty well. It can be a life-saver
when you try to be (slightly) const-correct in your code.  Of course,
const is a giant bear to work with -- it's an all-or-nothing deal that
can require refactoring your *entire* codebase -- and generally I don't
bother with it except for leaf modules that don't affect too much else.
Trying to be const-correct in your core application logic can quickly
turn into a nightmare -- and inout is also implicated in such cases.

And yeah, ref as a storage class rather than part of the type is a
strange concept that seems incongruous with much of the rest of the
language. Its asymmetry with type qualifiers makes it hard to reason
about (you have to shift mental gears when parsing it, which hampers
easy understanding of code).  I generally avoid it except in quick-hack
cases, e.g., to make opIndex work with assignment without actually
writing a separate opIndexAssign, or to grant by-reference semantics to
struct parameters (but in the latter case I've often found it better to
just change the struct to a class instead).  So it's a *necessary* part
of the language, but it feels a lot like a square peg jammed in a round
hole sometimes.  If I were to redesign ref, I'd do it a lot differently.

As for attribute soup... I've mostly given up on writing attributes. I
just stick () in front of every function parameter list to turn them
into templates, and let the compiler do auto-inference for me. The only
time I'd spell out attributes is in unittests, or in the rare case where
I want to ensure a certain attribute is in effect.  But seriously, in
the grand scheme of things, attributes are an added annoyance that
nobody wants to deal with (and do so only grudgingly when it cannot be
helped).  Attributes need to be auto-inferred everywhere. Nothing else
is scalable.  Of course, I realize that it's a bit too late to have auto
inference in non-template functions, but I fully applaud Walter's move
to apply inference to auto functions.  The wider the scope of auto
inference, the less attribute soup needs to be visible in your code, and
the better it will be. In an ideal world, attributes would be completely
invisible, and completely inferred and propagated by the compiler via
static analysis. (Yes I know this doesn't work with separate
compilation. But in theory, it *should*. The compiler should just store
attributes in a special section in the object file and load 

Re: A brief survey of build tools, focused on D

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 09:54:06AM +, Atila Neves via 
Digitalmars-d-announce wrote:
[...]
> No reggae? https://github.com/atilaneves/reggae/

I recently finally sat down and took a look at Button, posted here a few
years ago.  It looked pretty good.  One of these days I really need to
sit down and take a good look at reggae.


> dub is simple and has dependency management, and that's about it.
> Speed?  It's as slow as molasses and hits the network every time
> unless explicitly told not to. Never mind if there's already a
> dub.selections.json file and all of the dependencies are fetched
> (which it could check but doesn't).

According to Sönke's post elsewhere in this thread, these performance
issues have been addressed in the latest version.  I haven't tried it
out to verify that yet, though.


> Trying to do anything non-trivial in dub is a exercise in frustration.
> The problem is that it's the de facto D package manager, so as soon as
> you have dependencies you need dub whether you want to or not.

After fighting with dub for 2 days (or was it a week? it certainly felt
longer :-P) in my vibe.d project, I ended up just creating an empty
dummy project in a subdirectory that declares a dependency on vibe.d,
and run dub separately to fetch and build vibe.d, then I ignore the rest
of the dummy project and go back to the real project root and have SCons
build the real executable for me.  So far, that has worked reasonably
well, besides the occasional annoyance of having to re-run dub to update
to the latest vibe.d packages.


> dub works great if you're writing an executable with some dependencies
> and hardly any other needs. After that...

Yeah.  Being unable to handle generated source files is a showstopper
for many of my projects.  As Neia said, while D has some very nice
compile-time codegen features, sometimes you really just need to write
an external utility that generates source code.

For example, one of my current projects involves parsing GLSL source
files and generating D wrapper code as syntactic sugar for calls to
glUniform* and glAttrib* (so that I can just say `myshader.color =
Vector(1, 2, 3);` instead of manually calling glUniform* with fiddly,
error-prone byte offsets.  While in theory I could use string imports
and CTFE to do this, it's far less hairy to do this as an external step.

Most build systems with automatic dependency extraction would fail when
given this sort of setup, because they generally depend on scanning
directory contents, but in this case the file may not have been
generated yet (it would not be generated until the D code of the tool
that generates it is first compiled, then run). So the dependency would
be missed, resulting either in intermittent build failure or failure to
recompile dependents when the generated code changes.  It's not so
simple to just do codegen as a special preprocessing step -- such tasks
need to be treated as 1st class dependency tasks and handled natively as
part of DAG resolution, not as something tacked on as an afterthought.


T

-- 
Music critic: "That's an imitation fugue!"


Re: A brief survey of build tools, focused on D

2018-12-11 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Dec 11, 2018 at 09:58:39AM +, Atila Neves via 
Digitalmars-d-announce wrote:
> On Monday, 10 December 2018 at 22:18:28 UTC, Neia Neutuladh wrote:
[...]
> In typical D code, it's usually faster to compile per package than
> either all-at-once or per module. Which is why it's the default in
> reggae.

Yeah, for projects past a certain size, compiling per package makes the
most sense.


[...]
> > From discussions on IRC about reducing compile times, though, using
> > Phobos is a good way to get slow compilation, and I use Phobos. That
> > alone means incremental builds are likely to go long.
> 
> Yes. Especially with -unittest.

We've talked about this before.  Jonathan Marler actually ran a test and
discovered that it wasn't something *directly* to do with unittests; the
performance hit was coming from some unexpected interactions with the
way the compiler instantiates templates when -unittest is enabled.  I
don't remember what the conclusion was, though.

Either way, the unittest problem needs to be addressed.  I've been
running into problems with compiling my code with -unittest, because it
causes ALL unittests of ALL packages to be compiled, including Phobos
and external libraries.  It's making it very hard to manage exactly what
is unittested -- I want to unittest my *own* code, not any 3rd party
libraries or Phobos, but right now, there's no way to control that.

Recently I ran into a roadblock with -unittest: I have a project with
rather extensive unittests, but it assumes certain things about the
current working directory and the current environment (because those
unittests are run from a special unittest driver). I have that project
as a git submodule in a different project for experimental purposes, but
now I can't compile with -unittest because the former project's
unittests will fail, not being run in the expected environment. :-(

There needs to be a more fine-grained way of controlling which unittests
get compiled.  Generally, I don't see why I should care about unittests
for external dependencies (including Phobos) when what I really want is
to test the *current* project's code.
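
One partial workaround -- a common D idiom, sketched here with an
invented version identifier -- is for a library to guard its own
unittests behind a version condition, so that a downstream build using
plain -unittest does not also compile the library's tests:

	// Compiled only with: dmd -unittest -version=WidgetLibTests ...
	// (WidgetLibTests is a made-up identifier for this sketch.)
	version (WidgetLibTests)
	unittest
	{
	    assert(2 + 2 == 4);    // this library's internal test
	}

Of course, this only helps when the dependency's author has adopted the
convention; it gives the *consumer* no way to opt out.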


T

-- 
The two rules of success: 1. Don't tell everything you know. -- YHL


Re: A brief survey of build tools, focused on D

2018-12-10 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Dec 10, 2018 at 06:27:48PM +, Neia Neutuladh via 
Digitalmars-d-announce wrote:
> I wrote a post about language-agnostic (or, more accurately, cross-
> language) build tools, primarily using D as an example and Dub as a
> benchmark.
> 
> Spoiler: dub wins in speed, simplicity, dependency management, and
> actually working without modifying the tool's source code.
> 
> https://blog.ikeran.org/?p=339

Wow.  Thanks for the writeup that convinces me that I don't need to
waste time looking at Meson/Ninja.

I find the current landscape of build systems pretty dismal. Dub may be
simple to use, but speed, seriously?! If *that's* the generally accepted
standard of build speed out there these days, then hope is slim.

Convenience and simplicity, sure.  But speed? I'm sorry to say, I tried
dub for 2 days and gave up in frustration because it was making my
builds *several times longer* than a custom SCons script.  I find that
completely unacceptable.

It also requires network access.  On *every* invocation, unless
explicitly turned off.  And even then, it performs time-consuming
dependency resolutions on every invocation, which doubles or triples
incremental build times.  Again, unacceptable.

Then it requires a specific source layout, with incomplete /
non-existent configuration options for alternatives.  Which makes it
unusable for existing code bases.  Unacceptable.

Worst of all, it does not support custom build actions, which is a
requirement for many of my projects.  It does not support polyglot
projects. It either does not support explicit control over exact build
commands, or any such support is so poorly documented it might as well
not exist.  This is not only unacceptable, it is a show-stopper.

This leaves package management as the only thing about dub that I could
even remotely recommend (and even that is too unconfigurable for my
tastes -- basically, it's "my way or the highway" -- but I'll give it
credit for at least being *usable*, if not very pleasant).  But given
these limitations, many of my projects
*cannot* ever be dub projects, because they require multiple language
support and/or code generation rules that are not expressible as a dub
build.  Which means the package management feature is mostly useless as
far as my projects are concerned -- if I ever have a dependency that
requires code generation and/or multiple languages, dub is out of the
question.  So I'm back to square one as far as dependency management and
build system are concerned.

This dismal state of affairs means that if my code ever depends on a dub
package (I do have a vibe.d project that does), I have to use dub as a
secondary tool -- and even here dub is so inflexible that I could not
coax it into working nicely with the rest of my build system.  In my vibe.d
project I had to resort to creating a dummy empty project in a
subdirectory, whose sole purpose is to declare dependency on vibe.d so
that I can run dub to download and build vibe.d (and generate a dummy
executable that does nothing). Then I have to manually link in the
vibe.d build products in my real build system as a separate step.

//

Taking a step back, this state of affairs is completely ridiculous. The
various build systems out there are gratuitously incompatible with each
other, and having dependencies that cross build system boundaries is
completely unthinkable, even though at its core, it's exactly the same
miserable old directed acyclic graph, solved by the same old standard
graph algorithms.  Why shouldn't we be able to integrate subgraphs of
different origins into a single, unified dependency graph, with standard
solutions by standard graph algorithms?  Why should build systems be
effectively walled gardens, with artificial barriers that prevent you
from importing a Gradle dependency into a dub project, and importing
*that* into an SCons project, for example?
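
To underline the point, here is the textbook algorithm itself -- Kahn's
topological sort, sketched in D with invented names -- which is, at
bottom, what every one of these tools computes:

	// deps[target] = prerequisites that must be built before target.
	// Returns a valid build order; a result shorter than the number
	// of nodes indicates a dependency cycle.
	string[] buildOrder(string[][string] deps)
	{
	    size_t[string] remaining;     // unbuilt prerequisites per node
	    string[][string] dependents;  // prerequisite -> its dependents
	    foreach (target, prereqs; deps)
	    {
	        remaining[target] = prereqs.length;
	        foreach (p; prereqs)
	        {
	            dependents[p] ~= target;
	            if (p !in deps)
	                remaining[p] = 0;   // leaf with no prerequisites
	        }
	    }
	    string[] ready, order;
	    foreach (node, count; remaining)
	        if (count == 0)
	            ready ~= node;
	    while (ready.length)
	    {
	        auto node = ready[0];
	        ready = ready[1 .. $];
	        order ~= node;
	        foreach (t; dependents.get(node, null))
	            if (--remaining[t] == 0)
	                ready ~= t;
	    }
	    return order;
	}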

After so many decades of "advancement", we're still stuck in the
gratuitously incompatible walled gardens, like the gratuitous browser
incompatibilities of the pre-W3C days of the Web. And on modern CPUs
with GHz clock speeds, RAM measured in GBs, and gigabit download speeds,
building Hello World with a system like dub (or Gradle, for that matter)
is still just as slow as (if not slower than!) running make back in the
90's on a 4 *kHz* processor.  It's ridiculous.

Why can't modern source code come equipped with dependency information
in a *standard format* that can be understood by *any* build system?
Build systems shouldn't need to reinvent their own gratuitously
incompatible DSL just to express what's fundamentally the same old
decades-worn directed graph. And programmers shouldn't need to repeat
themselves by manually enumerating individual graph edges (like Meson
apparently does). It should be the compilers that generate this
information -- RELIABLY -- in a standard format that can be processed by
any tool that understands the common format.  You should be able to

Re: LDC 1.13.0-beta2

2018-11-22 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 22, 2018 at 01:25:53PM +, Joakim via Digitalmars-d-announce 
wrote:
> On Wednesday, 21 November 2018 at 10:43:55 UTC, kinke wrote:
> > Glad to announce the second beta for LDC 1.13:
> > 
> > * Based on D 2.083.0+ (yesterday's DMD stable).
[...]
> I've added native builds for Android, including Android/x86_64 for the
> first time. Several tests for std.variant segfault, likely because of
> the 128-bit real causing x64 codegen issues, but most everything else
> passes.
[...]

What's the status of cross-compiling to 64-bit ARM?  On the wiki you
wrote that it doesn't fully work yet.  Does it work with this new
release?


T

-- 
Never wrestle a pig. You both get covered in mud, and the pig likes it.


Re: termcolor-d - Colors with writeln(...);

2018-11-21 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 21, 2018 at 11:49:29PM +, Vladimirs Nordholm via 
Digitalmars-d-announce wrote:
> On Wednesday, 21 November 2018 at 23:46:00 UTC, Dennis wrote:
> > On Wednesday, 21 November 2018 at 18:36:06 UTC, Vladimirs Nordholm
> > wrote:
> > > (hackish, POSIX only)
> > 
> > Windows support coming? :)
> 
> Maybe during another lunch break ;)
> 
> However, not sure if it's active anymore, but ConsoleD (by Robik and
> Adam D.  Ruppe) has most Windows specific colors and attributes
> available. Maybe give that a look?
[...]

I've used Adam Ruppe's terminal.d to great effect in my CLI programs.
Highly recommended:

https://github.com/adamdruppe/arsd/blob/master/terminal.d


T

-- 
Why did the mathematician reinvent the square wheel?  Because he wanted to 
drive smoothly over an inverted catenary road.


Re: sumtype 0.7.0

2018-11-21 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 21, 2018 at 12:38:25AM +, Paul Backus via 
Digitalmars-d-announce wrote:
> SumType is a generic sum type for modern D. It is meant as an
> alternative to `std.variant.Algebraic`.
> 
> Features:
>   - Pattern matching, including support for structural matching (★)
>   - Self-referential types, using `This`
>   - Works with pure, @safe, @nogc, and immutable (★)
>   - Zero runtime overhead compared to hand-written C
> - No heap allocation
> - Does not rely on runtime type information (`TypeInfo`) (★)
> 
> Starred features (★) are those that are missing from `Algebraic`.
[...]

Took a quick look at this.  Wow!  Excellent job!  Very nicely-designed
API, much better than std.variant.* IMO.

Any way this could be expanded to arbitrary types like Variant? Or is
that not possible without reverting to TypeInfo dependency?


T

-- 
Why have vacation when you can work?? -- EC


Re: termcolor-d - Colors with writeln(...);

2018-11-21 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 21, 2018 at 06:36:06PM +, Vladimirs Nordholm via 
Digitalmars-d-announce wrote:
> https://github.com/vladdeSV/termcolor-d
> 
> Saw a library recently which allowed you to color text, but it had an
> odd syntax.
> 
> Since I already had some code for coloring text in terminals, I made
> this (hackish, POSIX only) project during lunch break. It in action:
> 
> import std.stdio : writeln;
> import termcolor;
> 
> // Color → Green → Foreground
> writeln(C.green.fg, "Green text", resetColor);
> 
> // Color → Red → Background
> writeln(C.red.bg, "Red background", resetColor);
> 
> // only tested on macOS running zsh using iTerm2/Hyper.js/Terminal.app
> 
> Hope this helps those who are making CLI applications :^)

Clever idea!  Doesn't quite cover all the color features of newer
terminals, but good enough for basic coloring on the terminal.  Maybe
I'll steal your idea next time I'm looking for some terminal colors. :D


T

-- 
I am not young enough to know everything. -- Oscar Wilde


Re: LDC 1.13.0-beta2

2018-11-21 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 21, 2018 at 10:43:55AM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce the second beta for LDC 1.13:
> 
> * Based on D 2.083.0+ (yesterday's DMD stable).
> * The Windows packages are now fully self-sufficient, i.e., a Visual
> Studio/C++ Build Tools installation isn't required anymore.
> * Substantial debug info improvements.
> * New command-line option `-fvisibility=hidden` to hide functions/globals
> not marked as export, to reduce the size of shared libraries.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.13.0-beta2
> 
> Thanks to all contributors!

Awesome work keeping up with the DMD releases!


T

-- 
Береги платье снову, а здоровье смолоду. 


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-14 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 14, 2018 at 06:59:30PM +, Carl Sturtivant via 
Digitalmars-d-announce wrote:
> On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis wrote:
> > *sigh* Well, I guess that's the core issue right there. A lot of us
> > would strongly disagree with the idea that bool is an integral type
> > and consider code that treats it as such as inviting bugs. We _want_
> > bool to be considered as being completely distinct from integer
> > types. The fact that you can ever pass 0 or 1 to a function that
> > accepts bool without a cast is a problem in and of itself.

+1.

Honestly, I think 'bool' as understood by Walter & Andrei ought to be
renamed to 'bit', i.e., a numerical, rather than logical, value.

Of course, that still doesn't address the conceptually awkward behaviour
of && and || returning a numerical value rather than a logical
true/false state.

The crux of the issue is whether we look at it from an implementation
POV, or from a conceptual POV.  Since there's a trivial 1-to-1 mapping
from a logical true/false state to a binary digit, it's tempting to
conflate the two, but they are actually two different things. It just so
happens that in D, a true/false state is *implemented* as a binary value
of 0 or 1.  Hence, if you think of it from an implementation POV, it
sort of makes sense to treat it as a numerical entity, since after all,
at the implementation level it's just a binary digit, a numerical
entity. However, if you look at it from a conceptual POV, the mapping
true=>1, false=>0 is an arbitrary one, and nothing about the truth
values true/false entails an ability to operate on them as numerical
values, much less promotion to multi-bit binary numbers like int.

I argue that viewing it from an implementation POV is a leaky
abstraction, whereas enforcing the distinction of bool from integral
types is more encapsulated -- because it hides away the implementation
detail that a truth value is implemented as a binary digit.

It's a similar situation with char vs. ubyte: if we look at it from an
implementation point of view, there is no need for the existence of char
at all, since at the implementation level it's not any different from a
ubyte.  But clearly, it is useful to distinguish between them, since
otherwise why would Walter & Andrei have introduced distinct types for
them in the first place?  The usefulness is that we can define char to
be a UTF-8 code unit, with a different .init value, and this distinction
lets the compiler catch potentially incorrect usages of the types in
user code.  (Unfortunately, even here there's a fly in the ointment that
char also implicitly converts to int -- again you see the symptoms of
viewing things from an implementation POV, and the trouble that results,
such as the wrong overload being invoked when you pass a char literal
that, thanks to VRP (value range propagation), magically becomes an
integral value.)
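
The pitfall is easy to demonstrate with a minimal, self-contained
illustration (not taken from any particular codebase):

	import std.stdio;

	void main()
	{
	    writeln('a');      // prints: a
	    writeln('a' + 1);  // 'a' + 1 has type int, so this prints
	                       // 98, not 'b'
	}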


> > But it doesn't really surprise me that Walter doesn't agree on that
> > point, since he's never agreed on that point, though I was hoping
> > that this DIP was convincing enough, and its failure is certainly
> > disappointing.

I am also disappointed.  One of the reasons I like D so much is its
powerful abstraction mechanisms, and the ability of user types to behave
(almost) like built-in types.  This conflation of bool with its
implementation as a binary digit seems to be antithetical to abstraction
and encapsulation, and frankly does not leave a good taste in the mouth.
(Though I will concede that it's a minor enough point that it wouldn't
be grounds for deserting D. But still, it does leave a bad taste in the
mouth.)


> I'm at a loss to see any significant advantage to having bool as a
> part of the language itself if it isn't deliberately isolated from
> `integral types`.

Same thing with implicit conversion to/from char types and integral
types.  I understand the historical / legacy reasons behind both cases,
but I have to say it's rather disappointing from a modern programming
language design point of view.


T

-- 
Written on the window of a clothing store: No shirt, no shoes, no service.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-12 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Nov 13, 2018 at 02:12:30AM +, 12345swordy via 
Digitalmars-d-announce wrote:
> On Monday, 12 November 2018 at 21:38:27 UTC, Walter Bright wrote:
[...]
> > The underlying issue is is bool a one bit integer type, or something
> > special? D defines it as a one bit integer type, fitting it into the
> > other integer types using exactly the same rules.
> > 
> > If it is to be a special type with special rules, what about the
> > other integer types? D has a lot of basic types :-)
> 
> Ok, you don't want to introduce special rules for integers, and that
> understandable.
>
> However there needs be a tool for the programmer to prevent unwanted
> implicit conversation when it comes to other users passing values to
> their public overload functions.(Unless there is already a zero cost
> abstraction that we are not aware of).
[...]

This discussion makes me want to create a custom bool type that does not
allow implicit conversion. Something like:

struct Boolean {
    private bool impl;
    static immutable Boolean True = Boolean(true);
    static immutable Boolean False = Boolean(false);

    // For if (Boolean b)
    bool opCast(T : bool)() const { return impl; }

    ...
}

Unfortunately, it wouldn't quite work because there's no way for
built-in comparisons to convert to Boolean instead of bool. So you'd
have to manually surround everything with Boolean(...), which is a
severe usability handicap.


T

-- 
People tell me that I'm skeptical, but I don't believe them.


Re: Backend nearly entirely converted to D

2018-11-08 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 08, 2018 at 06:38:55PM +, welkam via Digitalmars-d-announce 
wrote:
> On Thursday, 8 November 2018 at 18:15:55 UTC, Stanislav Blinov wrote:
> > 
> > One keystroke (well ok, two keys because it's *) ;)
> > https://dl.dropbox.com/s/mifou0ervwspx5i/vimhl.png
> > 
> 
> What sorcery is this? I need to know. I guess its vim but how does it
> highlight symbols?

As I've said, it highlights symbols based on *word* match, not the
substring match your editor apparently uses.  The latter is flawed, and
is probably what led you to your conclusions.  I suggest looking for an
editor with a better syntax highlighter / search function.


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft system 
vulnerabilities cannot touch Linux---is priceless. -- Frustrated system 
administrator.


Re: Backend nearly entirely converted to D

2018-11-08 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 08, 2018 at 05:50:20PM +, welkam via Digitalmars-d-announce 
wrote:
> On Wednesday, 7 November 2018 at 22:08:36 UTC, H. S. Teoh wrote:
> > I don't speak for the compiler devs, but IMO, one-letter variables
> > are OK if they are local, and cover a relatively small scope.
> 
> By saying more descriptive I should have clarified that I meant to
> change them to 3-7 letter names. Small variable names are ok for small
> functions like the one in attrib.d called void importAll(Scope* sc).
> It has variable named sc and its clear where it is used.
> 
> Now for all of you who think that one letter variables are ok here is
> exercise. Go and open src/dmd/func.d with your favorite code editor.
> Find function FuncDeclaration resolveFuncCall(). Its around 170 LOC
> long. Now find all uses of variable Dsymbol s. Did you found them all?
> Are you sure?

Yes. My editor knows to search for 's' delimited by word boundaries, so
it would not match 's' in the middle of the word, but only 's'
surrounded by non-alphabetic characters.


> Ok now do the same for variable loc. See the difference?

I see no difference.

Moral: use a better editor. :-P


> > Java-style verbosity IMO makes code *harder* to read because the
> > verbosity gets > in your face, crowding out the more interesting
> > (and important) larger picture of code structure.
> 
> What editor do you use?

Vim.

And I don't even use syntax highlighting (because I find it visually
distracting).


> Here is the worst example to prove my point but its still sufficient. All
> editors worth your time highlights the same text when selected and here is
> example of one letter variable.
> https://imgur.com/a/jjxCdmh
> and tree letter variable
> https://imgur.com/a/xOqbkmn

Your syntax highlighter is broken.  It should not match substrings, but
only the entire word.  Vim's search function does this (including the
highlights, if I turned it on, but by default I leave it off).


> where is all that crowding and loss of large picture you speak of? Its
> the opposite. Code structure is more clear with longer variable names
> than one letter.

Your opinion is biased by using a flawed syntax highlighter / symbol
search function.


> > As Walter said in his recent talk, the length of variable names (or
> > identifiers in general, really) should roughly correspond to their
> > scope
> 
> At best this is argument form authority. You said how thing should be
> not why. For argument sake imagine situation where you need to expand
> function.  By your proposed rules you should rename local variables to
> longer names.  Thats ridiculous. Yes I watched that presentation and
> fully disagree with Walter and know for sure he doesnt have sound
> argument to support his position.
[...]

It's very simple.  The human brain is very good at context-sensitive
pattern matching, something which computers are rather poor at (well, at
least, traditional algorithms that aren't neural-network based).
Natural language, for example, is full of ambiguities, but in everyday
speech we have no problems figuring out what is meant, because the
context supplies the necessary information to disambiguate.  Therefore,
frequently-used words tend to be short (and tend to shorten over time),
because it's not necessary to enunciate the full word or phrase to
convey the meaning.  Rarely-used words tend to be longer (and resist
simplification over time) because context provides less information, and
so the full word becomes necessary in order to minimize information
loss.

Function parameters and local variables are, by definition, restricted
in scope, and therefore the context of the function provides enough
information to disambiguate short names.  And since local variables and
parameters would tend to be used frequently, the brain prefers to
simplify and shorten their identifiers.  As a result, after
acclimatizing, one begins to expect that short names correspond with
local variables, and long names correspond with non-local variables.
Going against this natural expectation (e.g., long names for locals,
short names for globals) causes extra mental load to resolve the
referents.

Your counterargument of expanding a function and needing to rename local
variables is a strawman.  A function is a function, and local variables
don't need to be renamed just because you added more code into it.

(However, there *is* a point that when the function starts getting too
long, it ought to be split into separate functions, otherwise it becomes
harder to understand. On that point, I do agree with you that dmd's code
could stand improvement, since 900-line functions are definitely far too
long to keep the entire context in short-term memory, and so using short
local identifiers begins to lose its benefits and increase its
disadvantages.)


T

-- 
Many open minds should be closed for repairs. -- K5 user


Re: Backend nearly entirely converted to D

2018-11-08 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 08, 2018 at 06:13:55PM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
[...]
> I guess we have very different ideas on what "small scope" is. For me
> it means around 10 lines. Here's an example in the DMD code base, the
> method for doing the semantic analyze on a call expression [1]. It's
> 902 lines long and has a parameter called "exp". Another example, the
> semantic analyze for an is expression [2], 310 lines long. It has a
> parameter called "e".
> 
> Someone familiar with the code base might know that the convention is
> that a variable of a type inheriting from the Expression class is
> usually called "e". Someone new to the code base will most likely not.
> I cannot see how starting to call the variable "expression" or
> "callExpression" would be disrupt. Currently when someone familiar
> with the code base reads the code and sees a variable named "e" the
> developer will think "hey, I know by convention that is usual an
> expression". If the variable was renamed to "expression" then both the
> one familiar and unfamiliar with the code base can immediately read
> that this variable holds an expression.
[...]

A function parameter named 'expression' is far too long. I wouldn't go
as far as calling it 'e', but maybe 'expr' is about as long as I would
go.  You're dealing with the code of a compiler, 'expr' should be
blatantly obvious already that it means "expression".  Spelling it out
completely just clutters the code and makes it harder to read.


T

-- 
Three out of two people have difficulties with fractions. -- Dirk Eddelbuettel


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 08, 2018 at 06:25:06PM +0100, Jacob Carlborg via 
Digitalmars-d-announce wrote:
> On 2018-11-08 05:16, Manu wrote:
> 
> > 4 seconds? That's just untrue. D is actually kinda slow these
> > days...  In my experience it's slower than modern C++ compilers by
> > quite a lot.
> 
> This is my result on macOS:
> 
> $ $ make -f posix.mak clean
> $ time make -f posix.mak -j 16
> real  0m3.127s
> user  0m5.478s
> sys   0m1.686s
[...]

Result on Debian/Linux (amd64):

real0m8.445s
user0m11.088s
sys 0m1.453s


Slower than C++ compilers?! That's impossible.  There must be something
wrong with your setup, or else with your OS.  Dmd is easily one of the
fastest compilers I've ever used, even after the noticeable slowdown
when we started bootstrapping from D. G++, for example, is at least an
order of magnitude slower.  On my system, anyway. YMMV obviously.

(Of course, it depends on what D features you use... template-heavy and
CTFE-heavy code tends to slow it down pretty badly. But still, it's
pretty fast compared to g++.)


T

-- 
Who told you to swim in Crocodile Lake without life insurance??


Re: xlsxd: A Excel xlsx writer

2018-11-07 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 07, 2018 at 04:58:46PM +, Robert Schadek via 
Digitalmars-d-announce wrote:
> On Wednesday, 7 November 2018 at 16:49:58 UTC, H. S. Teoh wrote:
> > 
> > Is there support for reading xlsx files too?
> > 
> 
> No, Pull Requests are welcome

Ah, unfortunately I have no experience working with xlsx or with
libxlsx.  I was hoping for read access in D so that I can write a simple
utility to extract data from xlsx files.  Maybe next time.


T

-- 
It always amuses me that Windows has a Safe Mode during bootup. Does that mean 
that Windows is normally unsafe?


Re: Backend nearly entirely converted to D

2018-11-07 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 07, 2018 at 09:49:41PM +, welkam via Digitalmars-d-announce 
wrote:
[...]
> One of biggest and needless hurdle I face in reading DMD code is
> single letter variable name. If I change one letter variable names to
> more descriptive ones would that patch be welcomed or considered
> needless change?

I don't speak for the compiler devs, but IMO, one-letter variables are
OK if they are local, and cover a relatively small scope.  Java-style
verbosity IMO makes code *harder* to read because the verbosity gets in
your face, crowding out the more interesting (and important) larger
picture of code structure.

As Walter said in his recent talk, the length of variable names (or
identifiers in general, really) should roughly correspond to their
scope: local variable names ought to be concise, but global variables
ought to be verbose (both to avoid identifier collision when a larger
amount of code is concerned, and also to serve as a convenient visual
indication that yes it's a global).


T

-- 
An imaginary friend squared is a real enemy.


Re: xlsxd: A Excel xlsx writer

2018-11-07 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Nov 07, 2018 at 04:41:39PM +, Robert Schadek via 
Digitalmars-d-announce wrote:
> https://code.dlang.org/packages/xlsxd
> 
> Announcing xlsxd a OO wrapper for the C library libxlsxwriter [1].
> 
> Run:
> 
> import libxlsxd;
> auto workbook  = newWorkbook("demo.xlsx");
> auto worksheet = workbook.addWorksheet("a_worksheet");
> worksheet.write(0, 0, "Hello to Excel from D");
> 
> 
> and you have created a Excel spreadsheet in the xlsx format with name
> demo.xlsx
> that contains the string "Hello to Excel from D" in row 0, column 0.
> 
> [1] https://github.com/jmcnamara/libxlsxwriter

Is there support for reading xlsx files too?


T

-- 
Старый друг лучше новых двух.


Re: Backend nearly entirely converted to D

2018-11-06 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Nov 06, 2018 at 02:12:02PM -0800, Walter Bright via 
Digitalmars-d-announce wrote:
> With the recent merging of the last of the big files machobj.d:
> 
> https://github.com/dlang/dmd/pull/8911
> 
> I'm happy to say we're over the hump in converting the backend to D!
> 
> Remaining files are minor: tk.c, cgen.c, dt.c, fp.c, os.c, outbuf.c,
> sizecheck.c, strtold.c and mem.c. I'll probably leave a couple in C
> anyway - os.c and strtold.c. sizecheck.c will just go away upon
> completion.
> 
> Thanks to everyone who helped out with this!

Awesome news!


> Of course, the code remains as ugly as it was in C. It'll take time to
> bit by bit refactor it into idiomatic D.

What sort of refactoring are we looking at?  Any low-hanging fruit here
that we non-compiler-experts can chip away at?


> The more immediate benefit is to get rid of all the parallel .h files,
> which were a constant source of bugs when they didn't match the .d
> versions.

Finally!


T

-- 
Do not reason with the unreasonable; you lose by definition.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-06 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Nov 06, 2018 at 07:44:41PM +, Atila Neves via 
Digitalmars-d-announce wrote:
> On Tuesday, 6 November 2018 at 18:00:22 UTC, Vladimir Panteleev wrote:
> > This is a tool + article I wrote in February, but never got around
> > to finishing / publishing until today.
> > 
> > https://blog.thecybershadow.net/2018/02/07/dmdprof/
> > 
> > Hopefully someone will find it useful.
> 
> Awesome, great work!
> 
> I really really hate waiting for the compiler.

OTOH, I really really hate that the compiler, in the name of faster
compilation, eats up all available RAM and gets OOM-killed on a low
memory system, so no amount of waiting will get me an executable.


T

-- 
Famous last words: I wonder what will happen if I do *this*...


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-06 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Nov 06, 2018 at 06:00:22PM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:
> This is a tool + article I wrote in February, but never got around to
> finishing / publishing until today.
> 
> https://blog.thecybershadow.net/2018/02/07/dmdprof/
> 
> Hopefully someone will find it useful.

I don't have the time to look into this right now, but at a cursory
glance, WOW.  This is awesome!  It looks like it would be really useful
one day when I try to tackle the dmd-on-lowmem-system problem again.
This will greatly help identify which Phobos modules cause big
slowdowns on a low-memory host system.


T

-- 
Recently, our IT department hired a bug-fix engineer. He used to work for 
Volkswagen.


Re: Wed Oct 7 - Avoiding Code Smells by Walter Bright

2018-11-05 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Nov 05, 2018 at 11:50:34AM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 11/5/18 7:19 AM, Codifies wrote:
> > I subscribed to this forum in the hope I'd get irregular updates on
> > useful and interesting things related to the D language.
> > 
> > This thread as far as I see it had degenerated into a somewhat
> > childish and unproductive waste of time, I wouldn't object to a
> > moderator locking this thread
> 
> There is a troll here posting as multiple different aliases, who has
> tried this before, and continually comes back to harp on the same
> issue. It's why I haven't participated, he doesn't need to have more
> encouragement.
> 
> Just give it time, he will give up and go back to being a lurker. It
> would be good if people just stop responding here.
[...]

Yeah, after a while I realized that he was not sincere about
contributing constructively, and I just stopped responding. It's simply
not worth the time and effort, and it only generates more noise anyway.
It's standard online forum advice, guys: don't feed the troll.


T

-- 
The most powerful one-line C program: #include "/dev/tty" -- IOCCC


Re: LDC 1.13.0-beta1

2018-11-04 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Nov 02, 2018 at 09:04:13PM +, kinke via Digitalmars-d-announce 
wrote:
> Glad to announce the first beta for LDC 1.13:
> 
> * Based on D 2.083.0.
> * The Windows packages are now fully self-sufficient, i.e., a Visual
> Studio/C++ Build Tools installation isn't required anymore.
> * Substantial debug info improvements for GDB.
> 
> Full release log and downloads:
> https://github.com/ldc-developers/ldc/releases/tag/v1.13.0-beta1
> 
> Thanks to all contributors!

Just wanted to say thanks to the LDC team and everyone else who was
involved in making it possible for LDC releases to track DMD releases so
closely.  I'm quite tempted to switch to LDC as my main compiler instead
of DMD git master, because of the better codegen and wider range of arch
targets.  Thanks, guys!


T

-- 
Leather is waterproof.  Ever see a cow with an umbrella?


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-02 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Nov 02, 2018 at 10:18:11AM +, ShadoLight via Digitalmars-d-announce 
wrote:
> On Friday, 2 November 2018 at 00:53:52 UTC, H. S. Teoh wrote:
> > 
> > And along that line, recent wisdom is that it's better to move
> > things *out* of classes (and structs) if they don't need access to
> > private members. (Sorry, I wanted to include a link for this, but I
> > couldn't find the article -- the title eludes me and google isn't
> > turning up anything useful.)  Class and struct APIs should be as
> > minimal as possible -- just enough to do what needs to be done and
> > no more, and the rest of the syntactic sugar (that makes it more
> > palatable to your users) belongs outside as optional convenience
> > functions.
> > 
> 
> Maybe you are thinking of the "Prefer non-member non-friend functions
> to member functions" rule from Scott Meyers' "Effective C++" books?
[...]

Ah yes, that's the one.  Thanks!


T

-- 
Democracy: The triumph of popularity over principle. -- C.Bond


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-01 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Nov 02, 2018 at 12:25:21AM +, unprotected-entity via 
Digitalmars-d-announce wrote:
[...]
> "Encapsulation is sometimes referred to as the first pillar or
> principle of object-oriented programming. According to the principle
> of encapsulation, a class or struct can specify how accessible each of
> its members is to code outside of the class or struct. Methods and
> variables that are not intended to be used from outside of the class
> .. can be hidden to limit the potential for coding errors or malicious
> exploits."
>
> https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/

That is a narrow, very OO-centric definition of encapsulation, and IMO
it sorta misses the larger point behind encapsulation, focusing instead
on the mechanics, and on the specific implementation of it in the OO
paradigm.  The concept of encapsulation goes beyond OO and classes; it
has to do with modularity and well-defined interfaces between modules
(or units of encapsulation -- not necessarily the same thing as a D
module).

It's fine to explain how OO's concept of classes and methods implements
encapsulation, but to *define* encapsulation in terms of classes and
methods is IMO missing the forest for the trees.  If *that's* your basis
for understanding encapsulation, it would explain a lot of your
reactions to this thread.  But there is more to the world than the
narrow realm of the OO paradigm, and in a multi-paradigm language like
D, it doesn't make sense to be handicapped by a definition of
encapsulation that really only makes sense in the context of an OO
language.

What encapsulation is really about, is the ability to modularize your
code into self-contained units which have well-defined interfaces, with
private code and data that cannot be accessed outside of that unit.  In
OO languages, this unit would be the class.  But in D, it's the module.
That doesn't make D non-encapsulated; it just means we're using a
different unit than an OO language.  It's just a difference between
pounds and dollars, not some fundamental discrepancy.  You just have to
convert one currency to another, that's all.
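As a quick sketch of what that looks like in practice (the file names
and symbols below are hypothetical, purely for illustration):

```d
// counter.d -- in D, the module is the unit of encapsulation
module counter;

private int totalBumps;       // hidden from everything outside this module

struct Counter {
    private int n;            // private means module-private, not struct-private
    void bump() { ++n; ++totalBumps; }
    int value() const { return n; }
}

// app.d -- a separate module, i.e. a separate unit of encapsulation
module app;
import counter;

void main() {
    Counter c;
    c.bump();
    assert(c.value == 1);
    // ++c.n;          // compile error: n is private to module counter
    // ++totalBumps;   // compile error: not visible outside counter
}
```

Inside counter.d any code may touch `n`; outside it, only the public
API is reachable -- the same guarantee a class gives in Java/C#, just
drawn at the file boundary instead.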


> D simply has *zero* support for encapsulation of classes or structs,
> within the module (only encapsulation from code that is outside the
> module).
> 
> Any programmers interested in enforcing encapsulation in D (the first
> pillar of OOP), are *forced* (because the language doesn't provide the
> tool to do anything else) to use the one class, one struct, per file
> solution. Which seems really silly to me. D forces you into Java like
> programming - just to encapsulate a little class or struct.
> 
> Speaking of 'structs', I don't see anyone in the D community, telling
> others to use 'one struct per module'.

Because we love UFCS, and structs just lend themselves very well to that
sort of usage.

And along that line, recent wisdom is that it's better to move things
*out* of classes (and structs) if they don't need access to private
members. (Sorry, I wanted to include a link for this, but I couldn't
find the article -- the title eludes me and google isn't turning up
anything useful.)  Class and struct APIs should be as minimal as
possible -- just enough to do what needs to be done and no more, and the
rest of the syntactic sugar (that makes it more palatable to your users)
belongs outside as optional convenience functions.
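A small D sketch of that style (the type and names here are invented
for illustration): the struct keeps only what needs access to its
private state, and the sugar is a free function that UFCS lets callers
use as if it were a member:

```d
struct Temperature {
    private double celsius;              // the minimal core API guards this
    this(double c) { celsius = c; }
    double inCelsius() const { return celsius; }
}

// Convenience sugar lives *outside* the type: it needs only the public
// API, and UFCS still lets callers write t.inFahrenheit.
double inFahrenheit(const Temperature t) {
    return t.inCelsius * 9.0 / 5.0 + 32.0;
}

void main() {
    auto t = Temperature(100);
    assert(t.inFahrenheit == 212);
}
```

The type's API stays minimal, and the convenience function can be added
(or removed) without ever touching the struct's invariants.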


T

-- 
There is no gravity. The earth sucks.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-01 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 01, 2018 at 10:37:59PM +, unprotected-entity via 
Digitalmars-d-announce wrote:
> On Thursday, 1 November 2018 at 03:10:22 UTC, H. S. Teoh wrote:
> > 
> > Actually, code within a module *should* be tightly coupled and
> > cohesive -- that's the whole reason to put that code inside a single
> > module in the first place.  If two pieces of code inside a module
> > are only weakly coupled or completely decoupled, that's a sign that
> > they should not be in the same module at all.  Or at the very least,
> > they should belong in separate submodules that are isolated from
> > each other.
> 
> How does one determine, whether a 10,000 line module, is tightly
> coupled and cohesive?
> 
> Only the author can make that statement - which they naturally will,
> even if it's not true.
> 
> An outsider, seeking to verify that statement, has a hell of a job on
> their hands...(I for one, think code smell immediately).

The code reviewer can reject a 10,000-line module off the bat as being
too large.  It's up to the project to enforce such conventions.


> As soon as you see a class interface in a module, in D, you have to
> assume there is other code in the module, perhaps down around line
> 9,900, that is bypassing its interface, and doing who knows what to
> it.
> 
> Sure, the author might be happy, but an auditor/code reviewer, will
> likely have a different view, when a 10,000 line module is shoved in
> front of them, and they know, that 'anything goes', 'all bets are
> off', inside the D module

Yes, and this is when you, or the project manager, put your foot down
and say: this is unacceptable, break this module up into smaller
submodules or else it doesn't go into the master repository.  Simple.


[...]
> I don't use a particular language. I'm more interested in design and
> architecture.
> 
> In the age of 'lock-down-everything', increased modularity is becoming
> more important. A monolithic module approach, is already outdated, and
> risky, in terms of developing secure, maintainable software

I don't understand what D's approach to modules has to do with being
monolithic.  Why can't you just write smaller modules if private being
module-wide bothers you so much?  D has package.d that can make a set of
smaller submodules appear as a larger module to outside code.  Use it.
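For example (a hypothetical layout -- the names are made up), a mylib/
directory can present several small submodules as one importable module:

```d
// mylib/package.d -- importing "mylib" pulls in the whole package
module mylib;
public import mylib.lexer;
public import mylib.parser;

// mylib/lexer.d -- a small, self-contained unit of encapsulation
module mylib.lexer;
private int internalState;          // invisible even to mylib.parser
string[] tokenize(string src) { /* ... */ return null; }

// client.d -- outside code sees one "big" module
import mylib;                       // gets tokenize, parse, etc.
```

Each submodule keeps its own module-wide private scope, while clients
still write a single import.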


> I think providing an additional tool, to those who seek to use D, such
> as 'strict private' (syntax can be argued about), would aid better
> design - it can't make it any worse, that's for sure).
> 
> Although. I don't mean strict private like freepascal, but strict
> private, as in it inherits everything that private already does, but
> additionally, become private within the module too.
> 
> Is that really such a bad idea? Are there no programmers out there in
> the D world that might think this could be a good, additional tool, to
> give programmers, so they can better architect their solution?

It's not a bad idea. In fact, if I were in charge of designing D, I'd
probably do the same.  But D chose to do it differently -- and part of
the rationale, if I understand it correctly, is to eliminate the need
for `friend` declarations like in C++.  But maybe Andrei or whoever it
was that made this decision can step up and explain why it was designed
this way.

All we're saying is that it's not the end of the world if private is
module-wide rather than aggregate-wide.  You can still have your
encapsulation by splitting up your code into smaller files, which, given
what you said above, you want to be doing anyway since the idea of a
10,000-line source file is so abhorrent to you.

The fundamental issue here is that there needs to be some kind of unit
of encapsulation.  Java and C# chose to make the class that unit; D
chose the module.  If the Java/C# unit of encapsulation is what you
want, all you have to do is to make the module equivalent to the class,
problem solved.  It's not as though there is no encapsulation at all and
you're left out in the wild west of C's global namespace, or that the
workaround is too onerous to be practical.


> The amount of push back in the D community on this idea, is really odd
> to me. I'm still trying to understand why that is. Are D programmers
> just hackers, interested in getting their code to work, no matter
> what? Are there not enough Java/C# programmers coming to D - and
> bringing their design skills with them?

It's not so much push back as being worn out from every other newcomer
clamoring for the same thing year after year without ever having tried
to do it the D way.  Not saying that you're doing that, but it just
becomes a sort of knee-jerk reaction after X number of newcomers barge
in repeating the same old complaints that have the same old answers
that everyone is tired of repeating.

Having said that, though, there are some here who *do* want something
like what you describe... IIRC Manu has voiced this before, and there
may be others. (I myself don't consider it a 

Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-10-31 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Nov 01, 2018 at 02:45:19AM +, unprotected-entity via 
Digitalmars-d-announce wrote:
[...]
> Another thing to look for, is signs of code smell. I would include in
> this, unit tests calling private methods (which seems to be a popular
> thing for D programmers to do). Some will disagree that this is a code
> smell, but I have yet to see a good argument for why it is not.

White-box testing.

In principle, I agree with you that if your unittests are doing
black-box testing, then they should definitely not be calling private
methods.

However, limiting yourself to black-box testing means your private
functions can be arbitrarily complex and yet never be thoroughly tested.
Sometimes you really do want a unittest to ensure the private method is
doing what you think it's doing, and this requires white-box testing.
This is especially important to prevent regressions, even if it seems
redundant at first.  Only doing black-box testing means a later code
change in the private method can subtly introduce a bug that's not
caught by the unittest (because it cannot call a private method directly
to verify this).
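As an illustration (the module and helper below are hypothetical), a
unittest placed in the same module can pin down a private helper's edge
cases directly:

```d
module mathutil;

// Private helper: complex enough to deserve its own regression tests.
private int clampIndex(int i, int len) {
    if (i < 0) return 0;
    return i >= len ? len - 1 : i;
}

// The public, black-box-testable API.
int safeGet(const(int)[] arr, int i) {
    return arr[clampIndex(i, cast(int) arr.length)];
}

// White-box test: same module, so calling the private helper is fine.
unittest {
    assert(clampIndex(-5, 10) == 0);     // lower edge
    assert(clampIndex(12, 10) == 9);     // upper edge
    assert(safeGet([1, 2, 3], 99) == 3); // via the public API too
}
```

If a later change breaks clampIndex's edge-case behavior, the white-box
assertions catch it immediately, instead of hoping some public-API test
happens to exercise that path.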


> Forget LOC. Look for good architecture, decoupling, modularity,
> encapsulation, information hidingetc..etc... again, sadly, these
> concepts are not directly promoted when writing modules in D, since
> the module exposes everything to everything else in the module - and
> programmers will surely make use of any convenient hack that avoids
> them having to think about good architecture ;-)
[...]

Actually, code within a module *should* be tightly coupled and cohesive
-- that's the whole reason to put that code inside a single module in
the first place.  If two pieces of code inside a module are only weakly
coupled or completely decoupled, that's a sign that they should not be
in the same module at all.  Or at the very least, they should belong in
separate submodules that are isolated from each other.

But besides all this, D's philosophy is about mechanism rather than
policy.  The goal is to give the programmer the tools to do what he
needs to do, rather than a bunch of red tape to dictate what he cannot
do.  That's why we have @trusted, @system, and even asm.  The programmer
is responsible for making sane architectural decisions with the tools he
is given, rather than being told what (not) to do within the confines of
his cell.  If you're looking for policy, maybe Java would suit you
better. :-P


T

-- 
Nobody is perfect.  I am Nobody. -- pepoluan, GKC forum


Re: New Initiative for Donations

2018-10-26 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Oct 26, 2018 at 02:38:08AM +, Joakim via Digitalmars-d-announce 
wrote:
[...]
> On Thursday, 25 October 2018 at 23:10:50 UTC, H. S. Teoh wrote:
[...]
> > Common fallacy: new == better.
> 
> As with D, sometimes the new _is_ better, so perhaps you shouldn't
> assume old is better either.

Another common fallacy: negation of "new == better" implies "old ==
better".

:-D


T

-- 
No! I'm not in denial!


Re: fearless v0.0.1 - shared made easy (and @safe)

2018-09-19 Thread H. S. Teoh via Digitalmars-d-announce
On Wed, Sep 19, 2018 at 10:58:04AM +, thedeemon via Digitalmars-d-announce 
wrote:
> On Tuesday, 18 September 2018 at 17:20:26 UTC, Atila Neves wrote:
> > I was envious of std::sync::Mutex from Rust and thought: can I use
> > DIP1000 to make this work in D and be @safe? Turns out, yes.
> 
> Nice! I spent a few minutes playing with the example and trying to
> break it, make the pointer escape, but I couldn't, the compiler caught
> me every time.  This DIP1000 thing seems to be working!

Finally, dip1000 is proving its value.
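For the curious, a minimal sketch (not fearless's actual API) of the
kind of escape dip1000 rejects, when compiled with the dip1000 preview
switch enabled:

```d
@safe int* leak() {
    int local = 42;
    int* p = &local;   // p is inferred `scope` under dip1000
    return p;          // rejected: scope variable p may not be returned
}
```

Without dip1000's scope checking, this dangling pointer would silently
escape; with it, the compiler refuses to compile the function.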


T

-- 
Doubt is a self-fulfilling prophecy.


Re: dxml 0.2.0 released

2018-09-13 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, Aug 30, 2018 at 07:26:28PM +, nkm1 via Digitalmars-d-announce wrote:
> On Monday, 12 February 2018 at 16:50:16 UTC, Jonathan M Davis wrote:
> > Folks are free to decide to support dxml for inclusion when the time
> > comes and free to vote it as unacceptable. Personally, I think that
> > dxml's approach is ideal for XML that doesn't use entity references,
> > and I'd much rather use that kind of parser regardless of whether
> > it's in the standard library or not. I think that the D community
> > would be far better off with std.xml being replaced by dxml, but
> > whatever happens happens.

+1.  I vote for adding dxml to Phobos.


[...]
> I'm using dxml now, and it's a very good library. So I thought "it
> should be in Phobos instead of std.xml" and searched the newsgroup.
> Sorry for necroposting. Anyway, what I wanted to say is just take an
> example from Perl and call it std.xml.simple. Then people would know
> what to expect from it and would use it (because everyone likes
> simple). That would also leave a way to include std.xml.full (or some
> such) at some indefinite point in the future. Which is, in practice,
> probably never - and that's fine, because who needs DTD? screw it...
[...]

That's a good idea, actually.  That will stop people who expect full
DTD support from complaining that it's not supported by the standard
library.

I vote for adding dxml to Phobos as std.xml.simple.  We can either leave
std.xml as-is, or deprecate it and work on std.xml.full (or
std.xml.complex, or whatever).  The current state of std.xml gives a
poor impression to anyone coming to D the first time and wanting to work
with XML, and having std.xml.simple would be a big plus.


T

-- 
This is not a sentence.


Re: Copy Constructor DIP and implementation

2018-09-12 Thread H. S. Teoh via Digitalmars-d-announce
On Tue, Sep 11, 2018 at 03:08:33PM +, RazvanN via Digitalmars-d-announce 
wrote:
> I have finished writing the last details of the copy constructor
> DIP[1] and also I have published the first implementation [2].
[...]

Here are some comments:

- The DIP should address what @implicit means when applied to a function
  that isn't a ctor.  Either it should be made illegal, or the spec
  should clearly state that it's ignored.

   - I prefer the former option, because that minimizes the risk of
 conflicts in the future if we were to expand the scope of @implicit
 to other language constructs.

   - However, the latter option is safer in that if existing user code
 uses @implicit with a different meaning, the first option would
 cause code breakage and require the user to replace all uses of
 @implicit with something else.

- The DIP needs to address what a copy ctor might mean in a situation
  with unions, and/or whether it's legal to use unions with copy ctors.
  There are a few important cases to consider (there may be others):

   - Should copy ctors even be allowed in unions?

   - The copy ctor is defined in the union itself.

   - The union contains fields that have copy ctors. If two overlapping
 fields have copy ctors, which ctor will get called?  Should this
 case be allowed, or made illegal?

   - How would type qualifiers (const, immutable, etc.) interact with
 unions of the above two cases?

   - If a struct contains a union, and another non-union field that has
 a copy ctor, how should the compiler define the generated copy
 ctor of the outer struct?

- If a struct declares only one copy ctor, say mutable -> mutable, then
  according to the DIP (under the section "copy constructor call vs.
  standard copying (memcpy)"), declaring an immutable variable of that
  type will default to standard copying instead.

   - This means if the struct needs explicit handling of copying in a
 copy ctor, the user must remember to write all overloads of the
 copy ctor, otherwise there will be cases where standard copying is
 silently employed, bypassing any user-defined semantics that may be
 necessary for correct copying.

   - Shouldn't there be a way for the compiler to automatically generate
 this boilerplate code instead?  Should there be a way to optionally
 generate warnings in such cases, so that the user can be aware in
 case default copying isn't desired?

- What should happen if the user declares a copy ctor as a template
  function?  Should the compiler automatically use that template to
  generate const/immutable/etc. copy ctors when needed?
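To make the overload concern above concrete, here is a sketch in the
DIP's proposed syntax (the type is invented for illustration):

```d
struct RefCounted {
    int* count;

    // The only user-defined copy ctor: mutable -> mutable.
    @implicit this(ref RefCounted rhs) {
        count = rhs.count;
        if (count !is null) ++*count;
    }
}

void example(RefCounted a) {
    RefCounted b = a;   // calls the copy ctor: refcount is bumped
    // immutable RefCounted c = a;
    // Per the DIP, no matching overload exists here, so this would fall
    // back to standard (memcpy) copying -- silently skipping ++*count.
}
```

This is exactly the silent-bypass hazard: one forgotten overload, and
the refcount invariant quietly breaks for immutable copies.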


T

-- 
Change is inevitable, except from a vending machine.


Re: Assertions in production code on Reddit

2018-08-31 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Aug 31, 2018 at 02:02:39PM -0700, Walter Bright via 
Digitalmars-d-announce wrote:
> On 8/31/2018 1:19 PM, Nick Sabalausky (Abscissa) wrote:
> > (IF the programmer in question even has the expertise to implement
> > such a system correctly anyway - and most don't).
> 
> The closer you can get to the ideal, the better. It's not all or
> nothing.
> 
> I'll have done my job if people would just quit justifying sticking
> their fingers in their ears and shouting lalalalalalalalal when a bug
> is detected.
> 
> Don't you find it terrifying that nobody has even written a book on
> the topic?

Maybe you should write the first one. :-D


T

-- 
Never wrestle a pig. You both get covered in mud, and the pig likes it.


Re: Write for the D Blog!

2018-08-13 Thread H. S. Teoh via Digitalmars-d-announce
On Mon, Aug 13, 2018 at 02:29:30PM +, Mike Parker via 
Digitalmars-d-announce wrote:
[...]
> If you've got something D-related you'd like to tell the world about,
> please let me know. It doesn't have to be a guest post -- project
> highlights, where you give me info about your project and I write the
> post, are also welcome.  I'm open to ideas for other formats, too.
> Just drop me a line at aldac...@gmail.com and let me know what you'd
> like to do.
[...]

I'm too busy to actually work on much right now, but if you're running
short of material to post, what about this draft wiki article that I
haven't been able to finish?

https://wiki.dlang.org/User:Quickfur/Compile-time_vs._compile-time

It's pretty much ready except for some details that need to be updated
(that I'm sorry to say I haven't had time to get around to).  You can
freely copy/adapt/etc. whatever you need from this article to throw
together a blog post.

Just throwing it out there.


T

-- 
Живёшь только однажды.

