Re: The New DIP Process

2024-02-28 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, February 28, 2024 10:44:08 PM MST Brad Roberts via Digitalmars-d-announce wrote:
> On 2/28/2024 7:34 PM, Jonathan M Davis via Digitalmars-d-announce wrote:
> > On Wednesday, February 28, 2024 7:18:29 PM MST Mike Parker via
> > Digitalmars-d-announce wrote:
> >> On Wednesday, 28 February 2024 at 19:24:32 UTC, Jonathan M Davis wrote:
> >>> I see that they're up on the NNTP server, and the web forum is
> >>> hooked up to them, but there is no mailing list. Is that
> >>> forthcoming and just isn't up yet since that takes some time,
> >>> or are these lists not going to have mailing lists like the
> >>> others?
> >>
> >> They had to be up on NNTP for them to be added to the forums. I
> >> just didn't think about the mailing list. I'll contact Brad.
> >
> > Thanks.
> >
> > - Jonathan M Davis
>
> I set them up earlier today.  It's entirely possible I missed something
> while configuring them as it's been just over 6 years since the last new
> group was added, so do shout if anything looks off.  I see that the
> first two messages already posted made it through, so my confidence is
> reasonably high.
>
> Also worth noting, the news group names are NOT dip.idea and
> dip.development.  They're actually digitalmars.dip.ideas (note the
> plural) and digitalmars.dip.development.
>
> I made the list names just dip.ideas@ and dip.development@ for brevity.
>
> Later,
> Brad

I was able to subscribe. Thanks!

- Jonathan M Davis





Re: The New DIP Process

2024-02-28 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, February 28, 2024 7:18:29 PM MST Mike Parker via Digitalmars-d-announce wrote:
> On Wednesday, 28 February 2024 at 19:24:32 UTC, Jonathan M Davis wrote:
> > I see that they're up on the NNTP server, and the web forum is
> > hooked up to them, but there is no mailing list. Is that
> > forthcoming and just isn't up yet since that takes some time,
> > or are these lists not going to have mailing lists like the
> > others?
>
> They had to be up on NNTP for them to be added to the forums. I
> just didn't think about the mailing list. I'll contact Brad.

Thanks.

- Jonathan M Davis





Re: The New DIP Process

2024-02-28 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, February 27, 2024 8:28:01 PM MST Mike Parker via Digitalmars-d-announce wrote:
> Most of the process takes place in two new forums: DIP Ideas and
> DIP Development (dip.idea and dip.development on the NNTP
> server). The purpose of the Ideas forum is to determine if
> developing the DIP is feasible. The purpose of the Development
> forum is to strengthen proposals in development.

I see that they're up on the NNTP server, and the web forum is hooked up to
them, but there is no mailing list. Is that forthcoming and just isn't up
yet since that takes some time, or are these lists not going to have mailing
lists like the others?

- Jonathan M Davis





Re: Preparing for the New DIP Process

2024-01-25 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, January 25, 2024 8:03:41 AM MST Max Samukha via Digitalmars-d-announce wrote:
> On Monday, 22 January 2024 at 23:28:40 UTC, Jonathan M Davis wrote:
> > Of course, ultimately, different programmers have different
> > preferences, and none of us are going to be happy about
> > everything in any language.
>
> It's not only about preferences. The feature is inconsistent with
> how 'invariant' and 'synchronized' are specified. They imply
> class-instance-level private, while the language dictates
> module-level. Consider:
>
> ```
> synchronized class C
> {
>  private int x;
>  private int y;
>
>  invariant () { assert (x == y); }
> }
>
> void foo(C c)
> {
>  // mutate c
> }
> ```
>
> With module-level private, 'foo' is part of C's public interface,
> but it neither locks on c, nor runs the invariant checks. I
> personally have no idea how to fix that sensibly except by
> ditching class invariant/synchronized entirely.

Well, synchronized is actually a function attribute, not a class attribute
(TDPL talks about synchronized classes, but they've never actually been a
thing; it was just a planned idea that was never implemented). You can stick
synchronized on the class itself, but it still only affects the member
functions. So, mutating the class object via non-member functions in the
module really isn't any different from mutating the object with member
functions which aren't marked with synchronized. So, if anything here, I
would argue that the confusion comes from being allowed to stick attributes
on a class and then have them affect the member functions. It does allow you
to stick the attribute in one place and then affect the entire class, but
I'm inclined to think that it probably shouldn't have been allowed in cases
where the attribute isn't actually for the class itself.
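
A minimal sketch of that distinction (the class and its members are
illustrative):

```d
class Counter
{
    private int x;

    // synchronized is a function attribute: the body runs while holding
    // the object's monitor.
    synchronized void inc() { ++x; }

    // An unmarked member function takes no lock at all...
    void reset() { x = 0; }
}

// ...so a module-level function mutating the object is no different:
void clobber(Counter c) { c.x = -1; }  // legal, since private is module-level
```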

Of course, the change that I'd really like to see here is synchronized
removed from the language, since I think that it was definitely a misfeature
(along with having a monitor inside of every class instance to allow
synchronized to work, whether or not the type is shared or has any
synchronized methods).

Regardless, because synchronized is not at all a class attribute, I don't
agree that it implies anything related to class-level private, much as I can
see how being allowed to put it directly on a class could cause confusion.

As for invariants, all that the spec promises is that they're called when
public member functions are called. So, again, having a module-level
function directly mutate the members doesn't really violate anything.
However, the part there that I do agree is questionable is that because the
module-level function could be public, it makes it so that it's pretty easy
to end up in a situation where an invariant is skipped when the object is
mutated by calling public functions from the module. But there are also
likely to be cases where it's useful to be able to bypass the invariant like
that (though obviously, it then requires that the code maintain the
invariant, just like @trusted code needs to maintain the promises of @safe).
So, I don't think that it's necessarily a problem that the language works
this way, but it's certainly true that it's something to be mindful of. And
if you want to explicitly run the invariant in such situations, then you can
just assert the class reference. But as with anything related to private, if
you want to guarantee that something only accesses an object via its public
API, you can always just put it in another module.
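
A small sketch of both points, using the shape of the earlier example (names
are illustrative):

```d
class C
{
    private int x, y;

    invariant () { assert(x == y); }

    // Public member function: the invariant runs on entry and exit.
    void set(int v) { x = y = v; }
}

void foo(C c)
{
    c.x = 42;    // module-level private access: no invariant check happens
    assert(c);   // asserting the reference runs the invariant explicitly
                 // (here it would fail, since x != y)
}
```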

- Jonathan M Davis





Re: Would this be a useful construct to add to D? auto for constructor call.

2024-01-23 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, January 23, 2024 4:11:00 AM MST ryuukk_ via Digitalmars-d-announce wrote:
> On Tuesday, 23 January 2024 at 06:30:08 UTC, Jonathan M Davis wrote:
> > That being said, I expect that it would be pretty easy to write
> > a mixin to do something like that if you really wanted to.
> > Also, if you're simply looking to not have to name the type,
> > you could do
> >
> > dataGrid = new typeof(dataGrid)(15);
> >
> > - Jonathan M Davis
>
> You like to turn off people before they get the chance to develop
> further, this is bad
>
> You should try more languages, it'll be eye opener
>
> ``dataGrid = new typeof(dataGrid)(15);`` is both verbose and ugly
>
> Besides, you seem to have missed this:
>
> https://github.com/dlang/DIPs/blob/e2ca557ab9d3e60305a37da0d5b58299e0a9de0e/
> DIPs/DIP1044.md
>
> https://github.com/dlang/dmd/pull/14650
>
> It could be expanded with structs/classes
>
> So your "it can't be done" argument is already wrong

I never said that it couldn't be done. I said that it goes against how
expressions and assignment in D normally work, so it's probably not a change
that would be accepted. And the DIP and PR that you linked to have been
rejected. If the OP wants to push for a change like this, then they can, and
they might get lucky, but I would expect it to be rejected.

Either way, there are ways to do something similar with what we already
have, so I pointed them out. While you might not like a solution like using
typeof, it is an option that someone can use right now regardless of what
improvements we get in the future.

- Jonathan M Davis





Re: Would this be a useful construct to add to D? auto for constructor call.

2024-01-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, January 22, 2024 11:05:28 PM MST Chris Katko via Digitalmars-d-announce wrote:
> ```D
> class dataGridLayerView{
> int t;
> this(int _t){
>t = _t;
>}
> }
>
> class myClass{
> dataGridLayerView dataGrid;
>
> this()
>{
>dataGrid = new auto(15); // <--- new
>// instead of
>dataGrid = new dataGridLayerView(15);
>}
> }
> ```
>
> Because it seems, conceptually, the compiler should know all the
> details required here to simply insert the right constructor
> name. Basically just the reverse of:
>
> ```D
> auto t = sqrt(15);
> ```
>
> So intuitively it makes sense to me.
>
> It might even be possible to write a mixin to do this, I'm not
> sure.
>
> I'm no D wizard, but I don't see any obvious name or lexing
> conflicts/ambiguity.
>
> Cheers,
> --Chris

This is the wrong forum for questions. This is for announcements.

https://forum.dlang.org/group/general is for general discussions on D.

https://forum.dlang.org/group/learn is for asking questions about using D.

In any case, as far as your question goes, it is unlikely that a feature
like that would be implemented, because expressions in D do not typically
get their type from what they're assigned to. The type of the expression is
determined separately from where it is used, and then it's checked to see
whether it works where it's being used. There are a few exceptions (mostly
having to do with initializing variables from array literals), so I don't
know that the chances of adding a feature like this are zero, but I don't
think that they're high.

That being said, I expect that it would be pretty easy to write a mixin to
do something like that if you really wanted to. Also, if you're simply
looking to not have to name the type, you could do

dataGrid = new typeof(dataGrid)(15);
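
As a sketch of the mixin route mentioned above (the helper name newAuto is
made up for illustration):

```d
// Expands to: "lhs = new typeof(lhs)(args);"
enum newAuto(string lhs, string args) =
    lhs ~ " = new typeof(" ~ lhs ~ ")(" ~ args ~ ");";

class DataGridLayerView
{
    int t;
    this(int t) { this.t = t; }
}

class MyClass
{
    DataGridLayerView dataGrid;

    this()
    {
        // Same effect as: dataGrid = new DataGridLayerView(15);
        mixin(newAuto!("dataGrid", "15"));
    }
}
```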

- Jonathan M Davis





Re: Preparing for the New DIP Process

2024-01-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, January 22, 2024 4:01:54 PM MST Walter Bright via Digitalmars-d-announce wrote:
> On 1/21/2024 3:51 AM, zjh wrote:
> > When you need `friend`, You can put them all in one module.
> > Sometimes, when `multiple classes` are closely related and independent,
> > `class level privacy` becomes very important. No one wants , someone from
> > outside may secretly steal money from your home.
>
> D does not allow a different module accessing private class members. Private
> access is limited to the module enclosing that class.
>
> This encourages a different use of modules than C++ "toss code randomly into
> different source files."

It also greatly simplifies having to deal with visibility attributes. I can
understand folks being concerned about it at first, but in practice, it
really isn't a problem.

If it really is a problem for a piece of code to have access to something
private that's in the same module, then you can simply put them in separate
modules, and if your module is large enough that you can't keep track of
that sort of thing like you need to, then it should probably be split up.

And for those cases where you need cross-module access without it being
public, there's always the package attribute.
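
A short sketch of both mechanisms (module and package names are
illustrative):

```d
// pkg/a.d
module pkg.a;

private int hidden;   // accessible anywhere in this module, nowhere else
package int counter;  // accessible throughout package pkg

// pkg/b.d
module pkg.b;
import pkg.a;

void f()
{
    ++counter;    // fine: same package
    // ++hidden;  // error: private to module pkg.a
}
```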

The end result is that visibility attributes are very easy to reason about,
and you don't have to worry about them much. I really think that the folks
who insist on having private be private to the class/struct (or who insist
on a new attribute which does that) are just too tied to how other languages
work and haven't given D's approach a chance. I don't think that I have
_ever_ seen a problem stem from the fact that D's private is private to the
module. And in fact, it causes so few issues that way that plenty of folks
end up being surprised to find out that it works that way, because they've
never run into a situation where it actually mattered that it wasn't private
to the class/struct.

Of course, ultimately, different programmers have different preferences, and
none of us are going to be happy about everything in any language. So, of
course, there are going to be some folks who are unhappy with how D defines
private, but it's not a feature that has actually been causing us problems,
and it really doesn't make sense at this point to change how it works.

- Jonathan M Davis





Re: DLF September 2023 Planning Update

2023-11-15 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, November 15, 2023 3:26:27 AM MST Sergey via Digitalmars-d-announce wrote:
> On Wednesday, 15 November 2023 at 09:27:53 UTC, Jonathan M Davis wrote:
> > On Tuesday, November 14, 2023 12:37:29 PM MST Sergey via Digitalmars-d-announce wrote:
> >> +1 to Steven’s approach
> >>
> >> Idk why DLF don’t like KISS approach :(
> >
> > Their focus is on allowing existing dub packages to continue to
> > compile without any effort whatsoever on the part of anyone
> > using them, because the breakage of dub packages over time as
> > the language changes has become a serious issue (that's the
> > main reason that they started looking at doing editions in the
> > first place).
>
> Maybe I didn't understand the approach.
> like let's assume several editions already in place (and we are
> in 2030) and the project has several dependencies:
> - main code (works with latest version. Latest edition)
> - depmain.1 (work only with edition 2022)
> - depmain.2 (work only with edition 2023)
>
>   |- dep2.1 (work with editions 2023-2027)
>   |- dep2.2 (work only with edition 2029)
>
> So instead of force users to explicitly specify editions in
> dub.jsons - DLF propose to use just "dub build" (without any
> changes in code of dependency packages and main project) and
> somehow each edition should be identified for each package?

As I understand it, the language that we have now (or whatever it is when
editions are actually added) will be the default edition, and any code
wanting to build with a newer edition will have to specify it in the source
code. So, existing code will continue to compile how it does now
theoretically forever, and new code that gets written and wants to use a
newer edition will then specify the edition that it wants to use in each
module, so it will continue to compile with that edition forever. Code can
of course be updated to use a newer edition, but the edition that it uses
will never change unless the code is changed, so it will theoretically all
continue to compile the same way that it always has. It does come with the
downside of having to slap an attribute on your code to use the latest
version of the language, but it means that code will continue to compile the
same way that it always has with no effort regardless of what is done with
future editions.

In contrast, if the default edition is the latest, then code will
potentially break as new editions are released, forcing code to be changed
over time in order to continue to compile (even if it's just adding an
edition attribute to it to force the old behavior), and as we've seen with
deprecations, plenty of folks don't want to bother updating their code until
they have to (and that's assuming that the code is even still maintained,
which can be a real problem with dependencies). The problem could be
mitigated by doing stuff with dub (be it by specifying the edition that you
want to build a dependency with or by having dub figure it out based on when
the code was last updated), but we would have to put in extra work of some
kind to then make old code compile again instead of having it just compile
forever the same way that it always has.

Both approaches have pros and cons. The approach that they currently seem to
want to go with is aimed at keeping code compiling with no effort, which
should keep dub packages working long term, whereas right now, we have
issues with them breaking over time depending on how well they're
maintained. And issues like that are the primary motivators behind editions
in the first place (e.g. some of the companies using D depend on packages -
potentially many packages - from code.dlang.org, and it's definitely
becoming a problem when some of them aren't kept up-to-date with the latest
language changes), so it's not terribly surprising that that's what Walter
and Atila would favor.

Of course, that does make writing new code more annoying, which is part of
why there are objections to it. It also makes it much more likely that a lot
of code will just be written for the old version of the language instead of
the latest, which could cause issues. So, it's hard to say which approach is
better.

And of course, regardless of how we deal with specifying editions and which
the default is, we still have the question of whether they're actually going
to manage to make it sane to mix editions (which you'll inevitably do when a
dependency uses a different edition), since features like type introspection
(which D code typically uses quite heavily) are likely to make it pretty
hard to actually implement. So, we'll have to see what they actually manage
to come up with.

- Jonathan M Davis






Re: DLF September 2023 Planning Update

2023-11-15 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, November 14, 2023 12:37:29 PM MST Sergey via Digitalmars-d-announce wrote:
> +1 to Steven’s approach
>
> Idk why DLF don’t like KISS approach :(

Their focus is on allowing existing dub packages to continue to compile
without any effort whatsoever on the part of anyone using them, because the
breakage of dub packages over time as the language changes has become a
serious issue (that's the main reason that they started looking at doing
editions in the first place). Whether that's the right approach is certainly
debatable (and personally, I'd rather see something in dub take care of it
rather than require that new code slap editions stuff everywhere), but there
is a good reason for the approach that they're currently looking at taking.

- Jonathan M Davis






Re: DConf '23 Talk Videos

2023-09-20 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 20, 2023 6:43:49 AM MDT Mike Parker via Digitalmars-d-announce wrote:
> On Wednesday, 20 September 2023 at 08:41:01 UTC, Stefan Koch wrote:
> > My feeling is this could be a faulty power supply.
> > You should try a new PSU first.
> >
> > How old is the hardware?
>
> It's a two-year-old box. Yes, it could be the PSU, but in my
> experience when they go bad it manifests in more than one way.
> I've had a couple of them fail over the years.
>
> At any rate, the guy I took it to will zero in on it. I'll have a
> definitive answer tomorrow. If I'm lucky, a full internal
> cleaning of the graphics card and some new thermal paste will
> solve it. I'm usually not that lucky, though :-)

Clearly, your computer is just sick of hearing about dconf and decided to go
on strike. ;)

- Jonathan M Davis





Re: DIP 1028 "Make @safe the Default" is dead

2020-05-29 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, May 28, 2020 10:53:07 PM MDT Walter Bright via Digitalmars-d-announce wrote:
> The subject says it all.
>
> If you care about memory safety, I recommend adding `safe:` as the first
> line in all your project modules, and annotate individual functions
> otherwise as necessary. For modules with C declarations, do as you think
> best.
>
> For everyone else, carry on as before.

Thank you.

- Jonathan M Davis





Re: DIP 1028 "Make @safe the Default" is dead

2020-05-29 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, May 29, 2020 6:48:20 AM MDT Meta via Digitalmars-d-announce wrote:
> On Friday, 29 May 2020 at 12:22:07 UTC, Steven Schveighoffer wrote:
> > On 5/29/20 12:53 AM, Walter Bright wrote:
> >> The subject says it all.
> >>
> >> If you care about memory safety, I recommend adding `safe:`
> >> as the first line in all your project modules, and annotate
> >> individual functions otherwise as necessary. For modules with
> >> C declarations, do as you think best.
> >>
> >> For everyone else, carry on as before.
> >
> > Thank you Walter.
> >
> > I'm sure this was not easy to decide, and is frustrating. It's
> > unfortunate that the thrust of DIP1028 could not be saved and
> > we had to throw out the whole thing for the one bad piece.
>
> It's not unfortunate - it's unnecessary. @safe by default is
> still a laudable and (seemingly) attainable goal. Why throw out
> the entire DIP instead of removing or altering the controversial
> aspect?

IIRC, based on how the DIP process works, if a DIP gets rejected, it
basically has to go through the whole process again. Walter could certainly
make an executive decision to skip that process and just implement an
altered version of the DIP, but as much flak as he's gotten over his DIPs,
he's very much been trying to stick to the process rather than simply
implementing his ideas.

Now, whether in the future, we'll get a DIP proposing @safe as the default
for all code that the compiler can check while leaving it @system for the
code that it can't, I don't know. The way that Walter stated that DIP 1028
was dead kind of implies that he's given up on it entirely, but we'll have
to wait and see. Based on what he's said, it seems like he may be convinced
that @safe by default will result in @trusted being used inappropriately way
too much if extern(C) declarations aren't @safe by default (in which case,
making @safe the default would actually make things worse), and he clearly
thought that treating declarations differently from definitions would mean
adding an exception to the rules and that such an exception would be very
negative.

- Jonathan M Davis





Re: Rationale for accepting DIP 1028 as is

2020-05-28 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, May 28, 2020 2:50:44 AM MDT Daniel Kozak via Digitalmars-d-announce wrote:
> On Thu, May 28, 2020 at 4:56 AM Jonathan M Davis via Digitalmars-d-announce wrote:
> > As far as I can tell, Walter understands the issues but fundamentally
> > disagrees with pretty much everyone else on the issue.
>
> I do not think so, the issue is, that there could be more people who
> agree with Walter (like me),
> but because we agree we do not participate.

There may be some silent people who agree, but in all the discussions on
this DIP, almost no one has agreed with Walter on this. It has not been a
back and forth discussion with some on Walter's side and some against. It's
been pretty much everyone against Walter. He did unfortunately manage to
convince Atila, so the DIP has been accepted, but based on the discussions,
I think that you may be the only person I've seen say anything positive
about the DIP treating extern(C) functions as @safe. The fact that @safe
becomes the default has garned some debate with some in favor and some
against (with most seeming to be in favor), but the idea of making extern(C)
declarations @safe by default has been almost universally considered a bad
idea by anyone who has responded on the topic. Most DIPs do not get anywhere
close to this level of negative feedback.

> > But since Walter managed to convince Atila, the DIP has been accepted.
>
> So everything is OK right?

Everything is okay? Because a bad DIP got accepted? No, most definitely not.
Quite the opposite. With the DIP in its current state, @safe becomes a lie.
The compiler no longer guarantees that @safe code is memory safe so long as
it doesn't call any @trusted code where the programmer incorrectly marked it
as @trusted. Instead, the compiler blindly treats non-extern(D) declarations
as @safe and invisibly introduces memory safety bugs into @safe code.
Nothing about that is "OK." From the way things currently look, we're going
to have to deal with that hole in @safe in D code in the future, because the
DIP has been accepted, but it adds yet another dark corner to the language
of the sort that folks here tend to make fun of C++ for. Going forward,
we're going to have to be explaining to people why @safe code doesn't
actually guarantee that code is memory safe (in spite of the fact that
that's what it claims to do) and why any and all non-extern(D) declarations
have to be treated with extreme caution to avoid invisibly introducing
memory safety bugs into your code.
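
A minimal sketch of the hole being described, as the accepted DIP would
treat it:

```d
// An unannotated C declaration; nothing about it was machine-checked.
extern(C) void free(void*);

@safe void f(int* p)
{
    // Under DIP 1028 this compiles without complaint: the declaration is
    // assumed @safe, yet calling free on a pointer that didn't come from
    // malloc (or calling it twice) corrupts memory.
    free(p);
}
```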

Walter is very intelligent and has made many great decisions with D, but
IMHO, this is definitely not one of them.

- Jonathan M Davis





Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, May 27, 2020 3:30:32 AM MDT Andrei Alexandrescu via Digitalmars-d-announce wrote:
> On 5/27/20 1:49 AM, Walter Bright wrote:
> > On 5/26/2020 9:31 AM, Bruce Carneal wrote:
> >> Currently a machine checked @safe function calling an unannotated
> >> extern C routine will error out during compilation. This is great as
> >> the C routine was not machine checked, and generally can not be
> >> checked.  Post 1028, IIUC, the compilation will go through without
> >> complaint.  This seems quite clear.  What am I missing?
> >
> > Nothing at all.
>
> That means safe by default is effectively loosening D's notion of safety.
>
> This DIP must go.

Which is exactly what most of us have been arguing for weeks (months?). It
either needs to go or be amended so that non-extern(D) declarations
continue to be treated as @system instead of automatically becoming @safe
with the DIP.

The result of all of that arguing is that Walter accepted the DIP and then
started this thread as his more detailed reply when there were a ton of
complaints about the DIP's acceptance - and of course, you've already read
and replied to his reasoning.

As far as I can tell, Walter understands the issues but fundamentally
disagrees with pretty much everyone else on the issue. He seems to think
that weakening @safe is worth doing, because it will ultimately mean that
more code will be treated as @safe and mechanically checked by the compiler,
whereas most everyone else thinks that weakening @safe is unacceptable. But
since Walter managed to convince Atila, the DIP has been accepted.

- Jonathan M Davis





Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, May 26, 2020 8:58:16 PM MDT Andrei Alexandrescu via Digitalmars-d-announce wrote:
> On 5/26/20 12:31 PM, Bruce Carneal wrote:
> > Currently a machine checked @safe function calling an unannotated extern
> > C routine will error out during compilation. This is great as the C
> > routine was not machine checked, and generally can not be checked.  Post
> > 1028, IIUC, the compilation will go through without complaint.  This
> > seems quite clear.  What am I missing?
>
> If that's the case, it's the death of DIP 1028.

Walter has acknowledged the problem and seems to think that because it's the
programmer's responsibility to deal with extern(C) functions correctly
(since it's not possible for the compiler to do it), it's up to the
programmer to go and fix any existing code that should be marked @system and
isn't and that having the compiler incorrectly mark extern(C) declarations
as @safe isn't a big problem, because programmers need to be spending the
time to check them anyway. He's already created some PRs to try to fix some
issues with extern(C) declarations in druntime and explicitly marking them as
@system but doesn't seem to think that it's ultimately a big deal.

- Jonathan M Davis





Re: DIP1028 - Rationale for accepting as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, May 27, 2020 8:13:52 PM MDT Bruce Carneal via Digitalmars-d-announce wrote:
> On Thursday, 28 May 2020 at 01:14:43 UTC, Jonathan M Davis wrote:
> > On Friday, May 22, 2020 12:09:16 PM MDT rikki cattermole via Digitalmars-d-announce wrote:
> >> [...]
> >
> > Except that the linker matters a great deal in this discussion
> > with regards to extern(D) functions, because @safe and @trusted
> > are part of the name mangling for extern(D) functions. That
> > means that if an extern(D) function declaration's attributes do
> > not match its definition, then you'll get a linker error. So,
> > treating non-extern(D) function declarations as @safe by
> > default isn't necessarily a problem (though it would certainly
> > work to just treat all function declarations as @system by
> > default rather than treating extern(D) function declarations
> > differently). The cases where non-extern(D) function
> > declarations weren't actually @safe would be caught during the
> > linking process. Sure, it would be nice if it were caught
> > sooner, but you don't end up with them being invisibly treated
> > @safe when they're not like we're going to get with DIP 1028.
> >
> > [...]
>
> I remember reading a suggestion that additional linker symbols be
> emitted to carry the attribute and possibly type information
> while leaving the ABI untouched.  Was this found to be
> impractical?

Steven suggested something along those lines. I don't know how practical it
would or wouldn't be, but I don't think that Walter even responded to the
idea.

Regardless, even if it worked perfectly, all such a solution would deal with
would be the cases where you have a non-extern(D) function which is actually
written in D and had its function body marked as @safe (and thus was
verified to be @safe). Such functions definitely exist, but the main problem
is really non-extern(D) functions which are actually written in C or C++ or
whatnot. They won't have been checked for @safety.

Right now, such declarations are treated by the compiler as @system by
default, because it cannot check them for @safety, whereas with DIP 1028, it
will then treat them as @safe in spite of the fact that it can't check them
for @safety simply because Walter thinks that it would be too exceptional to
treat them differently from the general rule of @safe by default and that if
they're left as @system, too many people will just slap @trusted on their
code as a whole to get the compiler to shut up.

But regardless of whether DIP 1028 is the correct decision, the problem
remains that your typical extern(C) function cannot be checked for @safety
by the compiler, because it was compiled by a C compiler. The question then
is just how the D compiler should treat them, and that's the main point of
contention. There may be solutions like Steven suggested which would deal
with edge cases where the implementation is in D but the linkage isn't, but
ultimately, they're just edge cases.
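
A sketch of the mangling point from the quoted text (the mangled form shown
is approximate):

```d
// impl.d
@safe void f() {}     // extern(D): @safe is encoded in the mangled name
                      // (roughly _D4impl1fFNfZv, where Nf marks @safe)

// decl.d, compiled separately
@system void f();     // different mangled name => undefined symbol at link time

extern(C) void g();   // mangled as plain "g": attributes leave no trace,
                      // so a mismatch can never be caught by the linker
```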

- Jonathan M Davis





Re: DIP1028 - Rationale for accepting as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, May 27, 2020 6:24:25 PM MDT Andrei Alexandrescu via Digitalmars-
d-announce wrote:
> On 5/27/20 9:42 AM, Andrej Mitrovic wrote:
> > On Wednesday, 27 May 2020 at 09:50:50 UTC, Walter Bright wrote:
> >> Un-annotated C declarations should be a red flag to any competent QA
> >> team. Recognizing a false @trusted is a whole lot harder.
> >
> > Is the actual problem those `@trusted:` declarations at the top of C
> > headers?
> >
> > There could be a simple solution to that:
> >
> > Ban `@trusted:` and `@trusted { }` which apply to multiple symbols. Only
>
> > allow `@trusted` to apply to a single symbol. For example:
>
> Oh wow what an interesting idea. Thanks.

I've argued for years that mass-applying any attribute is bad practice. It
makes it way too easy to accidentally apply an attribute to a function, and
experience has shown that it makes it _far_ easier to not realize that a
function already has a particular attribute (it's definitely happened in
Phobos PRs that an attribute has been mass-applied and then someone comes
along later and applies it to a specific function, because they didn't think
that that function had that attribute). Unfortunately, plenty of people seem
to love to mass apply attributes rather than marking each function
individually (presumably, because they hate having to use attributes all
over the place). And regardless of whether DIP 1028 is a good idea as-is,
the sad reality of the matter is that it's not uncommon even in druntime for
someone to slap @trusted: at the top of a module. It implies that the
declarations in question were not actually checked, and it's incredibly
error-prone when new declarations are added.
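A sketch of why mass-applying @trusted is so error-prone (the function names
are made up):

```d
// Error-prone: everything below this line is @trusted, including
// declarations added months later by someone who never scrolls up
// far enough to see the blanket attribute.
@trusted:

extern(C) void audited_function();     // actually reviewed
extern(C) void added_later_function(); // silently @trusted too

// Safer style (in a module without the blanket attribute), where
// the audit is visible at the point of declaration:
// extern(C) void reviewed_function() @trusted;
```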

Personally, I normally only use : with access-level modifiers, and while I
like doing that (especially since I think that public and private functions
should normally be segregated anyway), I'm increasingly coming closer to the
conclusion that it's actually a bad idea in practice, because it's not
uncommon for people to not know which access-level modifier applies -
especially when looking at diffs in PRs.

I would _love_ to see it become illegal to mass-apply @trusted (or even
attributes in general), but I have no clue how easy it would be to get such
a DIP accepted or how much screaming there would be over it if it were
actually accepted.

- Jonathan M Davis





Re: DIP1028 - Rationale for accepting as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, May 22, 2020 9:29:03 AM MDT Timon Gehr via Digitalmars-d-announce 
wrote:
> On 22.05.20 16:49, bachmeier wrote:
> > I don't see that marking an extern(C) function @trusted buys you
> > anything, at least not until you can provide a compiler guarantee for
> > arbitrary C code.
>
> It buys you the ability to call that function from @safe code. Clearly
> you can't mark it @safe because the compiler has not checked it.

Yes, and @trusted in general buys you the ability to segregate and find code
that is doing stuff that the compiler was unable to prove was @safe. Then
when there's a bug related to memory safety, you know where to look.

C functions cannot be @safe, because they haven't been vetted by the
compiler. So, either what they're doing is actually memory safe (and it's
fine to mark them with @trusted), or what they're doing is memory safe only
if used correctly, and the calling code needs to be marked @trusted when
using it correctly. Either way, the code with potential memory safety issues
is segregated, and so when you inevitably do run into a memory safety bug,
you know that it's either the @trusted code or the @system code that it
calls which needs to be fixed.
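A minimal sketch of that segregation, using D's core.stdc bindings:

```d
import core.stdc.stdlib : malloc;

// The one place to audit when a memory-safety bug shows up:
// a @trusted wrapper around the @system C primitive. The wrapper
// is responsible for upholding the invariants the compiler can't
// check (here, that the slice length matches the allocation).
int[] makeBuffer(size_t n) @trusted
{
    auto p = cast(int*) malloc(n * int.sizeof);
    return p is null ? null : p[0 .. n];
}

@safe void example()
{
    auto buf = makeBuffer(4); // callable from @safe code
    if (buf.length)
        buf[0] = 42;
}
```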

Of course, with this DIP, @safe no longer truly means that the code was
vetted for @safety by the compiler but rather that all of the code that the
compiler can see has been vetted for @safety, meaning that when there is an
@safety bug, you must look not only for @trusted code but also for
non-extern(D) declarations which have been implicitly treated as @trusted by
the compiler. @safe still has value (and may even provide more value in that
it will be used more often), but it provides much weaker guarantees in the
process.

- Jonathan M Davis





Re: DIP1028 - Rationale for accepting as is

2020-05-27 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, May 22, 2020 12:09:16 PM MDT rikki cattermole via Digitalmars-d-
announce wrote:
> It kept being swapped about in the discussion thread, so I have been a
> little on edge over people using non-extern(D). Because linkage doesn't
> mean anything to anything but to cpu and linker in this discussion.

Except that the linker matters a great deal in this discussion with regards
to extern(D) functions, because @safe and @trusted are part of the name
mangling for extern(D) functions. That means that if an extern(D) function
declaration's attributes do not match its definition, then you'll get a
linker error. So, treating extern(D) function declarations as @safe by
default isn't necessarily a problem (though it would certainly work to just
treat all function declarations as @system by default rather than treating
extern(D) function declarations differently). The cases where extern(D)
function declarations weren't actually @safe would be caught during the
linking process. Sure, it would be nice if they were caught sooner, but you
don't end up with functions being invisibly treated as @safe when they're
not, like we're going to get with DIP 1028.

However, of course, with non-extern(D) declarations, even if the function
definition is actually written in D, the fact that the function body was
checked is not transmitted via the linking process, because it doesn't end
up in the name mangling. So, the linkage used has a huge impact on whether
you can rely on the @safe attribute on the function declaration actually
meaning anything about whether the body was verified by the compiler.
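A two-module sketch of the difference (shown as comments, since it spans
separate files):

```d
// --- impl.d --------------------------------------------------
// void f() @safe {}            // mangled name encodes @safe
// extern(C) void g() @safe {}  // mangled as plain "g"

// --- decl.d --------------------------------------------------
// void f() @system;            // mangled name differs from the
//                              // definition's => linker error,
//                              // so the mismatch cannot hide
// extern(C) void g() @system;  // mangles to "g" either way =>
//                              // links fine, mismatch undetected
```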

- Jonathan M Davis





Re: Release D 2.089.0

2019-11-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, November 7, 2019 3:25:46 AM MST Ron Tarrant via Digitalmars-d-
announce wrote:
> On Wednesday, 6 November 2019 at 14:09:35 UTC, Mike Parker wrote:
> > Are you putting libs in the compiler's directory tree? Or are
> > you editing sc.ini/dmd.conf? You really shouldn't be doing the
> > former.
>
> I follow the steps outlined here:
> https://github.com/gtkd-developers/GtkD/wiki/Installing-on-Windows
>
> And one of those steps (Step #5) says to copy the gtkd libs to
> the compiler's directory tree. Is that what you mean by "You
> really shouldn't be doing the
> former."?
>
> And I also edit sc.ini.

You should pretty much never put anything in the compiler's directory tree,
and there's no need to. If you're editing the sc.ini, you might as well just
put your external libraries somewhere else and have sc.ini point to them
there. Putting code in a directory that the installer manages provides no
benefit while causing problems like the installer deleting it when updating.
In most cases though, the recommended thing to do is to just use dub rather
than manually mucking around with sc.ini or dmd.conf.

This reminds me of someone complaining that they couldn't just unzip one dmd
install on top of another and have it work (their code no longer compiled
after they'd just unzipped a release of dmd/phobos which had a split
std.datetime on top of one that didn't).

- Jonathan M Davis





Re: DIP 1021--Argument Ownership and Function Calls--Formal Assessment

2019-10-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, October 21, 2019 6:59:21 AM MDT Exil via Digitalmars-d-announce 
wrote:
> >  This proposal is one step toward a larger goal outlined in the
> >
> > blog post ['Ownership and Borrowing in
> > D'](https://dlang.org/blog/2019/07/15/ownership-and-borrowing-in-d/).
>
> That's the only line that was added, no other changes were made
> to the core DIP from the first revision to the last. Big ducking
> surprise this got accepted anyways.

Did you expect anything else? Given that it was Walter's DIP, and he's
making the decisions, the only way that the DIP was going to change was if
he were convinced that the DIP was flawed. He's been convinced of that
before (e.g. the DIP that was for adding a bottom type was written by
Walter, but it was rejected, because the community convinced him that it was
a bad idea). He just wasn't convinced that this DIP was a bad idea.

Personally, I didn't see any problem with this DIP, since it just tightened
down @safe a bit. Whether the next steps in the "larger goal" are good ones
is another matter entirely, and those will be put in a DIP (or multiple
DIPs) and argued on their own at some point. And if they're bad ideas, then
hopefully he will be convinced of that when those DIPs are discussed.
Ultimately though, no matter who comes up with the DIP, Walter has to be
convinced that it's a good idea. It's just that if it's his DIP, he's
already convinced that it's a good idea, so someone has to then convince him
otherwise for it to not be accepted.

Fortunately, while Walter certainly doesn't have a perfect track record, he
has a pretty darn good one, or D wouldn't be what it is today.

- Jonathan M Davis





Re: Release Candidate [was: Re: Beta 2.087.0]

2019-07-04 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, July 3, 2019 1:30:37 AM MDT Andre Pany via Digitalmars-d-
announce wrote:
> Thanks, you helped me to find the issue. The productive coding
> looks like this:
>
> import std.algorithm : all;
>
> void main()
> {
>  import std.ascii : isAlpha, isDigit;
>  assert("abc123".all!(c => (c.isAlpha && c.isUpper == false)
>
> || c.isDigit));
>
> }
>
> With previous dmd version, although import isUpper was not
> defined, somehow it worked.
> This seems to be fixed now. This issue was the missing isUpper
> import statement.
> The error message is a little bit odd:
> Error: static assert:  "_lambda isn't a unary predicate function
> for range.front"

There have been bugs in the past where symbols were usable without the
actual import being there (because they leaked from other imported modules,
IIRC). I don't know what the current state of that is, but I recall there
being a deprecation message about that behavior going away. So, if that
behavior finally went away, then code could have compiled with the previous
release but not the new one.

- Jonathan M Davis





Re: Release D 2.087.0

2019-07-04 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, July 4, 2019 6:48:15 AM MDT Robert M. Münch via Digitalmars-d-
announce wrote:
> On 2019-07-04 10:11:18 +, Mike Franklin said:
> > I don't know what digger is doing, but from the error messages, it
> > appears that the new files in `rt/array` can't be found.  I believe the
> > build is trying to use a new compiler with an older or existing
> > runtime.  You'll need both the latest compiler and the latest runtime
> > together for them to work.
> >
> > If you can identify a bug in the makefiles, or some other such problem
> > preventing the build, let me know and I'll try to fix it right away.
>
> So, the problem is, that digger somehow misses to copy over the new
> source to the install directory. It does for some parts (phobos, but
> I'm not sure if for every file necessary) but not for druntime files.
>
> I just manually copied the files now.

Yeah. I ran into the same problem with my own build tool. There wasn't
previously an rt folder in the imports. It was all hidden in the
implementation, and my build tool didn't copy it over, resulting in
confusing errors at first when druntime was recently changed to have an rt
folder in the imports. I think that for druntime, it currently works to just
copy over everything in the import folder, whereas with Phobos, you have to
copy over specific directories (std and etc IIRC). So, for Phobos, you can't
just grab everything from a single folder, and it may have been the case
with druntime at one point that you couldn't either (I'm not sure). So, it
doesn't really surprise me that digger broke. Any time that a tool is having
to duplicate any logic from the build system (even if it's just which files
to grab to install rather than for the build itself), it risks breaking any
time that the build system is altered.

- Jonathan M Davis






Re: DIP 1013--The Deprecation Process--Formal Assessment

2019-06-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, June 12, 2019 2:47:23 AM MDT Nicholas Wilson via Digitalmars-
d-announce wrote:
> On Monday, 10 June 2019 at 13:49:27 UTC, Mike Parker wrote:
> > DIP 1013, "The Deprecation Process", has been accepted.
> > ...
> > https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1013.md
>
> So what is the "version" in @@@DEPRECATED_[version]@@@ supposed
> to be? That still seems to be ambiguous.

How is it ambiguous? It's stated right in the same sentence in which
@@@DEPRECATED_[version]@@@ is mentioned.

- Jonathan M Davis





Re: Phobos is now compiled with -preview=dip1000

2019-05-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, May 17, 2019 11:25:40 AM MDT Meta via Digitalmars-d-announce 
wrote:
> I don't want to *restrict* the lifetime of a heap allocation. I
> want the compiler to recognize that the lifetime of my original
> data is the same as the processed output, and thus allow my code
> to compile.

It is my understanding that DIP 1000 really doesn't track lifetimes at all.
It just ensures that no references to the data escape. So, you can't do
something like take a scope variable and put any references to it or what it
refers to in a container. Honestly, from what I've seen, what you can
ultimately do with scope is pretty limited. It definitely helps in simple
cases, but it quickly gets to the point that it's unable to be used in more
complex cases - at least not without casting and needing to use @trusted.
So, it's an improvement for some kinds of code, but I suspect that in
general, it's just going to be more annoying than it's worth. Time will tell
though.

- Jonathan M Davis





Re: Phobos is now compiled with -preview=dip1000

2019-05-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, May 16, 2019 11:22:30 PM MDT Mike Franklin via Digitalmars-d-
announce wrote:
> I consider it a bug that the compiler doesn't emit an error when
> using attributes on types for which they are not intended.

As in you think that something like

auto foo(scope int i) {...}

should be illegal, because scope makes no sense on an int? That's nice in
theory, but templates make such an approach a serious problem. It needs to
work to do something like

auto foo(T)(scope T t) {...}

without having to have separate overloads for types where scope makes sense
and types where it doesn't. Similarly, you don't want to have to use static
ifs whenever you declare a variable that you want to be scope in the cases
where the template argument is a type where scope does work.

In general, D ignores attributes when they don't apply rather than making it
an error, because making it an error causes serious problems for generic
code. This does unfortunately mean that some people are bound to sometimes
end up using an attribute when it doesn't apply, thinking that it does, and
that's unfortunate, but overall, it just works better for the compiler not
to complain about such cases.
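A sketch of the generic case being described (hypothetical template):

```d
// One template works for both cases: scope is meaningful when T
// contains indirections (e.g. a slice) and silently ignored when
// it doesn't (e.g. int). If inapplicable attributes were errors,
// this would need separate overloads or static ifs.
size_t sizeOf(T)(scope T value)
{
    return T.sizeof;
}

@safe void example()
{
    int i = 42;
    int[] arr = [1, 2, 3];
    cast(void) sizeOf(i);   // scope has no effect on a plain int
    cast(void) sizeOf(arr); // scope applies to the slice
}
```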

- Jonathan M Davis





Re: bool (was DConf 2019 AGM Livestream)

2019-05-15 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, May 14, 2019 7:15:43 PM MDT Andrei Alexandrescu via Digitalmars-
d-announce wrote:
> On 5/14/19 2:00 AM, Mike Franklin wrote:
> > On Wednesday, 15 May 2019 at 00:23:44 UTC, Andrei Alexandrescu wrote:
> >> There are many clowny things in D, of which bool is at best somewhere
> >> beyond the radar. I suggest investing time * expertise in the larger
> >> ones.
> >
> > Once again, I disagree with what you think is important.  `bool` is a
> > fundamental type on which many things in D depend.
>
> I'd be hard pressed to find my style cramped by D's bool.

There are well-known issues where the current behavior causes bugs, and
personally, I'd prefer that the DIP have been accepted, but I have to agree
that it isn't a big problem. It's basically just one of those small warts in
the language that it would be nice to have fixed, and a number of the people
who come to D looking for a better language seem to want it to be absolutely
perfect and want every minor issue fixed. Unfortunately, while that would be
nice, it really isn't practical. Every language has warts, and if something
like this were the worst issue that D had, then we'd be _very_ well off.

> > If it doesn't work
> > right, neither will the features that depend on it.
> > But, that's your
> > decision, and there's little to nothing we can do about it, so I guess
> > we just accept the fact that D is clowny and deal with it; it's what so
> > many of us, so often do.
>
> (At any rate, going forward it's not me who needs convincing.) In my
> humble opinion, any language would have minor quirks, and a landmark of
> good engineering is attacking the right problems. That we even discuss
> just how bad bool is while we have no done deals for safety, reference
> counting, shared, package distribution/versioning, pay-as-you-go
> runtime, collections, ..., is a fascinating puzzle.

I think that in this case, it's a combination of it being a form of
bikeshedding (since the DIP is on an issue that's easy to understand and
have an opinion on) and that a DIP was written up for it and rejected. So,
some of the folks who disagree with the decision want to debate it and
somehow get a different decision.

In general though, I think that the problem with tackling harder problems
like ref-counting or shared or whatever is simply that it takes a lot more
time and effort, and most people either don't have that kind of time or
don't want to spend it on a hard problem (assuming that they even have
enough expertise to do so in the first place). It's easy to point out and
debate a small problem like how bool is treated like an integral type when
many of us don't think that it should ever be treated as an integral
type, but it's much harder and more time-consuming to tackle the larger
problems. So, unfortunately, the harder problems are too frequently left by
the wayside. And as I'm sure you can attest to, even those of us who might
consider tackling the harder problems have enough on our plates already that
even if an issue is on our TODO list, it can easily be drowned out by other
issues.

Regardless, as a group, we do need to find ways to better tackle some of our
larger, more pressing problems. The small stuff does matter, and it's easier
to tackle, but if we're consistently trying to solve the small problems
without tackling the larger ones, then we have a serious problem.

- Jonathan M Davis





Re: Static Webpages of Forum Threads

2019-05-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, May 13, 2019 1:45:27 AM MDT Walter Bright via Digitalmars-d-
announce wrote:
> I run it manually whenever I think of it, which is erratically, because
> I've tried repeatedly to set up a cron job to do it, but somehow it never
> runs. Bah.

Clearly, you're not yelling at it enough. ;)

- Jonathan M Davis





Re: bool (was DConf 2019 AGM Livestream)

2019-05-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Sunday, May 12, 2019 2:58:58 PM MDT Nicholas Wilson via Digitalmars-d-
announce wrote:
> On Sunday, 12 May 2019 at 14:50:33 UTC, Andrei Alexandrescu wrote:
> > On 5/12/19 1:34 PM, Nicholas Wilson wrote:
> >> However in this case the community consensus is that the chain
> >> of reasoning you have used to arrive at your decision is wrong.
> >
> > It's a simple enough matter to be understood, and reasonable to
> > assume Walter is not missing any important facts or details.
> > Poking holes in his explanations is, I confess, attractive, but
> > ultimately are about debate skills rather than technical. I do
> > agree that the way explanations on DIP decisions go could and
> > should be improved a lot.
>
> Then let me rephrase my complaints as a question (to you, Walter
> and the community):
>
> At what level of egregiousness of the degree to which the
> unanimous community consensus believes both your decision and
> chain of reasoning are fundamentally wrong, do we, the community,
> decide to reject your position completely and implement the
> community consensus?

Anyone is free to fork the language at any time, but unless you're going to
try to get Walter to step down, he's in charge of D, and he has the final
say. It's obviously problematic if he makes a bad decision and no one can
convince him otherwise, but if you can't convince him of something, and
you're so convinced that he's wrong that you want to effectively take the
decision away from him, then that basically comes down to forking the
language or getting him to step down. The language and its implementation
are certainly a cooperative effort, but ultimately, it's not a democracy.
Walter is the one in charge, and it's his decision. He's chosen to share his
position on some level with Andrei (and now Atila), but it's his language.

Personally, I think that bool should never be treated as an integral value,
rather than always or partially treated as one as it is now. The current
behavior is arguably a bad legacy from C/C++,
and it definitely causes bugs. So, I'm quite disappointed with the rejection
of this DIP. But I honestly don't think that this is a big enough issue to
effectively start discussing how or when we take the decision making power
away from Walter. Unless someone can come up with a way to convince Walter
to view bools differently (which I very much doubt is going to happen), I
think that it's quite clear that we're just going to have to learn to
continue to live with the status quo on this issue.
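For reference, the kind of bug-prone behavior at issue is overload
resolution with the literals 0 and 1, which the rejected DIP used as its
motivating example:

```d
import std.stdio : writeln;

void f(bool b) { writeln("bool overload"); }
void f(long l) { writeln("long overload"); }

void main()
{
    f(1); // picks the bool overload: 0 and 1 implicitly convert
          // to bool, and bool is the more specialized parameter
    f(2); // picks the long overload
}
```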

- Jonathan M Davis





Re: bool (was DConf 2019 AGM Livestream)

2019-05-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Sunday, May 12, 2019 8:50:33 AM MDT Andrei Alexandrescu via Digitalmars-
d-announce wrote:
> On 5/12/19 1:34 PM, Nicholas Wilson wrote:
> > However in this case the community consensus is that the chain of
> > reasoning you have used to arrive at your decision is wrong.
>
> It's a simple enough matter to be understood, and reasonable to assume
> Walter is not missing any important facts or details. Poking holes in
> his explanations is, I confess, attractive, but ultimately are about
> debate skills rather than technical. I do agree that the way
> explanations on DIP decisions go could and should be improved a lot.

Really, what it comes down to is that Walter has a very different view on
what a bool is and what it means than many in the community. His explanation
makes it clear why he made the decision that he made. Many of us don't agree
with the decision, because we view bool and its purpose very differently,
but unless someone has an argument that would actually convince Walter to
look at bools differently, it's a pretty pointless discussion. It's
frustrating, and many of us do think that D is worse than it could be
because of the decision, but ultimately, Walter is the one in charge, and
that's just how it goes. This just highlights how the language is ultimately
controlled by the one or two people at the top and not by the community at
large. It's not a democracy, which on the whole is a good thing. If all
language decisions were made by majority vote, the language would be an
utter mess. But that does have the downside that sometimes something that
many people want doesn't happen, because those in charge don't agree. Such
is life. Fortunately, in the grand scheme of things, while this issue does
matter, it's still much smaller than almost all of the issues that we have
to worry about and consider having DIPs for.

Personally, I'm not at all happy that this DIP was rejected, but I think
that continued debate on it is a waste of everyone's time.

- Jonathan M Davis





Re: DIP 1018--The Copy Constructor--Formal Review

2019-02-25 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, February 25, 2019 4:09:55 PM MST Olivier FAURE via Digitalmars-d-
announce wrote:
> Yes, this DIP was fast-tracked. Yes, this can feel unfair. And
> yet, it makes sense that it was fast-tracked, because it fits a
> priority of the project owners (C++ interoperability + reference
> counting) and project owners are allowed to have priorities. It's
> not like this DIP was rushed or has major vulnerabilities (the
> "mutable copy constructor" thing is necessary for reference
> counting).

It's worth noting that the copy constructor DIP went through a _lot_ of
discussion and was revised accordingly. So, while Walter and Andrei may have
considered it a priority, it still took a while for it to get to the point
that it was acceptable.

- Jonathan M Davis





Re: Blog post: What D got wrong

2018-12-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, December 18, 2018 1:17:28 AM MST Russel Winder via Digitalmars-
d-announce wrote:
> On Mon, 2018-12-17 at 12:16 -0800, Walter Bright via
> Digitalmars-d-announce
> wrote:
> > […]
> >
> > Going pure, however, is much harder (at least for me) because I'm not
> > used to
> > programming that way. Making a function pure often requires
> > reorganization of
> > how a task is broken up into data structures and functions.
> >
> > […]
>
> I can recommend a short period of working only with Haskell. And then a
> short period working only with Prolog. Experience with Java and Python
> people trying to get them to internalise the more declarative approach to
> software, shows that leaving their programming languages of choice behind
> for a while is important in improving their use of their languages of
> choice.

+1

The only reason that I'm as good at writing functional-style code as I am is
because I used Haskell as my go-to language for a couple of years towards the
end of college. By being forced to program functionally, I got _much_ better
at things like recursion, and it significantly improved aspects of my
programming in languages like C++ or D.

That being said, I think that anyone who programs in such a language
longterm by choice has got to be a masochist. Functional programming is a
fantastic tool to have in your toolbox, but man does it force you to contort
things to make it work in many cases. I've never spent as much time per line
of code with any other language as I did with haskell. It's great for
stretching your brain but terrible for being productive. I'm sure that folks
who always program that way get much better at it, but some problems simply
work better with other programming paradigms, and it's fantastic to use a
language like D that allows you to program in a variety of paradigms rather
than forcing a particular paradigm on you all the time. But without spending
a lot of time in a language that forces a particular paradigm, you're likely
to be much worse at that particular paradigm in a flexible language such as
D.

One side effect of having spent as much time as I did with Haskell is that
I've rarely found the functional nature of D's templates to be much of a
problem. Sometimes, it can be, and the addition of static foreach is
certainly welcome, but I rarely missed it, because I rarely needed it. The
declarative stuff just tends to fit nicely into the functional paradigm in
ways that normal functions often don't. LOL. And it took quite a while
before I realized that templates were really a functional language (in fact,
I think that I only realized it when I read Bartosz's article on the
subject). I just understood how to use them and didn't think about it,
though once I did realize it, I felt stupid for not having realized it.

- Jonathan M Davis






Re: Blog post: What D got wrong

2018-12-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, December 18, 2018 8:00:48 AM MST Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 12/17/18 4:42 AM, Dukc wrote:
> > On Monday, 17 December 2018 at 09:41:01 UTC, Dukc wrote:
> >> On Saturday, 15 December 2018 at 19:53:06 UTC, Atila Neves wrote:
> >>> @safe and pure though...
> >>
> >> Why @safe? Can't you just write "@safe:" on top and switch to
> >> @system/@trusted as needed?
> >
> > Argh, I forgot that you are not supposed to @safe templates away.
>
> You can apply @safe to anything. It's @trusted you have to be careful
> with.
>
> Of course, it probably won't work for many templates.

@safe should only be used on a template if the @safety of the code does not
depend on the template arguments, and it frequently does depend on them.
Mass-applying attributes should rarely be done with templated code. It
already causes enough problems with non-templated code, because it's easy to
not realize that an attribute has been mass-applied, but without a way to
explicitly mark a templated function so that an attribute is inferred in
spite of it being mass-applied, mass-applying attributes with templated code
will usually result in attributes being wrongly applied.
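A sketch of the difference between annotating a template and letting
inference do the work (hypothetical functions):

```d
// Risky: claims @safe for every instantiation, even though copying
// a T may run a @system postblit or copy constructor, in which
// case some instantiations simply fail to compile.
void assignTo(T)(ref T dst, T src) @safe { dst = src; }

// Better: leave it unannotated and let the compiler infer @safe,
// pure, etc. per instantiation from what the body actually does.
void assignToInferred(T)(ref T dst, T src) { dst = src; }

// Then enforce the common case explicitly via a @safe unittest:
@safe unittest
{
    int a;
    assignToInferred(a, 42);
    assert(a == 42);
}
```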

Now, for any attribute other than @trusted, the worst that you're going to
get out of incorrectly applying an attribute to a template is a compilation
error when a particular instantiation doesn't work with that attribute,
whereas for @trusted it's a disaster in the making. Mass-applying @trusted
is almost always a terrible idea. The one exception would maybe be something
like the bindings in druntime, where a number of modules do it, because it's
just a bunch of C prototypes. But even then, there's a high risk of marking
a function as @trusted later when someone adds it and doesn't realize that
@trusted was applied.

- Jonathan M Davis





Re: Blog post: What D got wrong

2018-12-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, December 18, 2018 7:02:43 PM MST H. S. Teoh via Digitalmars-d-
announce wrote:
> On Tue, Dec 18, 2018 at 06:53:02PM -0700, Jonathan M Davis via
> Digitalmars-d-announce wrote: [...]
>
> > I confess that I do tend to think about things from the standpoint of
> > a library designer though, in part because I work on stuff like
> > Phobos, but also because I tend to break up my programs into libraries
> > as much as reasonably possible. In general, the more that's in a
> > reusable, easily testable library the better. And with that approach,
> > a lot less of the code for your programs is actually in the program
> > itself, and the attributes tend to matter that much more.
>
> [...]
>
> My recent programming style has also become very library-like, often
> with standalone library-style pieces of code budding off a messier,
> experimental code in main() (and ultimately, if the project is
> long-lasting, main() itself becomes stripped down to the bare
> essentials, just a bunch of library components put together).  But I've
> not felt a strong urge to deal with attributes in any detailed way;
> mostly I just templatize everything and let the compiler do attribute
> inference on my behalf. For the few cases where explicit attributes
> matter, I still only use the bare minimum I can get away with, and
> mostly just enforce template attributes using the unittest idiom rather
> than bother with writing explicit attributes everywhere in the actual
> code.

That works when you're in control of everything and not making libraries
public, but it tends to be worse when you're making stuff publicly
available, because then there's a much higher risk of accidentally screwing
up someone else's code by breaking an attribute. It's also much worse from a
documentation standpoint, because users of the library can't look at the
documentation to know which attributes are in effect. That's pretty much
unavoidable with heavily templated code, but there are generally going to be
fewer maintenance problems with a publicly available library if attributes
are explicit. And the more heavily used the library is, the more likely it
is that there are going to be problems.

It's like how if you want your library to be stable with regards to CTFE,
you pretty much have to test that CTFE works in your unit tests. It's easy
to forget, and ideally, you wouldn't have to worry about it, but if someone
else uses your library and uses one of its functions with CTFE, and you then
make a change that doesn't work with CTFE and don't realize it, you'll break
their code. Ideally, you wouldn't have to worry about ensuring that stuff
works with CTFE (after all, D specifically doesn't require that stuff be
marked to work with it), but with regards to publicly available libraries,
if it's not tested for, it can become a problem. So, arguably, it's best
practice to test that stuff works with CTFE (at least in publicly available
libraries), but I doubt that all that many libraries actually do it. And
attributes are in the same boat in many respects (especially if attribute
inference is used heavily).
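The kind of CTFE regression test being described can be sketched in a few lines (the function here is hypothetical, purely for illustration): forcing a call through CTFE inside a unittest turns a CTFE-breaking change into a compile error instead of silent breakage for downstream users.

```d
// Hypothetical library function that users might call during CTFE.
int triangular(int n)
{
    int sum = 0;
    foreach (i; 1 .. n + 1)
        sum += i;
    return sum;
}

unittest
{
    // Normal runtime check.
    assert(triangular(4) == 10);

    // Forcing the call through CTFE: if a later change to
    // triangular breaks its CTFE-ability, this line stops
    // compiling rather than breaking someone else's code.
    enum ctfeResult = triangular(4);
    static assert(ctfeResult == 10);
}
```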

But again, if you're not making stuff publicly available, it's usually easy
to just not worry about attributes (or CTFE-ability) except in those cases
where you have to or decide that you need to in order to ensure that a
particular attribute and its benefits are in effect. It's when you make
stuff publicly available that it becomes more of an issue - especially if
the library is heavily used.

- Jonathan M Davis





Re: Blog post: What D got wrong

2018-12-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, December 18, 2018 6:35:34 AM MST Pjotr Prins via Digitalmars-d-
announce wrote:
> On Tuesday, 18 December 2018 at 11:25:17 UTC, Jonathan M Davis
>
> wrote:
> > Of course, even if we _did_ have a solution for reversing
> > attributes, slapping an attribute on the top of the module
> > would still potentially be a maintenance problem, because it's
> > then really easy to miss that an attribute is in effect (it's a
> > problem that we've had on several occasions with druntime and
> > Phobos in the few cases where attributes are mass-applied). So,
> > there is no silver bullet here (though regardless of whether
> > mass-applying attributes is something that should ever be
> > considered good practice, we really should add a way to be able
> > to reverse them).
>
> Thanks Jonathan for your elaborate explanation. I personally have
> no problem with the attributes which - in practice - means I
> don't use them much unless I want to make sure something is nogc,
> for example. For library designers it makes sense to be explicit.
> I guess that is where the trade-off kicks in. Maybe it is just a
> feature. We argue against specifying them because other languages
> are not as explicit. It does add a little noise.

In practice, library developers are forced to worry about it more, because
it affects everyone using their code, whereas within a program, how valuable
it is to worry about them depends on the size of the program and what you
expect to get out of them. Very large programs can definitely benefit
(especially with @safe and pure), because it reduces how much code you have
to worry about when tracking down the problems that those attributes
address, but with small programs, the benefit is far more debatable. And for
simple scripts and the like, they're almost certainly a waste of time -
which is part of why more restrictive attributes are not the default. It's
supposed to be easy to write code that doesn't deal with attributes if you
don't want to, but they're there for those who do care. The problem of
course is that when you do care, they tend to become a bit of a pain.

I confess that I do tend to think about things from the standpoint of a
library designer though, in part because I work on stuff like Phobos, but
also because I tend to break up my programs into libraries as much as
reasonably possible. In general, the more that's in a reusable, easily
testable library the better. And with that approach, a lot less of the code
for your programs is actually in the program itself, and the attributes tend
to matter that much more.

- Jonathan M Davis





Re: Blog post: What D got wrong

2018-12-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, December 18, 2018 3:36:15 AM MST Pjotr Prins via Digitalmars-d-
announce wrote:
> On Thursday, 13 December 2018 at 18:29:39 UTC, Adam D. Ruppe
>
> wrote:
> > On Thursday, 13 December 2018 at 10:29:10 UTC, RazvanN wrote:
> >> Do you honestly think that they will ever take D into account
> >> if @safe and immutable data will be the default?
> >
> > D needs to stop chasing after what you think people think they
> > want and just start being good for us.
> >
> > The majority of my code is written pretty permissively - I like
> > my pointers, my gotos, my GC, my exceptions. But I'm willing to
> > consider the change because I actually don't think it will be
> > that big of a hassle, and will be better overall. I wanna show
> > you something:
> >
> > /// Static convenience functions for common color names
> > nothrow pure @nogc @safe
> > static Color transparent() { return Color(0, 0, 0, 0); }
> >
> >
> > The attribute spam is almost longer than the function itself.
>
> Isn't it the way forward that the compiler deduces these
> attributes and fills them in automatically? All these can be
> inferenced. Only when the caller wants to guarantee, say pure, it
> could add it explicitly. I read somewhere that the compiler
> already does this to some degree. And even the generated docs
> should be able to show it.

In general, functions that have to have their source available have their
attributes inferred. So, templated functions, lambdas, and auto return
functions all have attribute inference at this point. Attribute inference
was introduced originally as being only for templated functions, because you
_have_ to have it for them to really work with attributes (at least
in any situation where whether an attribute is applicable depends on the
template arguments - which is frequently the case), but it's been expanded
over time. However, D's compilation model is such that many functions will
never have attribute inference, because it's frequently not guaranteed that
the compiler has the source code for a function in all cases where it's
called.
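As a rough illustration (the function and names are hypothetical), a templated function's attributes genuinely depend on its template arguments, which is why inference is unavoidable for templates:

```d
// Whether this is @safe, pure, nothrow, or @nogc depends entirely
// on what T's opEquals does, so the compiler has to infer the
// attributes separately for each instantiation rather than having
// them written up front.
bool same(T)(T a, T b)
{
    return a == b;
}

@safe pure nothrow @nogc unittest
{
    // For int, == carries all of these attributes, so this
    // instantiation can be called from fully restricted code.
    assert(same(1, 1));
    assert(!same(1, 2));
}
```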

That being said, there are some serious downsides to attribute inference. It
makes it much harder to know which attributes actually apply to a function,
and it tends to result in folks not bothering to make sure that their
code works with a particular attribute; they just let attribute inference
take care of it all and don't worry about it (in which case, the result is
comparable to not having attribute inference in some respects). Another big
issue is that when the attributes are inferred, it becomes _very_ easy to
accidentally change which attributes a function has when changing its
implementation (similar to how it's very easy to accidentally make it so that
a function no longer works with CTFE). The primary way to combat that is to
use explicit attributes on the unittest blocks which test the function, but
that's easy to forget to do, and in a way, it's just moving the explicit
attributes from the function itself to the unit tests. So, whether it
actually fixes anything is debatable.
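The unittest idiom mentioned above looks roughly like this (the type and function are hypothetical): the attributes live on the unittest block, so an implementation change that loses one of them breaks the build instead of silently weakening the function's effective API.

```d
struct Temperature
{
    double celsius;

    // No explicit attributes: as an auto return function, its
    // attributes are inferred and could silently change if the
    // implementation changes.
    auto fahrenheit() const
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

// Pinning the intended guarantees here: if fahrenheit ever stops
// being @safe pure nothrow @nogc (say, someone adds I/O or an
// allocation), this block fails to compile, flagging the
// accidental attribute change.
@safe pure nothrow @nogc unittest
{
    assert(Temperature(100.0).fahrenheit() == 212.0);
}
```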

In general, the cleanest approach is to be as explicit about attributes as
possible (which means still using attribute inference with templated
functions when the attribute should depend on the template arguments but to
not use it much of anywhere else). However, that then requires that you mark
up your functions everywhere, which can be very tedious, and many folks
don't want to do it. Of course, using more attribute inference reduces that
particular problem (which is why many folks want it), but you then get a
different set of problems due to the fact that attributes are inferred
instead of explicit.

So, there's really no winning. In some ways, minimizing attribute inference
is the best option, and in others, maximizing it would be better. Probably
the best solution to the problem would be to have the defaults better match
up with what your average program needs, but no set of defaults fits every
program, and different coding styles can result in very different opinions
on which set of attributes should be the default. I very much doubt that you
would find much of a consensus on it even just within the core set of
developers working on dmd, druntime, and Phobos.

My guess is that if code breakage were somehow not part of the question that
the majority of D programmers would be in favor of @safe by default, since
the vast majority of code can be @safe (though plenty of the time
programmers don't bother to mark it as @safe), and in theory, only small
portions of a program should typically need to be marked as @system. But I
expect that any other attribute would result in a lot of arguing. For
instance, in some respects, pure would be great as the default, but that
doesn't interact well at all with stuff like I/O, making it so that you have
to write your programs in a certain way for pure to work well for most
functions, and not everyone wants to do that. Some folks 

Re: Blog post: What D got wrong

2018-12-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, December 12, 2018 3:49:51 PM MST H. S. Teoh via Digitalmars-d-
announce wrote:
> If the delegate property thing is the only real use case for @property,
> it seems quite out-of-proportion that an entire @-identifier in the
> language is dedicated just for this purpose. One would've thought D
> ought to be better designed than this...

Originally, the idea was to restrict property syntax to functions marked
with @property, which would mean no more lax parens. If it's a property
function, then it must be called like one, and if it's not, then it must be
called as a function (i.e. with parens), whereas right now, we have this
mess where folks can use parens or not however they feel like. That doesn't
work well with being able to swap between property functions and public
variables, and it makes generic code harder, because in general, you can't
rely on whether something is called with parens or not, meaning that the
symbol in question has to be an actual function (where parens are optional)
instead of being allowed to be a different kind of callable (which requires
parens) or be a variable (which can't have parens). @property would have
fixed all of that by forcing functions to either be called with or without
parens based on what they're used for, allowing generic code to rely on more
than convention ensuring that symbols are called consistently with or
without parens (and thus allow symbols other than functions to be reliably
used in place of functions where appropriate). So, as originally envisioned,
@property was anything but useless.

However, all of that became extremely unpopular once UFCS became a thing,
because most folks didn't like having an empty set of parens when calling a
templated function that had a template argument that used parens, and as
such, they wanted to be able to continue to drop the parens, which goes
against the core idea behind @property. So, the end result is that the
original plans for @property got dropped, and plenty of folks would be very
unhappy if we went in that direction now - but it's still the case that
@property was supposed to solve a very real problem, and that problem
remains unsolved.

As things stand, you have to be _very_ careful when using anything other
than a function in a generic context that normally uses a function, because
there's no guarantee that using something other than a function will work
due to the lack of guarantee of whether parens will be used or not. It tends
to work better with variables than with callables, because dropping parens
is so popular, and callables aren't, but it's still a problem. Anyone who
wants to use a callable instead of a function in generic code is almost
certainly in for a world of hurt unless they're in control of all of the
code involved - and that's without even getting into the issue of property
functions that return callables (those simply don't work at all).

Template constraints combat this to an extent in that they end up requiring
that the construct in question either be callable with parens or usable
without them, but that places no restrictions on the code that actually uses
the symbols, making it easy for code to use parens when it shouldn't or not
use parens when it should and then run into problems when it's given a type
that conforms to the template constraint, but the code didn't use the symbol
in the same way as the constraint. The only things that really prevent
this from being a much bigger problem than it is are that many folks do follow
the conventions set forth by the template constraint (e.g. always calling
front without parens) and the fact that in most cases, it's the lack of
parens which is required, and using variables instead of functions is far
more popular than using callables instead of functions. So, convention is
really all that prevents this from being a bigger problem, and the end
result is that callables in generic code are borderline useless.

One example of trying to work around this problem is that not all that long
ago, isInputRange was actually changed to use popFront without parens just
so that folks could rely on being able to call it without parens, since
previously it was possible to use a delegate or other callable for popFront
instead of a function, which would then not have worked with any code where
folks didn't bother to put parens on popFront when calling it.

All in all though, I think that the fact that we aren't strict about parens
usage mostly kills the use of callables in generic code except in cases
where you're in control of all of the code involved. It could be argued that
callables are desirable infrequently enough that being able to drop parens
when calling functions for whatever syntactic beauty supposedly comes with
outweighs the loss, but that doesn't mean that the problem isn't there, just
that many folks don't care and think that the tradeoff is worth it.

- Jonathan M Davis





Re: A brief survey of build tools, focused on D

2018-12-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, December 12, 2018 1:33:39 PM MST H. S. Teoh via Digitalmars-d-
announce wrote:
> On Wed, Dec 12, 2018 at 10:38:55AM +0100, Sönke Ludwig via Digitalmars-d-
announce wrote:
> > Am 11.12.2018 um 20:46 schrieb H. S. Teoh:
> > > Does dub support the following scenario?
>
> [...]
>
> > This will currently realistically require invoking an external tool
> > such as make through a pre/post-build command (although it may
> > actually be possible to hack this together using sub packages, build
> > commands, and string import paths for the file dependencies). Most
> > notably, there is a directive missing to specify arbitrary files as
> > build dependencies.
>
> I see.  I think this is a basic limitation of dub's design -- it assumes
> a certain (common) compilation model of sources to (single) executable,
> and everything else is only expressible in terms of larger abstractions
> like subpackages.  It doesn't really match the way I work, which I guess
> explains my continuing frustration with using it.  I think of my build
> processes as a general graph of arbitrary input files being converted by
> arbitrary operations (not just compilation) into arbitrary output files.
> When I'm unable to express this in a simple way in my build spec, or
> when I'm forced to use tedious workarounds to express what in my mind
> ought to be something very simple, it distracts me from my focusing on
> my problem domain, and results in a lot of lost time/energy and
> frustration.

What you're describing sounds like it would require a lot of extra machinery
in comparison to how dub is designed to work. dub solves the typical use
case of building a single executable or library (which is what the vast
majority of projects do), and it removes the need to specify much of
anything to make that work, making it fantastic for the typical use case but
causing problems for any use cases that have more complicated needs. I
really don't see how doing much of anything other than building a single
executable or library from a dub project is going to result in anything
other than frustration from the tool even if you can make it work. By the
very nature of what you'd be trying to do, you'd be constantly trying to
work around how dub is designed to work. dub can do more thanks to
subprojects and some of the extra facilities it has for running stuff before
or after the build, but all of that sort of stuff has to work around dub's
core design, making it generally awkward to use, whereas to do something
more complex, at some point, what you really want is basically a build
script (albeit maybe with some extra facilities to properly detect whether
certain phases of the build can be skipped).
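As a rough sketch of the kind of workaround being described (the project name, tool, and paths are hypothetical), a dub.sdl can shell out to an external build tool for the steps dub itself can't express:

```sdl
name "myproject"
description "Hypothetical project with a code-generation step"

// Delegate the part of the build that dub can't model to make;
// dub only sees the generated sources afterwards.
preBuildCommands "make -C tools generate"
sourcePaths "source" "generated"
```

This works, but it illustrates the point above: dub has no dependency tracking for the external step, so whether it can be skipped is make's problem, not dub's.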

I would think that to be fully flexible, dub would need to abstract things a
bit more, maybe effectively using a plugin system for builds so that it's
possible to have a dub project that uses dub for pulling in dependencies but
which can use whatever build system works best for your project (with the
current dub build system being the default). But of course, even if that is
made to work well, it then introduces the problem of random dub projects
then needing 3rd party build systems that you may or may not have (which is
one of the things that dub's current build system mostly avoids).

On some level, dub is able to do as well as it does precisely because it's
able to assume a bunch of stuff about D projects which is true the vast
majority of the time, and the more it allows projects that don't work that
way, the worse dub is going to work as a general tool, because it
increasingly opens up problems with regards to whether you have the right
tools or environment to build a particular project when using it as a
dependency. However, if we don't figure out how to make it more flexible,
then certain classes of projects really aren't going to work well with dub.
That's less of a problem if the project is not for a library (and thus does
not need to be a dub package so that other packages can pull it in as a
dependency) and if dub provides a good way to just make libraries available
as dependencies rather than requiring that the ultimate target be built with
dub, but even then, it doesn't solve the problem when the target _is_ a
library (e.g. what if it were for wrapping a C or C++ library and needed to
do a bunch of extra steps for code generation and needed multiple build
steps).

So, I don't know. Ultimately, what this seems to come down to is that all of
the stuff that dub does to make things simple for the common case make it
terrible for complex cases, but making it work well for complex cases would
almost certainly make it _far_ worse for the common case. So, I don't know
that we really want to be drastically changing how dub works, but I do think
that we need to make it so that more is possible with it (even if it's more
painful, because it's doing something that goes against the typical use
case).

The most obvious thing that I can think of is 

Re: Blog post: What D got wrong

2018-12-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, December 12, 2018 6:03:39 AM MST Kagamin via Digitalmars-d-
announce wrote:
> On Tuesday, 11 December 2018 at 12:57:03 UTC, Atila Neves wrote:
> > @property is useful for setters. Now, IMHO setters are a code
> > stink anyway but sometimes they're the way to go. I have no
> > idea what it's supposed to do for getters (nor am I interested
> > in learning or retaining that information) and never slap the
> > attribute on.
>
> Imagine you have void delegate() prop() and use the property
> without parentheses everywhere then suddenly m.prop() doesn't
> call the delegate. So it's mostly for getters and should be used
> only in edge cases, most code should be fine with optional parens.

Except that @property does not currently have any effect on this. The
delegate case (or really, the case of callables in general) is one argument
for keeping @property for using in that particular corner case, since
without it, having property functions that return callables simply doesn't
work, but @property has never been made to actually handle that case, so
having property functions that return callables has never worked in D. It's
certainly been discussed before, but the implementation has never been
changed to make it work. If/when we finally rework @property, that use case
would be the number one reason to not simply get rid of @property, but until
then, it doesn't actually fix that use case. As things stand, @property
basically just serves as documentation of intent for the API and as a way to
screw up type introspection by having the compiler lie about the type of the
property.
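A minimal sketch of the corner case being described (the names are hypothetical): with optional parens, there is no single-call syntax that reliably distinguishes calling the property from calling the delegate it returns.

```d
struct Button
{
    private void delegate() @safe handler;

    // A getter that returns a callable, written as a normal
    // function - exactly what @property was meant to police.
    void delegate() @safe onClick() @safe
    {
        return handler;
    }
}

@safe unittest
{
    Button b;
    int clicks;
    b.handler = () { ++clicks; };

    // With optional parens, this is just a call of onClick that
    // discards the returned delegate - the handler never runs:
    b.onClick;
    assert(clicks == 0);

    // Only the explicit double call actually fires the handler,
    // which is why property functions returning callables have
    // never really worked as properties in D:
    b.onClick()();
    assert(clicks == 1);
}
```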

> >I think there’s a general consensus that @safe, pure and
> >immutable should be default.
>
> I can agree there are at least 5 people holding that firm belief,
> but that's hardly a consensus.

There are definitely people who want one or more of those attributes as the
default, but I very much doubt that it's a consensus. It wouldn't surprise
me if @safe or pure by default went over fairly well, but I'm sure that
immutable or const by default would be far more controversial, because
that's a big shift from what C-derived languages normally do. Personally, I
would be very unhappy if it were the default, though I know that there are
some folks who would very much like to see const or immutable be the
default.

> >I’ve lost count now of how many times I’ve had to write @safe
> >@nogc pure nothrow const scope return. Really.
>
> If immutable was default, wouldn't you still need to write const
> attribute everywhere, and @nogc, and nothrow? Strings are like
> the only relevant immutable data structure (and they are already
> immutable), everything else is inherently mutable except for use
> cases with genuine need for immutability like a shared cache of
> objects.

If immutable were the default, then I expect that writing types that worked
with immutable would become more common, because it would then be encouraged
by the language, but I think that your average type is written to work as
mutable (and maybe const), and it's a pretty big shift to write types to be
immutable unless you're talking about simple POD types, so if immutable
became the default, I expect that mutable (or whatever the modifier to make
a type mutable would be) would start getting plastered everywhere. And
without the range API being changed, ranges wouldn't work unless you marked
them as mutable, making const or immutable by default a bit of a mess for
what would now be idiomatic D code (though if the default were changed to
const or immutable, we'd probably see the range API be changed to use the
classic, functional head/tail list mechanism rather than front and popFront,
which could very well be an improvement anyway).

- Jonathan M Davis






Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, November 13, 2018 9:27:29 PM MST Nicholas Wilson via 
Digitalmars-d-announce wrote:
> On Wednesday, 14 November 2018 at 04:24:20 UTC, Jonathan M Davis
>
> wrote:
> > Given how strong the negative response is to this and how
> > incomprehensible a number of us find the reasoning behind how
> > bool functions in some scenarios, Walter probably does need to
> > sit back and think about this, but using words like asinine is
> > pretty much always uncalled for in a professional discussion. I
> > can very much understand Isaac's frustration, but making
> > statements like that really is the sort of thing that comes
> > across as attacking the poster and is going to tend to result
> > in folks not listening to your arguments anymore, even if
> > they're well-reasoned and logical. It's already hard enough to
> > convince people when your arguments are solid without getting
> > anything into the mix that could come across as insulting.
> >
> > - Jonathan M Davis
>
> asinine, adjective: extremely stupid or foolish. Is there some
> additional connotation I am missing on this living
> (comparatively) in the middle of nowhere? (Genuine question.)

Not AFAIK, but calling someone or something extremely stupid or foolish is
almost always a terrible idea in a professional discussion (or pretty much
any discussion that you want to be civil) - especially if it can be
interpreted as calling the person stupid or foolish. That's just throwing
insults around. If an idea or decision is bad, then it should be shown as to
why it's bad, and if it is indeed a terrible idea, then the arguments
themselves should make that obvious without needing to throw insults around.

It's not always easy to avoid calling ideas stupid when you get emotional
about something, but the stronger the language used, the more likely it is
that you're going to get a strong emotional response out of the other person
rather than a logical, reasoned discussion that can come to a useful
conclusion rather than a flame war, and asinine is a pretty strong word.
It's the sort of word that's going to tend to get people mad and insulted
rather than help with a logical argument in any way - which is why Walter
called it unprofessional.

- Jonathan M Davis





Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, November 13, 2018 8:47:01 PM MST Nicholas Wilson via 
Digitalmars-d-announce wrote:
> On Wednesday, 14 November 2018 at 03:02:48 UTC, Walter Bright
>
> wrote:
> > On 11/13/2018 3:50 PM, Isaac S. wrote:
> >> is asinine and ignorant.
> >
> > Some friendly advice - nobody is going to pay serious attention
> > to articles that sum up with such unprofessional statements.
> > Continuing the practice will just result in the moderators
> > removing them.
>
> I read the first adjective as a statement of opinion about your
> reasoning for rejection and the second about the way you have
> dismissed the opinions of others, neither of which are uncalled
> for and certainly not unprofessional.
>
> You would do well to think about that before you post further.

Given how strong the negative response is to this and how incomprehensible
a number of us find the reasoning behind how bool functions in some
scenarios, Walter probably does need to sit back and think about this, but
using words like asinine is pretty much always uncalled for in a
professional discussion. I can very much understand Isaac's frustration, but
making statements like that really is the sort of thing that comes across as
attacking the poster and is going to tend to result in folks not listening
to your arguments anymore, even if they're well-reasoned and logical. It's
already hard enough to convince people when your arguments are solid without
getting anything into the mix that could come across as insulting.

- Jonathan M Davis





Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, November 12, 2018 2:45:14 AM MST Mike Parker via Digitalmars-d-
announce wrote:
> DIP 1015, "Deprecation and removal of implicit conversion from
> integer and character literals to bool," has been rejected,
> primarily on the grounds that it is factually incorrect in
> treating bool as a type distinct from other integral types.

*sigh* Well, I guess that's the core issue right there. A lot of us would
strongly disagree with the idea that bool is an integral type and consider
code that treats it as such as inviting bugs. We _want_ bool to be
considered as being completely distinct from integer types. The fact that
you can ever pass 0 or 1 to a function that accepts bool without a cast is a
problem in and of itself. But it doesn't really surprise me that Walter
doesn't agree on that point, since he's never agreed on that point, though I
was hoping that this DIP was convincing enough, and its failure is certainly
disappointing.
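The kind of code at issue fits in a couple of lines (the function is hypothetical): integer and character literals with the value 0 or 1 convert to bool implicitly, which is exactly what the DIP proposed to deprecate.

```d
void setVerbose(bool verbose)
{
    // ...
}

void main()
{
    setVerbose(true);   // the intended call
    setVerbose(1);      // also compiles: the int literal 1
                        // implicitly converts to bool
    setVerbose('\x01'); // so does a character literal with value 1,
                        // since char is treated as an integral type
}
```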

- Jonathan M Davis





Re: DIP 1014--Hooking D's struct move semantics--Has Been Accepted

2018-11-07 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, November 7, 2018 6:54:54 PM MST Mike Parker via Digitalmars-d-
announce wrote:
> I'm happy to announce that Walter and Andrei have rendered their
> verdict on DIP 1014. They were in agreement on two points: they
> don't like it, but they know we need it. Given that there are no
> other alternative proposals and that they could see no
> alternative themselves, they decided to accept this DIP without
> modification.

I think that that probably sums the situation up pretty nicely. The fact
that we need something like this is just plain ugly, and if it starts
getting used frequently, there's definitely a problem, but there are use
cases, where it's going to be invaluable.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-10-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, October 8, 2018 4:27:47 AM MDT RazvanN via Digitalmars-d-announce 
wrote:
> On Monday, 8 October 2018 at 10:26:17 UTC, Nicholas Wilson wrote:
> > On Monday, 8 October 2018 at 10:14:51 UTC, RazvanN wrote:
> >> On Tuesday, 2 October 2018 at 09:26:34 UTC, RazvanN wrote:
> >>> Hi all,
> >>>
> >>> I just pushed another version of the DIP in which the major
> >>> modifications among otthers are removing implicit and use
> >>> copy constructor calls in all situations where a copy is
> >>> made. For more details, please visit [1] and if you have the
> >>> time, please offer some feedback,
> >>>
> >>> Thank you,
> >>> RazvanN
> >>>
> >>> [1] https://github.com/dlang/DIPs/pull/129/
> >>
> >> I've made all the changes in the code that the DIP includes[1]
> >> and the tests seem to be all green. I still need to add more
> >> tests; if you have any tests that you want to make sure the
> >> implementation takes into account please post them.
> >>
> >> Cheers,
> >> RazvanN
> >>
> >> [1] https://github.com/dlang/dmd/pull/8688
> >
> > Both the DIP and the implementation still lack a -dip10xx
> > switch.
>
> After discussing with Walter and Andrei we came to the conclusion
> that a flag is not necessary in this case. Immediately after the
> DIP is accepted, the postblit will be deprecated.

Even without a transitional switch, I would ask that we _please_ delay
actually deprecating postblit constructors until copy constructors have been
in at least one release. We do this when replacing old symbols in Phobos
with new ones, because if we don't, it's not possible to have your code
compile with the current release and master at the same time without getting
deprecation messages (while code _could_ use static if with __VERSION__ to
support multiple releases, that doesn't work with master, since
unfortunately, __VERSION__ on master always matches the most recent release
rather than the next one). The DIP that covers deprecations talks about
delaying deprecations when adding new symbols for exactly that reason,
and replacing postblit constructors with copy constructors poses exactly the
same problem. It should always be possible to make code compile with both
master and the latest release without deprecation messages, since otherwise,
even programmers who are completely on top of things could end up having to
deal with a flood of deprecation messages that they can't fix.
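The static if / __VERSION__ workaround mentioned above looks roughly like this (the version number is hypothetical), and the post's point is that it cannot tell master apart from the most recent release:

```d
struct S
{
    // Hypothetical version gate: __VERSION__ on master always
    // matches the latest release, so this cannot distinguish a
    // release compiler from a master build.
    static if (__VERSION__ >= 2086)
    {
        this(ref return scope S rhs) {}  // copy constructor
    }
    else
    {
        this(this) {}  // postblit on older releases
    }
}
```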

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-25 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, September 25, 2018 6:33:30 AM MDT RazvanN via Digitalmars-d-
announce wrote:
> After discussing with Walter and Andrei we have decided that we
> are going to drop @implicit for now as it may cause bugs (as
> Jonathan has highlighted) and consider constructors that have the
> form this(ref $q1 S rhs) $q2 as copy constructors. I will update
> the DIP with more information.
>
> Also, regarding the cohabitation between postblit and copy
> constructor: in order to make the transition smoother, whenever a
> postblit and a copy constructor are found together in a struct,
> the former is used and the latter is ignored (even if it is a
> field postblit). Once the postblit is going to be deprecated we
> can do the opposite and use the copy constructor and ignore the
> postblit.
>
> If @implicit is going to be introduced then that is going to be a
> DIP on its own.

Yay! Thank you.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-24 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 24, 2018 9:33:19 PM MDT Manu via Digitalmars-d-announce 
wrote:
> On Mon, 24 Sep 2018 at 16:22, Jonathan M Davis via
>
> Digitalmars-d-announce  wrote:
> > On Monday, September 24, 2018 3:20:28 PM MDT Manu via
> > Digitalmars-d-announce>
> > wrote:
> > > copy-ctor is good, @implicit is also good... we want both. Even though
> > > copy-ctor is not strictly dependent on @implicit, allowing it will
> > > satisfy that there's not a breaking change, and it will also
> > > self-justify expansion of @implicit as intended without a separate and
> > > time-consuming fight, which is actually the true value of this DIP!
> >
> > @implicit on copy constructors is outright bad. It would just be a
> > source of bugs. Every time that someone forgets to use it (which plenty
> > of programmers will forget, just like they forget to use @safe, pure,
> > nothrow, etc.), they're going to have a bug in their program.
>
> perhaps a rule where declaring a copy-ctor WITHOUT @explicit emits a
> compile error...?

That would pretty much defeat the whole purpose of why @implicit was added
to the DIP in the first place. It's just that it would turn the feared
silent code breakage into a compiler error. So, I suppose that it would be
an improvement, but it's the sort of thing that would have to go behind a
transitional compiler switch, and as soon as we do that, why have @implicit
on copy constructors at all? It makes far more sense to use a transitional
compiler switch to simply sort out the problem with the potential (but
highly unlikely) silent breakage and not do anything with @implicit with
copy constructors whatsoever. It avoids having to put a useless attribute on
copy constructors forever, and it avoids any potential bugs.

Then we can have a separate DIP that deals with @implicit and non-copy
constructors for implicit construction. And maybe we do ultimately end up
with @implicit on constructors in D. But if @implicit ends up on copy
constructors, at best, it's going to be a language wort, since it adds zero
value there, but as the DIP currently stands, it's going to be a source of
bugs.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-24 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 24, 2018 7:59:36 PM MDT Nicholas Wilson via 
Digitalmars-d-announce wrote:
> On Monday, 24 September 2018 at 23:22:13 UTC, Jonathan M Davis
>
> wrote:
> > @implicit on copy constructors is outright bad. It would just
> > be a source of bugs. Every time that someone forgets to use it
> > (which plenty of programmers will forget, just like they forget
> > to use @safe, pure, nothrow, etc.), they're going to have a bug
> > in their program. However, unlike with attributes like @safe
> > or pure, they're not going to get a compiler error in their
> > program; they're going to get a logic error. They may or may
> > not find that bug quickly, but the compiler isn't going to
> > point it out to them.
>
> I think this is the most important reason. In C++, where
> everything is implicit by default (which is bad) and (I think)
> you are encouraged to use explicit where possible, you should
> never use it for the copy constructor because the compiler always
> calls it implicitly for you and is the whole point of having:
>
> Foo a; Foo b = a;
>
> do something useful. Putting explicit on the copy ctor means that
> no longer works, one can then only use it with
>
> Foo a; Foo b(a);
>
> Having `Foo a; Foo b = a;` do something completely different by
> the addition or removal of one attribute is a serious problem
> just waiting to happen.

The key thing with explicit in C++ is to avoid issues with implicit
conversions between types. You can get weird junk like where

int foo(string s);

foo(42);

compiles, because you have a type A with an implicit constructor which takes
int, and an implicit conversion to string. Even though A isn't explicitly
used here anywhere, because it's available, and no more than three
conversions are required to go from int to string when using A, the compiler
will use A to convert from int to string. That probably isn't what the
programmer
wanted, but the compiler will happily do it.

Because of this, it's best practice in C++ to put explicit on any
constructors which take other types where you don't explicitly want the
implicit conversion in order to try to avoid this conversion mess. You then
only leave explicit off of constructors where you want the implicit
conversion or constructors where it's unnecessary (e.g. default constructors
or copy constructors). D avoids this mess by simply not having any kind of
implicit construction. It just has implicit conversion via alias this. It
may or may not be worth adding implicit construction via something like
@implicit, but it would at least then only be on constructors which were
explicitly marked with @implicit, whereas with C++, you have to use explicit
to turn off the behavior. So, in C++, it's a total mess by default, whereas
in D, we'd only have it where the programmer asked for it. And presumably,
any DIP that added it wouldn't have the compiler do more than one conversion
at a time, whereas as I understand it, C++ allows the compiler to do up to
three conversions in order to make a piece of code work. But ultimately,
such details will have to be decided with such a DIP.

Either way, none of this has anything to do with copy constructors. It makes
no sense to have copy constructors that require that you call them
explicitly. And while it would be bad enough if the DIP required that you
mark copy constructors with @implicit for them to work as copy constructors
and then didn't have implicit copying without @implicit, that's not what it
does. Instead, what it does when you forget @implicit is revert to the
default copy semantics, meaning that you have a copy constructor that needs
to be called explicitly, but you didn't realize that it needed to be called
explicitly, and you're blindly using the default copy semantics rather than
the copy constructor. So, the semantics that the DIP describes are just
plain a recipe for bugs. But since the PR has been updated to remove
@implicit, maybe the DIP will be updated to remove it as well, and the whole
@implicit discussion can be left to a future DIP on implicit construction
and left out of copy construction entirely.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-24 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 24, 2018 3:20:28 PM MDT Manu via Digitalmars-d-announce 
wrote:
> copy-ctor is good, @implicit is also good... we want both. Even though
> copy-ctor is not strictly dependent on @implicit, allowing it will
> satisfy that there's not a breaking change, and it will also
> self-justify expansion of @implicit as intended without a separate and
> time-consuming fight, which is actually the true value of this DIP!

@implicit on copy constructors is outright bad. It would just be a source of
bugs. Every time that someone forgets to use it (which plenty of programmers
will forget, just like they forget to use @safe, pure, nothrow, etc.),
they're going to have a bug in their program. However, unlike with
attributes like @safe or pure, they're not going to get a compiler error in
their program; they're going to get a logic error. They may or may not find
that bug quickly, but the compiler isn't going to point it out to them. And
if @implicit weren't a thing, then the problem wouldn't even exist.
@implicit is trying to get around breaking an extremely unlikely piece of
code which would be far better fixed by using a transitional compiler flag
and which adds _no_ value for copy constructors in the long run.

Even if we do later add @implicit to the language for regular constructors,
it has no business on copy constructors. And its value on other constructors
needs to be evaluated and examined on its own separately from the issue of
copy constructors. We should not be letting @implicit be put onto copy
constructors just to get it into the language. If it's such a valuable
feature on regular constructors, it should be able to get in on a separate
DIP specifically for that purpose without piggybacking in on the copy
constructor DIP - especially when it's clearly going to be a source of bugs
if it's required to be on copy constructors. And it just needlessly adds to
the attribute soup. At least if it were added for regular constructors, it
would be adding value. For copy constructors, it's just adding annoyance and
the risk of bugs.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-24 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 24, 2018 10:44:01 AM MDT Meta via Digitalmars-d-
announce wrote:
> On Sunday, 23 September 2018 at 01:08:50 UTC, Jonathan M Davis
>
> wrote:
> > @implicit is just there because of the fear of breaking a
> > theoretical piece of code that's going to be extremely rare if
> > it exists at all and in most cases would continue to work just
> > fine even if it did exist.
> >
> > - Jonathan M Davis
>
> I somewhat agree with this argument, but overall I hate this
> attitude of "we should just make the change because it *probably*
> won't break any code", and then bonus points for "code that does
> this is wrong anyway so we shouldn't care if we break it". I've
> already been burned by that a couple times using D, and I imagine
> heavy corporate users with large code bases have many more
> problems with this.

In this particular case, a transitional compiler flag like we've done with
other DIPs makes _far_ more sense than forcing everyone to use an attribute
forever - especially when forgetting to use the attribute is going to be a
further source of bugs, one which would _easily_ outweigh any code breakage
caused by not having the attribute. But having a transitional compiler flag
would give us a way to deal with the potential code breakage if it exists.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, September 22, 2018 8:40:15 PM MDT Nicholas Wilson via 
Digitalmars-d-announce wrote:
> On Sunday, 23 September 2018 at 01:08:50 UTC, Jonathan M Davis
>
> wrote:
> > On Saturday, September 22, 2018 6:13:25 PM MDT Adam D. Ruppe
> >
> > via Digitalmars-d-announce wrote:
> >> [...]
> >
> > Yeah, the problem has to do with how much you have to mark up
> > your code. Whether you have @foo @bar @baz or foo bar baz is
> > pretty irrelevant. And keywords eat up identifiers, so they're
> > actually worse.
> >
> > In addition, most of the complaints about @implicit have to do
> > with the fact that it doesn't even add anything. It's annoying
> > that we have @nogc, @safe, pure, etc. but at least each of
> > those adds something. @implicit is just there because of the
> > fear of breaking a theoretical piece of code that's going to be
> > extremely rare if it exists at all and in most cases would
> > continue to work just fine even if it did exist.
> >
> > - Jonathan M Davis
>
> It appears that @implicit has been removed from the
> implementation [1], but not yet from the DIP.
>
> https://github.com/dlang/dmd/commit/cdd8100

Well, that's a good sign.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, September 22, 2018 6:13:25 PM MDT Adam D. Ruppe via 
Digitalmars-d-announce wrote:
> On Saturday, 22 September 2018 at 17:43:57 UTC, 12345swordy wrote:
> > If that where the case, then why not make it an actual keyword?
> > A frequent complaint regarding D is that there are too many
> > attributes; this will undoubtedly add more to it.
>
> When I (and surely others like me) complain that there are too
> many attributes, the complaint has nothing to do with the @
> character. I consider "nothrow" and "pure" to be part of the
> problem and they lack @.

Yeah, the problem has to do with how much you have to mark up your code.
Whether you have @foo @bar @baz or foo bar baz is pretty irrelevant. And
keywords eat up identifiers, so they're actually worse.

In addition, most of the complaints about @implicit have to do with the fact
that it doesn't even add anything. It's annoying that we have @nogc, @safe,
pure, etc. but at least each of those adds something. @implicit is just
there because of the fear of breaking a theoretical piece of code that's
going to be extremely rare if it exists at all and in most cases would
continue to work just fine even if it did exist.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, September 18, 2018 10:58:39 AM MDT aliak via Digitalmars-d-
announce wrote:
> This will break compilation of current code that has an explicit
> copy constructor, and the fix is simply to add the attribute
> @implicit.

In that case, why not just use a transitional compiler switch? Why force
everyone to mark their copy constructors with @implicit forever? The whole
point of adding the attribute was to avoid breaking existing code.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 17, 2018 5:07:22 PM MDT Manu via Digitalmars-d-announce 
wrote:
> On Mon, 17 Sep 2018 at 13:55, 12345swordy via Digitalmars-d-announce
>
>  wrote:
> > On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote:
> > > Hello everyone,
> > >
> > > I have finished writing the last details of the copy
> > > constructor DIP[1] and also I have published the first
> > > implementation [2]. As I wrongfully made a PR for the DIP queue
> > > in the early stages of the development of the DIP, I want to
> > > announce this way that the DIP is ready for the draft review
> > > now. Those who are familiar with the compiler, please take a
> > > look at the implementation and help me improve it!
> > >
> > > Thanks,
> > > RazvanN
> > >
> > > [1] https://github.com/dlang/DIPs/pull/129
> > > [2] https://github.com/dlang/dmd/pull/8688
> >
> > The only thing I object to is adding yet another attribute to an
> > already big bag of attributes. What's wrong with adding keywords?
> >
> > -Alexander
>
> I initially felt strongly against @implicit, it shouldn't be
> necessary, and we could migrate without it.
> But... assuming that @implicit should make an appearance anyway (it
> should! being able to mark implicit constructors will fill a massive
> usability hole in D!), then it doesn't hurt to use it eagerly here and
> avoid a breaking change at this time, since it will be the correct
> expression for the future regardless.

Except that @implicit could be introduced for other constructors without
having it on copy constructors, and the fact that copy constructors will
require it is just going to cause bugs, because plenty of folks are going to
forget to use it and end up with the default copying behavior instead of
their custom copy constructor being used. Good testing should find that
pretty quickly, but it's almost certainly going to be a common bug, and it
has no need to exist. It's all there in order to avoid breaking code that's
likely only theoretical and not something that actual D code bases have
done. And if there is a stray code base that did it, it's certainly going to
be in the extreme minority, and the code will almost certainly work as a
proper copy constructor anyway, since that's pretty much the only reason to
write such a constructor. So, we'd be trying to avoid breaking very rare
code by introducing a feature that will definitely cause bugs. IMHO, it
would be _far_ better to just use a transitional -dip* compiler flag like we
have with other DIPs. It would also give us the benefit of being able to
bang on the implementation a bit before making it the normal behavior.

We can still add @implicit to other constructors for implicit construction
with a later DIP (assuming that Walter and Andrei could be convinced of it).
I don't see how having it on copy constructors really helps with that. It
just means that the attribute would already be there, not that it would
necessarily ever be used for what you want, and _not_ having it on copy
constructors wouldn't prevent it from being used for implicit construction
if such a DIP were ever accepted. So, while I understand that you want
implicit construction, I think that it's a huge mistake to tie that up into
copy constructors, particularly since it really doesn't make sense to have
copy constructors that aren't implicit, and having @implicit for copy
constructors is going to cause bugs when it's forgotten.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 17, 2018 5:14:28 PM MDT tide via Digitalmars-d-announce 
wrote:
> On Monday, 17 September 2018 at 19:10:27 UTC, Jonathan M Davis
>
> wrote:
> > Basically, @implicit is being proposed out of fear that
> > someone, somewhere wrote a constructor that had what would be a
> > copy constructor if D had them instead of postblit constructors
> > and that that code would break with the DIP. Does anyone expect
> > that such a constructor would be intended as anything other
> > than a copy constructor (albeit one that has to be called
> > explicitly)? And does anyone really think that such
> > constructors are at all common, given that the correct way to
> > handle the problem in D right now is the postblit constructor?
> > We're talking about introducing an attribute that should be
> > unnecessary, which will be annoying to use, and which will be
> > error-prone given the bugs that you'll get if you forget to
> > mark your copy constructor with it. And it's all to avoid
> > breaking a theoretical piece of code that I would think that we
> > could all agree is extremely rare if it exists in any real D
> > code base at all. Simply using a transitional compiler switch
> > like we have with other DIPs would make _way_ more sense than
> > burdening the language with an unnecessary attribute that's
> > just going to make it easier to write buggy code. This is
> > clearly a case of making the language worse long term in order
> > to avoid a theoretical problem in the short term.
> >
> > - Jonathan M Davis
>
> From what I've read, the copy constructor can be used with
> different types:
>
> struct B
> {
> }
>
> struct A
> {
>  @implicit this(ref B b)
>  {
>  }
> }
>
>
> B foo();
>
> A a;
> a = foo(); // ok because of @implicit
> a = A(foo()); // ok without @implicit
>
> That's why it exists, otherwise I wouldn't want two types to be
> implicitly convertible unless I explicitly tell it to be implicit.

No. That wouldn't be a copy constructor. The DIP specifically says that "A
declaration is a copy constructor declaration if it is a constructor
declaration annotated with the @implicit attribute and takes only one
parameter by reference that is of the same type as typeof(this)." So, it
clearly must be the same type and can't be a different type.
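For reference, a sketch of the shape the DIP's definition describes, as the proposal stood at this point in the thread (illustrative only, not compiled against any particular compiler build; the `$q1`/`$q2` qualifier slots from the DIP text are omitted):

```d
struct Foo
{
    int[] data;

    // A copy constructor per the DIP: annotated with @implicit and
    // taking exactly one parameter by ref whose type is the same as
    // typeof(this).
    @implicit this(ref Foo rhs)
    {
        data = rhs.data.dup; // deep-copy instead of sharing the slice
    }
}

void main()
{
    Foo a;
    a.data = [1, 2, 3];

    Foo b = a;      // copy constructor is invoked implicitly here
    b.data[0] = 99;

    assert(a.data[0] == 1); // a is unaffected thanks to the dup
}
```

A constructor taking a different type by ref (as in the quoted example above) would not qualify under that definition; it would remain an ordinary constructor.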

It also says that "The copy constructor needs to be annotated with @implicit
in order to distinguish a copy constructor from a normal constructor and
avoid silent modification of code behavior." Andrei confirmed that that's
why @implicit was there in the last major discussion on the DIP in the
newsgroup. So, it's clearly there in order to avoid breaking existing code -
code which is going to be _extremely_ rare.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 17, 2018 2:53:42 PM MDT 12345swordy via Digitalmars-d-
announce wrote:
> On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote:
> > Hello everyone,
> >
> > I have finished writing the last details of the copy
> > constructor DIP[1] and also I have published the first
> > implementation [2]. As I wrongfully made a PR for the DIP queue
> > in the early stages of the development of the DIP, I want to
> > announce this way that the DIP is ready for the draft review
> > now. Those who are familiar with the compiler, please take a
> > look at the implementation and help me improve it!
> >
> > Thanks,
> > RazvanN
> >
> > [1] https://github.com/dlang/DIPs/pull/129
> > [2] https://github.com/dlang/dmd/pull/8688
>
> The only thing I object to is adding yet another attribute to an
> already big bag of attributes. What's wrong with adding keywords?

Every keyword that gets added is one more word that can't be used for
identifiers, which we don't want to do without a really good reason, and in
this particular context, I don't see what it would buy you anyway. You'd
just end up not having to put @ in front of it - and then of course, that
word couldn't be used as an identifier anymore. So, overall, going with a
keyword over an attribute seems like a net negative.

IMHO, the core problem is that the DIP adds _anything_ that you have to mark
up copy constructors with. We should just have a -dip* flag as a transition
to deal with the theoretical breakage that @implicit is supposed to prevent
(as well as gives us a chance to kick the tires of the implementation a bit
first) and not do anything special to mark copy constructors aside from what
their parameters are.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 17, 2018 8:27:16 AM MDT Meta via Digitalmars-d-announce 
wrote:
> On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote:
> > Hello everyone,
> >
> > I have finished writing the last details of the copy
> > constructor DIP[1] and also I have published the first
> > implementation [2]. As I wrongfully made a PR for the DIP queue
> > in the early stages of the development of the DIP, I want to
> > announce this way that the DIP is ready for the draft review
> > now. Those who are familiar with the compiler, please take a
> > look at the implementation and help me improve it!
> >
> > Thanks,
> > RazvanN
> >
> > [1] https://github.com/dlang/DIPs/pull/129
> > [2] https://github.com/dlang/dmd/pull/8688
>
> If @implicit is the contentious part of this DIP, maybe it would
> be a good idea for us to instead use a `pragma(copyCtor)` or the
> like to avoid having to add another attribute while preserving
> backward-compatibility. Like @implicit, it's very easy to turn
> existing constructors into copy constructors just by adding the
> pragma, and they can be added to code with impunity because even
> older versions of the compiler will just ignore pragmas they
> don't recognize.

Honestly, I don't think that using a pragma instead of an attribute fixes
much, and it goes against the idea of what pragmas are supposed to be for in
that pragmas are supposed to be compiler-specific, not really part of the
language.

The core problem is that no such attribute or pragma should be necessary in
the first place. It makes no sense to have a copy constructor that must be
called explicitly, and if we have to use an attribute (or pragma or anything
else) to optionally mark the copy constructor as a copy constructor, then
it's something that people are going to forget to do at least some portion
of the time, causing bugs. It also seems downright silly to have an
attribute (or pragma or whatever) that you have to _always_ use. No one is
going to be purposefully writing copy constructors that aren't "implicit."
So, they're _all_ going to have to have it. It would be like having to mark
all destructors with an attribute just so that they'd be treated as
destructors. It's something that's simply inviting bugs, because at least
some of the time, programmers will forget to use the attribute.

Basically, @implicit is being proposed out of fear that someone, somewhere
wrote a constructor that had what would be a copy constructor if D had them
instead of postblit constructors and that that code would break with the
DIP. Does anyone expect that such a constructor would be intended as
anything other than a copy constructor (albeit one that has to be called
explicitly)? And does anyone really think that such constructors are at all
common, given that the correct way to handle the problem in D right now is
the postblit constructor? We're talking about introducing an attribute that
should be unnecessary, which will be annoying to use, and which will be
error-prone given the bugs that you'll get if you forget to mark your copy
constructor with it. And it's all to avoid breaking a theoretical piece of
code that I would think that we could all agree is extremely rare if it
exists in any real D code base at all. Simply using a transitional compiler
switch like we have with other DIPs would make _way_ more sense than
burdening the language with an unnecessary attribute that's just going to
make it easier to write buggy code. This is clearly a case of making the
language worse long term in order to avoid a theoretical problem in the
short term.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, September 17, 2018 7:30:24 AM MDT rmc via Digitalmars-d-announce 
wrote:
> On Wednesday, 12 September 2018 at 16:40:45 UTC, Jonathan M Davis
>
> wrote:
> > [snip]
> > Personally, I'd rather that we just risk the code breakage
> > caused by not having an attribute for copy constructors than
> > use either @implicit or @copy, since it really only risks
> > breaking code using constructors that were intended to be copy
> > constructors but had to be called explicitly, and that code
> > would almost certainly be fine if copy constructors then became
> > implicit, but Andrei seems unwilling to do that. [snip]
> > - Jonathan M Davis
>
> I'd also vote for no attribute for copy constructors and have a
> tool to warn us of changes across compiler versions that can
> detect constructors that look like copy constructors.
>
> If dub keeps track of the dmd compiler version we could even have
> automated warnings for all dub packages.
>
> This would also start us on the road towards a tool that allows
> us to make breaking changes. At first the tool could just warn
> us. Then we could slowly add automated code transforms.
>
> Pretty sure this sort of tool has been mentioned before. This
> seems like a good use-case.

At minimum, we could use a transitional compiler flag like we have with
other DIPs.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 12, 2018 5:55:05 PM MDT Nicholas Wilson via 
Digitalmars-d-announce wrote:
> On Wednesday, 12 September 2018 at 23:36:11 UTC, Jonathan M Davis
>
> wrote:
> > On Wednesday, September 12, 2018 5:17:44 PM MDT Nicholas Wilson
> >
> > via Digitalmars-d-announce wrote:
> >> it seems that even if we were to want to have @implicit as an
> >> opposite of C++'s explicit it would _always_ be present on
> >> copy-constructors which means that @implicit for copy
> >> constructors should itself be implicit.
> >
> > Oh, yes. The whole reason it's there is the fear that not
> > requiring it would break code that currently declares a
> > constructor that would be a copy constructor if we didn't
> > require @implicit. So, if the DIP is accepted, you _could_
> > declare a constructor that should be a copy constructor but
> > isn't, because it wasn't marked with @implicit (just like you
> > can right now). If code breakage were not a concern, then
> > there's pretty much no way that @implicit would be part of the
> > DIP. Personally, I don't think that the risk of breakage is
> > high enough for it to be worth requiring an attribute for what
> > should be the normal behavior (especially when such a
> > constructor almost certainly was intended to act like a copy
> > constructor, albeit an explicit one), but Andrei doesn't agree.
>
> The bog-standard way of dealing with avoidable breakage with DIPs
> is a -dip-10xx flag. In this case, if set, it would prefer to call
> copy constructors over blit + postblit.
>
> Also adding @implicit is a backwards incompatible change to a
> codebase that wants to use it as it will cause it to fail on
> older compilers.  Even if one does :
>
> static if (__VERSION__ < 2085) // or whenever it gets implemented
>   enum implicit;
> all over the place,

I don't disagree, but it's not my decision.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 12, 2018 4:11:20 PM MDT Manu via Digitalmars-d-
announce wrote:
> On Wed, 12 Sep 2018 at 04:40, Dejan Lekic via Digitalmars-d-announce
>
>  wrote:
> > On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole
> >
> > wrote:
> > > Here is a question (that I don't think has been asked) why not
> > > @copy?
> > >
> > > @copy this(ref Foo other) { }
> > >
> > > It can be read as copy constructor, which would be excellent
> > > for helping people learn what it is doing (spec lookup).
> > >
> > > Also can we really not come up with an alternative bit of code
> > > than the tupleof to copying wholesale? E.g. super(other);
> >
> > I could not agree more. @implicit can mean many things, while
> > @copy is much more specific... For what is worth I vote for @copy
> > ! :)
>
> @implicit may be attributed to any constructor allowing it to be
> invoked implicitly. It's the inverse of C++'s `explicit` keyword.
> As such, @implicit is overwhelmingly useful in its own right.
>
> This will address my single biggest usability complaint of D as
> compared to C++. @implicit is super awesome, and we must embrace it.

Except that this DIP doesn't do anything of the sort. It specifically only
affects copy constructors. Yes, in theory, we could later extend @implicit
to do something like what you describe, but there are not currently any
plans to do so. So, @implicit makes more sense than @copy in the sense that
it's more likely to be forward-compatible (or at least, @implicit could be
reused in a sensible manner, whereas @copy couldn't be; so, if we used
@copy, we might also have to introduce @implicit later anyway), but either
way, saying that @implicit has anything to do with adding implicit
construction to D like C++ has is currently false. In fact, the DIP
specifically makes it an error to use @implicit on anything other than a
copy constructor.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 12, 2018 5:17:44 PM MDT Nicholas Wilson via 
Digitalmars-d-announce wrote:
> it seems that even if we were to want to have @implicit as an
> opposite of C++'s explicit it would _always_ be present on
> copy-constructors which means that @implicit for copy
> constructors should itself be implicit.

Oh, yes. The whole reason it's there is the fear that not requiring it would
break code that currently declares a constructor that would be a copy
constructor if we didn't require @implicit. So, if the DIP is accepted, you
_could_ declare a constructor that should be a copy constructor but isn't,
because it wasn't marked with @implicit (just like you can right now). If
code breakage were not a concern, then there's pretty much no way that
@implicit would be part of the DIP. Personally, I don't think that the risk
of breakage is high enough for it to be worth requiring an attribute for
what should be the normal behavior (especially when such a constructor
almost certainly was intended to act like a copy constructor, albeit an
explicit one), but Andrei doesn't agree.

> If at some point in the future we decide that we do want to add
> @implicit construction, then we can make the copy constructor
> always @implicit. Until that point I see no need for this,
> because it is replacing postblit which is always called
> implicitly.

Except that the whole reason that @implicit is being added is to avoid the
risk of breaking code, and that problem really isn't going to go away. So,
it's hard to see how we would ever be able to remove it. Certainly, if we
were willing to take the risks associated with it, there wouldn't be any
reason to introduce @implicit in the first place (at least not for copy
constructors).

If it were my decision, I wouldn't introduce @implicit and would risk the
code breakage (which I would expect to be pretty much non-existent, even
though it theoretically could happen), but it's not my decision.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 12, 2018 1:18:11 PM MDT Gary Willoughby via 
Digitalmars-d-announce wrote:
> On Wednesday, 12 September 2018 at 16:40:45 UTC, Jonathan M Davis
>
> wrote:
> > Ultimately, I expect that if we add any attribute for this,
> > people coming to D are going to think that it's downright
> > weird, but if we're going to have one, if we go with @implicit,
> > we're future-proofing things a bit, and personally, thinking
> > about it over time, I've found that it feels less like a hack
> > than something like @copy would. If we had @copy, this would
> > clearly forever be something that we added just because we has
> > postblit constructors first, whereas @implicit at least _might_
> > be used for something more.
>
> That does actually make a lot of sense. Isn't there any way these
> constructors could be inferred without the attribute?

As I understand it, the entire point of the attribute is to avoid any
breakage related to constructors that have a signature that matches what a
copy constructor would be. If we didn't care about that, there would be no
reason to have an attribute. Personally, I think that the concerns about
such possible breakage are overblown, because it really wouldn't make sense
to declare a constructor with the signature of a copy constructor unless you
intended to use it as one (though right now, it would have to be called
explicitly). And such code shouldn't break if "explicit" copy constructors
then become "implicit" copy constructors automatically. However, Andrei does
not believe that the risk is worth it and insists that we need a way to
differentiate between the new copy constructors and any existing
constructors that happen to look like them. So, there won't be any kind
inference here. If we were going to do that, we wouldn't be adding the
attribute in the first place.

- Jonathan M Davis





Re: Copy Constructor DIP and implementation

2018-09-12 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, September 12, 2018 10:04:57 AM MDT Elie Morisse via 
Digitalmars-d-announce wrote:
> On Wednesday, 12 September 2018 at 11:39:21 UTC, Dejan Lekic
>
> wrote:
> > On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole
> >
> > wrote:
> >> Here is a question (that I don't think has been asked) why not
> >> @copy?
> >>
> >> @copy this(ref Foo other) { }
> >>
> >> It can be read as copy constructor, which would be excellent
> >> for helping people learn what it is doing (spec lookup).
> >>
> >> Also can we really not come up with an alternative bit of code
> >> than the tupleof to copying wholesale? E.g. super(other);
> >
> > I could not agree more. @implicit can mean many things, while
> > @copy is much more specific... For what is worth I vote for
> > @copy ! :)
>
> @implicit makes sense if extending explicitly implicit calls to
> all other constructors gets somday considered. Some people argued
> for it and I agree with them that it'd be nice to have, for ex.
> to make a custom string struct type usable without having to
> smear the code with constructor calls.

That's why some argued in a previous thread on the topic that we should
decide what (if anything) we're going to do with adding implicit
construction to the language before finalizing this DIP. If we added some
sort of implicit constructor to the language, then @implicit would make some
sense on copy constructors (it's still a bit weird IMHO, but it does make
some sense when explained), and in that case, having used @copy could
actually be a problem.

If we're looking at this as an attribute that's purely going to be used on
copy constructors, then @copy does make more sense, but it's also less
flexible. @implicit could potentially be used for more, whereas @copy really
couldn't - not when it literally means copy constructor.

Personally, I'd rather that we just risk the code breakage caused by not
having an attribute for copy constructors than use either @implicit or
@copy, since it really only risks breaking code using constructors that were
intended to be copy constructors but had to be called explicitly, and that
code would almost certainly be fine if copy constructors then became
implicit, but Andrei seems unwilling to do that. But given that the word
"explicitly" naturally describes how such a constructor has to be called
today, it does at least make some sense that @implicit would be the
attribute used.

Ultimately, I expect that if we add any attribute for this, people coming to
D are going to think that it's downright weird, but if we're going to have
one, if we go with @implicit, we're future-proofing things a bit, and
personally, thinking about it over time, I've found that it feels less like
a hack than something like @copy would. If we had @copy, this would clearly
forever be something that we added just because we had postblit constructors
first, whereas @implicit at least _might_ be used for something more. It
would still feel weird and hacky if it never was used for anything more, but
at least we'd be future-proofing the language a bit, and @implicit does make
_some_ sense after it's explained, even if very few people (if any) will
initially think that it makes sense.

- Jonathan M Davis





Re: DIP Draft Reviews

2018-09-06 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, September 6, 2018 4:49:55 AM MDT Mike Parker via Digitalmars-d-
announce wrote:
> On Thursday, 6 September 2018 at 10:22:47 UTC, Nicholas Wilson
>
> wrote:
> > Put it this way: DIP1017 should not go to formal without
> > change, as it did from draft to community (which I don't think
> > should have happened without at least some acknowledgement or
> > refutation of the points raised in draft).
>
> I always ask DIP authors about unaddressed feedback before moving
> from one stage to the other, and I did so with DIP 1017 when
> moving out of Draft Review. It's entirely up to the author
> whether or not to address it and there is no requirement for DIP
> authors to respond to any feedback. I would prefer it if they
> did, especially in the Post-Community stage and later as it helps
> me with my review summaries, but 1017 is not the first DIP where
> feedback went unaddressed and I'm sure it won't be the last.

Of course, what further complicates things here is that the author is
Walter, and ultimately, it's Walter and Andrei who make the decision on
their own. And if Walter doesn't respond to any of the feedback or address
it in the DIP, it all comes across as if the DIP itself is just a formality.
The fact that he wrote a DIP and presented it for feedback is definitely
better than him simply implementing it, since it does give him the chance to
get feedback on the plan and improve upon it, but if he then doesn't change
anything or even respond to any of the review comments, then it makes it
seem kind of pointless that he bothered with a DIP. At that point, it just
serves as documentation of his intentions.

This is all in stark contrast to the case where someone other than Walter or
Andrei wrote the DIP, and the author doesn't bother to even respond to the
feedback let alone incorporate it, since they then at least still have to
get the DIP past Walter and Andrei, and if the DIP has not taken any of the
feedback into account, then presumably, it stands a much worse chance of
making it through. On the other hand, if the DIP comes from Walter or
Andrei, they only have the other person to convince, and that makes it at
least seem like there's a decent chance that it's just going to be
rubber-stamped when the DIP author doesn't even respond to feedback.

I think that it's great for Walter and Andrei to need to put big changes
through the DIP process just like the rest of us do, but given that they're
the only ones deciding what's accepted, it makes the whole thing rather
weird when a DIP comes from them.

- Jonathan M Davis





Re: Beta 2.082.0

2018-08-18 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, August 18, 2018 1:01:18 AM MDT Martin Nowak via Digitalmars-d-
announce wrote:
> On Friday, 17 August 2018 at 22:08:16 UTC, Mike Franklin wrote:
> > On Friday, 17 August 2018 at 20:01:32 UTC, Martin Nowak wrote:
> >> Glad to announce the first beta for the 2.082.0 release
> >
> > According to https://issues.dlang.org/show_bug.cgi?id=18786
> > VirusTotal used to report a virus for the installer.  This beta
> > is now reporting clean:
> > https://www.virustotal.com/#/file/dabf7c3b10ecb70025789c775756bee39bb401
> > d7ef31f5a9131ff8760450fcab/detection
> >
> > Windows Defender also reports it as clean.
>
> Good to hear that paying the certificate ransom helped making
> peace with the blackmailers ;).

Technically, I think that they're extortionists rather than blackmailers. ;)

- Jonathan M Davis





dxml 0.4.0 released

2018-08-04 Thread Jonathan M Davis via Digitalmars-d-announce
dxml 0.4.0 has now been released. It doesn't have a lot of changes, but
dxml.parser.getAttrs should be particularly useful. It's like getopt but for
attributes.

Changelog: http://jmdavisprog.com/changelog/dxml/0.4.0.html
Documentation: http://jmdavisprog.com/docs/dxml/0.4.0
Github: https://github.com/jmdavis/dxml/tree/v0.4.0
Dub: http://code.dlang.org/packages/dxml

For those who haven't seen it, here's a link to my dconf talk on dxml:

http://dconf.org/2018/talks/davis.html

- Jonathan M Davis



Re: The dub documentation is now on dub.pm

2018-07-19 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, July 19, 2018 13:51:25 Seb via Digitalmars-d-announce wrote:
> As you mentioned SDL already supports comments. JSON support was
> actually only added because too many people complained about SDL
> not being properly supported.

Actually, originally, dub only supported json. sdl came later. However,
there was a lot of push back when Sonke made sdl the default, so json not
only didn't get phased out, but it became the default again.
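For reference, the same minimal package description in both formats (the
package name and dependency here are placeholders, not from this thread):

```
// dub.sdl - SDLang, which supports comments like this one
name "myapp"
dependency "vibe-d" version="~>0.8.0"
```

```
{
    "name": "myapp",
    "dependencies": {
        "vibe-d": "~>0.8.0"
    }
}
```

JSON has no comment syntax, which is one of the arguments that came up for
preferring dub.sdl.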

- Jonathan M Davis



Re: I have a plan.. I really DO

2018-07-01 Thread Jonathan M Davis via Digitalmars-d-announce
On Sunday, July 01, 2018 13:37:32 Ecstatic Coder via Digitalmars-d-announce 
wrote:
> On Sunday, 1 July 2018 at 12:43:53 UTC, Johannes Loher wrote:
> > Am 01.07.2018 um 14:12 schrieb Ecstatic Coder:
> >> Add a 10-liner "Hello World" web server example on the main
> >> page and that's it.
> >
> > There already is one in the examples:
> >
> > #!/usr/bin/env dub
> > /+ dub.sdl:
> > name "hello_vibed"
> > dependency "vibe-d" version="~>0.8.0"
> > +/
> > void main()
> > {
> >
> > import vibe.d;
> > listenHTTP(":8080", (req, res) {
> >
> > res.writeBody("Hello, World: " ~ req.path);
> >
> > });
> > runApplication();
> >
> > }
>
> Yeah I know, guess who asked for it...
>
> But the last step, which is including such functionality into the
> standard library , will never happen, because nobody here seems
> to see the point of doing this.
>
> I guess those who made that for Go and Crystal probably did it
> wrong.
>
> What a mistake they did, and they don't even know they make a
> mistake, silly them... ;)

What should and shouldn't go in the standard library for a language is
something that's up for a lot of debate and is likely to often be a point of
contention. There is no clear right or wrong here. Languages that have had
very sparse standard libraries have done quite well, and languages that have
had kitchen sink libraries have done quite well. There are pros and cons to
both approaches.

- Jonathan M Davis



Re: Encouraging preliminary results implementing memcpy in D

2018-06-14 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, June 14, 2018 16:04:32 Nick Sabalausky  via Digitalmars-d-
announce wrote:
> On 06/14/2018 05:01 AM, AnotherTorUser wrote:
> > If all such people stopped working for such companies, what do you think
> > the economic impact would be?
>
> What do you think is the social impact if they don't? And don't even try
> to pretend the companies can't trivially solve the "economic" issues for
> themselves in an instant by knocking off the behaviour that causes loss
> of talent.

But that would imply that they have a frontal lobe. :)

In all seriousness, it is surprising how frequently companies seem to be
incapable of making decisions that would fix a lot of their problems, and
they seem to be incredibly prone to thinking about things in a shortsighted
manner.

I'm reminded of an article by Joel Spolsky where he talks about how one of
the key things that a source control software solution can do to make it
more likely for folks to be willing to try it is to make it easy to get your
source code and history back out again and into another source control
system. However, companies typically freak out at the idea of making it easy
to switch from their product to another product. They're quite willing to
make it easy to switch _to_ their product so that they can start making
money off of you, but the idea that making it low cost to leave could
actually improve the odds of someone trying their product - and thus
increase their profits - seems to be beyond them.

Another case which is closer to the exact topic at hand is that many
companies seem to forget how much it costs to hire someone when they
consider what they should do to make it so that their employees are willing
- or even eager - to stay. Spending more money on current employees (be that
on salary or something else to make the workplace desirable) or avoiding
practices that tick employees off so that they leave can often save money in
the long run, but companies frequently ignore that fact. They're usually
more interested in saving on the bottom line right now than making decisions
that save money over time.

So, while I completely agree that companies can technically make decisions
that solve some of their problems with things like retaining talent, it
seems like it's frequently the case that they're simply incapable of doing
it in practice - though YMMV; some companies are better about it than
others.

- Jonathan M Davis



Re: Encouraging preliminary results implementing memcpy in D

2018-06-14 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, June 13, 2018 14:34:28 Uknown via Digitalmars-d-announce 
wrote:
> Looks very promising. One question though, why not use
> std.datetime.stopwatch.benchmark? I think it has some protection
> against optimizing compilers (I may be wrong though). Also, LDC
> has attributes to control optimizations on a per function
> basis.see : https://wiki.dlang.org/LDC-specific_language_changes

Unfortunately, std.datetime.stopwatch.benchmark does not yet have such
protections. It has been discussed, but there were issues with it that still
need to be sorted out.

In any case, what he has implemented is pretty much what's in Phobos except
for the fact that he set up his to take arguments, whereas Phobos' solution
just takes the function(s) to call, so anything that it does has to be
self-contained.
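For reference, Phobos' benchmark takes only the functions to call and a run
count, which is why the timed work has to be self-contained (a minimal
sketch; actual timings will vary):

```d
import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

void main()
{
    static void copyLoop()
    {
        // The operation being timed must be set up entirely inside
        // the function, since benchmark passes it no arguments.
        ubyte[256] src, dst;
        dst[] = src[];
    }

    // Run copyLoop 10,000 times; benchmark returns one total Duration
    // per function passed to it.
    auto results = benchmark!copyLoop(10_000);
    writeln(results[0]);
}
```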

- Jonathan M Davis



Re: SecureD moving to GitLab

2018-06-09 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, June 09, 2018 04:03:40 Nick Sabalausky  via Digitalmars-d-
announce wrote:
> On 06/09/2018 01:47 AM, Walter Bright wrote:
> > Oh, employers do try that. I would negotiate what is mine and what is
> > the company's, before signing. In particular, I'd disclose all projects
> > I'd worked on before, and get a specific acknowledgement that those were
> > not the company's. When I'd moonlight, before I'd do so, I'd describe
> > the project on a piece of paper and get acknowledgement from the company
> > that it is not their project.
> >
> > And I never had any trouble about it.
> >
> > (These days, life is a bit simpler. One thing I like about Github is the
> > software is all date stamped, so I could, for instance, prove I wrote it
> > before joining company X.)
>
> Maybe naive, maybe not, but my policy is that: Any hour of any day an
> employer claims ***ANY*** influence over, must be paid for ($$$) by said
> employer when attempting to make ANY claim on that hour of my life.
> Period.
>
> There are already far too many
> would-be-slavedrivers^H^H^H^H^H^H^employers who attempt to stake claim
> to the hours of a human being's life WHICH THEY DO *NOT* COMPENSATE FOR.
>
> If an employer *does not* pay me for an hour of my life which they
> *claim control over*, then the employer WILL NOT BE MY EMPLOYER. Period.
>
> If others held themselves to the same basic standards, then nobody in
> the world would ever be slave^H^H^H^H^Hpersonal-property to a business
> which makes claim to a human life without accepted compensation.

Well, the actual legal situation doesn't always match what it arguably
should be, and anyone working on salary doesn't technically get paid for any
specific hours. So, that sort of argument doesn't necessarily fly.

Also, there _is_ potentially a legitimate concern on the part of the
employer. If you use your free time to write the same sort of stuff that you
write for work, you're potentially using their IP. In particular, they
really don't want you going home and writing a competing product using all
of the knowledge you got working for them. And legally, attempting to do
anything like that (in the US at least) will almost certainly get you in
legal trouble if your employer finds out.

The real problem is when employers try to claim anything unrelated to your
job that you do in your free time. _That_ is completely inappropriate, but
some employers try anyway, and depending on which state you live in and what
you signed for the company, they may or may not be able to come after you
even if it's ridiculous for them to be able to.

- Jonathan M Davis



Re: GitHub could be acquired by Microsoft

2018-06-07 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, June 07, 2018 20:02:31 Russel Winder via Digitalmars-d-announce 
wrote:
> On Thu, 2018-06-07 at 10:17 -0700, H. S. Teoh via Digitalmars-d-announce
>
> wrote:
> > […]
> >
> > Exactly!!!  Git was built precisely for decentralized, distributed
> > development.  Anyone should be (and is, if they bothered to put just a
> > tiny amount of effort into it) able to set up a git server and send the
> > URL to prospective collaborators.  Anyone is free to clone the git repo
> > and redistribute that clone to anyone else.  Anyone can create new
> > commits in a local clone and send the URL to another collaborator who
> > can pull the commits.  It should never have become the tool to build
> > walled gardens that inhibit this free sharing of code.
>
> I think there is an interesting tension between using a DVCS as a DVCS and
> no central resource, and thus no mainline version, and using a DVCS in
> combination with a central resource.  In the latter category the central
> resource may just be the repository acting as the mainline, or, as with
> GitHub, GitLab, Launchpad, the central resource provides sharing and
> reviewing support.
>
> Very few organisations, except perhaps those that use Fossil, actually use
> DVCS as a DVCS. Everyone seems to want a public mainline version: the
> repository that represents the official state of the project. It seems
> the world is not capable of working with a DVCS system that does not even
> support "eventually consistent". Perhaps because of lack of trying or
> perhaps because the idea of the mainline version of a project is
> important to projects.
>
> In the past Gnome, Debian, GStreamer, and many others have had a central
> mainline Git repository and everything was handled as DVCS, with emailed
> patches. They tended not to support using remotes and merges via that
> route, not entirely sure why. GitHub and GitLab supported forking,
> issues, pull requests, and CI. So many people have found this useful. Not
> just for having ready made CI on PRs, but because there was a central
> place that lots of projects were at, there was lots of serendipitous
> contribution. Gnome, Debian, and GStreamer are moving to private GitLab
> instances. It seems the use of a bare Git repository is not as appealing
> to these projects as having the support of a centralised system.
>
> I think that whilst there are many technical reasons for having an element
> of process support at the mainline location favouring the GitHubs and
> GitLabs of this Gitty world, a lot of it is about the people and the
> social system: there is a sense of belonging, a sense of accessibility,
> and being able to contribute more easily.
>
> One of the aspects of the total DVCS is that it can exclude, it is in
> itself a walled garden, you have to be in the clique to even know the
> activity is happening.
>
> All of this is not just technical, it is socio-technical.

Honestly, I don't see how it makes sense to release any software without a
definitive repo. Decentralized source control systems like git are great in
that they allow you to have your own fork and do things locally without
needing to talk to any central repo and because having folks be able to fork
and muck around with stuff easily is incredibly valuable. But actually
releasing software that way is a bit of a mess, and there usually needs to
be a main repo where the official version of stuff goes. So, the
decentralization is great for collaboration, and it removes the need to
communicate with the main repo when you don't actually need to, but it
really doesn't remove the need for a central repository for the official
version of the project.

Whether that central repo needs to be somewhere like github or gitlab or
bitbucket or whatever is another matter entirely, but ultimately, I think
that the main benefit of a DVCS is that it removes the dependency on the
central repo from any operations that don't actually need the central repo,
not that it removes the need for a central repo, because it really doesn't -
not if you want to be organized about releases anyway.

- Jonathan M Davis




Re: SecureD moving to GitLab

2018-06-05 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, June 05, 2018 19:15:12 biocyberman via Digitalmars-d-announce 
wrote:
> On Tuesday, 5 June 2018 at 11:09:31 UTC, Jonathan M Davis wrote:
> > [...]
>
> Very informative. I don't live in the US, but this gives me a
> feeling of how tough life can be over there for everyone, except
> lawyers.

Fortunately, it's not usually a problem, but it's something that any
programmer who writes code in their free time has to be aware of. In most
cases, if you have a reasonable employer, you can do whatever programming
you want in your free time so long as it's not related to what you work on
at work. But it is occasionally a problem.

- Jonathan M Davis



Re: SecureD moving to GitLab

2018-06-05 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, June 05, 2018 10:34:54 ExportThis via Digitalmars-d-announce 
wrote:
> On Tuesday, 5 June 2018 at 06:55:42 UTC, Joakim wrote:
> > This reads like a joke, why would it matter if you contributed
> > to open source projects on an open platform that your employer
> > runs?
>
> If you read between the lines, you can 'kinda' get the message.
>
> A Microsoft employee.
> A Microsoft platform.
> Encryption.
> U.S Export Controls.
>
> How they all come together is anyones guess though ;-)
>
> That's why we have lawyers.

Given that he works on SecureD, that could be part of it, but I don't think
that exporting encryption is the problem that it once was in the US, and I'd
think that the issue was more likely related to what Microsoft can claim to
own. In general in the US, if your employer can claim that what you're doing
in your free time is related to what you do for work, then they can claim
that they own it. And if you're in a state with fewer employee protections,
they can even claim to own everything you do in your free time regardless of
whether it really has anything to do with any company intellectual property
(e.g. a coworker at a previous company told me of a colleague who had gone to
work at Bloomberg in NY after the division he was in was laid off, but he
quit Bloomberg soon thereafter, because Bloomberg was going to claim to own
everything he did in his free time - and he was a Linux kernel developer, so
that would have caused serious problems for him). What paperwork you signed
for your employer can also affect this. So, the exact situation you're in
can vary wildly depending on where you live, who you work for, what exactly
you do at work, and what exactly you do in your free time. If you want to
sort out exactly what situation you're in, you do potentially need to see a
lawyer about it.

That whole set of issues may or may not be why Adam is moving his stuff to
gitlab, but it does mean that you have to tread carefully when doing
anything that relates at all to your employer or what you do for work. So, I
can easily see it as a good idea to avoid doing anything in your free time
with a site that is owned or operated by your employer. It may or may not
actually be necessary, but playing it safe can avoid legal problems down the
road, and typically, employees are going to have a _very_ hard time winning
against employers in court, even if the employee is clearly in the right,
simply because the legal fees stand a good chance of destroying the employee
financially, whereas the employer can typically afford it. You simply don't
want to be in a situation where your employer ever might try and do anything
to you with the legal system - and of course, you don't want to be in a
position where your employer fires you. So, an abundance of caution is
sometimes warranted even if it arguably shouldn't need to be.

- Jonathan M Davis



Re: GitHub could be acquired by Microsoft

2018-06-04 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, June 04, 2018 14:51:24 Kagamin via Digitalmars-d-announce wrote:
> On Monday, 4 June 2018 at 05:50:26 UTC, Anton Fediushin wrote:
> > I can think of hundreds of things what can go wrong including:
> > forcing users to use Microsoft accounts
>
> That didn't happen to skype yet.
> MS recently tries to mend its reputation, though the past will
> linger for a while.

In many respects, they're better behaved than they used to be. Their
biggest problems seem to have to do with what they're doing with Windows
(e.g. tracking what you're doing and not letting you turn it off). It's
certainly not desirable that they bought github, but it probably won't have
any obvious effects for a while. The biggest concerns probably have to do
with collecting data on users, and github was doubtless doing that
already.

- Jonathan M Davis



Re: GitHub could be acquired by Microsoft

2018-06-03 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, June 04, 2018 03:51:15 Anton Fediushin via Digitalmars-d-announce 
wrote:
> This is still just a rumour, we'll know the truth on Monday
> (which is today).
>
> Some articles about the topic:
>
> https://fossbytes.com/microsoft-github-aquisition-report/
> https://www.theverge.com/2018/6/3/17422752/microsoft-github-acquisition-ru
> mors
>
> What's your opinion about that? Will you continue using GitHub?
>
> Both GitLab and Bitbucket can be used instead to host your D
> projects - dub registry supported them for a while now.
>
> IMHO Microsoft isn't the type of company I want to see behind the
> GitHub. Maybe I am wrong since Microsoft has both money and
> programmers to improve it further, I just don't trust them too
> much which is the right thing to do when dealing with companies.
> This means that I will move my repositories elsewhere and use
> GitHub just to contribute to other projects.

It would increase the odds that I would put public repos on bitbucket
(that's already where I put all of my private repos, since it's free there
but not at github). But I don't know. The main reason to go with github is
because it's the main place that folks go looking for open source projects,
and anyone who finds you on github (be it through the official dlang
projects or something else) will find your stuff on github that way but not
the stuff that you have elsewhere. On some level, what you have on github
serves as a resume in a way that it wouldn't with other sites (especially if
folks are finding you online rather than you pointing to your repos in your
actual resume).

I can't say that I'll be happy if Microsoft buys github (just like I wasn't
happy when they bought linkedin), but I also can't say for sure that it will
change what I do. A lot of that will depend on what Microsoft does and how
it affects the open source community at large. The odds of my hosting stuff
elsewhere definitely then go up, but I don't know what I'll do. This is bad
news but likely not catastrophic.

On the bright side, maybe this will encourage online repo hosting to become
less of a monopoly as folks move elsewhere due to their concerns about
Microsoft.

- Jonathan M Davis



Re: serialport v1.0.0

2018-05-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Sunday, May 13, 2018 17:57:56 Andre Pany via Digitalmars-d-announce 
wrote:
> On Sunday, 6 May 2018 at 22:02:05 UTC, Oleg B wrote:
> > Stable version of serialport package
> >
> > * Blocking `SerialPortBlk` for classic usage
> >
> > * Non-blocking `SerialPortNonBlk` and `SerialPortFR` for usage
> > in fibers or in vibe-d
> >
> > * Variative initialization and configuration
> >
> > * Hardware flow control config flag
> >
> > Doc: http://serialport.dpldocs.info/v1.0.0/serialport.html
> > Dub: http://code.dlang.org/packages/serialport
> > Git: https://github.com/deviator/serialport
>
> Thanks for this library. The announcement is at the right time as
> I want to write a smart home application to control my shutters.
> The application will run on a raspberry pi (ftdi sub stick).


So, now we'll be able to hack your shutters? ;)

> Do you thought about including your library into phobos? A std
> library really should contain this functionality.

Really? If the consensus is that it should go in, then okay, but I don't
think that I've ever seen a standard library with anything like
functionality for talking to serial ports. And what would having it be in
Phobos buy you over just grabbing it from code.dlang.org?

- Jonathan M Davis



Re: iopipe v0.0.4 - RingBuffers!

2018-05-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, May 11, 2018 11:44:04 Steven Schveighoffer via Digitalmars-d-
announce wrote:
> Shameful note: Macos grep is BSD grep, and is not NEARLY as fast as GNU
> grep, which has much better performance (and is 2x as fast as
> iopipe_search on my Linux VM, even when printing line numbers).

Curiously, the grep on FreeBSD seems to be GNU's grep with some additional
patches, though I expect that it's a ways behind whatever GNU is releasing
now, because while they were willing to put some GPLv2 stuff in FreeBSD,
they have not been willing to have anything to do with GPLv3. FreeBSD's grep
claims to be version 2.5.1-FreeBSD, whereas ports has the gnugrep package
which is version 2.27, so that implies a fairly large version difference
between the two. I have no idea how they compare in terms of performance.
Either way, I would have expected FreeBSD to be using their own
implementation, not something from GNU, especially since they seem to be
trying to purge GPL stuff from FreeBSD. So, the fact that FreeBSD is using
GNU's grep is a bit surprising. If I had to guess, I would guess that they
switched to the GNU version at some point in the past, because it was easier
to grab it than to make what they had faster, but I don't know. Either way,
it sounds like Mac OS X either didn't take its grep from FreeBSD in this
case, or it took it from an older version before FreeBSD switched to
using GNU's grep.

- Jonathan M Davis


Re: dxml 0.3.0 released

2018-04-20 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, April 20, 2018 16:07:06 Jesse Phillips via Digitalmars-d-announce 
wrote:
> On Friday, 20 April 2018 at 00:46:38 UTC, Jonathan M Davis wrote:
> > Yes. I would have thought that that was clear. It throws if any
> > of the characters or sequence of characters in the argument
> > aren't legal in the text portion of an XML document. Those
> > characters that can be legally present in encoded form but not
> > in their literal form can be encoded first with encodeText.
> > I'll try to make the documentation clearer.
>
> I think I just feel it is a little hidden under the exception
> list. A note in the general description about utilizing
> encodeText on text which needs encoding would be good.

Well, for better or worse, it now mentions it directly in the Throws
section.

- Jonathan M Davis



Re: dxml 0.3.0 released

2018-04-20 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, April 20, 2018 08:45:45 Dejan Lekic via Digitalmars-d-announce 
wrote:
> On Thursday, 19 April 2018 at 14:40:58 UTC, Jonathan M Davis
>
> wrote:
> > Well, since I'm going to be talking about dxml at dconf, and
> > it's likely that I'll be talking about stuff that was not in
> > the 0.2.* releases, it seemed like I should get a new release
> > out before dconf. So, here it is.
> >
> > dxml 0.3.0 has now been released.
> >
> > I won't repeat everything that's in the changelog, but the
> > biggest changes are that writer support has now been added, and
> > it's now possible to configure how the parser handles
> > non-standard entity references.
> >
> > Please report any bugs that you find via github.
> >
> > Changelog: http://jmdavisprog.com/changelog/dxml/0.3.0.html
> > Documentation: http://jmdavisprog.com/docs/dxml/0.3.0/
> > Github: https://github.com/jmdavis/dxml/tree/v0.3.0
> > Dub: http://code.dlang.org/packages/dxml
> >
> > - Jonathan M Davis
>
> I am happy to see dxml moving on!
> Jonathan, are the interfaces in the dom module generated from the
> IDL code from W3C?

No, that's not something that I'm familiar with. I just made up the API
based on what made sense to me. I basically took the API that
EntityRange.Entity has and morphed it into what made sense for a tree
structure.

- Jonathan M Davis



Re: dxml 0.3.0 released

2018-04-19 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, April 19, 2018 23:00:03 Jesse Phillips via Digitalmars-d-
announce wrote:
> On Thursday, 19 April 2018 at 14:40:58 UTC, Jonathan M Davis
>
> wrote:
> > I won't repeat everything that's in the changelog, but the
> > biggest changes are that writer support has now been added, and
> > it's now possible to configure how the parser handles
> > non-standard entity references.
>
> In reference to
> http://jmdavisprog.com/docs/dxml/0.3.0/dxml_writer.html#.XMLWriter.writeText
>
> "XMLWritingException if the given text is not legal in the text
> portion of an XML document."
>
> Is this to say that the text must be encoded
> (dxml.util.encodeText) prior to calling this or it will throw if
> the text contains "<"?
>
> This should be clearer in the documentation.

Yes. I would have thought that that was clear. It throws if any of the
characters or sequence of characters in the argument aren't legal in the
text portion of an XML document. Those characters that can be legally
present in encoded form but not in their literal form can be encoded first
with encodeText. I'll try to make the documentation clearer.
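For illustration, something like this is what I mean (a rough sketch based on
the 0.3.0 docs; the exact way of constructing the writer via xmlWriter and an
appender is from memory, so treat that part as an assumption):

```d
import std.array : appender;
import dxml.util : encodeText;
import dxml.writer : xmlWriter;

void main()
{
    auto writer = xmlWriter(appender!string());
    writer.writeStartTag("msg");

    // writeText throws XMLWritingException if the text contains characters
    // such as '<' that aren't legal in literal form in the text portion of
    // an XML document, so encode the text first:
    writer.writeText("x < y & y < z".encodeText());

    writer.writeEndTag("msg");
}
```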

- Jonathan M Davis



Re: dxml 0.3.0 released

2018-04-19 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, April 19, 2018 17:21:15 Suliman via Digitalmars-d-announce 
wrote:
> Am I right remember that this lib is planed to be included as
> std.xml replacement?

It is a potential candidate to replace std.xml. It is currently the plan
that once I feel that it's complete enough and battle-tested enough, I will
submit it to the Phobos review process, after which it may or may not end up
in Phobos. That will depend entirely on how that process goes.

The only other potential candidate that I'm aware of was a GSoC project that
stalled after the student disappeared (presumably, he got busy with school
again and never got back to it), and it shows no sign of ever being
completed.

- Jonathan M Davis



dxml 0.3.0 released

2018-04-19 Thread Jonathan M Davis via Digitalmars-d-announce
Well, since I'm going to be talking about dxml at dconf, and it's likely
that I'll be talking about stuff that was not in the 0.2.* releases, it
seemed like I should get a new release out before dconf. So, here it is.

dxml 0.3.0 has now been released.

I won't repeat everything that's in the changelog, but the biggest changes
are that writer support has now been added, and it's now possible to
configure how the parser handles non-standard entity references.

Please report any bugs that you find via github.

Changelog: http://jmdavisprog.com/changelog/dxml/0.3.0.html
Documentation: http://jmdavisprog.com/docs/dxml/0.3.0/
Github: https://github.com/jmdavis/dxml/tree/v0.3.0
Dub: http://code.dlang.org/packages/dxml

- Jonathan M Davis



Re: DIP 1009 (Add Expression-Based Contract Syntax) Accepted

2018-04-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, April 11, 2018 07:47:14 H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Tue, Apr 10, 2018 at 11:43:00PM -0600, Jonathan M Davis via
> Digitalmars-d-announce wrote: [...]
>
> > IMHO, for contracts to be worth much outside of the inheritance case,
> > we'd need to do something like make it so that contracts are compiled
> > in based on whether the caller used -release or not rather than
> > whether the callee did.
>
> This is what should have been done in the first place, and I'd argue
> that this is the direction we should be going in. The current
> implementation of contracts greatly diminish their value, though
> personally I'd still use them because they convey intent better than
> just sticking a bunch of asserts at the top of the function body.
>
> > If that were done, then there would be real value in using contracts,
> > and I'd be a lot more excited about the new syntax. As it is, it seems
> > like a nice improvement that's ultimately pointless.
>
> [...]
>
> I consider this as a first step in improving DbC support in D.  The next
> step is to make it so that in-contracts are enforced on the caller's
> side rather than the callee's side.  IIRC, the original version of this
> DIP included something to this effect, but it was eventually taken off
> in order to stay more focused in scope so that the chances of acceptance
> would be higher.  But I hope that eventually a future DIP would address
> this more fundamental and important issue.

If we actually end up with a language improvement that makes it so that
contracts are compiled in based on the caller instead of the callee, then
I'll start using contracts. Until then, I'm not generally going to bother.

And that reminds me, I was considering putting together a DIP to fix the
situation with invariants and void initialization. Thanks to the fact that
opAssign checks the state of the object prior to assigning it, you basically
can't use invariants with anything that you would void initialize, which
means that I basically never use invariants, and unlike in and out
contracts, invariants are actually a huge boon when they're appropriate,
since they insert checks with _every_ public function call, which would be a
royal pain to do by hand. Because of this issue, I'd previously argued that
opAssign should not check the state of the object before assigning it, but
Walter rejected that, and in rare cases, you actually do care about the
state of the object before assigning it, so that makes some sense, but it's
a huge problem when void initialization gets involved. So, I was thinking
that maybe we should have a way to indicate at the call site that an
assignment should not call the invariant prior to calling opAssign in that
specific case. But I haven't gotten much past that in figuring it out, since
it's not all that high on my priority list. It's really annoying if you use
invariants, but my solution has been to just not use them, so it's a problem
but not one that actively gets in my way at the moment. It's just that I
then lose out on invariants. :|
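To make the problem concrete, here's a minimal sketch (the struct and its
members are made up for illustration, and whether the assignment actually
trips the invariant at runtime depends on the build, since invariants are
only compiled in without -release):

```d
struct Fraction
{
    int num;
    int den = 1;

    // Checked on entry and exit of public member functions -- including
    // before opAssign runs, which is the crux of the problem.
    invariant { assert(den != 0, "denominator must not be 0"); }

    void opAssign(Fraction rhs)
    {
        num = rhs.num;
        den = rhs.den;
    }
}

void main()
{
    Fraction f = void;      // den is left as garbage -- possibly 0
    // f = Fraction(1, 2);  // the invariant runs on the garbage state of f
                            // *before* opAssign assigns it, so this can fail
                            // even though the assignment would make f valid
}
```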

- Jonathan M Davis



Re: #include C headers in D code

2018-04-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, April 11, 2018 09:23:29 Kagamin via Digitalmars-d-announce 
wrote:
> On Monday, 9 April 2018 at 19:36:23 UTC, Atila Neves wrote:
> > If you add to all that "No, really, it's ok, there's this
> > project that forked one of the compilers. No, it's not the
> > reference compiler. There's just this bit of non-standard
> > syntax to learn that's neither C nor D", then the chances of
> > convincing any "normal" engineer are 0.
>
> It is the reference compiler though (which is the frontend), the
> backend is different, but they don't want the dmc backend do
> they? Also recently it started to use pragma and import syntax,
> which are both D.

Yes, the frontend is shared, but you don't just use the frontend. You use
the whole compiler. dmd is the reference compiler and what your average
programmer coming to D is going to expect to be using (at least initially).
And telling folks that they have to use a language feature that is not
supported by the reference compiler is not going to go over well with a lot
of people. It would be one thing to tell them that they should use ldc,
because it generates faster code. That doesn't involve forking the language.
Your code would then still work just fine with dmd. It would just be slower.
It's quite another thing to tell them to use a feature that dmd doesn't
support. That _would_ be forking the language, and it would mean writing
programs that would not work with the reference compiler. Many folks are not
going to be happy with the idea of using a fork rather than the real deal.
Some folks will probably be fine with it, but in general, it just plain
looks bad.

It's one thing for someone who is familiar with D to weigh the options and
decide that being tied to ldc is okay. It's quite another to tell someone
who isn't familiar with D that in order to use D, they have to use a feature
which only works with a specific compiler that is not the reference compiler
and which will likely never work with the reference compiler.

- Jonathan M Davis



Re: DIP 1009 (Add Expression-Based Contract Syntax) Accepted

2018-04-10 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, April 11, 2018 05:23:58 really? via Digitalmars-d-announce 
wrote:
> On Friday, 6 April 2018 at 17:36:20 UTC, H. S. Teoh wrote:
> > Yeah, I think having expression syntax will make contracts more
> > readable.  We'll just have to see.
>
> Sorry, but I fail to see how (1) is more readable than (2)
>
> (1)
> in(s.length > 0, "s must not be empty")
>
> (2)
> in { assert(s.length > 0, "s must not be empty"); }
>
>
> In (1) The assert .. is removed.
> In (1) The scope indicators {} .. are removed.
> In (1) The semicolon..is removed.
>
> Removing all these things equates to being more readable??
>
> Sure, it makes it more concise, but more readable??
>
> I assert that it does not. But now..do I use the assert keyword..
> or not? Do I end with semicolon..or not??
>
> This just removes things that are still needed elsewhere in your
> code, but now... you have to remember that sometimes you need
> those things, and sometimes you don't.
>
> Better to have consistency over conciseness
>
> so glad to hear that existing syntax will remain.
> (well, till someone decides that needs to go too)

Many have considered the verboseness of contracts to be a major reason to
avoid them. The newer syntax will help with that in the cases where all you
need is a series of assertions. However, regardless of how anyone feels
about the new syntax, there are cases where you need more than just a series
of assertions (e.g. you need to declare one or more variables to use in the
assertions). The older syntax is required for such cases, and it would make
no sense to remove it even if we didn't care about avoiding code breakage.
So, if you prefer the older syntax, then feel free to use it, even if the
newer syntax will work. You'll be stuck reading the newer syntax in the code
of anyone who prefers the newer syntax, so you can't necessarily avoid
dealing with it, but you're not going to be forced to switch to the newer
syntax if you don't want to.

Personally, I think that new syntax is very straightforward. It may take
some getting used to, but it's basically the same syntax as an assertion
except that it has a different keyword, and because it's not a statement, it
doesn't need a semicolon. It makes sense in its context, and ultimately, I
don't think that it's really going to be a readability problem.
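For reference, the two syntaxes side by side on a full function (the function
itself is made up; the contract syntax is per DIP 1009):

```d
import std.algorithm.searching : findSplitBefore;

// Old, block-based contracts:
string firstWord(string s)
in { assert(s.length > 0, "s must not be empty"); }
out(r) { assert(r.length <= s.length); }
do
{
    return s.findSplitBefore(" ")[0];
}

// New, expression-based contracts -- no assert, no braces, no semicolon,
// and no do before the function body:
string firstWordNew(string s)
    in (s.length > 0, "s must not be empty")
    out (r; r.length <= s.length)
{
    return s.findSplitBefore(" ")[0];
}
```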

That being said, I'm probably still not going to bother with contracts
simply because I don't see any real benefit over just putting assertions
inside the function except in the cases where inheritance is involved. I
find it a lot more tolerable than the old syntax, but I still find it to be
pointless so long as contracts are the same thing as putting assertions
inside the function (except when inheritance is involved). IMHO, for
contracts to be worth much outside of the inheritance case, we'd need to do
something like make it so that contracts are compiled in based on whether
the caller used -release or not rather than whether the callee did. If that
were done, then there would be real value in using contracts, and I'd be a
lot more excited about the new syntax. As it is, it seems like a nice
improvement that's ultimately pointless.

- Jonathan M Davis



Re: DIP 1009 (Add Expression-Based Contract Syntax) Accepted

2018-04-06 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, April 06, 2018 08:00:42 H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Fri, Apr 06, 2018 at 12:26:36PM +, Mike Parker via Digitalmars-d-
announce wrote:
> > Congratulations to Zach Tollen and everyone who worked on DIP 1009. It
> > took a painful amount of time to get it through the process, but it
> > had finally come out of the other side with an approval.
>
> WOOHOO Just this week, I've started to wonder whatever happened to
> this DIP.  So happy to hear it's approved!!  Finally, sane contract
> syntax!

It definitely improves the syntax, but I confess that I still don't see much
point in using contracts outside of virtual functions. Everywhere else, the
behavior is the same if you just put assertions at the top of the function.
Now, if the contracts ended up in the documentation or something - or if it
were actually changed so that contracts were compiled in based on how the
caller were compiled rather than the callee - then maybe having an actual
contract would make sense, but as it stands, I don't see the point.

- Jonathan M Davis



Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side

2018-03-23 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, March 24, 2018 00:51:07 Tony via Digitalmars-d-announce wrote:
> On Saturday, 24 March 2018 at 00:12:23 UTC, Jonathan M Davis
>
> wrote:
> > On Friday, March 23, 2018 22:42:34 Tony via
> >
> > Digitalmars-d-announce wrote:
> >> On Friday, 23 March 2018 at 22:32:50 UTC, H. S. Teoh wrote:
> >> > On Fri, Mar 23, 2018 at 09:45:33PM +, Tony via
> >> >
> >> > Digitalmars-d-announce wrote:
> >> >> On Friday, 23 March 2018 at 20:43:15 UTC, H. S. Teoh wrote:
> >> >> > On Friday, 23 March 2018 at 19:56:03 UTC, Steven
> >> >> >
> >> >> > Schveighoffer wrote:
> >> >> > > I've worked on a project where the testing was separated
> >> >> > > from the code, and it was a liability IMO. Things would
> >> >> > > get missed and not tested properly.
> >> >>
> >> >> That's where Test Driven Development comes in.
> >> >
> >> > That's not an option when you have an existing codebase that
> >> > you have to work with.  You basically have to start out with
> >> > tons of code and no tests, and incrementally add them.
> >> > Having to also maintain a separate test tree mirroring the
> >> > source tree is simply far too much overhead to be worth the
> >> > effort.
> >>
> >> I think that you could "Test Driven Develop" the code you are
> >> adding or changing.
> >
> > Insisting on writing the tests before writing the code doesn't
> > help with the kind of situation that H. S. Teoh is describing.
> > And arguably it exacerbates the problem. Regardless, it doesn't
> > help when the code has already been written.
>
> I don't see how it exacerbates it and I don't see how it doesn't
> help. The point of Test-Driven Development it to make sure you
> have written a test for all your code. You can also do
> test-driven development in unittest blocks.

TDD makes it worse, because when you do TDD, you're constantly hopping
between your code and the tests instead of just writing the code and then
writing the tests. That still involves some back and forth, but it avoids
having to hop back and forth while you're still designing the function,
whereas with TDD, you have to go change the tests any time you change the
function before the function is even done. But whatever. If you prefer TDD,
then use it. A number of us have nothing good to say about TDD. You'd pretty
much have to hold a gun to my head before I'd ever do anything along those
lines. I'm all for writing tests, but I hate TDD.

Regardless, having the tests right next to the functions reduces how much
hopping around you have to do to edit the code and tests, because you have
everything in one place instead of in separate files.

> But as far as whether or not it can be done with maintenance
> code, my original reply that mentioned it was to someone who
> appeared to be talking about a new project not getting everything
> tested, not a maintenance project. So saying "can't do it for
> maintenance" doesn't even apply to my reply.

You were replying to H. S. Teoh talking about adding tests to an existing
project, in which case, it's very much about maintenance.

- Jonathan M Davis



Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side

2018-03-23 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, March 23, 2018 22:42:34 Tony via Digitalmars-d-announce wrote:
> On Friday, 23 March 2018 at 22:32:50 UTC, H. S. Teoh wrote:
> > On Fri, Mar 23, 2018 at 09:45:33PM +, Tony via
> >
> > Digitalmars-d-announce wrote:
> >> On Friday, 23 March 2018 at 20:43:15 UTC, H. S. Teoh wrote:
> >> > On Friday, 23 March 2018 at 19:56:03 UTC, Steven
> >> >
> >> > Schveighoffer wrote:
> >> > > I've worked on a project where the testing was separated
> >> > > from the code, and it was a liability IMO. Things would
> >> > > get missed and not tested properly.
> >>
> >> That's where Test Driven Development comes in.
> >
> > That's not an option when you have an existing codebase that
> > you have to work with.  You basically have to start out with
> > tons of code and no tests, and incrementally add them.  Having
> > to also maintain a separate test tree mirroring the source tree
> > is simply far too much overhead to be worth the effort.
>
> I think that you could "Test Driven Develop" the code you are
> adding or changing.

Insisting on writing the tests before writing the code doesn't help with the
kind of situation that H. S. Teoh is describing. And arguably it exacerbates
the problem. Regardless, it doesn't help when the code has already been
written.

- Jonathan M Davis



Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side

2018-03-23 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, March 23, 2018 13:43:15 H. S. Teoh via Digitalmars-d-announce 
wrote:
> Yep.  As I mentioned elsewhere, recently I've had to resort to external
> testing for one of my projects, and I'm still working on that right now.
> And already, I'm seeing a liability: rather than quickly locating a
> unittest immediately following a particular function, now I have to
> remember "oh which subdirectory was it that the tests were put in? and
> which file was it that a particular test of this function was done?".
> It's an additional mental burden to have to keep doing the mapping
> between current source location <-> test code location (even if it's a
> 1-to-1 mapping), and a physical burden to have to continually open
> external files (and typing a potentially long path for them) rather than
> just "bookmark, jump to end of function, navigate unittest blocks" in
> the same file.

When I've done unit testing in C++, I've had the tests in separate files,
and when I do that, I usually put all of the test files in the same place
(e.g. include/ for the .h files, source/ for the .cpp files and tests/ for
the .h and .cpp files with the tests) and have a 1-to-1 relationship between
the .h/.cpp pair containing the code and the .h/.cpp pair containing the
tests. Also, the test functions are usually named after the function that
they're testing. So, it's all fairly organized, but it's still way more of a
pain than just having the unittest block right after the function being
tested, and it makes it easy for folks to just ignore the tests and easy for
you to miss that something wasn't tested.

Having dealt with both that and putting all of the unit tests next to stuff
in D, I find having the tests right after the functions to be _far_ more
maintainable.
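For anyone unfamiliar with how that looks in D, a trivial made-up example:

```d
/// Returns the sum of the elements in arr.
int sumOf(const int[] arr)
{
    int total = 0;
    foreach (e; arr)
        total += e;
    return total;
}

unittest
{
    // The test sits immediately after the function it covers, so it's
    // obvious while editing the code whether something went untested.
    assert(sumOf([1, 2, 3]) == 6);
    assert(sumOf([]) == 0);
}
```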

In fact, in std.datetime.interval, most of the tests are outside the
templated types that they're testing in order to avoid having the tests
included in every template instantiation, and that's turned out to be _way_
more of a pain to maintain than having the tests right after the functions.
And that doesn't even involve a separate file.

Obviously, YMMV, but in my experience, having the tests _immediately_ after
what they're testing is vastly more maintainable than having the tests
elsewhere.

- Jonathan M Davis



Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side

2018-03-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, March 22, 2018 16:54:18 Anton Fediushin via Digitalmars-d-
announce wrote:
> On Thursday, 22 March 2018 at 16:30:37 UTC, H. S. Teoh wrote:
> > As for the dub-specific problems introduced by
> > version(unittest): IMO that's a flaw in dub.  I should not need
> > to contort my code just to accomodate some flaw in dub.  Having
> > said that, though, I do agree that version(unittest) itself is
> > a bad idea, so code written the way I recommend would not have
> > this problem even given dub's flaws.
>
> There's no "dub-specific problems". Article is wrong about that:
> when you run `dub test` only package you are working with is
> compiled with '-unittest' option. This way it is _impossible_ to
> have any kind of problems with `version(unittest)` if you are
> writing libraries

I could be wrong, but I am _quite_ sure that dub builds all dependencies
with their test targets when you build your project with its test target. I
was forced to work around that with dxml by adding a version identifier
specifically for dxml's tests _and_ create a test target specifically for
dxml. Simply adding a dxml-specific version identifier to its test target
was not enough, because any project which depended on dxml ended up with the
dxml-specific version identifier defined when its tests were built, because
dub was building dxml's test target and not its debug or release target. The
only way I found around the problem was to create a target specific to dxml
for building its unit tests and define the version identifier for that
target only.

I had to add this to dub.json

"buildTypes":
{
    "doTests":
    {
        "buildOptions": ["unittests", "debugMode", "debugInfo"],
        "versions": ["dxmlTests"]
    },
    "doCov":
    {
        "buildOptions": ["unittests", "debugMode", "debugInfo", "coverage"],
        "versions": ["dxmlTests"]
    }
}

And then I have scripts such as

test.sh
===
#!/bin/sh

dub test --build=doTests
===

to run the tests. I had to actively work around dub and what it does with
unit tests in order to not have all of dxml's tests compiled into any
project which had dxml as a dependency.

- Jonathan M Davis



Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side

2018-03-22 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, March 22, 2018 09:30:37 H. S. Teoh via Digitalmars-d-announce 
wrote:
> 2) Compilation times: perhaps if your unittests are too heavy,
> compilation times could become a problem, but IME, they haven't been.

Personally, I agree with you, but Atila is one of those folks who gets
annoyed when tests take even a fraction of a second longer, so stuff that
many of us would not consider pain points at all tends to annoy him. I
prefer it when tests run fast, but if they take 1 second rather than 500
milliseconds, I don't consider it a big deal. Atila would.

So, given Atila's preferences, it makes sense to remove the tests from the
module if that speeds up compilation even a little, whereas personally, I'd
much rather have them next to the code they're testing so that I can see
what's being tested, and I don't have to go track down another file and edit
the tests there when I'm editing the code being tested. I'd prefer that that
not harm compilation time, but if it does, I'm generally going to put up
with it rather than move the tests elsewhere.

> 3) version(unittest): it's true that this can be a problem.  I remember
> that in Phobos we used to merge PRs containing code that compiles fine
> with -unittest, but in real-world code doesn't compile because it has
> stuff outside unittests that depend on imports/declarations inside
> version(unittest).  This is actually one of the reasons I was (and still
> am) a big supporter of local/scoped imports. It may be more convenient
> to just put global imports at the top of the module, but it just creates
> too many implicit dependencies from mostly-unrelated chunks of code,
> that I'm inclined actually to call global imports an anti-pattern.  In
> fact, I'd even go so far to say that version(unittest) in general is a
> bad idea.  It is better to factor out stuff inside version(unittest) to
> a different module dedicated to unittest-specific stuff, and have each
> unittest that needs it import that module.  This way you avoid polluting
> the non-unittest namespace with unittest-specific symbols.

I don't think that global imports are really an anti-pattern, though there
are advantages to local imports. The big problem with global imports is when
they're versioned, because then it's way too easy to screw them up. If
they're not versioned, then worst case, you're importing something which
wouldn't necessarily have to be imported if local imports were used.

> As for the dub-specific problems introduced by version(unittest): IMO
> that's a flaw in dub.  I should not need to contort my code just to
> accomodate some flaw in dub.  Having said that, though, I do agree that
> version(unittest) itself is a bad idea, so code written the way I
> recommend would not have this problem even given dub's flaws.

dub makes the problem worse, but it's inherent in how D currently works.
When I compile my project with -unittest, and I'm importing your library,
your version(unittest) code gets compiled in. I'm not sure exactly what
happens with the unittest blocks, since the code for them isn't generated
unless the module is explicitly compiled, but I think that it's the case
that all of the semantic analysis still gets done for them, in which case,
anything they import would need to be available, and that's a huge negative.
We really need a DIP to sort this out.

Now, unfortunately, dub makes the whole thing worse by compiling
dependencies with their unittest target when you build your project with its
unittest target, so you get all of the unit tests of all of your
dependencies regardless of what dmd does, but even if dub were not doing
that, we'd still have a problem with the language itself.
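As a concrete sketch of the leak (the module and function names are made up):

```d
// mylib.d -- a library pulled in as a dub dependency
module mylib;

version(unittest)
{
    // Meant only for mylib's own tests, but when a *dependent* project is
    // built with -unittest, this version block gets compiled in for mylib
    // too, so anything it imports has to be available to that project.
    import std.process : environment;
}

int twice(int x) { return 2 * x; }

unittest
{
    // Uses the version(unittest)-only import above.
    assert(environment.get("NONEXISTENT_VAR", "") == "");
    assert(twice(21) == 42);
}
```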

- Jonathan M Davis



Re: User Stories: Funkwerk

2018-03-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, March 17, 2018 20:12:08 bauss via Digitalmars-d-announce wrote:
> On Saturday, 17 March 2018 at 19:54:07 UTC, Jonathan M Davis
>
> wrote:
> > On Saturday, March 17, 2018 12:48:07 bauss via
> >
> > Digitalmars-d-announce wrote:
> >> On Friday, 16 March 2018 at 19:42:11 UTC, Rubn wrote:
> >> > On Wednesday, 14 March 2018 at 14:17:50 UTC, Mike Parker
> >> >
> >> > wrote:
> >> >> foreach(auto element: elements)
> >> >
> >> > ":" is C++ syntax
> >>
> >> Also "auto" can be omitted.
> >>
> >> foreach (element; elements)
> >
> > Not only can be. It must be. auto is not legal in a foreach
> > loop in D, which is arguably a bit inconsistent, but for better
> > or worse, that's the way it is.
> >
> > - Jonathan M Davis
>
> Ahh, I didn't know it had become illegal or at least I think I
> remember foreach loops accepting auto in the past, but that's
> probably years ago or maybe I remember wrong and it has always
> been illegal?

AFAIK, it has always been illegal, and periodically, it's been brought up
that it should be legal for consistency, but for better or worse, it hasn't
been changed.

- Jonathan M Davis



Re: User Stories: Funkwerk

2018-03-17 Thread Jonathan M Davis via Digitalmars-d-announce
On Saturday, March 17, 2018 12:48:07 bauss via Digitalmars-d-announce wrote:
> On Friday, 16 March 2018 at 19:42:11 UTC, Rubn wrote:
> > On Wednesday, 14 March 2018 at 14:17:50 UTC, Mike Parker wrote:
> >> foreach(auto element: elements)
> >
> > ":" is C++ syntax
>
> Also "auto" can be omitted.
>
> foreach (element; elements)

Not only can be. It must be. auto is not legal in a foreach loop in D, which
is arguably a bit inconsistent, but for better or worse, that's the way it
is.
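That is, of the following, only the first two forms compile:

```d
void main()
{
    auto elements = [1, 2, 3];

    foreach (element; elements) {}        // type inferred -- no auto needed
    foreach (int element; elements) {}    // an explicit type is fine
    // foreach (auto element; elements) {} // error: auto is not legal here
}
```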

- Jonathan M Davis



Re: Vision document for H1 2018

2018-03-16 Thread Jonathan M Davis via Digitalmars-d-announce
On Friday, March 16, 2018 21:37:44 Void-995 via Digitalmars-d-announce 
wrote:
> Every time I'm thinking that something is impossible to be
> elegantly and/or easily done even in D - someone proves me wrong.
>
> And come on, I just had that little spark of motivation to look
> into DMD, what is my purpose in life now?

Fan it into flames and go learn it so that you can fix or improve other
stuff that might come up? ;)

Of course, depending on what you know and are interested in, doing stuff
like writing useful libraries and putting them up on code.dlang.org can be
of huge benefit even if you never do anything for dmd, druntime, or Phobos.
In some ways, that's probably our biggest need. But regardless, if you're
interested in helping out the D ecosystem, there are plenty of options.

- Jonathan M Davis



Re: The D Language Foundation at Open Collective

2018-03-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, March 13, 2018 15:26:24 Martin Tschierschke via Digitalmars-d-
announce wrote:
> On Monday, 12 March 2018 at 14:32:34 UTC, Mike Parker wrote:
> > Today, the D Language Foundation has launched a page at Open
> > Collective:
> >
> > https://opencollective.com/dlang.
> >
> > This brings some transparency to the process and opens new
> > opportunities for how the Foundation handles donations.
> >
> > The blog post:
> > https://dlang.org/blog/2018/03/12/the-d-foundation-at-open-collective/
> >
> > Reddit:
> > https://www.reddit.com/r/d_language/comments/83vbz6/the_d_foundation_at_
> > open_collective/
> The Website needs the link, too!:
> https://dlang.org/foundation/donate.html

BTW, opencollective.com has a link to windfair.net listed in your backer
profile, but it links via https, and windfair.net seems to be http only, so
clicking on the link fails to connect.

- Jonathan M Davis



Re: Vision document for H1 2018

2018-03-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, March 12, 2018 03:31:36 Laeeth Isharc via Digitalmars-d-announce 
wrote:
> If Phobos looks like a mess to C# programmers, so much the worse
> for C# programmers.  However I doubt they this is true in the
> general case.  It's better in many ways, but different and
> unfamiliar and everything unfamiliar is disconcerting in the
> beginning.

Yeah, I remember when I was first dealing with D years ago, and the range
stuff that was in there was totally unfamiliar, and it annoyed me, because I
just wanted the iterators that I was familiar with so that I could easily do
what I wanted to do. Granted, it was worse to figure all of that out then,
since the documentation was much worse (e.g. at the time, there was a
compiler bug that made it so that auto return functions did not show up in
the documentation, so all of std.algorithm explicitly listed the return
types, making it downright scary), but still, at the time, I just didn't
want to deal with figuring out the unfamiliar concept, so it annoyed me.
Now, I actually understand ranges and am very glad that they're there, but
as a D newbie, they were annoying, because they were unfamiliar. I think
that I'd approach it all very differently now, since at the time, I was in
college and just generally a far less experienced programmer, but I think
that many of us tend to look for what we expect when learning a new language
or library, and when that's not what we get, it can be disconcerting to
begin with.

Phobos is unique in its approach with ranges, likely making it a major
learning experience for anyone from any language, but AFAIK, the only
language with a comparable approach in its standard library is C++, and I'd
definitely expect it to be somewhat alien to the average C# or Java
programmer - at least to begin with. D does things differently, so anyone
learning it is just going to have to expect to deal with a learning curve.
How steep that curve is will depend on the programmer's experience and
background, but it's going to be there regardless.
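For readers coming from iterator-based standard libraries, a minimal sketch of the range style being described, using Phobos as-is:

```d
import std.algorithm : filter, map;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // A range is consumed with empty/front/popFront rather than
    // with a begin/end iterator pair as in C++.
    auto r = iota(1, 6)            // 1, 2, 3, 4, 5
        .filter!(n => n % 2 == 1)  // keep the odd numbers
        .map!(n => n * n);         // square them lazily

    while (!r.empty)
    {
        writeln(r.front);          // 1, then 9, then 25
        r.popFront();
    }
}
```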

- Jonathan M Davis



Re: Vision document for H1 2018

2018-03-11 Thread Jonathan M Davis via Digitalmars-d-announce
On Monday, March 12, 2018 03:37:11 Laeeth Isharc via Digitalmars-d-announce 
wrote:
> On Sunday, 11 March 2018 at 19:58:51 UTC, rumbu wrote:
> > On Sunday, 11 March 2018 at 17:15:28 UTC, Seb wrote:
> >> [...]
> >
> > Yes, I'm the typical lazy convenient Windows user scared of the
> > terminal window.
> >
> >> [...]
> >
> > I am happy for Posix users. Theoretically the process is the
> > same on Windows.
> >
> >> [...]
> >
> > This will need Linux subsystem as a Windows feature installed.
> > Bash scripts do not work on Windows. Or other third party
> > applications that are not listed as prerequisites on wiki.
> >
> >> [...]
> >
> > make -fwin32.mak release
> > Error: don't know how to make 'release'
> >
> > Ok, let's skip this, make it without "release".
> >
> > Now test it:
> >
> > cd test
> > make all -j8
> > Command error: undefined switch '-j8'
>
> Why are you adding -j8 ? Does it say to do so in the instructions
> ? Try without it.  (I can't test here as typing from my phone).

When dealing with building on Windows, it would definitely pay to read the
instructions and not assume anything about make, since unfortunately, the
Windows build uses the Digital Mars make, which is severely limited in
comparison to the BSD make or GNU make.

- Jonathan M Davis



Re: Article: Why Const Sucks

2018-03-06 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, March 06, 2018 19:06:25 Martin Nowak via Digitalmars-d-announce 
wrote:
> On Tuesday, 6 March 2018 at 18:17:58 UTC, Jonathan M Davis wrote:
> > I'm not actually convinced that killing auto-decoding is really
> > much better.
>
> I don't think the problem is auto-decoding in string range
> adapters, but repeated validation.
> https://issues.dlang.org/show_bug.cgi?id=14519#c32
> If you know that sth. works on code units just use
> .representation.
>
> There is the related annoyance when the user of a function
> presumably knows to only deal with ASCII strings but algorithms
> fail, e.g. splitter.popBack or binary search. This one is tricky
> because broken unicode support is often rooted in ignoring it's
> existence.

Yes, using stuff like representation or byCodeUnit helps to work around the
auto-decoding, but as long as it's there, you have to constantly work around
it if you care about efficiency with strings and/or want to be able to
retain the original string type where possible. At this point, I think that
it's pretty clear that we wouldn't have it if we could do stuff from
scratch, but of course, we can't do stuff from scratch, because that would
break everything.
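A minimal sketch of the workarounds mentioned above, using `byCodeUnit` and `representation` from Phobos to bypass auto-decoding:

```d
import std.algorithm : count;
import std.string : representation;
import std.utf : byCodeUnit;

void main()
{
    string s = "hello";

    // Auto-decoding: range-based algorithms over string
    // decode UTF-8 into dchar code points element by element.
    assert(s.count == 5);

    // Work on code units instead, skipping the decoding:
    assert(s.byCodeUnit.count == 5);     // range of immutable(char)
    assert(s.representation.count == 5); // immutable(ubyte)[]
}
```

`byCodeUnit` keeps a string-like range type, while `representation` drops down to the raw bytes; which one fits depends on whether the result needs to still look like a string.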

- Jonathan M Davis



Re: Article: Why Const Sucks

2018-03-06 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, March 06, 2018 10:47:36 H. S. Teoh via Digitalmars-d-announce 
wrote:
> On Tue, Mar 06, 2018 at 01:31:39PM -0500, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> > On 3/6/18 10:39 AM, Jonathan M Davis wrote:
> > > Yeah. If you're dealing with generic code rather than a specific
> > > range type that you know is implicitly saved when copied, you have
> > > to use save so often that it's painful, and almost no one does it.
> > > e.g.
> > >
> > > equal(lhs.save, rhs.save)
> > >
> > > or
> > >
> > > immutable result = range.save.startsWith(needle.save);
> >
> > Yep. The most frustrating thing about .save to me is that .save is
> > nearly always implemented as:
> >
> > auto save() { return this; }
> >
> > This just screams "I really meant just copying".
>
> Yeah, and also:
>
>   auto save() {
>   auto copy = this;
>   copy.blah = blah.dup;
>   return copy;
>   }
>
> Which just screams "I'm really just a postblit in disguise".

That's exactly what it is. It's a postblit constructor that you have to call
manually and which works for classes and dynamic arrays in addition to
structs.
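A minimal sketch of the pattern being discussed, with a hypothetical `Numbers` range whose `save` is exactly the "just copy `this`" idiom:

```d
import std.range.primitives : isForwardRange;

// A hypothetical forward range over a slice.
struct Numbers
{
    private int[] data;

    @property bool empty() const { return data.length == 0; }
    @property int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }

    // The near-universal implementation: save is just a copy.
    @property Numbers save() { return this; }
}

static assert(isForwardRange!Numbers);

void main()
{
    auto r = Numbers([1, 2, 3]);
    auto copy = r.save;
    r.popFront();
    assert(copy.front == 1); // the saved copy is unaffected
}
```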

- Jonathan M Davis



Re: Article: Why Const Sucks

2018-03-06 Thread Jonathan M Davis via Digitalmars-d-announce
On Tuesday, March 06, 2018 09:41:42 H. S. Teoh via Digitalmars-d-announce 
wrote:
> As they say, hindsight is always 20/20.  But it wasn't so easy to
> foresee the consequences at the time when the very concept of ranges was
> still brand new.

Except that, even worse, I'd argue that hindsight really isn't 20/20. We can
see a lot of the mistakes that were made, and if we were starting from
scratch or otherwise willing to break a lot of code, we could change stuff
like the range API based on the lessons learned. But we'd probably still
screw it up, because we wouldn't have the experience with the new API to
know where it was wrong. Consider all of the stuff that was improved in D
over C++ but which still has problems in D (like const). We build on
experience to make the new stuff better and frequently lament that we didn't
know better in the past, but we still make mistakes when we do new stuff or
redesign old stuff. Frequently, the end result is better, but it's rarely
perfect.

- Jonathan M Davis


