Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 30 Jan 2019 09:15:36 -0800, Manu wrote:
> Why are you so stuck on this case? The DIP is about accepting rvalues,
> not lvalues...
> Calling with 'p', an lvalue, is not subject to this DIP.

The result of a CastExpression is an rvalue. An implicit cast is a 
compiler-inserted CastExpression. Therefore any lvalue subject to an 
implicit cast is passed as an rvalue.
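
To illustrate (a hypothetical sketch; `byRef` is an invented name, and the 
commented-out call shows what the DIP's semantics would mean, not current 
compiler behavior):

```d
void byRef(ref int x) { x = 42; }

void main()
{
    short s = 0;
    // byRef(s); // Under DIP 1016, the implicit short->int conversion
    //           // materializes a temporary, so `x` would bind to that
    //           // rvalue copy -- the write would never reach `s`.
}
```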


Re: GtkD Blog Now Up and Running

2019-01-29 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 29 Jan 2019 21:13:17 +, WebFreak001 wrote:
> dub.sdl:
> name "my-awesome-gtk-app"
> 
> dependency "gtk-d" version="~>3.8.5"

Might I recommend instead:

dependency "gtk-d" version="3.8.5"

This depends on gtk-d 3.8.5 and only that version. If there is a breaking 
change in 3.8.6 despite semantic versioning, your code keeps working.

In libraries, I prefer using ~> to give more freedom to people depending 
on my code. But in applications, that isn't a concern. May as well only 
allow the code to be built with the versions of your dependencies that 
you've actually tested.
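
For reference, the two specifier forms side by side (my understanding of 
dub's version-range semantics):

```sdl
dependency "gtk-d" version="3.8.5"    // exactly 3.8.5, nothing else
dependency "gtk-d" version="~>3.8.5"  // any 3.8.x with x >= 5
```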


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 25 Jan 2019 18:14:56 -0800, Manu wrote:
> Removing the `void` stuff and expanding such that the declaration +
> initialisation is at the appropriate moments; any function can throw
> normally, and the unwind works naturally?

The contention was that, if the arguments are constructed properly, 
ownership is given to the called function, which is responsible for 
calling destructors. But if the argument construction fails, the caller is 
responsible for calling destructors.

I'm not sure what the point of that was. The called function doesn't own 
its parameters and shouldn't ever call destructors. So now I'm confused.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-25 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 25 Jan 2019 23:08:52 +, kinke wrote:

> On Friday, 25 January 2019 at 19:08:55 UTC, Walter Bright wrote:
>> On 1/25/2019 2:57 AM, kinke wrote:
>>> On Thursday, 24 January 2019 at 23:59:30 UTC, Walter Bright wrote:
 On 1/24/2019 1:03 PM, kinke wrote:
> (bool __gate = false;) , ((A __pfx = a();)) , ((B __pfy =
> b();)) , __gate = true , f(__pfx, __pfy);

 There must be an individual gate for each of __pfx and __pfy.
 With the rewrite above, if b() throws then __pfx won't be destructed.
>>> 
>>> There is no individual gate, there's just one to rule the
>>> caller-destruction of all temporaries.
>>
>> What happens, then, when b() throws?
> 
> `__pfx` goes out of scope, and its dtor expression (cleanup/finally) is
> run as part of stack unwinding. Rewritten as block statement:

And nested calls are serialized as you'd expect:

int foo(ref S i, ref S j);
S bar(ref S i, ref S j);
S someRvalue(int i);

foo(
    bar(someRvalue(1), someRvalue(2)),
    someRvalue(4));

// translates to something like:
{
    bool __gate1 = false;
    S __tmp1 = void;
    S __tmp2 = void;
    S __tmp3 = void;
    __tmp1 = someRvalue(1);
    try
    {
        __tmp2 = someRvalue(2);
        __gate1 = true;
        __tmp3 = bar(__tmp1, __tmp2);
    }
    finally
    {
        if (!__gate1) __tmp1.__xdtor();
    }
    S __tmp4 = void;
    bool __gate2 = false;
    try
    {
        __tmp4 = someRvalue(4);
        __gate2 = true;
        return foo(__tmp3, __tmp4);
    }
    finally
    {
        if (!__gate2)
        {
            __tmp3.__xdtor();
        }
    }
}


Re: Top Five World’s Most Underrated Programming Languages

2019-01-23 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 23 Jan 2019 14:37:30 +, Bienlein wrote:
> This is all true, but you need to keep in mind that Go had no real
> package manager for a long time. There was the "go get" command which
> loaded the code from some github repo in the state it was at the time
> when being loaded. There was no version control. Nobody really cared
> (the vendor stuff in Go was added with Go 1.10 or 1.11). Goroutines were
> the killer feature of the language that paved the way, because this was
> badly needed for writing server-side software.

Go has several killer features:
* It's got a GC and yet is endorsed by one of the major people behind C. 
This helps people get over their fear of garbage collection and into 
appreciating the benefits.
* It's also got "pointers". They're actually references with pointer-ish 
syntax, but that makes people coming from C/C++ more comfortable.
* It's not Java, and it's not slower than Java.
* There was a team in Google that would rewrite old, crufty C++ code in 
Go. Was Go a benefit? Maybe in some ways, but the major benefit was a 
rewrite that the owning team didn't have to do. That earned goodwill among 
thousands of developers attached to Go as a language.
* It's backed by Google (in large part because of that goodwill).

I don't think fibers are all that important for Go's success. Maybe for 
people who would have looked at node.js but didn't want to use javascript?


Re: D-lighted, I'm Sure

2019-01-18 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 18 Jan 2019 11:43:58 -0800, H. S. Teoh wrote:
> (1) it often builds unnecessarily -- `touch source.d` and it rebuilds
> source.d even though the contents haven't changed; and

Timestamp-based change detection is simple and cheap. If your filesystem 
supports a revision id for each file, that might work better, but I 
haven't heard of such a thing.

If you're only dealing with a small number of small files, content-based 
change detection might be a reasonable option.

> (2) it often fails to build necessary targets -- if for whatever reason
> your system clock is out-of-sync or whatever, and a newer version of
> source.d has an earlier date than a previously-built object.

I'm curious what you're doing that you often have clock sync errors.


Re: My Meeting C++ Keynote video is now available

2019-01-15 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 15 Jan 2019 11:59:58 +, Atila Neves wrote:
> He's not saying "kill classes in D", he's saying an OOP system in D
> could be implemented from primitives and classes don't need to be a
> language feature, similar to CLOS in Common Lisp.

As long as the syntax and behavior don't change, the error messages are 
good, and the compile-time overhead is similar, I won't complain.


Re: My Meeting C++ Keynote video is now available

2019-01-14 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 14 Jan 2019 21:12:48 +, Paul Backus wrote:
> On Monday, 14 January 2019 at 21:08:50 UTC, Ben Jones wrote:
>> Is it possible to declare a function whose name is a CTFE computed
>> string?
> 
> Yes, if you do it with a string mixin.

And more simply, you can declare a function with a hard-coded name, then 
use an alias to expose it under a different name.
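
Both approaches in a quick sketch (names here are invented for 
illustration):

```d
// String mixin: the declared name comes from a CTFE-computed string.
enum name = "foo" ~ "42";
mixin("int " ~ name ~ "() { return 1; }");  // declares foo42

// Alias: declare under a fixed name, expose under another.
int implementation() { return 2; }
alias exposedName = implementation;
```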


Re: code-d 0.20.0 - serve-d 0.4.0 - Happy new year!

2019-01-13 Thread Neia Neutuladh via Digitalmars-d-announce
On Sun, 13 Jan 2019 21:40:43 +, Murilo wrote:
> It would be a good idea to publish that on the facebook group for users
> of D. There you would be able to spread the information fast. It is
> called Programming in D. Here is the link:
> https://www.facebook.com/groups/662119670846705/

I think one post advertising the facebook group per week would be more 
appropriate than three in one day.


Re: Blog post: What D got wrong

2018-12-20 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 20 Dec 2018 14:19:33 +0100, Daniel Kozak wrote:
> default(attributes..) is no needed. You can already do this by:
> 
> pure @safe:
> // your code

That doesn't work if you have any member functions, and Walter says it's 
unlikely that that will ever change, even with a DIP.

default(pure) would be new syntax with no existing code broken.


Re: Blog post: What D got wrong

2018-12-19 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 19 Dec 2018 17:28:01 +, Vijay Nayar wrote:
> Could you please elaborate a little bit more on this?  In the linked
> program, I had expected that "ref" would return a reference to "a" that
> would behave similar to a pointer.

They work like pointers that automatically dereference when assigning to 
the base type.

Only three things in D can be ref:
* A function parameter
* A function return value
* A foreach variable (since that's either going to be a function return 
value, a function parameter, or a pointer, depending on what you're 
iterating over)

So when the compiler sees something like:

ref int foo();
auto a = foo();

It sees that the type of 'a' has to be the same as the return type of 
'foo'. Except that's not possible, so it uses the nearest equivalent type: 
int.

And if you have:

ref int foo();
int a = foo();

That obviously converts by copying the value.
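
The three ref positions listed above can be sketched together (a 
hypothetical example):

```d
// ref parameters + ref return: returns a reference into the caller's data
ref int pick(ref int a, ref int b) { return a < b ? a : b; }

void main()
{
    int x = 1, y = 2;
    pick(x, y) = 10;              // writes through the returned ref
    int[3] arr = [1, 2, 3];
    foreach (ref e; arr) e *= 2;  // ref foreach variable mutates in place
}
```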


Re: Blog post: What D got wrong

2018-12-18 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 19 Dec 2018 01:04:24 +, Nathan S. wrote:
> On Saturday, 15 December 2018 at 19:53:06 UTC, Atila Neves wrote:
>> Not the case in Rust, not the case in how I write D. TBH it's not such
>> a big deal because something has to be typed, I just default to const
>> now anyway instead of auto. @safe and pure though...
> 
> I'd be interested in seeing some of that Rust code. My impression from
> Clojure is that an all-immutable style requires leaning heavily on the
> garbage collector and as far as I know Rust has none.

It is greatly simplified by automatic memory management. Rust doesn't have 
a GC, but it has a complex ownership system instead, and that's the basis 
of its memory management. When that's insufficient, you use reference 
counting.
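
The reference-counting fallback looks like this in Rust's standard 
library:

```rust
use std::rc::Rc;

fn main() {
    // Shared ownership via reference counting instead of a tracing GC.
    let shared = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&shared); // bumps the refcount; no deep copy
    assert_eq!(Rc::strong_count(&shared), 2);
    drop(alias); // count drops back to 1; data freed when it hits 0
    assert_eq!(Rc::strong_count(&shared), 1);
}
```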

Besides which, this is about defaults. In cases where your data is 
actually mutable and it would be awkward to switch to a more Haskell-like 
way of coding, you can still use that.


Re: Blog post: What D got wrong

2018-12-18 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 18 Dec 2018 08:17:28 +, Russel Winder wrote:
> I did a lightning talk at the GStreamer conference in Edinburgh a couple
> of months ago, concluding that I think D (which about half the audience
> knew of) is overall better than Rust for GTK+ and GStreamer
> applications, but recognising that Rust is actually the replacement for
> C and C++ for GTK+ and GStreamer applications. (Obviously Python has an
> ongoing role in all this as well.)

Is there a video link for that talk? I'd be interested in hearing it.


Re: A brief survey of build tools, focused on D

2018-12-15 Thread Neia Neutuladh via Digitalmars-d-announce
On Sun, 16 Dec 2018 00:17:55 +, Paul Backus wrote:
> On Wednesday, 12 December 2018 at 22:41:50 UTC, H. S. Teoh wrote:
>> It's time we came back to the essentials.  Current monolithic build
>> systems ought to be split into two parts: [...]
> You're missing (0) the package manager, which is probably the biggest
> advantage "monolothic" build tools like dub, cargo, and npm have
> compared to language-agnostic ones like make.

If I were to make a new build tool and wanted package manager integration, 
I'd choose Maven as the backend. This would no doubt be more frustrating 
than just making my own, but there would hopefully be fewer bugs on the 
repository side.

(I might separately make my own Maven-compatible backend.)

> There's something important you're glossing over here, which is that, in
> the general case, there's no single obvious or natural way to compose
> two DAGs together.

You do it like Bazel.

In Bazel, you have a WORKSPACE file at the root of your project. It 
describes, among other things, what dependencies you have. This might, for 
instance, be a git URL and revision. All this does is expose that 
package's build rules to you.

Separately, you have build rules. Each build rule can express a set of 
dependencies on other build rules. There's no difference between depending 
on a rule that your own code defines and depending on one from an external 
dependency.

It might be appropriate to have a hint on DAG nodes saying that this is 
the default thing that you should probably depend on if you're depending 
on the package. A lot of projects only produce one artifact for public 
consumption.
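
Roughly what that looks like in Bazel's files (the repository name, URL, 
and targets below are invented for illustration):

```starlark
# WORKSPACE: expose an external package's build rules to this project.
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "some_dep",
    remote = "https://example.com/some_dep.git",
    commit = "deadbeef",  # pinned revision
)

# BUILD: depending on the external rule looks the same as a local one.
cc_library(
    name = "mylib",
    srcs = ["mylib.c"],
    deps = [
        ":local_helper",       # rule defined in this package
        "@some_dep//:lib",     # rule defined by the external dependency
    ],
)
```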


Re: Fuzzed - a program to find DMDFE parser crash

2018-12-15 Thread Neia Neutuladh via Digitalmars-d-announce
On Sat, 15 Dec 2018 21:09:12 +, Sebastiaan Koppe wrote:
> On Saturday, 15 December 2018 at 15:37:19 UTC, Basile B. wrote:
>> I think this is what Walter calls "AST poisoning" (never understood how
>> it worked before today). And the whole parser is like this.
>>
>> This poisoning kills the interest of using a fuzzer. 99% of the crashes
>> will be in hdrgen.
> 
> As is common with fuzzing, you'll need to ensure the program crashes.
> Sometimes that requires some tweaking.
> 
> Regardless, you still have the input to investigate.

I think the point is that DMD tries to recover from parsing failures in 
order to provide additional error messages. But those parsing failures 
leave the parser in an invalid state, and invalid states are fertile ground 
for crashes.

The way to fix this is to replace the entire parser and get rid of the 
idea of AST poisoning; at the first error, you give up on parsing the 
entire file. From there, you can try recovering from specific errors with 
proper testing.


Re: Autowrap for .NET is Now Available

2018-12-14 Thread Neia Neutuladh via Digitalmars-d-announce
On Sat, 15 Dec 2018 00:43:42 +, j...@jjs.com wrote:
>>> Do you have plans to incorportae this as a VisualD project .csproj

Retaining the "On Sat, 15 Dec, Person A wrote:" lines is helpful for 
keeping track of the conversation.

> Using DLangInNet (I've renamed your project for you ;)

Only the project maintainers have that authority. The project exposes D 
types and functions in Python, Excel, and .NET, so DLangInNet would be a 
terrible name for it.

> You should talk to Rainer about this. It shouldn't be all that difficult
> to do since C#'s PInvoke does all the real work. I assume DLangInNet
> just generates a C# equivalent that forwards all the calls to the D code
> using Pinvoke when necessary?

This is discussed in the Autowrap readme file.

> So the idea is that one can add .d files to .Net projects, when built:

Emitting the C# interface is a separate build target. You need to be able 
to specify Autowrap and the D compiler as dependencies. You need to hook 
this up into the build script.

> 2. DInNet runs on the D code and generates the C# output(could be
> modified for many other languages such as python, F#, Haskell, etc).
> Basically an autobinding generator from D to whatever, might be a good
> project to develop.

Perhaps a good name for that would be "Autowrap".

> The point of having it this way in Visual Studio is that one can on a
> single project that has many different languages involved and can setup
> a rather seamless connection.

Add a field to a wrapped D type and it won't show up on the C# side until 
you rebuild. This is not seamless. Making it seamless requires your IDE to 
know how the binding will be constructed and to have good support for 
every language you're using. Plus each language plugin needs to use a 
common set of data structures to represent your source code.

Technically, this isn't hard for a common set of languages, at least if 
you're only binding from a statically typed, strongly typed language with 
ideally little metaprogramming and in any case no ability to change types 
at runtime.

> What we need is an idea that just works and one can use any appropriate
> source language at any time and they all bind without hiccups in most
> cases.

Languages need a certain level of similarity for that to work. Haskell 
assumes immutability and would not work particularly well with a D 
function that mutates its input. Javascript objects can gain and lose 
fields and methods at runtime, something that D can't model well. A Python 
function can return a value of any type at all.



Re: Blog post: What D got wrong

2018-12-13 Thread Neia Neutuladh via Digitalmars-d-announce
On Thursday, 13 December 2018 at 18:29:39 UTC, Adam D. Ruppe wrote:
> Though, I think we could also get a lot of mileage out of fixing two
> glaring problems with the status quo: 1) making attr: at the top
> descend into aggregates consistently and 2) LETTING US TURN THEM OFF.
> SERIOUSLY WHY DON'T WE HAVE `virtual`, `throws`, `impure` AND THE
> REST?! THIS IS SO OBVIOUS AND THE LACK OF THEM IS UNBELIEVABLY
> FRUSTRATING.


While I might quibble about nothrow being the default, I wouldn't 
care once attributes descend into aggregates.


https://issues.dlang.org/show_bug.cgi?id=7616


Re: Blog post: What D got wrong

2018-12-13 Thread Neia Neutuladh via Digitalmars-d-announce

On Tuesday, 11 December 2018 at 10:45:39 UTC, Atila Neves wrote:
> I think there's a general consensus that @safe, pure and immutable
> should be default.


I recall there was a decent chunk of people around D2.007 who 
were pushing for const-by-default function parameters on the 
grounds of if we're going to have this controversial system, we 
may as well commit to it.


Also in the topic of defaults, you could potentially add inout as 
a default for member functions. It's a lot more strict, but no 
more so than immutable by default.


Re: A brief survey of build tools, focused on D

2018-12-12 Thread Neia Neutuladh via Digitalmars-d-announce

On Wednesday, 12 December 2018 at 22:41:50 UTC, H. S. Teoh wrote:
> And here is the crux of my rant about build systems (earlier in this
> thread).  There is no *technical reason* why build systems should be
> constricted in this way. Today's landscape of specific projects being
> inextricably tied to a specific build system is completely the wrong
> approach.


You could reduce all this language-specific stuff to a way to 
generate a description of what needs to be built and what 
programs are suggested for doing it. This is quite a layer of 
indirection, and that means more work. "I can do less work" is a 
technical reason.


Ensuring that your output is widely usable is also extra work.

There is also a psychological reason: when you're trying to solve 
a set of problems and you are good at code, it's easy to tunnel 
vision into writing all the code yourself. It can even, 
sometimes, be easier to write that new code than to figure out 
how to use something that already exists (if you think you can 
gloss over a lot of edge cases or support a lot fewer pieces, for 
instance).


This is probably why Dub has its own repository instead of using 
Maven.


> Seriously, building a lousy software project is essentially
> traversing a DAG of inputs and actions in topological order.  The
> algorithms have been known since decades ago, if not longer, and
> there is absolutely no valid reason why we cannot import arbitrary
> sub-DAGs and glue it to the main DAG, and have everything work with
> no additional effort, regardless of where said sub-DAGs came from.
> It's just a bunch of nodes and labelled edges, guys!  All the rest of
> the complications and build system dependencies and walled gardens
> are extraneous and completely unnecessary baggage imposed upon a
> straightforward DAG topological walk that any CS grad could write in
> less than a day.  It's ridiculous.


If any CS grad could write it in a day, you could argue that a generic 
DAG walker isn't useful or interesting on its own. That makes it seem 
pretty much pointless to pull it out into a separate software project, 
and that's a psychological barrier.


Re: A brief survey of build tools, focused on D

2018-12-10 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 10 Dec 2018 13:01:08 -0800, H. S. Teoh wrote:
> It also requires network access.  On *every* invocation, unless
> explicitly turned off.  And even then, it performs time-consuming
> dependency resolutions on every invocation, which doubles or triples
> incremental build times.  Again, unacceptable.

I feel like those should be configuration options at the very worst. And 
dub probably shouldn't even bother verifying your dependencies if you 
haven't changed dub.json.

> Then it requires a specific source layout, with incomplete /
> non-existent configuration options for alternatives.  Which makes it
> unusable for existing code bases.  Unacceptable.

A lot of people do find it acceptable to have a build tool that makes 
assumptions about your source code layout, but that's certainly not always 
possible or desirable.

> Worst of all, it does not support custom build actions, which is a
> requirement for many of my projects.

Yeah, there's a lot of neat metaprogramming stuff in D (like pegged) where 
it's awesome with small projects that it's part of compilation, but when 
I'm dealing with a nontrivial instance of it, I want to split it into a 
separate build step. Dub doesn't help me accomplish that.

> After so many decades of "advancement", we're still stuck in the
> gratuitously incompatible walled gardens, like the gratuitous browser
> incompatibilities of the pre-W3C days of the Web. And on modern CPUs
> with GHz clock speeds, RAM measured in GBs, and gigabit download speeds,
> building Hello World with a system like dub (or Gradle, for that matter)
> is still just as slow (if not slower!) as running make back in the 90's
> on a 4 *kHz* processor.  It's ridiculous.

Solving an NP-complete problem every time you build is not a great start.

> Why can't modern source code come equipped with dependency information
> in a *standard format* that can be understood by *any* build system?

Kythe is an attempt to make the relevant information available in a 
language-agnostic way. Might be a reasonable basis for a standardized 
build system. No clue how well it works or what it actually supports.

https://kythe.io/

> Build systems shouldn't need to reinvent their own gratuitously
> incompatible DSL just to express what's fundamentally the same old
> decades-worn directed graph. And programmers shouldn't need to repeat
> themselves by manually enumerating individual graph edges (like Meson
> apparently does).

Meson doesn't have you enumerate individual graph edges at that level. It 
just doesn't build your project correctly. Change a struct size in one 
file, and you get a host of weird errors when another file uses it.

Maven and Gradle also don't really have a DAG like that. If any file 
changed, your whole project needs to be rebuilt, and all your dependencies 
are immutable. Bazel has a DAG across build rules, not across individual 
files.

> - Efficient: the amount of work done by the build should be proportional
>   to the size of changes made to the source code since the last build,
>   NOT proportional to the size of the entire source tree (SCons fails in
>   this regard).

Would be great if the tool could pay attention to whether incremental 
builds saved time on average and just do a full build if it's better.

> - Language-agnostic: the build system should be essentially a dependency
>   graph resolver. It should be able to compile (possibly via plugins)
>   source code of any language using any given compiler, provided such a
>   combination is at all possible. In fact, at its core, it shouldn't
>   even have the concept of "compilation" at all; it should be able to
>   generate, e.g., .png files from POVRay scene description files, run
>   image post-processing tools on them, then package them into a tarball
>   and upload it to a remote webserver -- all driven by the same
>   underlying DAG.

You could support rsync just fine, but if it's just an HTTP upload, 
there's no standard way to tell if the server's got the file already.


Re: A brief survey of build tools, focused on D

2018-12-10 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 11 Dec 2018 02:54:15 +, Mike Franklin wrote:
> Why not just write your build/tooling scripts in D?  That's what I
> prefer to do, and there's been a recent effort to do just that for the
> DMD compiler as well:
> https://github.com/dlang/dmd/blob/master/src/build.d  It still resembles
> the makefiles it was modeled from, but in time, I think it will clean up
> nicely.

That's fine for executables that don't depend on external libraries. It's 
not good for libraries that I want other people to use; dub's the easiest 
way to publish a thing. It also means I need to replicate that dependency 
graph logic in every single project, which is worse than replicating it 
once per language. We really should have a standard build tool supporting 
per-language plugins, like H. S. Teoh is recommending.


Re: A brief survey of build tools, focused on D

2018-12-10 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 10 Dec 2018 21:53:40 +, GoaLitiuM wrote:
> The results for touching second file seems like an anomaly to me,

The generated ninja file had one rule per source file. If your modules 
tend to import each other a lot, or if they transitively import the code 
that's doing expensive stuff, then one rule per source file is bad. If 
your modules have few transitive dependencies and they're each fast to 
compile, one rule per source file is good.

My project used Pegged, and a lot of stuff referenced the grammar. That 
meant incremental builds went long and it would have been better to build 
the whole project at once.

Separating the grammar into a different build would reduce compile times 
significantly, and that might make incremental builds fast.

From discussions on IRC about reducing compile times, though, using Phobos 
is a good way to get slow compilation, and I use Phobos. That alone means 
incremental builds are likely to go long.

> You also have to make sure the dependencies are built with the same
> compiler, which could explain the headache #3 in your article.

I've been using dmd as my primary compiler for ages, cleared out all the 
cached dub builds I could find, ran `dub build -v` to ensure that it was 
invoking dmd, and explicitly told Meson to use dmd.

Meson was still convinced that I'd built pegged with some other compiler.

> The comparison and some of the other headaches with meson does not seem
> to be fair as you are comparing dub, which is both a build system and a
> package manager, to meson which is only a build system, you have to make
> sure all the dependencies are installed to your system beforehand.

That *would* be a reasonable objection, but Meson explicitly advertises 
that you can use dub dependencies. The two flaws are the extra work 
required and the fact that it's broken. If it had not even pretended to 
support dub dependencies, I could have avoided several of the problems and 
just used git submodules from the start.

Just like with Bazel.


Re: Visual D 0.48.0 released

2018-12-03 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 03 Dec 2018 15:08:33 +, greatsam4sure wrote:
> It will be nice if you can port this code base to vs code. It is the
> same visual studio code base.

Pardon? VS Code is an Electron application written mainly in TypeScript, 
while Visual Studio is a Windows application written in C++ and C#. 
They're quite different codebases with quite different plugin 
architectures.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-14 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 14 Nov 2018 13:40:46 -0500, Steven Schveighoffer wrote:
> You don't think this is confusing?
> 
> enum A : int {
>  val
> }
> 
> A a;
> foo(a); // error: be more specific
> int x = a;
> foo(x); // Sure

I find this confusing:

void foo(int i) {}
void foo(ubyte b) {}
enum A : int { val = 0 }
foo(A.val);  // calls foo(ubyte)
A a = A.val;
foo(a);  // calls foo(int)

If it instead produced an error, the error would look like:

Error: foo called with argument types (E) matches both:
example.d(1): foo(int i)
and:
example.d(2): foo(ubyte i)

Or else:

Error: none of the overloads of foo are callable using
argument types (A), candidates are:
example.d(1): foo(int i)
example.d(2): foo(ubyte i)

These aren't the intuitively obvious thing to me, but they're not going to 
surprise me by calling the wrong function, and there are obvious ways to 
make the code work as I want. Of the two, I'd prefer the former.

The intuitively obvious thing for me is:

* Don't use VRP to select an overload. Only use it if there's only one 
candidate with the right number of arguments.
* Don't use VRP if the argument is a ctor, cast expression, or symbol 
expression referring to a non-builtin. Maybe disallow with builtins.
* Don't use VRP if the argument is a literal with explicitly indicated type 
(0UL shouldn't match to byte, for instance).

I think this would make things more as most people expect:

foo(A.val);  // A -> int, but no A -> byte; calls foo(int)
foo(0);  // errors (currently calls foo(int))
foo(0L); // errors (currently calls foo(ubyte))
foo(cast(ulong)0);  // errors (currently calls foo(ubyte))

And when there's only one overload:

void bar(byte b) {}
bar(A.val);  // errors; can't convert A -> byte
bar(0);  // type any-number and fits within byte, so should work
bar(0UL);// errors; explicit incorrect type
bar(0UL & 0x1F);// bitwise and expression can do VRP
bar("foo".length);  // length is a builtin; maybe do VRP?
bar(byte.sizeof);   // sizeof is a builtin; maybe do VRP?


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-14 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 13 Nov 2018 20:27:05 -0800, Walter Bright wrote:
> There have been various attempts over the years to "fix" various things
> in the D matching system by adding "just one more" match level.

I kind of feel like, if something would be confusing like this, maybe the 
compiler shouldn't be making an automatic decision. Not "just one more" 
match level, but just...don't match. If there are multiple matching 
overloads, just error out. Don't try to be clever and surprise people, 
just tell the user to be more explicit.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-14 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 14 Nov 2018 12:09:33 +0100, Jacob Carlborg wrote:
> What is ": int" doing, only specifying the size?

It specifies the type to match for overloading when the compiler isn't 
required by the language to constant-fold the value.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-13 Thread Neia Neutuladh via Digitalmars-d-announce
On Wed, 14 Nov 2018 00:43:54 +, Rubn wrote:
> I wonder what these examples are? What did C++ do instead, cause
> something tells me it didn't do what D is doing. An enum in C++ doesn't
> call different function overloads based on the constant value.

Long long and unsigned long long give an ambiguous overload error. 
Unsigned int uses the unsigned int overload. Everything else uses the int 
overload.

Test code:

```
#include <iostream>
#include <climits>
using namespace std;
void foo(bool c) { cout << "bool " << c << endl; }
void foo(unsigned char c) { cout << "unsigned char " << c << endl; }
void foo(char c) { cout << "char " << c << endl; }
void foo(int c) { cout << "int " << c << endl; }
void foo(unsigned int c) { cout << "unsigned int " << c << endl; }
void foo(long long c) { cout << "long long " << c << endl; }
void foo(unsigned long long c) { cout << "unsigned long long " << c << endl; }
enum Bool : bool { b = 1 };
enum Char : char { c = CHAR_MAX };
enum UChar : unsigned char { d = UCHAR_MAX };
enum Short : short { e = SHRT_MAX };
enum UShort : unsigned short { f = USHRT_MAX };
enum Int : int { g = INT_MAX };
enum UInt : unsigned int { h = UINT_MAX };
enum LongLong : long long { i = LLONG_MAX };
enum ULongLong : unsigned long long { j = ULLONG_MAX };
int main(int argc, char** argv)
{
    foo(b);
    foo(c);
    foo(d);
    foo(e);
    foo(f);
    foo(g);
    foo(h);
    //foo(i);
    //foo(j);
}
```

Output:
int 1
int 127
int 255
int 32767
int 65535
int 2147483647
unsigned int 4294967295


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assement

2018-11-13 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 13 Nov 2018 17:53:27 +, 12345swordy wrote:
> Ok, now that has got to be a bug. If you explicit cast the number to an
> integer then you expect the overload function with int to be called.
> 
> -Alex

...my mistake, I can't reproduce that anymore. Pretend I didn't say 
anything.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-13 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 13 Nov 2018 09:46:17 -0500, Steven Schveighoffer wrote:
> Maybe the biggest gripe here is that enums don't prefer their base types
> over what their base types convert to. In the developer's mind, the
> conversion is:
> 
> A => int => (via VRP) short
> 
> which seems more complex than just
> 
> A => int

It affects explicit casts too:

void foo(short a) { writefln("short %s", a); }
void foo(int a) { writefln("int %s", a); }
foo(cast(int)0);  // prints: short 0

In order to force the compiler to choose a particular overload, you either 
need to assign to a variable or use a struct with alias this.
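A minimal sketch of those two workarounds (the wrapper type name is mine, and the printed results are the behavior reported above, not something I've re-verified on a current compiler):

```d
import std.stdio;

void foo(short a) { writefln("short %s", a); }
void foo(int a) { writefln("int %s", a); }

// A wrapper whose only implicit conversion is to int, so overload
// resolution can't narrow the argument to short via VRP.
struct AsInt
{
    int value;
    alias value this;
}

void main()
{
    foo(cast(int) 0);  // still picks the short overload (VRP on the constant)
    int x = 0;
    foo(x);            // a variable has no constant value: int overload
    foo(AsInt(0));     // alias this converts to int: int overload
}
```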

C++, Java, and C# all default to int, even for bare literals that fit into 
bytes or shorts, and let you use casts to select overloads.

C++ has some weird stuff where an enum that doesn't fit into an int is an 
equal match for all integer types:

void foo(unsigned long long);
void foo(short);
enum A : unsigned long long { a = 2 };
foo(a);  // ambiguous!

But if you just have an unsigned long long that's not in an enum, it only 
matches the unsigned long long overload.

In C#, if you define multiple implicit casts from a type that match 
multiple overloads, the compiler prefers the smallest matching type, and 
it prefers signed over unsigned types. However, for this situation to come 
up at all, you need to define implicit conversions for multiple numeric 
types, so it's not directly comparable.

Anyway, VRP overload selection hit me yesterday (accepts-invalid sort): I 
was calling a function `init_color(short, short, short, short)` with a 
bunch of things that I explicitly casted to int. Tried wrapping it in a 
function and I discovered the compiler had implicitly casted int to short. 
Not the end of the world, but I thought a cast would set the type of the 
expression (instead of just, in this case, truncating floating point 
numbers).


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 13 Nov 2018 00:28:46 +, Isaac S. wrote:
> Sorry if it wasn't clear, I meant that if `enum Foo : some_int_type`
> makes it so some_int_type is preferred (because it's a more direct
> conversion) DScanner could warn anyone that just does `enum Foo`.

Sorry, I read too hastily and thought you meant relative to the status quo.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Tue, 13 Nov 2018 00:08:04 +, Isaac S. wrote:
> If you really want this plaque in the language, at least make it not
> affect those that gave their enum a type. If you at least do that,
> someone can add it to DScanner to tell anyone that doesn't type their
> enum to expect illogical behavior.

Unfortunately, dscanner only parses code. It can't tell you that your 
overload resolution depends on value range propagation on an enum value; 
that depends on semantic analysis. So it would have to aggressively warn 
you against using `enum Foo : some_int_type`.


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 12 Nov 2018 14:07:39 -0800, Walter Bright wrote:
>  =>   conversion>
>=>   conversion>

One confusion is from value range propagation / constant folding reaching 
past the static type information to yield a different result from what 
static typing alone would suggest. My intuition was that the compiler 
should prefer the declared type of the symbol when it's got a symbol with 
a declared type.

Like, A implicitly converts to int, and int doesn't implicitly convert to 
short, so an expression of type A shouldn't implicitly convert to short. 
And this is *generally* true, but when the compiler can use constant 
folding to get a literal value out of the expression, it does things I 
don't expect.

And this doesn't happen with structs with alias this, but I can't tell if 
that's an oversight or not, and there's no doubt some nuanced explanation 
of how things work, and it probably solves some edge cases to have it work 
differently...
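A short sketch of the asymmetry described above, with the behavior as reported in this thread:

```d
enum A : int { a = 0 }

void main()
{
    // Accepted: constant folding exposes the value 0, and VRP
    // lets it narrow to short despite A's declared base type int.
    short s = A.a;

    int x = A.a;
    // short t = x; // error: int does not implicitly convert to short
}
```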


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 12 Nov 2018 20:34:11 +, Neia Neutuladh wrote:
> enum : int { a = 0 }
> enum A : int { a = 0 }
> f(a);   // calls the int overload
> f(A.a); // calls the bool overload
> 
> Tell me more about this "consistency".

Filed issue 19394. (Sorry for spam.)


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 12 Nov 2018 09:45:14 +, Mike Parker wrote:
>  From Example B in the DIP:
> 
> ```
> int f(bool b) { return 1; }
> int f(int i) { return 2; }
> 
> enum E : int {
>  a = 0,
>  b = 1,
>  c = 2,
> }
> ```
> 
> Here, f(a) and f(b) call the bool overload, while f(c) calls the int
> version. This works because D selects the overload with the tightest
> conversion. This behavior is consistent across all integral types.

enum : int { a = 0 }
enum A : int { a = 0 }
f(a);   // calls the int overload
f(A.a); // calls the bool overload

Tell me more about this "consistency".


Re: DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals to bool--Formal Assessment

2018-11-12 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 12 Nov 2018 14:10:42 -0500, Steven Schveighoffer wrote:
> But it's not consistent:

And std.traits.isIntegral has not considered bools integral since its 
initial creation in 2007. Both Walter and Andrei have mucked about with 
that code and saw no reason to change it, even in wild and lawless days 
without deprecation cycles or DIPs. Andrei added a doc comment to 
explicitly note that bools and character types aren't considered integral 
back in 2009.


Re: Backend nearly entirely converted to D

2018-11-08 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 08 Nov 2018 18:38:55 +, welkam wrote:
> On Thursday, 8 November 2018 at 18:15:55 UTC, Stanislav Blinov wrote:
>>
>> One keystroke (well ok, two keys because it's *) ;)
>> https://dl.dropbox.com/s/mifou0ervwspx5i/vimhl.png
>>
>>
> What sorcery is this? I need to know. I guess its vim but how does it
> highlight symbols?

By default, when you search for something in vim, it highlights all matches 
(as well as moving the cursor to the next match). The '*' command is 
'search for the word under the cursor'.

The rest is just basic syntax highlighting.


Re: Backend nearly entirely converted to D

2018-11-08 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 08 Nov 2018 18:13:55 +0100, Jacob Carlborg wrote:
> I guess we have very different ideas on what "small scope" is. For me it
> means around 10 lines. Here's an example in the DMD code base, the
> method for doing the semantic analyze on a call expression [1]. It's 902
> lines long and has a parameter called "exp". Another example, the
> semantic analyze for an is expression [2], 310 lines long. It has a
> parameter called "e".

I recall opening up the source code some years ago, encountering a long 
function, and seeing variables `e` and `e2` that were reused for 
*probably* different purposes but I honestly couldn't tell.

Having them named `expression` and `expression2` would have saved me about 
five seconds total, which wouldn't have been particularly worthwhile. 
Giving them names that reflected how they were being used would have been 
quite helpful -- at the very least, it would have given a weak indication 
that they were not being reused for different purposes.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 08 Nov 2018 14:35:29 +1300, rikki cattermole wrote:
> Its a symptom of a larger set of problems. The frontend is not quite
> ready to have the GC turned on full time.
> 
> Based upon my testing, that little memory leak prevents pretty much
> *all* memory allocated by the GC from being collected. I don't know why,
> but for some reason it pins it. Mind you, my testing could have been
> flawed, and needs more eyes on it *shrug*.

In the short term, that means turning on the GC won't do much, but it at 
least probably won't hurt.

In the long term, that bug might be covering up crashes and memory 
corruption, so turning it on by default is not a great idea.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-07 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 08 Nov 2018 01:49:49 +1300, rikki cattermole wrote:
> On 08/11/2018 1:46 AM, Patrick Schluter wrote:
>> Now that the compiler is completely in D, wouldn't it be a good idea to
>> activate the GC in the compiler. I know that it requires some care for
>> bootstrapping the compiler when there are dependencies to the D
>> runtime,
>> but the compiler would be an excellent example of the advantage of the
>> GC (i.e. dumb fast allocations as long as there's memory, collection
>> when no memory left which is miles away better than to get OOM-killed).
> 
> No, that would be a bad idea currently.
> 
> https://issues.dlang.org/show_bug.cgi?id=18811

That issue describes a memory leak. A memory leak of half a megabyte per 
full compiler invocation on a small file should still result in a lot of 
code being compilable on low-end machines that currently isn't.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-04 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 05 Nov 2018 01:23:44 +, nobodycares wrote:
> I think there are more than enough real-world examples, of where issues
> around 'type safety', or lack of, have caused a sufficient number of
> bugs, to warrant a discussion about ways to further improve type safety.

You do realize we can all see that you're posting from the same IP address 
with three different usernames, don't you? How's the dual boot working 
out? Firefox 52 is kind of old; are you holding off on updating for addon 
compatibility?

If you want to make a good sockpuppet, you'll need to invest some effort. 
Different browsers for each. Different IP addresses. Usernames that look 
like reasonable human names or forum handles. Posting about topics other 
than your personal cause.

And if you had actual examples, you'd have shown them already.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-04 Thread Neia Neutuladh via Digitalmars-d-announce
On Sun, 04 Nov 2018 11:36:39 +, FooledDonor wrote:
> Can we argue about the problems arising from the potential introduction
> of this feature?

There are many potential features that wouldn't cause problems in 
isolation. Should we add all of them? Obviously not; the result would be a 
horribly complex language that takes too much time to learn and is 
impossible to maintain.

So instead, we need to aggressively filter out potential added features to 
ensure that what they add is sufficiently important to justify later 
maintenance costs and the effort of learning things.

The justification for this feature rests on real-world examples of bugs 
that have been caused by its lack.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-03 Thread Neia Neutuladh via Digitalmars-d-announce
On Sat, 03 Nov 2018 11:24:06 +, FooledDonor wrote:
> And if the validity of a person's reasoning is a function of his way of
> expressing them, well ... do not pose to software engineers at least

If you want other people to do work for you, you need to convince them to 
do it. This is an open source project, so the appropriate way of doing 
this is with good reasoning and examples, not by insulting people.

This is true even if the feature seems obviously good and necessary to one 
or two people, if those people don't have abnormally large influence over 
the project.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-03 Thread Neia Neutuladh via Digitalmars-d-announce
On Sat, 03 Nov 2018 04:50:52 +, unprotected-entity wrote:
> (q1) Why is it, that people who use D, object *so much* to the idea of
> allowing (at the choice of the programmer) for a type to have it's own
> private state *within* a module (so that its private state is respected
> by other code also within that module)?

We object because the people complaining can't point at a use case that 
seems reasonable. If you provided real-world examples, we'd consider them.

We are further disinclined to engage with you as a collaborator because 
you're insulting us and ignoring a lot of our responses to you.

> Or you ask it another way:
> 
> (q2)Why must a type within a module *always* have its private state
> exposed to other code within the module? (the key word here, being
> 'always').

Because that is both simple and flexible. Swift forsakes simplicity in 
favor of high granularity, and it's exhausting just reading its protection 
modifier list.

> (q3) Should a language intentionally set out to prevent a programmer
> from making that choice?

You're mischaracterizing the situation to make your preferred feature look 
like the default. That's the opposite of how language design works. 
Nothing is there by default. You add things as necessary to get a language 
that's good enough.


Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-01 Thread Neia Neutuladh via Digitalmars-d-announce
On Thu, 01 Nov 2018 22:37:59 +, unprotected-entity wrote:
> On Thursday, 1 November 2018 at 03:10:22 UTC, H. S. Teoh wrote:
>>
>> Actually, code within a module *should* be tightly coupled and cohesive
>> -- that's the whole reason to put that code inside a single module in
>> the first place.  If two pieces of code inside a module are only weakly
>> coupled or completely decoupled, that's a sign that they should not be
>> in the same module at all.  Or at the very least, they should belong in
>> separate submodules that are isolated from each other.
> 
> How does one determine, whether a 10,000 line module, is tightly coupled
> and cohesive?

You can get a pretty accurate result by just saying "no". 10KLOC in one 
module is almost always a problem.

Failing that, you can do a code review. Which will take a very long time. 
The solution to that is to submit less code at a time, review as you go, 
and keep modules smaller.

For instance, Phobos keeps most modules below 3000 lines of code, 
including unittests. The largest module, std.datetime.systime, has about 
6000 lines of code -- but if you exclude unittests and test-only code, it's 
more like 1000.

> Only the author can make that statement - which they naturally will,
> even if it's not true.

I think a lot of programmers are well aware of the failings of their code.

> As soon as you see a class interface in a module, in D, you have to
> assume there is other code in the module, perhaps down around line
> 9,900, that is bypassing its interface, and doing who knows what to
> it

And so you have a rule against casting from an interface to a concrete 
type, if that's a thing that worries you. It's something you can check 
rather easily in a reasonably sized code review.
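The pattern such a review rule would flag, sketched with hypothetical type names:

```d
interface Counter
{
    void increment();
}

class IntCounter : Counter
{
    private int count;
    void increment() { count++; }
}

void elsewhereInModule(Counter c)
{
    // The cast to flag in review: reaching past the interface
    // to a concrete type. Once you have the concrete type,
    // same-module code can touch its private state.
    if (auto impl = cast(IntCounter) c)
        impl.count = 0;
}

void main()
{
    elsewhereInModule(new IntCounter);
}
```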

> In the age of 'lock-down-everything', increased modularity is becoming
> more important. A monolithic module approach, is already outdated, and
> risky, in terms of developing secure, maintainable software

That statement could be taken as being against an approach that recommends 
structuring a project as a monolithic module, or against an approach that 
treats modules as a monolith in terms of protection.

> I think providing an additional tool, to those who seek to use D,
> such as 'strict private' (syntax can be argued about), would aid better
> design - it can't make it any worse, that's for sure).

It would be more language complexity in order to make it easier to have 
large modules. But you already said that large modules are a problem.

> Is that really such a bad idea? Are there no programmers out there in
> the D world that might think this could be a good, additional tool, to
> give programmers, so they can better architect their solution?

The use case is when you don't want to break up a module and you don't 
trust yourself not to modify private members from the wrong parts of the 
module.

That's not useless.

It's also not obviously so useful as to merit inclusion. A lot of languages 
do without any notion of private. A lot, like the entire ALGOL family up 
to Oberon-2, Go, Rust, Lua, Haskell, and Node.js, use exported and 
unexported symbols instead, and that's per module. A fair number just don't 
have a notion of public and private symbols.

> The amount of push back in the D community on this idea, is really odd
> to me. I'm still trying to understand why that is. Are D programmers
> just hackers, interested in getting their code to work, no matter what?
> Are there not enough Java/C# programmers coming to D - and bringing
> their design skills with them?

There are plenty of language designers that didn't think it obvious. Might 
be better to consider why instead of implying that no D programmers are 
familiar with or care about good design. I mean, if there are popular 
languages from the 1960s through the 2010s that do things the same way as 
D, that sounds like a pretty good indication that it's not an obviously bad 
idea. It's not rock-solid; actual evidence from the industry would be 
superior. But I think you would have presented that evidence already.


Re: Add D front-end, libphobos library, and D2 testsuite... to GCC

2018-10-28 Thread Neia Neutuladh via Digitalmars-d-announce
On Mon, 29 Oct 2018 03:43:49 +, Mike Parker wrote:
> Congratulations are in order for Iain Buclaw. His efforts have been
> rewarded in a big way. Last Friday, he got the greenlight to move
> forward with submitting his changes into GCC:

Awesome!

What frontend version is this, out of curiosity?


Re: New Initiative for Donations

2018-10-27 Thread Neia Neutuladh via Digitalmars-d-announce
On Sat, 27 Oct 2018 10:54:30 +, Joakim wrote:
> I see, so you want other taxpayers to bail you out for your mistakes,
> interesting.

One of the major points of having a government is to create these 
regulations that make it less likely for individuals to suffer from the 
actions of other people and organizations.

Another major point is to help people in need using the collective efforts 
of society.

Programs like FDIC in the United States exist to serve both of these: it's 
an extra set of regulations for banks, and compliant banks will be bailed 
out if circumstances require. If I choose an FDIC bank and the owners run 
off with my money, I didn't make an avoidable mistake, any more than being 
mugged in the street is me making a mistake.

If you oppose that, you're gunning for an eventual repeat of the Great 
Depression.

>> I think my concerns are rather normal. Judging by adoption, there's
>> some set of concerns that's normal.
> 
> Some of them are popularly held, but most are fairly irrational.
> 
> In any case, whether crypto-currencies ever go mainstream is irrelevant
> to this thread. They're already fairly popular among techies, from whom
> the D foundation is soliciting donations. As such, providing a way to
> accept such donations is literally a no-brainer: the work put into
> taking them will likely pay for itself many times over.

I suspect more techies use zloty than ethereum.


Re: New Initiative for Donations

2018-10-26 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 26 Oct 2018 06:19:29 +, Joakim wrote:

> On Friday, 26 October 2018 at 05:47:05 UTC, Neia Neutuladh wrote:
>> On Fri, 26 Oct 2018 02:38:08 +, Joakim wrote:
>>> As with D, sometimes the new _is_ better, so perhaps you shouldn't
>>> assume old is better either.
>>
>> There's no assuming going on. Cryptocurrencies are worse than credit
>> cards for everything that normal people care about,
> 
> Such as? I already noted that they're easier and cheaper, you simply
> flatly state that "normal people" find them worse.

In most countries where people are going to donate to D, the vast majority 
of people have access to a credit card.

>> If for some reason cryptocurrencies become popular and sufficiently
>> stable to be used as currency, I have no doubt that existing credit
>> card companies will start offering automatic currency exchange, so you
>> can have an account in USD and pay a vendor who accepts only Ethereum,
>> or vice versa. As such, accepting credit card payments is good enough.
> 
> I don't know what we'd be waiting for, the tokens I mentioned are all
> worth billions and widely used, particularly by techies:

Very few merchants accept any sort of cryptocurrency. I think I've found 
three. One was through a cryptocurrency forum, and one was Valve 
announcing that they would stop accepting it.

> Why would I wait for antiquated credit-card companies to accept these
> tokens? The whole point of these new tokens is to obsolete the credit
> card companies.

You wouldn't wait. You haven't waited. For you, the benefits are large 
enough and the downsides small enough that it doesn't make sense to wait. 
But I'm not you.

I would wait because I've lost access to important credentials before and 
had to send a copy of my government-issued ID to a company to get them to 
deactivate two-factor authentication. I've had to use password reset 
mechanisms frequently. I don't trust myself not to lose access to a 
cryptocurrency private key. And that would destroy currency and lose me my 
life savings.

I would wait because I want a mechanism to dispute transactions. Maybe I 
authorized that transaction, but the merchant didn't deliver.

I would wait because I want an environmentally-friendly system instead of 
one that uses as much electricity as Afghanistan to process fifteen 
transactions per second.

I would wait because cryptocurrencies have extremely volatile exchange 
rates, which makes it difficult to set prices or store value in them.

I would wait because I can't use cryptocurrency to do anything useful, so 
I would incur a fee to transfer money into it and another to transfer 
money out of it.

I would wait because I don't trust any cryptocurrency exchanges to stick 
around like I expect Visa or even a community bank to remain in business, 
or even not to commit fraud against me. While I might not trust my local 
bank much, I do trust my government to regulate them and to bail me out 
should the worst happen.

I think my concerns are rather normal. Judging by adoption, there's some 
set of concerns that's normal.


Re: New Initiative for Donations

2018-10-25 Thread Neia Neutuladh via Digitalmars-d-announce
On Fri, 26 Oct 2018 02:38:08 +, Joakim wrote:
> As with D, sometimes the new _is_ better, so perhaps you shouldn't
> assume old is better either.

There's no assuming going on. Cryptocurrencies are worse than credit cards 
for everything that normal people care about, and they're better than 
credit cards for illegal transactions. This might eventually change, and 
we can re-evaluate then.

If for some reason cryptocurrencies become popular and sufficiently stable 
to be used as currency, I have no doubt that existing credit card 
companies will start offering automatic currency exchange, so you can have 
an account in USD and pay a vendor who accepts only Ethereum, or vice 
versa. As such, accepting credit card payments is good enough.


Re: Beta 2.082.0

2018-10-17 Thread Neia Neutuladh via Digitalmars-d-announce
On Wednesday, 17 October 2018 at 14:02:20 UTC, Jesse Phillips wrote:
Wait, why does each get a special bailout? Doesn't until fill that role?


`until` is lazy. We could have `doUntil` instead, which would be 
eager and would return a boolean indicating whether to continue. 
We could all write `someRange.until!condition.each!func`. That's 
going to be clearer sometimes and less clear other times. So now 
we have options.
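The lazy combination mentioned above, sketched as a runnable snippet (the condition and action are placeholders of my choosing):

```d
import std.algorithm.iteration : each;
import std.range : until;
import std.stdio : writeln;

void main()
{
    // Lazy: take elements until the condition first holds,
    // then apply the action to each of them.
    [1, 2, 3, 4, 5].until!(x => x >= 4)
                   .each!(x => writeln(x));
}
```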


Re: Copy Constructor DIP and implementation

2018-09-11 Thread Neia Neutuladh via Digitalmars-d-announce
On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole 
wrote:
Here is a question (that I don't think has been asked) why not 
@copy?


It's not wrong to call this an implicit constructor since it's 
called implicitly. It also means that, if we get implicit 
constructors in general, we can keep the same syntax and 
annotations, and it will be consistent.


Also can we really not come up with an alternative bit of code 
than the tupleof to copying wholesale? E.g. super(other);


That would be possible, but it would be inconsistent with super 
constructors in general.


Re: [OT] My State is Illegally Preventing Me From Voting In The Upcoming 2018 US Elections

2018-09-09 Thread Neia Neutuladh via Digitalmars-d-announce
On Sunday, 9 September 2018 at 09:34:31 UTC, Nick Sabalausky 
(Abscissa) wrote:
1. As most United States citizens are implicitly aware (though 
the government assumes NO responsibility to ensure citizens are 
aware of this), to vote in a United States of America election 
and have the vote legally *count*, a United States citizen MUST 
vote on the exact day of elections, from the exact location 
determined to be the correct voting location for said citizen.


In Ohio, early voting begins 10 October:

https://www.sos.state.oh.us/elections/voters/voting-schedule/

Some areas have vote-by-mail as the default. This increases 
turnout for non-Presidential elections from abysmal to merely 
shameful. In these areas, you don't need to go to a specific 
location (though there are ballot drop-off locations if you don't 
want to trust your ballot in the mail).


Re: Blog post: using dynamic libraries in dub

2017-12-27 Thread Neia Neutuladh via Digitalmars-d-announce

On Monday, 25 December 2017 at 08:57:09 UTC, Jacob Carlborg wrote:
If I knew exactly what would need to be done I would most 
likely have done it already :). Perhaps Martin that implemented 
the support on Linux or David that, I think, implemented it for 
LDC on macOS would be better suited for such a bugzilla issue.


If you know that it doesn't work, please file an issue; a bug 
that just says "this doesn't work" is more valuable than its 
absence.


If you have a test case, that is valuable; "what would need to be 
done" is to make the test case work.


If you know that it works with LDC, that is also valuable; "what 
would need to be done" is to port over LDC's fixes.


I haven't used a Mac since 2012 (an experience that I am anxious 
to avoid repeating), so I don't even know whether TLS works with 
dynamic libraries on OSX. I can't test fixes. All I could do is 
report that there's a rumor.


Re: Blog post: using dynamic libraries in dub

2017-12-21 Thread Neia Neutuladh via Digitalmars-d-announce

On Tuesday, 19 December 2017 at 21:38:40 UTC, Mike Wey wrote:
And for GtkD, that is why it would make sense to relay on the 
packages supplied by your distribution. And just list "gtkd-3" 
in the "libs" section. Avoiding the need for the workaround to 
build a shared version.


That would be awesome. I'm not able to access the d-apt 
repository at the moment and Ubuntu 16.04 doesn't seem to have 
gtkd in the repositories. So for the near future, at least, I'll 
continue using this cruddy workaround.
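For reference, the distribution-package approach quoted above would look roughly like this in dub.sdl (assuming a system gtkd-3 package that installs a linker-visible shared library; the package name is from the quoted suggestion):

```sdl
name "my-app"
// link against the distribution-provided shared library
// instead of building gtk-d from source
libs "gtkd-3"
```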


Re: datefmt 1.0.0 released: parse datetimes and also format them

2017-12-19 Thread Neia Neutuladh via Digitalmars-d-announce

On Monday, 18 December 2017 at 09:03:09 UTC, Andrea Fontana wrote:
I think you should add some way to translate days/month in 
other language.


That would be great! Unfortunately, it requires a decent locales 
library.


Blog post: using dynamic libraries in dub

2017-12-19 Thread Neia Neutuladh via Digitalmars-d-announce
From the "it's a hacky workaround but it's what we've got" 
department: how to use dynamic libraries in dub, with GtkD as the 
example.


GtkD takes about 45MB on its own, and that means it can take a 
fair bit of time to build anything that depends on it -- even if 
it only uses a handful of symbols. Building it as a dynamic 
library can shrink compile times significantly.


https://blog.ikeran.org/?p=323

An example of this strategy in use: 
https://git.ikeran.org/dhasenan/resin-browser/src/master/dub.json


datefmt 1.0.0 released: parse datetimes and also format them

2017-12-12 Thread Neia Neutuladh via Digitalmars-d-announce

# Sales pitch

If you've ever had to parse datetime input from multiple sources 
and everyone's standardized on ISO8601, you might have found out 
that that's not quite as standard as you'd wish. This is where 
datefmt helps you.


---
import datefmt;
auto expected = SysTime(Date(2010, 1, 1), UTC());
foreach (date; ["2010-01-01", "2010-01-01T00:00:00",
        "2010-01-01 00:00:00.000Z"])
{
  SysTime parsed;
  assert(tryParse(date, ISO8601FORMAT, parsed));
  assert(expected == parsed);
}
---


# How does datefmt's parsing differ from dateparser?

dateparser is great when you have a date that's in some arbitrary 
format and you want to turn it into a sensible date. It's perfect 
for manual input.


datefmt is good when you have a restricted set of formats you 
need to accept and want to reject everything else -- generally 
when a wide range of systems using the same somewhat nebulous 
standard emit the stuff you need to parse.



# What about formatting?

datefmt can do formatting too! Most of its formatting options are 
taken from strftime, so it should be generally familiar.


And of course you can use predefined formats for RFC1123 and 
ISO8601:


auto st = SysTime(DateTime(2010, 4, 12, 15, 30, 00), UTC());
writeln(st.format(ISO8601FORMAT));
// 2010-04-12T15:30:00.00Z
writeln(st.format(RFC1123FORMAT));
// Mon, 12 Apr 2010 15:30:00 Z


# Is anyone using it?

I've been using this in my RSS reader for the past month or two, 
during which time it's been exposed to a number of horrible 
variants of both RFC1123 and ISO8601.



# How do I get it?

Add "datefmt": "~>1.0.0" to your dub.json and Bob's your uncle!

Or download the single file from 
https://raw.githubusercontent.com/dhasenan/datefmt/master/source/datefmt.d and put it in your project.


Licensed under MS-PL (BSD-style permissive license with patent 
grant); open an issue at 
https://github.com/dhasenan/datefmt/issues if you need a 
different license.


Re: Visual Studio Code code-d serve-d beta release

2017-08-05 Thread Neia Neutuladh via Digitalmars-d-announce

On Saturday, 5 August 2017 at 22:43:31 UTC, WebFreak001 wrote:
I just released a beta version on the visual studio marketplace 
that allows you to try out the latest features of serve-d.


Awesome! Once I worked around the binary placement issue, this 
actually gave me completion options, which is better than the 
previous version ever did for me.