[Issue 19159] `alloca` does not work in -betterC
https://issues.dlang.org/show_bug.cgi?id=19159 Jonathan Marler changed: CC added johnnymar...@gmail.com --
Re: What changes to D would you like to pay for?
On Wednesday, 5 September 2018 at 07:00:49 UTC, Joakim wrote: The D foundation is planning to add a way for us to pay for changes we'd like to see in D and its ecosystem, rather than having to code everything we need ourselves or find and hire a D dev to do it: "[W]e’re going to add a page to the web site where we can define targets, allow donations through Open Collective or PayPal, and track donation progress. Each target will allow us to lay out exactly what the donations are being used for, so potential donors can see in advance where their money is going. We’ll be using the State of D Survey as a guide to begin with, but we’ll always be open to suggestions, and we’ll adapt to what works over what doesn’t as we go along." https://dlang.org/blog/2018/07/13/funding-code-d/ I'm opening this thread to figure out what the community would like to pay for specifically, so we know what to focus on initially, whether as part of that funding initiative or elsewhere. I am not doing this in any official capacity, just a community member who would like to hear what people want. Please answer these two questions if you're using or would like to use D, I have supplied my own answers as an example: 1. What D initiatives would you like to fund and how much money would you stake on each? (Nobody is going to hold you to your numbers, but please be realistic.) I'd be willing to pay at least $100 each for these two: https://issues.dlang.org/show_bug.cgi?id=19159 https://issues.dlang.org/show_bug.cgi?id=18788 Quite honestly, though, I probably wouldn't do it myself for $100. These bounties really need to be $500 or more. If D is to be funded by individuals, there needs to be some way to organize individuals around common interest and raise funds for those tasks. For example, the D Language Foundation has a "Corporate Bronze" offer on its OpenCollective page that includes 3 priority bug fixes per month for $12,000. 
If we could get 24 like-minded people, willing to contribute $500 each, and vote on priority bugs, that could potentially get things moving in the right direction. That would be 1 1/2 bugs per contributor. I don't think that's bad. I'd be willing to join such a collective if I got at least 1 priority bug fix out of it. Even better, IMO, it'd be nice if the "Individual Sponsor" or "Organizational Sponsor" offers on the OpenCollective page included at least 1 priority bug fix. Mike
[Issue 19209] [ICE] Overriding a field in a baseclass issues an ICE
https://issues.dlang.org/show_bug.cgi?id=19209 --- Comment #3 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/bfd48f4a56bacfb8f01e6be27833e675b62eab7e Fix Issue 19209 - [ICE] Overriding a field in a baseclass issues an ICE https://github.com/dlang/dmd/commit/35558bd524e519d6ef58253e56f47cdc663a6593 Merge pull request #8665 from RazvanN7/Issue_19209 Fix Issue 19209 - [ICE] Overriding a field in a baseclass issues an ICE merged-on-behalf-of: Walter Bright --
[Issue 19209] [ICE] Overriding a field in a baseclass issues an ICE
https://issues.dlang.org/show_bug.cgi?id=19209 github-bugzi...@puremagic.com changed: Status: NEW -> RESOLVED; Resolution: --- -> FIXED --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 --- Comment #51 from Mike Franklin --- A potential additional $500 has been offered for a fix to this issue: https://forum.dlang.org/post/xlvstldxxehkqaxux...@forum.dlang.org --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 Mike Franklin changed: CC added slavo5...@yahoo.com; See Also removed http://d.puremagic.com/issues/show_bug.cgi?id=12343 --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 Mike Franklin changed: See Also removed http://d.puremagic.com/issues/show_bug.cgi?id=6772 --
[Issue 19179] extern(C++) small-struct by-val uses wrong ABI
https://issues.dlang.org/show_bug.cgi?id=19179 Mike Franklin changed: CC added slavo5...@yahoo.com; See Also added https://issues.dlang.org/show_bug.cgi?id=5570 --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 Mike Franklin changed: See Also added https://issues.dlang.org/show_bug.cgi?id=19179 --
Re: What changes to D would you like to pay for?
On Thursday, 6 September 2018 at 01:24:35 UTC, Laeeth Isharc wrote: $500.00 to fix these three together - they may well be essentially the same bug: https://issues.dlang.org/show_bug.cgi?id=19179 https://issues.dlang.org/show_bug.cgi?id=5570 https://issues.dlang.org/show_bug.cgi?id=13957 According to BountySource (https://www.bountysource.com/teams/d/issues?tracker_ids=383571) Issue 5570 already has a bounty of $445. With the addition of your $500 that would make the bounty $945, which isn't bad. Mike
[Issue 13957] 64 bit C ABI not followed for passing structs with floating+integer types
https://issues.dlang.org/show_bug.cgi?id=13957 Mike Franklin changed: CC added slavo5...@yahoo.com; See Also added https://issues.dlang.org/show_bug.cgi?id=5570 --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 Mike Franklin changed: See Also added https://issues.dlang.org/show_bug.cgi?id=13957 --
[Issue 5570] 64 bit C ABI not followed for passing structs and complex numbers as function parameters
https://issues.dlang.org/show_bug.cgi?id=5570 Mike Franklin changed: See Also added https://issues.dlang.org/show_bug.cgi?id=12343 --
[Issue 12343] Win64 64 bit C ABI not followed for passing structs as function parameters
https://issues.dlang.org/show_bug.cgi?id=12343 Mike Franklin changed: CC added slavo5...@yahoo.com; See Also added https://issues.dlang.org/show_bug.cgi?id=5570 --
Re: linking trouble
On Friday, 7 September 2018 at 02:44:24 UTC, hridyansh thakur wrote: On Thursday, 6 September 2018 at 16:59:43 UTC, rikki cattermole wrote: On 07/09/2018 4:03 AM, hridyansh thakur wrote: [...] That definition isn't complete. Missing at the very least ``();`` to make it a function declaration. [...] So what are the errors you're getting? And what commands are you executing? The compiler is failing to recognise the .o file

A .o file? Are you using MinGW to compile your C++? You're not going to get very far if you are. You have two options that are guaranteed to work. Use the Digital Mars C++ compiler to compile your C++ file to an OMF object, then use the default OPTLINK linker when building. This only supports 32-bit builds:

    dmd foo.d bar.obj

The other option is to use the Microsoft linker, which requires the MS build tools to be installed, either via the build tools distribution or Visual Studio. Then you can compile your C++ file to a COFF object with the MS compiler for 32- or 64-bit and build your executable with one of the following:

    dmd -m32mscoff foo.d bar.obj
    dmd -m64 foo.d bar.obj
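To make the setup above concrete, here is a minimal sketch of the D side of such a build. The file names and the `add` function are hypothetical examples, assuming a bar.cpp that defines `int add(int a, int b)`:

```d
// foo.d -- hypothetical example; assumes bar.cpp defines: int add(int a, int b)
import std.stdio;

// extern(C++) tells dmd to use C++ name mangling and the C++ calling
// convention, so the linker can match this declaration to the symbol
// in bar.obj.
extern(C++) int add(int a, int b);

void main()
{
    writeln(add(2, 3)); // resolved at link time from bar.obj
}
```

Compile bar.cpp with the matching toolchain (dmc for OMF/OPTLINK, cl for COFF/MS link), then build with one of the dmd command lines above.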
Re: LDC 1.12.0-beta1
On Wednesday, 5 September 2018 at 05:15:45 UTC, Joakim wrote: I'll add native beta builds for Android in a couple days. The native Android builds are up at the above github release link. I think this is the last time I'll put beta builds out, too much of a PITA to rebuild llvm each time. I'll continue maintaining the ldc package in the official Termux package repo though. The Termux package build script used to build these betas is online here: https://github.com/joakim-noah/termux-packages/tree/beta/packages/ldc-beta
Re: Variant is just a class
On Thursday, 6 September 2018 at 20:25:18 UTC, Neia Neutuladh wrote: On Thursday, 6 September 2018 at 10:18:43 UTC, Josphe Brigmo wrote: Variants can hold an arbitrary set of types. I imagine that it is effectively just a type id and an object pointer!? It's a typeid and a static array large enough to hold any basic builtin type: the now-deprecated creal, a dynamic array, or a delegate. If you make a Variant from an object, it stores that object reference. The object reference is just a pointer, yes. If you make a Variant from a 256-byte struct, it copies that struct onto the heap and stores a pointer. If you make a Variant from a 6-byte struct, then it stores that struct and does no heap allocations. If so, then it really is just a special kind of class. It's similar to a java.lang.Object with explicit boxing, but without the need to create a new wrapper class for each value type. It seems that variant and oop are essentially the same thing, more or less, as whatever can be done in one can effectively be done in the other, except, of course, that the class version has compile-time type information associated with it, which sort of restricts variant to a subset of all types!?! Object-oriented programming includes inheritance and member function overloading. Variant doesn't; it's just about storage. We are talking about two different things that are related: A variant holds a set of objects. Using VariantClass limits the types to a subset and allows for inherited types to be added. Those objects may be classes which already have inheritance, and hence matching and calling their methods will dispatch appropriately. A variant sits on top of the object hierarchy; it is not somewhere in the middle where objects will inherit from it (which is impossible). The difference is simply Object x; Variant y; There is very little difference. If x is a class type, then the variant will hold a class type and it will act just like the Object does. 
That is, Variant can do no worse than just being an object (except that it then becomes pointless, as it can hold only one type). If you're working with classes, you'd be better off using a base class or interface instead of Variant for fields that can only hold objects of those types. But variant can reduce the code complexity if one restricts its inputs to a specific class of types: Yes, for which you can use std.variant.Algebraic. For instance, Algebraic!(int, long, float) will accept ints, longs, and floats, but nothing else. It is not the same since Algebraic does not allow inherited types inside its container? Or maybe it does? VariantClass!X will accept anything derived from X and it will dispatch appropriately, since it just delegates to the normal class dispatching mechanisms. Then VariantClass will prevent any arbitrary type from being assigned to the variant, effectively allowing inheritance to be used (in the sense that it will prevent any type from being used at compile time, like inheritance): VariantClass!X v; // only T : X's are allowed. That's equivalent to `X v;` except with a wrapper around it. Yes, but the whole point of the wrapper is simply to ensure that only a subset of types is used while allowing for different types. X v; only allows types derived from X. VariantClass!(X, Y) v; allows types derived from X or from Y. Then matching on the type will simply delegate everything appropriately. If X and Y have a common type I then one can do VariantClass!I v; which would be the same as I v; But not the same as Algebraic!I v; because we couldn't stick in a derived object for I. We can cast, and it works, but it simply doesn't naturally allow derived types for some reason. VariantClass allows derived types. This is a big difference because Algebraic doesn't naturally work well with oop but VariantClass does. Matching then is dispatch. We could further extend VariantClass to return specific classes for each type that dispatch to match and vice versa. 
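For comparison, a small self-contained sketch of how std.variant.Algebraic restricts the stored types and dispatches on the one currently held (illustrative only; it does not model the hypothetical VariantClass discussed above):

```d
import std.variant : Algebraic, visit;

// Algebraic accepts only the listed types, checked at compile time.
alias Number = Algebraic!(int, double);

void main()
{
    Number n = 42;
    assert(n.get!int == 42);

    n = 3.5; // rebinding to another allowed type is fine
    // n = "hi"; // would be a compile-time error: string is not in the set

    // visit dispatches on the type currently held, a bit like virtual dispatch
    auto kind = n.visit!((int i) => "int", (double d) => "double");
    assert(kind == "double");
}
```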
I think you're saying that this sort of Algebraic could expose any methods and fields common to all its types? Can D project interfaces like this? interface I { int foo(int); } I i = project!I(o); You can write code to make that work. It would create a wrapper class that implements the requested interface, and that wrapper would just forward everything to the wrapped value. It would be a lot easier just to have the type implement the interface itself, if that's possible. But this is just oop. Opp requires more work to design because one has to implement the interfaces in a prescribed way. What I am talking about taking any object and if it has certain methods that conform to some "interface" then it will behave as if it were derived... even if it were not specified as derived. Why this is better is because it allows objects that may not have been inherited from some interface in the
Re: Alias this and opDispatch override
On Friday, 7 September 2018 at 02:22:58 UTC, Domain wrote: The following code fails to compile:

enum KeyMod : int
{
    LCtrl = 1 << 0,
    RCtrl = 1 << 1,
    Ctrl = LCtrl | RCtrl,
}

struct Flags(E)
{
public:
    BitFlags!(E, Yes.unsafe) flags;
    alias flags this;

    bool opDispatch(string name)() const
        if (__traits(hasMember, E, name))
    {
        enum e = __traits(getMember, E, name);
        return (mValue & e) != 0;
    }
}

Flags!KeyMod keys;
keys.LCtrl = true;
assert(keys.Ctrl);

Error: no property LCtrl for type Flags!(KeyMod)
Error: no property Ctrl for type Flags!(KeyMod)

Sorry. This works:

struct Flags(E)
{
public:
    BitFlags!(E, Yes.unsafe) flags;
    alias flags this;

    bool opDispatch(string name)() const
        if (__traits(hasMember, E, name))
    {
        enum e = __traits(getMember, E, name);
        return cast(int)(flags & e) != 0;
    }

    void opDispatch(string name)(bool set)
        if (__traits(hasMember, E, name))
    {
        enum e = __traits(getMember, E, name);
        if (set) flags |= e;
        else flags &= ~e;
    }
}
Re: linking trouble
On Thursday, 6 September 2018 at 16:59:43 UTC, rikki cattermole wrote: On 07/09/2018 4:03 AM, hridyansh thakur wrote: [...] That definition isn't complete. Missing at the very least ``();`` to make it a function declaration. [...] So what are the errors you're getting? And what commands are you executing? The compiler is failing to recognise the .o file
Alias this and opDispatch override
The following code fails to compile:

enum KeyMod : int
{
    LCtrl = 1 << 0,
    RCtrl = 1 << 1,
    Ctrl = LCtrl | RCtrl,
}

struct Flags(E)
{
public:
    BitFlags!(E, Yes.unsafe) flags;
    alias flags this;

    bool opDispatch(string name)() const
        if (__traits(hasMember, E, name))
    {
        enum e = __traits(getMember, E, name);
        return (mValue & e) != 0;
    }
}

Flags!KeyMod keys;
keys.LCtrl = true;
assert(keys.Ctrl);

Error: no property LCtrl for type Flags!(KeyMod)
Error: no property Ctrl for type Flags!(KeyMod)
[Issue 18388] std.experimental.logger slow performance
https://issues.dlang.org/show_bug.cgi?id=18388 --- Comment #10 from Arun Chandrasekaran --- s/_logger.tracef/printf/

06-09-2018 18:25:51 vaalaham ~/code/d/std-log-benchmark
$ time ./std-log-benchmark 8 100 > /dev/null

real    0m0.495s
user    0m1.254s
sys     0m2.100s

06-09-2018 19:02:22 vaalaham ~/code/d/std-log-benchmark $

So the slowdown is definitely related to phobos. --
Re: DIP25/DIP1000: My thoughts round 2
On Sunday, 2 September 2018 at 05:14:58 UTC, Chris M. wrote: Hopefully that was coherent. Again this is me for me to get my thoughts out there, but also I'm interested in what other people think about this. Somewhat related, I was reading through this thread on why we can't do ref variables and thought this was interesting. A lot of these use cases could be prevented. I tacked my own comments on with //** https://forum.dlang.org/post/aqvtunmdqfkrsvzlg...@forum.dlang.org

struct S { return ref int r; }

//ref local variable/stack, Ticking timebomb
//compiler may refuse
//** nope, never accept this
void useRef(ref S input, int r)
{
    input.r = r; //** error
}

//should be good, right?
//Can declare @safe, right??? //maybe, maybe not. //** sure we can
S useRef2(S input, return ref int r)
{
    input.r = r;
    return S;
}

//Why should indirect care if it's local/stack or heap?
//** someone double-check my rationale here, but it should be fine
S indirect(return ref int r)
{
    return useRef2(S(), r);
}

//local variables completely okay to ref! Right?
//** Nope! Reject! indirect2() knows whatever receives the return value can't outlive r
S indirect2()
{
    int r;
    return useRef2(S(), r);
}

S someScope()
{
    int* pointer = new int(31); //i think that's right
    int local = 127;
    S s;

    //reference to calling stack! (which may be destroyed now);
    //Or worse it may silently work for a while
    //** or the function never gets compiled
    useRef(s, 99);
    assert(s.r == 99);
    return s;

    s = useRef2(s, pointer); //or is it *pointer? //** no clue what to say about this one
    assert(s.r == 31); //good so far if it passes correctly
    return s; //good, heap allocated

    s = useRef2(s, local); //** fine here, local outlives s
    assert(s.r == 127); //good so far (still local)
    return s; //Ticking timebomb! //** but we reject it here

    s = indirect(local); //** fine here, local outlives s
    assert(s.r == 127); //good so far (still local)
    return s; //timebomb! //** reject again

    s = indirect2(); //** never accepted in the first place
    return s; //already destroyed! Unknown consequences!
}
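As a concrete anchor for the speculation above, here is a minimal compilable sketch of what `return ref` actually expresses today (my own example, not the poster's code):

```d
// `return ref` ties the lifetime of the returned reference to the argument
// it was derived from, so the compiler can reject escapes of locals.
ref int identity(return ref int x) { return x; }

void main()
{
    int local = 7;
    identity(local) = 8; // fine: the reference does not outlive `local`
    assert(local == 8);

    // The compiler rejects escaping a local through such a function:
    // ref int leak() { int tmp; return identity(tmp); } // error
}
```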
Re: Java also has chained exceptions, done manually
On Thursday, 6 September 2018 at 23:34:20 UTC, Andrei Alexandrescu wrote: On 9/6/18 1:13 PM, Neia Neutuladh wrote: The actual structure of the exceptions: `primary` has children `scope 2` and `scope 1`; `scope 2` has child `cause 2`; `scope 1` has child `cause 1`. A tree. No, it's a list. What relationship is supposed to be encoded in Throwable.next?
Re: DIP Draft Reviews
On Thursday, 6 September 2018 at 17:44:28 UTC, Jonathan M Davis wrote: Of course, what further complicates things here is that the author is Walter, and ultimately, it's Walter and Andrei who make the decision on their own. And if Walter doesn't respond to any of the feedback or address it in the DIP, it all comes across as if the DIP itself is just a formality. The fact that he wrote a DIP and presented it for feedback is definitely better than him simply implementing it, since it does give him the chance to get feedback on the plan and improve upon it, but if he then doesn't change anything or even respond to any of the review comments, then it makes it seem kind of pointless that he bothered with a DIP. At that point, it just serves as documentation of his intentions. This is all in stark contrast to the case where someone other than Walter or Andrei wrote the DIP, and the author doesn't bother to even respond to the feedback let alone incorporate it, since they then at least still have to get the DIP past Walter and Andrei, and if the DIP has not taken any of the feedback into account, then presumably, it stands a much worse chance of making it through. On the other hand, if the DIP comes from Walter or Andrei, they only have the other person to convince, and that makes it at least seem like there's a decent chance that it's just going to be rubber-stamped when the DIP author doesn't even respond to feedback. I think that it's great for Walter and Andrei to need to put big changes through the DIP process just like the rest of us do, but given that they're the only ones deciding what's accepted, it makes the whole thing rather weird when a DIP comes from them. - Jonathan M Davis If Walter had tried to implement this w/o a DIP, that would have been among the first reviews received, so it is good that he has done it as a DIP. But not using it for improving the design is almost as bad. 
I view this DIP like DIP1000 but worse: at least with DIP1000 there was clear motivation, and despite any breakage and poor documentation of continued changes due to unforeseen requirements, it solves a real problem and has brought real value. It could have been handled much better, but is a net positive IMO. DIP1017, OTOH, has flawed/unsubstantiated motivation, will break lots of code, and solves a problem that is already solved by GDC/LDC, where the only benefit other than documentation is faster code, and it could be solved in the same way as GDC/LDC with none of the breakage and complications. Any marginal benefit in speed of compiled code for DMD _only_ (which is not why one uses DMD) comes at the cost of: the opportunity cost of development/review and ongoing implementation fixes; unknown but probably very large code breakages; slower compile times for all three compilers; increased complexity in the type system and for new users; and all the other reasons listed in the draft and community review. IMO, a very much net negative. I now understand why Mihails left over DIP1000...
Re: Java also has chained exceptions, done manually
On 9/6/18 1:13 PM, Neia Neutuladh wrote: The actual structure of the exceptions: `primary` has children `scope 2` and `scope 1`; `scope 2` has child `cause 2`; `scope 1` has child `cause 1`. A tree. No, it's a list. The encoded structure: a linked list where only the first two positions have any structure-related meaning and the rest are just a sort of mish-mash. This isn't a situation you get in Java because Java doesn't have a way to enqueue multiple independent actions at the end of the same block. You just have try/finally and try(closeable). (As an aside, it does seem we could allow some weird cases where people rethrow some exception down the chain, thus creating loops. Hopefully that's handled properly.) Not if you semi-manually create the loop: auto e = new Exception("root"); scope (exit) throw new Exception("scope 1", e); throw e; Filed as https://issues.dlang.org/show_bug.cgi?id=19231 Thanks! Andrei
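A minimal self-contained sketch of the list structure being debated: the in-flight exception stays first, and the collateral scope(exit) exception is appended through Throwable.next:

```d
void main()
{
    Throwable caught;
    try
    {
        // the scope(exit) throws while "root" is already in flight,
        // so it is chained onto the primary exception via .next
        scope (exit) throw new Exception("scope 1");
        throw new Exception("root");
    }
    catch (Exception e)
        caught = e;

    // The chain is a singly linked list reachable through .next
    assert(caught.msg == "root");
    assert(caught.next !is null && caught.next.msg == "scope 1");
    assert(caught.next.next is null);
}
```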
[Issue 18388] std.experimental.logger slow performance
https://issues.dlang.org/show_bug.cgi?id=18388 --- Comment #9 from Arun Chandrasekaran --- > You need to flush after each log call. If there is a log buffer and the > program crashes, you might not see the log line that indicates the problem That's right, but it may be worth making this configurable (an async mode)? Also, I'm not sure how formattedWrite is implemented. As I see it, it writes piece by piece into the LockingWriter. The log string could probably be constructed first, followed by a single write into the sink. This would avoid multiple chunked writes. --
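A sketch of the suggested single-write approach, building the formatted line in a buffer first. The helper name is hypothetical; this is not the actual std.experimental.logger code:

```d
import std.array : appender;
import std.format : formattedWrite;

// Build the whole log line up front, so the sink needs only one write
// instead of many small chunked writes under the lock.
string buildLogLine(Args...)(string fmt, Args args)
{
    auto buf = appender!string();
    buf.formattedWrite(fmt, args);
    return buf.data;
}

void main()
{
    auto line = buildLogLine("%s: thread %s said %s", "INFO", 3, "hello");
    assert(line == "INFO: thread 3 said hello");
    // a real logger would now do a single sink.put(line) under the lock
}
```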
[Issue 18388] std.experimental.logger slow performance
https://issues.dlang.org/show_bug.cgi?id=18388 --- Comment #8 from Arun Chandrasekaran --- spdlog (C++) does exactly the same thing and it is much faster, as I shared in the stats earlier. It is apples to apples, IMO. So the contention is the same in both the C++ and D versions. In fact, D does much better with atomic operations. Disabling the log statements with 8 threads and 1 iteration, the C++ version takes

real    0m3.167s
user    0m25.215s
sys     0m0.004s

whereas the D version takes

real    0m2.527s
user    0m20.124s
sys     0m0.000s

You can compare the asm at https://godbolt.org/z/JjfTyw --
Re: John Regehr on "Use of Assertions"
On 9/5/2018 4:55 PM, Timon Gehr wrote: John rather explicitly states the opposite in the article. I believe that his statement: "it’s not an interpretation that is universally useful" is much weaker than saying "the opposite". He did not say it was "never useful". For example, it is not universally true that airplanes never crash. But it is rare enough that we can usefully assume the next one we get on won't crash.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 17:19:01 UTC, Joakim wrote: No, Swift counts grapheme clusters by default, so it gives 1. I suggest you read the linked Swift chapter above. I think it's the wrong choice for performance, but they chose to emphasize intuitiveness for the common case. I'd like to point out that Swift spent a lot of time reworking how strings are handled. If my memory serves me well, they reworked strings from version 2 to 3 and finalized them in version 4. Swift 4 includes a faster, easier-to-use String implementation that retains Unicode correctness and adds support for creating, using and managing substrings. It took them somewhere along the lines of two years to get string handling to an acceptable and predictable state. And it annoyed the Swift user base greatly, but a lot of changes got made on the way to reaching a stable API. Being honest, I personally find Swift an easier language, despite it lacking IDE support on several platforms and having no official Windows compiler.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 20:15:22 UTC, Jonathan M Davis wrote: On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote: D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-un icode-code-points/ - tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work. I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful. - Jonathan M Davis Is there a unittest somewhere in phobos you know that one can be pointed to that shows the handling of these 4 variations you say should be dealt with first? Or maybe a PR that did some of this work that one could investigate? 
I ask so I can see in code what it means to make something not rely on autodecoding and deal with ranges of char, wchar, dchar or graphemes. Or a current "easy" bugzilla issue maybe that one could try a hand at?
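Not a Phobos unittest, but a tiny self-contained example of the three levels under discussion, using a combining accent to make them differ:

```d
import std.range : walkLength;
import std.uni : byGrapheme;

void main()
{
    // "e" followed by U+0301 (combining acute): one user-perceived character
    string s = "e\u0301";
    assert(s.length == 3);                // UTF-8 code units
    assert(s.walkLength == 2);            // code points (via auto-decoding)
    assert(s.byGrapheme.walkLength == 1); // grapheme clusters
}
```

Code that handles all four range flavors correctly has to give sensible answers at each of these levels rather than assuming every character range is a range of dchar.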
Re: Variant is just a class
On Thursday, 6 September 2018 at 10:18:43 UTC, Josphe Brigmo wrote: Variants can hold an arbitrary set of types. I imagine that it is effectively just a type id and an object pointer!? It's a typeid and a static array large enough to hold any basic builtin type: the now-deprecated creal, a dynamic array, or a delegate. If you make a Variant from an object, it stores that object reference. The object reference is just a pointer, yes. If you make a Variant from a 256-byte struct, it copies that struct onto the heap and stores a pointer. If you make a Variant from a 6-byte struct, then it stores that struct and does no heap allocations. If so, then it really is just a special kind of class. It's similar to a java.lang.Object with explicit boxing, but without the need to create a new wrapper class for each value type. It seems that variant and oop are essentially the same thing, more or less, as whatever can be done in one can effectively be done in the other, except, of course, that the class version has compile-time type information associated with it, which sort of restricts variant to a subset of all types!?! Object-oriented programming includes inheritance and member function overloading. Variant doesn't; it's just about storage. If you're working with classes, you'd be better off using a base class or interface instead of Variant for fields that can only hold objects of those types. But variant can reduce the code complexity if one restricts its inputs to a specific class of types: Yes, for which you can use std.variant.Algebraic. For instance, Algebraic!(int, long, float) will accept ints, longs, and floats, but nothing else. Then VariantClass will prevent any arbitrary type from being assigned to the variant, effectively allowing inheritance to be used (in the sense that it will prevent any type from being used at compile time, like inheritance): VariantClass!X v; // only T : X's are allowed. That's equivalent to `X v;` except with a wrapper around it. 
Matching then is dispatch. We could further extend VariantClass to return specific classes for each type that dispatch to match and vice versa. I think you're saying that this sort of Algebraic could expose any methods and fields common to all its types? Can D project interfaces like this? interface I { int foo(int); } I i = project!I(o); You can write code to make that work. It would create a wrapper class that implements the requested interface, and that wrapper would just forward everything to the wrapped value. It would be a lot easier just to have the type implement the interface itself, if that's possible. It seems, though, that this cannot be used at runtime, since function signatures are not transported in the binary? If they are, then maybe it would work and would reduce the overhead of oop, as one could just project types to other types that overlap. You can use the witchcraft library on dub to perform runtime introspection, but there would be a performance penalty.
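A minimal sketch of such a projection wrapper, done at compile time rather than runtime. The `project` helper and `Projection` class are my own illustration, not an existing library; a general version would generate the forwarding methods with static foreach over the interface's members:

```d
interface I { int foo(int); }

// Wraps any type that structurally has a matching foo and forwards to it.
class Projection(T) : I
{
    private T wrapped;
    this(T t) { wrapped = t; }
    int foo(int x) { return wrapped.foo(x); }
}

// Hypothetical helper: hand back the wrapped value through the interface.
I project(T)(T t) { return new Projection!T(t); }

struct S { int foo(int x) { return x * 2; } }

void main()
{
    I i = project(S()); // S never declared `: I`, but conforms structurally
    assert(i.foo(21) == 42);
}
```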
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote: > D makes the code-point case default and hence that becomes the > simplest to use. But unfortunately, the only thing I can think of > that requires code point representations is when dealing > specifically with unicode algorithms (normalization, etc). Here's > a good read on code points: > https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-un > icode-code-points/ - > > tl;dr: application logic does not need or want to deal with code > points. For speed units work, and for correctness, graphemes work. I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful. - Jonathan M Davis
[Issue 18388] std.experimental.logger slow performance
https://issues.dlang.org/show_bug.cgi?id=18388 --- Comment #7 from Robert Schadek --- I didn't say anything about API calls. That is not the problem. The problem with the benchmark is that the threads share memory. That means each write will, given you tested on a multicore CPU, invalidate some of the CPU's caches. That means the program has to go to RAM, and that is slow, really slow. I bet you a drink at next year's DConf that if you check perf you will find that the CPU is waiting for data from RAM most of the execution time. You need to flush after each log call. If there is a log buffer and the program crashes, you might not see the log line that indicates the problem. --
Re: This is why I don't use D.
On Thursday, September 6, 2018 12:35:06 PM MDT Joakim via Digitalmars-d wrote: > Ah, but would you actually pay for such a service to be set up? > > https://forum.dlang.org/thread/acxedxzzesxkyomrs...@forum.dlang.org > > It's all well and good to hope for such services, but they're > unlikely to happen unless paid for. That's actually something that I could see happening without anyone being paid to do it, but it's much more likely to get done in a timely manner if someone is paid to do it, and it's arguably critical enough for the dub ecosystem that it's something that the Foundation should pay someone to do. - Jonathan M Davis
Re: Bug with writeln?
On Thursday, September 6, 2018 1:05:03 PM MDT Steven Schveighoffer via Digitalmars-d-learn wrote: > On 9/6/18 2:52 PM, Jonathan M Davis wrote: > > On Thursday, September 6, 2018 12:21:24 PM MDT Steven Schveighoffer via > > > > Digitalmars-d-learn wrote: > >> On 9/6/18 12:55 PM, Jonathan M Davis wrote: > >>> It's not a bug in writeln. Any time that a range is copied, you must > >>> not > >>> do _anything_ else with the original unless copying it is equivalent > >>> to > >>> calling save on it, because the semantics of copying a range are > >>> unspecified. They vary wildly depending on the range type (e.g. > >>> copying > >>> a dynamic array is equivalent to calling save, but copying a class > >>> reference is not). When you pass the range to writeln, you must > >>> assumed > >>> that it may have been consumed. And since you have range of ranges, > >>> you > >>> must assume that the ranges that are contained may have been consumed. > >>> If you want to pass them to writeln and then do anything else with > >>> them, then you'll need to call save on every range involved (which is > >>> a > >>> bit of a pain with a range of ranges, but it's necessary all the > >>> same). > >> > >> This is not necessarily true. It depends how the sub-ranges are > >> returned. > >> > >> The bug is that formattedWrite takes ranges sometimes by ref, sometimes > >> not. > >> > >> formattedWrite should call save on a forward range whenever it makes a > >> copy, and it doesn't. > >> > >> Case in point, it doesn't matter if you call writeln(b.save), the same > >> thing happens. > > > > That's still not a bug in formattedWrite. save only duplicates the > > outer-most range. And since writeln will ultimately iterate through the > > inner ranges - which weren't saved - you end up with them being > > consumed. > > That is the bug -- formattedWrite should save all the inner ranges > (writeln calls formattedWrite, and lets it do all the work). 
To not do > so leaves it open to problems such as consuming the sub ranges. > > I can't imagine that anyone would expect or desire the current behavior. It's exactly what you're going to get in all cases if the ranges aren't forward ranges, and it's what you have to do in general when passing ranges of ranges to functions if you want to be able to continue to use any of the ranges involved after passing them to the function. Changing formattedWrite to work around it is only a workaround for this particular case. It's still a bug in general - though given that this would be one of the more common cases, working around it in this particular case may be worth it. It's still a workaround though and not something that can be relied on with range-based code in general - especially when most range-based code isn't written to care about what the element types are and copies elements around all the time. > Ironically, when that bug is fixed, you *don't* have to call save on the > outer range! Except you do, because it's passed by value. If it's a dynamic array, then you're fine, since copying saves, but in the general case, you still do. > > When you're passing a range of ranges to a function, you need to > > recursively save them if you don't want the inner ranges in the > > original range to be consumed. Regardless of what formattedWrite does, > > it's a general issue with any function that you pass a range of ranges. > > It comes right back to the same issue of the semantics of copying > > ranges being unspecified and that you therefore must always use save on > > any ranges involved if you want to then use those ranges after having > > passed them to a function or copy them doing anything else. It's that > > much more annoying when you're dealing with a range of ranges rather > > than a range of something else, but the issue is the same. > It's only a problem if the subranges are returned by reference.
If they > aren't, then no save is required (because they are already copies). The > fix in this case is to make a copy if possible (using save as expected). > > I think the save semantics have to be one of the worst designs in D. On that we can definitely agree. I'm strongly of the opinion that it should have been required that forward ranges be dynamic arrays or structs (no classes allowed) and that it be required that they have a postblit / copy constructor if the default copy wasn't equivalent to save. If you wanted a class that was a forward range, you would then have to wrap it in a struct with the appropriate postblit / copy constructor. That way, copying a forward range would _always_ be saving it. The harder question is what to then do with basic input ranges. Having them share code with forward ranges is often useful but also frequently a disaster, and to really be correct, they would need to either be full-on reference types or always passed around by reference. Allowing partial reference types is a total disaster when you're allowed to copy the range. Requiring that they be classes would
Re: Bug with writeln?
On 9/6/18 2:52 PM, Jonathan M Davis wrote: On Thursday, September 6, 2018 12:21:24 PM MDT Steven Schveighoffer via Digitalmars-d-learn wrote: On 9/6/18 12:55 PM, Jonathan M Davis wrote: It's not a bug in writeln. Any time that a range is copied, you must not do _anything_ else with the original unless copying it is equivalent to calling save on it, because the semantics of copying a range are unspecified. They vary wildly depending on the range type (e.g. copying a dynamic array is equivalent to calling save, but copying a class reference is not). When you pass the range to writeln, you must assumed that it may have been consumed. And since you have range of ranges, you must assume that the ranges that are contained may have been consumed. If you want to pass them to writeln and then do anything else with them, then you'll need to call save on every range involved (which is a bit of a pain with a range of ranges, but it's necessary all the same). This is not necessarily true. It depends how the sub-ranges are returned. The bug is that formattedWrite takes ranges sometimes by ref, sometimes not. formattedWrite should call save on a forward range whenever it makes a copy, and it doesn't. Case in point, it doesn't matter if you call writeln(b.save), the same thing happens. That's still not a bug in formattedWrite. save only duplicates the outer-most range. And since writeln will ultimately iterate through the inner ranges - which weren't saved - you end up with them being consumed. That is the bug -- formattedWrite should save all the inner ranges (writeln calls formattedWrite, and lets it do all the work). To not do so leaves it open to problems such as consuming the sub ranges. I can't imagine that anyone would expect or desire the current behavior. Ironically, when that bug is fixed, you *don't* have to call save on the outer range! 
When you're passing a range of ranges to a function, you need to recursively save them if you don't want the inner ranges in the original range to be consumed. Regardless of what formattedWrite does, it's a general issue with any function that you pass a range of ranges. It comes right back to the same issue of the semantics of copying ranges being unspecified and that you therefore must always use save on any ranges involved if you want to then use those ranges after having passed them to a function or copy them doing anything else. It's that much more annoying when you're dealing with a range of ranges rather than a range of something else, but the issue is the same. It's only a problem if the subranges are returned by reference. If they aren't, then no save is required (because they are already copies). The fix in this case is to make a copy if possible (using save as expected). I think the save semantics have to be one of the worst designs in D. -Steve
[Issue 18388] std.experimental.logger slow performance
https://issues.dlang.org/show_bug.cgi?id=18388 --- Comment #6 from Arun Chandrasekaran --- > The benchmark becomes single threaded around three times for each line > printed. How is it a benchmark issue? I agree that three API calls for a single log line is expensive. Isn't that a problem with std.experimental.logger? I also see that the flush after every line is contributing to the slowdown. Probably do a flush only when the message level is fatal/error/warn and also during termination? --
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 16:44:11 UTC, H. S. Teoh wrote: On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > // D > auto a = "á"; > auto b = "á"; > auto c = "\u200B"; > auto x = a ~ c ~ a; > auto y = b ~ c ~ b; > > writeln(a.length); // 2 wtf > writeln(b.length); // 3 wtf > writeln(x.length); // 7 wtf > writeln(y.length); // 9 wtf [...] This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme. I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme). Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.) And even in your given example, what should .count return when there's a zero-width character? If you're counting the number of visual places taken by the string (e.g., you're trying to align output in a fixed-width terminal), then *both* versions of your code are wrong, because zero-width characters do not occupy any space when displayed. If you're counting the number of code points, though, e.g., to allocate the right buffer size to convert to dstring, then you want to count the zero-width character as 1 rather than 0. 
And that's not to mention double-width characters, which should count as 2 if you're outputting to a fixed-width terminal. Again I say, you need to know how Unicode works. Otherwise you can easily deceive yourself to think that your code (both in D and in Swift and in any other language) is correct, when in fact it will fail miserably when it receives input that you didn't think of. Unicode is NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between "characters" and display length. Or 1-to-1 mapping between any of the various concepts of string "length", in fact. In ASCII, array length == number of code points == number of graphemes == display width. In Unicode, array length != number of code points != number of graphemes != display width. Code written by anyone who does not understand this is WRONG, because you will inevitably end up using the wrong value for the wrong thing: e.g., array length for number of code points, or number of code points for display length. Not even .byGrapheme will save you here; you *need* to understand that zero-width and double-width characters exist, and what they imply for display width. You *need* to understand the difference between code points and graphemes. There is no single default that will work in every case, because there are DIFFERENT CORRECT ANSWERS depending on what your code is trying to accomplish. Pretending that you can just brush all this detail under the rug of a single number is just deceiving yourself, and will inevitably result in wrong code that will fail to handle Unicode input correctly. T It's a totally fair comparison. .count in swift is the equivalent of .length in D, you use that to get the size of an array, etc. They've just implemented string.length as string.byGrapheme.walkLength. So it's intuitively correct (and yes, slower). If you didn't want the default though then you could also specify what "view" over characters you want. E.g. let a = "á̂" a.count // 1 <-- Yes it is exactly as expected. 
a.unicodeScalars // 3 a.utf8.count // 5 I don't really see any issues with a zero-width character. If you want to deal with screen width (i.e. pixel space) that's not the same as how many characters are in a string. And it doesn't matter whether you go byGrapheme or byCodePoint or byCodeUnit because none of those represent a single column on screen. A zero-width character is 0 *width* but it's still *one* character. There's no .length/size/count in any language (that I've heard of) that'll give you your screen space from their string type. You query the font API for that as that depends on font size, kerning, style and face. And again, I agree you need to know how unicode works. I don't argue that at all. I'm just saying that having the default be incorrect for application logic is just silly and when people have to do things like string.representation.normalize.byGrapheme or whatever
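For comparison, the counts from the Swift snippet can be reproduced in D by choosing the view explicitly: std.uni.byGrapheme for characters, walkLength over the auto-decoded string for scalars, and .length for UTF-8 code units.

```d
import std.range : walkLength;
import std.uni : byGrapheme;

void main()
{
    // 'a' + combining acute + combining circumflex: one visible character
    string a = "a\u0301\u0302";

    assert(a.byGrapheme.walkLength == 1); // Swift: a.count
    assert(a.walkLength == 3);            // Swift: a.unicodeScalars.count
    assert(a.length == 5);                // Swift: a.utf8.count
}
```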
Re: Bug with writeln?
On Thursday, September 6, 2018 12:21:24 PM MDT Steven Schveighoffer via Digitalmars-d-learn wrote: > On 9/6/18 12:55 PM, Jonathan M Davis wrote: > > On Thursday, September 6, 2018 2:40:08 AM MDT Saurabh Das via > > Digitalmars-d-> > > learn wrote: > >> Is this a bug with writeln? > >> > >> void main() > >> { > >> > >> import std.stdio, std.range, std.algorithm; > >> > >> auto a1 = sort([1,3,5,4,2]); > >> auto a2 = sort([9,8,9]); > >> auto a3 = sort([5,4,5,4]); > >> > >> pragma(msg, typeof(a1)); > >> pragma(msg, typeof(a2)); > >> pragma(msg, typeof(a3)); > >> > >> auto b = [a1, a2, a3]; > >> pragma(msg, typeof(b)); > >> > >> writeln("b:"); > >> writeln(b); > >> writeln(b); // <-- this one prints incorrectly > >> > >> writeln("a:"); > >> writeln(a1); > >> writeln(a2); > >> writeln(a3); > >> > >> } > >> > >> Output > >> == > >> > >> SortedRange!(int[], "a < b") > >> SortedRange!(int[], "a < b") > >> SortedRange!(int[], "a < b") > >> SortedRange!(int[], "a < b")[] > >> b: > >> [[1, 2, 3, 4, 5], [8, 9, 9], [4, 4, 5, 5]] > >> [[], [], []] > >> a: > >> [1, 2, 3, 4, 5] > >> [8, 9, 9] > >> [4, 4, 5, 5] > >> > >> The issue goes away if I cast 'b' to const before writeln. I > >> think it is a bug, but maybe I am missing something? > > > > It's not a bug in writeln. Any time that a range is copied, you must not > > do _anything_ else with the original unless copying it is equivalent to > > calling save on it, because the semantics of copying a range are > > unspecified. They vary wildly depending on the range type (e.g. copying > > a dynamic array is equivalent to calling save, but copying a class > > reference is not). When you pass the range to writeln, you must assumed > > that it may have been consumed. And since you have range of ranges, you > > must assume that the ranges that are contained may have been consumed. 
> > If you want to pass them to writeln and then do anything else with > > them, then you'll need to call save on every range involved (which is a > > bit of a pain with a range of ranges, but it's necessary all the same). > > This is not necessarily true. It depends how the sub-ranges are returned. > > The bug is that formattedWrite takes ranges sometimes by ref, sometimes > not. > > formattedWrite should call save on a forward range whenever it makes a > copy, and it doesn't. > > Case in point, it doesn't matter if you call writeln(b.save), the same > thing happens. That's still not a bug in formattedWrite. save only duplicates the outer-most range. And since writeln will ultimately iterate through the inner ranges - which weren't saved - you end up with them being consumed. When you're passing a range of ranges to a function, you need to recursively save them if you don't want the inner ranges in the original range to be consumed. Regardless of what formattedWrite does, it's a general issue with any function that you pass a range of ranges. It comes right back to the same issue of the semantics of copying ranges being unspecified and that you therefore must always use save on any ranges involved if you want to then use those ranges after having passed them to a function or copy them doing anything else. It's that much more annoying when you're dealing with a range of ranges rather than a range of something else, but the issue is the same. - Jonathan M Davis
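The "save every range involved" advice can be written compactly with std.algorithm.map so that each inner range is saved as it is read. This is a sketch of the workaround from the caller's side, not a fix for formattedWrite itself:

```d
import std.algorithm.iteration : map;
import std.algorithm.sorting : sort;
import std.format : format;

void main()
{
    auto a1 = sort([1, 3, 2]);
    auto a2 = sort([9, 8]);
    auto b = [a1, a2];

    // Saving only the outer range (b.save) is not enough: the copied
    // array still aliases the same inner ranges. Saving each inner
    // range as it is read leaves the originals untouched.
    auto once  = format("%s", b.map!(r => r.save));
    auto twice = format("%s", b.map!(r => r.save));

    assert(once == "[[1, 2, 3], [8, 9]]");
    assert(once == twice); // repeatable: the inner ranges were not consumed
}
```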
[Issue 19182] missing semicolon crashes compiler
https://issues.dlang.org/show_bug.cgi?id=19182 github-bugzi...@puremagic.com changed: Status: NEW → RESOLVED; Resolution: --- → FIXED --
[Issue 19182] missing semicolon crashes compiler
https://issues.dlang.org/show_bug.cgi?id=19182 --- Comment #2 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/1b981b65641d44e8a8c81de798cc2e15e5116871 Fix Issue 19182 - missing semicolon crashes compiler https://github.com/dlang/dmd/commit/7bf8b689a6cdbe6ca4d82e97a970c4a4b370ef21 Merge pull request #8598 from RazvanN7/Issue_19182 Fix Issue 19182 - missing semicolon crashes compiler merged-on-behalf-of: Petar Kirov --
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 18:20:05 UTC, Bastiaan Veelo wrote: On Wednesday, 5 September 2018 at 05:44:38 UTC, H. S. Teoh wrote: To me, this strongly suggests the following idea: - add *all* dlang.org packages to our current autotester / CI infrastructure. - if a particular (version of a) package builds successfully, log the compiler version / git hash / package version to a database and add a note to dlang.org that this package built successfully with this compiler version. - if a particular (version of a) package fails to build for whatever reason, log the failure and have a bot add a note to dlang.org that this package does NOT build with that compiler version. - possibly add the package to a blacklist for this compiler version so that we don't consume too many resources on outdated packages that no longer build. - periodically update dlang.org (by bot) to indicate the last known compiler version that successfully built this package. - in the search results, give preference to packages that built successfully with the latest official release. Yes please! Ah, but would you actually pay for such a service to be set up? https://forum.dlang.org/thread/acxedxzzesxkyomrs...@forum.dlang.org It's all well and good to hope for such services, but they're unlikely to happen unless paid for.
Re: file io
On 9/6/18 2:30 PM, Steven Schveighoffer wrote: On 9/6/18 1:07 PM, rikki cattermole wrote: On 07/09/2018 4:17 AM, Arun Chandrasekaran wrote: On Thursday, 6 September 2018 at 16:13:42 UTC, hridyansh thakur wrote: how to read a file line by line in D std.stdio.File.byLine() Refer the doc here: https://dlang.org/library/std/stdio/file.by_line.html An example from the doc: ``` import std.algorithm, std.stdio, std.string; // Count words in a file using ranges. void main() { auto file = File("file.txt"); // Open for reading const wordCount = file.byLine() // Read lines .map!split // Split into words .map!(a => a.length) // Count words per line .sum(); // Total word count writeln(wordCount); } ``` Ranges will be far too advanced of a topic to bring up at this stage. So something a little more conventional might be a better option: --- import std.file : readText; import std.array : split; import std.string : strip; string text = readText("file.txt"); string[] onlyWords = text.split(" "); uint countWords; foreach(ref word; onlyWords) { word = word.strip(); if (word.length > 0) countWords++; } --- Ugh, don't do that, it will read the unknown-length file into RAM all at once. foreach(word; File("file.txt").byLine) { word = word.strip(); if(word.length > 0) countWords++; } That will buffer one line at a time and achieve the same results. ugh, I didn't think this through. This works: foreach(line; File("file.txt").byLine) { foreach(word; line.split(" ")) { word = word.strip(); if(word.length > 0) countWords++; } } -Steve
Re: file io
On 9/6/18 1:07 PM, rikki cattermole wrote: On 07/09/2018 4:17 AM, Arun Chandrasekaran wrote: On Thursday, 6 September 2018 at 16:13:42 UTC, hridyansh thakur wrote: how to read a file line by line in D std.stdio.File.byLine() Refer the doc here: https://dlang.org/library/std/stdio/file.by_line.html An example from the doc: ``` import std.algorithm, std.stdio, std.string; // Count words in a file using ranges. void main() { auto file = File("file.txt"); // Open for reading const wordCount = file.byLine() // Read lines .map!split // Split into words .map!(a => a.length) // Count words per line .sum(); // Total word count writeln(wordCount); } ``` Ranges will be far too advanced of a topic to bring up at this stage. So something a little more conventional might be a better option: --- import std.file : readText; import std.array : split; import std.string : strip; string text = readText("file.txt"); string[] onlyWords = text.split(" "); uint countWords; foreach(ref word; onlyWords) { word = word.strip(); if (word.length > 0) countWords++; } --- Ugh, don't do that, it will read the unknown-length file into RAM all at once. foreach(word; File("file.txt").byLine) { word = word.strip(); if(word.length > 0) countWords++; } That will buffer one line at a time and achieve the same results. -Steve
Re: This is why I don't use D.
On Wednesday, 5 September 2018 at 05:44:38 UTC, H. S. Teoh wrote: To me, this strongly suggests the following idea: - add *all* dlang.org packages to our current autotester / CI infrastructure. - if a particular (version of a) package builds successfully, log the compiler version / git hash / package version to a database and add a note to dlang.org that this package built successfully with this compiler version. - if a particular (version of a) package fails to build for whatever reason, log the failure and have a bot add a note to dlang.org that this package does NOT build with that compiler version. - possibly add the package to a blacklist for this compiler version so that we don't consume too many resources on outdated packages that no longer build. - periodically update dlang.org (by bot) to indicate the last known compiler version that successfully built this package. - in the search results, give preference to packages that built successfully with the latest official release. Yes please!
Re: Bug with writeln?
On 9/6/18 12:55 PM, Jonathan M Davis wrote: On Thursday, September 6, 2018 2:40:08 AM MDT Saurabh Das via Digitalmars-d-learn wrote: Is this a bug with writeln? void main() { import std.stdio, std.range, std.algorithm; auto a1 = sort([1,3,5,4,2]); auto a2 = sort([9,8,9]); auto a3 = sort([5,4,5,4]); pragma(msg, typeof(a1)); pragma(msg, typeof(a2)); pragma(msg, typeof(a3)); auto b = [a1, a2, a3]; pragma(msg, typeof(b)); writeln("b:"); writeln(b); writeln(b); // <-- this one prints incorrectly writeln("a:"); writeln(a1); writeln(a2); writeln(a3); } Output == SortedRange!(int[], "a < b") SortedRange!(int[], "a < b") SortedRange!(int[], "a < b") SortedRange!(int[], "a < b")[] b: [[1, 2, 3, 4, 5], [8, 9, 9], [4, 4, 5, 5]] [[], [], []] a: [1, 2, 3, 4, 5] [8, 9, 9] [4, 4, 5, 5] The issue goes away if I cast 'b' to const before writeln. I think it is a bug, but maybe I am missing something? It's not a bug in writeln. Any time that a range is copied, you must not do _anything_ else with the original unless copying it is equivalent to calling save on it, because the semantics of copying a range are unspecified. They vary wildly depending on the range type (e.g. copying a dynamic array is equivalent to calling save, but copying a class reference is not). When you pass the range to writeln, you must assume that it may have been consumed. And since you have a range of ranges, you must assume that the ranges that are contained may have been consumed. If you want to pass them to writeln and then do anything else with them, then you'll need to call save on every range involved (which is a bit of a pain with a range of ranges, but it's necessary all the same). This is not necessarily true. It depends how the sub-ranges are returned. The bug is that formattedWrite takes ranges sometimes by ref, sometimes not. formattedWrite should call save on a forward range whenever it makes a copy, and it doesn't.
Case in point, it doesn't matter if you call writeln(b.save), the same thing happens. -Steve
[Issue 19231] Infinite loop in exception chains
https://issues.dlang.org/show_bug.cgi?id=19231 Steven Schveighoffer changed: CC: added schvei...@yahoo.com --- Comment #1 from Steven Schveighoffer --- There are two options other than just adding a limit. First, we could use the tortoise-and-hare algorithm to detect the cycle. Second, we could mark each exception somehow as it is printed, and then unmark them after the printing algorithm is over. --
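A sketch of the first option: Floyd's tortoise-and-hare over the Throwable.next chain detects a cycle in O(n) time and O(1) space, without marking the exceptions or imposing an arbitrary limit (the helper name hasCycle is illustrative):

```d
// Floyd's cycle detection over a chain of Throwable.next links
bool hasCycle(Throwable head)
{
    Throwable slow = head, fast = head;
    while (fast !is null && fast.next !is null)
    {
        slow = slow.next;      // advances one link per step
        fast = fast.next.next; // advances two links per step
        if (slow is fast)
            return true;       // the pointers can only meet on a cycle
    }
    return false; // fast ran off the end: the chain is finite
}

void main()
{
    auto a = new Exception("a");
    auto b = new Exception("b");
    a.next = b;
    assert(!hasCycle(a));

    b.next = a; // deliberately create a cycle
    assert(hasCycle(a));
}
```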
Re: Slicing betterC
On Thursday, September 6, 2018 11:34:18 AM MDT Adam D. Ruppe via Digitalmars-d-learn wrote: > On Thursday, 6 September 2018 at 17:10:49 UTC, Oleksii wrote: > > struct Slice(T) { > > > > size_t capacity; > > size_t size; > > T* memory; > > > > } > > There's no capacity in the slice, that is stored as part of the > GC block, which it looks up with the help of RTTI, thus the > TypeInfo reference. > > Slices *just* know their size and their memory pointer. They > don't know how they were allocated and don't know what's beyond > their bounds or how to grow their bounds. This needs to be > managed elsewhere. > > If you malloc a slice in regular D, the capacity will be returned > as 0 - the GC doesn't know anything about it. Any attempt to > append to it will allocate a whole new block. > > In -betterC, there is no GC to look up at all, and thus it has > nowhere to look. You'll have to make your own struct that stores > capacity if you need it. > > I like to do something like > > struct MyArray { >T* rawPointer; >int capacity; >int currentLength; > >// most user interaction will occur through this >T[] opSlice() { return rawPointer[0 .. currentLength]; } > >// fill in other operators as needed > } To try to make this even clearer, a dynamic array looks basically like this underneath the hood struct DynamicArray(T) { size_t length; T* ptr; } IIRC, it actually uses void* unfortunately, but that struct is basically what you get. Notice that _all_ of the information that's there is the pointer and the length. That's it. If you understand the semantics of what happens when passing that struct around, you'll understand the semantics of passing around dynamic arrays. And all of the operations that would have anything to do with memory management involve the GC - capacity, ~, ~=, etc. all require the GC. 
If you're not using -betterC, the fact that the dynamic array was allocated with malloc is pretty irrelevant, since all of those operations will function exactly the same as if the dynamic array were allocated by the GC. It's just that because the dynamic array is not GC-allocated, it's guaranteed that the capacity is 0, and therefore any operations that would increase the arrays length then require reallocating the dynamic array with the GC, whereas if it were already GC-allocated, then its capacity might have been greater than its length, in which case, reallocation would not be required. If you haven't read it already, I would suggest reading this article: https://dlang.org/articles/d-array-article.html It does not use the official terminology, but in spite of that, it should really help clarify things for you. The article refers to T[] as being a slice (which is accurate, since it is a slice of memory), but it incorrectly refers to the memory buffer itself as being the dynamic array, whereas the language spec considers the T[] (the struct shown above) to be the dynamic array. The language does not have a specific name for that memory buffer, and it considers a T[] to be dynamic array regardless of what memory it refers to. So, you should keep that in mind when reading the article, but the concepts that it teaches are very much correct and should help a great deal in understanding how dynamic arrays work in D. - Jonathan M Davis
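The capacity behavior described above can be observed directly (in non-betterC code, since .capacity and ~= need the GC runtime):

```d
import core.stdc.stdlib : free, malloc;

void main()
{
    // GC-allocated: the runtime may track spare room in the block
    int[] gcArr = new int[](4);
    assert(gcArr.capacity >= 4);

    // malloc'd memory sliced by hand: the GC knows nothing about it
    auto p = cast(int*) malloc(4 * int.sizeof);
    foreach (i; 0 .. 4)
        p[i] = cast(int) i;
    int[] cArr = p[0 .. 4];
    assert(cArr.capacity == 0); // always 0 for non-GC memory

    // Appending therefore reallocates: cArr now points at GC memory,
    // leaving the original malloc'd block behind
    cArr ~= 4;
    assert(cArr == [0, 1, 2, 3, 4]);
    assert(cArr.ptr !is p);

    free(p); // the malloc'd block still has to be freed manually
}
```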
Re: DIP Draft Reviews
On Thursday, September 6, 2018 4:49:55 AM MDT Mike Parker via Digitalmars-d- announce wrote: > On Thursday, 6 September 2018 at 10:22:47 UTC, Nicholas Wilson > > wrote: > > Put it this way: DIP1017 should not go to formal without > > change, as it did from draft to community (which I don't think > > should have happened without at least some acknowledgement or > > refutation of the points raised in draft). > > I always ask DIP authors about unaddressed feedback before moving > from one stage to the other, and I did so with DIP 1017 when > moving out of Draft Review. It's entirely up to the author > whether or not to address it and there is no requirement for DIP > authors to respond to any feedback. I would prefer it if they > did, especially in the Post-Community stage and later as it helps > me with my review summaries, but 1017 is not the first DIP where > feedback went unaddressed and I'm sure it won't be the last. Of course, what further complicates things here is that the author is Walter, and ultimately, it's Walter and Andrei who make the decision on their own. And if Walter doesn't respond to any of the feedback or address it in the DIP, it all comes across as if the DIP itself is just a formality. The fact that he wrote a DIP and presented it for feedback is definitely better than him simply implementing it, since it does give him the chance to get feedback on the plan and improve upon it, but if he then doesn't change anything or even respond to any of the review comments, then it makes it seem kind of pointless that he bothered with a DIP. At that point, it just serves as documentation of his intentions. 
This is all in stark contrast to the case where someone other than Walter or Andrei wrote the DIP, and the author doesn't bother to even respond to the feedback let alone incorporate it, since they then at least still have to get the DIP past Walter and Andrei, and if the DIP has not taken any of the feedback into account, then presumably, it stands a much worse chance of making it through. On the other hand, if the DIP comes from Walter or Andrei, they only have the other person to convince, and that makes it at least seem like there's a decent chance that it's just going to be rubber-stamped when the DIP author doesn't even respond to feedback. I think that it's great for Walter and Andrei to need to put big changes through the DIP process just like the rest of us do, but given that they're the only ones deciding what's accepted, it makes the whole thing rather weird when a DIP comes from them. - Jonathan M Davis
Re: Messing with betterC and string type.
On Thursday, 6 September 2018 at 17:09:34 UTC, SrMordred wrote: Yes, the true problem arises with operations like concat "~" that call some internal function to do that with strings. Only if it is string ~ string. If it is your type, that's where opBinary and opBinaryRight come in. YourString ~ built_in_string = YourString.opBinary(string op : "~")(immutable(char)[] rhs); built_in_string ~ YourString = YourString.opBinaryRight(string op : "~")(immutable(char)[] lhs); so you can make it work. Though btw I would actually suggest leaving concat unimplemented... it is so hard to manage the memory for it without the GC. IMO better off just appending to an existing thing; do ~= instead of ~. yep, this seems a real problem. ;/ yeah we have no implicit ctors :(
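A minimal sketch of the opBinary/opBinaryRight dispatch described above, using a GC-backed struct for brevity (a real betterC string type would manage its own memory instead of wrapping a string):

```d
struct MyString
{
    string data; // backing storage; a betterC version would replace this

    // Handles: MyString ~ string
    MyString opBinary(string op : "~")(string rhs) const
    {
        return MyString(data ~ rhs);
    }

    // Handles: string ~ MyString
    MyString opBinaryRight(string op : "~")(string lhs) const
    {
        return MyString(lhs ~ data);
    }
}

void main()
{
    auto s = MyString("bc");
    assert((s ~ "d").data == "bcd"); // dispatches to opBinary
    assert(("a" ~ s).data == "abc"); // dispatches to opBinaryRight
}
```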
Re: Slicing betterC
On Thursday, 6 September 2018 at 17:10:49 UTC, Oleksii wrote: struct Slice(T) { size_t capacity; size_t size; T* memory; } There's no capacity in the slice; that is stored as part of the GC block, which it looks up with the help of RTTI, thus the TypeInfo reference. Slices *just* know their size and their memory pointer. They don't know how they were allocated and don't know what's beyond their bounds or how to grow their bounds. This needs to be managed elsewhere. If you malloc a slice in regular D, the capacity will be returned as 0 - the GC doesn't know anything about it. Any attempt to append to it will allocate a whole new block. In -betterC, there is no GC to look up at all, and thus it has nowhere to look. You'll have to make your own struct that stores capacity if you need it. I like to do something like
```
struct MyArray(T) {
    T* rawPointer;
    int capacity;
    int currentLength;

    // most user interaction will occur through this
    T[] opSlice() { return rawPointer[0 .. currentLength]; }

    // fill in other operators as needed
}
```
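A rough -betterC-compatible sketch of that idea (all names hypothetical; no growth logic, and malloc error handling omitted):

```d
import core.stdc.stdlib : malloc, free;

struct Buf(T)
{
    T* ptr;
    size_t capacity;
    size_t length;

    // Slicing needs no TypeInfo: a slice is just pointer + length.
    T[] opSlice() { return ptr[0 .. length]; }

    bool push(T value)
    {
        if (length >= capacity) return false; // full; this sketch does not grow
        ptr[length++] = value;
        return true;
    }
}

extern(C) int main()
{
    Buf!int b;
    b.capacity = 4;
    b.ptr = cast(int*) malloc(b.capacity * int.sizeof);
    b.push(1);
    b.push(2);
    int[] s = b[]; // fine in -betterC: no capacity lookup, no GC
    free(b.ptr);
    return 0;
}
```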
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, September 6, 2018 10:44:11 AM MDT H. S. Teoh via Digitalmars-d wrote: > On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: > > On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > > > // D > > > auto a = "á"; > > > auto b = "á"; > > > auto c = "\u200B"; > > > auto x = a ~ c ~ a; > > > auto y = b ~ c ~ b; > > > > > > writeln(a.length); // 2 wtf > > > writeln(b.length); // 3 wtf > > > writeln(x.length); // 7 wtf > > > writeln(y.length); // 9 wtf > > [...] > > This is an unfair comparison. In the Swift version you used .count, but > here you used .length, which is the length of the array, NOT the number > of characters or whatever you expect it to be. You should rather use > .count and specify exactly what you want to count, e.g., byCodePoint or > byGrapheme. > > I suspect the Swift version will give you unexpected results if you did > something like compare "á" to "a\u301", for example (which, in case it > isn't obvious, are visually identical to each other, and as far as an > end user is concerned, should only count as 1 grapheme). > > Not even normalization will help you if you have a string like > "a\u301\u302": in that case, the *only* correct way to count the number > of visual characters is byGrapheme, and I highly doubt Swift's .count > will give you the correct answer in that case. (I expect that Swift's > .count will count code points, as is the usual default in many > languages, which is unfortunately wrong when you're thinking about > visual characters, which are called graphemes in Unicode parlance.) > > And even in your given example, what should .count return when there's a > zero-width character? If you're counting the number of visual places > taken by the string (e.g., you're trying to align output in a > fixed-width terminal), then *both* versions of your code are wrong, > because zero-width characters do not occupy any space when displayed. 
If > you're counting the number of code points, though, e.g., to allocate the > right buffer size to convert to dstring, then you want to count the > zero-width character as 1 rather than 0. And that's not to mention > double-width characters, which should count as 2 if you're outputting to > a fixed-width terminal. > > Again I say, you need to know how Unicode works. Otherwise you can > easily deceive yourself to think that your code (both in D and in Swift > and in any other language) is correct, when in fact it will fail > miserably when it receives input that you didn't think of. Unicode is > NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between > "characters" and display length. Or 1-to-1 mapping between any of the > various concepts of string "length", in fact. > > In ASCII, array length == number of code points == number of graphemes > == display width. > > In Unicode, array length != number of code points != number of graphemes > != display width. > > Code written by anyone who does not understand this is WRONG, because > you will inevitably end up using the wrong value for the wrong thing: > e.g., array length for number of code points, or number of code points > for display length. Not even .byGrapheme will save you here; you *need* > to understand that zero-width and double-width characters exist, and > what they imply for display width. You *need* to understand the > difference between code points and graphemes. There is no single > default that will work in every case, because there are DIFFERENT > CORRECT ANSWERS depending on what your code is trying to accomplish. > Pretending that you can just brush all this detail under the rug of a > single number is just deceiving yourself, and will inevitably result in > wrong code that will fail to handle Unicode input correctly. Indeed. 
And unfortunately, the net result is that a large percentage of the string-processing code out there is going to be wrong, and I don't think that there's any way around that, because Unicode is simply too complicated for the average programmer to understand it (sad as that may be) - especially when most of them don't want to have to understand it. Really, I'd say that there are only three options that even might be sane if you really have the flexibility to design a proper solution: 1. Treat strings as ranges of code units by default. 2. Don't allow strings to be ranges, to be iterated, or indexed. They're opaque types. 3. Treat strings as ranges of graphemes. If strings are treated as ranges of code units by default (particularly if they're UTF-8), you'll get failures very quickly if you're dealing with non-ASCII, and you screw up the Unicode handling. It's also by far the most performant solution and in many cases is exactly the right thing to do. Obviously, something like byCodePoint or byGrapheme would then be needed in the cases where code points or graphemes are the appropriate level to iterate at. If strings are opaque types (with ways to get ranges over code units, code points, etc.),
Re: Slicing betterC
On Thursday, 6 September 2018 at 17:10:49 UTC, Oleksii wrote: allocatedFoo = foos[0 .. $ + 1];// <= Error: TypeInfo This line was meant to be `allocatedFoo = foos[$]`. Sorry about that.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 16:44:11 UTC, H. S. Teoh wrote: On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > // D > auto a = "á"; > auto b = "á"; > auto c = "\u200B"; > auto x = a ~ c ~ a; > auto y = b ~ c ~ b; > > writeln(a.length); // 2 wtf > writeln(b.length); // 3 wtf > writeln(x.length); // 7 wtf > writeln(y.length); // 9 wtf [...] This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme. I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme). Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.) No, Swift counts grapheme clusters by default, so it gives 1. I suggest you read the linked Swift chapter above. I think it's the wrong choice for performance, but they chose to emphasize intuitiveness for the common case. I agree with most of the rest of what you wrote about programmers having no silver bullet to avoid Unicode's and languages' complexity.
Re: Java also has chained exceptions, done manually
On Thursday, 6 September 2018 at 14:39:12 UTC, Andrei Alexandrescu wrote: First off, there's no tree of exceptions simply because... well it's not there. There is on field "next", not two fields "left" and "right". It's a linear list, not a tree. During construction there might be the situation whereby two lists need to be merged. But they will be merged by necessity into a singly-linked list, not a tree, because we have no structural representation of a tree. The runtime appends any exception raised by a `scope (exit|failure)` statement to the next list of an exception already being raised. So if I write: scope (exit) throw new Exception("scope 1"); scope (exit) throw new Exception("scope 2"); throw new Exception("primary"); I get output like: object.Exception@scratch.d(21): primary ??:? void scratch.throwy() [0x53a74c01] ??:? _Dmain [0x53a74d68] object.Exception@scratch.d(26): scope 2 ??:? void scratch.throwy() [0x53a74ce1] ??:? _Dmain [0x53a74d68] object.Exception@scratch.d(25): scope 1 ??:? void scratch.throwy() [0x53a74d42] ??:? _Dmain [0x53a74d68] Okay, that seems reasonable. But what if each of those `scope(exit)` statements had their own chains? scope (exit) throw new Exception("scope 1", new Exception("cause 1")); scope (exit) throw new Exception("scope 2", new Exception("cause 2")); throw new Exception("primary"); object.Exception@scratch.d(8): primary ??:? void scratch.throwy() [0x7259caf3] ??:? _Dmain [0x7259cc64] object.Exception@scratch.d(7): scope 2 ??:? void scratch.throwy() [0x7259cba5] ??:? _Dmain [0x7259cc64] object.Exception@scratch.d(7): cause 2 object.Exception@scratch.d(6): scope 1 ??:? void scratch.throwy() [0x7259cc4c] ??:? _Dmain [0x7259cc64] object.Exception@scratch.d(6): cause 1 The actual structure of the exceptions: `primary` has children `scope 2` and `scope 1`; `scope 2` has child `cause 2`; `scope 1` has child `cause 1`. A tree. 
The encoded structure: a linked list where only the first two positions have any structure-related meaning and the rest are just a sort of mish-mash. This isn't a situation you get in Java because Java doesn't have a way to enqueue multiple independent actions at the end of the same block. You just have try/finally and try(closeable). (As an aside, it does seem we could allow some weird cases where people rethrow some exception down the chain, thus creating loops. Hopefully that's handled properly.) Not if you semi-manually create the loop: auto e = new Exception("root"); scope (exit) throw new Exception("scope 1", e); throw e; Filed as https://issues.dlang.org/show_bug.cgi?id=19231
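Since the chain is just a singly linked list threaded through `Throwable.next`, it can be walked directly; a small sketch (built by hand with the `Exception(msg, nextInChain)` constructor overload rather than via `scope (exit)`):

```d
import std.stdio;

void main()
{
    // Build a chain by hand: primary -> scope 2 -> cause 2.
    auto chain = new Exception("primary",
        new Exception("scope 2",
            new Exception("cause 2")));

    // Walk the list: strictly linear, no tree structure to recover.
    for (Throwable t = chain; t !is null; t = t.next)
        writeln(t.msg);
}
```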
Slicing betterC
Hi folks, Could you please share your wisdom with me? I wonder why the following code:
```
import core.stdc.stdlib;

Foo[] pool;
Foo[] foos;

auto buff = cast(Foo*) malloc(Foo.sizeof * 10);
pool = buff[0 .. 10];
foos = pool[0 .. 0];

// Now let's allocate a Foo:
Foo* allocatedFoo;
if (foos.length < foos.capacity) {  // <= Error: TypeInfo cannot be used with -betterC
    allocatedFoo = foos[0 .. $ + 1];// <= Error: TypeInfo cannot be used with -betterC
}
```
fails to compile because of `foos.capacity` and `foos[0 .. $ + 1]`. Why do these two innocent-looking expressions require TypeInfo? Aren't slices basically fat pointers with internal structure that looks like this:
```
struct Slice(T) { size_t capacity; size_t size; T* memory; }
```
? It's weird that `TypeInfo` (being a run-time and reflection specific thing) is required in this particular case. Shouldn't static type checking be enough for all that? Thanks in advance, -- Oleksii
[Issue 19231] New: Infinite loop in exception chains
https://issues.dlang.org/show_bug.cgi?id=19231 Issue ID: 19231 Summary: Infinite loop in exception chains Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: minor Priority: P1 Component: druntime Assignee: nob...@puremagic.com Reporter: dhase...@gmail.com
```
void throwy()
{
    auto e = new Exception("root");
    scope (exit) throw new Exception("scope 1", e);
    throw e;
}

void main()
{
    throwy();
}
```
This results in an infinite loop in druntime. There are only two exceptions, but because they form a loop instead of a proper linked list, druntime keeps on printing them. Perhaps we should define a reasonable limit on the number of exceptions we print before quitting out. 10 should probably be plenty. --
Re: Messing with betterC and string type.
On Thursday, 6 September 2018 at 16:50:01 UTC, Adam D. Ruppe wrote: this(object.string x) {} Yep, this works. which will work - immutable(char)[] is what object.string actually is (and the compiler will often use that - immutable(char)[], the proper name - and string, the user-friendly name, totally interchangably). Yes, the true problem arrives on the operations like concat "~" that call some internal function to do that with strings. I can hijack the string identifier, but i can´t replace the concat operator right? (Well, i already tried the module object; trick, but didn´t go much far with that path.) void foo(string s) {} foo("this"); won't compile, since it won't make a String out of that immutable(char)[] literal without an explicit initialization of some sort. //cannot pass argument "this" of type string to parameter String s iep, this seems a real problem. ;/
Re: file io
On 07/09/2018 4:17 AM, Arun Chandrasekaran wrote: On Thursday, 6 September 2018 at 16:13:42 UTC, hridyansh thakur wrote: how to read a file line by line in D std.stdio.File.byLine() Refer to the doc here: https://dlang.org/library/std/stdio/file.by_line.html An example from the doc:
```
import std.algorithm, std.stdio, std.string;

// Count words in a file using ranges.
void main()
{
    auto file = File("file.txt");                    // Open for reading
    const wordCount = file.byLine()                  // Read lines
                          .map!split                 // Split into words
                          .map!(a => a.length)       // Count words per line
                          .sum();                    // Total word count
    writeln(wordCount);
}
```
Ranges will be far too advanced of a topic to bring up at this stage. So something a little more conventional might be a better option:
---
import std.file : readText;
import std.array : split;
import std.string : strip;

string text = readText("file.txt");
string[] onlyWords = text.split; // no argument: split on any whitespace
uint countWords;
foreach(ref word; onlyWords) {
    word = word.strip();
    if (word.length > 0)
        countWords++;
}
---
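For the literal "line by line" question above, a minimal sketch (assumes a readable file.txt in the working directory):

```d
import std.stdio;

void main()
{
    auto file = File("file.txt"); // open for reading
    // byLine reuses its char[] buffer between iterations;
    // call .idup on a line if you need to keep it around.
    foreach (line; file.byLine())
        writeln(line);
}
```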
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 16:27:38 UTC, Everlast wrote: You totally missed the point. The point with 1 package only was to demonstrate how easy it is to maintain and that it theoretically would have the long longevity. When one has an infinite number of packages then every package(or almost everyone) would rot very quickly. I didn't say there should actually only be one package, as that is absurd. What I said was that because D has no organizational structure to focus work on a fewer number of better maintained and designed packages there is a ton of shit out there for D that doesn't work but there is no way of knowing and not always an easy fix because it takes time to understand the code or it is simply defunct. This is not rocket science... I know that you wanted to talk about easier maintenance. But what exactly is that D organization going to do ? Prevent people from making more packages and force them to work on theirs instead ? No matter if there is an organization, there will always be unofficial packages going stale. You sound like it's going to be a magical solution.
Re: linking trouble
On 07/09/2018 4:03 AM, hridyansh thakur wrote: i am on windows, i have tried DMD and LDC and i am getting the same linking error when linking my c++ object. i am doing it by the official tutorial (dlang spec book). here is my app.d code
```
import std.stdio;

void main()
{
    //writeln("Edit source/app.d to start your project.");
    int[] m = someFUN(44,55);
    ulong k = m.length;
    for (int i=0;i
```
That definition isn't complete. Missing at the very least ``();`` to make it a function declaration. here is the C++ code
```
#include <iostream>
#include <cstdio>
#include <cstdlib>

FILE *fp;

int* file_io() {
    char name[20];
    std::cout << "please enter the file name " << '\n';
    std::cin >> name;
    fp = fopen(name, "r+");
    char a = 'a';
    int n = 0;
    while (!feof(fp)) {
        a = fgetc(fp);
        if (a == '\n') {
            n++;
        }
    }
    int *p = (int*)calloc(n, sizeof(int));
    for (size_t i = 0; i < n; i++) {
        fscanf(fp, "%d", (p+i));
    }
    return p;
}
```
So what are the errors you're getting? And what are the commands you're executing?
Re: Bug with writeln?
On Thursday, September 6, 2018 2:40:08 AM MDT Saurabh Das via Digitalmars-d- learn wrote: > Is this a bug with writeln? > > void main() > { > import std.stdio, std.range, std.algorithm; > > auto a1 = sort([1,3,5,4,2]); > auto a2 = sort([9,8,9]); > auto a3 = sort([5,4,5,4]); > > pragma(msg, typeof(a1)); > pragma(msg, typeof(a2)); > pragma(msg, typeof(a3)); > > auto b = [a1, a2, a3]; > pragma(msg, typeof(b)); > > writeln("b:"); > writeln(b); > writeln(b); // <-- this one prints incorrectly > > writeln("a:"); > writeln(a1); > writeln(a2); > writeln(a3); > > } > > Output > == > > SortedRange!(int[], "a < b") > SortedRange!(int[], "a < b") > SortedRange!(int[], "a < b") > SortedRange!(int[], "a < b")[] > b: > [[1, 2, 3, 4, 5], [8, 9, 9], [4, 4, 5, 5]] > [[], [], []] > a: > [1, 2, 3, 4, 5] > [8, 9, 9] > [4, 4, 5, 5] > > The issue goes away if I cast 'b' to const before writeln. I > think it is a bug, but maybe I am missing something? It's not a bug in writeln. Any time that a range is copied, you must not do _anything_ else with the original unless copying it is equivalent to calling save on it, because the semantics of copying a range are unspecified. They vary wildly depending on the range type (e.g. copying a dynamic array is equivalent to calling save, but copying a class reference is not). When you pass the range to writeln, you must assumed that it may have been consumed. And since you have range of ranges, you must assume that the ranges that are contained may have been consumed. If you want to pass them to writeln and then do anything else with them, then you'll need to call save on every range involved (which is a bit of a pain with a range of ranges, but it's necessary all the same). In many cases, you can get away with passing a range to a function or use it with foreach and then continue to use it after that, but that's only because copying those ranges is equivalent to calling save on them. 
It doesn't work if any of the ranges involved aren't saved when they're copied, and it doesn't work in generic code, because you have no clue whether the ranges that are going to be used with that code are going to be saved when they're copied. - Jonathan M Davis
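Concretely, the safe pattern is to pass `range.save` whenever the original must stay usable afterwards; a short sketch (a `SortedRange` over an array happens to survive copying anyway, so `save` here documents intent and keeps the code correct for any forward range):

```d
import std.stdio, std.algorithm, std.range;

void main()
{
    auto r = sort([1, 3, 5, 4, 2]);

    // Pass a saved copy so writeln cannot consume the original.
    writeln(r.save); // [1, 2, 3, 4, 5]
    writeln(r);      // still [1, 2, 3, 4, 5]
}
```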
Re: Messing with betterC and string type.
On Thursday, 6 September 2018 at 16:24:12 UTC, SrMordred wrote: alias string = String; For the rest of this module, any time you write `string`, the compiler sees `String`. But inside the compiler, it still thinks of its own string, hence the confusing looking error messages. struct String{ this(string x){} } And this is going to be seen as this(String x) {} instead of what you wanted. Try this(object.string x) {} which MIGHT work, or this(immutable(char)[] x) {} which will work - immutable(char)[] is what object.string actually is (and the compiler will often use that - immutable(char)[], the proper name - and string, the user-friendly name, totally interchangably). My question is, i´m breaking something else, or this could be a valid approach? It is valid as long as you keep the names straight. Though it won't be as cool as you think because D doesn't do implicit construction, so void foo(string s) {} foo("this"); won't compile, since it won't make a String out of that immutable(char)[] literal without an explicit initialization of some sort. You could, of course, just do a wrapper function or whatever, but really I think you are better off just trying to work with immutable(char)[]...
Re: This is why I don't use D.
On Thu, Sep 06, 2018 at 04:32:09PM +, Everlast via Digitalmars-d wrote: > On Thursday, 6 September 2018 at 15:28:56 UTC, Patrick Schluter wrote: [...] > > What annoys people is not that there are broken packages in the > > list, but that there is no way to know beforehand if one is choosing > > a reliable package or a hobby experiment gone wrong. This > > uncertainty is grating imo. > > The problem is that google isn't going to help. Most people find > packages by searching google in some way and then follow that rabbit > hole. It would be impossible to know then until it's too late. > > But things such as your suggestions can mitigate the problem. Dub, for > example, could have a list of reliable packages built in(or could have > a master list) that can automatically inform the user about these > issues... rather than the user having to look up on a web page. [...] Again, this strongly suggests the idea I've mentioned a few times now: *all* packages on code.dlang.org needs to be run through a CI tester, and success/failure to compile should be reported back to dlang.org somehow. Then in the search results and in the package's home page, there should be a prominently-displayed notice of which compiler versions work / don't work with the package. This gives users the information they need to make the right decision (e.g., the last known compiler that compiles this package is 2.060, so don't bother, move on.). And this *must* be automated, because nobody has the time or energy to manually test every package against every known compiler release and manually update code.dlang.org. And doing it manually tends to quickly get out of date, not to mention the chance of human error. T -- Ignorance is bliss... until you suffer the consequences!
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: > On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > > // D > > auto a = "á"; > > auto b = "á"; > > auto c = "\u200B"; > > auto x = a ~ c ~ a; > > auto y = b ~ c ~ b; > > > > writeln(a.length); // 2 wtf > > writeln(b.length); // 3 wtf > > writeln(x.length); // 7 wtf > > writeln(y.length); // 9 wtf [...] This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme. I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme). Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.) And even in your given example, what should .count return when there's a zero-width character? If you're counting the number of visual places taken by the string (e.g., you're trying to align output in a fixed-width terminal), then *both* versions of your code are wrong, because zero-width characters do not occupy any space when displayed. If you're counting the number of code points, though, e.g., to allocate the right buffer size to convert to dstring, then you want to count the zero-width character as 1 rather than 0. 
And that's not to mention double-width characters, which should count as 2 if you're outputting to a fixed-width terminal. Again I say, you need to know how Unicode works. Otherwise you can easily deceive yourself to think that your code (both in D and in Swift and in any other language) is correct, when in fact it will fail miserably when it receives input that you didn't think of. Unicode is NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between "characters" and display length. Or 1-to-1 mapping between any of the various concepts of string "length", in fact. In ASCII, array length == number of code points == number of graphemes == display width. In Unicode, array length != number of code points != number of graphemes != display width. Code written by anyone who does not understand this is WRONG, because you will inevitably end up using the wrong value for the wrong thing: e.g., array length for number of code points, or number of code points for display length. Not even .byGrapheme will save you here; you *need* to understand that zero-width and double-width characters exist, and what they imply for display width. You *need* to understand the difference between code points and graphemes. There is no single default that will work in every case, because there are DIFFERENT CORRECT ANSWERS depending on what your code is trying to accomplish. Pretending that you can just brush all this detail under the rug of a single number is just deceiving yourself, and will inevitably result in wrong code that will fail to handle Unicode input correctly. T -- It's amazing how careful choice of punctuation can leave you hanging:
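The different "lengths" being argued about can all be read off one string with Phobos; a sketch using the decomposed form of á ('a' plus combining acute):

```d
import std.stdio;
import std.range : walkLength;
import std.uni : byGrapheme;
import std.utf : byUTF;

void main()
{
    string s = "a\u0301"; // 'a' + combining acute accent, displays as á

    writeln(s.length);                 // 3 - UTF-8 code units (array length)
    writeln(s.byUTF!dchar.walkLength); // 2 - code points
    writeln(s.byGrapheme.walkLength);  // 1 - graphemes (what a user sees)
}
```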
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 15:28:56 UTC, Patrick Schluter wrote: On Thursday, 6 September 2018 at 12:33:21 UTC, Everlast wrote: On Wednesday, 5 September 2018 at 12:32:33 UTC, Andre Pany wrote: On Wednesday, 5 September 2018 at 06:47:00 UTC, Everlast wrote: [...] You showed as a painful issue in our eco system which we can work on, thank you. You do not need to work on this but do you have a proposal for a solution? What would you help (ranking according to last update, ...) Kind regards Andre The problem is that all projects should be maintained. The issue, besides the tooling which can only reduce the problem to manageable levels, is that projects go stale over time. This is obvious! You say though "But we can't maintain every package, it is too much work"... and that is the problem, not that it is too much work but there are too many packages. This is the result of allowing everyone to build their own kitchen sink instead of having some type of common base types. It's sort of like most things now... say cell phone batteries... everyone makes a different one to their liking and so it is a big mess to find replacements after a few years. See, suppose if there were only one package... and everyone maintained it. Then as people leave other people will come in in a continual basis and the package will always be maintained as long as people are using it. This is why D needs organization, which it has none. It needs structure so things work and last and it isn't a continual fight. It's like if someone doesn't take care of their car. Eventually it starts to break down and when they do shitty fixes it only buys them a little time before it breaks down again and again. The issue isn't the fixes nor the car but how they use the car and not maintain it properly. That is, it is their mindsets. Since D seems to be full of people with very little understanding how how to build a proper foundation for organization, D has little chance of surviving. 
As the car breaks down more and more it is just a matter of time before it ends up in the junk heap. It was a great car while it lasted though... That's what I have said elsewhere in the thread. Checking the maintainer of a package, if there's no feedback, put the package out of the main list and put it in a purgatory where it can get stale for itself. If a new maintainer appears for a specific package, it can be reinstated in the approved list when it works again. What annoys people is not that there are broken packages in the list, but that there is no way to know beforehand if one is choosing a reliable package or a hobby experiment gone wrong. This uncertainty is grating imo. The problem is that google isn't going to help. Most people find packages by searching google in some way and then follow that rabbit hole. It would be impossible to know then until it's too late. But things such as your suggestions can mitigate the problem. Dub, for example, could have a list of reliable packages built in(or could have a master list) that can automatically inform the user about these issues... rather than the user having to look up on a web page. The more the compilers and tools do for us the easier our life becomes. The more complex things get the more time it takes. Since most things tend towards complexity it means things should be designed well from the get go before they become mainstream so messes like this do not happen.
Re: Example of using C API from D?
On Mon, 2018-09-03 at 11:41 +1200, rikki cattermole via Digitalmars-d-learn wrote: […] > > You won't need to actually fill out any c struct's that you don't need either. Make them opaque as long as they are referenced via pointer and not by value. True. And indeed Fontconfig can mostly meet the "it's an opaque type based system" but there are a few dark corners – most of which I need! […] > > Ugh, you do know that the linker which does all the hard work doesn't know anything about the signature of the C function? That is the part SharedLib replaces. You will of course define it with a proper signature on D's side with a helpful cast :) Not good enough to be honest, D is a statically typed language and so all usage of functions should be checked against something other than a human guess. Anyway DStep has given me a Fontconfig D module that works nicely, and I will check dpp as an alternative as soon as it builds on Debian Sid. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 13:08:00 UTC, Laurent Tréguier wrote: On Thursday, 6 September 2018 at 12:33:21 UTC, Everlast wrote: The problem is that all projects should be maintained. The issue, besides the tooling which can only reduce the problem to manageable levels, is that projects go stale over time. This is obvious! You say though "But we can't maintain every package, it is too much work"... and that is the problem, not that it is too much work but there are too many packages. This is the result of allowing everyone to build their own kitchen sink instead of having some type of common base types. I doubt having too many packages will be D's downfall. Javascript is a thriving language even if tons of NPM packages are unmaintained (and even if they still run, they potentially have security vulnerabilities due to old dependencies). It's sort of like most things now... say cell phone batteries... everyone makes a different one to their liking and so it is a big mess to find replacements after a few years. See, suppose if there were only one package... and everyone maintained it. Then as people leave other people will come in in a continual basis and the package will always be maintained as long as people are using it. If we could have something as simple as "having the one and only package that fits every use case", we wouldn't have multiple OS's, multiple programming languages, etc. I do agree that having "the one" would make everything easier in theory, but reality isn't theory. You totally missed the point. The point with 1 package only was to demonstrate how easy it is to maintain and that it theoretically would have the long longevity. When one has an infinite number of packages then every package(or almost everyone) would rot very quickly. I didn't say there should actually only be one package, as that is absurd. 
What I said was that because D has no organizational structure to focus work on a smaller number of better maintained and designed packages, there is a ton of shit out there for D that doesn't work, but there is no way of knowing, and not always an easy fix, because it takes time to understand the code or it is simply defunct. This is not rocket science...
[Issue 19229] formattedWrite destructively iterates over forward ranges
https://issues.dlang.org/show_bug.cgi?id=19229 Jonathan M Davis changed: What|Removed |Added CC||issues.dl...@jmdavisprog.co ||m --- Comment #2 from Jonathan M Davis --- In general, if you pass a range by value, then it's copied, and you have to assume that the original is then unusable, because the semantics of copying a range are unspecified and can vary wildly depending on the range type. If you want to pass a range to a function and then continue to use it (including passing it to another function in the same expression), then you need to call save. IMHO, there is no bug here. If you want to do anything with the range after passing it to text, then you need to call save on it when passing it. --
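A minimal sketch of the save idiom Jonathan describes (hypothetical example, not taken from the issue itself):

```d
import std.conv : text;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto r = iota(0, 3);    // a forward range
    auto s = text(r.save);  // hand a saved copy to the consumer...
    writeln(s);             // [0, 1, 2]
    writeln(text(r));       // ...so the original remains usable afterwards
}
```

Without the `save`, generic code must assume `r` has been consumed once it is passed by value, which is exactly the point made in the comment.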
Messing with betterC and string type.
I spend most of my time exploring the betterC lands and was thinking about custom strings, and how to convert D into D-betterC. Hmm, I wonder... struct String{} alias string = String; string x = "test"; //cannot implicitly convert expression "test" of type string to String ok then... struct String{ this(string x){} } //constructor app.String.this(String x) is not callable using argument types (string) Ok, now things got weird. ... this(String x){} //same error. Ok, the string is a String but not a string... what monster have I created? How to tame this chimera? Templates, of course: this(T)(T t){} //compiles!! Now some true utility: string x = "test"; x = x ~ x; //incompatible types for (x) ~ (x): both operands are of type String as expected: ... String opBinary(string op, T)(T other) { return String(); } //incompatible types for (x) ~ (x): both operands are of type String hmmm, what's happening? x.opBinary!"~"(x); //template app.String.opBinary cannot deduce function from argument types !("~")(String), candidates are: //source\app.d(9,12):app.String.opBinary(String op, T)(T other) Oh, it must be the string-is-not-string chaos that I started. String opBinary(alias op, T)(T other) { return String(); } //compiles!! Ok, so then *maybe* I can transform normal D code using 'string' into D-betterC just by aliasing my custom struct (of course, besides all the other problems that may emerge). My question is: am I breaking something else, or could this be a valid approach?
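If I read those errors right, the root cause is that after `alias string = String;` the token `string` in `opBinary(string op, T)(T other)` names the struct, so `op` becomes a runtime parameter of type `String` instead of a compile-time string. A minimal sketch of the working shape (my reconstruction of the poster's setup, not verified on every compiler version):

```d
struct String
{
    this(T)(T t) {}   // templated ctor dodges the string/String clash

    // 'string op' would now mean 'String op', so take the operator
    // name as an alias parameter instead:
    String opBinary(alias op, T)(T other) { return String(); }
}

alias string = String;  // shadows the built-in alias for immutable(char)[]

extern (C) int main()   // betterC-style entry point (assumption)
{
    auto x = String("test"); // literal still has its built-in type, ctor accepts it
    auto y = x ~ x;          // instantiates opBinary!"~"
    return 0;
}
```

Alias template parameters accept literals, which is why the `alias op` form matches where `string op` cannot once the name is shadowed.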
Re: file io
On Thursday, 6 September 2018 at 16:13:42 UTC, hridyansh thakur wrote: how to read a file line by line in D std.stdio.File.byLine() Refer to the doc here: https://dlang.org/library/std/stdio/file.by_line.html An example from the doc:
```
import std.algorithm, std.stdio, std.string;

// Count words in a file using ranges.
void main()
{
    auto file = File("file.txt");      // Open for reading
    const wordCount = file.byLine()    // Read lines
        .map!split                     // Split into words
        .map!(a => a.length)           // Count words per line
        .sum();                        // Total word count
    writeln(wordCount);
}
```
file io
how to read a file line by line in D
linking trouble
i am on windows. i have tried DMD and LDC and i am getting the same linking error when linking my c++ object. i am following the official tutorial (dlang spec book). here is my app.d code:
```
import std.stdio;

void main() {
    //writeln("Edit source/app.d to start your project.");
    int[] m = someFUN(44,55);
    ulong k = m.length;
    for (int i=0;i<k;i++) ...
```
and the C++ file:
```
#include <iostream>
#include <cstdio>

FILE *fp;

int *file_io() {
    char name[20];
    std::cout << "please enter the file name " << '\n';
    std::cin >> name;
    fp = fopen(name,"r+");
    char a = 'a';
    int n = 0;
    while (!feof(fp)) {
        a = fgetc(fp);
        if (a=='\n') {
            n++;
        }
    }
    int *p = (int*)calloc(n,sizeof(int));
    for (size_t i = 0; i < n; i++) {
        fscanf(fp,"%d",(p+i));
    }
    return p;
}
```
Re: Static foreach bug?
On Thursday, September 6, 2018 3:11:14 AM MDT Dechcaudron via Digitalmars-d wrote: > On Wednesday, 5 September 2018 at 11:39:31 UTC, Jonathan M Davis > > wrote: > > Conceptually, what Timon is talking about doing here is to add > > an attribute to symbols declared within a static foreach where > > that attribute indicates that the symbol is temporary (or at > > least scoped to a particular iteration of the loop). So, saying > > that it's "local" as __local would make perfect sense. It's > > local to that iteration of the loop. > > > > And there may very well be other syntaxes which would be > > better, but trying to overload the meaning of static even > > further by using it in this context would risk code breakage > > and would be _very_ confusing for most people. > > You are right, using "static" would be confusing I guess. I'm > just against starting to use __keywords reserved to the compiler > that maybe shouldn't be. I know we already have __gshared, > though. Just what is the criteria to prepend the double > underscore to a keyword? Why not just use an @attribute instead? > @gshared and @ctlocal would fit better in the D style, IMO. __ can be used for any identifier that is reserved by the compiler. There have been identifiers which start with __ since the language began, whereas attributes are a later addition to the language. And regardless of whether @gshared would make sense, __gshared predates attributes, so there's no way that it would be @gshared. However, since attributes are applied to functions, and __gshared is for variables, it really wouldn't make sense to have @gshared, and by that same token, it wouldn't make sense to have @ctlocal. - Jonathan M Davis
[Issue 19185] [ICE] Nested struct segfaults when using variable from outer scope
https://issues.dlang.org/show_bug.cgi?id=19185 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
[Issue 19185] [ICE] Nested struct segfaults when using variable from outer scope
https://issues.dlang.org/show_bug.cgi?id=19185 --- Comment #2 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/9b5a295e8707a17bee8e45e0337935ed568bcff9 Fix Issue 19185 - [ICE] Nested struct segfaults when using variable from outer scope https://github.com/dlang/dmd/commit/88de313bb92b3fb181cc0198b490422b1ab57407 Merge pull request #8597 from RazvanN7/Issue_19185 Fix Issue 19185 - [ICE] Nested struct segfaults when using variable from outer scope merged-on-behalf-of: Razvan Nitu --
Re: Dub support was added to Meson
On Thu, 2018-08-16 at 22:44 +, Filipe Laíns via Digitalmars-d-announce wrote: […] Apologies for the delay in replying to this one. > This is obviously bad. Your distro has a package manager, you > should use it, not create a separated language-specific one. If I'm afraid you are onto a lost cause on this one. The whole JVM-based milieu, Ruby, Python, Go, D, Rust, etc. all have language specific repositories. Debian, Fedora, etc. pick and choose which bits they choose to package based on some algorithm almost, but not quite, totally unrelated to what is the latest version. Operating system package managers are providing the operating system, not the development tools and dependencies needed for software development. Go, Rust, and indeed D, are going the route of static compilation as much because operating system dependencies can never be guaranteed, and are often wrong. It is not clear to me why Debian spend so much time packaging bits of the Go universe that no-one uses even if the dependencies are in fact used. On the other hand, having GtkD (and GStreamerD) packaged is great since there are shared objects for use with D codes. There is nothing quite so depressing as waiting for LDC or DMD to statically link to GtkD. So static linking is not something I want. But waiting for Debian and Fedora to package things is often like Waiting for Godot. Hence "build it yourself" becomes a bit of a must. This is not a simple situation, and every individual's position on it is likely inconsistent and full of holes. > you are doing this locally, either but using the user's home of > by installing to /usr/local, I don't think it's much of a > problem. If you are implementing something like this at least do > it in a way that the package managing feature is optional. I > don't know if I'm being biased by being an Archlinux TU but from > my perspective, it's not something we should do, at the very > least globally.
It may be that Arch stays more up to date than Debian and Fedora (because it is less centralised and more like Homebrew/Linuxbrew), but Debian (and Fedora?) is where the bulk of Linux programs get executed, and so is the obvious place to develop. > Weirdly enough, I can reproduce. This was working when I wrote > the patch. I've opened an issue in the upstream. Excellent. I have a clone of Meson so can try stuff out on master/HEAD as needed. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 12:33:21 UTC, Everlast wrote: On Wednesday, 5 September 2018 at 12:32:33 UTC, Andre Pany wrote: On Wednesday, 5 September 2018 at 06:47:00 UTC, Everlast wrote: [...] You showed as a painful issue in our eco system which we can work on, thank you. You do not need to work on this but do you have a proposal for a solution? What would you help (ranking according to last update, ...) Kind regards Andre The problem is that all projects should be maintained. The issue, besides the tooling which can only reduce the problem to manageable levels, is that projects go stale over time. This is obvious! You say though "But we can't maintain every package, it is too much work"... and that is the problem, not that it is too much work but there are too many packages. This is the result of allowing everyone to build their own kitchen sink instead of having some type of common base types. It's sort of like most things now... say cell phone batteries... everyone makes a different one to their liking and so it is a big mess to find replacements after a few years. See, suppose if there were only one package... and everyone maintained it. Then as people leave other people will come in in a continual basis and the package will always be maintained as long as people are using it. This is why D needs organization, of which it has none. It needs structure so things work and last and it isn't a continual fight. It's like if someone doesn't take care of their car. Eventually it starts to break down and when they do shitty fixes it only buys them a little time before it breaks down again and again. The issue isn't the fixes nor the car but how they use the car and not maintain it properly. That is, it is their mindsets. Since D seems to be full of people with very little understanding of how to build a proper foundation for organization, D has little chance of surviving.
As the car breaks down more and more it is just a matter of time before it ends up in the junk heap. It was a great car while it lasted though... That's what I have said elsewhere in the thread. Checking the maintainer of a package, if there's no feedback, put the package out of the main list and put it in a purgatory where it can get stale for itself. If a new maintainer appears for a specific package, it can be reinstated in the approved list when it works again. What annoys people is not that there are broken packages in the list, but that there is no way to know beforehand if one is choosing a reliable package or a hobby experiment gone wrong. This uncertainty is grating imo.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thu, Sep 6, 2018 at 4:45 PM Dukc via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > > // D > > auto a = "á"; > > auto b = "á"; > > auto c = "\u200B"; > > auto x = a ~ c ~ a; > > auto y = b ~ c ~ b; > > > > writeln(a.length); // 2 wtf > > writeln(b.length); // 3 wtf > > writeln(x.length); // 7 wtf > > writeln(y.length); // 9 wtf > > > > writeln(a == b); // false wtf > > writeln("ááá".canFind("á")); // false wtf > > > > I had to copy-paste that because I wondered how the last two can > be false. They are because á is encoded differently. if you > replace all occurences of it with a grapheme that fits to one > code point, the results are: > > 2 > 2 > 7 > 7 > true > true > import std.stdio; import std.algorithm : canFind; import std.uni : normalize; void main() { auto a = "á".normalize; auto b = "á".normalize; auto c = "\u200B".normalize; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 writeln(b.length); // 2 writeln(x.length); // 7 writeln(y.length); // 7 writeln(a == b); // true writeln("ááá".canFind("á".normalize)); // true }
[Issue 19205] [REG 2.081] Cannot call superclass ctor after end of switch statement
https://issues.dlang.org/show_bug.cgi?id=19205 RazvanN changed: What|Removed |Added CC||razvan.nitu1...@gmail.com --- Comment #2 from RazvanN --- This looks like a duplicate of : https://issues.dlang.org/show_bug.cgi?id=18688 --
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:42:14 UTC, Chris wrote: Usually a sign to move on... You have said that at least 10 times in this very thread. Doomsayers are as old as D. It will be doing OK.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: Hehe, it's already a bit laughable that correctness is not preferred. // Swift let a = "á" let b = "á" let c = "\u{200B}" // zero width space let x = a + c + a let y = b + c + b print(a.count) // 1 print(b.count) // 1 print(x.count) // 3 print(y.count) // 3 print(a == b) // true print("ááá".range(of: "á") != nil) // true // D auto a = "á"; auto b = "á"; auto c = "\u200B"; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 wtf writeln(b.length); // 3 wtf writeln(x.length); // 7 wtf writeln(y.length); // 9 wtf writeln(a == b); // false wtf writeln("ááá".canFind("á")); // false wtf writeln(cast(ubyte[]) a); // [195, 161] writeln(cast(ubyte[]) b); // [97, 204, 129] At least for equality, it doesn't seem far fetched to me that both are not considered equal if they are not the same.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:30:38 UTC, Guillaume Piolat wrote: On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again: - it's too late to remove because breakage - attempts at removing it were _already_ tried - it has been debated to DEATH - there is an easy work-around So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear. The real supporters? So it's a religion? For me it's about technology and finding a good tool for a job. why shouldn't users be allowed to give feedback? Straw-man. I meant in _general_, not necessarily autodecode ;) If we don't get over _some_ technical debate, the only thing that is achieved is a loss of time for everyone involved. Translation: "Nothing to see here, move along!" Usually a sign to move on...
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: // D auto a = "á"; auto b = "á"; auto c = "\u200B"; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 wtf writeln(b.length); // 3 wtf writeln(x.length); // 7 wtf writeln(y.length); // 9 wtf writeln(a == b); // false wtf writeln("ááá".canFind("á")); // false wtf I had to copy-paste that because I wondered how the last two can be false. They are because á is encoded differently. if you replace all occurences of it with a grapheme that fits to one code point, the results are: 2 2 7 7 true true
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:33:27 UTC, rikki cattermole wrote: Either decide a list of conditions before we can break to remove it, or yes lets let this idea go. It isn't helping anyone. Can't you just mark it as deprecated and provide a library compatibility range (100% compatible)? Then people will just update their code to use the range... This should be possible to achieve using automated source-to-source translation in most cases.
Java also has chained exceptions, done manually
In an earlier post, Don Clugston wrote: When I originally implemented this, I discovered that the idea of "chained exceptions" was hopelessly naive. The idea was that while processing one exception, if you encounter a second one, you chain them together. Then you get a third, fourth, etc. The problem is that it's much more complicated than that. Each of the exceptions can be a chain of exceptions themselves. This means that you don't end up with a chain of exceptions, but rather a tree of exceptions. That's why there are those really nasty test cases in the test suite. The examples in the test suite are very difficult to understand if you expect it to be a simple chain! On the one hand, I was very proud that I was able to work out the barely-documented behaviour of Windows SEH, and it was really thorough. In the initial implementation, all the complexity was covered. It wasn't the bugfix-driven-development which dmd usually operates under. But on the other hand, once you can see all of the complexity, exception chaining becomes much less convincing as a concept. Sure, the full exception tree is available in the final exception which you catch. But, is it of any use? I doubt it very much. It's pretty clearly a nett loss to the language, it increases complexity with negligible benefit. Fortunately in this case, the cost isn't really high. First off, there's no tree of exceptions simply because... well it's not there. There is one field "next", not two fields "left" and "right". It's a linear list, not a tree. During construction there might be the situation whereby two lists need to be merged. But they will be merged by necessity into a singly-linked list, not a tree, because we have no structural representation of a tree. (As an aside, it does seem we could allow some weird cases where people rethrow some exception down the chain, thus creating loops. Hopefully that's handled properly.) Second, it does pay to keep abreast of other languages.
I had no idea (and am quite ashamed of it) that Java also has chained exceptions: https://www.geeksforgeeks.org/chained-exceptions-java/ They implement them manually, i.e. the user who throws a new exception would need to pass the existing exception (or exception chain) as an argument to the new exception's constructor. Otherwise, an exception thrown from a catch/finally block obliterates the existing exception and replaces it with the new one: https://stackoverflow.com/questions/3779285/exception-thrown-in-catch-and-finally-clause So chaining exceptions in Java is a nice complementary mechanism to compensate for that loss in information: when you throw, you have the chance to chain the current exception so it doesn't get ignored. Because of that, D's chained exceptions mechanism can be seen as an automated way of doing "the right thing" in Java. We should study similarities and distinctions with Java's mechanism and discuss them in our documentation. Andrei
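For comparison, a minimal D sketch of the automatic chaining being contrasted with Java's manual approach: a collateral exception thrown from a finally block is appended to the in-flight exception's `next` chain instead of replacing it.

```d
import std.stdio : writeln;

void main()
{
    try
    {
        try
        {
            throw new Exception("first");
        }
        finally
        {
            // In Java this would obliterate "first" unless the user
            // chained it manually; in D it is chained automatically.
            throw new Exception("second");
        }
    }
    catch (Exception e)
    {
        writeln(e.msg);       // first
        writeln(e.next.msg);  // second -- preserved on the chain
    }
}
```

The original exception keeps propagating and the collateral one rides along on `Throwable.next`, which is D's automated version of Java's constructor-cause convention.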
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 07/09/2018 2:30 AM, Guillaume Piolat wrote: On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again: - it's too late to remove because breakage - attempts at removing it were _already_ tried - it has been debated to DEATH - there is an easy work-around So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear. +1 Either decide a list of conditions before we can break to remove it, or yes lets let this idea go. It isn't helping anyone.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again: - it's too late to remove because breakage - attempts at removing it were _already_ tried - it has been debated to DEATH - there is an easy work-around So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear. why shouldn't users be allowed to give feedback? Straw-man. If we don't get over _some_ technical debate, the only thing that is achieved is a loss of time for everyone involved.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: Because grapheme decoding is SLOW, and most of the time you don't even need it anyway. SLOW as in, it will easily add a factor of 3-5 (if not worse!) to your string processing time, which will make your natively-compiled D code a laughing stock of interpreted languages like Python. It will make autodecoding look like an optimization(!). Hehe, it's already a bit laughable that correctness is not preferred. // Swift let a = "á" let b = "á" let c = "\u{200B}" // zero width space let x = a + c + a let y = b + c + b print(a.count) // 1 print(b.count) // 1 print(x.count) // 3 print(y.count) // 3 print(a == b) // true print("ááá".range(of: "á") != nil) // true // D auto a = "á"; auto b = "á"; auto c = "\u200B"; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 wtf writeln(b.length); // 3 wtf writeln(x.length); // 7 wtf writeln(y.length); // 9 wtf writeln(a == b); // false wtf writeln("ááá".canFind("á")); // false wtf Tell me which one would cause the giggles again? If speed is the preference over correctness (which I very much disagree with, but for arguments sake...) then still code points are the wrong choice. So, speed was obviously (??) not the reason to prefer code points as the default. Here's a read on how swift 4 strings behave. Absolutely amazing work there: https://oleb.net/blog/2017/11/swift-4-strings/ Grapheme decoding is really only necessary when (1) you're typesetting a Unicode string, and (2) you're counting the number of visual characters taken up by the string (though grapheme counting even in this case may not give you what you want, thanks to double-width characters, zero-width characters, etc. -- though it can form the basis of correct counting code). Yeah nah. Those are not the only 2 cases *ever* where grapheme decoding is correct. I don't think one can list all the cases where grapheme decoding is the correct behavior. 
Off the top of me head you've already forgotten comparisons. And on top of that, comparing and counting has a bajillion* use cases. * number is an exaggeration. For all other cases, you really don't need grapheme decoding, and being forced to iterate over graphemes when unnecessary will add a horrible overhead, worse than autodecoding does today. As opposed to being forced to iterate with incorrect results? I understand that it's slower. I just don't think that justifies incorrect output. I agree with everything you've said next though, that people should understand unicode. // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. I agree that you should know about unicode. And maybe you can't be correct 100% of the time but you can very well get much closer than where D is right now. And yeah, you can't drive without a license, but most cars hopefully don't show you an incorrect speedometer reading because it produces faster drivers. T -- Gone Chopin. Bach in a minuet. Lol :D
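To make the code unit / code point / grapheme distinction concrete, a small sketch using a decomposed 'á' ('a' plus the U+0301 combining acute):

```d
import std.range : walkLength;
import std.stdio : writeln;
import std.uni : byGrapheme;
import std.utf : byCodeUnit;

void main()
{
    string s = "a\u0301";              // decomposed 'á'
    writeln(s.byCodeUnit.walkLength);  // 3 UTF-8 code units
    writeln(s.walkLength);             // 2 code points (auto-decoded default)
    writeln(s.byGrapheme.walkLength);  // 1 grapheme -- what a user "sees"
}
```

This is the trade-off in the thread in miniature: `byGrapheme` gives the count Swift reports, at the decoding cost Teoh is objecting to.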
[Issue 18771] Identical overload sets in different modules have different identities
https://issues.dlang.org/show_bug.cgi?id=18771 RazvanN changed: What|Removed |Added CC||razvan.nitu1...@gmail.com --- Comment #1 from RazvanN --- PR : https://github.com/dlang/dmd/pull/8675 --
Re: DIP Draft Reviews
On Thursday, 6 September 2018 at 11:18:25 UTC, Nicholas Wilson wrote: I can understand not requiring authors to respond to all the feedback, but not requiring them to respond to _any_ is just wasting everyone's time, since _all_ of the previous points will be bought up again and the next stage will be a repeat of the previous. Yeah, I agree that if nothing or almost nothing is addressed (even by explaining why the raised issues didn't convince) that should prevent the DIP from moving forward.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:01:55 UTC, Guillaume Piolat wrote: So Unicode in D works EXACTLY as expected, yet people in this thread act as if the house is on fire. Expected by who? The Unicode expert or the user? D dying because of auto-decoding? Who can possibly think that in its right mind? Nobody, it's just another major issue to be fixed. The worst part of this forum is that suddenly everyone, by virtue of posting in a newsgroup, is an annointed language design expert. Let me break that to you: core developer are language experts. The rest of us are users, that yes it doesn't make us necessarily qualified to design a language. Calm down. I for my part never said I was an expert on language design. Number one: experts do make mistakes too, there is nothing wrong with that. And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. Number two: why shouldn't users be allowed to give feedback? Engineers and developers need feedback, else we'd still be using CLI, wouldn't we. The user doesn't need to be an expert to know what s/he likes and doesn't like and developers / engineers often have a different point of view as to what is important / annoying etc. That's why IT companies introduced customer service, because the direct interaction between developers and users would often end badly (disgruntled customers).
Re: hasAliasing with nested static array bug ?
On Thursday, 6 September 2018 at 07:37:11 UTC, Simen Kjærås wrote: On Wednesday, 5 September 2018 at 22:35:16 UTC, SrMordred wrote: https://run.dlang.io/is/TOTsL4 Yup, that's a bug. Reduced example: struct S { int*[1] arr; } import std.traits : hasAliasing; static assert(hasAliasing!S); Issue filed: https://issues.dlang.org/show_bug.cgi?id=19228 Pull request: https://github.com/dlang/phobos/pull/6694 -- Simen Nice, thanks :)
Re: Meson issue with -L--export-dynamic flag
On Mon, 2018-09-03 at 13:19 +, Gerald via Digitalmars-d-learn wrote: > Myself and some others are looking at replacing autotools in > Tilix with meson for the various Linux distros to use when > building and packaging the binary. However we are running into an > issue with meson around the use of the "-L--export-dynamic" flag. I have been using -L-Wl,--export-dynamic to get C symbols exported at runtime with ldc2, I am not sure if this datum helps any. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 13:03:09 UTC, 0xEAB wrote: On Thursday, 6 September 2018 at 10:55:04 UTC, Laurent Tréguier wrote: Then would it be possible to use code coverage to hint users about packages possibly not building anymore even if they are shown to be buildable ? I see yet another problem here. Having to maintain a high coverage just to get your package flagged as maintained might lead to a package not getting maintained in the first place. Of course, coverage can be considered a measurement for code quality. But can one really derive the state of maintenance from it? Maybe not. I simply thought that it could help know if a package would still build in real-world situations, especially with a lot of meta-programming.
Re: This is why I don't use D.
On 9/5/18 4:40 PM, Nick Sabalausky (Abscissa) wrote: On 09/04/2018 09:58 PM, Jonathan M Davis wrote: On Tuesday, September 4, 2018 7:18:17 PM MDT James Blachly via Digitalmars-d wrote: Are you talking about this? https://github.com/clinei/3ddemo which hasn't been updated since February 2016? This is part of why it's sometimes been discussed that we need a way to indicate which dub packages are currently maintained and work. What we need is for DUB to quit pretending the compiler (and DUB itself, for that matter) isn't a dependency just like any other. I pointed this out years ago over at DUB's GitHub project, but pretty much just got silence. The compiler doesn't change all that often, and when it does, it's usually a long deprecation cycle. The problem really is that phobos/druntime change all the time. And they are tied to the compiler. I think some sort of semver scheme really should be implemented for the compiler and phobos. But we need more manpower to handle that. -Steve
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:01:55 UTC, Guillaume Piolat wrote: Let me break that to you: core developer are language experts. The rest of us are users, that yes it doesn't make us necessarily qualified to design a language. Who?
Re: This is why I don't use D.
On Thursday, 6 September 2018 at 12:33:21 UTC, Everlast wrote: The problem is that all projects should be maintained. The issue, besides the tooling which can only reduce the problem to manageable levels, is that projects go stale over time. This is obvious! You say though "But we can't maintain every package, it is too much work"... and that is the problem, not that it is too much work but there are too many packages. This is the result of allowing everyone to build their own kitchen sink instead of having some type of common base types. I doubt having too many packages will be D's downfall. Javascript is a thriving language even if tons of NPM packages are unmaintained (and even if they still run, they potentially have security vulnerabilities due to old dependencies). It's sort of like most things now... say cell phone batteries... everyone makes a different one to their liking and so it is a big mess to find replacements after a few years. See, suppose if there were only one package... and everyone maintained it. Then as people leave other people will come in in a continual basis and the package will always be maintained as long as people are using it. If we could have something as simple as "having the one and only package that fits every use case", we wouldn't have multiple OS's, multiple programming languages, etc. I do agree that having "the one" would make everything easier in theory, but reality isn't theory.
[Issue 19196] DMD thinks storage size for pointer or dynamic array isn't always known
https://issues.dlang.org/show_bug.cgi?id=19196 --- Comment #4 from github-bugzi...@puremagic.com --- Commit pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/379446e0f059d25e8da909bf5373f861af5069c5 Improve diagnostic with forward references and tupleof When using tupleof in a template forward reference context, dmd emitted an error message complaining that it could not calculate the size of the struct. This happened even in cases when the size was not obviously required. To make things less confusing and workarounds more obvious, the error message now explicitly says that tupleof is the problem. Relates to (but doesn't fix) issue 19196. --
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:43:31 UTC, ag0aep6g wrote: You say that D users shouldn't need a '"Unicode license" before they do anything with strings'. And you say that Python 3 gets it right (or maybe less wrong than D). But here we see that Python requires a similar amount of Unicode knowledge. Without your Unicode license, you couldn't make sense of `len` giving different results for two strings that look the same. So both D and Python require a Unicode license. But on top of that, D also requires an auto-decoding license. You need to know that `string` is both a range of code points and an array of code units. And you need to know that `.length` belongs to the array side, not the range side. Once you know that (and more), things start making sense in D.

You'll need some basic knowledge of Unicode if you deal with strings, that's for sure. But you don't need a "license", and it certainly shouldn't be used as an excuse for D's confusing nature when it comes to strings. Unicode is confusing enough, so you don't need to add another layer of complexity that confuses users further. And most certainly you shouldn't blame the user for being confused. Afaik, there's no warning label with an accompanying user manual for string handling.

My point is: D doesn't require more Unicode knowledge than Python. But D's auto-decoding gives `string` a dual nature, and that can certainly be confusing. It's part of why everybody dislikes auto-decoding.

D should be clear about it. I think it's too late for `string` to change its behavior (i.e. "á".length == 1). If you want to change `string`'s behavior now, maybe a compiler switch would be an option for the transition period: -autodecode=off. Maybe a new type of string could be introduced that behaves like one would expect, say `ustring` for correct Unicode handling. Or `string` does that and you introduce a new type for high performance tasks (`rawstring` would unfortunately be confusing).
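The point about Python's `len` can be made concrete. A minimal sketch in standard Python 3 (nothing D-specific here): two strings that render identically can report different lengths, because `len` counts code points, not user-perceived characters.

```python
import unicodedata

# "á" as a single precomposed code point (U+00E1)
composed = "\u00e1"
# "a" followed by a combining acute accent (U+0301) -- renders the same
decomposed = "a\u0301"

print(len(composed))    # 1
print(len(decomposed))  # 2

# The two forms are canonically equivalent: NFC normalization
# folds the decomposed form into the precomposed one.
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```

So even without auto-decoding, a Python user needs to know about normalization forms to explain why "the same" string gives two different lengths; that is the "Unicode license" being argued about above.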
The thing is that even basic things like string handling are complicated and flawed, so I don't want to use D for any future projects, and I don't have the time to wait until it gets fixed one day, if it ever gets fixed at all. Nor does it seem to be a priority, as opposed to other things that are maybe less important for production. But at least I'm wiser after this thread, since it has been made clear that things are not gonna change soon, at least not soon enough for me. This is why I'll file for D-vorce :) Will it be difficult? Maybe at the beginning, but it will make things easier in the long run. And at the end of the day, if you have to fix and rewrite parts of your code again and again due to frequent language changes, you might as well port it to a different PL altogether. But I have no hard feelings; it's a practical decision I had to make based on pros and cons. [snip]