Re: Ownership and Borrowing in D
On Tuesday, 23 July 2019 at 22:01:56 UTC, Sebastiaan Koppe wrote: So in a nutshell, a variable with the unique qualifier ensures that there are no other references to that data during the lifetime of said variable? In other words, you can only take 1 addressOf/ref at a time? Does it prevent more than just use-after-free? Can you ask again on the Github thread? I don't want to hijack this thread for 12 pages. (also, can you expand your question? I don't get what you're asking)
Re: Ownership and Borrowing in D
On Tuesday, 23 July 2019 at 14:18:27 UTC, Timon Gehr wrote: I think tying ownership/borrowing semantics to pointers instead of structs makes no sense; it's not necessary and it's not sufficient. Your use case illustrates why it is not sufficient. That's as good a time as any to plug my own proposal draft: https://gist.github.com/PoignardAzur/9896ddb17b9f6d6f3d0fa5e6fe1a7088 Any thoughts?
Re: Ownership and Borrowing in D
On Wednesday, 17 July 2019 at 20:59:27 UTC, Walter Bright wrote: I'm interested to see your design. But I suggest you move quickly, as I've suggested people who talked about this propose a design for the last 10 years, nothing has happened, and we can't wait any longer. For the record, I've been busy with WebAssembly proposals and haven't had the time to write a proper DIP this week. That said, here's a **very, very rough** draft that conveys what the proposed design is about: https://gist.github.com/PoignardAzur/9896ddb17b9f6d6f3d0fa5e6fe1a7088 I'll try to make a PR before next weekend.
Re: Ownership and Borrowing in D
On Tuesday, 16 July 2019 at 06:12:42 UTC, Walter Bright wrote: Now I just have to deliver the goods!

Lately, I've been thinking about the possibility of an alternative ownership system for D, one that would be radically different from what you're considering, but still aim for the same features (memory safety, compile-time checking, zero-cost abstraction), based on a `unique` qualifier. If I were to write a formal proposal for it, how interested would you be in comparing the two schemes (DIP 1021 and eventually Rust's "one mutable ref" rule, vs. a unique qualifier)? Like, I want to make my pitch, but I don't want to spend a huge amount of effort on it if you're just going to go with DIP 1021 anyway.
Re: Ownership and Borrowing in D
On Monday, 15 July 2019 at 14:58:55 UTC, Mike Parker wrote: some folks asked for some information about the bigger picture. As one of the folks that asked for that information, thank you Walter for posting this :) The system described is a little rough around the edges, but it's really helpful to inform the discussion around DIP 1021!
Re: Priority DIP for Draft Review: Argument Ownership and Function Calls
On Wednesday, 26 June 2019 at 10:51:33 UTC, Mike Parker wrote: https://github.com/dlang/DIPs/pull/158

I'm going to be candid here. Based on past experience, I'm worried that:
- This DIP will generate a lot of negative feedback.
- Walter will ignore most of that feedback, or cherry-pick a few arguments that resonate with him and ignore the others.
- People will ask for a more formal specification, and Walter will refuse on the grounds that he isn't a PL theorist / that the DIP is only a first step towards a more complete borrowing scheme.
- Walter will not lay out what that complete borrowing scheme looks like, on the grounds that it's too early to tell.
- Because of its importance for future features, the DIP is going to be rushed despite the unaddressed criticisms.

Can we get some assurance this isn't going to be the case? I'm particularly interested in flow analysis features, and I think I have something to contribute, but I don't want to spend a large amount of effort debating and suggesting alternatives if I expect to be stonewalled.
Re: Phobos is now compiled with -preview=dip1000
On Saturday, 18 May 2019 at 19:44:37 UTC, Walter Bright wrote: If all access to internals is returned by ref, those lifetimes are restricted to the current expression.

Oh my god, I try my best to be open-minded, but talking about dip1000 design with you is like pulling teeth *at best*. Yes, containers work perfectly if you allocate them on the stack, use their contents during the current stack frame, and then de-allocate them statically. By definition, this represents 0% of the use cases of dynamic containers. Dynamic containers need methods like "push_back", "reserve", "resize", "concatenate" or "clear", which are all impossible to implement with dip1000 without making their implementations @trusted, which in turn opens up the program to use-after-free memory corruption. See also:

https://forum.dlang.org/post/qbbipvkjqjeweasxk...@forum.dlang.org
https://forum.dlang.org/post/rxmwjjphnmkszaxon...@forum.dlang.org

Have you talked to Atila Neves at all in the past six months? Why the hell are we having this discussion? This is not a new issue. I have raised it repeatedly in the past (I can even dig up the posts if you're interested; I remember writing a fairly in-depth analysis at some point). Atila's automem and Skoppe's spasm have the same limitation: you can't reallocate memory without writing unsafe code (I'm told spasm gets around that by never deallocating anything).

Honestly, the fact that you're the only person with a coherent vision of dip1000, and yet you keep ignoring problems when they're pointed out to you, is both worrying and infuriating. Eg: So far, the only real shortcoming in the initial design was revealed by the put() semantics, and was fixed with that PR that transmitted scope-ness through the first argument.
Like, yes, I understand that dip1000 is an achievement even if it doesn't allow for resizable containers, and that immutable already allows for functional programming patterns, and that's great, but you need to stop acting like everything's going perfectly when community members (including highly involved library writers) have complained about the same things over and over again (imprecise semantics, lack of documentation, the resize() use case) and you've kept ignoring them. Seriously, I'm not asking for much. I'm not demanding you take any architecture decision or redesign the language (like some people are prone to demanding here). But it would be nice if you stopped acting like you didn't read a word I wrote, over and over again.
Re: Phobos is now compiled with -preview=dip1000
On Friday, 17 May 2019 at 20:04:42 UTC, Walter Bright wrote: Dip1000 is key to enable containers to control access to pointers to their innards that they expose.

I haven't looked at the subject for a while, but every time I did, the takeaway was the same: dip1000 works great for containers until you need to reallocate or free something, at which point it's back to @trusted code with you. I think you said at some point "It's still useful, because it reduces the surface of code that needs to be checked", but even then, saying containers can "control access to the data they expose" is a little optimistic. They only control that access as long as you don't need to call resize(). I'd wager that a large fraction of dangling pointer errors made by non-beginner C++ developers come specifically from this use case.
Re: DIP 1000--Scoped Pointers--Superseded
On Thursday, 7 March 2019 at 14:24:29 UTC, Mike Parker wrote: The implementation supersedes the DIP.

I think the question a lot of people have in mind is "Is there any plan to formally organize a discussion about the future of scoped pointers?" More specifically, are you planning a new DIP discussing the semantics of scope and return scope, ideally one that would take into account previous feedback, address concerns that DIP-1000 had originally inspired, and include an analysis of the pros and cons of -dip1000's implementation, as reported by its current users?

Less formally, what I mean is that a lot of people had concerns at the time DIP-1000 was discussed. Many of these concerns (including mine) weren't really addressed, and Walter's reaction gave the impression that he didn't understand them and, as a result, considered them unimportant. This led to a lot of frustration (including, if I remember correctly, Dicebot stepping down as DIP manager) and a general break in communication between Walter and the community.

So, considering how important scoped pointers are to the language (betterC, webasm, video games, C++ interop, competing with Rust), I think (and I realize this is a lot to ask) that this is an area where Walter needs to bite the bullet and make a sustained effort to interact with the community and address DIP-1000's problems, whether by starting another DIP or through some other means. If nothing else, we should probably have a "Who here uses -dip1000, and does it work for you?" thread.
Re: DIP 1018--The Copy Constructor--Formal Review
On Thursday, 28 February 2019 at 01:42:13 UTC, Andrei Alexandrescu wrote: Such sharing of resources across objects is a common occurrence, which would be impeded by forcing `const` on the right-hand side of a copy. (An inferior workaround would be to selectively cast `const` away inside the copy constructor, which is obviously undesirable.) For that reason this DIP proposes allowing mutable copy sources. There's an argument to be made that a copy constructor isn't the best way to share resources between two variables in a way that might affect code using the variable being copied.
Re: Project Highlight: Spasm
On Friday, 1 March 2019 at 14:27:36 UTC, Sebastiaan Koppe wrote: That would be awesome. I initially tried very hard to stick to React/JSX functional rendering. I could not find a way to make it a zero-cost abstraction, but maybe you have more success! I'll create an issue once I've written down my thoughts on the subject.
Re: Project Highlight: Spasm
On Thursday, 28 February 2019 at 12:24:27 UTC, Mike Parker wrote: The Blog: https://dlang.org/blog/2019/02/28/project-highlight-spasm/ Reddit: https://www.reddit.com/r/programming/comments/avqioi/spasm_d_to_webassembly_for_single_page_apps/

I've seen spasm around quite a few times, but reading this article has made me want to actually take a look at the documentation and try to understand how the library works. Would the author be interested in structural-level feedback? As in, not "I wish there was this feature", but "I think the way you're doing X and Y is wrong, and the project would probably benefit from a complete refactoring". I realize this kind of feedback is pretty irritating to get and hard to act on several months into the project, hence why I'm asking.

The short version is: it's pretty clear Sebastiaan has designed spasm with the goal of giving the library compile-time information on the structure of the widgets to render, to avoid React's superfluous updates and prop comparisons; that said, I think it's possible to give the library that information without losing React's "your components are all functions, don't worry about how the data is updated" simplicity, which I think is an area where spasm comes up short. Anyway, I'm ready to spend more time documenting my thoughts for a deeper analysis if Sebastiaan is interested.
Re: DIP 1018--The Copy Constructor--Formal Review
On Monday, 25 February 2019 at 22:45:38 UTC, Olivier FAURE wrote: For the same reason C++'s std::shared_ptr uses a non-const copy constructor. Wait, no, I just checked, std::shared_ptr's copy constructor is const, even though it changes shared data. Ugh, that's just wrong. (I kind of agree with Walter's point; I totally assumed the constructor would be non-const, since it mutates data it receives)
Re: DIP 1018--The Copy Constructor--Formal Review
On Monday, 25 February 2019 at 16:00:54 UTC, Andrei Alexandrescu wrote: Thorough feedback has been given, likely more so than for any other submission. A summary for the recommended steps to take can be found here: https://forum.dlang.org/post/q2u429$1cmg$1...@digitalmars.com It is not desirable to demand reviewers to do more work on the review or to defend it. Acceptance by bullying is unlikely to create good results. The target of work is squarely the proposal itself.

Agreed. Honestly, I am not impressed with the behavior of several members here. I understand that the rvalue DIP went through a long process, that some people really wanted it to be accepted, and that it was frustrating to wait so long only for it to be refused, but at some point, you guys have to accept that the people in charge refused it. They explained why they did, their reasons matched concerns other users had, and they explained how to move the proposal forward.

So again, I get that this is frustrating, but repeatedly complaining, asking for an appeal, and protesting about other DIPs being accepted is *not* professional behavior. Reviewers are entitled to refuse contributions for any reason, and if a reviewer rejects a proposal, too bad; you don't get to ask again and again and complain and bring it up in every other thread until they say yes.

Yes, this DIP was fast-tracked. Yes, this can feel unfair. And yet, it makes sense that it was fast-tracked, because it fits a priority of the project owners (C++ interoperability + reference counting), and project owners are allowed to have priorities. It's not like this DIP was rushed or has major vulnerabilities (the "mutable copy constructor" thing is necessary for reference counting).
Re: DIP 1018--The Copy Constructor--Formal Review
On Monday, 25 February 2019 at 20:41:58 UTC, Paolo Invernizzi wrote: Honestly, I've not understood the rationale or the covered use case in letting the copy ctor mutate the ref source parameters... Sincerely, without polemical intent. - P For the same reason C++'s std::shared_ptr uses a non-const copy constructor.
Re: DIP 1018--The Copy Constructor--Formal Review
On Sunday, 24 February 2019 at 12:57:06 UTC, ag0aep6g wrote: On 24.02.19 11:46, Mike Parker wrote: Walter provided feedback on Razvan's implementation. When it reached a state with which he was satisfied, he gave the green light for acceptance. Sounds like it might be a "worst acceptable proposal" [1] which Andrei says the DIP process is supposed to avoid. [1] https://forum.dlang.org/post/q2ndr8$15gm$1...@digitalmars.com If I'm understanding correctly, Andrei said that about the proposals, while Walter gave feedback on the implementation, which is a little different. But yeah, the proposal was clearly fast-tracked, probably because it's needed for reference counting and better C++ integration.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Friday, 1 February 2019 at 09:10:15 UTC, aliak wrote: Shouldn't doubleMyValue(pt.x) be a compiler error if pt.x is a getter? For it not to be a compile error pt.x should also have a setter, in which case the code needs to be lowered to something else:

The thing is, D doesn't really differentiate between a getter and any other method. So with DIP-1016, when given

doubleMyValue(pt.x);

the compiler would assume the programmer means:
- Call pt.x()
- Store the result in a temporary
- Pass that temporary as a ref parameter to doubleMyValue

At no point is the compiler aware that the user intends for x to be interpreted as a getter.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 21:50:32 UTC, Steven Schveighoffer wrote: How is the problem not in doubleMyValue? Its sole purpose is to update an lvalue. It is the perfect candidate to mark with @disable for rvalues.

But right now, updating an rvalue is what ref is supposed to be used for. Besides, the fact remains that accepting DIP 1016 would add a new corner case with the potential to create hard-to-detect bugs, which feels to me like it should be a dealbreaker. The fact that this corner case can be patched using @disable isn't good enough, because:
- Existing codebases that use ref won't have the @disable patch, which means using them will become (slightly) dangerous because of DIP 1016.
- Making libraries that behave predictably should be the default path (the "pit of success" philosophy), not require an additional construct. Besides, D's type system should be more than capable of consistently telling the user "Be careful, you're modifying a temporary when it's probably not what you meant".

---

An alternate proposal that just came to mind: allow the user to pass rvalues to ref arguments with the following syntax:

y = doubleMyValue(cast(ref)10);

This syntax would avoid creating ambiguous situations where the compiler thinks you're passing it a getter's return value as a temporary when you're actually trying to pass the property that the getter maps to. Eg, the following code:

y = doubleMyValue(pt.x);

would still fail the same way it currently does when Point.x() is a getter.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 21:57:21 UTC, Steven Schveighoffer wrote: That being said, you can look at the fact that most people don't even know about this problem, even seasoned veterans, as a sign that it's really not a big problem. Isn't it a recurring theme on this forum that D is really cool but also kind of obnoxious because of weird corner cases that veterans know, but aren't documented anywhere?
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 21:44:53 UTC, jmh530 wrote: It doesn't compile with dip1000 without first giving the getter functions a return attribute for this. But it still compiles with -dip1000 once you give x() and y() return attributes, even though what's happening is clearly different from what the user wants (and the compiler has enough info to know that).
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 18:31:22 UTC, Steven Schveighoffer wrote: BTW, the DIP discusses how to annotate these rare situations:

int doubleMyValue(ref int x) { ... }
@disable int doubleMyValue(int x);

-Steve

I don't think that's a solution. The problem is in the getter method, not in doubleMyValue. If nothing else, since the DIP is designed to work on existing functions, it could happen with doubleMyValue functions designed and used by people completely unaware of DIP-1016.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 16:38:42 UTC, Steven Schveighoffer wrote: Yeah, that's already a thing that ref in D doesn't protect against: It took me a while to understand what the compiler was doing. This really feels like something that shouldn't compile.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Thursday, 31 January 2019 at 02:10:05 UTC, Manu wrote: I still can't see a truck-sized hole.

I don't know if it's truck-sized, but here's another corner case:

int doubleMyValue(ref int x) { x *= 2; return x; }

Point pt;
pt.x = 5;
pt.y = foobar();
doubleMyValue(pt.x);
assert(pt.x == 10);

Question: in the above code, will the assertion pass? Answer: it depends on Point's implementation. If x is a member variable, then yes. If it's a getter, then doubleMyValue will take an rvalue, x won't be mutated, and the assertion will fail. I think this is a non-trivial conceptual problem.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Monday, 28 January 2019 at 17:23:51 UTC, Andrei Alexandrescu wrote: * Regarding the argument "why not make this an iterative process where concerns are raised and incrementally addressed?" We modeled the DIP process after similar processes - conference papers, journal papers, proposals in other languages. There is a proposal by one or more responsibles, perfected by a community review, and submitted for review. This encourages building a strong proposal - as strong as can be - prior to submission. Washing that down to a negotiation between the proposers and the reviewers leads to a "worst acceptable proposal" state of affairs in which proposers are incentivized to submit the least-effort proposal, reactively change it as issues are raised by reviewers. Fair enough.
Re: DIP 1016--ref T accepts r-values--Formal Assessment
On Friday, 25 January 2019 at 11:56:58 UTC, Walter Bright wrote: All that criticism aside, I'd like to see rvalue references in D. But the DIP needs significant work.

I haven't participated in writing this DIP, but I personally appreciate this level of feedback. I think it would have been better received if the original answer had had that level of detail. Otherwise, I don't think this should be an all-or-nothing situation. It would make sense to bump the DIP back one stage or two for minor adjustments, and if the authors decide that they need to make major changes to get past the problems you mention, require those changes to go through the entire process again (through another DIP).
Re: A brief survey of build tools, focused on D
On Sunday, 16 December 2018 at 00:17:55 UTC, Paul Backus wrote: There's something important you're glossing over here, which is that, in the general case, there's no single obvious or natural way to compose two DAGs together. For example: suppose project A's DAG has two "output" vertices (i.e., they have no outgoing edges), one corresponding to a "debug" build and one corresponding to a "release" build. Now suppose project B would like to depend on project A. For this to happen, our hypothetical DAG import function needs to add one or more edges that connect A's DAG to B's DAG. The question is, how many edges, and which vertices should these edges connect?

That doesn't seem right. Surely you could write

externalDependencies = [ someSubmodule.release ]

in your language-specific build tool, and have it convert to an equivalent import edge targeting the correct vertex in the standardized dependency graph? It might be inconvenient in some cases (eg you have to manually tell your tool to import someSubmodule.release in release mode and someSubmodule.debug in debug mode), but it would still be infinitely more convenient and elegant than the current "multiple incompatible build tools per language, and shell/make scripts to link them together" paradigm.

Especially with the increasing usability of WebAsm, it would be nice to have a standard build model to link multiple modules in different languages together into a single wasm binary. A standardized DAG model would also help with making a project-wide equivalent to the Language Server Protocol. (I'm not convinced by BSP)
Re: Warn on unused imports?
On Wednesday, 26 September 2018 at 09:25:11 UTC, Jonathan M Davis wrote: It's just a message. You can use a compiler flag to make the message go away or to turn it into an error (though in general, I'd advise against it, since then your code breaks as soon as something gets deprecated), but by default, they're just messages.

That's precisely what warnings are: messages that can be silenced or turned into errors. When people talk about warnings, that's usually what they mean. Personally speaking, I like to treat warnings as problems that don't stop your code from compiling during development (eg I don't want to worry about my extraneous return statements when I'm doing quick experiments), but that can't be accepted upstream and have to trigger errors in CI. But with a robust warning system, other approaches are possible.

Unfortunately, if your build spits out a bunch of status stuff instead of just printing out actual problems, deprecation messages do sometimes get missed

Warnings often catch real problems, even categories of warnings with high rates of false positives, like unused variables. But yeah, I get your point. Warnings lose their value when they pile up in your codebase to the point that it's impossible to notice new ones, and impossible to turn them into errors because there are already too many. That said, that's a problem with D compilers, not with the concept of warnings. You mention that deprecation warnings are nice because they can be turned off; ideally, all categories of warnings should be like that. What DMD and GDC lack (I think) is GCC's level of granularity, with flags like

-Wnonnull
-Werror=missing-attributes
-Wno-error=misleading-indentation

But that doesn't mean the concept of compiler warnings needs to be thrown away.
Re: Named multi-imports
On Friday, 18 August 2017 at 09:18:42 UTC, Timon Gehr wrote: Any downsides? ... - It introduces a new type that would not really be necessary. This is avoidable, at the cost of a little more verbosity:

D newbie here: is there a non-negligible cost to creating a stateless struct type? Also, since the struct is private and only used for its aliases, is there a chance the compiler might elide those costs?
Re: Stefan Koch: New CTFE fix
On Wednesday, 16 August 2017 at 13:55:51 UTC, Johnson wrote: Oh, your such a bad boy. How bout you grow up. If I'm childish, you are just a smaller child because you are doing the exact same thing... getting on the internet, pretending to be some bad boy with something to say... like anyone will listen to you. That goes for your buddy too. Probably all the same person that has nothing real to say. For what it's worth, I apologize for what I said earlier. I stand by it, but I shouldn't have called you names for the sake of it. That said, you're insulting people who are (mostly) trying to help you. This isn't a "who's right and who's wrong" thing. I'm *not* trying to smack you down or prove you're an idiot, I'm trying to help you (and, as you're no doubt thinking, I'm bored and I like drama on the internet and whatever). Don't behave like that, please. It's somewhat hurtful to other people, it ruins productive discussion, and it probably makes the discussion irritating for you too. I do agree that Moritz "do" comment was a little pedantic, and I think his next answers in this thread are barely more constructive than yours, but again, it's not a "he's wrong then you're right" thing. You could have told him off without being extremely aggressive and insulting. The fact that you did that is why Biotronic and I have been calling you "immature", not because we're all infatuated with Moritz or because we're a hive mind dedicated to being jealous of you.
Re: Stefan Koch: New CTFE fix
On Tuesday, 15 August 2017 at 16:10:40 UTC, Johnson wrote: I'm sorry, but you are obviously someone in *need* to prove something. No need to respond, ever. You're being extremely rude and immature.
Re: Why does stringof not like functions with arguments?
On Wednesday, 9 August 2017 at 01:39:07 UTC, Jason Brady wrote: Why does the following code error out with:

app.d(12,10): Error: function app.FunctionWithArguments (uint i) is not callable using argument types ()

Code:

import std.stdio;

void FunctionWithoutArguments() { }

void FunctionWithArguments(uint i) { }

void main() {
    writeln(FunctionWithoutArguments.stringof);
    writeln(FunctionWithArguments.stringof);
}

I'm not sure how `stringof` actually works, but it expects a valid expression as its prefix. `FunctionWithoutArguments` is a valid expression (thanks to optional parentheses, it's a call), but `FunctionWithArguments` is not, since it can't be called without arguments.
Re: why won't byPair work with a const AA?
On Wednesday, 2 August 2017 at 18:06:03 UTC, H. S. Teoh wrote: On Wed, Aug 02, 2017 at 01:15:44PM -0400, Steven Schveighoffer via Digitalmars-d-learn wrote: [...] The real answer is to have tail modifiers for structs, so you can do the same thing an array does. Note that if Result is an array, you CAN use inout:

auto byPair(AA)(inout(AA) aa) {
    alias Result = inout(X)[];
    return Result(...);
}

[...] Yeah, this isn't the first time I've run into this. But then the problem becomes, how do you design tail modifiers for structs?

I understand the general concept you're describing, but what exactly are tail modifiers? It's the first time I've seen this name, and my google-fu gives me nothing.
Re: DIP 1012--Attributes--Preliminary Review Round 1
On Friday, 28 July 2017 at 01:30:28 UTC, sarn wrote: To be totally honest, as it stands it feels like architecture astronautics: https://www.joelonsoftware.com/2001/04/21/dont-let-architecture-astronauts-scare-you/ Yeah, I think you nailed it. This DIP does seem to come from a 'what is the smartest, most elegant system I can design' logic; I don't see much value in it as something that would solve problems. That's not to say that the rationale should be explained better, or the examples should be different, or this or that paragraph should be tweaked. I think this proposal is fundamentally flawed for the reasons Jonathan M Davis outlined.
Re: DIP 1012--Attributes--Preliminary Review Round 1
On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote: DIP 1012 is titled "Attributes". https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md

This DIP proposes a very complex change (treating attributes as enums), but doesn't really provide a rationale for it. The DIP's written rationale is fairly short, and only mentions "We need a way to conveniently change default values for attributes", which I feel doesn't really justify these complex new semantics.
Re: DIP 1009--Improve Contract Usability--Preliminary Review Round 2 Begins
On Tuesday, 25 July 2017 at 09:53:02 UTC, Mike Parker wrote: On Tuesday, 25 July 2017 at 07:58:13 UTC, Olivier FAURE wrote: I feel like making a list of alternative proposals and why they were rejected would still be a good idea, both to improve the quality of the debate in this thread (people are proposing alternative syntaxes that should be addressed in the DIP), and for posterity. We weren't speaking of rejected proposals, but of previously reviewed drafts. Rejected proposals already include a summary of why they were rejected. You can find a list of all DIPs submitted under the current process, including links and their current status, at: https://github.com/dlang/DIPs/blob/master/DIPs/README.md

I... think you misunderstood me? I shouldn't have used the word 'proposals'; I should have said 'suggestions'. What I meant was: "I think it would be better for the current version of DIP 1009 to include a 'Rejected alternative syntaxes' section summarizing the previously discussed suggestions for improving contract readability." MysticZach argues that such a section would be pointless since the language authors read the previous version of DIP 1009, but I still think adding it would be a good idea (for posterity, and to streamline discussions in this thread).
Re: DIP 1009--Improve Contract Usability--Preliminary Review Round 2 Begins
On Saturday, 22 July 2017 at 04:48:34 UTC, MysticZach wrote: On Friday, 21 July 2017 at 19:36:08 UTC, H. S. Teoh wrote: In short, I feel that a more substantial discussion of how we arrived at the current form of the proposal is important so that Walter & Andrei can have the adequate context to appreciate the proposed syntax changes, and not feel like this is just one possibility out of many others that haven't been adequately considered. I think we have to assume they've been reading the prior threads. If they have specific questions or concerns, then we have to hope they'll express them here, rather than just reject the proposal. I'll put you in the author line as a co-author if you want, as this _is_ essentially your proposal. I feel like making a list of alternative proposals and why they were rejected would still be a good idea, both to improve the quality of the debate in this thread (people are proposing alternative syntaxes that should be addressed in the DIP), and for posterity.
Re: newCTFE Status July 2017
On Monday, 24 July 2017 at 11:17:23 UTC, Stefan Koch wrote: static assert (fold ([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]); Enjoy! I barely have any idea what any of this means, but it looks really cool. Keep up the good work!
Re: If Statement with Declaration
On Friday, 21 July 2017 at 21:32:48 UTC, Andrei Alexandrescu wrote:

with (auto r = makeMeARange)
    if (!r.empty)
        with (auto x = r.front)
        {
            ...
        }

Andrei

I'm being real nitpicky, but this in particular just seems like a slightly worse way to write

with (auto r = makeMeARange)
    if (!r.empty)
    {
        auto x = r.front;
        ...
    }

But yeah, it's a cool construct that lends itself to pretty neat chaining.
Re: The X Macro using D
On Friday, 21 July 2017 at 12:27:35 UTC, Olivier FAURE wrote: private __gshared const(char)*[24] pseudotab = Y.map!(x => x.id); I meant private __gshared static immutable string[Y.length] pseudotab = Y.map!(x => x.id); but you get my point. Also, upon trying it, it doesn't seem to work (at least the immutable part doesn't), but I don't really understand why. All the variables in the expression are known at compile time.
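For what it's worth, the failure is probably because `map` returns a lazy range, not an array, so it can't directly initialize a static array. A hedged sketch of a workaround (with a stand-in `X`/`Y`, since the real table has more fields) forces the range into an array first:

```d
import std.algorithm : map;
import std.array : array;

struct X { string id; }                    // stand-in for the real X
enum Y = [X("AH"), X("AL"), X("AX")];      // stand-in for the real table

// Force the lazy MapResult into a real array at compile time,
// then use it to initialize the fixed-size immutable table.
enum ids = Y.map!(x => x.id).array;
private static immutable string[ids.length] pseudotab = ids;

static assert(ids[1] == "AL");
```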
Re: The X Macro using D
On Friday, 21 July 2017 at 08:06:09 UTC, Stefan Koch wrote: My pleasure :)

// ...
mixin((){
    // ...
    enum Y = [
        //  id    reg  mask  ty
        X("AH",    4,  mAX,  TYuchar),
        X("AL",    0,  mAX,  TYuchar),
        X("AX",    8,  mAX,  TYushort),
        X("BH",    7,  mBX,  TYuchar),
        X("BL",    3,  mBX,  TYuchar),
        // ...
        X("ESI",  22,  mSI,  TYulong),
        X("ESP",  20,    0,  TYulong),
        X("SI",   14,  mSI,  TYushort),
        X("SP",   12,    0,  TYushort),
    ];
    enum lns = itos(Y.length);
    string pseudotab = "\nprivate __gshared static immutable string[" ~ lns ~ "] pseudotab = [";
    foreach(i, r; Y)
    {
        pseudotab ~= `"` ~ r.id ~ `", `;
    }
    pseudotab ~= "];\n";
}());

Quick question: isn't it possible to do

private __gshared const(char)*[24] pseudotab = Y.map!(x => x.id);

instead? That seems like the most obvious and easy-to-read option; and in contrast to the other solutions proposed, it's closer to the Rule of Least Power.
Re: If Statement with Declaration
On Wednesday, 19 July 2017 at 20:42:33 UTC, Steven Schveighoffer wrote: I remember reading a discussion about using with statements to do this earlier as well, but I can't find it. -Steve I don't think this is the discussion you're talking about, but this does bring DIP 1005 to mind: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1005.md Personally, I'm in favor of `with` for both variable declarations and imports. It's pretty intuitive semantically.
Re: Compilation times and idiomatic D code
On Monday, 17 July 2017 at 18:42:36 UTC, H. S. Teoh wrote: Besides the symbol problem though, does the template instantiation explosion problem imply as many duplicated function bodies corresponding to the new type symbols? That's a different, though somewhat related, issue. I have some ideas on that front, but for now, Rainers' PR addresses only the problem of symbol length. So the next task after his fix is merged is to tackle the larger problem of how to improve the implementation of templates. [...] Often, this results from templates that contain helper code that's independent of the template parameters, or dependent only on a subset of template parameters. For example: Yeah, I think this is the second best optimization for symbol names, after using references for duplicate symbol subnames. One thing you didn't mention is that this also creates a lot of useless duplicates for Voldemort types in template functions that don't depend on (all of) the function's parameters. For instance, looking at the code from the OP's issue:

auto s(T)(T t)
{
    struct Result
    {
        void foo() { throw new Exception("2"); }
    }
    return Result();
}

void main(string[] args)
{
    auto x = 1.s.s.s.s.s;
    x.foo;
}

foo mangles to a super long symbol, even though the actual symbol should be something like moduleName.s!(__any_type)(__any_type).Result.foo. Actually, it's a little more complicated than that, because s might have overloads the compiler is not aware of, which means it doesn't trivially know that T doesn't matter to Result. Still, I'm pretty sure there are optimizations to be had there. One possibility would be to allow the coder to give unique subnames to functions. For instance:

int add.withInts(int n1, int n2)
float add.withFloats(float n1, float n2)

This would allow the compiler to assume that a function's mangling is unique, without having to worry about overloads and other templates, and therefore skip the unnecessary parameters in the mangling of the function's Voldemort type.
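As a manual workaround (not the compiler optimization discussed above), the template-independent helper can be hoisted out of the template, so it is compiled and mangled only once; a rough sketch:

```d
// The original shape: Result does not use T, yet every s!T
// instantiation gets its own copy of Result and Result.foo.
auto s(T)(T t)
{
    struct Result
    {
        void foo() { throw new Exception("2"); }
    }
    return Result();
}

// Hoisted variant: Result.foo is compiled and mangled exactly
// once, whatever T is.
private struct Result
{
    void foo() { throw new Exception("2"); }
}

auto s2(T)(T t)
{
    return Result();
}
```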
Re: proposed @noreturn attribute
On Monday, 17 July 2017 at 18:10:27 UTC, Andrei Alexandrescu wrote: On 7/17/17 11:39 AM, Olivier FAURE wrote: I'd really prefer if you avoided the whole `typeof(assert(0))` thing. First off, it's way too verbose for a simple concept. Noted, thanks. I won't debate this much but for now I disagree. Fair enough.
Re: proposed @noreturn attribute
On Sunday, 16 July 2017 at 20:44:13 UTC, Andrei Alexandrescu wrote: An issue is that we already have typeof(null). typeof(null) and typeof(assert(0))* are two ways to specify almost the same thing. One question is whether typeof(assert(0))* and typeof(null) should be the same, or if the former should not implicitly convert to class references. I have also argued in the past that there should be a separate typeof([]). This role would now be filled by typeof(assert(0))[]. However, changing the type of '[]' may break code. You're on to something here. Perhaps we go the inverse route and define the bottom type as typeof(*null). Would that simplify matters? There is some good consistency about it: null: a pointer to anything. But can't be dereferenced. *null: well, therefore... anything. But can't be created. The latter is a mere consequence of the former. I'd really prefer if you avoided the whole `typeof(assert(0))` thing. First off, it's way too verbose for a simple concept. In general, code is much more readable when you can read functions as `Type functionName(args)`, rather than template-style `expr(valueof!(thing + otherThing).typeof) functionName(args)`, so I think it would be better not to encourage adding more expressions as return types. I think the following: noreturn_t logThenQuit(string message) is much more readable and obvious (especially to a beginner) than: typeof(*null) logThenQuit(string message) Of course, you could implement typeof(*null) and also add noreturn_t as an alias; that might be a good compromise, but I'd still dislike it, because it encourages people to use the verbose, hard-to-understand version. The second reason I don't like it is that I feel it's just trying to be clever for the sake of cleverness. I don't think we need a language feature that perfectly matches the idea of not returning from a function on a deep, philosophical level; we just need a way to tell the type system "Hey, this function doesn't return!". 
I don't think `typeof(*null)` or `typeof(assert(0))` brings any advantage in terms of real-life user code, and I don't think it's worth the confused users who would look at the code and go "Uh? What is the type of *null?" or "I thought assert was void! Why would you get the type of assert()?".
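For what it's worth, the alias-based compromise mentioned above might look like the following sketch (hypothetical, and it only works if `typeof(assert(0))` is actually made to denote the bottom type, as the proposal suggests):

```d
import std.stdio : stderr;

// Hypothetical: give the bottom type a readable name, so signatures
// can say "this never returns" without spelling out the expression.
alias noreturn_t = typeof(assert(0));

noreturn_t logThenQuit(string message)
{
    stderr.writeln(message);
    assert(0); // never returns
}
```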
Re: Why is phobos so wack?
On Sunday, 9 July 2017 at 17:13:11 UTC, Dukc wrote: About C++ from what I've heard, generic error messages there are not only much worse than others, they are much worse than even D template errors! Yeah. You don't want to try reading the GCC message for a std::cout error. It's like 80 lines of different template overloads of the '<<' operator.
Re: proposed @noreturn attribute
On Sunday, 9 July 2017 at 19:14:37 UTC, Walter Bright wrote: On 7/9/2017 6:13 AM, Andrei Alexandrescu wrote: We should use typeof(assert(0)) for Bottom. That also leaves the door open for: alias noreturn = typeof(assert(0)); I would really prefer noreturn (or noreturn_t) to be the default appellation for such a type. typeof(assert(0)) is way uglier, while noreturn is a lot more intuitive and explicit.
Re: DIP 1009--Improve Contract Usability--Preliminary Review Round 1
On Friday, 23 June 2017 at 17:31:15 UTC, MysticZach wrote: Yeah, my take is that the grammar for `assert`s applies to the new syntax as well. If the grammar for asserts is this:

AssertExpression:
    assert ( AssertParameters )

... then the grammar for the new syntax is:

InExpression:
    in ( AssertParameters )

OutExpression:
    out ( ; AssertParameters )
    out ( Identifier ; AssertParameters )

A bit late to the party, but I would recommend the following syntax: out (void; myTest) for argument-less tests. A casual reader would be less likely to see this in code and think it's some sort of typo; it would be easier to google; and it would make some semantic sense (functions that don't return anything return void).
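For comparison, here's a sketch of how the proposed expression-based contracts would read in practice (illustrative only, since the grammar wasn't final at this point; function names are made up):

```d
// Under the proposed syntax, in/out contracts become expressions:
int withdraw(int balance, int amount)
in (amount > 0)          // precondition
out (r; r <= balance)    // postcondition naming the return value
{
    return balance - amount;
}

// The suggested alternative above for an argument-less postcondition:
//     out (void; globalInvariantHolds())
// rather than the easier-to-misread:
//     out (; globalInvariantHolds())
```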
Re: Concept proposal: Safely catching error
On Thursday, 8 June 2017 at 13:02:38 UTC, ag0aep6g wrote: Catching the resulting error is @safe when you throw the int* away. So if f is `pure` and you make sure that the arguments don't survive the `try` block, you're good, because f supposedly cannot have reached anything else. This is your proposal, right? Right. I don't think that's sound. At least, it clashes with another relatively recent development: https://dlang.org/phobos/core_memory.html#.pureMalloc That's a wrapper around C's malloc. C's malloc might set the global errno, so it's impure. pureMalloc achieves purity by resetting errno to the value it had before the call. So a `pure` function may mess with global state, as long as it cleans it up. But when it's interrupted (e.g. by an out-of-bounds error), it may leave globals in an invalid state. So you can't assume that a `pure` function upholds its purity when it throws an error. That's true. A "pure after cleanup" function is incompatible with catching Errors (unless we introduce a "scope(error)" keyword that also runs on errors, but that comes with other problems). Is pureMalloc supposed to be representative of pure functions, or more of a special case? That's not a rhetorical question, I genuinely don't know. The spec says a pure function "does not read or write any global or static mutable state", which seems incompatible with "save a global, then write it back like it was". In fact, doing so seems contrary to the assumption that you can run any two pure functions on immutable / independent data at the same time and you won't have race conditions. Actually, now I'm wondering whether pureMalloc & co handle potential race conditions at all, or just hope they don't happen.
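To make the pattern under discussion concrete, here's a rough sketch of the save/restore trick (my own illustration, not the actual druntime implementation):

```d
import core.stdc.errno : errno;
import core.stdc.stdlib : malloc;

// Sketch: malloc may set the global errno, which makes it impure.
// Saving and restoring errno makes the call observationally pure --
// unless execution is interrupted between the two steps (e.g. by an
// Error), which is exactly the hazard described above.
void* pureMallocSketch(size_t size) @system
{
    const savedErrno = errno;  // remember global state
    void* p = malloc(size);    // may clobber errno on failure
    errno = savedErrno;        // restore it before returning
    return p;
}
```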
Re: Concept proposal: Safely catching error
On Thursday, 8 June 2017 at 12:20:19 UTC, Steven Schveighoffer wrote: Hm... if you locked an object that was passed in on the stack, for instance, there is no guarantee the object gets unlocked. This wouldn't be allowed unless the object was duplicated / created inside the try block. Aside from the point that this still doesn't solve the problem (pure functions do cleanup too), this means a lot of headache for people who just want to write code. I'd much rather just write an array type and be done. -Steve Fair enough. There are other advantages to writing with "create data with pure functions then process it" idioms (easier to do unit tests, better for parallelism, etc), though.
Re: Concept proposal: Safely catching error
On Wednesday, 7 June 2017 at 19:45:05 UTC, ag0aep6g wrote: You gave the argument against catching out-of-bounds errors as: "it means an invariant is broken, which means the code surrounding it probably makes invalid assumptions and shouldn't be trusted." That line of reasoning applies to @trusted code. Only @trusted code can lose its trustworthiness. @safe code is guaranteed trustworthy (except for calls to @trusted code). To clarify, when I said "shouldn't be trusted", I meant in the general sense, not in the memory safety sense. I think Jonathan M Davis put it nicely: On Wednesday, 31 May 2017 at 23:51:30 UTC, Jonathan M Davis wrote: Honestly, once a memory corruption has occurred, all bets are off anyway. The core thing here is that the contract of indexing arrays was violated, which is a bug. If we're going to argue about whether it makes sense to change that contract, then we have to discuss the consequences of doing so, and I really don't see why whether a memory corruption has occurred previously is relevant. [...] In either case, the runtime has no way of determining the reason for the failure, and I don't see why passing a bad value to index an array is any more indicative of a memory corruption than passing an invalid day of the month to std.datetime's Date when constructing it is indicative of a memory corruption. The sane way to protect against memory corruption is to write safe code, not code that *might* shut down brutally once memory corruption has already occurred. This is done by using @safe and proofreading all @trusted functions in your libs. Contracts are made to preempt memory corruption, and to protect against *programming* errors; they're not recoverable because breaking a contract means that from now on the program is in a state that wasn't anticipated by the programmer. 
Which means the only way to handle them gracefully is to cancel what you were doing and go back to the pre-contract-breaking state, then produce a big, detailed error message and then exit / remove the thread / etc. I think the issue of @trusted is tangential to this. If you (or the writer of a library you use) are using @trusted to cast away pureness and then have side effects, you're already risking data corruption and undefined behavior, catching Errors or no catching Errors. The point is that an out-of-bounds error implies a bug somewhere. If the bug is in @safe code, it doesn't affect safety at all. There is no explosion. But if the bug is in @trusted code, you can't determine how large the explosion is by looking at the function signature. I don't think there is much overlap between the problems that can be caused by faulty @trusted code and the problems that can be caught by Errors. Note that this is not a philosophical problem. I'm making an empirical claim: "Catching Errors would not open programs to memory safety attacks or accidental memory safety blunders that would not otherwise happen". For instance, if some poorly-written @trusted function causes the size of an int[10] slice to be registered as 20, then your program becomes vulnerable to buffer overflows when you iterate over it; the buffer overflow will not throw any Error. I'm not sure what the official stance is on this. As far as I'm aware, contracts and OOB checks are supposed to prevent memory corruption, not detect it. Any security based on detecting potential memory corruption can ultimately be bypassed by a hacker.
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 14:05:27 UTC, Steven Schveighoffer wrote: I don't think this will work. Only throwing Error makes a function nothrow. A nothrow function may not properly clean up the stack while unwinding. Not because the stack unwinding code skips over it, but because the compiler knows nothing can throw, and so doesn't include the cleanup code. If the function is @pure, then the only things it can set up will be stored on local or GC data, and it won't matter if they're not properly cleaned up, since they won't be accessible anymore. I'm not 100% sure about that, though. Can a pure function do impure things in its scope(exit) / destructor code? Not to mention that only doing this for pure code eliminates usages that sparked the original discussion, as my code communicates with a database, and that wouldn't be allowed in pure code. It would work for sending to a database; but you would need to use the functional programming idiom of "do 99% of the work in pure functions, then send the data to the remaining 1% for impure tasks". A process's structure would be: - Read the inputs from the socket (impure, no catching errors) - Parse them and transform them into database requests (pure) - Send the requests to the database (impure) - Parse / analyse / whatever the results (pure) - Send the results to the socket (impure) And okay, yeah, that list isn't realistic. Using functional programming idioms in real life programs can be a pain in the ass, and lead to convoluted callback-based scaffolding and weird data structures that you need to pass around a bunch of functions that don't really need them. The point is, you could isolate the pure data-manipulating parts of the program from the impure IO parts; and encapsulate the former in Error-catching blocks (which is convenient, since those parts are likely to be more convoluted and harder to foolproof than the IO parts, therefore likely to throw more Errors). 
Then if an Error occurs, you can close the connection to the client (maybe send them an error packet beforehand), close the database file descriptor, log an error message, etc.
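The pure-core / impure-shell split described above can be sketched like this (all names are hypothetical, and the parsing and analysis steps are trivial placeholders; the point is the shape, not the logic):

```d
import std.array : split, join;
import std.stdio : writeln;

// The data-manipulating core is pure, so under the proposal an
// Error thrown inside it could be caught without fear of torn
// global state.
string[] parseRequests(string raw) pure
{
    return raw.split(";");    // placeholder "parser" (pure)
}

string analyse(string[] results) pure
{
    return results.join(","); // placeholder "analysis" (pure)
}

void handleClient(string rawInput)
{
    // Impure I/O stays out here, in the thin shell.
    auto requests = parseRequests(rawInput); // pure step
    // ... send requests to the database here (impure) ...
    writeln(analyse(requests));              // pure step, impure output
}
```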
Re: DIP 1007--"future symbol"--Formal Review
On Wednesday, 7 June 2017 at 14:42:24 UTC, Mike Parker wrote: I'll be revising it a bit in the near future based on the lessons I've learned so far to clarify the process, but I want to keep this thread focused on feedback. Thanks! https://github.com/dlang/DIPs/blob/master/README.md Sorry, my question was poorly formulated. What I meant was that it would be nice if you gave us either a short explanation of what these processes are, or a link to one (which you just did, thank you!); and more importantly, it would be nice to do this systematically with each of these announcements :)
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 13:13:01 UTC, ketmar wrote: this still nullifies the sense of Error/Exception differences. not all errors are recoverable, even in @safe code. ... using wrappers and carefully checking preconditions looks better to me. after all, if programmer failed to check some preconditions, the worst thing to do is trying to hide that by masking errors. bombing out is *way* better, i believe, 'cause it forcing programmer to really fix the bugs instead of creating hackish workarounds. I don't think this is a workaround, or that it goes against the purpose of Errors. The goal would still be to bomb out, cancel whatever you were doing, print a big red error message to the coder / user, and exit. A program that catches an Error would not try to use the data that broke a contract; in fact, the program would not have access to the invalid data, since it would be thrown away. Its natural progression would be to log the error, and quit whatever it was doing. The point is, if the program needs to free system resources before shutting down, it could do so; or if the program is a server or a multi-threaded app dealing with multiple clients at the same time, those clients would not be affected by a crash unrelated to their data.
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 12:59:11 UTC, Moritz Maxeiner wrote: On Monday, 5 June 2017 at 12:01:35 UTC, Olivier FAURE wrote: Another problem is that non-gc memory allocated in the try block would be irreversibly leaked when an Error is thrown (though now that I think about it, that would probably count as impure and be impossible anyway). D considers allocating memory as pure[1]. ... Sure, but with regards to long running processes that are supposed to handle tens of thousands of requests, leaking memory (and continuing to run) will likely eventually end up brutally shutting down the process on out of memory errors. But yes, that is something that would have to be evaluated on a case by case basis. Note that in the case you describe, the alternative is either "Brutally shut down right now", or "Throw away some data, potentially some memory as well, and maybe brutally shut down later if that happens too often". (although in the second case, there is also the trade-off that the leaking program "steals" memory from the other routines running on the same computer) Anyway, I don't think this would happen. Most forms of memory allocation are impure, and wouldn't be allowed in a try {} catch(Error) block; C's malloc() is pure, but C's free() isn't, so the thrown Error wouldn't be skipping over any calls to free(). Memory allocated by the GC would be reclaimed once the Error is caught and the data thrown away. Arrays aside, I think there's some use in being able to safely recover from (or safely shut down after) the kind of broken contracts that throw Errors. I consider there to be value in allowing users to say "this is not a contract, it is a valid use case" (-> wrapper), but a broken contract being recoverable violates the entire concept of DbC. I half-agree. There *should not* be a way to say "Okay, the contract is broken, but let's keep going anyway". 
There *should* be a way to say "okay, the contract is broken, let's get rid of all data associated with it, log an error message to explain what went wrong, then kill *the specific thread/process/task* and let the others keep going". The goal isn't to ignore or bypass Errors, it's to compartmentalize the damage.
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 12:51:16 UTC, ag0aep6g wrote: On 06/05/2017 11:50 AM, Olivier FAURE wrote: In other words, @safe functions would be allowed to catch Error after try blocks if the block only mutates data declared inside of it; the code would look like: import vibe.d; // ... string handleRequestOrError(in HTTPServerRequest req) @safe { ServerData myData = createData(); try { ... } catch (Error) { throw new SomeException("Oh no, a system error occurred"); } } ... But `myData` is still alive when `catch (Error)` is reached, isn't it? Good catch; yes, this example would refuse to compile; myData needs to be declared in the try block. How does `@trusted` fit into this? The premise is that there's a bug somewhere. You can't assume that the bug is in a `@system` function. It can just as well be in a `@trusted` one. And then `@safe` and `pure` mean nothing. The point of this proposal is that catching Errors should be considered @safe under certain conditions; code that catches Errors properly would be considered as safe as any other code, which is, "as safe as the @trusted code it calls". I think the issue of @trusted is tangential to this. If you (or the writer of a library you use) are using @trusted to cast away pureness and then have side effects, you're already risking data corruption and undefined behavior, catching Errors or no catching Errors.
Re: DIP 1007--"future symbol"--Formal Review
On Wednesday, 7 June 2017 at 10:36:05 UTC, Mike Parker wrote: The first stage of the formal review for DIP 1007 [1], "'future symbol' Compiler Concept", is now underway. From now until 11:59 PM ET on June 21 (3:59 AM GMT on June 22), the community has the opportunity to provide last-minute feedback. If you missed the preliminary review [2], this is your chance to provide input. This has probably been said somewhere before, but could you give a short explanation of what the formal review is, and how it's supposed to be different from the preliminary review? Thank you in advance. Otherwise, the whole DIP seems pretty well thought-out to me. I particularly like the idea of a core-only @__future attribute, that can be replaced with a globally available @future attribute eventually.
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 10:59:28 UTC, Moritz Maxeiner wrote: Pragmatic question: How much work do you think this will require? Good question. I'm no compiler programmer, so I'm not sure what the answer is. I would say "probably a few days at most". The change is fairly self-contained, and built around existing concepts (mutability and @safety); I think it would mostly be a matter of adding a function to the safety checks that tests whether a mutable reference to non-local data is used in any try block with catch(Error). Another problem is that non-gc memory allocated in the try block would be irreversibly leaked when an Error is thrown (though now that I think about it, that would probably count as impure and be impossible anyway). Either way, it's not a safety risk and the programmer can decide whether leaking memory is worse than brutally shutting down for their purpose. Because writing a generic wrapper that you can customize the fault behaviour for using DbI requires very little. Using an array wrapper only covers part of the problem. Users may want their server to keep going even if they fail an assertion, or want the performance of @nothrow code, or use a library that throws RangeError in very rare and hard to pinpoint cases. Arrays aside, I think there's some use in being able to safely recover from (or safely shut down after) the kind of broken contracts that throw Errors.
Re: Concept proposal: Safely catching error
On Monday, 5 June 2017 at 10:09:30 UTC, ketmar wrote: tbh, i think that it adds Yet Another Exception Rule to the language, and this does no good in the long run. "oh, you generally cannot do that, except if today is Friday, it is rainy, and you've seen pink unicorn at the morning." the more exceptions to general rules language has, the more it reminds Dragon Poker game from Robert Asprin books. Fair enough. A few counterpoints: - This one special case is pretty self-contained. It doesn't require adding annotations (unlike, say, DIP PR #61*), won't impact code that doesn't use it, and the users most likely to hear about it are the ones who need to recover from Errors in their code. - It doesn't introduce elaborate under-the-hood tricks (unlike DIP 1008*). It uses already-existing concepts (@safe and @pure), and is in fact closer to the intuitive logic behind Error recovery than the current model; instead of "You can't recover from Errors" you have "You can't recover from Errors unless you flush all data that might have been affected by it". *Note that I am not making a statement for or against those DIPs. I'm only using them as examples to compare my proposal against. So yes, this would add feature creep to the language, but I'd argue that feature creep would be pretty minor and well-contained, and would probably be worth the problem it would solve.
Re: DIP 1003 (Remove body as a Keyword) Accepted!
On Friday, 2 June 2017 at 14:17:10 UTC, Mike Parker wrote: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1003.md The "See the previous version" link at the end of the document is currently broken and leads to a 404. Thank you for your efforts and congratulations to Jared Hanson!
Concept proposal: Safely catching error
I recently skimmed the "Bad array indexing is considered deadly" thread, which discusses the "array OOB throws Error, which throws the whole program away" problem. The gist of the debate is:

- Array OOB is a programming problem; it means an invariant is broken, which means the code surrounding it probably makes invalid assumptions and shouldn't be trusted.
- Also, it can be caused by memory corruption.
- But then again, anything can be caused by memory corruption, so it's kind of an odd thing to worry about. We should worry about not causing it, not about making memory-corrupted programs safe, since it's extremely rare and there's not much we can do about it anyway.
- But memory corruption is super bad; if a thrown error *might* be caused by memory corruption, then we must absolutely throw the potentially corrupted data away without using it.
- Besides, even without memory corruption, the same argument applies to broken invariants; if we have data that breaks invariants, we need to throw it away, and use it as little as possible.
- But sometimes we have very big applications with lots of data and lots of code. If my server deals with dozens of clients or more, I don't want to brutally disconnect them all because I need to throw away one user's data.
- This could be achieved with processes. Then again, using processes often isn't practical for performance or architecture reasons.

My proposal for solving these problems would be to explicitly allow catching Errors in @safe code IF the try block from which the Error is caught is perfectly pure. In other words, @safe functions would be allowed to catch Error after try blocks if the block only mutates data declared inside of it; the code would look like:

import vibe.d;
// ... 
string handleRequestOrError(in HTTPServerRequest req) @safe
{
    ServerData myData = createData();
    try {
        // both doSomethingWithData and mutateMyData are @pure
        doSomethingWithData(req, myData);
        mutateMyData(myData);
        return myData.toString;
    } catch (Error) {
        throw new SomeException("Oh no, a system error occurred");
    }
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res) @safe
{
    try {
        res.writeBody(handleRequestOrError(req), "text/plain");
    } catch (SomeException) {
        // Handle exception
    }
}

The point is, this is safe even when doSomethingWithData breaks an invariant or mutateMyData corrupts myData, because the compiler guarantees that the only data affected WILL be thrown away or otherwise inaccessible by the time catch(Error) is reached. This would allow designing applications that can fail gracefully when dealing with multiple independent clients or tasks, even when one of the tasks has to be thrown away because of a programmer error. What do you think? Does the idea have merit? Should I make it into a DIP?
Re: DIP 1007 Preliminary Review Round 1
On Wednesday, 26 April 2017 at 11:26:19 UTC, Steven Schveighoffer wrote: I'm wondering if you actually wrote this? It seems to be quoted. That was a quote from the DIP. (guess I should have used a colon)
Re: DIP 1007 Preliminary Review Round 1
On Tuesday, 25 April 2017 at 18:32:09 UTC, Steven Schveighoffer wrote: I missed this part. Now that I read that, I think we aren't going to gain much by having this language feature internal to the compiler. The way I understood it, the feature will only stay internal to the compiler until it's deemed "safe" for general use. If we want to limit usage, what I'd *rather* see is that the @future (or TBD attribute) only has special meaning inside object.d. Having to adjust a hard-coded compiler list seems like it will result in zero PR changes (save the one that prompted this DIP) that might give us any insight into whether this is a useful feature. The proposal does include something like that. Alternatively, if it proves to be simpler to implement, the feature can be added as an attribute with reserved double-underscore prefix (i.e. @__future) with compiler checks ensuring that it is not used outside of core.*/std.*/object modules. If that course of action is chosen, such an attribute should be defined in the core.attribute module of DRuntime.
Re: DIP 1006 - Preliminary Review Round 1
On Wednesday, 12 April 2017 at 17:16:33 UTC, H. S. Teoh wrote: Overall, I support the idea of this DIP. However, as others have mentioned, it needs to make it clear whether/how `-contracts=assert` here interacts with unittests. According to the discussion, apparently a different druntime function is used for asserts in unittests? If so, this needs to be clearly stated in the DIP. Agreed. The simplest behavior I could imagine for this switch is "override everything else". That is, no matter which other switch is used (release, unittests), the -contracts switch has the final say on which tests are enabled and which are suppressed.
Re: Exceptions in @nogc code
On Thursday, 6 April 2017 at 16:39:15 UTC, Andrei Alexandrescu wrote: On 4/6/17 9:05 AM, deadalnix wrote: There is this whole "you are ignoring others' ideas" and "we demand that this is listened to" that is sadly quite harmful. There is not ignoring as much as the difficulty on working on someone else's rough idea, while they simultaneously refuse to flesh it out. I'm not saying you're wrong, but there's a difference between saying "You should flesh out your idea" and "We're not going to respond formally before you submit a DIP".
Re: Walter and Andrei and community relationship management
On Thursday, 6 April 2017 at 07:24:28 UTC, Nick B wrote: I'm going to address this post to Walter and Andrei, as the joint captains of the D ship, so to speak. [...] For the community, it seems different rules apply. In-depth newsgroup discussions for new proposals are firstly encouraged and then later discouraged, with the ultimate response that the proposal MUST be in the form of a time-consuming DIP, to be considered, even if it will ultimately waste everyone's time, and cause resentment in the community. Agreed. I don't want to make any assumptions, and I do respect Walter for consistently taking on a role that means that people keep criticizing his choices whatever he does, but his approach to dealing with the community is undeniably flawed, and seems to be breeding a lot of frustration and resentment. My personal example is from a discussion we had in February about 'return scope', where Walter Bright asked deadalnix to explain his case, and explain the problems he saw. At the time, deadalnix (and other users) replied that they didn't want to make their cases, because they had already done so in the past, and they expected Walter to ignore whatever they would tell him. http://forum.dlang.org/post/o6h3re$26lo$1...@digitalmars.com I outlined several problems I saw with return scope, and Walter replied to my post, answering each point I made. And while it's commendable that Walter took the time to do it, those answers felt extremely frustrating to me; Walter did *not* address my points, and did not take what I was saying seriously. As an example, one of the problems I pointed out was: It only addresses cases where a reference might be escaped through a single return value; it doesn't address escaping through 'out' parameters, The following conversation ensued: Walter: Yes it does (the general case is storing a value into any data structure pointed to by an argument). Me: I don't understand. 
Let's say I have an arbitrary class `Container`, and I want a function that stores a pointer to an int in this container, in a way that lets the function's caller know that the `int*` given to it will last only as long as the container, and I want to do it without return values. The prototype would be akin to:

void store(ref Container cont, int* ptr);

And the code it would be used in would look like:

{
    scope Container c;
    scope int* ptr = ...;
    store(c, ptr);
}

What would the syntax be?

Walter: c.ptr = ptr; You can also do: ref Container store(ref return scope Container c, return scope int* ptr);

The rest of the conversation basically went like this:

Me: This isn't possible, or if it is, it shouldn't be.
Walter: Yes it is. It compiles.
Me: Okay, but it shouldn't compile, because it makes [invalid write error] possible.
Walter: Well, it doesn't compile with @safe.
Me: Yes, it does compile with @safe, and no, it shouldn't, and my point from the beginning was that your model made that kind of function impossible. Why do you think we're even talking about this?

In short, Walter asked people to give their opinions on the subject; but when I did give mine, Walter did not take my points seriously, and basically assumed that the only reason I disagreed with him was that I didn't understand the subject as well as he did. Other people (including Dicebot) have complained about that. This was a very frustrating experience, and I did not want to participate in the discussions about Dlang any further after that.

Look, again, I feel bad trash-talking Walter. He's putting a lot of effort into this. But he's clearly really, really bad at listening to other people. This has to be addressed at some point.
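For comparison, Rust expresses this "store a borrow through a parameter, no return value" case directly in the function signature, by giving the container a lifetime parameter. This is a minimal sketch; the `Container` and `store` names just mirror the hypothetical example above, not any real library:

```rust
// A container that may hold a borrowed reference; the lifetime
// parameter 'a records how long the stored borrow must stay valid.
struct Container<'a> {
    slot: Option<&'a i32>,
}

// No return value needed: the signature itself ties the lifetime
// of `ptr` to the container it is stored into.
fn store<'a>(cont: &mut Container<'a>, ptr: &'a i32) {
    cont.slot = Some(ptr);
}

fn main() {
    let x = 5;
    let mut c = Container { slot: None };
    store(&mut c, &x);
    // The compiler now knows `x` must outlive `c`; dropping or
    // moving `x` while `c` is alive would be rejected.
    println!("{}", c.slot.unwrap());
}
```

The caller-visible guarantee ("the int* lasts only as long as the container") lives entirely in the signature, which is the property the store() example above asks for.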
Re: memcpy() comparison: C, Rust, and D
On Tuesday, 31 January 2017 at 11:32:01 UTC, John Colvin wrote: On Tuesday, 31 January 2017 at 10:17:09 UTC, Olivier FAURE wrote: On Tuesday, 31 January 2017 at 01:30:48 UTC, Walter Bright wrote: Point 3 is about `const`, which as far as I know is unaffected by application of @safe. Did you mean to quote a different point? Oh yeah, I thought it was about scope. Makes sense then.
Re: memcpy() comparison: C, Rust, and D
On Tuesday, 31 January 2017 at 10:00:03 UTC, Stefan Koch wrote: On Tuesday, 31 January 2017 at 09:31:23 UTC, Nordlöw wrote: How can we be sure that the return value points to the same content as `s1`? Because of the return attribute. return means I am passing this value through myself. I thought it meant "the parameter can be returned" not "the parameter *will* be returned".
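A rough analogue from Rust may help here: an elided lifetime in a signature carries exactly the "the result *may* be derived from this parameter" meaning, not "will be". The `first_word` function below is just an illustration:

```rust
// The elided signature means fn first_word<'a>(s: &'a str) -> &'a str:
// the caller must assume the result may borrow from `s`, even though
// one code path returns a substring and another returns a static "".
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let text = String::from("scope checking in D");
    let w = first_word(&text);
    assert_eq!(w, "scope");
    // `text` cannot be mutated or dropped while `w` is alive,
    // regardless of whether `w` actually points into it.
}
```

In other words, the annotation is a conservative upper bound on where the return value might come from, which matches the "can be returned, not will be returned" reading.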
Re: memcpy() comparison: C, Rust, and D
On Tuesday, 31 January 2017 at 01:30:48 UTC, Walter Bright wrote: 3. Nothing s2 transitively points to is altered via s2. Wait, really? Does that mean that this code is implicitly illegal?

import core.stdc.string;

void main() {
    int*[10] data1;
    int*[10] data2;
    memcpy(data1.ptr, data2.ptr, data1.sizeof);
}

Since memcpy is @system, I have no way to know for sure (the compiler obviously won't warn me, since I can't mark main as @safe), so I'd argue the prototype doesn't carry that information.
Re: `in` no longer same as `const ref`
On Monday, 30 January 2017 at 13:57:10 UTC, Adam D. Ruppe wrote: On Monday, 30 January 2017 at 00:26:27 UTC, Walter Bright wrote: I was afraid that by checking it, too much code would break. Code that was using it improperly was *already* broken. Now, the compiler will simply tell them why, at compile time, instead of silently accepting undefined behavior.

Well, it's a trade-off. Some people would rather their project with potentially broken code not stop compiling just because they upgraded their compiler. Although I guess you could solve this by having -dip1000 emit only warnings, and no errors, until the adaptation period has passed.
Re: `in` no longer same as `const ref`
On Monday, 30 January 2017 at 06:38:11 UTC, Jonathan M Davis wrote: Personally, I think that effectively having an alias for two attributes in a single attribute is a confusing design decision anyway, and think that it was a mistake, but we've had folks slapping `in` on stuff for years with no enforcement, and flipping the switch on that would likely not be pretty. - Jonathan M Davis

I've always thought of `in` as a visual shorthand for "this parameter doesn't care whether you give it a deep copy or a shallow reference", personally.
Re: Release D 2.073.0
Continuing on a new thread because this is getting kinda off-topic. http://forum.dlang.org/post/jhtvuvhxsayjatsdb...@forum.dlang.org
Re: Release D 2.073.0
On Saturday, 28 January 2017 at 22:31:23 UTC, Walter Bright wrote:

It only addresses cases where a reference might be escaped through a single return value; it doesn't address escaping through 'out' parameters,

Yes it does (the general case is storing a value into any data structure pointed to by an argument).

I don't understand. Let's say I have an arbitrary class `Container`, and I want a function that stores a pointer to an int in this container, in a way that lets the function's caller know that the `int*` given to it will last only as long as the container, and I want to do it without return values. The prototype would be akin to:

void store(ref Container cont, int* ptr);

And the code it would be used in would look like:

{
    scope Container c;
    scope int* ptr = ...;
    store(c, ptr);
}

What would the syntax be?

or through a returned tuple.

Yes it does (it's as if the tuple elements were fields of a struct).

I meant something a little more specific. You have no way to do this:

Pair!(int*, float*) makePair(int*, float*);

You can declare them both `return scope`, but then their scopes are "merged" into the return value, which may be undesirable if you want to treat them differently. Although it's not that important, because this particular case would rarely appear in actual practical code, unlike swap and out parameters.

There will be a need for @system code for some things, that is correct. That's also true of Rust, where cyclic data structures have to be marked as unsafe, and functions cannot access mutable global data.

Yeah, but cyclic data structures and complex types are one thing. I'm just talking about having a (scope int*)[] type and a swap function. Those should be covered by the scope system, and shouldn't need GC or RC.

Nobody has come up with a better plan. A Rust-like system would require users to not just add annotations, but redesign all their code and data structures. It's out of the question. There has been off and on for about 10 years. Little has come of it.
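For what it's worth, both cases above (a returned pair whose elements should keep distinct scopes, and swap) fall out of the signature in Rust; a minimal sketch with illustrative names:

```rust
use std::mem;

// Each element of the returned tuple keeps its own lifetime, so the
// caller can treat them differently -- the "merged scope" problem the
// makePair example runs into does not arise.
fn make_pair<'a, 'b>(x: &'a i32, y: &'b f32) -> (&'a i32, &'b f32) {
    (x, y)
}

fn main() {
    let i = 3;
    let f = 1.5f32;
    let (pi, pf) = make_pair(&i, &f);
    assert_eq!(*pi, 3);
    assert_eq!(*pf, 1.5);

    // Swap is just a safe library function over two &mut T;
    // no @trusted escape hatch is needed.
    let mut a = 1;
    let mut b = 2;
    mem::swap(&mut a, &mut b);
    assert_eq!((a, b), (2, 1));
}
```

Whether that expressiveness is worth the annotation burden Walter objects to is of course the whole debate; the sketch only shows that these two specific cases are representable without unsafe code.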
Then came -dip25, which addressed the return ref problem. It worked surprisingly well. There was some severe criticism of it last year as being unusable, but those turned out to be implementation bugs that were not difficult to resolve.

Well, I liked Schütz's original proposal. It seemed easier to formalize, and didn't require major redesigns the way Rust's templates do. And having read the thread it was proposed in... I didn't see any brainstorming? It seems to me that dip25 replaced Schütz's proposal without transition, without debate about the merits and trade-offs of either proposition, and without any rationale explaining why Schütz's proposal was abandoned.

It's fair if you don't agree with my arguments, but that isn't the same as not addressing them at all.

I believe I have addressed the issues you brought up here. If you'd like further clarification, please ask.

I'll probably have more to say on that later, but I think THIS is the major point of contention. I don't feel like you've addressed my concerns, because I'm pretty sure you haven't understood my concerns. You interpreted my remarks as obstacles to be overcome, not as information I was trying to communicate to you. I feel like your opinion is that the only reason I'm arguing against dip1000 is that I don't understand it or its context well enough. I feel you have the same opinion of other people who argue against the DIP. This is what I meant by "not being taken seriously".

I didn't write that.

Sorry, I was replying to another poster above in the thread. I'm not used to mailing-list forums.
Re: Release D 2.073.0
On Saturday, 28 January 2017 at 03:40:43 UTC, Walter Bright wrote: If you've got a case, make it. If you see problems, explain. If you want to help, please do.

For what it's worth, here are my problems with 'return scope':

- As far as I can tell, it's not properly documented. The github page for DIP-1000 is apparently pending a rewrite, and I can't find a formal definition of 'return scope' anywhere (the D reference 'Functions' page only mentions 'return ref', and only in passing).

- I personally don't like using the return keyword as anything but an instruction; when I'm reading code, I can get a good feel for the code's flow just by looking at the indentation, the if/while/for blocks, and the break/throw/return instructions. I'll be the first to admit this one is fairly minor, though.

- It's an obvious monkey patch, and it will clearly have to be replaced at some point. It only addresses cases where a reference might be escaped through a single return value; it doesn't address escaping through 'out' parameters, or through a returned tuple.

- It fails to enable useful features like the swap function, or storing a scoped value in a container (well, outside of @trusted code, but that's beside the point).

- Because it isn't an integral part of the type system, but more of an external addition, it has a ton of special cases and little gotchas when you try to do something complex with it (ref parameters can't be scope, you can't have pointers to scope values, which means you can't pass any kind of scope value by reference, you can't use scope types as template parameters, etc.).

The last two feel like the most important problems to me. If all you want to do is variants of the identity function, and returning references to attributes, then return ref and return scope are everything you need (arguably).
If you want to store containers of scoped values, swap scope values, and generally treat scope as a first-class citizen, then return scope and return ref seem like a step in the wrong direction.

The meta-problem people seem to have with 'return scope' is more of a social one. Apparently a lot of people feel like you haven't treated their concerns seriously; part of it is that, as far as I'm aware, there hasn't been a proper, open brainstorming on how to address lifetime analysis in D.

My reading of the situation, which may be completely off-base, is that you took inspiration from Marc Schütz's proposal and wrote something simpler, easier to understand and to code with, following the model you developed when coming up with inout (no templates, KISS, don't spend too much of the language's complexity budget on an optional feature), then entered a cycle of gradually improving it, eventually producing DIP-1000. People who don't like the direction DIP-1000 is going are upset because they feel way too much effort is going towards refining an idea they don't agree with in the first place. To speak bluntly, I don't think you've addressed their concerns at all, and I hope you do so before 'return scope' is set in stone.

So, do what numerous people have done numerous times already, to no great effect?

Please don't be hostile. When you have a communication problem, being passive-aggressive will only make it worse.