Re: "I made a game using Rust"
On Friday, 12 May 2017 at 02:43:17 UTC, evilrat wrote:
> I use just dub and the generated Visual D project. Well, in my case the problem is that the engine is built as a static lib, and there is not much I can do with this. I've started moving things around and turned the lib into an executable; at least now build time is cut in half, down to 20-22 sec. And speaking about build time I mean exactly that, compile+link. Compiling without linking is just 10-12 sec. That's really good for a single-threaded build!

Ah okay. If I understand correctly, the "game" itself is just the two .d files in the Scripts folder, which get compiled then linked with the prebuilt .lib for the engine. If so, a 10-12s compile just for those two files still sounds really long to me. Either I'm misunderstanding, or there's something bizarre going on that's causing those build times.

I'm using derelict heavily in my project, and therefore most of my libraries live in DLLs. I've run into some annoyances with this; for example, bugs in the derelict interface to the DLL, either existing ones or ones I introduce myself, can be very difficult to detect and debug, since there's no type checking across the DLL boundary. But one huge advantage is that the build time overhead for these libraries is almost zero.

> Everything I said about debugging was related to D, and on Windows Visual D specifically. C++ has had it for ages.

Ah, my apologies, I misunderstood. Agreed, the current debugging experience for D leaves much to be desired, unfortunately.

> I see. This could even be a benefit for low-level stuff like the renderer or physics system, especially when keeping the required data in a single large array. But for the game part itself this is a no-go, not for business at least.

Yeah, I agree that this limitation sucks. Even if I try to avoid using classes, there are still times when I'm forced into it. For example, Thread is a class. I have to be very cautious when working with an instance of Thread, or I risk bizarre crashes due to mismatched typeinfo.
Re: Processing a gzipped csv-file by line-by-line
On Friday, 12 May 2017 at 00:18:47 UTC, H. S. Teoh wrote:
> On Wed, May 10, 2017 at 11:40:08PM +0000, Jesse Phillips via Digitalmars-d-learn wrote:
> [...]
>> H.S. Teoh mentioned fastcsv but it requires all the data to be in memory.
>
> Or you could use std.mmfile. But if it's decompressed data, then it would still need to be small enough to fit in memory. Well, in theory you *could* use an anonymous mapping for std.mmfile as an OS-backed virtual memory buffer to decompress into, but it's questionable whether that's really worth the effort.
>
>> If you can get the zip to decompress into a range of dchar then std.csv will work with it. It is by far not the fastest, but much speed is lost since it supports input ranges and doesn't specialize on any other range type.
>
> I actually spent some time today to look into whether fastcsv can possibly be made to work with general input ranges as long as they support slicing... and immediately ran into the infamous autodecoding issue: strings are not random-access ranges because of autodecoding, so it would require either extensive code surgery to make it work, or ugly hacks to bypass autodecoding. I'm quite tempted to attempt the latter, in fact, but not now, since it's getting busier at work and I don't have that much free time to spend on a major refactoring of fastcsv.
>
> Alternatively, I could possibly hack together a version of fastcsv that took a range of const(char)[] as input (rather than a single string), so that, in theory, it could handle arbitrarily large input files as long as the caller can provide a range of data blocks, e.g., File.byChunk, or in this particular case, a range of decompressed data blocks from whatever decompressor is used to extract the data. As long as you consume the individual rows without storing references to them indefinitely (don't try to make an array of the entire dataset), fastcsv's optimizations should still work, since unreferenced blocks will eventually get cleaned up by the GC when memory runs low.
>
> T

I hacked your code to work with std.experimental.allocator. If I remember correctly, it was a fair bit faster for my use. Let me know if you would like me to tidy it up into a pull request. Thanks for the library.

Also - sent you an email. Not sure if you got it.

Laeeth
Re: Fantastic exchange from DConf
On Thursday, 11 May 2017 at 15:53:40 UTC, Jonathan M Davis wrote:
> On Monday, May 08, 2017 23:15:12 H. S. Teoh via Digitalmars-d wrote:
>> Recently I've had the dubious privilege of being part of a department-wide push on the part of my employer to audit our codebases (mostly C, with a smattering of C++ and other code, all dealing with various levels of network services and running on hardware expected to be "enterprise" quality and "secure") and fix security problems and other such bugs, with the help of some static analysis tools. I have to say that even given my general skepticism about the quality of so-called "enterprise" code, I was rather shaken not only to find lots of confirmation of my gut feeling that there are major issues in our codebase, but even more by just HOW MANY of them there are.
>
> In a way, it's amazing how successful folks can be with software that's quite buggy. A _lot_ of software works just "well enough" that it gets the job done but is actually pretty terrible. And I've had coworkers argue to me before that writing correct software really doesn't matter - it just has to work well enough to get the job done. And sadly, to a great extent, that's true. However, writing software that works just "well enough" does come at a cost, and if security is a real concern (as it increasingly is), then that sort of attitude is not going to cut it. But since the cost often comes later, I don't think that it's at all clear that we're going to really see a shift towards languages that prevent such bugs. Up-front costs tend to have a powerful impact on decision making - especially when the cost that could come later is theoretical rather than guaranteed.
>
> Now, given that D is also a very _productive_ language to write in, it stands to reduce up-front costs as well, and that combined with its ability to reduce the theoretical security costs, we could have a real win, but with how entrenched C and C++ are and how much many companies are geared towards not caring about security or software quality so long as the software seems to get the job done, I think that it's going to be a _major_ uphill battle for a language like D to really gain mainstream use on anywhere near the level that languages like C and C++ have. But for those who are willing to use a language that makes it harder to write code with memory safety issues, there's a competitive advantage to be gained.
>
> - Jonathan M Davis

D wasn't ready for mainstream adoption until quite recently, I think. The documentation for Phobos when I started looking at D in 2014 was perfectly clear if you were more theoretically minded, but not for other people. In a previous incarnation I tried to get one trader who writes Python to look at D, and he was terrified of it because of the docs. And I used to regularly have compiler crashes, and ldc was always too far behind dmd. If you wanted to find commercial users, there didn't seem to be so many, and it was hard to point to successful projects in D that people would have heard of or could recognise - at least not enough of them. Perception has threshold effects and isn't linear. There wasn't that much on the numerical front either. The D Foundation didn't exist, and Andrei played superhero in his spare time.

All that's changed now, in every respect. I can point to the documentation and say we should have docs like that, with runnable tests/examples. Most code builds fine with ldc, there are plenty of numerical libraries - thanks, Ilya - and perception is quite different about commercial successes. Remember that what's really just incremental in reality can be a step change in perception.

I don't think the costs of adopting D are tiny upfront. Putting aside the fact that people expect better IDE support than we have, and that we have quite frequent releases (not a bad thing, but it's where we are in maturity) with some of them a bit unfinished and others breaking things for good reasons, build systems are not that great even for middling projects (200k sloc). Dub is an amazing accomplishment for Sonke as one of many part-time projects, but it's not yet so mature as a build tool.

We have extern(C++), which is great, and no other language has it. But that's not the same thing as saying it's trivial to use a C++ library from D (and I don't think it's yet mature bug-wise). No STL yet. Even for C, compare the steps involved vs LuaJIT's FFI. Dstep is a great tool, but not without some friction, and it only works for C. So one should expect to pay a price with all of this, and I think most of the price is upfront (also because you might want to wrap the libraries you use most often). And the price is paid by having to deal with things people often take for granted, so even if it's small in the scheme of things, it's more noticeable.

A community needs energy coming into it to grow, but if there were too quick an influx of newcomers, that wouldn't be good either.
Re: On Andrei's Keynote / checkedint
Hi, I can't find the video for Andrei's talk at https://www.youtube.com/playlist?list=PL3jwVPmk_PRxo23yyoc0Ip_cP3-rCm7eB Can you provide a link? I'm looking forward to watching it! Thanks!
Re: "I made a game using Rust"
On Thursday, 11 May 2017 at 17:38:26 UTC, Lewis wrote:
> On Thursday, 11 May 2017 at 03:17:13 UTC, evilrat wrote:
>> I have played recently with one D game engine and the result was frustrating. My compile time was about 45 sec!
>
> Interesting. What game engine were you using? To me this sounds like a problem in the build process. DMD isn't a build system and doesn't handle build management, incremental builds, or anything else like that. You'll need an external tool (or roll a python script like I did). At the end of the day, you hand a bunch of files to DMD to build, and it spits out one or more exe/dll/lib/obj. This process for me has been quite fast, even considering that I'm pretty much rebuilding the entire game (minus libs and heavy templates) every time.
>
> My python script basically separates the build into four parts, and does a sort of poor man's coarse incremental build with them. The four parts are:
> - D libs
> - Heavy templates
> - Game DLL
> - Game EXE (which is pretty much just one file that loads the DLL then calls into it)
> For example, if a lib changes, I rebuild everything. But if a file in the Game DLL changes, I only rebuild the game DLL.

I use just dub and the generated Visual D project. Well, in my case the problem is that the engine is built as a static lib, and there is not much I can do with this. I've started moving things around and turned the lib into an executable; at least now build time is cut in half, down to 20-22 sec. And speaking about build time I mean exactly that, compile+link. Compiling without linking is just 10-12 sec. That's really good for a single-threaded build!

As for the engine... Here, take a look. https://github.com/Superbelko/Dash And you also need the "game" itself; add it to the engine as a submodule. https://github.com/Circular-Studios/Sample-Dash-Game

There is no sane x64 debugging on Windows. Structs don't show at all, and that's just the top of the list...

> In C++, I've generally had a very good experience with the visual studio debugger, both with x86 and x64. When I program C++ at home, literally the only thing I use visual studio for is the debugger (the rest of the program is pretty bloated and I use almost none of the other features). When you debugged on x64 in windows, what debugger were you using? Even back in 2011 things were good enough that I could see into structs :)

Everything I said about debugging was related to D, and on Windows Visual D specifically. C++ has had it for ages.

>> How did you manage using classes from the DLL?
>
> I pretty much don't. If a class is created in the DLL from a class defined in the DLL and is never touched by the EXE, things seem fine. But I don't let classes cross the EXE/DLL boundary, and even then I keep my usage of classes to a bare minimum. Thankfully, though, my programming style is fairly procedural anyway, so it's not a huge loss for me personally.

I see. This could even be a benefit for low-level stuff like the renderer or physics system, especially when keeping the required data in a single large array. But for the game part itself this is a no-go, not for business at least.

The real issue is that you can pass classes both ways, but any casts will fail due to missing type info. I have not tested it myself, but they say on Linux (only) it works as it should. So this problem has to be resolved as soon as possible, for the sake of D's future.
Re: Fantastic exchange from DConf
On 05/11/2017 10:20 PM, Nick Sabalausky (Abscissa) wrote:
> On 05/10/2017 02:28 AM, H. S. Teoh via Digitalmars-d wrote:
>> I'm on the fence about the former. My current theory is that being forced to write "proper" code even while refactoring actually helps the quality of the resulting code.
>
> I find anything too pedantic to be an outright error will *seriously* get in my way and break my workflow on the task at hand when I'm dealing with refactoring, debugging, playing around with an idea, etc., if I'm required to compulsively "clean them all up" at every little step along the way

Another thing to keep in mind is that deprecations are nothing more than a special type of warning. If code must be either "error" or "non-error" with no in-between, then that rules out deprecations. They would be forced to either become fatal errors (thus defeating the whole point of keeping an old symbol around marked as deprecated) or go away entirely.
Re: Fantastic exchange from DConf
On 05/10/2017 02:28 AM, H. S. Teoh via Digitalmars-d wrote:
> I'd much rather the compiler say "Hey, you! This piece of code is probably wrong, so please fix it! If it was intentional, please write it another way that makes that clear!" - and abort with a compile error.

In the vast majority of cases, yes, I agree. But I've seen good ideas of useful heads-ups the compiler *could* provide get shot down in favor of silence, because making it an error would, indeed, be a pedantic pain. As I see it, an argument against warnings is an argument against lint tools. And lint messages are *less* likely to get heeded, because the user has to actually go ahead and bother to install and run them. That puts me strongly in the philosophy of "Code containing warnings: allowed while compiling, disallowed when committing (with allowances for mitigating circumstances)."

> I'm on the fence about the former. My current theory is that being forced to write "proper" code even while refactoring actually helps the quality of the resulting code.

I find anything too pedantic to be an outright error will *seriously* get in my way and break my workflow on the task at hand when I'm dealing with refactoring, debugging, playing around with an idea, etc., if I'm required to compulsively "clean them all up" at every little step along the way (it'd be like working with my mother hovering over my shoulder...). And that's been the case even for things I would normally want to be informed of. Dead/unreachable code and unused variables are two examples that come to mind.

> The problem is that it's not enforced by the compiler, so *somebody* somewhere will inevitably bypass it.

I never understood the "some people ignore it, therefore it's good to remove it and prevent anyone else from ever benefiting" line of reasoning. I don't want all "caution" road signs ("stop sign ahead", "hidden driveway", "speed limit decreases ahead", etc.) ripped out of the ground and tossed just because there are some jackasses who ignore them and cause trouble. Bad things happen when people ignore road signs, and they do ignore road signs, therefore let's get rid of road signs? That wouldn't make any shred of sense, would it? It's the same thing here: I'd rather have somebody somewhere bypass that enforcement than render EVERYONE completely unable to benefit from it, ever. When the compiler keeps silent about a code smell instead of emitting a warning, that's exactly the same as emitting a warning but *requiring* that *everybody* *always* ignores it. "Sometimes" missing a heads-up is better than "always" missing it. C/C++ doesn't demonstrate that warnings are doomed to be useless and "always" ignored. What it demonstrates is that warnings are NOT an appropriate strategy for fixing language problems.

> Point. I suppose YMMV, but IME unless warnings are enforced with -Werror or equivalent, after a while people just stop paying attention to them, at least where I work.

So nobody else should have the opportunity to benefit from them? Because that's what the alternative is. As soon as we buy into the "error" vs "totally ok" false dichotomy, we start hitting (and this is exactly what did happen in D many years ago) cases where a known code smell is too pedantic to be justifiable as a build-breaking error. So if we buy into the "error/ok" dichotomy, those code smells are forced into the "A-OK!" bucket, guaranteeing that nobody benefits. Those "X doesn't fit into the error vs ok dichotomy" realities are exactly why DMD wound up with a set of warnings despite Walter's philosophical objections to them.

> That's why my eventual conclusion is that anything short of enforcement will ultimately fail. Unless there is no way you can actually get an executable out of badly-written code, there will always be *somebody* out there that will write bad code. And by Murphy's Law, that somebody will eventually be someone in your team, and chances are you'll be the one cleaning up the mess afterwards. Not something I envy doing (I've already had to do too much of that).

And when I am tasked with cleaning up that bad code, I *really* hope it's from me being the only one to read the warnings, and not because I just wasted the whole day tracking down some weird bug, only to find it was caused by something the compiler *could* have warned me about, but chose not to because the compiler doesn't believe in warnings out of fear that somebody, somewhere might ignore it.
Re: Fantastic exchange from DConf
On 05/11/2017 11:53 AM, Jonathan M Davis via Digitalmars-d wrote:
> In a way, it's amazing how successful folks can be with software that's quite buggy. A _lot_ of software works just "well enough" that it gets the job done but is actually pretty terrible. And I've had coworkers argue to me before that writing correct software really doesn't matter - it just has to work well enough to get the job done. And sadly, to a great extent, that's true. However, writing software that works just "well enough" does come at a cost, and if security is a real concern (as it increasingly is), then that sort of attitude is not going to cut it. But since the cost often comes later, I don't think that it's at all clear that we're going to really see a shift towards languages that prevent such bugs. Up-front costs tend to have a powerful impact on decision making - especially when the cost that could come later is theoretical rather than guaranteed.
>
> Now, given that D is also a very _productive_ language to write in, it stands to reduce up-front costs as well, and that combined with its ability to reduce the theoretical security costs, we could have a real win, but with how entrenched C and C++ are and how much many companies are geared towards not caring about security or software quality so long as the software seems to get the job done, I think that it's going to be a _major_ uphill battle for a language like D to really gain mainstream use on anywhere near the level that languages like C and C++ have. But for those who are willing to use a language that makes it harder to write code with memory safety issues, there's a competitive advantage to be gained.

All very, unfortunately, true. It's like I say: the tech industry isn't engineering, it's fashion. There is no meritocracy here, not by a long shot. In tech, what's popular is right and what's right is popular, period.
Re: dmd: can't build on Arch Linux or latest Ubuntu
On Thursday, May 11, 2017 22:16:22 Joseph Rushton Wakeling via Digitalmars-d wrote: > On Wednesday, 10 May 2017 at 11:51:03 UTC, Atila Neves wrote: > > So I went "I know, I'll just use a container". I tried Ubuntu > > Zesty in docker. That doesn't build dmd off the bat either, it > > fails with PIC errors. > > Have you tried adding `PIC=-fPIC` when you invoke `make`? As I understand it, it's PIC=1 that you need. - Jonathan M Davis
Re: Fantastic exchange from DConf
On 05/10/2017 08:06 AM, Patrick Schluter wrote:
> On Wednesday, 10 May 2017 at 06:28:31 UTC, H. S. Teoh wrote:
>> On Tue, May 09, 2017 at 09:19:08PM -0400, Nick Sabalausky [...]
>> Perhaps I'm just being cynical, but my current unfounded hypothesis is that the majority of C/C++ programmers ...
>
> Just a nitpick, could we also please stop conflating C and C++ programmers? My experience is that C++ programmers are completely clueless when it comes to C programming. They think they know C, but it's generally far off. The thing is that C has evolved with C99 and C11, and the changes have not all been adopted by C++ (and Microsoft actively stalling the adoption of C99 in Visual C didn't help either).

I wouldn't know the difference all that well anyway. Aside from a brief stint playing around with the Marmalade engine, the last time I was still really using C *or* C++ was back when C++ *did* mean little more than "C with classes" (and there was this new "templates" thing that was considered best avoided for the time being, because all the implementations were known buggy). I left them when I could tell the complexity of getting things done (in either) was falling way behind the modern curve, and there were other languages which offered sane productivity without completely sacrificing low-level capabilities.
Re: The cost of doing compile time introspection
On Thursday, 11 May 2017 at 21:09:05 UTC, Timon Gehr wrote:
> [...] Yes, this works and is a few times faster. It's slightly faster when inlining the condition:
>
> static foreach(fn;__traits(allMembers, functions)){
>     static if (isFunction!(__traits(getMember, functions, fn))
>         && (functionLinkage!(__traits(getMember, functions, fn)) == "C"
>             || functionLinkage!(__traits(getMember, functions, fn)) == "Windows")){
>         mixin("typeof(functions."~fn~")* "~fn~";");
>     }
> }
>
> With the DMD debug build, I measured the following times on my machine:
>
> Baselines:
> just imports: 0m0.318s
> copy-pasted generated code after printing it with pragma(msg, ...): 0m0.341s
>
> Compile-time code generation:
> old version: 0m2.569s
> static foreach, uninlined: 0m0.704s
> static foreach, inlined: 0m0.610s
>
> Still not great, but a notable improvement. isFunction and functionLinkage are slow, so I got rid of them (as well as the dependency on std.traits):
>
> static foreach(fn;__traits(allMembers, functions)){
>     static if(fn != "object" && fn != "llvm" && fn != "orEmpty"):
>     mixin("typeof(functions."~fn~")* "~fn~";");
> }
>
> timing: 0m0.350s
>
> (This is not perfect, as you'll need to edit the list in case you are adding more non-C-function members to that module, but I guess it is a good trade-off.)
>
> You can achieve essentially the same using a string mixin:
>
> mixin({
>     string r;
>     foreach(fn;__traits(allMembers, functions))
>         if(fn != "object" && fn != "llvm" && fn != "orEmpty")
>             r~="typeof(functions."~fn~")* "~fn~";";
>     return r;
> }());
>
> timing: 0m0.370s
>
> In case the original semantics should be preserved, I think this is the best option:
>
> mixin({
>     string r;
>     foreach(fn;CFunctions!functions)
>         r~="typeof(functions."~fn~")* "~fn~";";
>     return r;
> }());
>
> timing: 0m0.740s

Thank you for the detailed comparison.
I have applied your optimizations (with minor refactoring that did not impact compile time for me) and ended up with this (sorry for some name changes, wasn't happy with my original ones):

---
import link = llvm.functions.link;

bool isSym(string m)
{
    return m != "object" && m != "llvm" && m != "orEmpty";
}

string declareSymPtr(string m)
{
    return "typeof(link." ~ m ~ ")* " ~ m ~ ";";
}

string getSymPtr(string m)
{
    return m ~ " = library.getSymbol!(typeof(" ~ m ~ "))(\"" ~ m ~ "\");";
}

mixin ({
    string code;
    foreach (m; __traits(allMembers, link))
        if (m.isSym) { code ~= m.declareSymPtr; }
    return code;
}());

public struct LLVM
{
    static void getSymbols()
    {
        foreach (m; __traits(allMembers, link))
            static if (m.isSym) { mixin (m.getSymPtr); }
    }
}
---

I am not particularly happy about isSym having to do a name-based blacklist approach instead of a type-based whitelist approach, though.

With this I'm at least out of the "OMG why is it still compiling" range, and thank you to everyone for that. It's still not in the ideal range of < 100 milliseconds, but I'll take what I can get.
[Issue 17138] Warn about superfluous "with" statements
https://issues.dlang.org/show_bug.cgi?id=17138

Walter Bright changed:
  What |Removed |Added
  CC   |        |bugzi...@digitalmars.com

--- Comment #1 from Walter Bright ---
What does the declaration of someObject look like?
--
[Issue 17156] Local function declaration not inferred to be static
https://issues.dlang.org/show_bug.cgi?id=17156

Walter Bright changed:
  What       |Removed |Added
  Status     |NEW     |RESOLVED
  CC         |        |bugzi...@digitalmars.com
  Resolution |---     |INVALID

--- Comment #1 from Walter Bright ---
The trouble is this:

  uint g() { return 5; }
  ...
  uint delegate() d = &g;

Your proposal would cause that to fail. Inference is done for template 'a' because the assignment is part of the expression. But for the 'g' case, there may be intervening code of this sort:

  uint g() { return 5; }
  uint function() c = &g;
  uint delegate() d = &g;

'g' cannot be both a function and a delegate. So the simple rule is 'static' being there or not sets it to be a function pointer or a delegate. This is consistent with other uses of 'static'. This is working as designed. Not a bug.
--
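To illustrate the rule the comment above describes, here is a minimal sketch (the names and values are mine, not from the issue): a non-static local function captures the enclosing frame, so its address is a delegate, while a static local function has no context pointer and yields a plain function pointer.

```
void main()
{
    int x = 5;

    uint g() { return x; }        // captures x  -> &g is a delegate
    static uint h() { return 5; } // no context  -> &h is a function pointer

    uint delegate() d = &g;
    uint function() f = &h;

    assert(d() == 5 && f() == 5);
}
```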
Re: Processing a gzipped csv-file by line-by-line
On Wed, May 10, 2017 at 11:40:08PM +0000, Jesse Phillips via Digitalmars-d-learn wrote:
[...]
> H.S. Teoh mentioned fastcsv but requires all the data to be in memory.

Or you could use std.mmfile. But if it's decompressed data, then it would still need to be small enough to fit in memory. Well, in theory you *could* use an anonymous mapping for std.mmfile as an OS-backed virtual memory buffer to decompress into, but it's questionable whether that's really worth the effort.

> If you can get the zip to decompress into a range of dchar then std.csv will work with it. It is by far not the fastest, but much speed is lost since it supports input ranges and doesn't specialize on any other range type.

I actually spent some time today to look into whether fastcsv can possibly be made to work with general input ranges as long as they support slicing... and immediately ran into the infamous autodecoding issue: strings are not random-access ranges because of autodecoding, so it would require either extensive code surgery to make it work, or ugly hacks to bypass autodecoding. I'm quite tempted to attempt the latter, in fact, but not now, since it's getting busier at work and I don't have that much free time to spend on a major refactoring of fastcsv.

Alternatively, I could possibly hack together a version of fastcsv that took a range of const(char)[] as input (rather than a single string), so that, in theory, it could handle arbitrarily large input files as long as the caller can provide a range of data blocks, e.g., File.byChunk, or in this particular case, a range of decompressed data blocks from whatever decompressor is used to extract the data. As long as you consume the individual rows without storing references to them indefinitely (don't try to make an array of the entire dataset), fastcsv's optimizations should still work, since unreferenced blocks will eventually get cleaned up by the GC when memory runs low.

T

--
The computer is only a tool. Unfortunately, so is the user. -- Armaphine, K5
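The "range of decompressed data blocks" idea from the post above can be sketched in code. This is only a hedged illustration, not anything from the thread: the `decompressedChunks` helper and the 4096-byte chunk size are my own choices, and it assumes std.zlib's UnCompress can be driven chunk-by-chunk over File.byChunk, yielding const(char)[] blocks without holding the whole file in memory.

```
import std.stdio : File;
import std.zlib : UnCompress, HeaderFormat;
import std.algorithm : map;

// Hypothetical helper: lazily decompress a gzipped file into a
// range of const(char)[] blocks, one per input chunk.
auto decompressedChunks(string path)
{
    auto uc = new UnCompress(HeaderFormat.gzip);
    return File(path)
        .byChunk(4096) // byChunk reuses its buffer, hence the .dup below
        .map!(chunk => cast(const(char)[]) uc.uncompress(chunk.dup));
}
```

A range of this shape is exactly the kind of input the proposed const(char)[]-based fastcsv variant would consume.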
why no statements inside mixin templates?
Is there a rationale behind not allowing statements inside mixin templates? I know string mixins do accept code containing statements, but using them is much uglier, so I was wondering. Example use case:

---
int compute(string) { return 1; }

mixin template testBoilerPlate(alias arg, alias expected)
{
    {
        import std.format : format;
        auto got = compute(arg);
        assert(got == expected, "expected %s got %s".format(expected, got));
    }
}

unittest
{
    mixin testBoilerPlate!("12345", 1);
    mixin testBoilerPlate!("00" ~ "0", 2 - 1);
}
---
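A possible workaround, sketched here as a hedged suggestion (the `runTest` name and the block scoping are my own, not from the post): since mixin templates may only contain declarations, declare the statements inside a function in the template and invoke that function at the instantiation site, where statements are allowed.

```
int compute(string) { return 1; }

// Workaround sketch: wrap the statements in a declared function,
// which a mixin template *is* allowed to contain.
mixin template testBoilerPlate(alias arg, alias expected)
{
    void runTest()
    {
        import std.format : format;
        auto got = compute(arg);
        assert(got == expected, "expected %s got %s".format(expected, got));
    }
}

unittest
{
    // Each instantiation lives in its own block so the two
    // runTest declarations don't collide.
    {
        mixin testBoilerPlate!("12345", 1);
        runTest();
    }
    {
        mixin testBoilerPlate!("00" ~ "0", 2 - 1);
        runTest();
    }
}
```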
Re: Thoughts on some code breakage with 2.074
On Thu, May 11, 2017 at 07:46:24PM -0400, Steven Schveighoffer via Digitalmars-d wrote:
[...]
> But this still doesn't mean that *all* bool conversions are value based. In at least the struct and class cases, more than just the bits are checked.
[...]

Wait, what? You can use a *struct* as a bool condition?! I tried this:

    import std.stdio;
    struct S {}
    void main() {
        S s;
        if (s) { writeln("WAT"); }
    }

But the compiler (rightly) said:

    test.d(5): Error: expression s of type S does not have a boolean value

Or were you talking about structs that define opCast!bool? (In which case it's certainly intentional and doesn't pose a problem.)

I can see classes being usable in conditions, though, since they're essentially pointers hiding behind an abstraction. Still, it doesn't quite sit right with me. For example:

    class C { }
    class D {
        bool opCast(T : bool)() { return false; }
    }
    void main() {
        C c;
        D d = new D;
        if (!c) { ... } // OK, expected semantics
        if (!d) { ... } // *** What happens here?
    }

Whereas had the last two lines been written:

    if (c is null) { ... }
    if (d is null) { ... }

the intent would be much clearer. (And of course, d would be usable without "is null" if you actually intended to invoke opCast!bool.)

T

--
In a world without fences, who needs Windows and Gates? -- Christian Surchi
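As a hedged aside on the opCast!bool case discussed above (this example is mine, not from the post): an explicit cast(bool) definitely goes through the user-defined opCast, so a reference can be non-null while its bool conversion is false - which is exactly why the two spellings can disagree.

```
// Sketch: a class whose bool conversion is independent of nullness.
class D
{
    bool opCast(T : bool)() { return false; }
}

void main()
{
    D d = new D;
    assert(d !is null);    // the reference itself is non-null
    assert(!cast(bool) d); // but the user-defined conversion says false
}
```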
Re: Thoughts on some code breakage with 2.074
On Thu, May 11, 2017 at 08:21:46AM -0400, Steven Schveighoffer via Digitalmars-d wrote: [...]
> I don't ever remember if(ptr) being deprecated. In fact, I'd go as far
> as saying that maybe H.S. Teoh misremembers the array thing as
> pointers.
>
> The biggest reason is that a huge useful pattern with this is:
>
> if(auto x = key in someAA)
> {
>     // use *x without more hash lookup costs.
> }
>
> I can't imagine anyone attempted to force this to break without a loud
> backlash. I think if(ptr) is mostly universally understood to mean the
> pointer is not null.
[...]

Since the accuracy of my memory was questioned, I went back to look at the code in question, and indeed I did misremember it, but it was not with arrays, it was with casting pointers to bool. And it was in a while-condition, not an if-condition. Here's a simplified version of the original code:

struct Op {...}
Op* getOp(...) { ... }
...
Op* op;
while (!input.empty && cast(bool)(op = getOp(...))) { ... }

The cast(bool) used to be accepted up to a certain version (it was in the code from when I first wrote it around 2012), then around 2013 it became a compile error, which forced me to rewrite it as:

struct Op {...}
Op* getOp(...) { ... }
...
Op* op;
while (!input.empty && (op = getOp(...)) !is null) { ... }

which is much more readable and documents intent more clearly. I originally wrote the cast(bool) because without it the compiler rejects using an assignment in a while-condition. I suppose the reasoning is that it's too easy to mistakenly write `while (a=b)` instead of `while (a==b)`. In modern C compilers, an extra set of parentheses usually silenced the compiler warning about a possible typo of ==, but in D even with parentheses the compiler would reject it. So all things considered, this little anecdote represents the following progression in readability (the first two steps are hypothetical, since they're only permitted in C):

while (a = b) ...              // in C; error-prone, could be a typo
while ((a = b)) ...            // still in C; marginally better
while (cast(bool)(a = b)) ...  // early D; the conversion is now explicit
while ((a = b) !is null) ...   // present-day D; finally the intent is clear

T -- By understanding a machine-oriented language, the programmer will tend to use a much more efficient method; it is much closer to reality. -- D. Knuth
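The loop shape Teoh ends up with can be sketched as a compilable program; Op, getOp, and the input array here are stand-ins for his elided types:

```d
struct Op { int code; }

void main() {
    Op*[] pool = [new Op(1), new Op(2)];
    size_t next;
    // stand-in for the real getOp: returns null when the pool is exhausted
    Op* getOp() { return next < pool.length ? pool[next++] : null; }

    int[] input = [10, 20, 30];
    Op* op;
    size_t consumed;
    // present-day form: the assignment's result is compared explicitly,
    // so the compiler accepts it and the intent is unambiguous
    while (consumed < input.length && (op = getOp()) !is null) {
        consumed++;
    }
    assert(consumed == 2); // stopped because getOp ran out, not input
}
```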
[Issue 15488] global variable shadows function argument
https://issues.dlang.org/show_bug.cgi?id=15488

Walter Bright changed:
    Status: NEW → RESOLVED
    CC: added bugzi...@digitalmars.com
    Resolution: --- → FIXED

--- Comment #1 from Walter Bright ---
This was fixed a while ago with the changes in how symbols are looked up in imports. The example as written prints the same address for both writefln() statements.
--
Re: Thoughts on some code breakage with 2.074
On 5/11/17 7:12 PM, deadalnix wrote: On Thursday, 11 May 2017 at 12:26:11 UTC, Steven Schveighoffer wrote: if(arr) -> same as if(arr.ptr) Nope. It is: if(arr) -> same as if(((cast(size_t) arr.ptr) | arr.length) != 0) Should we conclude from the fact that absolutely nobody gets it right in this very forum that nobody will get it right outside? I'll let you judge. But this still doesn't mean that *all* bool conversions are value based. In at least the struct and class cases, more than just the bits are checked. -Steve
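A minimal illustration of the struct case Steve mentions: `if (s)` on a struct is not a raw bit test but a call to a user-defined opCast!bool (Handle here is an invented example type):

```d
struct Handle {
    int fd = -1;
    // `if (h)` and `!h` both lower to this call
    bool opCast(T : bool)() const { return fd >= 0; }
}

void main() {
    Handle h;               // fd == -1, so the handle converts to false
    assert(!h);
    h.fd = 3;
    assert(cast(bool) h);   // the explicit form of what `if (h)` does
}
```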
[Issue 14027] segmentation fault in dmd in some circular import situation
https://issues.dlang.org/show_bug.cgi?id=14027

Walter Bright changed:
    Status: NEW → RESOLVED
    Resolution: --- → WORKSFORME

--- Comment #2 from Walter Bright ---
It compiles without error with HEAD.
--
[Issue 14027] segmentation fault in dmd in some circular import situation
https://issues.dlang.org/show_bug.cgi?id=14027

Walter Bright changed:
    CC: added bugzi...@digitalmars.com

--- Comment #1 from Walter Bright ---
The files are:

--- module_a.d ---
module module_a;
import module_b;
enum U = 1;

--- module_b.d ---
module module_b;
import module_a;
struct J(int M) {}
struct Y { J!U x; }
--
Re: Thoughts on some code breakage with 2.074
On 5/11/17 7:12 PM, deadalnix wrote: On Thursday, 11 May 2017 at 12:26:11 UTC, Steven Schveighoffer wrote: if(arr) -> same as if(arr.ptr) Nope. It is: if(arr) -> same as if(((cast(size_t) arr.ptr) | arr.length) != 0) Should we conclude from the fact that absolutely nobody gets it right in this very forum that nobody will get it right outside? I'll let you judge. Yep, you are right. It's checking the length too. Although in practice, almost never do you have a null pointer array with non-zero length. Just for your amusement, I wrote the test this way :)

Stevens-MacBook-Pro:testd steves$ cat testifarrptr.d
void main()
{
    char[] x = null;
    x = x.ptr[0 .. 1];
    if(x)
    {
        import std.stdio;
        writeln("ok, deadalnix was right");
    }
}
Stevens-MacBook-Pro:testd steves$ dmd -run testifarrptr.d
ok, deadalnix was right

-Steve
[Issue 13904] calls to mutable methods are just ignored when instance is an enum
https://issues.dlang.org/show_bug.cgi?id=13904

Walter Bright changed:
    Status: NEW → RESOLVED
    CC: added bugzi...@digitalmars.com
    Resolution: --- → INVALID

--- Comment #2 from Walter Bright ---
This is actually not a bug.

enum S x = S(10);
x.setValue(20);

is rewritten to be:

S(10).setValue(20);

which is then rewritten to be:

auto tmp = S(10);
tmp.setValue(20);

which works as expected.
--
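The rewrite Walter describes can be checked directly; this sketch reuses the shape of the reported code (S and setValue as in the issue):

```d
struct S {
    int value;
    void setValue(int v) { value = v; }
}

enum S x = S(10);

void main() {
    // `x` is a manifest constant: each use expands to a fresh S(10)
    // temporary, so the mutation below lands on the temporary and is lost.
    x.setValue(20);
    assert(x.value == 10); // the "constant" itself is unchanged
}
```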
Re: Thoughts on some code breakage with 2.074
On Thursday, 11 May 2017 at 12:26:11 UTC, Steven Schveighoffer wrote: if(arr) -> same as if(arr.ptr) Nope. It is: if(arr) -> same as if(((cast(size_t) arr.ptr) | arr.length) != 0) Should we conclude from the fact that absolutely nobody gets it right in this very forum that nobody will get it right outside? I'll let you judge.
Re: Thoughts on some code breakage with 2.074
On Thursday, 11 May 2017 at 12:21:46 UTC, Steven Schveighoffer wrote: I can't imagine anyone attempted to force this to break without a loud backlash. I think if(ptr) is mostly universally understood to mean the pointer is not null. -Steve It is not a problem for pointers because, for pointers, identity and equality are the same thing. It isn't for slices.
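deadalnix's distinction in code form, a small sketch: for pointers `==` and `is` coincide, while for slices they answer different questions.

```d
void main() {
    int[] a = [1, 2, 3];
    int[] b = a.dup;
    assert(a == b);           // equality: element-wise comparison
    assert(!(a is b));        // identity: compares the ptr/length pair

    int* p = a.ptr;
    int* q = a.ptr;
    assert(p == q && p is q); // for pointers, the two tests are the same
}
```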
Re: Snap packages for DMD and DUB
On Thursday, 11 May 2017 at 22:30:52 UTC, Joseph Rushton Wakeling wrote: OK, looks like `-fPIC` was missing from some of the druntime and phobos build commands. I've pushed a patch to the `dmd` package definition that should fix this. Hmm, no dice. I'll look into this further over the next few days.
Re: The cost of doing compile time introspection
On Thursday, 11 May 2017 at 21:57:06 UTC, Timon Gehr wrote: On 10.05.2017 16:28, Stefan Koch wrote: On Wednesday, 10 May 2017 at 14:03:58 UTC, Biotronic wrote: On Wednesday, 10 May 2017 at 11:45:05 UTC, Moritz Maxeiner wrote: [CTFE slow] First, as you may know, Stefan Koch is working on an improved CTFE engine that will hopefully make things a lot better. It will not; this issue is caused by templates, and not by CTFE. I think my measurements show that the main bottleneck is actually Appender in CTFE, while templates contribute a smaller, yet significant, amount. You are correct. I should not have made this statement without actually measuring. Still, templates produce the enormous amounts of code that CTFE has to wade through. So while they are not the bottleneck in this case, they are still the cause.
Re: Snap packages for DMD and DUB
On Thursday, 11 May 2017 at 14:46:10 UTC, Joseph Rushton Wakeling wrote: On Thursday, 11 May 2017 at 11:47:10 UTC, Piotr Mitana wrote: Hello, I have tried those snaps recently on Ubuntu 16.10. There were -fPIC related errors (if you need the output, I can install the snap again and post it tomorrow). OK, looks like `-fPIC` was missing from some of the druntime and phobos build commands. I've pushed a patch to the `dmd` package definition that should fix this. Can I confirm whether you had these problems with `dmd` only, or also with the `dub` package?
[Issue 13331] naked asm functions are broken when compiling with -profile
https://issues.dlang.org/show_bug.cgi?id=13331

Walter Bright changed:
    CC: added bugzi...@digitalmars.com

--- Comment #2 from Walter Bright ---
https://github.com/dlang/dmd/pull/6770
--
Re: dmd: can't build on Arch Linux or latest Ubuntu
On Wednesday, 10 May 2017 at 11:51:03 UTC, Atila Neves wrote: So I went "I know, I'll just use a container". I tried Ubuntu Zesty in docker. That doesn't build dmd off the bat either, it fails with PIC errors. Have you tried adding `PIC=-fPIC` when you invoke `make`?
[Issue 13186] core/sys/posix/sys/uio.d is not linked into the standard lib
https://issues.dlang.org/show_bug.cgi?id=13186

Walter Bright changed:
    CC: added bugzi...@digitalmars.com

--- Comment #1 from Walter Bright ---
https://github.com/dlang/druntime/pull/1827
--
Re: The cost of doing compile time introspection
On 10.05.2017 16:28, Stefan Koch wrote: On Wednesday, 10 May 2017 at 14:03:58 UTC, Biotronic wrote: On Wednesday, 10 May 2017 at 11:45:05 UTC, Moritz Maxeiner wrote: [CTFE slow] First, as you may know, Stefan Koch is working on an improved CTFE engine that will hopefully make things a lot better. It will not; this issue is caused by templates, and not by CTFE. I think my measurements show that the main bottleneck is actually Appender in CTFE, while templates contribute a smaller, yet significant, amount.
Re: Fantastic exchange from DConf
On Thursday, 11 May 2017 at 21:20:35 UTC, Jack Stouffer wrote: On Tuesday, 9 May 2017 at 14:13:31 UTC, Walter Bright wrote: 2. it may not be available on your platform I just had to use valgrind for the first time in years at work (mostly Python code there) and I realized that there's no version that works on the latest OS X version. So valgrind runs on about 2.5% of computers in existence. Fun! Use ASAN.
Re: DConf 2017 Hackathon report [OT]
On 11 May 2017 at 23:19, David Nadlinger via Digitalmars-d wrote: > On Thursday, 11 May 2017 at 21:14:16 UTC, Iain Buclaw wrote: >> >> Oh, do you have to do the multi-stage build yourself? I don't. > > > So you intend to keep a copy of the (old) bootstrap compiler sources in-tree > for all future D-based GDC versions (if/when you start requiring D)? We > could do that just as well, but it seems a bit pointless. — David I do not intend to, which is why switching to D-based GDC will be a no-op. It is at this point that you've lost me; there is no added complexity building a self-hosted compiler within GCC's build system. C++ is self-hosted, Ada is self-hosted, an acquaintance of mine even wrote a self-hosted ALGOL 60 frontend for GCC. This is not a problem that needs to be solved for GDC. Iain.
Re: DConf 2017 Hackathon report
On Tuesday, 9 May 2017 at 04:35:40 UTC, Ali Çehreli wrote: - Contributed to the logo and branding discussions Me too. And: - Discussed ways to move forward with Laeeth and Andrei, and Daniel and Stefan. - Discussed an issue in std.experimental.logger with Robert. - Worked on translation of Extended Pascal snippets to D. It took rather long retrieving them from my workstation at home, but after that I got valuable help from 3-4 seniors in my direct vicinity, finding ways to emulate EP constructs. Nice! Bastiaan.
Re: Fantastic exchange from DConf
On Tuesday, 9 May 2017 at 14:13:31 UTC, Walter Bright wrote: 2. it may not be available on your platform I just had to use valgrind for the first time in years at work (mostly Python code there) and I realized that there's no version that works on the latest OS X version. So valgrind runs on about 2.5% of computers in existence. Fun!
Re: DConf 2017 Hackathon report [OT]
On Thursday, 11 May 2017 at 21:14:16 UTC, Iain Buclaw wrote: Oh, do you have to do the multi-stage build yourself? I don't. So you intend to keep a copy of the (old) bootstrap compiler sources in-tree for all future D-based GDC versions (if/when you start requiring D)? We could do that just as well, but it seems a bit pointless. — David
Re: DConf 2017 Hackathon report [OT]
On 11 May 2017 at 23:06, David Nadlinger via Digitalmars-d wrote: > On Thursday, 11 May 2017 at 20:54:45 UTC, Iain Buclaw wrote: >> >> My rebuttal still stands. Switching the build from C++ to D should be a one >> line change; if it isn't, then you have a problem with your build process. > > > How does snap requiring more than a one-line change for a multi-stage build > imply that anybody's build process is problematic? — David Oh, do you have to do the multi-stage build yourself? I don't. :-) Iain.
Re: The cost of doing compile time introspection
On 10.05.2017 16:03, Biotronic wrote: A few things here - functions.fn would not do what you want, and neither would __traits(identifier). functions.fn would treat "fn" like a part of name, not a string value, so this will make the poor compiler barf. __traits(identifier, fn) expects fn to be a symbol, while here it's a string. In fact, it's exactly the string you want __traits to return. Lastly, you'll still need a mixin, whether it's for __traits(identifier, fn) or just fn - they're just strings. Something like this: static foreach (fn; CFunctions!functions) { mixin("typeof(__traits(getMember, functions, fn))* "~fn~";"); } Yes, this works and is a few times faster. It's slightly faster when inlining the condition: static foreach(fn;__traits(allMembers, functions)){ static if (isFunction!(__traits(getMember, functions, fn)) && (functionLinkage!(__traits(getMember, functions, fn)) == "C" || functionLinkage!(__traits(getMember, functions, fn)) == "Windows")){ mixin("typeof(functions."~fn~")* "~fn~";"); } } With the DMD debug build, I measured the following times on my machine: Baselines: just imports: 0m0.318s copy-pasted generated code after printing it with pragma(msg, ...): 0m0.341s Compile-time code generation: old version: 0m2.569s static foreach, uninlined: 0m0.704s static foreach inlined: 0m0.610s Still not great, but a notable improvement. isFunction and functionLinkage are slow, so I got rid of them (as well as the dependency on std.traits): static foreach(fn;__traits(allMembers, functions)){ static if(fn != "object" && fn != "llvm" && fn != "orEmpty"): mixin("typeof(functions."~fn~")* "~fn~";"); } timing: 0m0.350s (This is not perfect as you'll need to edit the list in case you are adding more non-c-function members to that module, but I guess it is a good trade-off.) 
You can achieve essentially the same using a string mixin: mixin({ string r; foreach(fn;__traits(allMembers, functions)) if(fn != "object" && fn != "llvm" && fn != "orEmpty") r~="typeof(functions."~fn~")* "~fn~";"; return r; }()); timing: 0m0.370s In case the original semantics should be preserved, I think this is the best option: mixin({ string r; foreach(fn;CFunctions!functions) r~="typeof(functions."~fn~")* "~fn~";"; return r; }()); timing: 0m0.740s
Re: DConf 2017 Hackathon report [OT]
On Thursday, 11 May 2017 at 20:54:45 UTC, Iain Buclaw wrote: My rebuttal still stands. Switching the build from C++ to D should be a one line change; if it isn't, then you have a problem with your build process. How does snap requiring more than a one-line change for a multi-stage build imply that anybody's build process is problematic? — David
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 20:56:09 UTC, aberba wrote: Something like this is exactly what I'm talking about. Vibe.data.json also has:

// using piecewise construction
Json j2 = Json.emptyObject;
j2["field1"] = "foo";
j2["field2"] = 42.0;
j2["field3"] = true;

Yeah, mine can do that too, just change `Json` to `var`. I even coincidentally called it `emptyObject` too, cool. Mine also allows you to do math and concatenations and even assign functions, it is crazy what D can do.
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 20:36:13 UTC, Adam D. Ruppe wrote: On Thursday, 11 May 2017 at 20:22:22 UTC, aberba wrote: With that I meant the design of a simple, clean API my jsvar.d works kinda similarly to javascript... though i wouldn't call it "clean" because it will not inform you of missing stuff. jsvar.d is here: https://raw.githubusercontent.com/adamdruppe/arsd/master/jsvar.d

---
// dmd test.d jsvar.d
import arsd.jsvar;
import std.stdio;

void main() {
    // reading json with `var.fromJson`
    var obj = var.fromJson(`{"a":{"b":10},"c":"hi"}`);

    // inspecting contents
    writeln(obj.a);
    writeln(obj.a.b);
    writeln(obj.c);

    // convert to basic static type with `.get!T`
    string c = obj.c.get!string;

    // change the contents with dot notation
    obj.a = 15;
    obj.d = "add new field too";
    writeln(obj);

    struct Test {
        int a;
        string c;
    }

    // can even get plain structs out
    Test test = obj.get!Test;
    writeln(test);

    // and set structs
    test.c = "from D";
    obj = test;
    writeln(obj);

    // writeln on an object prints it in json
    // or you can explicitly do writeln(obj.toJson());

    // big thing is referencing non-existent
    // things is not an error, it just propagates null:
    writeln(obj.no.such.property); // null
    // but that can be convenient
}
---

Something like this is exactly what I'm talking about. Vibe.data.json also has:

// using piecewise construction
Json j2 = Json.emptyObject;
j2["field1"] = "foo";
j2["field2"] = 42.0;
j2["field3"] = true;

D doesn't seem to be the blocker for these convenient abstractions.
Re: DConf 2017 Hackathon report
On 11 May 2017 at 22:24, David Nadlinger via Digitalmars-d wrote: > On Thursday, 11 May 2017 at 17:56:00 UTC, Iain Buclaw wrote: >> >> I can only infer that you are saying that using a D project means it's >> more difficult to get working with snap. To which I will insert an >> obligatory "Woah!", and "I expect you to know better" rebuttal. >> >> ... >> >> Woah, I expect you to know better. > > > Incorrect. My (implied) statement was that a dependency on D makes the build > process more complex *if that project is a D compiler, and you don't want to > depend on another one in build-packages*. > > — David My rebuttal still stands. Switching the build from C++ to D should be a one line change; if it isn't, then you have a problem with your build process.
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 19:49:54 UTC, cym13 wrote: On Thursday, 11 May 2017 at 19:33:09 UTC, aberba wrote: JSON libs in D don't seem straightforward. I've seen several wrappers and (de)serializers trying to abstract std.json. std_data_json, proposed to replace std.json, doesn't *seem* to tackle straightforward JSON processing the way vibe.data.json does. Handy features I'm not finding, but which are much simpler in vibe.data.json, include piecewise construction (constructing JSON objects like associative arrays) and vibe.data.json's value.get!T. The API also *feels* not straightforward. Are the capabilities of std_data_json limited to what I see in its docs? Why does JSON seem hard in D, and why is std.json still there? Try https://github.com/tamediadigital/asdf ? Not very helpful for my use case, though it could be really useful for a JSON-heavy REST API. Interesting.
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 20:22:22 UTC, aberba wrote: With that I meant the design of a simple, clean API my jsvar.d works kinda similarly to javascript... though i wouldn't call it "clean" because it will not inform you of missing stuff. jsvar.d is here: https://raw.githubusercontent.com/adamdruppe/arsd/master/jsvar.d

---
// dmd test.d jsvar.d
import arsd.jsvar;
import std.stdio;

void main() {
    // reading json with `var.fromJson`
    var obj = var.fromJson(`{"a":{"b":10},"c":"hi"}`);

    // inspecting contents
    writeln(obj.a);
    writeln(obj.a.b);
    writeln(obj.c);

    // convert to basic static type with `.get!T`
    string c = obj.c.get!string;

    // change the contents with dot notation
    obj.a = 15;
    obj.d = "add new field too";
    writeln(obj);

    struct Test {
        int a;
        string c;
    }

    // can even get plain structs out
    Test test = obj.get!Test;
    writeln(test);

    // and set structs
    test.c = "from D";
    obj = test;
    writeln(obj);

    // writeln on an object prints it in json
    // or you can explicitly do writeln(obj.toJson());

    // big thing is referencing non-existent
    // things is not an error, it just propagates null:
    writeln(obj.no.such.property); // null
    // but that can be convenient
}
---
[Issue 16053] SysTime.fromIsoExtString don't work if nanoseconds are presented
https://issues.dlang.org/show_bug.cgi?id=16053

Jonathan M Davis changed:
    CC: added issues.dl...@jmdavisprog.com

--- Comment #1 from Jonathan M Davis ---
Yes, it's legit. It's just that SysTime will never produce a string with more than 7 digits in the fractional seconds, because its precision is hecto-nanoseconds, and for whatever reason, it didn't occur to me that I would need to handle higher precision from elsewhere (even though it should have). It should be a simple enough fix though.
--
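Until a fix lands, one workaround is to trim the fractional part to the 7 digits (hecto-nanoseconds) SysTime can represent before parsing. A hedged sketch; the helper name and regex are illustrative, not from the issue:

```d
import std.datetime;
import std.regex;

// keep at most 7 fractional-second digits, SysTime's hnsec precision
string trimFracSecs(string isoExt) {
    return isoExt.replaceFirst(regex(`\.(\d{7})\d+`), `.$1`);
}

void main() {
    auto s = trimFracSecs("2017-05-11T12:00:00.123456789Z");
    assert(s == "2017-05-11T12:00:00.1234567Z");
    auto t = SysTime.fromISOExtString(s); // parses once trimmed
    assert(t.fracSecs == 1234567.hnsecs);
}
```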
Re: DConf 2017 Hackathon report
On Thursday, 11 May 2017 at 17:56:00 UTC, Iain Buclaw wrote: I can only infer that you are saying that using a D project means it's more difficult to get working with snap. To which I will insert an obligatory "Woah!", and "I expect you to know better" rebuttal. ... Woah, I expect you to know better. Incorrect. My (implied) statement was that a dependency on D makes the build process more complex *if that project is a D compiler, and you don't want to depend on another one in build-packages*. — David
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 20:04:35 UTC, Adam D. Ruppe wrote: On Thursday, 11 May 2017 at 19:33:09 UTC, aberba wrote: Why does json seem hard in D What are you actually looking for? With that I meant the design of a simple, clean API.
Re: Unicode Bidi Brackets in D std library?
On Thursday, May 11, 2017 7:07:30 PM PDT Las via Digitalmars-d-learn wrote: > On Thursday, 11 May 2017 at 19:05:46 UTC, Las wrote: > > On Thursday, 11 May 2017 at 18:59:12 UTC, ag0aep6g wrote: > >> On 05/11/2017 08:27 PM, Las wrote: > >>> I see no way of getting > >>> [these](http://unicode.org/Public/UCD/latest/ucd/BidiBrackets.txt) > >>> properties for unicode code points in the std.uni library. > >>> How do I get > >>> these properties? > >> > >> Looks like it's too new. std.uni references "Unicode v6.2" as > >> the standard it complies with, but that BidiBrackets.txt was > >> "originally created [...] for Unicode 6.3". > > > > That's sad. > > Maybe there's an easy way for me to add it to phobos. > > Nearly ten thousand lines in std.uni, great. Well, Unicode _is_ stupidly complicated. However, also remember that those lines include the unit tests and documentation, so it's not as much code as it might first seem like. - Jonathan M Davis
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 20:04:35 UTC, Adam D. Ruppe wrote: On Thursday, 11 May 2017 at 19:33:09 UTC, aberba wrote: Why does json seem hard in D What are you actually looking for? I managed to do the task, but the API makes the code unclean even for simple things (AA to JSON, JSON to AA, json["new key"] = newJson, ...). vibe.data.json does a good job with API design, especially constructing new JSON objects from AAs and converting D types to JSON types.
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 19:33:09 UTC, aberba wrote: Why does json seem hard in D What are you actually looking for?
Re: The cost of doing compile time introspection
On Wednesday, 10 May 2017 at 14:03:58 UTC, Biotronic wrote: As for making the code faster right now, could this be done with mixin templates instead? Something like: import functions = llvm.functions.link; import std.meta, std.traits; template isCFunction(alias member) { static if (isFunction!(member) && (functionLinkage!(member) == "C" || functionLinkage!(member) == "Windows")) { enum isCFunction = true; } else { enum isCFunction = false; } } template CFunctions(alias scope_) { alias GetSymbol(string member) = AliasSeq!(__traits(getMember, scope_, member)); alias CFunctions = Filter!(isCFunction, staticMap!(GetSymbol, __traits(allMembers, scope_))); } mixin template declareStubsImpl(T...) { static if (T.length == 0) { } else { mixin("extern (C) typeof(T[0])* "~__traits(identifier, T[0])~";"); mixin declareStubsImpl!(T[1..$]); } } mixin template declareStubs(alias scope_) { mixin declareStubsImpl!(CFunctions!scope_); } mixin declareStubs!functions; After testing this approach out, I couldn't even time it. Why? Because the compiler pretty much immediately hits the (I think fixed) recursive template expansion limit. The LLVM C API has too many functions for this :/
Re: How to avoid throwing an exceptions for a built-in function?
On Thursday, 11 May 2017 at 18:07:47 UTC, H. S. Teoh wrote: On Thu, May 11, 2017 at 05:55:03PM +, k-five via Digitalmars-d-learn wrote: On Thursday, 11 May 2017 at 17:18:37 UTC, crimaniak wrote: > try this: https://dlang.org/phobos/std_exception.html#ifThrown Worked. Thanks.

import std.stdio;
import std.conv: to;
import std.exception: ifThrown;

void main( string[] args ){
    string str = "string";
    // if an exception was thrown, it is ignored and 0 is returned
    int index = to!int( str ).ifThrown( 0 );
    writeln( "index: ", index ); // 0
}

Keep in mind, though, that you should not do this in an inner loop if you care about performance, as throwing / catching exceptions will incur a performance hit. Outside of inner loops, though, it probably doesn't matter. T

That's why I sometimes use isNumeric when I have heaps of strings to convert, to reduce exceptions. So something like:

int index = (str.isNumeric) ? to!int(str).ifThrown(0) : 0;

Jordan
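Jordan's one-liner as a self-contained program (toIndex is an invented wrapper name); isNumeric cheaply screens out most bad input so the exception path is rarely taken:

```d
import std.conv : to;
import std.exception : ifThrown;
import std.string : isNumeric;

int toIndex(string s) {
    // isNumeric filters obvious non-numbers without throwing;
    // ifThrown still catches numeric-looking input that isn't an int
    return s.isNumeric ? to!int(s).ifThrown(0) : 0;
}

void main() {
    assert(toIndex("42") == 42);
    assert(toIndex("string") == 0); // screened out by isNumeric, no throw
    assert(toIndex("4.2") == 0);    // passes isNumeric; ifThrown catches
}
```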
Re: Json in D: clean, simple API
On Thursday, 11 May 2017 at 19:33:09 UTC, aberba wrote: JSON libs in D don't seem straightforward. I've seen several wrappers and (de)serializers trying to abstract std.json. std_data_json, proposed to replace std.json, doesn't *seem* to tackle straightforward JSON processing the way vibe.data.json does. Handy features I'm not finding, but which are much simpler in vibe.data.json, include piecewise construction (constructing JSON objects like associative arrays) and vibe.data.json's value.get!T. The API also *feels* not straightforward. Are the capabilities of std_data_json limited to what I see in its docs? Why does JSON seem hard in D, and why is std.json still there? Try https://github.com/tamediadigital/asdf ?
Json in D: clean, simple API
JSON libs in D don't seem straightforward. I've seen several wrappers and (de)serializers trying to abstract std.json. std_data_json, proposed to replace std.json, doesn't *seem* to tackle straightforward JSON processing the way vibe.data.json does. Handy features I'm not finding, but which are much simpler in vibe.data.json, include piecewise construction (constructing JSON objects like associative arrays) and vibe.data.json's value.get!T. The API also *feels* not straightforward. Are the capabilities of std_data_json limited to what I see in its docs? Why does JSON seem hard in D, and why is std.json still there?
Re: Unicode Bidi Brackets in D std library?
On Thursday, 11 May 2017 at 19:05:46 UTC, Las wrote: On Thursday, 11 May 2017 at 18:59:12 UTC, ag0aep6g wrote: On 05/11/2017 08:27 PM, Las wrote: I see no way of getting [these](http://unicode.org/Public/UCD/latest/ucd/BidiBrackets.txt) properties for unicode code points in the std.uni library. How do I get these properties? Looks like it's too new. std.uni references "Unicode v6.2" as the standard it complies with, but that BidiBrackets.txt was "originally created [...] for Unicode 6.3". That's sad. Maybe there's an easy way for me to add it to phobos. Nearly ten thousand lines in std.uni, great.
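In the meantime, BidiBrackets.txt is simple enough to parse directly. A hedged sketch (field layout as documented in the UCD file: codepoint; paired codepoint; type, where o = open and c = close; the struct and function names here are invented):

```d
import std.algorithm : map, splitter;
import std.array : array;
import std.conv : to;
import std.string : indexOf, strip;

struct BidiBracket { dchar cp; dchar pair; char type; }

BidiBracket[] parseBidiBrackets(string text) {
    BidiBracket[] result;
    foreach (line; text.splitter('\n')) {
        // drop the trailing "# ..." comment, skip blank lines
        auto hash = line.indexOf('#');
        auto data = (hash >= 0 ? line[0 .. hash] : line).strip;
        if (data.length == 0) continue;
        auto fields = data.splitter(';').map!(f => f.strip).array;
        result ~= BidiBracket(cast(dchar) fields[0].to!uint(16),
                              cast(dchar) fields[1].to!uint(16),
                              fields[2][0]);
    }
    return result;
}

void main() {
    // two sample lines in the file's documented format
    auto sample = "0028; 0029; o # LEFT PARENTHESIS\n" ~
                  "0029; 0028; c # RIGHT PARENTHESIS\n";
    auto brackets = parseBidiBrackets(sample);
    assert(brackets.length == 2);
    assert(brackets[0].cp == '(' && brackets[0].pair == ')');
    assert(brackets[0].type == 'o' && brackets[1].type == 'c');
}
```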
Re: Unicode Bidi Brackets in D std library?
On Thursday, 11 May 2017 at 18:59:12 UTC, ag0aep6g wrote: On 05/11/2017 08:27 PM, Las wrote: I see no way of getting [these](http://unicode.org/Public/UCD/latest/ucd/BidiBrackets.txt) properties for unicode code points in the std.uni library. How do I get these properties? Looks like it's too new. std.uni references "Unicode v6.2" as the standard it complies with, but that BidiBrackets.txt was "originally created [...] for Unicode 6.3". That's sad. Maybe there's an easy way for me to add it to phobos.
Re: Unicode Bidi Brackets in D std library?
On 05/11/2017 08:27 PM, Las wrote: I see no way of getting [these](http://unicode.org/Public/UCD/latest/ucd/BidiBrackets.txt) properties for unicode code points in the std.uni library. How do I get these properties? Looks like it's too new. std.uni references "Unicode v6.2" as the standard it complies with, but that BidiBrackets.txt was "originally created [...] for Unicode 6.3".
Re: DConf 2017 Hackathon report
On 9 May 2017 at 06:35, Ali Çehreli via Digitalmars-d wrote: > Please list what we've achieved during the hackathon, including what is > started but is likely to be finished in the coming days or months. > I was frankly a zombie all Sunday. Apart from helping Joe set up the best snap package in the world, I spent the morning rebuilding my toolchain for GCC/GDC-8. After spending some time away from my laptop, I came back to discover it had died on battery. At least I managed to remove D compiler support for SH-5. That was a notable productive task. :-)
Re: Lookahead in unittest
On 2017-05-10 18:17, Stefan Koch wrote: It looks like this unitest-test block are treated like a function. unittest blocks are lowered to functions. -- /Jacob Carlborg
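One way to see the lowering Jacob describes: with -unittest, each block is exposed as an ordinary function symbol through __traits(getUnitTests). A small sketch (demo is an invented module name):

```d
module demo;

unittest {
    assert(1 + 1 == 2);
}

void main() {
    // compile with: dmd -unittest demo.d
    // each unittest block appears as a callable function symbol
    foreach (test; __traits(getUnitTests, mixin(__MODULE__))) {
        test();
    }
}
```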
Unicode Bidi Brackets in D std library?
I see no way of getting [these](http://unicode.org/Public/UCD/latest/ucd/BidiBrackets.txt) properties for unicode code points in the std.uni library. How do I get these properties?
Re: DIP 1007 Preliminary Review Round 1
On 5/11/17 1:21 PM, Nick Sabalausky (Abscissa) wrote: On 05/11/2017 07:19 AM, Steven Schveighoffer wrote: On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote: This is a pointless limitation. What is the benefit of requiring the author to *not* provide an implementation until the transition period is over? It runs counter to normal workflow. The idea (I think) is to have a version of the library that functions *exactly* like the old version, but helpfully identifies where a future version will not function properly. This is like @deprecate. You don't put a @deprecate on a symbol and at the same time remove the symbol's implementation -- you leave it as it was, and just tag it so the warning shows up. That's step one. Yes, I'm aware that's the idea the author had in mind, but that still doesn't begin to address this: What is the *benefit* of requiring of requiring the author to *not* provide an implementation until the transition period is over? How does this work? class Base { void foo() @future { ... } } class Derived : Base { void foo() { ... } } void bar(Base b) // could be instance of Derived { // which one is called? Derived.foo may not have been intended for // the same purpose as Base.foo b.foo(); } The point is not to break code without fair warning. This is the progression I have in mind: In Library version 1 (LV1), the function doesn't exist. In LV2, [the lib author makes a guess that they're going to write a function with a particular name and the] function is marked as @future. In LV3, the function is implemented and the @future tag is removed. Fixed step 2 for you. No, an implementation is in mind and tested. Just not available. You could even have the implementation commented out. In Phobos/Druntime, we wouldn't accept such a prospect without requiring a fleshing out of the details ahead of time. If it makes sense to just add the symbol with an implementation, then I'd rather do that. 
Otherwise, we create a new way to overload/override, and suddenly things work very differently than people are used to. Suddenly templates start calling the wrong thing and code actually breaks before a change is actually made. And yes, that *is* the progression suggested by this DIP, but one of my key points is: that's a downright silly progression. This is better: - In Library version 1 (LV1), the function doesn't exist. - In LV2, the new function is marked as @new_symbol to prevent the (somehow) TERRIBLE HORRIBLE AWFUL consequence of the new symbol causing people to be required to toss in a FQN, but there's no harm in STOPPING people from actually using the new functionality if they request it unambiguously, now is there? No, there isn't. - In LV3, the @new_symbol tag is removed. It's also possible to implement the symbol with a different temporary name, and use that name if you need it before it's ready. I'm just more comfortable with a symbol that changes absolutely nothing about how a function can be called, but is a warning that something is coming, than I am with a callable symbol that acts differently in terms of overloading and overriding. I'll admit, I'm not the DIP author, and I don't know the intention of whether the implementation is allowed to be there or not. The important thing here is that the library writer gives fair warning that a breaking change is coming, giving the user time to update his code at his convenience. Or, if the tag is added to the actual implementation then there IS NO FREAKING BREAKING CHANGE until the @new_func or whatever tag is removed, but the library user is STILL given fair (albeit useless, imo) warning that it will be (kinda sorta) broken (with a downright trivial fix) in a followup release. Not sure I agree there would be no breakage. The symbol is there, it can be called in some cases. This changes behavior without warning. I've had my share of is(typeof(trycallingThis())) blowing up spectacularly in ways I didn't predict.
To change what happens there is a bad idea IMO. -Steve
Re: How to avoid throwing an exceptions for a built-in function?
On Thu, May 11, 2017 at 05:55:03PM +, k-five via Digitalmars-d-learn wrote:
> On Thursday, 11 May 2017 at 17:18:37 UTC, crimaniak wrote:
> > try this: https://dlang.org/phobos/std_exception.html#ifThrown
>
> Worked. Thanks.
>
> import std.stdio;
> import std.conv: to;
> import std.exception: ifThrown;
>
> void main( string[] args ){
>     string str = "string";
>     // if an exception was thrown, it is ignored and 0 is returned
>     int index = to!int( str ).ifThrown( 0 );
>     writeln( "index: ", index ); // 0
> }

Keep in mind, though, that you should not do this in an inner loop if you care about performance, as throwing / catching exceptions will incur a performance hit. Outside of inner loops, though, it probably doesn't matter. T -- Ph.D. = Permanent head Damage
Re: How to avoid throwing an exceptions for a built-in function?
On Thursday, 11 May 2017 at 17:18:37 UTC, crimaniak wrote: On Wednesday, 10 May 2017 at 12:40:41 UTC, k-five wrote: - try this: https://dlang.org/phobos/std_exception.html#ifThrown Worked. Thanks. import std.stdio; import std.conv: to; import std.exception: ifThrown; void main( string[] args ){ string str = "string"; int index = to!int( str ).ifThrown( 0 ); // if an exception was thrown, it is ignored and then return ( 0 ); writeln( "index: ", index ); // 0 }
Re: DConf 2017 Hackathon report
On 10 May 2017 at 22:04, David Nadlinger via Digitalmars-d wrote: > On Wednesday, 10 May 2017 at 19:46:01 UTC, Joseph Rushton Wakeling wrote: >> >> Ironically, given that I'd always been worried this would be the most >> finicky compiler snap to create, it's actually the simplest package >> definition out of all the Big 3 ;-) > > > Without even having seen your snap file, I can confidently say that this is > just due to the idiosyncrasies of the snap environment, though. > > Oh wait, no, GDC is still stuck on an ancient C++-based frontend. Not too > surprising, then. ;P > > — David I can only infer that you are saying that using a D project means it's more difficult to get working with snap. To which I will insert an obligatory "Woah!", and "I expect you to know better" rebuttal. ... Woah, I expect you to know better. Iain.
Re: "I made a game using Rust"
On Thursday, 11 May 2017 at 03:17:13 UTC, evilrat wrote: I have played recently with one D game engine and the result was frustrating. My compile time was about 45 sec! Interesting. What game engine were you using? To me this sounds like a problem in the build process. DMD isn't a build system and doesn't handle build management, incremental builds, or anything else like that. You'll need an external tool (or roll a python script like I did). At the end of the day, you hand a bunch of files to DMD to build, and it spits out one or more exe/dll/lib/obj. This process for me has been quite fast, even considering that I'm pretty much rebuilding the entire game (minus libs and heavy templates) every time. My python script basically separates the build into four parts, and does a sort of poor man's coarse incremental build with them. The four parts are: - D libs - Heavy templates - Game DLL - Game EXE (which is pretty much just one file that loads the DLL then calls into it) For example, if a lib changes, I rebuild everything. But if a file in the Game DLL changes, I only rebuild the game DLL. There is no sane x64 debugging on Windows. Structs don't show at all, and that's just the top of the list... In C++, I've generally had a very good experience with the Visual Studio debugger, both with x86 and x64. When I program C++ at home, literally the only thing I use Visual Studio for is the debugger (the rest of the program is pretty bloated and I use almost none of the other features). When you debugged on x64 in Windows, what debugger were you using? Even back in 2011 things were good enough that I could see into structs :) How did you manage using classes from a DLL? I pretty much don't. If a class is created in the DLL from a class defined in the DLL and is never touched by the EXE, things seem fine. But I don't let classes cross the EXE/DLL boundary, and even then I keep my usage of classes to a bare minimum.
Thankfully though my programming style is fairly procedural anyway, so it's not a huge loss for me personally.
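The coarse four-part split described above boils down to a prefix-to-parts map. A minimal sketch of that idea (the part names and directory layout here are illustrative guesses, not the author's actual script, which was Python):

```d
import std.algorithm.searching : startsWith;

// Illustrative only: the part names and path prefixes below are made up.
// Each part owns some source prefixes and lists which parts must be
// rebuilt when one of its files changes (libs are the most expensive).
struct Part
{
    string[] sources;   // source path prefixes owned by this part
    string[] rebuilds;  // parts to rebuild when one of them changes
}

Part[string] buildParts()
{
    return [
        "libs":      Part(["libs/"],      ["libs", "templates", "game_dll", "game_exe"]),
        "templates": Part(["templates/"], ["templates", "game_dll", "game_exe"]),
        "game_dll":  Part(["game/"],      ["game_dll"]),
        "game_exe":  Part(["main.d"],     ["game_exe"]),
    ];
}

// Map a set of changed files to the coarse set of parts to rebuild.
bool[string] partsToRebuild(const string[] changed)
{
    bool[string] todo;
    foreach (path; changed)
        foreach (name, part; buildParts())
            foreach (prefix; part.sources)
                if (path.startsWith(prefix))
                    foreach (r; part.rebuilds)
                        todo[r] = true;
    return todo;
}

void main()
{
    // A change inside the game DLL rebuilds only the DLL...
    assert(partsToRebuild(["game/player.d"]).length == 1);
    // ...while a lib change cascades into everything downstream.
    assert(partsToRebuild(["libs/engine.d"]).length == 4);
}
```

The actual compile/link commands would then be invoked once per part in dependency order; the point is only that a handful of coarse buckets already avoids most full rebuilds.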
Re: DIP 1007 Preliminary Review Round 1
On 05/11/2017 06:10 AM, Dicebot wrote: On Thursday, 11 May 2017 at 03:46:55 UTC, Nick Sabalausky (Abscissa) wrote: 1. Why are FQNs alone (assuming they still worked like they're supposed to) not good enough? Needs to be addressed in DIP. Currently isn't. It is already addressed in the DIP. FQNs only help if they are used, and current idiomatic D code tends to rely on unqualified imports/names. I didn't see that. Certainly not in the "Existing solutions" section. It needs to be there. But in any case, I'm not talking about the "existing solution" of projects *already* using FQNs for things, I'm talking about the "existing solution" of just letting a library user spend two seconds adding an FQN when they need to disambiguate. 2. The library user is already going to be informed they need to fix an ambiguity anyway, with or without this DIP. Only if you consider "after compiler/library upgrade your project doesn't work anymore" a sufficient "informing", which we definitely don't. I definitely do. But even if you don't, see my "@new_func" alternate suggestion. 3. Updating code to fix the ambiguity introduced by a new symbol is always trivial (or would be if FQNs were still working properly and hadn't become needlessly broken) and inherently backwards-compatible with the previous version of the lib. A trivial compilation-error fixup that takes 5 minutes to address in a single project takes up to one month to propagate across all our libraries and projects, per my experience. Actually fixing code is hardly ever a problem with breaking changes. It is synchronization between developers and projects that makes it so painful. This needs to go in the DIP. And in the override case, there is no backwards-compatible solution available at all (see Steven's comment). This needs to be made explicit in the DIP. Currently, I see nothing in the DIP clarifying that FQNs cannot address the override case.
Unlike when symbols are added to a lib, the fix in user code for a deprecation *can* be non-trivial and *can* be non-backwards-compatible with the previous version of the lib, depending on the exact circumstances. Therefore, unlike added symbols, the "deprecation" feature for removed symbols is justified. Please elaborate. The user code fix is always either using an FQN or renaming; what non-trivial case comes to your mind? For *added* symbols, yes. Which is why I find this DIP to be of questionable value compared to "@deprecated". That's what my quoted paragraph above is referring to: *removed* (i.e., deprecated) symbols. When a symbol is *removed*, the user code fix is NOT always guaranteed to be trivial. That's what justifies the existence of @deprecated. @future, OTOH, doesn't meet the same criteria as strongly because, as you say, when a symbol is added, "User code fix is always either using FQN or renaming".
Re: DIP 1007 Preliminary Review Round 1
On 05/11/2017 07:19 AM, Steven Schveighoffer wrote: On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote: This is a pointless limitation. What is the benefit of requiring the author to *not* provide an implementation until the transition period is over? It runs counter to normal workflow. The idea (I think) is to have a version of the library that functions *exactly* like the old version, but helpfully identifies where a future version will not function properly. This is like @deprecate. You don't put a @deprecate on a symbol and at the same time remove the symbol's implementation -- you leave it as it was, and just tag it so the warning shows up. That's step one. Yes, I'm aware that's the idea the author had in mind, but that still doesn't begin to address this: What is the *benefit* of requiring the author to *not* provide an implementation until the transition period is over? I maintain there is no benefit to that. Drawing a parallel to "how you do it with deprecated symbols" is not demonstrating a benefit. For that matter, I see the parallel with deprecated symbols as being "The deprecation tag goes with an implemented function. Symmetry would imply that a 'newly added' tag also goes on an implemented function." So the symmetry argument goes both ways. But regardless, what we *don't* usually do is develop functionality *after* first finalizing its name. That's just silly. > The point is not to break code without fair warning. This is the > progression I have in mind: > > In Library version 1 (LV1), the function doesn't exist. > In LV2, [the lib author makes a guess that they're going to write a function with a particular name and the] function is marked as @future. > In LV3, the function is implemented and the @future tag is removed. Fixed step 2 for you. And yes, that *is* the progression suggested by this DIP, but one of my key points is: that's a downright silly progression.
This is better: - In Library version 1 (LV1), the function doesn't exist. - In LV2, the new function is marked as @new_symbol to prevent the (somehow) TERRIBLE HORRIBLE AWFUL consequence of the new symbol causing people to be required to toss in a FQN, but there's no harm in STOPPING people from actually using the new functionality if they request it unambiguously, now is there? No, there isn't. - In LV3, the @new_symbol tag is removed. > The important thing here is that the library writer gives fair warning > that a breaking change is coming, giving the user time to update his > code at his convenience. Or, if the tag is added to the actual implementation, then there IS NO FREAKING BREAKING CHANGE until the @new_func or whatever tag is removed, but the library user is STILL given fair (albeit useless, imo) warning that it will be (kinda sorta) broken (with a downright trivial fix) in a follow-up release. > I'd say the need for this tag is going to be very rare, That's for certain. > but necessary when it is needed. I can't even begin to comprehend a situation where a heads-up about a mere "FQN needed here" qualifies as something remotely as strong as "necessary". Unless the scenario hinges on the current brokenness of FQNs, which seriously needs to be fixed anyway. > I don't think there's a definitive > methodology for deciding when it's needed and when it's not. Would be > case-by-case. Sounds like useless cognitive bother for the library author for extremely minimal (at best) benefit to the library user. Doesn't sound like sufficient justification for a new language feature to me. > This is not anti-breakage. Code is going to break. It's just a > warning that the breaking is coming. > It's going out of the way to create and use a new language feature purely out of fear of a trivial breakage situation. Actual breakage or not, it's "all breakages are scary and we must bend over backwards because of them" paranoia, just the same.
Re: How to avoid throwing an exceptions for a built-in function?
On Wednesday, 10 May 2017 at 12:40:41 UTC, k-five wrote: I have a line of code that uses the "to" function in std.conv for a purpose like: int index = to!int( user_apply[ 4 ] ); // string to int When user_apply[ 4 ] has a value, there is no problem; but when it is empty: "" it throws a ConvException and I want to avoid this exception. Currently I have to use a dummy catch: try{ index = to!int( user_apply[ 4 ] ); } catch( ConvException conv_error ){ // nothing } I don't need to handle that, so is there any way to prevent this exception? try this: https://dlang.org/phobos/std_exception.html#ifThrown
Re: Fantastic exchange from DConf
On Thursday, 11 May 2017 at 09:39:57 UTC, Kagamin wrote: https://bugs.chromium.org/p/project-zero/issues/detail?id=1252=5 - a vulnerability in an application that doesn't go on the internet. This link got me thinking: When will we see the first class action lawsuit for criminal negligence for not catching a buffer overflow (or other commonly known bug) which causes identity theft or loss of data? Putting aside the moral questions, the people suing would have a good case, given the wide knowledge of these bugs and the availability of tools to catch/fix them. I think they could prove negligence/incompetence and win given the right circumstances. Would be an interesting question to pose to any managers who don't want to spend time on security.
Re: How to avoid throwing an exceptions for a built-in function?
On Wednesday, 10 May 2017 at 21:44:32 UTC, Andrei Alexandrescu wrote: On 5/10/17 3:40 PM, k-five wrote: --- I don't need to handle that, so is there any way to prevent this exception? Use the "parse" family: https://dlang.org/phobos/std_conv.html#parse -- Andrei --- This is my answer :). I want a way to convert a string without facing any exceptions. But maybe I do not understand the documentation so well. It says: The parse family of functions works quite like the to family, except that: 1 - It only works with character ranges as input. 2 - It takes the input by reference. (This means that rvalues - such as string literals - are not accepted: use to instead.) 3 - It advances the input to the position following the conversion. 4 - It does not throw if it could not convert the entire input. Here, number 4: It does not throw if it could not convert the entire input. Then it says: Throws: A ConvException if the range does not represent a bool. Well, it says different things about throwing! Also I tested this: import std.stdio; import std.conv: parse; void main( string[] args ){ string str = "string"; int index = parse!int( str ); writeln( "index: ", index ); } the output: std.conv.ConvException@/usr/include/dmd/phobos/std/conv.d(2111): Unexpected 's' when converting from type string to type int and so on ... Please correct me if I am wrong.
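For what it's worth, the two statements in the docs can be reconciled: parse does not throw when only the *tail* of the input fails to convert (it stops at the first non-convertible character and leaves the rest in place), but it does still throw when not even a prefix can be converted, which is exactly what happens with "string". A small sketch of both cases:

```d
import std.conv : parse, ConvException;
import std.exception : assertThrown;

void main()
{
    // A convertible prefix: parse consumes "12", leaves "abc", no throw.
    string partial = "12abc";
    assert(parse!int(partial) == 12);
    assert(partial == "abc");

    // No convertible prefix at all: parse still throws ConvException.
    string bad = "string";
    assertThrown!ConvException(parse!int(bad));
}
```

So parse avoids the "didn't consume everything" exception, but not the "couldn't convert anything" one; for the latter, a pre-check or ifThrown is still needed.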
Re: How to avoid throwing an exceptions for a built-in function?
On Wednesday, 10 May 2017 at 21:19:21 UTC, Stanislav Blinov wrote: On Wednesday, 10 May 2017 at 15:35:24 UTC, k-five wrote: On Wednesday, 10 May 2017 at 14:27:46 UTC, Stanislav Blinov --- I don't understand. If you don't want to take care of exceptions, then you just don't do anything, simply call to!int(str). Well I did that, but when the string is a valid type like: "10" there are no problems. But when the string is not valid, like: "abc", then the to! function throws an exception. Why do I not want to take care of that? Because I just need the value if the string is valid; otherwise, no matter what the value of the string is. First I just wrote: index = to!int( user_apply[ 4 ] ); And this code is part of a command-line program and the user may enter anything. So, for a valid string: ./program '10' // okay but for: ./program 'non-numerical' // throws an exception and 10 lines of errors appear on the screen (console) I just want to silence this exception. Of course it is useful for handling when someone wants to. But in my code I don't need to handle it. So I want to silence it, without using a try{}catch(){} block. I just wondered about try-catch and wanted to know whether there would be a better way instead of a dummy try-catch block. Thanks for replying and mentioning this. And I am sorry if you got confused, since I am new to writing English.
Re: Fantastic exchange from DConf
On Wednesday, 10 May 2017 at 17:51:38 UTC, H. S. Teoh wrote: Haha, I guess I'm not as good of a C coder as I'd like to think I am. :-D That comment puts you ahead of the pack already :)
Re: Fantastic exchange from DConf
On Monday, May 08, 2017 23:15:12 H. S. Teoh via Digitalmars-d wrote: > Recently I've had the dubious privilege of being part of a department > wide push on the part of my employer to audit our codebases (mostly C, > with a smattering of C++ and other code, all dealing with various levels > of network services and running on hardware expected to be "enterprise" > quality and "secure") and fix security problems and other such bugs, > with the help of some static analysis tools. I have to say that even > given my general skepticism about the quality of so-called "enterprise" > code, I was rather shaken not only to find lots of confirmation of my > gut feeling that there are major issues in our codebase, but even more > by just HOW MANY of them there are. In a way, it's amazing how successful folks can be with software that's quite buggy. A _lot_ of software works just "well enough" that it gets the job done but is actually pretty terrible. And I've had coworkers argue to me before that writing correct software really doesn't matter - it just has to work well enough to get the job done. And sadly, to a great extent, that's true. However, writing software that works just "well enough" does come at a cost, and if security is a real concern (as it increasingly is), then that sort of attitude is not going to cut it. But since the cost often comes later, I don't think that it's at all clear that we're going to really see a shift towards languages that prevent such bugs. Up front costs tend to have a powerful impact on decision making - especially when the cost that could come later is theoretical rather than guaranteed.
Now, given that D is also a very _productive_ language to write in, it stands to reduce up front costs as well, and that combined with its ability to reduce the theoretical security costs, we could have a real win, but with how entrenched C and C++ are and how much many companies are geared towards not caring about security or software quality so long as the software seems to get the job done, I think that it's going to be a _major_ uphill battle for a language like D to really gain mainstream use on anywhere near the level that languages like C and C++ have. But for those who are willing to use a language that makes it harder to write code with memory safety issues, there's a competitive advantage to be gained. - Jonathan M Davis
[Issue 17382] void main(){}pragma(msg,main()); crashes DMD
https://issues.dlang.org/show_bug.cgi?id=17382 uplink.co...@googlemail.com changed: What|Removed |Added CC||uplink.co...@googlemail.com --- Comment #1 from uplink.co...@googlemail.com --- This is because the void main() gets type-painted to int main(); Fix pending. --
[Issue 14894] mangling of mixins and lambdas is not unique and depends on compilation flags
https://issues.dlang.org/show_bug.cgi?id=14894 uplink.co...@googlemail.com changed: What|Removed |Added CC||uplink.co...@googlemail.com --- Comment #9 from uplink.co...@googlemail.com --- The way I see forward is to not use a number. But to disambiguate by a reproducible hash. --
Re: I'm looking for job on D
Here are some projects that I used on the last job: https://github.com/httpal/XML_PARSE_CLINIC https://github.com/httpal/CHECK_STRUCT
Re: Snap packages for DMD and DUB
On Thursday, 11 May 2017 at 11:47:10 UTC, Piotr Mitana wrote: Hello, I have tried those snaps recently on Ubuntu 16.10. There were -fPIC related errors (if you need the output, I can install the snap again and post it tomorrow). Ouch! Thanks for reporting this: it sounds like something similar to what Attila was reporting for his attempts at building on Arch. I'll look into it and see if I can fix the packaging side (it's probably possible by tweaking CFLAGS), before submitting fixes upstream if it's something that can reasonably be addressed there.
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
On Thursday, 11 May 2017 at 11:31:58 UTC, Nikolay wrote: On Thursday, 11 May 2017 at 11:10:50 UTC, Joakim wrote: Well, if you don't like what's available and NetBSD doesn't provide them... up to you to decide where that leads. In any case it was not my decision. LDC does not use x87 for math functions on other OS's. LDC does use x87 reals on x86, the only exception I'm aware of being Windows (MSVC targets, MinGW would use x87), as the MS C runtimes don't support x87 at all (and they also define a 64-bit `long double` type, so the choice was pretty obvious). I don't have a strong opinion on whether the NetBSD x86 real should be 80 bits with a lot of tweaked tests or 64 bits. The latter is surely the simpler approach though.
Re: Static foreach pull request
On 5/10/17 3:45 PM, Stefan Koch wrote: On Wednesday, 10 May 2017 at 18:41:30 UTC, Timon Gehr wrote: On 10.05.2017 16:21, Stefan Koch wrote: On Wednesday, 10 May 2017 at 14:13:09 UTC, Timon Gehr wrote: On 10.05.2017 15:18, Stefan Koch wrote: if you try assert([] is null), it should fail. It doesn't. I have tried to make that point before, unsuccessfully. Empty arrays may or may not be null, but the empty array literal is always null.

    cat t3.d
    static assert([] is null);
    ---
    dmd t.d -c
    ---
    t3.d(1): Error: static assert ([] is null) is false

    void main(){
        import std.stdio;
        enum x = [] is null;
        auto y = [] is null;
        writeln(x," ",y); // "false true"
    }

Oh fudge. Another case where the ctfe-engine goes the right way; and the runtime version does not ... we should fix this one of these days. [] lowers to a D runtime call (in lifetime, the function to allocate an array) with no elements. The function must return a valid array with 0 length. null is such an array. It's not an error at all, and should not be fixed or changed IMO. You almost never want to use 'is' on an array, as it compares just the pointer and length. Usually you want '=='. For instance, you would never do: [1, 2, 3] is [1, 2, 3] And expect any sane result. It might actually be true on some compiler that's clever enough! -Steve
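Steve's point that 'is' compares only the pointer and length, while '==' compares contents, is easy to see with an empty slice taken from a non-null array. A small sketch:

```d
void main()
{
    int[] a = [1, 2, 3];
    int[] empty = a[0 .. 0];   // length 0, but ptr still points into a

    assert(empty == null);      // '==' against null only checks length == 0
    assert(empty !is null);     // 'is' sees the non-null ptr: not identical
    assert(empty.ptr !is null); // the slice kept a's pointer
}
```

This is exactly the "empty arrays may or may not be null" case from the thread: `empty` compares equal to null by value but is not identical to it.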
Re: Thoughts on some code breakage with 2.074
On 5/11/17 5:37 AM, deadalnix wrote: On Wednesday, 10 May 2017 at 19:06:40 UTC, Ali Çehreli wrote: Bummer for H. S. Teoh I guess... :/ Although I prefer explicit over implicit in most cases, I've never graduated from if(p) and still using it happily. :) Ali All bool conversions in D are value based, not identity based. Not only is this error prone, it is inconsistent. What do "value based" and "identity based" mean? bool conversions vary widely and allow a lot of flexibility (at least for structs): if(arr) -> same as if(arr.ptr) if(someInt) -> same as if(someInt != 0) if(someObject) -> if(someObject !is null && someObject.invariant) if(someStruct) -> if(someStruct.opCast!(bool)) -Steve
Re: Thoughts on some code breakage with 2.074
On 5/10/17 2:49 PM, Jonathan M Davis via Digitalmars-d wrote: On Wednesday, May 10, 2017 05:05:59 Ali Çehreli via Digitalmars-d wrote: On 05/09/2017 10:34 AM, H. S. Teoh via Digitalmars-d wrote: > I even appreciate breakages that eventually force me to write more > > readable code! A not-so-recent example: >/* Used to work, oh, I forget which version now, but it used to > > * work: */ > >MyType* ptr = ...; >if (someCondition && ptr) { ... } > > After upgrading the compiler, I get a warning that using a pointer as a > condition is deprecated. At first I was mildly annoyed... but then to > > make the warning go away, I wrote this instead: >/* Look, ma! Self-documenting, readable code! */ >MyType* ptr = ...; >if (someCondition && ptr !is null) { ... } Can you show an example please. I don't see this being required by 2.074.0 (compiled with -w -de). I think that that's the one that Andrei and Vladimir didn't like, because they actually used the conversion to bool correctly in their code a bunch (whereas most everyone else thought that it was too error prone), and the deprecation ended up being removed. I think that was the if(array) fiasco. I don't ever remember if(ptr) being deprecated. In fact, I'd go as far as saying that maybe H.S. Teoh misremembers the array thing as pointers. The biggest reason is that a hugely useful pattern with this is: if(auto x = key in someAA) { // use *x without more hash lookup costs. } I can't imagine anyone attempting to force this to break without a loud backlash. I think if(ptr) is mostly universally understood to mean the pointer is not null. -Steve
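The `in` pattern quoted above works because the 'in' operator returns a pointer to the value (or null when the key is absent), so the presence test and the lookup share a single hash computation. A minimal sketch:

```d
void main()
{
    int[string] counts = ["apple": 3];

    // One hash lookup: the returned pointer doubles as the presence test.
    if (auto p = "apple" in counts)
        assert(*p == 3);
    else
        assert(false, "key should be present");

    // Absent key: 'in' yields null, so the branch isn't taken.
    assert(("banana" in counts) is null);
}
```

Without the pointer form, the idiomatic alternative (`if ("apple" in counts) use counts["apple"];`) would hash the key twice.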
Re: Thoughts on some code breakage with 2.074
On Wednesday, 10 May 2017 at 19:06:40 UTC, Ali Çehreli wrote: On 05/10/2017 11:49 AM, Jonathan M Davis via Digitalmars-d wrote: > On Wednesday, May 10, 2017 05:05:59 Ali Çehreli via Digitalmars-d wrote: >> On 05/09/2017 10:34 AM, H. S. Teoh via Digitalmars-d wrote: >> > After upgrading the compiler, I get a warning that using a pointer as a >> > condition is deprecated. > I think that that's the one that Andrei and Vladimir didn't like, because > they actually used the conversion to bool correctly in their code a bunch > (whereas most everyone else thought that it was too error prone), and the > deprecation ended up being removed. > > - Jonathan M Davis Bummer for H. S. Teoh I guess... :/ Although I prefer explicit over implicit in most cases, I've never graduated from if(p) and still using it happily. :) Yes, me too (in C). It is conceptually, imho, OK to use it that way, as a pointer does have a boolean semantic: either it is a valid pointer or it is not. The value of the pointer itself is relevant only in special cases (cases in which it has to be converted to an integral type anyway) and is in any case extremely machine dependent. One can even make the case that checking "ptr !is null", or in C "ptr != 0", is inconsistent, as it is the only operation where the value of a pointer is used, which is, at least for C, a source of confusion. The 0 value in a pointer context will not necessarily compile to a 0 value in the generated assembly. Some machines have null pointers that are not represented by all-zero bit patterns, and the C standard has to take these (granted, obsolete) machines into account.
Re: Concerns about using struct initializer in UDA?
On Thursday, 11 May 2017 at 11:36:17 UTC, Andre Pany wrote: On Thursday, 11 May 2017 at 10:51:09 UTC, Stefan Koch wrote: On Thursday, 11 May 2017 at 10:49:58 UTC, Andre Pany wrote: [...] We have that syntax already. I do not understand. Should the syntax I have written already work as I expect, or do you mean my proposal is not possible because the syntax is ambiguous? Kind regards André I thought it should have worked already. My apologies, the struct-literal initialization syntax is unsupported because of the parser implementation. I don't know if you would introduce new ambiguities; I suspect that you wouldn't.
Re: alias and UDAs
On 05/11/2017 12:39 PM, Andre Pany wrote: in this example, both asserts fails. Is my assumption right, that UDA on alias have no effect? If yes, I would like to see a compiler warning. But anyway, I do not understand why the second assertion fails. Are UDAs on arrays not allowed? import std.traits: hasUDA; enum Flattened; struct Foo { int bar; } @Flattened alias FooList = Foo[]; struct Baz { FooList fooList1; @Flattened FooList[] fooList2; } void main() { Baz baz; static assert(hasUDA!(baz.fooList1, "Flattened")); // => false static assert(hasUDA!(baz.fooList2, "Flattened")); // => false } 1) You have to test against `Flattened`, not `"Flattened"`. A string is a valid UDA, but you're not using the string on the declarations. When you fix this, the second assert passes. 2) `Baz.fooList1` doesn't have any attributes. Attributes apply to declarations. If it's valid, the attribute on `FooList` applies only to `FooList`. It doesn't transfer to `Baz.fooList1`. If anything, you could assert that `hasUDA!(FooList, Flattened)` holds. Maybe you could, if it compiled. 3) Why does `hasUDA!(FooList, Flattened)` fail to compile? The error message reads: "template instance hasUDA!(Foo[], Flattened) does not match template declaration hasUDA(alias symbol, alias attribute)". We see that `FooList` has been replaced by `Foo[]`. It's clear then why the instantiation fails: `Foo[]` isn't a symbol. Unfortunately, the spec is a bit muddy on this topic. On the one hand it says that "AliasDeclarations create a symbol", but it also says that "Aliased types are semantically identical to the types they are aliased to" [1]. In practice, the compiler doesn't seem to create a symbol. The alias identifier is simply replaced with the aliased thing, and you can't use the alias identifier as a symbol. That means, you might be able to add an attribute to `FooList`, but you can't get back to it, because whenever you use `FooList` it's always replaced by `Foo[]`. 
And `Foo[]` doesn't have the attribute, of course. I agree that it would probably make sense to disallow putting attributes on aliases. You can also mark aliases `const`, `static`, `pure`, etc. And they all have no effect. [1] http://dlang.org/spec/declaration.html#AliasDeclaration
Re: Snap packages for DMD and DUB
On Monday, 8 May 2017 at 20:05:01 UTC, Joseph Rushton Wakeling wrote: Hello all, As announced at DConf 2017, snap packages are now available for DMD 2.074.0 and DUB 1.3.0 in the official snap store. These should allow for installation on multiple different Linux distros (see below) on i386 and amd64 systems. Hello, I have tried those snaps recently on Ubuntu 16.10. There were -fPIC related errors (if you need the output, I can install the snap again and post it tomorrow).
Re: Concerns about using struct initializer in UDA?
On Thursday, 11 May 2017 at 10:51:09 UTC, Stefan Koch wrote: On Thursday, 11 May 2017 at 10:49:58 UTC, Andre Pany wrote: Hi, I know there are concerns about struct initialization in method calls but what is about struct initializer in UDA? Scenario: I want to set several UDA values. At the moment I have to create for each value a structure with exactly 1 field. But it would be quite nice if I could use struct initialization to group these values: struct Field { string location; string locationName; } struct Foo { @A = {locationName: "B"} int c; // <-- } void main() {} Of course the syntax is questionable, it is just a proposal. What do you think? Kind regards André We have that syntax already. I do not understand. Should the syntax I have written already work as I expect or do you mean my proposal is not possible as the syntax is ambiguous? Kind regards André
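As a side note to the proposal above (this observation is mine, not from the thread): the *positional* struct-literal form already works as a UDA today, so multiple values can be grouped in one attribute; it is only the named-field `{locationName: "B"}` form that is unavailable. A sketch:

```d
import std.traits : getUDAs;

struct Field
{
    string location;
    string locationName;
}

struct Foo
{
    // Positional construction is accepted as a UDA today; only the
    // named-field {locationName: "B"} form from the proposal is not.
    @Field("", "B") int c;
}

void main()
{
    enum f = getUDAs!(Foo.c, Field)[0];
    static assert(f.locationName == "B");
}
```

The drawback, which motivates the proposal, is that positional construction forces dummy values (the empty `location` string here) for fields one doesn't care about.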
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
On Thursday, 11 May 2017 at 11:10:50 UTC, Joakim wrote: Well, if you don't like what's available and NetBSD doesn't provide them... up to you to decide where that leads. In any case it was not my decision. LDC does not use x87 for math functions on other OS's.
Re: alias and UDAs
On Thursday, 11 May 2017 at 10:57:22 UTC, Stanislav Blinov wrote: On Thursday, 11 May 2017 at 10:39:03 UTC, Andre Pany wrote: [...] It should've been alias FooList = @Flattened Foo[]; which will generate a compile-time error (UDAs not allowed for alias declarations). And then: static assert(hasUDA!(baz.fooList2, Flattened)); No quotes, since Flattened is an enum, not a string Thanks for the explanation. I think I will create a bug report for this statement: @Flattened alias FooList = Foo[]; The UDA has no effect as far as I understand. Kind regards André
Re: DIP 1007 Preliminary Review Round 1
On 5/11/17 12:11 AM, Nick Sabalausky (Abscissa) wrote: On 04/25/2017 08:33 AM, Steven Schveighoffer wrote: In the general case, one year is too long. A couple compiler releases should be sufficient. * When the @future attribute is added, would one add it on a dummy symbol or would one provide the implementation as well? dummy symbol. Think of it as @disable, but with warning output instead of error. This is a pointless limitation. What is the benefit of requiring the author to *not* provide an implementation until the transition period is over? It runs counter to normal workflow. The idea (I think) is to have a version of the library that functions *exactly* like the old version, but helpfully identifies where a future version will not function properly. This is like @deprecate. You don't put a @deprecate on a symbol and at the same time remove the symbol's implementation -- you leave it as it was, and just tag it so the warning shows up. That's step one. Instead, why not just say "Here's a new function. But !!ZOMG!! what if somebody is already using a function by that name??!? They'd have use FQNs to disambiguate! Gasp!!! We can't have that! So, fine, if it's that big of a deal, we'll just instruct the compiler to just NOT pick up this function unless it's specifically requested via FQN". The point is not to break code without fair warning. This is the progression I have in mind: In Library version 1 (LV1), the function doesn't exist. In LV2, the function is marked as @future. In LV3, the function is implemented and the @future tag is removed. LV1 + user code version 1 (UCV1) -> works * library writer updates his version LV2 + UCV1 -> works, but warns that it will not work in a future version. * user updates his code to mitigate the potential conflict LV2 + UCV2 -> works, no warnings. LV3 + UCV2 -> works as expected. 
The important thing here is that the library writer gives fair warning that a breaking change is coming, giving the user time to update his code at his convenience. If he does so before the next version of the library comes out, then his code works for both the existing library version AND the new one without needing to rush a change through. That sounds FAR better to me than "Here's a new function, but we gotta keep it hidden in a separate branch/version/etc and not let anyone use it until we waste a bunch of time making sure everyone's code is all updated and ready so that once we let people use it nobody will have to update their code with FQNs, because we can't have that, can we?" It depends on both the situation and the critical nature of the symbol in question. I'd say the need for this tag is going to be very rare, but necessary when it is needed. I don't think there's a definitive methodology for deciding when it's needed and when it's not. Would be case-by-case. Pardon me for saying so, and so bluntly, but honestly, this whole discussion is just stupid. It's full-on C++-grade anti-breakage hysteria. There are times when code breakage is a legitimate problem. This is not REMOTELY one of them. This is not anti-breakage. Code is going to break. It's just a warning that the breaking is coming. -Steve
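The LV1/LV2/LV3 progression described above can be sketched in D-like code. Note that @future is DIP 1007's *proposed* attribute; no released compiler accepts it, so this is illustrative pseudocode showing the three library versions side by side, not compilable D:

```d
// --- Library version 1 (LV1): the symbol does not exist yet. ---
module lib;

// --- Library version 2 (LV2): a stub tagged with the proposed @future. ---
// Like @disable, the stub has no body; unlike @disable, user code that
// declares or calls a conflicting `parse` gets a *warning*, not an error.
@future void parse(string input);

// --- Library version 3 (LV3): the tag is removed, implementation lands. ---
void parse(string input) { /* real implementation */ }
```

The point of the LV2 step is exactly the "fair warning" in the progression: UCV1 still compiles against LV2, but the warning tells the user what to fix before LV3 ships.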
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
On Thursday, 11 May 2017 at 10:22:29 UTC, Dominikus Dittes Scherkl wrote: On Thursday, 11 May 2017 at 10:07:32 UTC, Joakim wrote: On Thursday, 11 May 2017 at 02:05:11 UTC, Nikolay wrote: I am porting LDC to NetBSD amd64, and I ask advice on how to handle the real type. NetBSD has limited support for this type. What is long double on NetBSD/amd64, 64-bit or full 80-bit? We were talking about this when I was porting to Android/x86, where long double is 64-bit but the FPU should support 80-bit. Iain suggested just sticking to the ABI, ie using 64-bit if that's how long double is defined (see next three comments after linked comment): https://github.com/dlang/phobos/pull/2150#issuecomment-42731651 This type exists, but the standard library does not provide the full set of math functions for it (e.g. sine, cosine, etc.). Currently I just forward all function calls to their 64-bit counterparts, but in this case a set of unit tests fail. I see the following approaches to handle this issue: - Totally remove the 80-bit real type from the NetBSD port (make real==double) - Change tests and skip asserts for NetBSD There is one additional approach: implement these functions in druntime, but that is too big/massive a work for me. I wouldn't worry about it too much. If someone really needs this, they will have to chip in. Dmd uses compiler intrinsics for those trig functions, and I notice that they seem to just call the native x86 asm instructions: I hate it if D doesn't fully support the hardware just to be compatible with some badly designed C library. This is not just "some... C library," we're talking about the system ABI here! Hey, it's a system language! I want to be able to use the hardware I have to its fullest! You can: I left real as 80-bit there, but it's irrelevant as Android/x86 is basically dead since Intel exited the mobile market. And for calling C functions you always have to find the fitting D type by checking "mant_dig" and map accordingly. That's really not so difficult.
The problem is that std.math depends on some basic C math functions for the native long double type, ie the D "real" equivalent, and if your system ABI defines long double to be less precise than what the hardware supports, those more precise math functions may not exist. Hell, as Nikolay just said, they may not exist even if your ABI uses the same precision as the hardware! In that case, where your platform doesn't provide such precise C math functions, it's tough for me to care. If you really need the precision, roll up your sleeves and add it, whether in C or D. On Thursday, 11 May 2017 at 10:33:21 UTC, Nikolay wrote: What is long double on NetBSD/amd64, 64-bit or full 80-bit? 80-bit, but the function set is not complete, e.g. acos supports long double http://netbsd.gw.com/cgi-bin/man-cgi?acos+3+NetBSD-7.0 but cos does not http://netbsd.gw.com/cgi-bin/man-cgi?cos+3+NetBSD-7.0 In that case, defining real as 80-bit and modifying some tests for NetBSD seems the way to go. You may want to look at my last Phobos patch for Android/x86, from a couple years ago: https://gist.github.com/joakim-noah/5d399fdcd5e484d6aaa2 On Thursday, 11 May 2017 at 10:07:32 UTC, Joakim wrote: Dmd uses compiler intrinsics for those trig functions, and I notice that they seem to just call the native x86 asm instructions: https://github.com/dlang/dmd/blob/master/src/ddmd/root/longdouble.c#L428 As far as I know, the native x87 implementation of many math functions is terrible, and it is rarely used in the real world. Well, if you don't like what's available and NetBSD doesn't provide them... up to you to decide where that leads.
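The "change tests and skip asserts for NetBSD" option need not hard-code version blocks for every platform; tests can key off the actual precision of real via its mant_dig property, as the thread suggests for mapping C types. A minimal sketch (the tolerance formula is my own, not from the thread):

```d
import std.math : cos, fabs, PI;

void main()
{
    // real.mant_dig is 64 for x87 80-bit reals and 53 when real == double,
    // so derive the assertion tolerance from the precision actually present.
    enum real eps = 1.0L / (1UL << (real.mant_dig - 4));

    // cos may be forwarded to the 64-bit C routine on a platform whose libm
    // lacks the long double variant; the derived tolerance stays valid.
    real r = cos(PI / 3.0L);
    assert(fabs(r - 0.5L) < eps);
}
```

This keeps one test body for all ports instead of scattering platform-specific skips through the suite.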
Re: "I made a game using Rust"
On 5/10/17 7:54 PM, H. S. Teoh via Digitalmars-d wrote: On Wed, May 10, 2017 at 07:52:53PM -0400, Steven Schveighoffer via Digitalmars-d wrote: [...] I'll reiterate here: if the compiler's sanity is suspect, there's nothing much for it to do except crash. hard. And tell you where to look. [...] OTOH, it's not very nice from a user's POV. It would be nice(r) if this PR could be looked at and brought to a mergeable state: https://github.com/dlang/dmd/pull/6103 Yes, totally agree. Proper reporting of errors is a very important thing. Here's another one: https://issues.dlang.org/show_bug.cgi?id=13810 -Steve
Re: alias and UDAs
On Thursday, 11 May 2017 at 10:39:03 UTC, Andre Pany wrote: Hi, in this example, both asserts fail. Is my assumption right that UDAs on aliases have no effect? If yes, I would like to see a compiler warning. But anyway, I do not understand why the second assertion fails. Are UDAs on arrays not allowed? import std.traits: hasUDA; enum Flattened; struct Foo { int bar; } @Flattened alias FooList = Foo[]; struct Baz { FooList fooList1; @Flattened FooList[] fooList2; } void main() { Baz baz; static assert(hasUDA!(baz.fooList1, "Flattened")); // => false static assert(hasUDA!(baz.fooList2, "Flattened")); // => false } Kind regards André It should've been alias FooList = @Flattened Foo[]; which will generate a compile-time error (UDAs not allowed for alias declarations). And then: static assert(hasUDA!(baz.fooList2, Flattened)); No quotes, since Flattened is an enum, not a string.
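A minimal corrected version of the snippet, following the reply: attach the UDA to the fields themselves (not the alias, where it is silently ignored) and pass the enum symbol rather than a string to hasUDA. This is a sketch; tagging fooList1 as well is my addition:

```d
import std.traits : hasUDA;

enum Flattened;

struct Foo { int bar; }

// UDAs on alias declarations have no effect, so the alias carries none.
alias FooList = Foo[];

struct Baz
{
    @Flattened FooList fooList1;   // tag the field itself instead
    @Flattened FooList[] fooList2;
}

void main()
{
    // Pass the symbol Flattened, not the string "Flattened".
    static assert(hasUDA!(Baz.fooList1, Flattened));
    static assert(hasUDA!(Baz.fooList2, Flattened));
}
```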
Re: Concerns about using struct initializer in UDA?
On Thursday, 11 May 2017 at 10:49:58 UTC, Andre Pany wrote: Hi, I know there are concerns about struct initialization in method calls, but what about struct initializers in UDAs? Scenario: I want to set several UDA values. At the moment I have to create a structure with exactly one field for each value. But it would be quite nice if I could use struct initialization to group these values: struct Field { string location; string locationName; } struct Foo { @A = {locationName: "B"} int c; // <-- } void main() {} Of course the syntax is questionable, it is just a proposal. What do you think? Kind regards André We have that syntax already.
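The closing "We have that syntax already" presumably refers to struct literals as UDA arguments, which already let one attribute group several values (positionally, rather than with the named-field syntax proposed above). A sketch; the attribute values are invented for illustration:

```d
import std.traits : getUDAs;

struct Field
{
    string location;
    string locationName;
}

struct Foo
{
    // A struct literal as a UDA groups both values in a single attribute.
    @Field("warehouse", "B") int c;
}

void main()
{
    // Read the attribute back at compile time.
    enum f = getUDAs!(Foo.c, Field)[0];
    static assert(f.location == "warehouse");
    static assert(f.locationName == "B");
}
```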
Concerns about using struct initializer in UDA?
Hi, I know there are concerns about struct initialization in method calls, but what about struct initializers in UDAs? Scenario: I want to set several UDA values. At the moment I have to create a structure with exactly one field for each value. But it would be quite nice if I could use struct initialization to group these values: struct Field { string location; string locationName; } struct Foo { @A = {locationName: "B"} int c; // <-- } void main() {} Of course the syntax is questionable, it is just a proposal. What do you think? Kind regards André
alias and UDAs
Hi, in this example, both asserts fail. Is my assumption right that UDAs on aliases have no effect? If yes, I would like to see a compiler warning. But anyway, I do not understand why the second assertion fails. Are UDAs on arrays not allowed? import std.traits: hasUDA; enum Flattened; struct Foo { int bar; } @Flattened alias FooList = Foo[]; struct Baz { FooList fooList1; @Flattened FooList[] fooList2; } void main() { Baz baz; static assert(hasUDA!(baz.fooList1, "Flattened")); // => false static assert(hasUDA!(baz.fooList2, "Flattened")); // => false } Kind regards André
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
What is long double on NetBSD/amd64, 64-bit or full 80-bit? 80-bit, but the function set is not complete, e.g. acos supports long double http://netbsd.gw.com/cgi-bin/man-cgi?acos+3+NetBSD-7.0 but cos does not http://netbsd.gw.com/cgi-bin/man-cgi?cos+3+NetBSD-7.0 On Thursday, 11 May 2017 at 10:07:32 UTC, Joakim wrote: Dmd uses compiler intrinsics for those trig functions, and I notice that they seem to just call the native x86 asm instructions: https://github.com/dlang/dmd/blob/master/src/ddmd/root/longdouble.c#L428 As far as I know, the native x87 implementation of many math functions is terrible, and it is rarely used in the real world.
Re: Fantastic exchange from DConf
On Thursday, 11 May 2017 at 09:39:57 UTC, Kagamin wrote: On Saturday, 6 May 2017 at 06:26:29 UTC, Joakim wrote: Walter: Anything that goes on the internet. https://bugs.chromium.org/p/project-zero/issues/detail?id=1252=5 - a vulnerability in an application that doesn't go on the internet. To be fair, if you're not on the internet, you're unlikely to get any files that will trigger that bug in Microsoft's malware checker, as they noted that they first saw it on a website on the internet. Of course, you could still get such files on a USB stick, which just highlights that unless you completely cut your computer off from the world, you can get bitten, just more slowly and with fewer consequences than on the internet. I wondered what that Project Zero topic had to do with Chromium; it turns out it's a security team that Google started three years ago to find zero-day holes in almost any software. That guy from the team also found the recently famous Cloudbleed bug that affected Cloudflare. They have a blog up that details holes they found in all kinds of stuff, security porn if you will ;) https://googleprojectzero.blogspot.com
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
On Thursday, 11 May 2017 at 10:07:32 UTC, Joakim wrote: On Thursday, 11 May 2017 at 02:05:11 UTC, Nikolay wrote: I am porting LDC to NetBSD amd64, and I ask advice on how to handle the real type. NetBSD has limited support for this type. What is long double on NetBSD/amd64, 64-bit or full 80-bit? We were talking about this when I was porting to Android/x86, where long double is 64-bit but the FPU should support 80-bit. Iain suggested just sticking to the ABI, ie using 64-bit if that's how long double is defined (see next three comments after linked comment): https://github.com/dlang/phobos/pull/2150#issuecomment-42731651 This type exists, but the standard library does not provide the full set of math functions for it (e.g. sine, cosine, etc.). Currently I just forward all function calls to their 64-bit counterparts, but in this case a set of unit tests fail. I see the following approaches to handle this issue: - Totally remove the 80-bit real type from the NetBSD port (make real==double) - Change tests and skip asserts for NetBSD There is one additional approach: implement these functions in druntime, but that is too big/massive a work for me. I wouldn't worry about it too much. If someone really needs this, they will have to chip in. Dmd uses compiler intrinsics for those trig functions, and I notice that they seem to just call the native x86 asm instructions: I hate it if D doesn't fully support the hardware just to be compatible with some badly designed C library. Hey, it's a system language! I want to be able to use the hardware I have to its fullest! And for calling C functions you always have to find the fitting D type by checking "mant_dig" and map accordingly. That's really not so difficult.
Re: DIP 1007 Preliminary Review Round 1
On Thursday, 11 May 2017 at 03:46:55 UTC, Nick Sabalausky (Abscissa) wrote: 1. Why are FQNs alone (assume they still worked like they're supposed to) not good enough? Needs to be addressed in DIP. Currently isn't. It is already addressed in the DIP. FQNs only help if they are used, and current idiomatic D code tends to rely on unqualified imports/names. 2. The library user is already going to be informed they need to fix an ambiguity anyway, with or without this DIP. Only if you consider "after a compiler/library upgrade your project doesn't work anymore" sufficient "informing", which we definitely don't. 3. Updating code to fix the ambiguity introduced by a new symbol is always trivial (or would be if FQNs were still working properly and hadn't become needlessly broken) and inherently backwards-compatible with the previous version of the lib. A trivial compilation-error fixup that takes 5 minutes to address in a single project takes up to one month to propagate across all our libraries and projects, in my experience. Actually fixing code is hardly ever the problem with breaking changes. It is the synchronization between developers and projects that makes it so painful. And in the override case, there is no backwards-compatible solution available at all (see Steven's comment). Unlike when symbols are being added to a lib, the fix in user code for a deprecation *can* be non-trivial and *can* be non-backwards-compatible with the previous version of the lib, depending on the exact circumstances. Therefore, unlike added symbols, the "deprecation" feature for removed symbols is justified. Please elaborate. The user-code fix is always either using an FQN or renaming; what non-trivial case comes to your mind? 4. Unlike deprecation, this feature works contrary to the actual flow of development and basic predictability: When a lib author wants to remove a symbol, they already know what the symbol is, what it's named, and that they have X or Y reason to remove it.
But when a lib author wants to add a symbol, it's more speculative: They don't actually KNOW such details until the feature is actually written, implemented, and just about ready for release. At which point it's a bit late, and awkward, to go putting in a "foo *will* be added". You describe a typical library that doesn't follow SemVer and generally doesn't bother much about providing any upgrade stability. Naturally, such a library's developer will ignore `@future` completely and keep following the same development patterns. Not everyone is like that, though. This document (https://github.com/sociomantic-tsunami/neptune/blob/master/doc/library-maintainer.rst) explains the versioning/development model we use for all D libraries, and within such a model a feature that is written for one major version can be added as `@future` in the previous major version at the same time. And for the druntime object.d case it is pretty much always worth the gain to delay merging an already implemented addition for one release, putting a `@future` stub in the one before. There can never be any hurry, so there is no way to be "late".