Re: D is dead
On Thursday, 23 August 2018 at 11:02:31 UTC, Mike Franklin wrote: On Thursday, 23 August 2018 at 10:41:03 UTC, Jonathan M Davis wrote: Languages pretty much always get more complicated over time, and unless we're willing to get rid of more stuff, it's guaranteed to just become more complicated over time rather than less. "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." -- Antoine de Saint-Exupery I think that's actually a mistranslation from what he actually said, but it's still quite good. Liberties were taken there, but it's probably more applicable to this situation than a lot of the times C/Unix beards try to play it as though their tech of choice is beyond culpability. For context, he's talking about the process of aeronautical engineering and the thrust of this statement is really commentary on effort and elegance. A little before that, he talks about the grand irony that so much thoughtful effort and design goes into refining things so they're as simple as possible. But "simple" is relative to the thing and the task (my understanding is that "simple" kind of conflates "reliable" here, too). So this is where he rightly acknowledges that the process of refinement isn't a waste for what it removes even though it's often much greater than the effort to create something in the first place. It's wrapped in a broader understanding that you have to have something that works at all before you can streamline it. -Wyatt
Re: Being Positive
On Tuesday, 13 February 2018 at 07:35:19 UTC, Dukc wrote: On Monday, 12 February 2018 at 23:54:29 UTC, Arun Chandrasekaran wrote: Sorry if I'm hurting someone's sentiment, but is it just me who is seeing so much negative trend in the D forum about D itself? Well, programmers are engineers, and engineers tend to focus on things that need improvement. We aren't constantly effusive and positive because we care. We care and we see the cracks in the plaster and know that we, all of us, can do better; can BE better. Often all in different ways that others don't agree with. And that's fine. That said, there's a difference between constructive and destructive negativity. It pays to recognise the difference and not indulge the latter. -Wyatt
Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?
On Wednesday, 22 November 2017 at 14:51:02 UTC, codephantom wrote: The core language of D does NOT need what C# is proposing - that is my view. "Need"? Perhaps not. But so far, I haven't seen any arguments that refute the utility of mitigating patterns of human error. If, over time, a large number of D programmers have the same laissez-faire approach towards checking for null, as C# programmers, then maybe they'll start demanding the same thing - but even then, I'll argue the same points I've argued thus far. Null references have been a problem in every language that has them. Just because D is much nicer than its predecessors (and contemporaries, IMO) doesn't mean the "bad old days" (still in progress) of C and C++ didn't happen or that we cannot or should not learn from the experience. Tony Hoare doesn't call null his sin and "billion dollar mistake" as just a fit of pique. In other words, "Well don't do that, silly human!" ends up being an appeal to tradition. Perhaps that's why I've never considered nulls to be an issue. I take proactive steps to protect my code, before the compiler ever sees it. And actually, I cannot recall any null related error in any code I've deployed. It's just never been an issue. Oh, that explains it. He's a _robot_! ;) (The IDE thing is entirely irrelevant to this discussion; why did you bring that up?) And that's another reason why this topic interests me - why is it such an issue in the C# community? From Mads blog about it, it seems to be because they're just not doing null checks. And so the language designers are being forced to step in. If that's not the reason, then I've misunderstood, and await the correct explanation. Again, it's never _not_ been a problem. That C# is nearly old enough to vote in general elections but they're only just now finally doing this should be telling. (And I fully expect this conversation has been going for at least half of that time.) 
It's probably galvanised by the recent proliferation of languages that hold safety to a higher standard and the community realising that the language can and _should_ share the burden of mitigating patterns of human error. -Wyatt
Re: Should we add `a * b` for vectors?
On Thursday, 28 September 2017 at 01:58:24 UTC, Walter Bright wrote: ADL was always a hack to get around the wretched overloading symbol lookup behavior in C++. Any sufficiently advanced bug is indistinguishable from a feature! ;) -Wyatt
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 18:46:31 UTC, H. S. Teoh wrote: Hmm. I guess there's no easy way to make dmd/ldc emit dependencies with modified SONAMEs? So yeah, you're right, every software that depends on said libraries would have to explicitly depend on a different SONAME depending on what they were built with. OK, crazy idea, nevermind. :-( Doesn't sound that crazy; you already do it with GCC versions, right? (Debian _does_ have something like that, right? Where you can pick your C compiler.) -Wyatt
Re: Spotted on twitter: Rust user enthusiastically blogs about moving to D
On Tuesday, 7 March 2017 at 03:04:05 UTC, Joakim wrote: https://z0ltan.wordpress.com/2017/02/21/goodbye-rust-and-hello-d/ I like the bit in the comments where he says this: "It doesn’t have to be idiomatic to work just fine, which is relaxing." People often don't get how nice this is. -Wyatt
Re: Deterministic Memory Management With Standard Library Progress
On Sunday, 5 March 2017 at 04:36:27 UTC, Anthony wrote: I would pick both, if I had the time to do so. I'm a college student; with that in mind, I can only really learn one right now without giving up most of my free time. I think it'd be stressful if I tried. This is fair, but, speaking from the field: learning how to JIT-learn and pick up languages quick is a valuable skill that you will never stop using. It's always worth reminding yourself that languages are cheap; the conceptual underpinnings are what's important. -Wyatt
Re: Happy December 13th!
On Tuesday, 13 December 2016 at 10:48:23 UTC, Walter Bright wrote: What a great day to be alive! Just feeling really blessed today, and hope you all are too. Well, Merry Tuesday!
Re: x86 instruction set reference
On Tuesday, 29 November 2016 at 22:37:28 UTC, safety0ff wrote: Other links in the same vein: http://ref.x86asm.net/coder64.html https://defuse.ca/online-x86-assembler.htm And if you're in (Intel) SIMD land, this is a handy reference: https://software.intel.com/sites/landingpage/IntrinsicsGuide -Wyatt
Re: State of issues.dlang.org
On Wednesday, 2 November 2016 at 11:00:58 UTC, Nick Treleaven wrote: One thing I miss is the ability to preview posts on Bugzilla, This was added in Bugzilla 5.0. We're just running 4.4.2 on issues.d.o. Unfortunately, I'm not sure how easy it is to upgrade... -Wyatt
Re: Taking pipeline processing to the next level
On Wednesday, 7 September 2016 at 00:18:59 UTC, Manu wrote: On 7 September 2016 at 01:54, Wyatt via Digitalmars-d wrote: Thanks, that's really interesting, I'll check it out. Here's some work on static rank polymorphism that might also be applicable?: http://www.ccs.neu.edu/home/pete/pub/esop-2014.pdf And in the Related Work, I just noticed Halide, which sounds like it's right up your alley: http://halide-lang.org/ Of course, this comes with the caveat that this is (still!) some relatively heavily-academic stuff. And I'm not sure to what extent that can help mitigate the problem of relaxing type requirements such that you can e.g. efficiently ,/⍉ your 4 2⍴"LR" vector for SIMD on modern processors. That's not what I want though. I intend to hand-write that function (I was just giving examples of how auto-vectorisation almost always fails), the question here is, how to work that new array function into our pipelines transparently... Ah, I misunderstood. Sorry. I had the impression that you wanted to be able to simply write: data.map!(x => transform(x)).copy(output); ...for any data[] and have it lift the transformation to the whole vector. If you're doing the work, I'm curious what you're hoping the end result to look like in terms of the code you want to be able to write. Just a doodle is fine, it doesn't have to stand up to scrutiny. -Wyatt
Re: Taking pipeline processing to the next level
On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote: A central premise of performance-oriented programming which I've employed my entire career, is "where there is one, there is probably many", and if you do something to one, you should do it to many. From a conceptual standpoint, this sounds like the sort of thing array languages like APL and J thrive on, so there's solid precedent for the concept. I might suggest looking into optimising compilers in that space for inspiration and such; APEX, for example: http://www.snakeisland.com/apexup.htm Of course, this comes with the caveat that this is (still!) some relatively heavily-academic stuff. And I'm not sure to what extent that can help mitigate the problem of relaxing type requirements such that you can e.g. efficiently ,/⍉ your 4 2⍴"LR" vector for SIMD on modern processors. -Wyatt
Re: Quality of errors in DMD
On Friday, 2 September 2016 at 15:12:48 UTC, Steven Schveighoffer wrote: This is an internal compiler error. It's not a standard way of reporting errors in D code. It means the internal state of the compiler is messed. Not much the compiler can do except crash. On one hand, it's encouraging that he's been using DMD for years and didn't know that. On the other, though, considering he's been using DMD for years and didn't know that, I think there's a cogent argument for improving even ICE messages. At the least, have them print "Internal Compiler Error". Taking it further, maybe actually point out that we'd appreciate this being reported at $URL with an "ice" tag and dustmite'd reduction. Roll more details into a "So You Found an ICE" wiki tutorial for the people who have (understandably) never done this before (and link it in the error output as well). -Wyatt
Re: Why D is not popular enough?
On Friday, 12 August 2016 at 12:27:50 UTC, Andrei Alexandrescu wrote: I recall I had a similar reaction as Edward back in the day. No hurt feelings or anything, but the arguments made were so specious I'd roll my eyes whenever I saw them in the C++ forums. -- Andrei So what changed? What gave you the initial kick to start moving from sideline scoffer to benevolent diarch? -Wyatt
Re: Let us talk about error messages
On Wednesday, 10 August 2016 at 18:28:00 UTC, qznc wrote: Rust changed their error message format: https://blog.rust-lang.org/2016/08/10/Shape-of-errors-to-come.html Inspiration for D? I'm pretty fond of some of the stuff Elm is doing, too. Like so: https://pbs.twimg.com/media/CiuJyriXAAEiZnd.jpg:large https://pbs.twimg.com/media/CivQW5kUYAAi4SP.jpg:large Some people will bitch about the verbosity, but that sort of thing is GREAT for people just getting into the language. -Wyatt
Re: Our docs should be more beautiful
On Monday, 18 July 2016 at 15:56:29 UTC, Andrei Alexandrescu wrote: * My pet dream was to work on a project with a beautiful justified and hyphenated website. After endless debates, ugliness has won - I'm looking at an eye-scratching ragged right edge. I just want to point out that Firefox will sporadically justify incredibly poorly, leaving huge spaces between words (In one extreme case, I caught it making something like 3cm gaps). It looks really bad, and isn't actually easier to read when the line length is long. -Wyatt
Re: static if enhancement
On Thursday, 30 June 2016 at 11:06:56 UTC, Steven Schveighoffer wrote: On 6/29/16 11:40 AM, Wyatt wrote: I might be stepping on a land mine by bringing it up, but isn't this sort of thing what contracts are for? No landmines here, but no, that isn't what contracts are for. Perhaps you mean constraints? Sure. IME, contracts are generally written as a series of constraints anyway. A constraint will prevent compilation vs. allowing compilation but emitting a failure at runtime. I agree this is a better mechanism, or just plain allow the compiler to make a proper error when you call gun. Isn't doing it at compile time as simple as using e.g. static assert(hasMember(T, "gun"))? I do feel like I've been in situations before where this kind of thing was better as a recoverable runtime exception, but I can't remember where. The more I think about this, though, the less the proposal makes sense for the problem. Like, I get that it might be a tiny bit annoying, but I see a lot more potential for human error; pulling a "goto FAIL" and running code that shouldn't be or such. -Wyatt
Re: Does a Interpretation-Engine fit in phobos ?
On Thursday, 30 June 2016 at 10:36:44 UTC, qznc wrote: Ok, seriously, it sounds like an awesome feat, but I don't think it is necessary to put it into Phobos. First, a dub package, please. Agree. Does Java even have something like that? That's sort of the exemplar for "hopelessly overdone standard library". Off-topic: Is it possible/feasible/desirable to let dmd use dub packages? DMD shouldn't have to download things from the public internet to do its job. I guess it would make sense to extract parts of dmd into dub packages. As a next step, dmd could use those packages instead of duplicating code. Does it? Which parts? I'm afraid I don't see the benefit. -Wyatt
Re: static if enhancement
On Friday, 24 June 2016 at 18:27:07 UTC, Steven Schveighoffer wrote:

void fun(T)(T obj)
{
    static if (!hasMember!(T, "gun"))
        throw new Exception("No gun");
    obj.gun;
}

Call with something that doesn't have a gun member, and even without the reachability warnings (no -w switch), it doesn't compile. However, with an else clause, it would compile.

I might be stepping on a land mine by bringing it up, but isn't this sort of thing what contracts are for? -Wyatt
Re: Examples of dub use needed
On Thursday, 23 June 2016 at 07:46:41 UTC, Mike Parker wrote: My intention with the text is to provide a detailed description of every dub command and configuration directive, along with examples of how to use them in both JSON and SDLang formats. Yes, this is a good idea. It took me most of a day of trying to wrestle my project into a dub-compatible state and I honestly ended up giving up because my old Makefile is shorter, faster, and much easier to understand. (IIRC, the last straw was realising dub doesn't seem to have a good answer to Make targets, so building my library test samples was... impossible? Or at least completely obtuse.) I would request that you especially look for common build idioms and how to represent them in dub, because I'm apparently not the only one who thinks it's not obvious. -Wyatt
Re: More suggestions for find()
On Monday, 20 June 2016 at 16:09:21 UTC, qznc wrote: We cannot port it directly since it is GPL code. Would it work to port from Musl instead? -Wyatt
Re: size_t vs uintptr_t
On Tuesday, 14 June 2016 at 21:59:32 UTC, Walter Bright wrote: Ok, I admit these are not likely to emerge. Not in desktop, server, or modern mobile phones, but I think there are some embedded platforms that have this concern. I know that's not a huge priority, but it's nice to be mindful of it. -Wyatt
Re: Andrei's list of barriers to D adoption
On Friday, 10 June 2016 at 17:10:39 UTC, Adam D. Ruppe wrote: On Friday, 10 June 2016 at 15:30:19 UTC, Wyatt wrote: I use it in my toml parser and it's very pleasant. I figured it probably isn't very fast, but it works and that's important. kewl! Did you use the script component for interpreting or just the jsvar part for the data? Just the jsvar; I've got a Pegged grammar mixin doing most of the heavy lifting. IIRC, you actually wrote it around the time I was fighting a losing battle with nested Variant arrays and it saved me a lot of headache. -Wyatt
Re: [OT] Re: Andrei's list of barriers to D adoption
On Friday, 10 June 2016 at 15:35:32 UTC, jmh530 wrote: On Friday, 10 June 2016 at 15:14:02 UTC, ketmar wrote: 2. you may take a look at my gml engine. it has clearly separated language parser and AST builder (gaem.parser), and AST->VM compiler (gaem.runner/compiler.d). I couldn't for the life of me find a link to this. He linked it earlier: http://repo.or.cz/gaemu.git/tree/HEAD:/gaem/parser -Wyatt
Re: Andrei's list of barriers to D adoption
On Friday, 10 June 2016 at 14:34:53 UTC, Adam D. Ruppe wrote: var globals = var.emptyObject; globals.write = &(writeln!string); Woah, I never thought of using it like that! The downside though is that it is something I basically slapped together in a weekend to support var.eval on a lark... it has a few weird bugs. And yet it somehow seems to _work_ better than std.variant. :/ tho idk if I'd recommend it for serious work. Just use D for that! I use it in my toml parser and it's very pleasant. I figured it probably isn't very fast, but it works and that's important. -Wyatt
Re: Optimizations and performance
On Thursday, 9 June 2016 at 16:47:28 UTC, Kagamin wrote: A language optimized for performance of spontaneous code written by newbies, who never learned the language and don't use best practices? Could you stop pretending to completely misunderstand the point? -Wyatt
Re: Optimizations and performance
On Thursday, 9 June 2016 at 13:56:37 UTC, Kagamin wrote: On Thursday, 9 June 2016 at 13:52:38 UTC, Dave wrote: But it is the point of benchmarking So it's not "languages should be fast by default", but "benchmarks should be fast by default"? Well, _this_ took some weird leaps from what I actually said... The point is this sort of language benchmark should use normal code. The sort of code that people who've never heard of Haskell would write. If it's a "fast" language, "ordinary-looking" code should be fast. If being fast requires weird circumlocutions that barely anyone knows, it doesn't matter if experts consider it best practice. -Wyatt
Re: Optimizations and performance
On Thursday, 9 June 2016 at 01:46:45 UTC, Dave wrote: Languages should be fast by default. This. 4,000,000,000% this. If the naïve cases are bad, they're bad and trying to pretend that doesn't matter is some insidious denial. Sure, nearly any code can be optimised to be some sort of "fast", but 99% of it never will be. -Wyatt
Re: 10 lesser known languages, but no dlang?
On Wednesday, 8 June 2016 at 10:08:11 UTC, Ola Fosheim Grøstad wrote: http://programmingzen.com/2016/06/07/10-lesser-known-programming-languages-worth-exploring/ Maybe someone feel the inspiration to add a thoughtful comment in the comment-section on the article with a pointer to dlang.org? Weird, he has Rust, Haxe, and Julia? Are those really lesser-known, at this point? I expected to see F# in there, too, but nope! (And basically _no one_ knows about J...) -Wyatt
Re: Andrei's list of barriers to D adoption
On Tuesday, 7 June 2016 at 08:05:58 UTC, Russel Winder wrote: So instead of debating this endlessly, I think this is about the tenth time this has come up in the last two years, why doesn't a group of people who know about GC algorithms get together and write a new one? In addition to the other answers, it's worth noting that most every good modern GC algorithm I can think of requires barriers. Walter has repeatedly and emphatically declared that D will not have barriers, so we're kind of SoL on that front. Java and Go don't have that problem. -Wyatt
Re: Andrei's list of barriers to D adoption
On Monday, 6 June 2016 at 14:27:52 UTC, Steven Schveighoffer wrote: I agree. It's telling that nearly all real-world examples we've seen (sociomantic, remedy games, etc.) use D without GC or with specialized handling of GC. I doubt either of the two you named would change, but I wonder how different the tenor of conversation would be in general if D's GC wasn't a ponderous relic? -Wyatt
Re: [OT] Effect of UTF-8 on 2G connections
On Wednesday, 1 June 2016 at 16:45:04 UTC, Joakim wrote: On Wednesday, 1 June 2016 at 15:02:33 UTC, Wyatt wrote: It's not hard. I think a lot of us remember when a 14.4 modem was cutting-edge. Well, then apparently you're unaware of how bloated web pages are nowadays. It used to take me minutes to download popular web pages _back then_ at _top speed_, and those pages were a _lot_ smaller. It's telling that you think the encoding of the text is anything but the tiniest fraction of the problem. You should look at where the actual weight of a "modern" web page comes from. Codepages and incompatible encodings were terrible then, too. Never again. This only shows you probably don't know the difference between an encoding and a code page, "I suggested a single-byte encoding for most languages, with double-byte for the ones which wouldn't fit in a byte. Use some kind of header or other metadata to combine strings of different languages, _rather than encoding the language into every character!_" Yeah, that? That's codepages. And your exact proposal to put encodings in the header was ALSO tried around the time that Unicode was getting hashed out. It sucked. A lot. (Not as bad as storing it in the directory metadata, though.) Well, when you _like_ a ludicrous encoding like UTF-8, not sure your opinion matters. It _is_ kind of ludicrous, isn't it? But it really is the least-bad option for the most text. Sorry, bub. I think we can do a lot better. Maybe. But no one's done it yet. The vast majority of software is written for _one_ language, the local one. You may think otherwise because the software that sells the most and makes the most money is internationalized software like Windows or iOS, because it can be resold into many markets. But as a percentage of lines of code written, such international code is almost nothing. I'm surprised you think this even matters after talking about web pages. The browser is your most common string processing situation. 
Nothing else even comes close. largely ignoring the possibilities of the header scheme I suggested. "Possibilities" that were considered and discarded decades ago by people with way better credentials. The era of single-byte encodings is gone, it won't come back, and good riddance to bad rubbish. I could call that "trolling" by all of you, :) but I'll instead call it what it likely is, reactionary thinking, and move on. It's not trolling to call you out for clearly not doing your homework. I don't think you understand: _you_ are the special case. Oh, I understand perfectly. _We_ (whoever "we" are) can handle any sequence of glyphs and combining characters (correctly-formed or not) in any language at any time, so we're the special case...? Yeah, it sounds funny to me, too. The 5 billion people outside the US and EU are _not the special case_. Fortunately, it works for them too. The problem is all the rest, and those just below who cannot afford it at all, in part because the tech is not as efficient as it could be yet. Ditching UTF-8 will be one way to make it more efficient. All right, now you've found the special case; the case where the generic, unambiguous encoding may need to be lowered to something else: people for whom that encoding is suboptimal because of _current_ network constraints. I fully acknowledge it's a couple billion people and that's nothing to sneeze at, but I also see that it's a situation that will become less relevant over time. -Wyatt
[OT] The Case Against... Unicode?
On Wednesday, 1 June 2016 at 13:57:27 UTC, Joakim wrote: No, I explicitly said not the web in a subsequent post. The ignorance here of what 2G speeds are like is mind-boggling. It's not hard. I think a lot of us remember when a 14.4 modem was cutting-edge. Codepages and incompatible encodings were terrible then, too. Never again. Well, when you _like_ a ludicrous encoding like UTF-8, not sure your opinion matters. It _is_ kind of ludicrous, isn't it? But it really is the least-bad option for the most text. Sorry, bub. No. The common string-handling use case is code that is unaware which script (not language, btw) your text is in. Lol, this may be the dumbest argument put forth yet. This just makes it feel like you're trolling. You're not just trolling, right? I don't think anyone here even understands what a good encoding is and what it's for, which is why there's no point in debating this. And I don't think you realise how backwards you sound to people who had to live through the character encoding hell of the past. This has been an ongoing headache for the better part of a century (it still comes up in old files, sites, and systems) and you're literally the only person I've ever seen seriously suggest we turn back now that the madness has been somewhat tamed. If you have to deal with delivering the fastest possible i18n at GSM data rates, well, that's a tough problem and it sounds like you might need to do something pretty special. Turning the entire ecosystem into your special case is not the answer. -Wyatt
Re: The Case Against Autodecode
On Tuesday, 31 May 2016 at 19:20:19 UTC, Timon Gehr wrote: The 'length' of a character is not one in all contexts. The following text takes six columns in my terminal: 日本語 123456 That's a property of your font and font rendering engine, not Unicode. (Also, it's probably not quite six columns; most fonts I've tested, 漢字 are rendered as something like 1.5 characters wide, assuming your terminal doesn't overlap them.) -Wyatt
Re: faster splitter
On Tuesday, 31 May 2016 at 08:43:59 UTC, Chris wrote: On Monday, 30 May 2016 at 22:16:27 UTC, qznc wrote: And Desktop:

./benchmark.ldc
std:    129 ±24 +40 (3121) -17 (6767)
manual: 129 ±31 +59 (2668) -21 (7244)
qznc:   112 ±14 +30 (2542)  -9 (7312)
Chris:  134 ±33 +58 (2835) -23 (7068)
Andrei: 123 ±27 +53 (2679) -18 (7225)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz

./benchmark.dmd
std:    157 ±31 +44 (3693) -24 (6234)
manual: 143 ±41 +73 (2854) -28 (7091)
qznc:   116 ±21 +35 (3092) -14 (6844)
Chris:  181 ±50 +74 (3452) -38 (6510)
Andrei: 136 ±38 +64 (2975) -27 (6953)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz

Benchmark from desktop machine:

DMD:
std:    164 ±34 +43 (4054) -29 (5793)
manual: 150 ±41 +72 (2889) -29 (7032)
qznc:   103 ±6  +42 ( 878)  -2 (9090)
Chris:  205 ±43 +81 (2708) -29 (7232)
Andrei: 136 ±31 +53 (2948) -22 (6977)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

===

LDC:
std:    138 ±23 +35 (3457) -18 (6360)
manual: 145 ±33 +45 (3748) -27 (6181)
qznc:   105 ±7  +17 (2267)  -4 (7534)
Chris:  135 ±33 +56 (3061) -23 (6882)
Andrei: 121 ±27 +52 (2630) -18 (7301)
(avg slowdown vs fastest; absolute deviation)
CPU ID: GenuineIntel Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

On my laptop Andrei's was the fastest (see post above). Comparing the chips involved, could it be cache related?

3M cache; Andrei wins both: http://ark.intel.com/products/75459/Intel-Core-i5-4200U-Processor-3M-Cache-up-to-2_60-GHz
4M cache; qznc wins DMD (and is faster than the LDC's best? What?); Andrei wins LDC: http://ark.intel.com/products/43560/Intel-Core-i7-620M-Processor-4M-Cache-2_66-GHz
8M cache; qznc wins both:
http://ark.intel.com/products/65719/Intel-Core-i7-3770-Processor-8M-Cache-up-to-3_90-GHz
http://ark.intel.com/products/75122/Intel-Core-i7-4770-Processor-8M-Cache-up-to-3_90-GHz

Normally, I'd expect the 4200U to be similar to the desktop parts. Unless... Say, for the laptops (and I guess the desktops too, but it's more important in a mobile), did you verify the CPU frequency scaling wasn't interfering? -Wyatt
Re: Split general into multiple threads
On Thursday, 26 May 2016 at 18:38:30 UTC, Seb wrote: I proposed two weeks ago to turn off the bot, but it seems some people like it. Most projects I can think of have separate *-commits lists for automated spam like that. -Wyatt
Re: Preprocessing CSS
On Tuesday, 24 May 2016 at 18:03:38 UTC, Thiez wrote: If D owned a scissors factory, would you use those instead of knives when you eat your dinner and call it "dogfooding"? Funny enough, scissors work quite well on food. They're safer and faster than knives in many cases. ;) -Wyatt
Re: Idea: swap with multiple arguments
On Tuesday, 24 May 2016 at 12:22:01 UTC, H. S. Teoh wrote: And why would you assume that it would rotate right? The default assumption, unless stated clearly and widely followed everywhere, is unclear and leads to misunderstandings. I'd rather unambiguously name them rotateRight and rotateLeft than the asymmetric rotate / rotateLeft.

In the APL family, we have dyadic ⌽ and ⊖:

      3⌽⍳9        ⍝ left
4 5 6 7 8 9 1 2 3
      ¯3⌽⍳9       ⍝ right
7 8 9 1 2 3 4 5 6
      3 3⍴⍳9
1 2 3
4 5 6
7 8 9
      1⊖3 3⍴⍳9    ⍝ up
4 5 6
7 8 9
1 2 3
      ¯1⊖3 3⍴⍳9   ⍝ down
7 8 9
1 2 3
4 5 6

They rotate left and up in the positive case because that yields the natural indexing order. Right rotation gives you a reverse ordering. -Wyatt
Re: Hide input string from stdin
On Sunday, 22 May 2016 at 22:38:46 UTC, Michael Chen wrote: I tried to write a small program that receive string as password. However, I didn't find available library for hide input string, even in core library. Any suggestion? For Linux, I think you could just use getpass() from core.sys.posix.unistd. Not sure what the Windows equivalent is. An agnostic, user-facing version isn't a terrible idea. Or arguably better, a good way to disable echo on stdin; maybe file a bug about this against std.stdio? -Wyatt
Re: Need a Faster Compressor
On Monday, 23 May 2016 at 14:47:31 UTC, Wyatt wrote: Maybe consider lz4 instead? Disregard that: I see it's come up already.
Re: Need a Faster Compressor
On Saturday, 21 May 2016 at 21:27:37 UTC, Era Scarecrow wrote: I assume this is related to compressing symbols thread? I mentioned possibly considering the LZO library. Maybe consider lz4 instead? Tends to be a bit faster, and it's BSD instead of GPL. https://cyan4973.github.io/lz4/ -Wyatt
Re: DMD producing huge binaries
On Friday, 20 May 2016 at 13:24:42 UTC, Andrei Alexandrescu wrote: I don't see a need for hashing something. Would a randomly-generated string suffice? Naïve question: is a (randomly-)?generated _anything_ required? I've probably missed something, but it seems like line noise just because we feel we must. I guess there's a possibility that there would be multiple matches on the same line with the same object and identifier... Stick the column in there too and call it a day? -Wyatt
Re: Always false float comparisons
On Monday, 16 May 2016 at 12:37:58 UTC, Walter Bright wrote: 7. 80 bit reals are there and they work. The support is mature, and is rarely worked on, i.e. it does not consume resources. This may not be true for too much longer-- both Intel and AMD are slowly phasing the x86 FPU out. I think Intel already announced a server chip that omits it entirely, though I can't find the corroborating link. -Wyatt
Re: Supporting musl libc
On Tuesday, 17 May 2016 at 08:51:01 UTC, Jacob Carlborg wrote: The issue is that musl doesn't support the functions defined by execinfo.h: backtrace, backtrace_symbols_fd and backtrace_symbols, since these are glibc extensions. It's worth noting that musl does support a lot of GNU-isms, but the debug stuff is probably not on the table. Here's some background on that from Rich Felker himself: http://www.openwall.com/lists/musl/2015/04/09/3 -Wyatt
Re: Command line parsing
On Monday, 2 May 2016 at 12:52:42 UTC, Andrei Alexandrescu wrote: This is interesting because it's what std.getopt does but the opposite of what GFLAGS (http://gflags.github.io/gflags/) does. GFLAGS allows any module in a project to define flags. I was thinking of adding GFLAGS-like capabilities to std.getopt but looks like there's no need to... thoughts? "Gflags, the commandline flags library used within Google..." My perspective is those last three words are pretty important. It's a clever idea that's great when you have a whole mountain of things that live and work together that were _designed_ to do so, but I don't think it's going to generalise well. It's sort of like Bazel, which works in Google's colossal pillar of code, but doesn't tend to make a lot of sense for most other projects. I could be wrong, though. I've been using JCommander (http://jcommander.org/#Overview) at work this last week, and it hasn't been too bad. It's a bit different though, because Java and the actual control over what classes you scan for options is still in your hands. (If I'm reading this right, Gflags isn't something you easily have fine control over-- you use it or you don't.) -Wyatt
Re: Uniform Function Call Syntax?
On Sunday, 6 March 2016 at 07:45:58 UTC, Ola Fosheim Grøstad wrote: I think it would be better idea to just add the ability to add unicode operators, and to avoid precedence issues one could just require them to use parentheses. That way you could define opCustom"•" and use it as: I've mentioned this before, but I think a constrained set of user-defined operators using annotations/affixes on the existing set is a better fit for D. It's a lesson well-learned from other languages (cf. OCaml; F#), and fits with D's generally practical bent. I mean, if you want to alias them to Lucky Charms or various hieroglyphs of birds disemboweling men later, I guess maybe that could work? But I really don't want any language features predicated on the programmer having an APL keyboard. -Wyatt
Re: I guess this is good GSOC 2016 news?
On Monday, 29 February 2016 at 21:22:44 UTC, Jonas Drewsen wrote: https://summerofcode.withgoogle.com/organizations/?sp-category=languages Yes, that IS great news! Though it doesn't seem to say how many slots were given? Craig, any word? -Wyatt
Re: Another new io library
On Thursday, 18 February 2016 at 18:35:40 UTC, Steven Schveighoffer wrote: On 2/18/16 12:08 PM, Wyatt wrote: I hadn't thought of this before, but if we accept that a stream is raw, untyped data, it may be best _not_ to provide a range interface directly. It's easy enough to alias source = sourceStream.as!ubyte; anyway, right? An iopipe is typed however you want it to be. Sorry, sorry, just thinking (too much?) in terms of the conceptual underpinnings. But I don't think we really disagree, either: if you don't give a stream a type it doesn't have one "naturally", so it's best to be explicit even if you're just asking for raw bytes. That's all I'm really saying there. But the concept of what constitutes an "item" in a stream may not be the "element type". That's what I'm getting at. Hmm, I guess I'm not seeing it. Like, what even is an "item" in a stream? It sort of precludes that by definition, which is why we have to give it a type manually. What benefit is there to giving the buffer type separately from the window that gives you a typed slice into it? (I like that, btw.) However, you have some issues there :) popFront doesn't return anything. Clearly, as!() returns the data! ;) But criminy, I do actually forget that ALL the damn time! (I blame Broadcom.) The worst part is I think I've even read the rationale for why it's like that and agreed with it with much nodding of the head and all that. :( And I think parsing/processing stream data works better by examining the buffer than shoehorning range functions in there. I think it's debatable. But part of stream semantics is being able to use it like a stream, and my BER toy was in that vein. Sorry again, this is probably not the place for it unless you try to replace the std.stream for real. -Wyatt
Re: Another new io library
On Thursday, 18 February 2016 at 16:36:37 UTC, Steven Schveighoffer wrote: On 2/18/16 11:07 AM, Wyatt wrote: This looks pretty all-right so far. Would something like this work? foreach(pollItem; zmqSocket.bufferedInput .as!(zmqPollItem) .asInputRange) Yes, that is the intent. All without copying. Great! Note, asInputRange may not do what you want here. If multiple zmqPollItems come in at once (I'm not sure how your socket works), the input range's front will provide the entire window of data, and flush it on popFront. Not so great! That's really not what I'd expect at all. :( (This isn't to say it doesn't make sense semantically, but I don't like how it feels.) I'm thinking I'll change the name byInputRange to byWindow, and add a byElement for an element-wise input range. Oh, I see. Naming. Naming is hard. -Wyatt
Re: Another new io library
On Thursday, 18 February 2016 at 15:44:00 UTC, Steven Schveighoffer wrote: On 2/17/16 5:54 AM, John Colvin wrote: On Wednesday, 17 February 2016 at 07:15:01 UTC, Steven Schveighoffer wrote: On 2/17/16 1:58 AM, Rikki Cattermole wrote: A few things: https://github.com/schveiguy/iopipe/blob/master/source/iopipe/traits.d#L126 why isn't that used more especially with e.g. window? After all, window seems like a very well used word... Not sure what you mean. I don't like that a stream isn't inherently an input range. This seems to me like a good place to use this abstraction by default. What is front for an input stream? A byte? A character? A word? A line? Why not just say it's a ubyte and then compose with ranges from there? If I provide a range by element (it may not be ubyte), then that's likely not the most useful range to have. I hadn't thought of this before, but if we accept that a stream is raw, untyped data, it may be best _not_ to provide a range interface directly. It's easy enough to alias source = sourceStream.as!ubyte; anyway, right? This is why I think it's better to have the user specifically tell me "this is how I want to range-ify this stream" rather than assume. I think this makes more sense with TLV encodings, too. Thinking of things like:

switch (source.as!(BERType).popFront) {
    case UNIVERSAL|PRIMITIVE|UTF8STRING:
        int len;
        if (source.as!(BERLength).front & 0b10_00_00_00) {
            // X.690? Never heard of 'em!
        } else {
            len = source.as!(BERLength).popFront;
        }
        return source.buffered(len).as!(string).popFront;
    // ...etc.
}

Musing: I'd probably want a helper like popAs!() so I don't forget popFront()... -Wyatt
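For what it's worth, the popAs!() helper mused about above could look something like this. This is purely a hypothetical sketch against a toy typed-view type (View, popAs, and the sample bytes are all made up for illustration; none of it is iopipe's actual API):

```d
import std.stdio : writeln;

// Toy stand-in for a typed window over a buffered stream, just to make
// the sketch self-contained; not any real library interface.
struct View(T)
{
    T[] data;
    @property bool empty() { return data.length == 0; }
    @property T front() { return data[0]; }
    void popFront() { data = data[1 .. $]; }
}

// The helper mused about above: read the front element and advance past
// it in one call, so popFront() can't be forgotten.
T popAs(T)(ref View!T v)
{
    auto value = v.front;
    v.popFront();
    return value;
}

void main()
{
    // A fake TLV fragment: tag byte, length byte, then the payload.
    ubyte[] bytes = [0x0c, 0x05, 'h', 'e', 'l', 'l', 'o'];
    auto v = View!ubyte(bytes);
    auto tag = v.popAs;    // the 0x0c tag byte
    auto len = v.popAs;    // the 0x05 length byte
    writeln(tag, " ", len, " ", cast(char[]) v.data);
}
```

The point is only ergonomic: combining the read and the advance removes the class of bug the post jokes about.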
Re: Another new io library
On Wednesday, 17 February 2016 at 06:45:41 UTC, Steven Schveighoffer wrote: foreach(line; (new IODevice(0)).bufferedInput .asText!(UTFType.UTF8) .byLine .asInputRange) // handle line This looks pretty all-right so far. Would something like this work? foreach(pollItem; zmqSocket.bufferedInput .as!(zmqPollItem) .asInputRange) 3. The focus of this library is NOT replacement of std.stream, or even low-level i/o in general. Oh. Well maybe that's not the case, but it may have potential anyway. If nothing else, for testing API concepts. 6. There is a concept in here I called "valves". It's very weird, but it allows unifying input and output into one seamless chain. In fact, I can't think of how I could have done output in this regime without them. See the convert example application for details on how it is used. This... might be cool? It bears some similarity to my own ideas. I'd like to see more examples, though. -Wyatt
Re: Speed kills
On Monday, 15 February 2016 at 14:16:02 UTC, Guillaume Piolat wrote: Something that annoyed me a bit is floating-point comparisons, DMD does not seem to be able to handle them from SSE registers, it will convert to FPU and do the comparison there IIRC. I feel like this point comes up often, and that a lot of people have argued x87 FP should just not happen anymore. -Wyatt
Re: Can I get more opinions on increasing the logo size on the site please
On Wednesday, 10 February 2016 at 16:26:33 UTC, Gary Willoughby wrote: Can I get more opinions on increasing the logo size on the website please. See here for an example: https://github.com/D-Programming-Language/dlang.org/pull/1227 Destroy! I agree it's too tall in that PR. Maybe go for a happy medium? http://racket-lang.org/, https://golang.org/, and http://fsharp.org/ all have the same header height: about halfway between current dlang.org and your PR. -Wyatt
Re: dpaste and the wayback machine
On Monday, 8 February 2016 at 20:02:41 UTC, Jesse Phillips wrote: I'm not sure if the wayback machine should be used for version control, if you want to keep a history of your past I suggest using a gist.github.com. I view the wayback machine as a view for what the web used to look like not necessarily what information was in it. I'm pretty sure that's Andrei's thought, too. It's a pastebin; people use it to make web links to pasted things. If it were to disappear, a lot of links would break very permanently because Heritrix has no way to index and crawl the site. -Wyatt
Re: dpaste and the wayback machine
On Sunday, 7 February 2016 at 21:59:00 UTC, Andrei Alexandrescu wrote: Dpaste currently does not expire pastes by default. I was thinking it would be nice if it saved them in the Wayback Machine such that they are archived redundantly. I'm not sure what's the way to do it - probably linking the newly-generated paste URLs from a page that the Wayback Machine already knows of. I just saved this by hand: http://dpaste.dzfl.pl/2012caf872ec (when the WM does not see a link that is search for, it offers the option to archive it) obtaining https://web.archive.org/web/20160207215546/http://dpaste.dzfl.pl/2012caf872ec. Thoughts? You want it in Wayback? Sounds like you need some WARC [0]. Since anyone can upload to IA (using a nice S3-like API, even [1]), this should be pretty uncomplicated. If you can get a list of all the paste URLs, you can use wget [2] to build the WARC fairly trivially. [3] Then I'd suggest getting a dlang account and make an item [4] out of it. Just make sure it's set to mediatype:web and it should get ingested by Wayback. After that? Generate a WARC when a paste is made and use the dlang S3 keys to add it to the previous item (or maybe just do it daily or weekly so as to not stress the derive queue too much). I'm pretty sure that's all that's needed. -Wyatt [0] http://fileformats.archiveteam.org/wiki/WARC [1] https://archive.org/help/abouts3.txt [2] -i, --input-file=FILE download URLs found in local or external FILE. [3] http://www.archiveteam.org/index.php?title=Wget#Creating_WARC_with_wget [4] https://blog.archive.org/2011/03/31/how-archive-org-items-are-structured/
Re: An IO Streams Library
On Sunday, 7 February 2016 at 00:48:54 UTC, Jason White wrote: This library provides an input and output range interface for streams (which is more efficient if the stream is buffered). Thus, many of the wonderful range operations from std.range and std.algorithm can be used with this. Ah, grand! I love the idea and my impression from browsing the source a bit is positive enough to say I'm looking forward to what comes out of this. Though I AM a little ambivalent-- I had a series of pretty in-depth conversations on this topic with a friend a while back and we came to a consensus that stream semantics are a tricky thing because of the historical baggage around them and how they tend to get conflated with other concepts. Looking at your API design, I think you've hit close to a lot of the same conclusions we reached, but here are the notes I took for the sake of providing an additional perspective: http://radiusic.com/doc/streamNotes (Sorry, I tried just pasting them and it was moderately unreadable even in the preview) I think the most important things we hit upon are: 1. A stream is fundamentally unidirectional. 2. A stream is raw, untyped data that becomes a range through an adapter that mediates access. -Wyatt
Re: I was thinking of cool names for a new D gui library when...
On Friday, 5 February 2016 at 20:19:58 UTC, Enjoys Math wrote: Anyhow, had a question: what should const.d be named in this scheme? compost.d And also thought some would appreciate the humor :) Amusing, but I feel like it might be awful confusing to actually use. I've always wondered if these "cute" naming schemes help or harm... -Wyatt
Re: Why do some attributes start with '@' while others done't?
On Tuesday, 26 January 2016 at 12:42:37 UTC, w0rp wrote: I think we bicker and pontificate about these kinds of issues too much. Yes. Sorry, got carried away on a tangent. Do we want @ for every attribute or not? Yes. If you worry about the compiler becoming too complicated, I can assure you it will barely have an impact on compilation speed, and that's all users will care about. Was anyone even concerned about that? -Wyatt
Re: Why do some attributes start with '@' while others done't?
On Monday, 25 January 2016 at 18:34:24 UTC, burjui wrote: On Friday, 22 January 2016 at 18:28:31 UTC, Wyatt wrote: If you need an IDE to work comfortably in a language, something has clearly gone wrong. Oh come on, not that "Vim/Emacs vs IDEs" crap again. Please stop being so black&white and pretentious. Begging your pardon? There's nothing wrong with an IDE; you clearly love yours -- love it to the point of spinning a single sentence into two paragraphs of inflammatory drivel casting me as a cave-dwelling primitive, even -- and that's great for you. But programming languages exist for people and their affordances come from this understanding, so I'm really not sure HOW you justify a language so poorly designed that it can't comfortably be used without an IDE. -Wyatt
Re: Why do some attributes start with '@' while others done't?
On Friday, 22 January 2016 at 18:08:14 UTC, Ola Fosheim Grøstad wrote: That's just conditioning. If you are used to "BEGIN" and "END" picking up braces takes time, same time the other way around. BEGIN and END are still basically braces and they still serve in the capacity of a visual anchor for a block for humans. If you use single item per line and a proper IDE they aren't really different. Python's indentation the same, but you need a decent IDE that does proper re-indentation etc. If you need an IDE to work comfortably in a language, something has clearly gone wrong. -Wyatt
Re: Why do some attributes start with '@' while others done't?
On Friday, 22 January 2016 at 15:36:06 UTC, rsw0x wrote: Maybe it's just me, but without my comfy braces I get lost. Large python files feel like a maze to me honestly. Lispers might have been onto something. It's not just you; I completely agree. Python (and a lot of ruby) is nearly unreadable to me even with wide tabstops. I have a couple projects I started in Python that I've largely abandoned because of that. :/ I'm not overly attached to semicolons -- I think I could live without them in most cases -- but omitting something to properly denote blocks is a deal-breaker at this point. -Wyatt
Re: Why do some attributes start with '@' while others done't?
On Friday, 22 January 2016 at 08:23:48 UTC, Ola Fosheim Grøstad wrote: And, IMO, when redoing the syntax one might want to look at contemporary languages like Swift and C# to see if one can lower the barrier to entry for Apple and Microsoft type programmers. "Contemporary". ;) Aside from Swift's optional semicolons, they're really not all that different. I am not sure if looking like C is all that attractive in 2016 as far as recruiting goes. By 2025 C-style syntax might be viewed as arcane among young programmers. Like Swift, C#, Javascript, Go, Haxe, Rust, Dart, et al? Really, people have been predicting the death of curly-braces languages for decades now, and they're as strong as ever. Do you have anything to base this on, or is it just what you'd (apparently) like to see? -Wyatt
Re: [dlang.org] Let's talk about the logo
On Friday, 22 January 2016 at 00:04:33 UTC, tsbockman wrote: On Thursday, 21 January 2016 at 23:49:39 UTC, cym13 wrote: On Thursday, 21 January 2016 at 23:46:26 UTC, anonymous wrote: The logo is repeatedly being called out as a weak spot of the D brand. But so far Walter has been adamant about keeping it the way it is. [...] I love the third one from the top, it is close enough from the official logo to identify it with no difficulty and yet fits really well in the bar. Yes, the third is the best. The Martian horizon in the background is also a part of the core design of the logo; please don't drop it. I'm certain I've made this same argument in the past. For the website, the third one, without a doubt. For an application icon? Hm, I might prefer the second. -Wyatt
Re: Choosing D over C++, Go, Rust, Swift
On Thursday, 14 January 2016 at 16:08:24 UTC, Joakim wrote: On Thursday, 14 January 2016 at 15:32:10 UTC, Dibyendu Majumdar I think a prominent Link saying - Why choose D? on the home page. and maybe initially this could take to another page with links to articles, videos etc. But longer term it would better to have a more structured presentation of the benefits. Example - show with examples what can be done with D templates that is hard or not possible with C++. And similarly with other languages. I would suggest very aggressive 'marketing' of D advantages. You're right, D's not very good at marketing. On the other hand, have you ever found what you're suggesting on any other programming language's website? I haven't, so they're all in the same boat, each one as bad as the next. Sort of sounds like he wants something in this vein, only as part of the site: http://fsharpforfunandprofit.com/ -Wyatt
Re: IPFS is growing and Go, Swift, ruby, python, rust, C++, etc are already there
On Wednesday, 13 January 2016 at 19:17:00 UTC, karabuta wrote: when they do This is... remarkably optimistic. -Wyatt
Re: Unclear about the benefits of D over C++ and Java
On Sunday, 3 January 2016 at 18:39:21 UTC, Shannon wrote: On Sunday, 3 January 2016 at 15:38:18 UTC, Dibyendu Majumdar wrote: I am looking to choose between D, Swift and Rust for a project that I am currently coding in C++. So far D seems the alternative but I guess I won't know until I try out a few things. Why I now choose D, even for commercial jobs... Nice story; mind if we put it on the user narratives wiki page? -Wyatt
Re: Proposal: Database Engine for D
On Monday, 4 January 2016 at 20:07:55 UTC, Walter Bright wrote: On 1/4/2016 10:25 AM, Russel Winder via Digitalmars-d wrote: It is important that this works. But it should be possible to create an operator algebra for any type: arithmetic types are a very small subset of types used in computing. What do you suggest when the operators and precedence of the desired algebraic type simply do not map cleanly onto C++ operators and precedence grammar? Allow users to define their own operators and redefine the precedence? Where is the line that shouldn't be crossed? On this point specifically, I'm still considering attempting to write a DIP because it would be very useful to have some user-defined operators available. Take the OCaml/F# approach where they provide a set of acceptable characters for operators and define precedence automatically (depending on first character of the name). (And definitely don't do the Haskell thing where any binary function can be turned into an infix op.) -Wyatt
Re: D Consortium as Book / App Publisher... ?
On Sunday, 27 December 2015 at 14:44:37 UTC, Ola Fosheim Grøstad wrote: I think wannabe game programmers is a sizeable market. Programmers that don't have the capacity to learn modern C++ and would pay for a quality tutorial of how to build a commercial level game using OpenGL, OpenAL and a physics engine, with downloadable chapter by chapter source code. This may have potential. Sort of like that old "Game Programming Gems" book series, only geared for a specific language. -Wyatt
Re: We need a good code font for the function signatures on dlang.org
On Wednesday, 16 December 2015 at 21:05:27 UTC, Andrei Alexandrescu wrote: I was looking at https://github.com/D-Programming-Language/dlang.org/pull/1169 and that bold sans serif proportional text for the code is just... well let's say it's time to replace it. What would be a good code font to use for those? I've recently become partial to Fira Mono. https://typecast.com/preview/google/fira%20mono:400:normal It has nice open counters, tall braces that don't overextend, and relatively weighty punctuation. http://typecast.com/blog/10-fonts-for-code There are a few others that may be worth considering here. I don't like the condensed fonts so much because I prefer "O" not be as narrow as "0". For zeros, dot > slash ≫ open. (I used a dotted version of Droid Sans Mono for a long time.) -Wyatt
Re: Some feedback on the website.
On Tuesday, 15 December 2015 at 07:07:23 UTC, deadalnix wrote: Home page: This is a mess. There is way too much here. There is an attention budget and it is important to manage it well. I think you're overstating it: it's a bit busy, but I think it can be fixed. The usual for a programming language goes as follow : - Logo, color as per branding. Yeah, we should probably incorporate the red more. - Language name, quick blurb about what it is, usually ending with a link to tutorial. We have the language name twice! Do we need a longer blurb? None of your examples seem to link to an official tutorial at all, so we're ahead of the game there. (Sort of. It's on a different domain and doesn't match "D style".) - Big fat download button. It's right in the middle of the page. Should we wrap it in to make it more obvious? (Seriously: I agree it should be bigger. Changelog link smaller and underneath instead.) - Some sample code. The one we have on the front page is way too big. It should be a piece of code that someone with 0 experience in the language can understand. The RPN example is too big. The sort lines example is nice. Is there some sort of rotation here? Go kinda gets it right with the dropdown. Scala's tiles are poorly telegraphed. - A menu with quick access to what more experienced users want : stdlib reference, code repository, wiki, forum, language spec, news, this kind of thing. So, the stuff on the sidebar? Some examples: http://www.scala-lang.org/ Well, I guess it's pretty? Examples aren't obvious and the documentation uses a completely different colour scheme for Reasons(?). https://nodejs.org/en/ Thoroughly useless bootstrap placeholder. https://developer.apple.com/swift/ What on earth? There's no download at all, no obvious doc link, way too much verticality, and they've overdone it on the whitespace. I guess they only care about people with high-dollar Apple screens. https://golang.org/ Ugly but functional. Decent layout, though I still don't get this fetish for top links. https://www.rust-lang.org/ Slightly better than Go. Could we stop pretending 1024x768 is The Best Resolution? Last but not least, it wouldn't hurt to hire a designer to have something slick. I think the biggest issues are the sidebar cleanliness and the main content having a single-column design. I like the _idea_ of having the discussion boxouts in the right column, but it comes at the expense of the rest of the content and contributes to the fatigue. -Wyatt
Re: Here's looking at you, kid
On Friday, 20 November 2015 at 15:12:13 UTC, Andrei Alexandrescu wrote: One thing we need to do is deemphasize the "Language Reference" entry in the left menu and promote Ali's book to top level. -- Andrei In that case, if he were willing, might it be worthwhile to host it for him? Maybe apply the usual dlang.org styling to it, as well? I'm not sure how common it really is for the "officially-blessed" tutorial to live on an entirely different domain, but it seems like something that would be viewed as "less professional". -Wyatt
Re: Is Anything Holding you back?
On Friday, 2 October 2015 at 17:17:07 UTC, Jonathan M Davis wrote: Now, using D at work is another thing entirely, but that has more to do with there being existing codebases and it being very difficult to talk coworkers into using a new language or technology than there necessarily being any technical issues. Same boat here, only with the added pain that most of our current stuff is a terrifying tower of Java trash. I'm not even sure what could be done to ease this situation. -Wyatt
Re: Shout out to D at cppcon, when talkign about ranges.
On Wednesday, 30 September 2015 at 02:59:40 UTC, Freddy wrote: So this is what APL feels like. /s Nah, the APL version would be shorter and only use builtins. ;) -Wyatt
Re: Pathing in the D ecosystem is generally broken (at least on windows)
On Sunday, 27 September 2015 at 06:34:29 UTC, Walter Bright wrote: That all conspires to ensure that you CANNOT SEE what the longer values even are! It's pathetic. Well they finally fixed that, at least. A week ago. http://www.ghacks.net/2015/09/22/microsoft-improves-environment-variables-editor-in-latest-windows-10-build/ ...on the Windows version many people swear they'll never install. -Wyatt
Re: Moving back to .NET
On Friday, 25 September 2015 at 14:54:53 UTC, Jonathan M Davis wrote: I bet that using git or mercurial would save our build guy a ton of time, but he just wants to use TFS and thinks that it's great (probably because it's what he's used to, and it's from MS). - Jonathan M Davis Look on the bright side: at least it's not clearcase! -Wyatt
Re: Indicators and traction…
On Wednesday, 23 September 2015 at 13:58:09 UTC, Adam D. Ruppe wrote: We should get TV commercials. I'm not even really kidding, when I see something advertised on television, it plants a seed in my brain that this brand is serious and mainstream. After all, they were able to secure a spot on my local channel! We're talking about perception here and there may not be a technical solution to that. It is a marketing problem and what do professional marketers do to make an impression of their product? It'd probably cost like a million dollars to sponsor Jeopardy! or something though. Television is a bit pie-in-the-sky, but I think you may be on the right track. For example, I know I've seen conferences/conventions that have some advertisement in their program books. Sure, it won't have the shotgun reach of the telly, but...well, I don't know about you all, but I don't even own a TV. Though targeted ads on Youtube might work? Adwords? Other advertising networks? -Wyatt
Re: Implement the "unum" representation in D ?
On Wednesday, 16 September 2015 at 08:53:24 UTC, Ola Fosheim Grøstad wrote: I don't think he is downplaying it. He has said that it will probably take at least 10 years before it is available in hardware. There is also a company called Rex Computing that are looking at unum: Oh hey, I remember these jokers. They were trying to blow some smoke about moving 288 GB/s at 4W. They're looking at unum? Of course they are; care to guess who's advising them? Yep. I'll be shocked if they ever even get to tape out. -Wyatt
Re: Where will D sit in the web service space?
On Sunday, 12 July 2015 at 17:54:02 UTC, Ola Fosheim Grøstad wrote: On Sunday, 12 July 2015 at 12:32:32 UTC, Peter Alexander wrote: Web servers: Why not? Mostly because there is no real visible direction towards making D a competitor that directly addresses specific needs of web programming. So what? Personally, I've dealt with perl, ruby, python, java, and php in the web space and as far as I'm concerned they're all unmaintainable trash. (perl, ironically, gave me the best experience of the five!) If I ever decide I'm masochistic enough to attempt something in that vein again, D is at least as strong a contender for me because it offers fast iteration, solid performance, and a type system that doesn't make me want to punch small animals. Go and Rust, for all their "theoretical superiority" in one place or another, _don't feel good_. Go is to C what Plan 9 is to Unix, which is to say it's a thoroughly unimaginitive, ideologically hampered, overly-conservative iteration from Rob Pike. Rust might be intriguing if it ever catches up to D in being pleasant to use. -Wyatt
Re: Where will D sit in the web service space?
On Thursday, 16 July 2015 at 11:48:59 UTC, ponce wrote: Notch goes pretty fast with Eclipse. Nitpicking, but notch hasn't been involved with Minecraft in years. *g* -Wyatt
Re: Wait, what? What is AliasSeq?
On Wednesday, 15 July 2015 at 15:28:11 UTC, Andrei Alexandrescu wrote: These google searches returned no meaningful results: Try: splat operator Let's not inflate each new name idea to alleged popularity it doesn't really enjoy. Having slept on it, I like splat because it IS relatively new as a named concept in our field. Language is arbitrary, so we can do this and anyone confused can look it up. There will be no turning back. Precisely. -Wyatt
Re: Wait, what? What is AliasSeq?
On Wednesday, 8 July 2015 at 22:01:48 UTC, Observer wrote: all I'm proposing is a process for generating alternative names. AliasBall, AliasPile, AliasGroup, AliasSet, AliasLine... (That is, I generally agree inventing a new term that fits isn't a bad idea.) -Wyatt
Re: std.uni.toLowerCase / .toUpperCase
On Wednesday, 24 June 2015 at 20:03:35 UTC, Vladimir Panteleev wrote: Well, I suppose simply "upperCase" and "lowerCase" are an options, if you squint your eyes and pretend they're verbs. The opposite of "lowerCase" would be "raiseCase". ;) (Huh, "transposeCase"?) -Wyatt
Re: std.path.setExt
On Wednesday, 24 June 2015 at 20:19:49 UTC, Marc Schütz wrote: On Wednesday, 24 June 2015 at 14:01:32 UTC, Wyatt wrote: I don't think I'd interpret these two names as having the same functionality in the first place. I'd probably learn their equivalence completely by accident and only remember it by rote. Interesting. But once you know that, it's easy to tell which is which, no? Is it, though? I mean I _guess_ setExtension() sounds more eager? Familiarity removes my ability to make a first-time judgement. But by the time I've learned of their equivalence and that one is lazy and the other is not, the API has already "lost" as far as I'm concerned. Maybe this can be mitigated with really good docs that lists paired functions together so it's at least easy to find them. Or here's a thought: Since we apparently want to minimise/kill eagerness, can we detect usage of eager functions and catch/flag them? Similar in ideal to Adam's (brilliant) wrapper thing, but with tooling. A @lazy attribute (analogous to @nogc), or a switch, or dfix rules, or something. I don't know. -Wyatt
Re: DIP80: phobos additions
On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote: I actually thought about it more, and D does have a bunch of binary operators that no ones uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, >>>--, &++, ^^+, in++, |-, %~, ect...

import std.stdio : writeln;

void main(string[] args) {
    test a;
    test b;
    a +* b;
}

struct test {
    private struct testAlpha {
        test payload;
    }
    testAlpha opUnary(string s : "*")() {
        return testAlpha(this);
    }
    void opBinary(string op : "+")(test rhs) {
        writeln("+");
    }
    void opBinary(string op : "+")(testAlpha rhs) {
        writeln("+*");
    }
}

Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyatt
Re: std.path.setExt
On Tuesday, 23 June 2015 at 22:51:08 UTC, Vladimir Panteleev wrote: On Tuesday, 23 June 2015 at 22:45:10 UTC, Vladimir Panteleev wrote: Proposed new name: withExtension I feel this fails the litmus you established before: "These functions have the same functionality, but one of them is eager, and the other is lazy. Can you guess which is which?" I don't think I'd interpret these two names as having the same functionality in the first place. I'd probably learn their equivalence completely by accident and only remember it by rote. -Wyatt
Re: D could catch this wave: web assembly
On Tuesday, 23 June 2015 at 11:37:41 UTC, Suliman wrote: Am I right understand that web assembly would not completely new technology and would be just evolution of asm.js, so all of webassembly apps would run in old javascript virtual machine? They covered this question in the FAQ, too: https://github.com/WebAssembly/design/blob/master/FAQ.md#why-create-a-new-standard-when-there-is-already-asmjs
Re: Naming things
On Saturday, 20 June 2015 at 09:27:16 UTC, Vladimir Panteleev wrote: Two examples of controversial name pairs: setExt/setExtension, and toLower/toLowerCase. These functions have the same functionality, but one of them is eager, and the other is lazy. Can you guess which is which If I had to hazard a guess, I'd go with "the shorter one is lazy", but that presumes I'd notice there were two nearly-identical functions in the first place and pick up on the not-well-conveyed implication that one is lazy and the other is not. That's a Bad Thing. And it's a bad thing everyone seems to be tip-toeing around, too. None of the suggestions I've seen so far really call out to me "hey, this is lazy and has a non-lazy counterpart". Would it be so wrong to add "lazy" to the beginning or end so it's super obvious at a glance with zero cognitive overhead? -Wyatt
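To make the eager/lazy distinction at issue here concrete, a minimal sketch (the names toLowerEager and toLowerLazily are invented for illustration, not actual Phobos functions or naming proposals): the eager version walks the whole input and allocates its result up front, while the lazy version returns a range that does the work only as it is consumed.

```d
import std.algorithm.iteration : map;
import std.conv : to;
import std.uni : toLower;

// Eager: computes and allocates the entire result immediately.
string toLowerEager(string s)
{
    return s.map!(c => c.toLower).to!string;
}

// Lazy: returns a range; nothing is computed until it's iterated.
auto toLowerLazily(string s)
{
    return s.map!(c => c.toLower);
}

void main()
{
    auto eager = toLowerEager("Hello");   // already a string
    auto lazyR = toLowerLazily("Hello");  // no work done yet
    assert(eager == "hello");
    assert(lazyR.to!string == "hello");   // work happens here
}
```

Seen side by side like this, nothing in the two names themselves signals which behaviour you get, which is exactly the complaint in the post above.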
Re: D could catch this wave: web assembly
On Friday, 19 June 2015 at 14:47:10 UTC, Ola Fosheim Grøstad wrote: On Friday, 19 June 2015 at 14:18:49 UTC, Wyatt wrote: Do you guys have any real arguments against SVG? It is currently the most useful interchange format for 2D vector graphics. You want reasons I dislike SVG? I can address a few different levels. It's unbelievably complex even if you reduce it to just the still-graphics parts. This means implementing a renderer is not even close to trivial (you need to support a huge chunk of CSS, for one thing) and rendering consistency is difficult to get right even in fairly simple situations.[1] Fill rules, path joining, stroke scaling, markers, filters, paint order, the list goes on. It's very byzantine. It's DOM-based anyway, which means you can't reasonably expect good performance or modest memory requirements when manipulating it in an editor (it only pays lip-service to the concept of "human readable", so you really NEED an editor). Even just rendering it for output on a device has poor performance (I don't know if Nvidia has even bothered putting more work into their OpenGL extension for 2D vectors). Even relatively simple animations in SVG can peg a CPU core. It only supports flat linear and radial gradients, which makes it basically useless for artistic work unless you live in a "Modern" bubble. There's no support for variable stroke width. You can only define one stroke and one fill, so if you want to composite those things, you need to do a lot of duplication. On top of that, blending can only be done through filters (which are milquetoast at best). And text in SVG is...ugh, let's not even start. SVG is somewhat useful if you only need simple diagrams with solid colours and you don't trust PNG for some reason. -Wyatt [1] http://tavmjong.free.fr/blog/?p=1257
Re: D could catch this wave: web assembly
On Thursday, 18 June 2015 at 21:21:13 UTC, Ola Fosheim Grøstad wrote: On Thursday, 18 June 2015 at 19:39:58 UTC, Nick Sabalausky wrote: Great, so it'll have the same fundamental problem as asm.js: Claims to be backwards compatible, but really isn't because the backwards fallback method is likely to be prohibitively slow and will especially fuck mobile browsers that use the fallback. Yeah. This fallback thing does not make much sense. They say WebAssembly will reduce the file size by 7% after compression compared to asm.js . Who cares? In fact, _we_ do. Our flagship product is mostly used via a web application. Lots of Web 2.0 stuff going on there, it's pretty big. This becomes kind of a problem when many of our customers are halfway around the world. Even just 7% would be a win (for bandwidth and latency), but it looks like that's a low-ball estimate: https://github.com/WebAssembly/design/blob/master/FAQ.md#can-the-polyfill-really-be-efficient (The corollary to this is, yes, it does kind of have the same fundamental problem as asm.js. Because it IS asm.js.) But if the endgame becomes real and the order-of-magnitude parsing speedup happens, it'll be kind of huge. Maybe this suggestion demonstrates ignorance, but I'm thinking "They should just use LLVM IR. It already exists." Maybe toss in some LLVM IR extensions as needed, and boom, done. The LLVM IR isn't stable, so you need a higher level IR. And that's hard to design. So maybe 5 years before they get it right, and _properly_ implemented, in all browsers? They covered this, too: https://github.com/WebAssembly/design/blob/master/FAQ.md#why-not-just-use-llvm-bitcode-as-a-binary-format -Wyatt
Re: D could catch this wave: web assembly
On Friday, 19 June 2015 at 04:18:59 UTC, Joakim wrote: No, NaCl has been built into Chrome, one of the major browsers, "One of the major browsers". One. Not "all". One. In the timeframe that NaCl was ever relevant, we're talking about approximately a third of browsers. And it was never coming to the other 66%. Ubiquity matters. Think about that. Once you're writing your app in WebGL/webasm, what are you really gaining over just making it a mobile app for iOS/Android, both of which support OpenGL/asm? ;) Maybe the part where you're maintaining three separate branches with three different sets of highly-specialised domain specific knowledge and bugs? And that still only covers mobile; iOS/Android aren't everything. (Yet. (Thankfully.)) No, I take issue with the text format, especially XML. That was a horrible idea, regardless of how many good features they built in. I wouldn't call any of those things "good features"-- SVG is fractally terrible. -Wyatt
Re: D could catch this wave: web assembly
On Thursday, 18 June 2015 at 19:23:26 UTC, Joakim wrote: On Thursday, 18 June 2015 at 18:30:24 UTC, Abdulhaq wrote: Of course this is exactly true and it drives me mad too, but you can't just jettison it in favour of a better architecture. Why not? This is exactly what _should_ be done. Same reason you can't just stick your head in the sand and pretend the entire existing body of C and C++ code doesn't exist. It sucks, but them's the breaks. I think the reason these efforts have failed so far is because NaCl was still stuck using the existing web stack for the GUI, NaCl failed because it required a plugin, and did so in a way that made it exclusive to one browser vendor. It's like Java only worse. Or that thrice-be-damned Flash. But if you're just going to avoid the old web stack altogether and try to deploy your canvas/WebGL/assembly native app everywhere using the web browser as a trojan horse, presumably just to get through security or evade sysadmins more easily, you have to question what the point of making it a "web app" even is. The point is it runs in a browser. Do you need a more compelling feature than the ability to run unchanged anywhere there's a browser (basically everywhere)? I mean, I too think most of this "web technology" is trash and really wish the lingua franca of the Internet wasn't awful-- I would love for text to be foremost and for progressive enhancement to fall back to a normal web site when I visit with elinks. But realistically? This is a damn sight better than any of the other attempts so far because it's just a new feature in the JS VM. If it means we can lower code in a proper language to something a browser can run at something resembling the speed of an ordinary scripting language, it'll be a win already. And this new stuff isn't integrated, I believe canvas doesn't even support hyperlinks. How is that not broken already? Look, I don't fundamentally disagree that this all sucks but dude, chill.
Here, go play some Oregon Trail: https://archive.org/details/msdos_Oregon_Trail_The_1990 ;) http://www.w3.org/TR/SVG/paths.html SVG has animation, input handling, and an audio API(!) and you take issue with paths? Weak. :P -Wyatt
Re: Why aren't you using D at work?
On Thursday, 18 June 2015 at 16:04:05 UTC, Chris Piker wrote: On Thursday, 18 June 2015 at 10:17:49 UTC, lobo wrote: On Thursday, 18 June 2015 at 01:13:10 UTC, Chris Piker wrote: On Thursday, 28 May 2015 at 14:38:51 UTC, Manu wrote: [...] About the only thing really holding me back is that the local sys-admins can't: $ yum install gcd Can you install to $HOME ? I can do that, but there are other developers in our group. We need to be able to build each other's software. Java, Python and C are accepted as standard languages around here and seem to cover all our needs. Since we have a "complete set" adding a new one would be met with resistance. Having command line tools available through standard software distribution channels would soften this resistance. If DMD is sufficient, I don't really see any problems. Even FHS has your back. Sysadmin does this:

cd /opt; wget http://downloads.dlang.org/releases/2.x/2.067.1/dmd.2.067.1.linux.zip -qO tmp.zip \
  && unzip tmp.zip \
  && rm tmp.zip \
  && echo 'export PATH="${PATH}:/opt/dmd2/linux/bin64"' >> /etc/profile

...and voila. It might be kind of nice to have a "latest" symlink for the download (e.g. http://downloads.dlang.org/releases/latest/dmd.latest.zip), but that'd just be icing. Alternatively, have them make you a group-writable volume to use as a --prefix for everything that you might want (we ended up doing this because CantOS so strongly resembles LFS when you want to accomplish anything useful). Or have people add ~cpiker/bin (or whatever your HOME is) to their PATH in ~/.profile (or just add the path in your Makefiles, if you're feeling evil). It could certainly be better, but I wouldn't personally consider it a blocker as things are. -Wyatt
Re: Workaround for typeid access violation
On Thursday, 18 June 2015 at 15:19:19 UTC, Etienne wrote: On Thursday, 18 June 2015 at 15:09:46 UTC, Wyatt wrote: This comes to mind, along with the citations: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf -Wyatt That's exactly what we're talking about, but we do the work that was done by the globalisation policy by using the `new shared` keyword. Maybe I misunderstood. My interpretation was you were disallowing references from shared heap to local heap and obviating the whole globalisation problem. No migrations; just explicit marking of global objects with "new shared". (Though I'm curious how you handle liveness of the global heap; it seems like that depends on information from each thread's allocator?) -Wyatt
Re: Workaround for typeid access violation
On Wednesday, 17 June 2015 at 22:21:21 UTC, Laeeth Isharc wrote: Do you have any links to reading material on this type of GC? This comes to mind, along with the citations: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf -Wyatt
Re: std.container: fork in the road
On Wednesday, 17 June 2015 at 16:21:18 UTC, ixid wrote: On Wednesday, 17 June 2015 at 15:57:38 UTC, Wyatt wrote: But "sanity" and "API versioning" may exist at opposite ends of a spectrum, if I recall my history. What are the downsides? Issues, off the top of my head: figuring out which is which in the first place, separate compilation causing multiple modules to pull in different versions of the same symbols, run-time linking hell when external libraries are added to the mix, and library bloat when different projects depend on different versions. I'm reasonably certain there are other things too. It _might_ be possible to do sanely if this had all been worked out from the outset for D/D2, but I'm not at all confident that it's possible to retrofit (rather, I expect it isn't). -Wyatt
Re: std.container: fork in the road
On Wednesday, 17 June 2015 at 15:50:51 UTC, John Colvin wrote: On Wednesday, 17 June 2015 at 14:57:40 UTC, Wyatt wrote: On Wednesday, 17 June 2015 at 06:08:57 UTC, Andrei Alexandrescu wrote: Took a fresh look at std.container from a Design by Introspection perspective I've seen you use this term a few times now; what does it mean? (Lack of) Google results seem to indicate it's your own neologism. It comes from Andrei's DConf talk. Oh. I guess I'll have to wait for Adam's write-up, then. Or has it been expanded in written form elsewhere? -Wyatt
Re: std.container: fork in the road
On Wednesday, 17 June 2015 at 15:29:34 UTC, ixid wrote: On Wednesday, 17 June 2015 at 14:57:40 UTC, Wyatt wrote: but std.collection isn't nearly so good a name. std.container2 and so on? Dunno. That's not something that really needs to be addressed right now, is it? Off-the-cuff, in an ideal world there would be some way of easily knowing which API was intended and forwarding to std.deprecated on an as-needed basis (with a warning when it happens). But "sanity" and "API versioning" may exist at opposite ends of a spectrum, if I recall my history. -Wyatt
Re: std.container: fork in the road
On Wednesday, 17 June 2015 at 06:08:57 UTC, Andrei Alexandrescu wrote: Took a fresh look at std.container from a Design by Introspection perspective I've seen you use this term a few times now; what does it mean? (Lack of) Google results seem to indicate it's your own neologism. * The documentation is appallingly bad, making std.container worse than non-existent. I tried using it a couple times. Failed miserably every time. Regarding compatibility, I see three possibilities: #breakmycode! ...is my first impulse. But really it doesn't matter much-- I'm not using std.container anywhere and I suspect it's much the same for most everyone else. I guess option 3 is fine, but std.collection isn't nearly so good a name. -Wyatt
Re: DIP80: phobos additions
On Friday, 12 June 2015 at 03:18:31 UTC, Tofu Ninja wrote: What would the new order of operations be for these new operators? Hadn't honestly thought that far. Like I said, it was more of a nascent idea than a coherent proposal (probably with a DIP and many more words). It's an interesting question, though. I think the approach taken by F# and OCaml may hit the right notes: precedence and fixity are determined by the base operator. In my head, extra operators would be represented in code by some annotation or affix on a built-in operator... say, braces around it or something (e.g. [*] or {+}, though this is just an example that sets a baseline for visibility). -Wyatt
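Purely as a hypothetical sketch of that nascent idea (none of this is valid D today; the braced-operator syntax is invented for illustration), it might read something like:

```d
// HYPOTHETICAL syntax -- this does not parse in D today.
struct Matrix
{
    // An "extra" operator {*} derived from the built-in *;
    // it would inherit *'s precedence and left-associativity.
    Matrix opBinary(string op : "{*}")(Matrix rhs)
    {
        // e.g. a dot product, or whatever the library chooses
        return rhs;
    }
}

// auto x = a {*} b + c;  // would group as (a {*} b) + c, just like *
```

The point being that the reader never has to learn a new precedence table: {*} parses exactly where * would.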
Re: DIP80: phobos additions
On Friday, 12 June 2015 at 00:11:16 UTC, jmh530 wrote: On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote: 1) a set of operators that have no meaning unless an overload is specifically provided (for dot product, dyadic transpose, etc.) and I see your point, but I think it might be a bit risky if you allow too much freedom for overloading operators. For instance, what if two people implement separate packages for matrix multiplication, one adopts the syntax of R (%*%) and one adopts the new Python syntax (@). It may lead to some confusion. From the outset, my thought was to strictly define the set of (eight or so?) symbols for this. If memory serves, it was right around the time Walter rejected wholesale user-defined operators because of exactly the problem you mention. (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to be!?) I strongly suspect you don't need many simultaneous extra operators on a type to cover most cases. -Wyatt