Re: Dlang club meeting Thu May 30 7pm at the Red Robin
On Monday, 20 May 2024 at 22:36:30 UTC, Walter Bright wrote: Given last month's successful conversion of a sand pile to an atomic pile, this #dlang meeting will be about resurrecting the lost technology of the Atomic Earth Blaster. Thu May 30 7pm at the Red Robin 2390 148th Ave NE, Redmond, WA 98052 https://twitter.com/WalterBright/status/1792685408620605709 I am planning on being there, and I've got a list...
Re: SecureD 3.0 has been released!
On Thursday, 7 March 2024 at 17:06:00 UTC, Andrea Fontana wrote: On Wednesday, 6 March 2024 at 07:47:04 UTC, Adam Wilson wrote: SecureD 3.0 has been released. This version was set in motion by Cedric Picard, a D community member with cryptography experience, reaching out and suggesting a number of improvements to the Symmetric and KDF APIs. This resulted in an API for symmetric encryption that improves correctness and security without significantly increasing developer workload. Also improved is the encryption envelope that contains symmetrically encrypted data. Algorithm Changes: - Removed: SHA2-224, AES-OFB - Added: SHA3-224/256/384/512 And I even remembered to update the examples in the README. Future work will focus on using the Operating System provided cryptography where available. Check it out here: https://code.dlang.org/packages/secured Well done Adam! I only miss the docs :) Andrea Yea, I totally whiffed and forgot to add examples for the new methods. And I can never seem to find the time for the DDoc comments that it really needs. It'll happen ... someday.
Re: SecureD 3.0 has been released!
On Thursday, 7 March 2024 at 11:04:08 UTC, Dukc wrote: On Wednesday, 6 March 2024 at 07:47:04 UTC, Adam Wilson wrote: This version was set in motion by Cedric Picard, a D community member with cryptography experience, reaching out and suggesting a number of improvements to the Symmetric and KDF APIs. Wow, a Cym13 verified crypto library - excellent news! Indeed. I was actually excited to get the feedback. I am quite proud of what this release accomplished.
Re: SecureD 3.0 has been released!
On Thursday, 7 March 2024 at 08:17:47 UTC, aberba wrote: On Wednesday, 6 March 2024 at 07:47:04 UTC, Adam Wilson wrote: SecureD 3.0 has been released. This version was set in motion by Cedric Picard, a D community member with cryptography ... And I even remembered to update the examples in the README. +1 I wish more packages did this And even then I still forgot to add examples for the new Password methods...
SecureD 3.0 has been released!
SecureD 3.0 has been released. This version was set in motion by Cedric Picard, a D community member with cryptography experience, reaching out and suggesting a number of improvements to the Symmetric and KDF APIs. This resulted in an API for symmetric encryption that improves correctness and security without significantly increasing developer workload. Also improved is the encryption envelope that contains symmetrically encrypted data. Algorithm Changes: - Removed: SHA2-224, AES-OFB - Added: SHA3-224/256/384/512 And I even remembered to update the examples in the README. Future work will focus on using the Operating System provided cryptography where available. Check it out here: https://code.dlang.org/packages/secured
Re: Phobos 3 Development is Open!
On Thursday, 29 February 2024 at 00:34:59 UTC, zjh wrote: On Wednesday, 28 February 2024 at 20:41:46 UTC, Adam Wilson wrote: or start a discussion if there is disagreement on how to handle this. Although Github has discussions, why not just discuss them in the `D forum`? This is `the forum`. Github doesn't have the convenience of the `D forum` at all. Why bother with them? A couple of reasons. First, it's easier to cross-reference discussions with work on the design docs. Second, we started this during the whole fork and didn't need that bleeding into the discussion. Third, we didn't know how this GH experiment would work, but we wanted to try it out. So far it's been a smashing success.
Re: Phobos 3 Development is Open!
On Thursday, 29 February 2024 at 13:44:37 UTC, Andrew wrote: On Wednesday, 28 February 2024 at 20:41:46 UTC, Adam Wilson wrote: I am totally on board with this if the community thinks there are improvements to be had here. Head on over to the Design repo and you can either submit a PR to the design document, or start a discussion if there is disagreement on how to handle this. I'll see if I can check it out next weekend, if that's not too late. Studying for my private pilot exam takes priority sadly. As a fellow pilot, I agree with this prioritization.
Re: Phobos 3 Development is Open!
On Wednesday, 28 February 2024 at 19:45:53 UTC, Greggor wrote: On Wednesday, 28 February 2024 at 15:55:52 UTC, Andrew wrote: On Wednesday, 28 February 2024 at 15:45:06 UTC, ryuukk_ wrote: https://github.com/dlang/phobos/pull/8925/files#diff-647aa2ce9ebedd6759a2f1c55752f0279de8ae7ba55e3c270bd59e1f8c1a5162R131 Why can't D have its own types? I have to agree with ryuukk_ on this one; it's one thing to maintain binary compatibility with C, and another to import and use C symbols all over the place in the standard library. I think that interfaces with C (or any other language) should be reduced to the minimum required to make something work, and nothing more. So, keep the usage of malloc/realloc/free to allocators and whatnot, and use fread/fwrite only in stdio. Absolutely agreed, I'd love to use Phobos in code targeting smaller platforms that are not just desktop Linux, macOS, Windows, or BSD. Also, being able to use smaller libcs where a full glibc is not available would be really nice. I am totally on board with this if the community thinks there are improvements to be had here. Head on over to the Design repo and you can either submit a PR to the design document, or start a discussion if there is disagreement on how to handle this.
Phobos 3 Development is Open!
The first PR for Phobos 3 has been merged into the Phobos repo! Now, to be clear, this is mostly a housekeeping PR that paves the way for further work and there isn't actually anything useful in it yet. We've set up the basic structure, DUB build/test config, and copied over the modules that don't have any `std` imports. The next project will be to (slowly) audit each file for any trace of auto-decoding. No module can be added to V3 without first being cleared of auto-decoding. With this comes a few changes in the development process for Phobos. The first of which is that all changes to Phobos will now need to be cross-applied to both libraries. We have kept V3 in the same repo as V2 to make this process easier, but you will need to check the V3 directory (tentatively labeled `lib`) to see if the V2 file you're modifying is included in V3 yet, and if it is, please include your change in the corresponding V3 file as well. This will be enforced during the review process. If you are unsure, you can ping me on GitHub as `@LightBender` in your PR and I'll take a look. As we progress, I am sure that the other reviewers will become familiar with the state of the V3 code-base. Note that this process will continue for the indefinite future, as we intend to maintain V2 alongside V3+. While the exact level of support that V2 will receive once V3 is launched hasn't been fully resolved, at a minimum, it must remain in a buildable state indefinitely. Another note is that I do have a day job, so I will only be working on V3 in my spare time. As such, any help with the auto-decoding audit would be immensely appreciated, as that will significantly speed up the time to completion and let us start focusing on the projects we all want to see happen in Phobos.
If you have any V3 specific questions or comments you can join the discussion that's happening in this GitHub repo: https://github.com/LightBender/PhobosV3-Design Or, you can ask on the Forums and in the Discord and I'll do my best to keep up with the updates in those places. Finally, I would like to thank everybody who has contributed to the design of V3 already (there are too many to list, you know who you are), you've been a wonderful help as we start this journey. Thanks to your efforts the future of Phobos is looking very bright!
Re: Dlang mtg at Red Robin in Overlake 7pm tonight
On Wednesday, 24 January 2024 at 20:49:51 UTC, Walter Bright wrote: be there or be square! PhobosV3 is on the menu!
Re: Seattle D Meetup Mailing List - Ferrari Night
On Saturday, 16 December 2023 at 22:24:46 UTC, Walter Bright wrote: If you want to be on it, email me your address! We hope to have some fun activities for D aficionados. For example, I am planning "Ferrari Night" towards the end of the month where we all meet at the theater to watch "Ferrari". https://www.imdb.com/title/tt3758542 You have my email already!
Re: Seattle Area D-Meetup
On Tuesday, 12 December 2023 at 17:52:12 UTC, Gregor Mückl wrote: Hi! I'm interested in joining this time. Looking forward to meeting you all! I look forward to meeting you!
Seattle Area D-Meetup
Hello Everyone, If you're going to be in the Seattle area over the holidays, Walter, Bruce C, and I will be hanging out at the Red Robin in Redmond on December 14th from 7PM until whenever they kick us out. Normally we would meet after NWCPP, but they are on a holiday break this month so we have the opportunity to rant and rage... I mean have collegial dialog with Walter for an extended period of time this month. Address: 2390 148th Ave NE, Redmond, WA 98052
Re: Release D 2.106.0
On Saturday, 2 December 2023 at 18:09:11 UTC, Iain Buclaw wrote: Glad to announce D 2.106.0, ♥ to the 33 contributors. This release comes with... - In the D language, it is now possible to statically initialize AAs. - In dmd, there's a new `-nothrow` CLI flag. - In dub, `dub init` now has a select menu for package format and license. As always, you can find the release binaries and full changelog on the dlang.org site. http://dlang.org/download.html http://dlang.org/changelog/2.106.0.html -Iain on behalf of the Dlang Core Team I'm excited about this release! The ImportC fixes alone are worth the wait. And it's good to see the ODBC bindings restored. Up next: ODBC 4.0 in Phobos.
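Of the items in that list, static AA initialization is the one most likely to show up in day-to-day code. A minimal sketch of what it enables (module-scope initialization that previously required a module constructor such as `shared static this()`):

```d
// As of D 2.106.0, associative arrays can be statically initialized
// at module scope, rather than being filled in at runtime by a
// module constructor.
immutable int[string] errorCodes = ["ok": 0, "notFound": 404, "error": 500];

void main()
{
    assert(errorCodes["notFound"] == 404);
}
```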
Re: DLF September 2023 Planning Update
On Wednesday, 15 November 2023 at 05:27:40 UTC, Mike Parker wrote: If someone misses all of that and tries to use tuples without specifying edition N, the compiler should be able to tell them what the problem is, how to solve it (annotate your module declaration with `@edition(N)`), and provide the URL to the relevant documentation. This is exactly backwards from how most languages work today. By default you are working with the current edition, and you have to down-select to earlier editions. But if we're really enamored of using the crippled version of the language by default, then it is absolutely imperative that edition selection be a command-line option. Here is how it should work IMO. The build system needs to be able to specify editions at the package level, but when no package-level edition is specified, then you assume current. If you need to down-select to an older edition, then the dependency specifier in the project file that I control must set the correct edition. This neatly works around the lack of an edition specifier in abandonware without enforcing crippled-by-default behavior that is going to confuse the entire world.
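As a sketch of the proposal above, package-level edition selection might look something like this in a dub.json. Note that the `edition` keys here are invented purely for illustration; no such fields exist in dub today, and the edition names are placeholders:

```json
{
    "name": "myapp",
    "edition": "current",
    "dependencies": {
        "abandonedlib": { "version": "~>1.2.0", "edition": "2021" }
    }
}
```

The idea being that `myapp` builds with the current edition by default, while the dependency specifier pins the unmaintained library to the older edition it was written against.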
Re: First Beta 2.106.0
On Tuesday, 14 November 2023 at 17:44:11 UTC, Steven Schveighoffer wrote: This might be one of the greatest releases of D ever. -Steve I second this.
Re: SerpentOS departs from Dlang
On Saturday, 16 September 2023 at 12:34:24 UTC, Richard (Rikki) Andrew Cattermole wrote: Although I do want a write barrier on each struct/class, to allow for cyclic handling especially for classes. How dare you bring the High Heresy of write barriers into D! I thought that it was well understood that even mentioning Write Barriers is a mortal sin against the Church of @nogc. Kidding aside. If you do this, you might as well turn them on everywhere. After that it's an easy stroll to a non-blocking moving GC, which would end most complaints about the GC (nobody complains about the .NET GC anymore).
Re: SerpentOS departs from Dlang
On Friday, 15 September 2023 at 21:49:17 UTC, ryuukk_ wrote: Ikey seems to still want to use D, so the main driving factor is the contributors, i wonder what are the exact reasons, pseudo memory safety can't be the only reason I would guess that the following is the bigger problem: "we don't quite have the resources to also be an upstream for the numerous D packages we'd need to create and maintain to get our works over the finish line." This has long been a chicken-and-egg problem for D. We need more packages to attract more users, but we need more users building packages before we can attract more users. DIP1000 is also a bit of a marketing problem. We kinda-sorta promise that someday you'll be able to build memory safe programs without a GC. We need to either push through and get it done, or admit we're not actually going to get there and cut it. I know there are ref-counted languages, so theoretically it should be workable, but in a language as complex as D there may be dragons on the edges. In any case, we should not be marketing something we can't actually do.
Re: New beginnings - looking for part-time D programming help
On Friday, 24 March 2023 at 09:01:21 UTC, Sergey wrote: On Friday, 24 March 2023 at 07:54:06 UTC, Monkyyy wrote: On Thursday, 23 March 2023 at 16:02:46 UTC, Laeeth Isharc wrote: Hi. For those that didn't hear, I resigned from Symmetry in September and my last day was a couple of weeks back. [...] "Not hedge fund" and "scripting" doesn't seem like enough of a job description to me My guess it is something related to context reconstruction :) When someone said something with quite limited words and sentences - he has a lot more context in his mind. Not always this context is the same for other people. Based on description this script could help to align the same context of conversation. Just guessing :) Btw is any LLM embedding available in open source? That makes sense to me as well. When I last spoke with Laeeth his brain was 10 steps ahead of, and accelerating away from, wherever he was in the conversation. Keeping up with people like that means that either the other person has the IQ to fill in the context on the fly, or we use tools to fill it in for us lesser minds. I'm guessing that this is about the latter, since IQ isn't something that we have a lot of control over. Could be useful, but I can see quality being an issue; how useful such a tool is would depend on how accurately it deduces the correct context. Not an easy problem to solve.
SecureD 2.0 Released
SecureD is a library that provides strong cryptography with a simple-to-use interface that ensures that your data will be correctly and securely stored with a minimum amount of effort. What's New in 2.0? Complete rewrite of symmetric encryption and decryption. Prior to V2 the standard encryption and decryption functions only provided one set of algorithms and no path to use safe alternatives or work with encrypted data from other sources. Both of these shortcomings have now been rectified. If you need to perform custom encryption or interoperate with other systems, please see the encrypt_ex and decrypt_ex functions. New algorithms. - Digests: - SHA2 512/224, 512/256 - SHA3 224, 256, 384, 512 - Symmetric Algorithms: AES (128/192/256), ChaCha20 - Stream Modes: GCM, CTR, Poly1305 (ChaCha20 only) - Block Modes: OFB, CFB, CBC (PKCS7 Padding Only) - KDFs: - HKDF - SCrypt - ECC: - P256 Curve - P521 Curve To support these new algorithms, SecureD now requires OpenSSL 1.1.1. It is my opinion that, as 1.1.1 is the new LTS release, it would be prudent to upgrade now, as 1.1.1 will be supported for many years to come. Please note that the APIs have changed significantly. Additionally, due to the major overhaul of the symmetric encryption code and the new algorithms, I was unable to upgrade the Botan code-path and was forced to remove it to ship. If anybody is interested in doing the work to bring Botan back into the project I would be grateful. -- Adam Wilson IRC: EllipticBit import quiet.dlang.dev;
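Since the announcement names `encrypt_ex` and `decrypt_ex` but doesn't show them, here is a rough usage sketch. Everything beyond the two function names is an assumption for illustration: the module name, the `SymmetricAlgorithm` enum, and the parameter order are all invented, not SecureD's documented API, so consult the package's README for the real signatures:

```d
// Hypothetical sketch only: encrypt_ex/decrypt_ex are named in the
// announcement, but the module name, enum, and signatures below are
// assumptions, not the documented API.
import secured.symmetric; // assumed module name

void roundTrip(ubyte[] key, ubyte[] plaintext)
{
    // Assumed: the _ex variants take an explicit algorithm/mode choice,
    // which is what enables interop with data encrypted by other systems.
    ubyte[] ciphertext = encrypt_ex(SymmetricAlgorithm.AES256_GCM, key, plaintext);
    ubyte[] recovered  = decrypt_ex(SymmetricAlgorithm.AES256_GCM, key, ciphertext);
    assert(recovered == plaintext);
}
```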
Re: DConf 2019: Shepherd's Pie Edition
On 12/22/18 10:47 AM, Robert M. Münch wrote: On 2018-12-22 12:18:25, Mike Parker said: Thanks to Symmetry Investments, DConf is heading to London! We're still ironing out the details, but I've been sitting on this for weeks and, now that we have a venue, I just can't keep quiet about it any longer. Hi, you should consider the upcoming Brexit chaos, which is expected to have a high impact on all airlines. Currently I wouldn't bet that all parties involved get things sorted out until May... I very much doubt that Brexit will cause anything approaching chaos insofar as airlines are concerned. Currently all international flights are governed by the Montreal Convention, which was signed by the individual states of the EU and not the EU itself, and the ICAO, which is a UN function. They will remain in force regardless of the UK's status vis-a-vis Brexit. There may be the additional annoyance of EU folks having to pass through passport control depending on the final disposition of Brexit, but that's probably it. "Chaos" is a persuasion word that has zero measurable technical meaning; its purpose is to allow your mind to fill its space with your worst nightmares. Whenever I see it in the news I assume that the writer is ideologically opposed to whatever event the writer is describing and lacks any evidence to back up their claims. Airlines have had years to prepare for Brexit, and humans are generally pretty good at avoiding disasters that they've known about for years. My guess is that on Brexit day you won't even notice, save having to pass through an automated passport kiosk. -- Adam Wilson IRC: EllipticBit import quiet.dlang.dev;
Re: Autowrap for .NET is Now Available
On 12/14/18 2:33 PM, Sjoerd Nijboer wrote: Is there any overhead on the generated interface? Or overhead the compiler can't trivially optimise away. Yes, any overheads that would normally be associated with a P/Invoke call will be present here. Do you have any recommendations about mixing code, like: don't use strings for now, or try to minimize switching from D to C# and vice versa? Strings are fine. But definitely try to minimize switching, as there are some pretty significant overheads on the D side with module constructors. And "switching" can happen in some pretty well hidden places; for example, appending to a range is a switch. Do you have plans to incorporate this as a VisualD project/.csproj project, since it's already intended to be Microsoft oriented? (It would seem like a good fit to me.) That way you could even take away the mixin code and the "running the main method" code. I do not, since I wouldn't know where to start, but it's possible. Does it work with LDC or only with DMD? How about GCC on Linux? It works with anything that can output a shared library and C interfaces. :) -- Adam Wilson IRC: EllipticBit import quiet.dlang.dev;
Re: You don't like GC? Do you?
On 10/11/18 11:20 PM, JN wrote: On Thursday, 11 October 2018 at 21:22:19 UTC, aberba wrote: [snip] That is fine, if you want to position yourself as competition to languages like Go, Java or C#. D wants to be a viable competition to languages like C, C++ and Rust, as a result, there are usecases where GC might not be enough. Does it though? The way I see it is that people who want to do what C/C++ does are going to use ... C/C++. The same goes for Java/C#. People who want to do what Java/C# do are pretty much just going to use Java/C#. And nothing D does is going to convince them that D is truly better. For the C/C++ crowd, D's more involved semantics for non-GC code are ALWAYS going to be a turnoff. And for Java/C# people, D's less evolved standard library (and library ecosystem) is ALWAYS going to be a turnoff. Where D shines is in its balance between the two extremes. If I want to attempt what C# can do with C++, I'm going to spend the next ten years writing code to replace what ships OOB in .NET. If I want to use C# as a systems language, I have to reinvent everything that C# relies on from the ground up, which will cost me about 10 years (see MSR's Singularity). IMHO D should focus on being the best possible D it can be. If we take care of D, the rest will attend to itself. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Aurora DirectX Bindings 12.1
On 10/3/18 10:15 PM, rikki cattermole wrote: On 04/10/2018 5:33 PM, Nicholas Wilson wrote: On Thursday, 4 October 2018 at 04:03:27 UTC, rikki cattermole wrote: On 04/10/2018 2:06 PM, Adam Wilson wrote: The Aurora DirectX bindings have been updated to support Windows 10 1809. Also the D2D Effect Authoring SDK has been added. GitHub: https://github.com/auroragraphics/directx DUB: http://code.dlang.org/packages/aurora-directx Please send PR's if you find any bugs! It would be nice to get DirectX bindings into druntime. Just need 9/10 (some are floating about) after these mature a bit. I don't think that's wise, nor is it ever likely to happen. Druntime is D's runtime; the only reason there are bindings to the C and C++ standard libraries is because of their ubiquity. This is much better suited to a dub package. Side note, it's possible that I might end up adding a DX12 backend to LDC/DCompute (which I'm going to have to rename if I do) based on https://github.com/Microsoft/DirectXShaderCompiler/ even more likely if MS (Hi Adam) upstream it to LLVM. Direct3D is part of the system API on Windows today; it is equivalent to using WinAPI for graphics. Which is well within the scope of druntime bindings. A couple of thoughts on this. First, IIRC, the Win32 bindings in DRT are nowhere near a complete implementation of Win32; I think they're primarily from user32.lib. Second, the DirectX bindings themselves are absolutely massive and they move pretty quickly (they change with every release of Windows, so every 6 months right now). Putting them in DRT would significantly slow down updates. For example, I had these updates released within two days of the release of 1809. Third, my implementation is not complete. There are a bunch of missing macros and helper functions that have not yet been ported. So for now I definitely think the package route is a better option. But if you do end up using these bindings on DCompute please let me know!
I've made sure that all the Shader interfaces exist, but if you find anything missing I will gladly accept PRs. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Aurora DirectX Bindings 12.1
The Aurora DirectX bindings have been updated to support Windows 10 1809. Also the D2D Effect Authoring SDK has been added. GitHub: https://github.com/auroragraphics/directx DUB: http://code.dlang.org/packages/aurora-directx Please send PR's if you find any bugs! -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Please don't do a DConf 2018, consider alternatives
On 10/2/18 4:34 AM, Joakim wrote: On Tuesday, 2 October 2018 at 09:39:14 UTC, Adam Wilson wrote: On 10/1/18 11:26 PM, Joakim wrote: [snip] I disagree. It is not clear what you disagree with, since almost nothing you say has any bearing on my original post. To summarize, I suggest changing the currently talk-driven DConf format to either 1. a more decentralized collection of meetups all over the world, where most of the talks are pre-recorded, and the focus is more on introducing new users to the language or 2. at least ditching most of the talks at a DConf still held at a central location, maybe keeping only a couple panel discussions that benefit from an audience to ask questions, and spending most of the time like the hackathon at the last DConf, ie actually meeting in person. This point has a subtle flaw. Many of the talks raise points of discussion that would otherwise go without discussion, and potentially unnoticed, if it were not for the person bringing it up. The talks routinely serve as a launchpad for the nightly dinner sessions. Benjamin Thaut's 2016 talk about shared libraries is one such example. Indeed, every single year has brought at least one (but usually more) talk that opened up some new line of investigation for the dinner discussions. Since both of these alternatives I suggest are much more about in-person interaction, which is what you defend, and the only big change I propose is ditching the passive in-person talks, which you do not write a single word in your long post defending, I'm scratching my head about what you got out of my original post. There is much more to the conference than just a 4-day meetup with talks. The idea that it's just the core 8-15 people with a bunch of hangers-on is patently false. It's not about the conversations I have with the "core" people. It's Schveighoffer, or Atila, or Jonathan, or any of a long list of people who are interested enough in coming.
Remember these people self-selected to invest non-trivial treasure to be there; they are ALL worthy of conversing with. Since both my mooted alternatives give _much more_ opportunity for such interaction, I'm again scratching my head at your reaction. This is untrue. See responses further down. Is it a "mini-vacation"? Yea, sure, for my wife. For her it's a four day shopping spree in Europe. For me it's four days of wall-to-wall action that leaves me drop-dead exhausted at the end of the day. So it's the talks that provide this or the in-person interaction? If the latter, why are you arguing against my pushing for more of it and ditching the in-person talks? It's everything. The talks, the coding, the talking, the drinking. All of it has some social component I find valuable. Every time I see somebody predicting the end of "X" I roll my eyes. I have a vivid memory of the rise of Skype and videoconferencing in the early 2000's giving way to breathless media reports about how said tools would kill the airlines because people could just meet online for a trivial fraction of the price. People make stupid predictions all the time. Ignoring all such "end of" predictions because many predict badly would be like ignoring all new programming languages because 99% are bad. That means you'd never look at D. And yes, some came true: almost nobody programs minicomputers or buys standalone mp3 players like the iPod anymore, compared to how many used to at their peak. Sure, but the predictions about videoconferencing have yet to come true. As told by the data itself. The travel industry is setting new records yearly in spite of videoconferencing. That's not conjecture or opinion; go look for yourself. As I have previously suggested, the stock prices and order-books of Airbus and Boeing are at record highs. Airplanes are more packed than ever (called load-factor). For example, Delta's system-wide load-factor was 85.6% last year.
Which means that 85.6% of all available seats for the entire year were occupied. (Source: https://www.statista.com/statistics/221085/passenger-load-factor-of-delta-air-lines/). Airlines are delivering entire planes for business travelers. All of this demonstrates that videoconferencing has done nothing to curb travel demand, and the current data suggest that it is unlikely to do so in the foreseeable future. That it might at some point in the distant future is not relevant to this discussion. However, it's 2018 and the airlines are reaping record profits on the backs of business travelers (ask me how I know). Airlines are even now flying planes with NO standard economy seats for routes that cater specifically to business travelers (e.g. Singapore Airlines A350-900ULR). The order books (and stock prices) of both Airbus and Boeing are at historic highs. You know what is much higher? Business communication through email, video-conferencing, online source control, etc. that completely replaced old ways
Re: Please don't do a DConf 2018, consider alternatives
On 10/1/18 11:26 PM, Joakim wrote: [snip] I disagree. There is much more to the conference than just a 4-day meetup with talks. The idea that it's just the core 8-15 people with a bunch of hangers-on is patently false. It's not about the conversations I have with the "core" people. It's Schveighoffer, or Atila, or Jonathan, or any of a long list of people who are interested enough in coming. Remember these people self-selected to invest non-trivial treasure to be there; they are ALL worthy of conversing with. Is it a "mini-vacation"? Yea, sure, for my wife. For her it's a four day shopping spree in Europe. For me it's four days of wall-to-wall action that leaves me drop-dead exhausted at the end of the day. Every time I see somebody predicting the end of "X" I roll my eyes. I have a vivid memory of the rise of Skype and videoconferencing in the early 2000's giving way to breathless media reports about how said tools would kill the airlines because people could just meet online for a trivial fraction of the price. However, it's 2018 and the airlines are reaping record profits on the backs of business travelers (ask me how I know). Airlines are even now flying planes with NO standard economy seats for routes that cater specifically to business travelers (e.g. Singapore Airlines A350-900ULR). The order books (and stock prices) of both Airbus and Boeing are at historic highs. There are more conferences, attendees, and business travelers than there have ever been in history, in spite of the great technological leaps in videoconferencing technology in the past two decades. The market has spoken. Reports of the death of business/conference travel have been greatly exaggerated. The reason for this is fundamental to human psychology and, as such, is unlikely to change in the future. Humans are social animals, and no matter how hard we have tried, nothing has been able to replace the face-to-face meeting for getting things done.
Be it the conversations we have over beers after the talks, or the epic number of PR's that come out of the hackathon, or even mobbing the speaker after a talk. Additionally, the conference serves other "soft" purposes. Specifically, marketing and education. The conference provides legitimacy to DLang and the Foundation both by its mere existence and as a venue for companies using DLang to share their support (via sponsorships) or announce their products (as seen by the Weka.io announcement at DConf 2018), which further enhances the marketing of both the product being launched and DLang itself. I have spoken to Walter about DConf numerous times. He has nothing against, and indeed actively encourages, local meetups. But they do not serve the purpose that DConf does. My understanding from my conversations with Walter is that the primary purpose of DConf is to provide a venue that is open to anyone interested to come together and discuss all things D. He specifically does not want something that is only limited to the "core" members. As this suggestion runs precisely counter to the primary stated purpose of DConf, it is unlikely to gain significant traction from the D-BDFL. Yes, it is expensive, but in all the years I've attended, I have not once regretted spending the money. And indeed, coming from the west coast of the US, I have one of the more expensive (and physically taxing) trips to make. I know a number of people who found jobs in D through DConf; would that not make the conference worth it to them? Something is only expensive if you derive less value from it than it costs. And for many people here, I understand if the cost-benefit analysis does not favor DConf. But calling for an end to DConf simply because it doesn't meet someone's cost-benefit ratio is inconsiderate to the rest of us who do find the benefit.
Nobody is making you go, and, since you already get everything you want from the YouTube video uploads during the conference, why do you care if the rest of us "waste" our money on attending the conference? That is our choice. Not yours. -- Adam Wilson IRC: LightBender import quiet.dlang.dev; Note: Limiting anything to "core" members is a guaranteed way to create a mono-culture and would inevitably lead to the stagnation of D. Which is why anybody can post to all NG's, even the internals NG.
Re: SecureD moving to GitLab
On 06/05/2018 12:28 AM, Brian wrote: On Tuesday, 5 June 2018 at 06:55:42 UTC, Joakim wrote: On Tuesday, 5 June 2018 at 06:45:48 UTC, Adam Wilson wrote: Hello Fellow D'ers, As some of you know I work for Microsoft. And as a result of the recent acquisition of GitHub by Microsoft, I have decided, out of an abundance of caution, to move all of my projects that currently reside on GitHub to GitLab. [...] This reads like a joke, why would it matter if you contributed to open source projects on an open platform that your employer runs? Yes! We support Github. Note that I am not saying that this is a bad move for Microsoft or GitHub. Elsewhere on these forums I have defended the move as the best possible outcome for GitHub. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: SecureD moving to GitLab
On 06/04/2018 11:55 PM, Joakim wrote: On Tuesday, 5 June 2018 at 06:45:48 UTC, Adam Wilson wrote: Hello Fellow D'ers, As some of you know I work for Microsoft. And as a result of the recent acquisition of GitHub by Microsoft, I have decided, out of an abundance of caution, to move all of my projects that currently reside on GitHub to GitLab. [...] This reads like a joke, why would it matter if you contributed to open source projects on an open platform that your employer runs? And this reads like someone who has never talked to a lawyer. :) I am intentionally keeping this as ambiguous as possible so that others don't try to take this as legal advice. I'm guessing you live somewhere outside the US? For reference, I do live in the US. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: GitHub could be acquired by Microsoft
On 06/04/2018 08:53 PM, Adam Wilson wrote: On 6/3/18 20:51, Anton Fediushin wrote: This is still just a rumour, we'll know the truth on Monday (which is today). Some articles about the topic: https://fossbytes.com/microsoft-github-aquisition-report/ https://www.theverge.com/2018/6/3/17422752/microsoft-github-acquisition-rumors What's your opinion about that? Will you continue using GitHub? Both GitLab and Bitbucket can be used instead to host your D projects - the dub registry has supported them for a while now. IMHO Microsoft isn't the type of company I want to see behind GitHub. Maybe I am wrong since Microsoft has both money and programmers to improve it further, I just don't trust them too much, which is the right thing to do when dealing with companies. This means that I will move my repositories elsewhere and use GitHub just to contribute to other projects. I've been thinking how to best respond to this and here is where I am. First, let me state up-front that I work for Microsoft (Office 365 Workplace Analytics). Second, my employer (Volometrix) prior to working for Microsoft was acquired by Microsoft almost three years ago. What that means is that while my division had no forewarning of this acquisition, I have first-hand experience with what will be happening at GitHub over the next months and years. As an employee of Microsoft I am required to follow Microsoft's policy on Social Media, which can be reduced to "If you have nothing nice to say, then say nothing at all." Or stated plainly, what follows may or may not represent the entirety of my thoughts on the matter, as I am effectively barred from revealing any negative thoughts. So what I can say about this acquisition is that it is the best possible outcome of GitHub's possible futures for both the company and the employees. GitHub has not been profitable for years and is thought to have had cash reserves for only one or two more months of operations.
Losing GitHub entirely overnight would have been an unmitigated disaster for the entire Open-Source community. And there are fates worse than death. Imagine for a second GitHub at Google or ... *shudder* Oracle. Whatever your opinions about Microsoft, you cannot possibly imagine that either of those outcomes would have been qualitatively better. In that sense Microsoft was the best of the bad options for GitHub. As to any other concerns/opinions, all I will say is ... think laterally. As a reminder, I have no inside information on what goes on over in the Azure world, and that is where GitHub will land, as has been announced. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: GitHub could be acquired by Microsoft
On 06/04/2018 11:46 PM, RalphBa wrote: Sorry to hear that. Since I do not believe Microsoft has changed perspective, and am convinced they still see open source as a cancer, I have to assume they have been trying to infiltrate the OSS community these last years. So for sure I won't rely on their stuff. So is there a chance Digital Mars and D main development is getting bought by Microsoft? BR Ralph They have C++ and C#. What do they need D for? -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
SecureD moving to GitLab
Hello Fellow D'ers, As some of you know I work for Microsoft. And as a result of the recent acquisition of GitHub by Microsoft, I have decided, out of an abundance of caution, to move all of my projects that currently reside on GitHub to GitLab. Additionally, until I cease working for Microsoft, I will no longer be contributing code to projects hosted on GitHub, including DLang and its related projects. I will continue to contribute bug reports and post to the forums. I will post a link to the new SecureD repo on this thread and update the DUB links once I have everything set up correctly post-move. DISCLAIMER: The actions described herein are the result of my specific situation and not intended as a larger commentary on recent events. This message should not be considered legal advice in any way. Any Microsoft employees reading this thread should refer to their lawyers about their specific situation or concerns. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: GitHub could be acquired by Microsoft
On 6/3/18 20:51, Anton Fediushin wrote: This is still just a rumour, we'll know the truth on Monday (which is today). Some articles about the topic: https://fossbytes.com/microsoft-github-aquisition-report/ https://www.theverge.com/2018/6/3/17422752/microsoft-github-acquisition-rumors What's your opinion about that? Will you continue using GitHub? Both GitLab and Bitbucket can be used instead to host your D projects - the dub registry has supported them for a while now. IMHO Microsoft isn't the type of company I want to see behind GitHub. Maybe I am wrong since Microsoft has both money and programmers to improve it further, I just don't trust them too much, which is the right thing to do when dealing with companies. This means that I will move my repositories elsewhere and use GitHub just to contribute to other projects. I've been thinking how to best respond to this and here is where I am. First, let me state up-front that I work for Microsoft (Office 365 Workplace Analytics). Second, my employer (Volometrix) prior to working for Microsoft was acquired by Microsoft almost three years ago. What that means is that while my division had no forewarning of this acquisition, I have first-hand experience with what will be happening at GitHub over the next months and years. As an employee of Microsoft I am required to follow Microsoft's policy on Social Media, which can be reduced to "If you have nothing nice to say, then say nothing at all." Or stated plainly, what follows may or may not represent the entirety of my thoughts on the matter, as I am effectively barred from revealing any negative thoughts. So what I can say about this acquisition is that it is the best possible outcome of GitHub's possible futures for both the company and the employees. GitHub has not been profitable for years and is thought to have had cash reserves for only one or two more months of operations.
Losing GitHub entirely overnight would have been an unmitigated disaster for the entire Open-Source community. And there are fates worse than death. Imagine for a second GitHub at Google or ... *shudder* Oracle. Whatever your opinions about Microsoft, you cannot possibly imagine that either of those outcomes would have been qualitatively better. In that sense Microsoft was the best of the bad options for GitHub. As to any other concerns/opinions, all I will say is ... think laterally. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: SecureD Futures (v2.0)
On 05/29/2018 11:29 AM, Brad Roberts wrote: On 5/29/2018 1:57 AM, Adam Wilson via Digitalmars-d wrote: One of the pillars of SecureD is that ONLY safe, well-known, algorithms are presented. If reasonable we will only present one algorithm for a specific purpose. If there is a good reason to add more than one algorithm, we will. One of the pillars of crypto is that eventually a problem will be found with every algorithm, it's just a matter of time. So, having just one available means that eventually the library will be horribly broken. The corollary here is that having a fallback is pretty essential. Sure. For example, TLS 1.3 supports AES, Camellia, ARIA, and ChaCha20-Poly1305 for symmetric ciphers. However, when you look at the best-practice recommendations, AES is the clear winner. And when you look at what actually gets used, it's AES. AES is the Gold Standard and it would take a fundamental breakthrough in cryptanalysis for that to change. But what happens to TLS 1.3 if there is a fundamental breakthrough?

- ARIA uses an AES-derived substitution-permutation network.
- Camellia uses the same S-box that AES does. (It was an AES contender.)
- Poly1305 for ChaCha20 directly uses AES in its PRF.

Which algorithm do you switch to? Anything that breaks AES can also be used against these other algorithms. In this case, ChaCha20 is probably the best answer, but only if you apply a different MAC to it. It's a good thing that a fundamental breakthrough in cryptanalysis is unlikely in the foreseeable future. Of course, that's what we told ourselves before 3DES and Differential Cryptanalysis. I don't disagree with you. But the overwhelming cacophony of choice presents its own issues. (See: The Paradox of Choice) SecureD v2 could easily support these ciphers, and I don't really have a problem adding them for the purposes of compatibility. But the default is going to be AES for the foreseeable future.
The design premise of SecureD is to make the safe defaults easy and everything else hard. In cases where there are known weaknesses SecureD does intend to support multiple algorithms, but usually only two. For example, SHA2 potentially suffers from the same weaknesses as SHA1, and as a result SecureD v2 will be getting SHA3 support whenever OpenSSL 1.1.1 drops. But it's not going to add things like Whirlpool or Skein. Or RSA versus ECC. I am considering adding Curve25519 support to v2, but with NSA/NIST deprecating ECC altogether (as it is less resistant to quantum computing than RSA, see: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#Quantum_computing_attacks) it may be better to wait and see what comes of the work being done on quantum-resistant public-key cryptography. The new Supersingular Isogeny Diffie-Hellman (SIDH) looks like a strong contender, but nothing implements it, and it's way too new to put any trust in it yet. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: SecureD Futures (v2.0)
On 05/28/2018 04:02 PM, sarn wrote: On Monday, 28 May 2018 at 07:52:43 UTC, Adam Wilson wrote: I understand that. Sorry, not for nothing, but you obviously don't. For starters, if you were familiar with the key derivation tools available 24hrs ago, you wouldn't have come up with PBKDF2 on PBKDF2. I suggest slowing down a little, and asking people on a crypto forum if you're still not sure. If you explain your problem from the start, they might even have some better ideas. When that RFC (correctly) recommends using a salt, it's talking about HKDF-Extract, which is a tool for taking a large amount of mostly-random data and turning it into an appropriate KDK. That's not the problem you have here. The problem you have is generating keys for different purposes from a KDK. That's a problem HKDF-Expand addresses, and it doesn't use a salt. Let me ask the question a different way. What is the reason NOT to use 2 different salts for the MAC and KEY generation steps? You might choose to not use extra salts for the same reason you've already chosen to not encrypt three times, or add extra HMACs for each individual block of ciphertext: it's not solving any problems. Ok, but I am not harming anything either. I asked specifically what would be reasons not to do something; you replied with "it's not solving any problems". In a technical sense you are correct. I am not solving any problems. However, I am not creating any either. Adding a salt to an HKDF does not hurt anything. Ever. As to the PBKDF2 comment. I'll concede that I am not perfect. Which is why I asked for input, and I obviously "slowed down" enough to bother to ask. And when my mistake was pointed out I corrected it without further comment. We all make mistakes sometimes, and cryptography is much harder than most fields to get right. So when we aren't sure we ask. And my statement was posed as a question. I was inviting feedback. One of the pillars of SecureD is that ONLY safe, well-known, algorithms are presented.
If reasonable we will only present one algorithm for a specific purpose. If there is a good reason to add more than one algorithm, we will. For example, I am well aware of the BCrypt/SCrypt family as well as Argon2. At this point Argon2 appears to be the heir-apparent. Sadly, Argon2 is so new that neither Botan nor OpenSSL implements it. So that one is ruled out for practical purposes, at least for now. I would also like to point out that OpenSSL will never implement BCrypt, but does implement SCrypt, and an Argon2 implementation is an open question due to its use of threads. It turns out the key-stretching field is very active right now, and precisely what will become the de-facto standard is impossible to say. I don't feel comfortable trying to pick a winner to implement. For the moment, PBKDF2 is stable, broadly implemented, and most importantly, well-understood. SecureD v2 will default to 1,000,000 iterations, which yields roughly 650ms on an Intel 8700K. As older hardware is unlikely to be as fast, times of over one second could be seen in production while still providing some buffer against the future. That said, the default PBKDF2 method will be tune-able. I have long hoped for a reasonable replacement for PBKDF2, and with all the recent activity it is likely that something will shake out. But the attacks against Argon2 coming out so soon after its release do not bode well for it in the future. And SCrypt requiring 16MB of RAM to achieve BCrypt's security makes it challenging to recommend for use on servers, especially in today's cloud environments where memory costs money. I know PBKDF2 is the conservative choice, but SecureD is all about conservative. For the moment, there aren't any great options, so conservative wins by default. SCrypt Memory Math: Assume a 4GB VM in the cloud where we are willing to dedicate 1GB to SCrypt per second at BCrypt-equivalent security. 1024/16 = 64. That's 64 connections per second, which is ... not great.
And dedicating 1/4 of your system's RAM just for password hashing is being very generous. You're not going to see that in production because it is terribly expensive. Since PBKDF2 and SCrypt are the only key-stretching algorithms available in both OpenSSL and Botan, and I cannot recommend SCrypt for cloud/server scenarios, PBKDF2 it is. To be perfectly honest, key-stretching is a mitigation. Weak passwords will be easily guessed no matter what algorithm is used. The real fix is to force better passwords. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
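Since the 1,000,000-iteration default was measured on one specific CPU (roughly 650ms on an 8700K), a tune-able default is best picked empirically on the deployment hardware. A rough D sketch of that calibration idea, using repeated SHA-256 as a stand-in for a real PBKDF2 (all names here are illustrative, not SecureD's API):

```d
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.digest : digest;
import std.digest.sha : SHA256;

// Stand-in work loop: real code would run PBKDF2 itself, not bare SHA-256.
void fakeKdf(uint iterations)
{
    ubyte[32] block;
    foreach (i; 0 .. iterations)
        block = digest!SHA256(block[]);
}

// Find an iteration count that costs roughly targetMsecs on this machine.
uint calibrate(uint targetMsecs)
{
    uint iters = 100_000;
    while (true)
    {
        auto sw = StopWatch(AutoStart.yes);
        fakeKdf(iters);
        auto elapsed = sw.peek.total!"msecs";
        if (elapsed >= targetMsecs || iters >= 10_000_000)
            return iters;
        iters *= 2; // double until we cross the target time
    }
}

void main()
{
    import std.stdio : writeln;
    writeln("suggested iterations: ", calibrate(650));
}
```

The doubling search is deliberately crude; the point is only that the iteration count should be a measured deployment-time choice rather than a hard-coded constant.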
Re: SecureD Futures (v2.0)
On 05/28/2018 12:14 AM, sarn wrote: On Monday, 28 May 2018 at 06:22:02 UTC, Adam Wilson wrote: On 05/27/2018 08:52 PM, sarn wrote: On Monday, 28 May 2018 at 02:25:20 UTC, Adam Wilson wrote: I like it. But it does require more space. We need three salts and three lengths in the header. One for the PBKDF2 KDK, one for the MAC key, and one for the encryption key. HKDF-Expand doesn't need a salt. You just need one salt to make the KDK (whether you use PBKDF2 or HKDF-Extract for that) and no extra salts for deriving the encryption and MAC key. Strictly speaking, it's Optional but Strongly Recommended per RFC5869-3.1. There's HKDF-Expand and there's HKDF-Extract. HKDF-Extract takes an optional salt and HKDF-Expand doesn't use a salt at all (take a closer look at that RFC). That OpenSSL routine is the combination of the two, so that's why it takes the salt. I understand that. But the way I read the first paragraph of 3.1 is that yes, it can work without a salt, but HKDF, in general, is strongly suggested to be used with a salt. If we are really concerned about bytes we could reuse the salt for all three steps (KDK/MAC/KEY), but TBH, I'm not worried about disk usage. Let me ask the question a different way. What is the reason NOT to use 2 different salts for the MAC and KEY generation steps? -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: SecureD Futures (v2.0)
On 05/27/2018 08:52 PM, sarn wrote: On Monday, 28 May 2018 at 02:25:20 UTC, Adam Wilson wrote: I like it. But it does require more space. We need three salts and three lengths in the header. One for the PBKDF2 KDK, one for the MAC key, and one for the encryption key. HKDF-Expand doesn't need a salt. You just need one salt to make the KDK (whether you use PBKDF2 or HKDF-Extract for that) and no extra salts for deriving the encryption and MAC key. Strictly speaking, it's Optional but Strongly Recommended per RFC5869-3.1. The use case here is that this data is going into storage and that storage is cheap. We don't have to be strict on our byte budget. :) https://tools.ietf.org/html/rfc5869 https://www.openssl.org/docs/man1.1.0/crypto/EVP_PKEY_CTX_set_hkdf_md.html SecureD is supposed to be "Crypto done right." So I might as well do it right and follow the RFC. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
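For what it's worth, RFC 5869 splits HKDF into Extract (which takes the optional salt being debated above) and Expand (which takes none). A minimal D sketch of the Expand step over Phobos' std.digest.hmac, showing why no extra salts are needed when deriving the MAC and encryption keys from one KDK (hkdfExpand is my own name for illustration, not SecureD's API):

```d
import std.digest.hmac : hmac;
import std.digest.sha : SHA256;

// HKDF-Expand per RFC 5869 section 2.3: T(i) = HMAC(PRK, T(i-1) | info | i).
// Note there is no salt parameter; the salt belongs to HKDF-Extract only.
// Key separation comes from the caller-chosen "info" context string.
ubyte[] hkdfExpand(const(ubyte)[] prk, const(ubyte)[] info, size_t length)
{
    ubyte[] okm; // output keying material
    ubyte[] t;   // T(0) is the empty string
    ubyte counter = 1;
    while (okm.length < length)
    {
        auto h = hmac!SHA256(prk);
        h.put(t);
        h.put(info);
        h.put([counter]);
        t = h.finish().dup;
        okm ~= t;
        counter++;
    }
    return okm[0 .. length];
}

void main()
{
    ubyte[32] kdk; // pretend this came from PBKDF2
    auto encKey = hkdfExpand(kdk[], cast(const(ubyte)[])"encryption", 32);
    auto macKey = hkdfExpand(kdk[], cast(const(ubyte)[])"mac", 32);
    assert(encKey != macKey); // distinct keys from one KDK, no extra salts
}
```

The distinct "info" labels play the role the extra salts would have played, which is the standard framing sarn refers to.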
Re: SecureD Futures (v2.0)
On 05/27/2018 05:11 PM, sarn wrote: On Sunday, 27 May 2018 at 10:27:45 UTC, Adam Wilson wrote:

struct cryptoHeader {
    ubyte hdrVersion; // The version of the header
    ubyte encAlg;     // The encryption algorithm used
    ubyte hashAlg;    // The hash algorithm used
    uint kdfIters;    // The number of PBKDF2 iterations
    ubyte authLen;    // The length of the authentication value
    ubyte saltLen;    // The length of the PBKDF2 salt
    ubyte ivLen;      // The length of the IV
    ulong encLen;     // The length of the encrypted data
    ulong adLen;      // The length of the additional data
}

Should there be any kind of key identifier, or do you consider that a separate issue? hashAlg is used for everything related to key derivation. Salts and IVs are random. - MacKey = PBKDF2 over EncKey once using same inputs as EncKey How about this? Use PBKDF2 to generate a key-derivation-key (KDK), then use HKDF-Expand (https://tools.ietf.org/html/rfc5869) to derive the encryption key and MAC key separately. I think that's more standard. I don't know how much it matters in practice, but a lot of cryptographers don't like generating MAC/encryption keys from each other directly. I like it. But it does require more space. We need three salts and three lengths in the header. One for the PBKDF2 KDK, one for the MAC key, and one for the encryption key. Something like this:

struct cryptoHeader {
    ubyte hdrVersion; // The version of the header
    ubyte encAlg;     // The encryption algorithm used
    ubyte hashAlg;    // The hash algorithm used
    uint kdfIters;    // The number of PBKDF2 iterations
    ubyte kdkSLen;    // The length of the KDK Salt
    ubyte macSLen;    // The length of the MAC Salt
    ubyte keySLen;    // The length of the KEY Salt
    ubyte ivLen;      // The length of the IV
    ulong encLen;     // The length of the encrypted data
    ulong adLen;      // The length of the additional data
    ubyte authLen;    // The length of the authentication value
}

Also, it's probably safer to use HKDF-Extract instead of using raw keys directly when not using PBKDF2. It is.
And we could do that if you stick to the default encrypt/decrypt routines. I want to expand the symmetric capabilities of SecureD, so I am going to rebuild the encrypt/decrypt routines as a composition of smaller methods. That will allow users to either use the default encryption (simple) or, if they have to interoperate with other systems, use the component methods to perform their operations. I am not looking to cover all scenarios though, just the common ones. If you have something truly unusual, you'll still need to drop down to the base crypto libraries. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: SecureD Futures (v2.0)
On 05/27/2018 09:54 AM, Neia Neutuladh wrote: On Sunday, 27 May 2018 at 10:27:45 UTC, Adam Wilson wrote: Now that SecureD v1 is in the books This would have been a great place to insert a brief description of what SecureD is or a link to the project. Good point. SecureD is a cryptography library that is designed to make non-transport-layer cryptography simple to use. I created it after working with OpenSSL and observing that the API design of OpenSSL actively encourages broken implementations. Think of it as a cryptography library for the rest of us. You can read more about it here: http://code.dlang.org/packages/secured -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
SecureD Futures (v2.0)
Now that SecureD v1 is in the books I thought it would be worthwhile to explore what a second version could look like. I specifically want to focus on expanding compatibility with other systems. For example: AWS uses SHA2-256 for signing requests. As implemented today SecureD does not support this. However, this is one of the scenarios that falls under SecureD's mission of easy-to-use cryptography for non-transport-layer security. I am curious what people would like to see in SecureD moving forward, and I would love to get your feedback on what works and what doesn't work in v1 of SecureD. With this in mind here are my ideas for expanding SecureD's capabilities. I'll start with the idea that I think will be the least controversial.

1. Full support for SHA-2 and SHA-3 for Hashes and HMAC's. This means the following modes would be supported:

SHA2: 224, 256, 384, 512, 512/224, 512/256
SHA3: 224, 256, 384, 512

The PBKDF2 implementation would also be updated to support the above hash functions. Hash methods supporting salts will be added. To maintain backwards compatibility with v1, methods that match the v1 signatures and forward to the correct implementation would be provided, as the v1 defaults are still secure. This requires OpenSSL 1.1.1 at a minimum. Note that SHA3 support could be delayed until SecureD 2.1 if OpenSSL 1.1.1 is not available on a majority of systems prior to shipping 2.0. In that case OpenSSL 1.1.0 would become the minimum supported version.

2. More Flexibility for Symmetric Encryption. Currently SecureD only supports AES256-CTR-HMAC384 and the encrypted data is laid out as: HASH+IV+DATA. This is only useful in scenarios where the developer controls both the encryption and decryption of the data. Additionally, this does not support AEAD modes, only AE modes. The first thing I would like to do is to support all AES, SHA2, and SHA3 modes. I also want to add support for the CBC cipher mode with PKCS#7 padding as that is the most common cipher mode in use.
One of the primary use cases for SecureD is long-term storage of data. To facilitate this requirement I would like to add a binary encoded header to each encrypted blob. This header would contain the necessary information to decode the blob (minus the key of course). The header would cost 26 bytes per encrypted blob and would have the following layout:

struct cryptoHeader {
    ubyte hdrVersion; // The version of the header
    ubyte encAlg;     // The encryption algorithm used
    ubyte hashAlg;    // The hash algorithm used
    uint kdfIters;    // The number of PBKDF2 iterations
    ubyte authLen;    // The length of the authentication value
    ubyte saltLen;    // The length of the PBKDF2 salt
    ubyte ivLen;      // The length of the IV
    ulong encLen;     // The length of the encrypted data
    ulong adLen;      // The length of the additional data
}

The data layout would be: HEADER+AUTH+SALT+IV+ENC+AD. Any excluded elements would be marked as 0 in the header and completely excluded from the data layout. The MAC would be computed as: HMAC(HEADER+SALT+IV+ENC+AD). The full encryption process would be as follows:

- Create a PBKDF2 salt via RNG
- Create IV via RNG
- Create header from user inputs
- EncKey = PBKDF2 over RawKey unless hashAlg is None or kdfIters == 0
- Use RawKey as-is if hashAlg is None or kdfIters == 0
- MacKey = PBKDF2 over EncKey once using same inputs as EncKey
- Encrypt data
- Compute HMAC(HEADER+SALT+IV+ENC+AD)
- Return RESULT(HEADER+AUTH+SALT+IV+ENC+AD)

Methods would be added to work with the following:

- Simple encrypted blobs (no header/auth/salt/iv/ad included)
- Creating and Verifying AE/AD MACs without headers
- Methods to read SecureD v1 encrypted data

Note that this plan is NOT compatible with the SecureD v1 encryption data layout and data would need to be re-encrypted.

3. Re-implement the OpenSSL Bindings. The Deimos/DUB bindings are ancient and do not support OpenSSL 1.1.x. We can re-implement the bindings taking care to only include the symbols we need.

4.
Streams. I get asked about this, and, as much as I want to do this, it is not a simple topic. std.streams was removed and right now the only viable replacement is Vibe.D streams. But that means pulling in Vibe.D for a simple cryptography library. At this point that doesn't seem like the right idea. If someone is willing to step up and do the work I'd be willing to look at it, but for now I want to wait on this, preferably for a standard/generic streams interface to be made available. Please let me know what you think! I am very interested to hear about what would make your life easier when working with SecureD and cryptography in general. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
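As a sketch of the byte budget discussed under point 2, the 26-byte header could be serialized with std.bitmanip along these lines. This is my own illustration of one possible fixed layout (little-endian chosen arbitrarily), not SecureD's actual wire format:

```d
import std.bitmanip : nativeToLittleEndian;

struct CryptoHeader
{
    ubyte hdrVersion; // The version of the header
    ubyte encAlg;     // The encryption algorithm used
    ubyte hashAlg;    // The hash algorithm used
    uint  kdfIters;   // The number of PBKDF2 iterations
    ubyte authLen;    // The length of the authentication value
    ubyte saltLen;    // The length of the PBKDF2 salt
    ubyte ivLen;      // The length of the IV
    ulong encLen;     // The length of the encrypted data
    ulong adLen;      // The length of the additional data
}

// 1+1+1+4+1+1+1+8+8 = 26 bytes on the wire, matching the stated budget.
ubyte[] serialize(in CryptoHeader h)
{
    ubyte[] buf;
    buf ~= h.hdrVersion;
    buf ~= h.encAlg;
    buf ~= h.hashAlg;
    buf ~= nativeToLittleEndian(h.kdfIters)[];
    buf ~= h.authLen;
    buf ~= h.saltLen;
    buf ~= h.ivLen;
    buf ~= nativeToLittleEndian(h.encLen)[];
    buf ~= nativeToLittleEndian(h.adLen)[];
    return buf;
}

void main()
{
    CryptoHeader h = { hdrVersion: 1, encAlg: 1, hashAlg: 2,
                       kdfIters: 1_000_000, authLen: 48,
                       saltLen: 16, ivLen: 16 };
    assert(serialize(h).length == 26); // the promised 26-byte header
}
```

A real implementation would also pin the endianness in the spec and write a matching deserializer, since the blobs are meant for long-term storage.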
SecureD 1.0.0 Released!
Hello! I am pleased to announce that after a year of development and stabilization SecureD has been released in stable form. The most recent release consists of an upgrade to OpenSSL 1.1 in order to be compliant with more recent and supported versions of OpenSSL. If you need to use OpenSSL 1.0.x add version 'OpenSSL10' to your command line. As always, PR's are welcome. Thank you! -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Aurora DirectX Bindings 12.1
I am happy to announce that after a prolonged hiatus the Aurora DirectX bindings have been updated to support DirectX 12.1 and DirectX 11.4. The project has been refactored to more closely align with the DirectX SDK headers and the scope is significantly increased to include the D3D Video, D2D Effects, SVG, and Debugging libraries. And finally, the package has now been added to DUB. The following components of DirectX are supported: DirectX 12.1 - Shaders - Shader Tracing - Video - SDK Layers (Debugging) DirectX 11.4 - Shaders - Shader Tracing - Video - SDK Layers (Debugging) Direct2D 1.3 - Effects 1.2 - SVG DirectWrite 1.3 DXGI 1.6 WIC The work is a combination of mechanical and hand conversion and there may be some inconsistent or incorrect types in the interface. If you encounter any inconsistencies I would love your pull requests. Please note that most Macros are NOT ported yet. PR's welcome! These bindings are based on the Windows SDK version 10.0.16299.0 (Windows Build 1709 - Redstone 3) GitHub: https://github.com/auroragraphics/directx DUB: http://code.dlang.org/packages/aurora-directx -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: DConf hotel poor QoS
I booked online. I need a different room than the Conference Rate. But while I was there I did notice that the online rate for the conference room was the same as quoted on the conference site (89EUR). -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Release D 2.079.0
On 3/5/18 15:40, Atila Neves wrote: On Monday, 5 March 2018 at 17:47:13 UTC, Seb wrote: On Monday, 5 March 2018 at 15:16:14 UTC, Atila Neves wrote: On Saturday, 3 March 2018 at 01:50:25 UTC, Martin Nowak wrote: Glad to announce D 2.079.0. This release comes with experimental `@nogc` exception throwing (-dip1008), a lazily initialized GC, better support for minimal runtimes, and an experimental Windows toolchain based on the lld linker and MinGW import libraries. See the changelog for more details. Thanks to everyone involved in this https://dlang.org/changelog/2.079.0.html#contributors. http://dlang.org/download.html http://dlang.org/changelog/2.079.0.html - -Martin Is it just me or did this release just break the latest non-beta vibe.d? Is the Jenkins build testing the dub packages on master instead of the latest tag? Atila https://github.com/vibe-d/vibe.d/issues/2058 It's great that there's an issue for vibe. This doesn't change the fact that right now, somebody trying D for the 1st time with the latest official compiler will get an error if they try out the most popular dub package that I know of if they follow the instructions on code.dlang.org. It also doesn't change that I can't upgrade dmd on our CI at work because it can't compile vibe unless I change dozens of dub.sdl files to use a beta version. This breaks semver! I found out about this after removing a dependency on stdx.data.json since dmd >= 2.078.0 broke it (by breaking taggedalgebraic. Yes, I filed a bug.). I can upgrade from 2.077.1 to 2.078.3, but not 2.079.0. I'd have a snowball's chance in hell convincing anyone at a "regular" company of adopting D if anyone there even imagined any of the above could happen. We have to do better than this. Atila May I make a recommendation? Only upgrade to the 2.0xx.2[.3] releases. You'll have to wait a month or so for the latest features, but by then the important packages will have been upgraded and the regressions (mostly) worked out.
It's kind of like the old saying about Microsoft software. "Never use the first version of anything". If we treat the .0 releases as "v1" then it fits. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Which Multithreading Solution?
On 3/4/18 11:31, Drew Phil wrote: Hey there! I'm trying to figure out which of D's many multithreading options to use for my application. I'm making a game that runs at 60 frames per second that can also run user-made Lua scripts. I would like to have the Lua scripts run in a separate thread so they can't delay frames if there's a lot of work to do in a burst, but I also need it to be able to update and read from the thread where everything else is. I don't think message passing is going to cut it because the Lua thread is going to need to be able to request to set a value in D on one line, and then request that value back on the next line, and that value needs to be updated at that point, not at the end. I've been marking things as __gshared to get them working, but I think that just makes a copy on all threads, which seems pretty excessive when mostly the Lua will just be reading values, and occasionally calling a function in D to change some values. I think I should try to do some kind of lock thing, but I'm not really sure how that works. If someone could advise me on what tack I should take, that would be very helpful. Thanks! core.thread is probably your best bet in the long-term. std.concurrency could actually be used though: just call receive() immediately after send(). However, the performance of this may not be what you want. My recommendation would be to use std.concurrency to get the logic correct first, then worry about perf. And using std.concurrency will get some of the basic concepts (ex: immutable/shared) right that will port over to regular threads.
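The send()-then-receive() pattern suggested above might look like this minimal std.concurrency sketch, with the game state owned by one thread and the script thread blocking on each read (the message types here are my own illustration):

```d
import std.concurrency : spawn, send, receive, receiveOnly, Tid, thisTid;

// Messages the script thread can send to the game-state thread.
struct Get { string key; }
struct Set { string key; int value; }

// Game-state thread: owns the data, services requests one at a time,
// so no locks are needed.
void gameThread()
{
    int[string] state;
    bool running = true;
    while (running)
    {
        receive(
            (Set m) { state[m.key] = m.value; },
            (Get m, Tid replyTo) { send(replyTo, state.get(m.key, 0)); },
            (bool _) { running = false; } // shutdown signal
        );
    }
}

void main()
{
    auto game = spawn(&gameThread);

    // Lua-side pattern: set a value, then read it straight back.
    send(game, Set("hp", 42));
    send(game, Get("hp"), thisTid);
    auto hp = receiveOnly!int(); // blocks until the game thread replies
    assert(hp == 42);            // the write is visible immediately

    send(game, false); // stop the game thread
}
```

Because the mailbox preserves per-sender ordering, the Get always sees the preceding Set, which addresses the "set on one line, read on the next" requirement without __gshared.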
Re: D as a betterC a game changer ?
On 12/27/17 00:10, Pawn wrote: On Wednesday, 27 December 2017 at 09:39:22 UTC, codephantom wrote: IMHO..What will help the cause, in terms of keeping D as a 'modern' programming language, is the willingness of its designers and its community to make and embrace 'breaking changes' ... for example, making @safe the default, instead of @system. It's been expressed that there are now too many codebases such that introducing "breaking changes" would upset many people and companies. D is a mature language, not a young one. This is not true. I was at DConf one year (can't remember which) and I watched the representative of one of D's larger corporate users do everything but actually get on his knees and beg Walter to make a breaking change. IIRC they demonstrated their workaround for the missing change a couple of DConfs later. The reason that D isn't making breaking changes is that the language has enough broken stuff as it is. It does not make much sense to fork a code-base with significant known issues, break more things without fixing the existing things, and then release as a new version. It would create even more bugs and perpetuate the 'D is broken' meme. Once D2 has been thoroughly vetted and is free of known bugs (sometimes called Zero Bug Bounce: there may be unknown bugs yet to be discovered, but all known bugs are fixed), then breaking changes become worth discussing. Additionally, consider that if we have a stable base in D2 it will be much easier to merge bug-fixes into D3 while D3 is being worked on. Let's fix the crap we have now. It'll take a while, it's not sexy, and it certainly won't make headlines on HN or Reddit. But it will have the effect of combating the biggest negative that D has to adoption. The perception of instability. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Maybe D is right about GC after all !
On 12/20/17 10:28, Ali Çehreli wrote: On 12/20/2017 01:14 AM, Paulo Pinto wrote: from developers that learned it before C++98 and couldn't care less what is being discussed on Reddit and HN. I don't blame them one bit because keeping up with C++ and learning C++ Core Guidelines is a tremendous task: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md I keep starting writing replies here about C++ Core Guidelines but I delete them after counting to ten. Not this time... :) I think it's a psychological phenomenon worthy of scientific interest how a craft with so many guidelines can still be accepted. I am baffled how otherwise wonderful and smart people can direct others to that document with a straight face, let alone market it as one of the greatest gifts to C++ programmers (cf. CppCon 2015 keynotes by Herb Sutter and Bjarne Stroustrup.) Ali I had Chrome estimate how many pages it would be if printed out. In "Letter" size it's 181 double-sided pages. It's not "Guidelines", it is a book on "Best Practices". -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: GSoC 2018 - Your project ideas
On 12/5/17 10:20, Seb wrote: Hi all, Google Summer of Code (GSoC) 2018 is about to start soon [1] (the application period for organizations is in January 2018). Hence, I would be very happy about any project ideas you have or projects which are important to you. And, of course, if you would be willing to mentor a student, please don't forget to tell me. You can always reach me via mail (seb [at] wilzba [dot] ch) or on Slack (dlang.slack.com). There's also a special #gsoc channel. I have also started to work over the ideas from last year [2], but this page is currently WIP. @Students: if you have any questions or maybe have an idea for a project yourself, please feel free to contact me. I'm more than happy to help! I am looking forward to hearing (1) what you think can be done in three months by a student and (2) will have a huge impact on the D ecosystem. Cheers, Seb [1] https://developers.google.com/open-source/gsoc/timeline [2] https://wiki.dlang.org/GSOC_2018_Ideas I am absolutely up for mentoring this year and there are some fantastic projects on this list. The ones I'd be willing to mentor are: std.database - I've done a significant amount of work on this (not on github yet). And I have almost two decades of experience with various SQL interface libraries. I've seen the good, the bad, and the ugly, and would be able to work very closely with the student. :) std.eventloop - This will be needed if I am ever going to get Async/Await off the ground. std.decimal - I need this for some personal projects. Garbage Collector - It's not on the list but somebody mentioned it. There are actually two PRs outstanding for a precise GC from the 2016 GSoC I mentored; here: https://github.com/dlang/druntime/pull/1603 and here: https://github.com/dlang/druntime/pull/1977. But there is still a ton of room for improvement. There are more areas that precision could be expanded to. The 2016 student started playing around with a type-based pooling collector. 
There are a number of ideas we could explore. Note that I'm not a big fan of the fork()-based GC idea since it's limited to *nix-based systems. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Thoughts about D
On 12/3/17 21:28, Walter Bright wrote: On 12/3/2017 8:59 PM, Adam Wilson wrote: I have to agree with this. I make my living on server side software, and we aren't allowed (by legal) to connect to the server to run debuggers. The *only* thing I have is logging. If the program crashes with no option to trap an exception or otherwise log the crash this could cost me weeks-to-months of debugging time. As I said, the halt behavior will be an option. Nobody is taking away anything. Awesome. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Thoughts about D
On 12/3/17 00:09, Dmitry Olshansky wrote: On Saturday, 2 December 2017 at 23:44:39 UTC, Walter Bright wrote: On 12/2/2017 4:38 AM, Iain Buclaw wrote: But then you need to bloat your program with debug info in order to understand what, why, and how things went wrong. Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build. Ehm, if it's a simple reproducible tool. Good luck debugging servers like that. I have to agree with this. I make my living on server side software, and we aren't allowed (by legal) to connect to the server to run debuggers. The *only* thing I have is logging. If the program crashes with no option to trap an exception or otherwise log the crash this could cost me weeks-to-months of debugging time. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Thoughts about D
On 11/26/17 16:14, IM wrote: Hi, I'm a full-time C++ software engineer in Silicon Valley. I've been learning D and using it in a couple of personal side projects for a few months now. First of all, I must start by saying that I like D, and wish to use it everyday. I'm even considering donating to the D foundation. However, some of D's features and design decisions frustrate me a lot, and sometimes urge me to look for an alternative. I'm here not to criticize, but to channel my frustrations to whom it may concern. I want D to become better and more widely used. I'm sure many others might share with me some of the following points: - D is unnecessarily a huge language. I remember in DConf 2014, Scott Meyers gave a talk about the last thing D needs, which is a guy like him writing a lot of books covering the many subtleties of the language. However, it seems that the D community went ahead and created exactly this language! - D is very verbose. It requires a lot of typing. Look at how long 'immutable' is. Very often that I find myself tagging my methods with something like 'final override nothrow @safe @nogc ...' etc. - It's quite clear that D was influenced a lot by Java at some point, which led to borrowing (copying?) a lot of Java features that may not appeal to everyone. - The amount of trickeries required to avoid the GC and do manual memory management are not pleasant and counter productive. I feel they defeat any productivity gains the language was supposed to offer. - The thread local storage, shared, and __gshared business is annoying and doesn't seem to be well documented, even though it is unnatural to think about (at least coming from other languages). - D claims to be a language for productivity, but it slows down anyone thinking about efficiency, performance, and careful design decisions. 
(choosing structs vs classes, structs don't support hierarchy, use alias this, structs don't allow default constructors {inconsistent - very annoying}, avoiding the GC, look up that type to see if it's a struct or a class to decide how you may use it ... etc. etc.). I could add more, but I'm tired of typing. I hope that one day I will overcome my frustrations as well as D becomes a better language that enables me to do what I want easily without standing in my way. Well. D has its own idioms and patterns. So we fully expect some of the idioms that are easy in C++ to be not easy in D, and indeed that's kind of the point. If you look at those idioms that are hard in D, it's probably because said idiom allows you to do some fantastically unsafe (and insecure) thing in C++. Yes, I am sure that it gives some performance boost, but D is not trying to be the pinnacle of language performance. D is trying to be a memory-safe language, quite intentionally at the expense of speed (see this DConf 2017 talk: https://www.youtube.com/watch?v=iDFhvCkCLb4&index=1&list=PL3jwVPmk_PRxo23yyoc0Ip_cP3-rCm7eB). Bounds checking by default, GC, etc. are all memory safety features that come explicitly at the cost of performance. We are not trying to be C++ and we are not trying to replace C++. It sounds like C++ works better for you. We are OK with that. We always recommend using what works best for you. That is after all why WE are here. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
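As a side note on the 'final override nothrow @safe @nogc' verbosity complaint above: D already lets attributes apply to whole blocks or scopes rather than being repeated on every signature. A small illustration (not from the post):

```d
// Attributes applied to a block cover every declaration inside it:
@safe nothrow @nogc
{
    int twice(int x)  { return x * 2; }
    int square(int x) { return x * x; }
}

// The colon form applies to everything that follows in the scope:
// @safe nothrow @nogc:
//     ...all following declarations...
```

This doesn't eliminate per-method qualifiers like final or override, but it does remove most of the repetition for attribute-heavy code.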
Re: Precise GC state
On 11/23/17 13:40, Ola Fosheim Grøstad wrote: On Thursday, 23 November 2017 at 20:13:31 UTC, Adam Wilson wrote: I would focus on a generational GC first for two reasons. The But generational GC only makes sense if many of your GC objects have a short life span. I don't think this fits well with sensible use of a language like D where you typically would try to put such allocations on the stack and/or use RAII or even an arena. Sensible to whom? The great thing about D is that it works just as well for low-level people who need to wring the most out of the metal, AND the high-level productivity minded apps people, who don't. RAII+stack allocations make sense when I care about WHEN an object is released and wish to provide some semblance of control over deallocation (although as Andrei has pointed out numerous times, you have no idea how many objects are hiding under that RAII pointer). But there are plenty of use cases where I really don't care when memory is released, or even how long it takes to release. For example, all of the D apps that I've written and use in production, GC-allocate everything. I don't have a single struct in the code. But I don't care, because the program is so short lived and the working set so small that there will never be a GC collection. And it's not like this is an uncommon or unwise pattern in D, DMD itself follows this exact pattern. Obviously my pattern isn't "wrong" or else DMD itself is "wrong". It's just not your definition of "correct". Another use case where RAII makes no sense is GUI apps. The object graph required to maintain the state of all those widgets can be an absolute performance nightmare on destruction. Closing a window can result in the destruction of tens-of-thousands of objects (I've worked on codebases like that), all from the GUI thread, causing a hang, and a bad user experience. Or you could use a concurrent GC and pass off the collection to a different thread. 
(See .NET) The second is that you still typically have to stop the execution of the thread on the Gen0 collection (the objects most likely to be hot). So with a non-generational concurrent collector you have to stop the thread for the entirety of the scan, because you have no way to know which objects are hot and which are cold. How are you going to prove that references are all kept within the generation in D? There is some very costly bookkeeping involved that simply doesn't work well with D semantics. Again, which semantics? If you compile with -betterc, the bookkeeping and its overhead simply don't exist. Your arguments are based on a presupposition that D should only be used a certain way; a way that, I am sure, mirrors your own usage patterns. D supports a multitude of different usage patterns, some of which look nothing like what you are holding up as "correct". And this is what makes D special. To remove or dismiss as invalid those usage patterns would be detrimental to those of us who use them and be catastrophic to D in general. As a community, can we please stop it with the subjective judgements of what is and is not "sensible" in D, and start supporting the people using it, however they are using it, even if we are sure that they are "wrong"? -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Precise GC state
On 11/23/17 02:47, Nordlöw wrote: On Wednesday, 22 November 2017 at 13:44:22 UTC, Nicholas Wilson wrote: That's a linker(?) limitation for OMF (or whatever is the win32 object file format). Was just fixed! What improvements to D's concurrency model is made possible with this precise GC? I recall Martin Nowak saying at DConf 2016 that a precise GC will enable data with isolated or immutable indirections to be safely moved between threads Rainer is awesome! That is certainly one aspect that it will enable. I would focus on a generational GC first for two reasons. The first is that you can delay the scans of the later gens if the Gen0 (nursery) has enough space, so for a lot of programs it would result in a significant cut-down in collection times. The second is that you still typically have to stop the execution of the thread on the Gen0 collection (the objects most likely to be hot). So with a non-generational concurrent collector you have to stop the thread for the entirety of the scan, because you have no way to know which objects are hot and which are cold. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Precise GC state
On 11/22/17 05:44, Nicholas Wilson wrote: On Wednesday, 22 November 2017 at 13:23:54 UTC, Nordlöw wrote: On Wednesday, 22 November 2017 at 10:53:45 UTC, Temtaime wrote: Hi all ! https://github.com/dlang/druntime/pull/1603 Only the Win32 build fails as Error: more than 32767 symbols in object file What's wrong? That's a linker(?) limitation for OMF (or whatever is the win32 object file format). We really should be using the MSVC linker on Windows for both x86 and x64. I propose we change the default behavior of -m32 to point to MSVC, keep the -m32mscoff, but mark it as deprecated, and add a -m32omf flag to retain the current behavior. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
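For context, a sketch of how the object-format choice surfaces on the dmd command line under the current behavior the proposal would change (flag names as documented for dmd on Windows at the time):

```
dmd -m32 app.d        # 32-bit OMF objects, linked with OPTLINK (current default)
dmd -m32mscoff app.d  # 32-bit COFF objects, linked with the MSVC linker
dmd -m64 app.d        # 64-bit COFF objects, linked with the MSVC linker
```

Under the proposal, -m32 would behave like -m32mscoff does today, and a new -m32omf flag would preserve the old OMF/OPTLINK path for code that still needs it.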
Re: Precise GC state
On 11/22/17 02:53, Temtaime wrote: Hi all ! https://github.com/dlang/druntime/pull/1603 Can someone investigate and bring it to us ? 4 years passed from gsoc 2013 and there's still no gc. Many apps suffer from false pointers, and bringing such a GC will help those affected by it. It seems all the tests pass except Win32, because of OPTLINK failures. Maybe there's some chance to accelerate this PR ? Thanks all +1 -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Looking for a job in USA
On 11/20/17 05:11, Satoshi wrote: On Monday, 20 November 2017 at 09:15:15 UTC, Adam Wilson wrote: To get an H1B you'll want to get a job with one of the majors. Microsoft, Google, Apple, Amazon. There are smaller companies, but the majors have a dedicated team of lawyers who can guide your H1B through the process. It is the consulting body-shops that are bearing the brunt of the crackdown. Much to the rejoicing of the H1Bs on my team. Second, getting a green card for an H1B is easily a 10 year wait. You'll be in for the long-haul. :) None of these companies are looking for employees without a working visa in the US. Mostly, they have branch offices in the EU, so if I apply for a job, they will hire me in one of the EU offices. But thanks for the advice, I'll apply for some job opportunities at these companies and try my best. The reason for that is actually a quirk of the US visa regime. Look up the L1. It's less restrictive. But it requires you work for one year outside the country. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Looking for a job in USA
On 11/17/17 17:31, Indigo wrote: What is your reasoning for coming to the US? You might want to rethink this as America is collapsing. This is news to me, and I live in the US. Also, if the US is collapsing, that is very bad news for D, seeing as how I live about 45 minutes from Walter, and Andrei lives in the US as well. I think people often mistake the breathless bleating in the news for reality. In the end, I am confident that reality will prevail. It has such an ... uncompromising way about it. :) With that resolved, let us return to the topic at hand. To get an H1B you'll want to get a job with one of the majors. Microsoft, Google, Apple, Amazon. There are smaller companies, but the majors have a dedicated team of lawyers who can guide your H1B through the process. It is the consulting body-shops that are bearing the brunt of the crackdown. Much to the rejoicing of the H1Bs on my team. Second, getting a green card for an H1B is easily a 10 year wait. You'll be in for the long-haul. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Project Elvis
On 11/10/17 00:24, codephantom wrote: On Friday, 10 November 2017 at 05:23:53 UTC, Adam Wilson wrote: MSFT spends a LOT of time studying these things. It would be wise to learn for free from the money they spent. Is that the same company that made Windows 10? And what? -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Project Elvis
On 11/6/17 12:20, Michael wrote: I can't quite see why this proposal is such a big deal to people - as has been restated, it's just a quick change in the parser for a slight contraction in the code, and nothing language-breaking, it's not a big change to the language at all. On Monday, 6 November 2017 at 19:13:59 UTC, Adam Wilson wrote: I am all for the Elvis operator, however I have two reservations about it. The first is that I don't see much use for it without a null-conditional. The second is that the current proposed syntax ?: is MUCH too easily confused with ?. This is not easy to read: obj1?.obj2?.prop3?:constant. When designing syntax sugar, ergonomics are very important, otherwise people won't use it. Microsoft spent a LOT of time and treasure to learn these lessons for us. I see no reason to ignore them just because "we don't like Microsoft". My proposal would be to copy what MSFT did, except that I would introduce both operators at the same time. Syntax as follows: obj1?.obj2?.prop3 ?? constant In practice I don't see much use of the idiom outside of null's. The ONLY other thing that would work there is a boolean field and you might as well just return the boolean itself because the return values have to match types. I feel this is kind of embellished somewhat. When you write This is not easy to read: obj1?.obj2?.prop3?:constant. you're not separating it out as you do when you write your preferred version: Syntax as follows: obj1?.obj2?.prop3 ?? constant How is obj1?.obj2?.prop3 ?: constant not as easy to read as obj1?.obj2?.prop3 ?? constant You're right, I didn't, that was intentional, because sometimes people write things like that. And it took a while for anyone to say anything about it. That is my point. But that's the thing. The ?? is significantly more obvious in the condensed version. This is something that a UX designer would recognize instantly, but human factors are very definitely not our strong skill as engineers. 
FWIW, my human factors experience comes from the deep study of airline crashes that I do as a pilot. to me they are the same in terms of readability, only with ?? you have greater chances of mistyping and adding a second ? in there somewhere, whereas the ?: is just a contraction of the current syntax, I really don't think it's that difficult, so I'm not sure what people's hang-ups are, but I don't think the argument that ?? is easier to read than ?: holds any weight here, because one *is* a change to the language, and the other is a change to the parser and a contraction of a standard convention. Two things. ?: is ALSO a change to the language (lexer+parser). As to the whole "it's no more likely to typo the colon than the question" argument, sure, but that depends on the keyboard layout more than anything else, what works for you may not work elsewhere. And in either case, it's an easy compiler error. So you don't win anything with the ?:, but you win readability with the ??. MSFT spends a LOT of time studying these things. It would be wise to learn for free from the money they spent. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Project Elvis
On 10/28/17 04:38, Andrei Alexandrescu wrote: Walter and I decided to kick-off project Elvis for adding the homonym operator to D. Razvan Nitu has already done a good part of the work: https://github.com/dlang/dmd/pull/7242 https://github.com/dlang/dlang.org/pull/1917 https://github.com/dlang/dlang.org/pull/1918 What's needed is a precise DIP that motivates the feature properly and provides a good proposal for it. I'm no fan of bureaucracy but we really need to be pedantic about introducing language features. Walter argued thusly in a PR, and I agree: "I'm concerned that the elvis operator is not well understood, and we shouldn't be designing it in the comments section here. A DIP needs to be written. Things like operator precedence, side effects, type resolution, comparison with the operator in other languages, grammar changes, lvalues, how it would appear in the generated .di file if it isn't its own operator, etc., should be addressed." A lowering looks like the straightforward approach, of the kind: expr1 ?: expr2 ==> (x => x ? x : expr2)(expr1) Who wants to join Razvan in Project Elvis? Thanks, Andrei C# has extensive experience with this operator and I think it would be wise to study the history of what they did and why they did it. NOTE: I understand that other languages have it, and there are variations on the theme, but C# has many similarities to D and extensive "in practice" idioms. C# got the Elvis operator before it got the Null Conditional operator. In C# it only covers the case: a == null. The reason is that in practice most devs only use it like so: a != null ? a : b. The funny thing is that it was almost never used. Some years later the C# team introduced the Null-Conditional operator: ?. which allows you to write: obj1?.obj2?.prop3 ?? constant. NOW people started using Null Coalescing all over the place. I am all for the Elvis operator, however I have two reservations about it. The first is that I don't see much use for it without a null-conditional. 
The second is that the current proposed syntax ?: is MUCH too easily confused with ?. This is not easy to read: obj1?.obj2?.prop3?:constant. When designing syntax sugar, ergonomics are very important, otherwise people won't use it. Microsoft spent a LOT of time and treasure to learn these lessons for us. I see no reason to ignore them just because "we don't like Microsoft". My proposal would be to copy what MSFT did, except that I would introduce both operators at the same time. Syntax as follows: obj1?.obj2?.prop3 ?? constant In practice I don't see much use of the idiom outside of null's. The ONLY other thing that would work there is a boolean field and you might as well just return the boolean itself because the return values have to match types. For example: return obj1.bool ?? obj2 //Error: incorrect return type return obj1 ?? obj2 // Pass: if same type I cannot actually imagine a scenario outside of objects (including strings) where you could actually use it since the left-hand side MUST evaluate to a boolean. Also, I am going to start repeating this mantra: Just because something CAN be done in the library, does not mean it SHOULD be done in the library. Ergonomics matters. Yes, I understand that D is a powerful language, but Syntax Sugar has its place in taking common idioms and standardizing them in the language itself (English is loaded with stuff like that) so that everyone can "speak the same language". -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
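For illustration, the lowering Andrei quotes, (x => x ? x : expr2)(expr1), can already be approximated as a library helper in today's D; orElse here is a hypothetical name, not an actual Phobos function:

```d
import std.stdio;

// Library version of the proposed ?: lowering. The lazy parameter
// preserves the operator's short-circuiting: the fallback is only
// evaluated when the left-hand side tests false.
T orElse(T)(T value, lazy T fallback)
{
    return value ? value : fallback;
}

void main()
{
    string s = null;
    writeln(s.orElse("default"));     // prints: default
    writeln("set".orElse("default")); // prints: set
}
```

This is also a concrete instance of the "can be done in the library" mantra above; the DIP question is whether the idiom is common enough to deserve dedicated syntax anyway.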
Re: Note from a donor
On 10/28/17 12:46, Jerry wrote: On Saturday, 28 October 2017 at 15:36:38 UTC, codephantom wrote: But if you really are missing my point..then let me state it more clearly... (1) I don't like waiting 4 hours to download gigabytes of crap I don't actually want, but somehow need (if I want to compile 64bit D that is). Start the download when you go to sleep, when you wake up it will be finished. I did this as a kid when I had internet that was probably even slower than yours right now. It'll be like those 4 hours never even happened. (2) I like to have choice. A fast internet might help with (1). (2) seems out of reach (and that's why I don't and won't be using D on Windows ;-) It's probably why you shouldn't be on Windows to begin with.. (being a recreational programmer, I have that luxury..I understand that others do not, but that's no reason for 'some' to dismiss my concerns as irrelevant. They're relevant to me, and that's all that matters ;-) Talk about being narcissistic ;) Hey Jerry, I appreciate what you're trying to accomplish .. but uh ... don't feed the trolls. ;) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D for microservices
On 10/27/17 00:18, Jacob Carlborg wrote: On 2017-10-26 12:25, Adam Wilson wrote: My apologies, something rather the other direction. Instead of forcing compat with vibe.d, going to vibe.d and say: "here is our standard event-loop, it has everything you need, you'll need to use it for all the other goodness to work". I know others can make good arguments about why the vibe event-loop is insufficient, and I'll let them make them. My concern is not about the event loop, it's about asynchronous IO. vibe.d needs to use asynchronous IO and I assume that's regardless what kind of event loop implementation is used. Do all the existing database drivers that you want to use support asynchronous IO? PgSQL/MySQL/MSSQL all do, I think that covers about 90% of usage. IIRC Oracle does as well. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D for microservices
On 10/26/17 17:51, Jonathan M Davis wrote: On Thursday, October 26, 2017 03:25:24 Adam Wilson via Digitalmars-d wrote: On 10/25/17 23:57, Jacob Carlborg wrote: I'm more concerned that I don't think we'll manage to implement a complete API and 100% bug free at the first try. Depends on how one defines first try. Phobos has a pretty solid process for making sure these things are reasonably bug free. And near as I can tell, Phobos is pretty good about accepting bug fixes. The bigger problem is API bugs. The review process is rigorous enough to weed out a lot of stuff, but the end result is typically an API that we _think_ looks good but usually hasn't seen much real world use (much as its design may be based on real world experience). And if it turns out that the API has problems, it can be very difficult to fix that in a way that doesn't break code. Sometimes, we're able to reasonably do something about it and sometimes not. In theory, std.experimental would mitigate that, and that's where anything that was accepted would go first, but the process for getting new modules into Phobos is very much geared towards getting an API right up front rather than getting something that's close and iterating towards where it ultimately should be. What would probably be better in general would be to be writing stuff that ends up on code.dlang.org first and gets some real world use there and then look at getting it into Phobos later rather than aiming directly for Phobos. Not only would that likely help lead towards better code being in Phobos, but we'd still get useful stuff even if it didn't make it into Phobos. - Jonathan M Davis I understand the concern, however, I can see potential mitigations. For one, steal an API concept from somewhere else. I've had reasonable success so far stealing ADO.NET and then refactoring it into something more idiomatic. 
Using that as a starting point gave me a pretty good understanding of what was needed and it gave me a prototype API that has been battle-tested. Has anything from code.dlang.org been pulled into Phobos yet? I'm not aware of anything. code.dlang.org is where Phobos projects go to quietly die in obscurity. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
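To make the "steal ADO.NET, then refactor into something idiomatic" approach concrete, here is a purely illustrative interface sketch; every name is hypothetical and this is not the actual std.database work mentioned above:

```d
import std.variant : Variant;

// Connection/Command/ResultSet roughly mirror ADO.NET's
// DbConnection/DbCommand/DbDataReader split.
interface Connection
{
    void open();
    void close();
    Command createCommand(string sql);
}

interface Command
{
    void bind(string name, Variant value); // named parameter binding
    ResultSet executeQuery();              // SELECT-style statements
    long executeNonQuery();                // INSERT/UPDATE/DELETE row count
}

interface ResultSet
{
    // Input-range primitives, so results compose with foreach and
    // std.algorithm; this is the "refactor into idiomatic D" step.
    bool empty();
    Variant[string] front();
    void popFront();
}
```

Starting from a battle-tested shape like this and only then adapting it to D idioms (ranges, attributes, scope-based cleanup) is one way to reduce the API-bug risk Jonathan describes.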
Re: Note from a donor
On 10/26/17 08:51, Bo wrote: On Thursday, 26 October 2017 at 12:36:40 UTC, jmh530 wrote: However, if you need Visual Studio installed, then that takes like a half an hour. And a gig of space, just because D needs a small part of it. That is why people do not want to install VS. Why install a competing language studio, when you are installing D? The Xcode installer DMG is 5GB, before unpacking. And unlike VS17, I can't pick and choose. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D for microservices
On 10/25/17 23:57, Jacob Carlborg wrote: On 2017-10-26 00:53, Adam Wilson wrote: This of course makes the assumption that we clean-room our own protocol implementations which I am entirely against. Better to use what already exists. I'm entirely against anything that is not compatible with vibe.d ;) My apologies, something rather the other direction. Instead of forcing compat with vibe.d, going to vibe.d and say: "here is our standard event-loop, it has everything you need, you'll need to use it for all the other goodness to work". I know others can make good arguments about why the vibe event-loop is insufficient, and I'll let them make them. (Something about not supporting GUI loops, paging Mr. Cattermole). If that is really the case I don't see how being entirely vibe.d compatible and meeting the universal standard requirements of Phobos is possible. There would need to be a requirements gathering phase so that the community as a whole can bring their use-cases before we dive into code. I actually don't think the slow updates are a huge problem as these DB interface libraries are pretty slow to change themselves. For example, ADO.NET didn't change significantly from its 1.0 release until the arrival of Async/Await in .NET 4.5, which was 10 years later. The biggest addition prior to Async was TVP support and that was tiny comparatively and came in 2005. The libpq5 interface has barely changed in years. I can't speak to MySQL but IIRC it hasn't changed much either, at least not in a way that would affect the abstraction layer. I'm more concerned that I don't think we'll manage to implement a complete API and 100% bug free at the first try. Depends on how one defines first try. Phobos has a pretty solid process for making sure these things are reasonably bug free. And near as I can tell, Phobos is pretty good about accepting bug fixes. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Note from a donor
On 10/26/17 00:32, Jacob Carlborg wrote: On 2017-10-26 00:36, Adam Wilson wrote: Speaking from very long experience, 95%+ of Windows devs have VS+WinSDK installed as part of their default system buildout. The few that don't will have little trouble understanding why they need it and acquiring it. IIRC, there have been people on these forums that have been asking why they need to download additional software when they already have the compiler. Same on macOS. How many though? Also, we have to do it for macOS, why is Windows special? The macOS setup was just as hard. Download two large packages (XCode+Cmd tools), install, and done. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D for microservices
On 10/23/17 23:29, Jacob Carlborg wrote: On 2017-10-24 00:02, Adam Wilson wrote: I've been looking pretty extensively at these two items recently. If the database drivers are compatible with Vibe.d AND we wish to provide a common abstraction layer for them (presumably via Phobos), then in order for the abstraction layer to be aware of whether the driver is making a blocking or non-blocking call, we must include Vibe.D in the abstraction layer. Ergo, we must include at least the vibe-core package in Phobos, or more preferably, DRT. It can be an optional dependency. Looking at ddb [1] it's not that much code that will be different if vibe.d is used or not. It's basically only reading and writing to the socket that is different [2]. Dub provides predefined versions for different dependencies, i.e. Have_vibe_d_core. This of course makes the assumption that we clean-room our own protocol implementations, which I am entirely against. Better to use what already exists. I had heard noises about that a few months ago. Anything happening on that front? No, not as far as I know. Sönke seems really busy recently. That is what I was afraid of. What would the appetite be for working together to come up with a reasonably generic event loop for DRT that vibe and other systems could then leverage? It would be really nice to have database support directly in the standard library, but it's not critical for me. It takes a lot of work and massive overhead to get something into Phobos. It's also going to be really slow to get updates in. I'm not sure if it's worth it. [1] https://github.com/pszturmaj/ddb [2] https://github.com/pszturmaj/ddb/blob/master/source/ddb/postgres.d#L189-L246 I actually don't think the slow updates are a huge problem as these DB interface libraries are pretty slow to change themselves. For example, ADO.NET didn't change significantly from its 1.0 release until the arrival of Async/Await in .NET 4.5, which was 10 years later. 
The biggest addition prior to Async was TVP support and that was tiny comparatively and came in 2005. The libpq5 interface has barely changed in years. I can't speak to MySQL but IIRC it hasn't changed much either, at least not in a way that would affect the abstraction layer. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
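The coupling described in this thread (the abstraction layer must know whether a driver call blocks or yields to the event loop) can be sketched in miniature. This is a hypothetical illustration in Python, with asyncio standing in for vibe-core and the two driver classes standing in for real database drivers; none of these names come from an actual library:

```python
import asyncio

class BlockingDriver:
    """A driver whose query blocks the calling thread (libpq-style)."""
    is_async = False
    def query(self, sql):
        return f"sync result for {sql!r}"

class AsyncDriver:
    """A driver whose query is a coroutine scheduled on the event loop."""
    is_async = True
    async def query(self, sql):
        await asyncio.sleep(0)  # stand-in for non-blocking socket IO
        return f"async result for {sql!r}"

async def execute(driver, sql):
    """The abstraction layer. Note that it cannot be written without
    referencing the event-loop machinery, which is the point Adam makes:
    the event loop leaks into the common abstraction."""
    if driver.is_async:
        return await driver.query(sql)
    # A blocking driver would stall the loop, so push it to a worker thread.
    return await asyncio.to_thread(driver.query, sql)

print(asyncio.run(execute(AsyncDriver(), "SELECT 1")))
print(asyncio.run(execute(BlockingDriver(), "SELECT 1")))
```

The sketch shows why an optional dependency is awkward: the dispatch in `execute` has to exist either way, so the abstraction layer carries event-loop knowledge even when the blocking driver is used.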
Re: D for microservices
On 10/23/17 18:51, rikki cattermole wrote: On 23/10/2017 11:02 PM, Adam Wilson wrote: On 10/23/17 05:08, Jacob Carlborg wrote: * Database drivers for the common databases (PostgreSQL, MySQL, SQLite) compatible with vibe.d * Database driver abstraction on top of the above drivers, perhaps some lightweight ORM library I've been looking pretty extensively at these two items recently. If the database drivers are compatible with Vibe.d AND we wish to provide a common abstraction layer for them (presumably via Phobos), then in order for the abstraction layer to be aware of whether the driver is making a blocking or non-blocking call, we must include Vibe.D in the abstraction layer. Ergo, we must include at least the vibe-core package in Phobos, or more preferably, DRT. I had heard noises about that a few months ago. Anything happening on that front? An event loop is a key piece of a project (Async/Await) that I want to work on, and having it in DRuntime would make that project fantastically simpler. IMHO, the bulk of the time required is in getting an event loop into DRT, the rest is a LOT of relatively straightforward compiler lowering. IMHO, DRT is in significant need of an event loop system. This would allow us to simplify a large number of problems (Async/Await, GUIs, IO, etc). As near as I can tell, the problem isn't so much doing the work, but getting the required sign-offs for inclusion into DRT. Another problem that I've been made aware of is that vibe-core may not be ideal in certain situations. As this would be landed in DRT itself this would obviously need to be addressed. What would the appetite be for working together to come up with a reasonably generic event loop for DRT that vibe and other systems could then leverage? *whispers* heyyy, heard about SPEW[0]? [0] https://github.com/Devisualization/spew/blob/master/src/base/cf/spew/event_loop/defs.d You've mentioned it a few times. ;-) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Note from a donor
On 10/25/17 11:23, H. S. Teoh wrote: On Wed, Oct 25, 2017 at 08:17:21AM -0600, Jonathan M Davis via Digitalmars-d wrote: On Wednesday, October 25, 2017 13:22:46 Kagamin via Digitalmars-d wrote: On Tuesday, 24 October 2017 at 16:37:10 UTC, H. S. Teoh wrote: (Having said all that, though, D is probably a far better language for implementing crypto algorithms -- built-in bounds checking would have prevented some of the worst security holes that have come to light recently, like Heartbleed and Cloudbleed. Those were buffer overflows in parsers, not in cryptographic algorithms. The point still stands though that you have to be _very_ careful when implementing anything security related, and it's shockingly easy to do something that actually leaks information even if it's not outright buggy (e.g. the timing of the code indicates something about success or failure to an observer), and someone who isn't an expert in the area is bound to screw something up - and since this is a security issue, it matters that much more than it would with other code. [...] Yeah. There have been timing attacks against otherwise-secure crypto algorithms that allow extraction of the decryption key. And other side-channel attacks along the lines of CRIME or BREACH. Even CPU instruction timing attacks have been discovered that can leak which path a branch in a crypto algorithm took, which in turn can reveal information about the decryption key. And voltage variations to deduce which bit(s) are 1's and which are 0's. Many of these remain theoretical attacks, but the point is that these weaknesses can come from things you wouldn't even know existed in your code. Crypto code must be subject to a LOT of scrutiny before it can be trusted. And not just cursory scrutiny like we do with the PR queue on github; we're talking about possibly instruction-by-instruction scrutiny of the kind that can discover vulnerabilities to timing or voltage. 
I would not be comfortable entrusting any important data to D crypto algorithms if they have not been thoroughly reviewed. T I am one-hundred-ten percent in agreement with Mr. Teoh here. Even .NET Framework and Core forward to the highly vetted system crypto API's (SChannel on Windows and OpenSSL on Linux/macOS). If you need RSA crypto in D, pull in OpenSSL. Period. Everything else is a good way to run afoul of a security audit, and potentially expose yourself. Phobos could forward to these system provided API's like .NET does and provide an idiomatic D interface, but Phobos itself should absolutely and 110% stay out of the crypto implementation business. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
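The timing-leak concern raised above is concrete enough to demonstrate. The following sketch (Python for illustration only; the thread is about D and Phobos) contrasts a naive comparison that short-circuits on the first mismatched byte with deferral to a vetted constant-time primitive, which is exactly the "forward to the platform's crypto" approach argued for here:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Timing-unsafe: returns as soon as the first byte differs, so an
    # attacker measuring response time can recover a secret byte-by-byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Defer to the platform's vetted primitive instead of rolling our own.
    return hmac.compare_digest(a, b)

secret = b"expected-mac-value"
print(naive_equal(secret, b"expected-mac-value"))          # True, but leaks timing
print(constant_time_equal(secret, b"expected-mac-value"))  # True
```

Both functions return the same answers; only the second is safe against the instruction-level timing analysis described above, which is why this class of code belongs in vetted system libraries rather than Phobos.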
Re: Note from a donor
On 10/25/17 09:34, Mike Parker wrote: On Wednesday, 25 October 2017 at 15:00:04 UTC, bitwise wrote: VC++ command line tools seem to be available on their own: http://landinghub.visualstudio.com/visual-cpp-build-tools Still a big download and requires the Windows SDK to be downloaded and installed separately. Speaking from very long experience, 95%+ of Windows devs have VS+WinSDK installed as part of their default system buildout. The few that don't will have little trouble understanding why they need it and acquiring it. This is one of those breathless "the sky is falling" arguments we hear on these forums sometimes. Usually from linux devs who are inured to having the GCC tools on every machine and automatically assume that because Windows doesn't by default that it won't be there and that getting it will be some insurmountable burden. TBH, the attitudes around here towards Windows devs can be more than a little snobbish. In reality, it is quite easy to find a linux distro that doesn't have GCC by default, container distros for example. So the snobby attitude is really quite unfounded. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Note from a donor
On 10/24/17 07:14, Kagamin wrote: On Tuesday, 24 October 2017 at 13:20:10 UTC, Andrei Alexandrescu wrote: * RSA Digital Signature Validation in Phobos https://issues.dlang.org/show_bug.cgi?id=16510 the blocker for botan was OMF support. IMO, the correct solution here is to deprecate OMF and use the system linker for 32-bit on Windows, as that is already the default behavior on 64-bit Windows. So instead of -m32 and -m32mscoff, we would have -m32 and -m32omf. I think that this is a reasonable tradeoff. We could leave -m32mscoff in for a while, for backwards compat. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/23/17 22:40, Dmitry Olshansky wrote: On Tuesday, 24 October 2017 at 04:26:42 UTC, Adam Wilson wrote: On 10/23/17 17:27, flamencofantasy wrote: On Monday, 23 October 2017 at 22:22:55 UTC, Adam Wilson wrote: On 10/23/17 08:21, Kagamin wrote: [...] Actually I think it fits perfectly with D, not for reason of performance, but for reason of flexibility. D is a polyglot language, with by far the most number of methodologies supported in a single language that I've ever encountered. [...] There is a lot of misunderstanding about async/await. It has nothing to do with "conservation of thread resources" or trading "raw performance for an ability to handle a truly massive number of simultaneous tasks". Async/await is just 'syntactic sugar' where the compiler re-writes your code into a state machine around APM (Asynchronous programming model which was introduced in .NET 2.0 sometime around 2002 I believe). That's all there is to it, it makes your asynchronous code look and feel synchronous. The only parts of Async/Await that have anything to do with APM are the interop stubs. C#'s Async/Await is built around the Task Asynchronous Programming model (e.g. Task and Task<T>); the compiler lowers to those, not APM. A common misunderstanding is that Task/Task<T> is based on APM; it's not, Task uses fundamentally different code underneath. On Linux/macOS it actually uses libuv (at the end of the day all of these programming models are callback based). I'll throw in my 2 rubbles. I actually worked on a VM that has the async/await feature built-in (Dart language). It is plain syntax sugar for Future/Promise explicit asynchrony, where async functions automatically return Future[T] or Observable[T] (the latter is an async stream). An async function with awaits is then re-written as a single call-back with a basic state machine; each state corresponds to the line where you did await. 
Example:

    async double calculateTax() {
        double taxRate = await getTaxRates();
        double income = await getIncome();
        return taxRate * income;
    }

Becomes roughly this (a bit more mechanical though):

    Future!double calculateTax() {
        int state = 0;
        Promise!double __ret;
        double taxRate;
        double income;
        void cb() {
            if (state == 0) {
                state = 1;
                getTaxRates().andThen((double ret){ taxRate = ret; cb(); });
            } else if (state == 1) {
                state = 2;
                getIncome().andThen((double ret){ income = ret; cb(); });
            } else if (state == 2) {
                __ret.resolve(taxRate * income);
            }
        }
        cb();
        return __ret.future;
    }

It doesn't matter what mechanics you use to complete promises - be it an IO scheduler a-la libuv or something else. Async/await is agnostic to that. That was actually my point. Windows and Linux/macOS .NET Core do different things. It doesn't matter to the dev. You did a better job of illuminating it. :) Still there is a fair amount of machinery to hide the rewrite from the user and in particular print a stack trace as if it was normal sequential code. And that is the difficulty we face in D. :) Yes, C#'s async design does make code look and feel synchronous, and it was intentionally designed that way, but that is not *why* they did Async. That misunderstanding arises from an interview that Anders did in which he was asked why they held Async back for three years after announcing it at PDC08. In that same talk Anders specifically says that the purpose of Async is to free up threads to continue execution; his example is a Windows Desktop Message Pump and fetching pictures from the internet (long-wait IO), but the principle applies to any thread. They held back async for three years because they needed to refine the language syntax to something that people could learn and apply in a reasonable amount of time. IIRC there is a Channel9 video where Anders explains the evolution of Async/Await. In other words - explicit asynchrony where a thread immediately returns with Future[T] and moves on to the next task. 
It’s just hidden by Async/Await. Yup. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/23/17 16:47, Nathan S. wrote: On Monday, 23 October 2017 at 22:22:55 UTC, Adam Wilson wrote: Additionally, MSFT/C# fully recognizes that the benefits of Async/Await have never been and never were intended to be for performance. Async/Await trades raw performance for an ability to handle a truly massive number of simultaneous tasks. Could you clarify this? Do you mean it's not supposed to have better performance for small numbers of tasks, but there is supposed to be some high threshold of tasks/second at which either throughput or latency is better? It's pretty complicated. In general, for a small number of tasks, it will take longer to execute those tasks with Async than without (Async has an overhead, about 10ms IIRC). So each individual call to an awaitable function will take longer than a call to a blocking function. In fact, MSFT recommends that if you are reasonably certain that most of the time it will take less than about 10ms to just use the blocking methods, as they have less total overhead. The main benefit of Async is in throughput. It allows the physical CPU to handle many incoming requests. So while each individual request may take longer, the overall utilization of the CPU is much higher. Async has different benefits and drawbacks depending on where it's applied. For example, in UI apps, it allows the main program thread to keep responding to system events while waiting for long-wait IO (the thread is not suspended). Some background that may help understanding is that in blocking IO, what is really happening underneath your blocking call is that the runtime is creating a callback and then calling the OS's thread suspend method (e.g. SuspendThread on Windows); then when the callback is called, the thread is resumed and the data passed into it via the callback. This means that the calling thread cannot execute anything while it's waiting. This is why UI apps appear to freeze when using blocking IO calls. 
The reason this is done is because the application has no way to say "I'm going to put this task off to the side and keep executing". The thread does not know to start some other processing while waiting. Async allows the app to put that task to the side and do something else. At a low level, .NET does this by:

1. Serializing the stack frame of the currently executing method to the heap (yes, Async is a GC feature, just like lambdas) at an await.
2. Pulling the next completed task from the heap.
3. Rebuilding the stack frame for that method on any available thread.
4. Continuing execution of that stack.

Obviously this makes a lot of sense for UI apps, where blocking the main thread can be disastrous, but internet applications are inherently multi-threaded, so why do we care? The answer is thread context switching. A context switch is the most expensive common CPU operation by an order of magnitude, ranging from 2k-1m cycles. Whereas on modern CPUs a main RAM read is 100-150 cycles and a NUMA different-socket read is 300-500 cycles. In .NET the TaskScheduler creates a predetermined number of threads (one per core IIRC) and begins scheduling tasks on those threads. Remembering that each task is really just a block on the heap, in the worst case that will take 500 cycles. Whereas if we had to switch to a different thread, it could be up to 1m cycles. That is a noticeable difference. Context switches are painfully expensive, but in the traditional model it was all we had. Task-based systems allow us to circumvent the context switch. But they're cumbersome to use without compiler support. For example, the Task Parallel Library was added in .NET 4.0, which includes Task and Task<T> and ALL of the constructs that are used in Async/Await; however, it was not until .NET 4.5 and the arrival of the Async/Await keywords (and the compiler lowerings that they enabled) that people started using Tasks in any significant way. 
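The throughput argument is easy to see in miniature. Assuming Python's asyncio as a stand-in for the .NET TaskScheduler (purely illustrative; the mechanics differ), fifty requests that each spend 100 ms in long-wait IO finish in roughly 100 ms of wall time on a single thread, because a parked task costs a heap record rather than a blocked thread:

```python
import asyncio
import time

async def handle_request(i):
    # Stand-in for ~100 ms of long-wait IO (a DB or network call).
    # While this task is parked, the thread runs other tasks.
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.perf_counter()
    # 50 "requests" wait concurrently on one thread: no thread per
    # request, and no context switch while each one is parked.
    results = await asyncio.gather(*(handle_request(i) for i in range(50)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s")  # ~0.1s, not 5s

asyncio.run(main())
```

Each individual request still pays the scheduling overhead described above; the win is that fifty of them share one thread instead of blocking fifty.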
(Source for access times: http://ithare.com/wp-content/uploads/part101_infographics_v08.png) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/23/17 17:27, flamencofantasy wrote: On Monday, 23 October 2017 at 22:22:55 UTC, Adam Wilson wrote: On 10/23/17 08:21, Kagamin wrote: [...] Actually I think it fits perfectly with D, not for reason of performance, but for reason of flexibility. D is a polyglot language, with by far the most number of methodologies supported in a single language that I've ever encountered. [...] There is a lot of misunderstanding about async/await. It has nothing to do with "conservation of thread resources" or trading "raw performance for an ability to handle a truly massive number of simultaneous tasks". Async/await is just 'syntactic sugar' where the compiler re-writes your code into a state machine around APM (Asynchronous programming model which was introduced in .NET 2.0 sometime around 2002 I believe). That's all there is to it, it makes your asynchronous code look and feel synchronous. The only parts of Async/Await that have anything to do with APM are the interop stubs. C#'s Async/Await is built around the Task Asynchronous Programming model (e.g. Task and Task<T>); the compiler lowers to those, not APM. A common misunderstanding is that Task/Task<T> is based on APM; it's not, Task uses fundamentally different code underneath. On Linux/macOS it actually uses libuv (at the end of the day all of these programming models are callback based). Yes, C#'s async design does make code look and feel synchronous, and it was intentionally designed that way, but that is not *why* they did Async. That misunderstanding arises from an interview that Anders did in which he was asked why they held Async back for three years after announcing it at PDC08. In that same talk Anders specifically says that the purpose of Async is to free up threads to continue execution; his example is a Windows Desktop Message Pump and fetching pictures from the internet (long-wait IO), but the principle applies to any thread. 
They held back async for three years because they needed to refine the language syntax to something that people could learn and apply in a reasonable amount of time. IIRC there is a Channel9 video where Anders explains the evolution of Async Await. Source: I was at Build 2011 and sat in on the Anders Hejlsberg (C# language designer) and Stephen Toub (Async/Await implementer) talks where they discussed Async in detail. I also work for MSFT, I can email them directly if you want further clarification on anything. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/23/17 08:21, Kagamin wrote: On Friday, 20 October 2017 at 09:49:34 UTC, Adam Wilson wrote: Others are less obvious, for example, async/await is syntax sugar for a collection of Task-based idioms in C#. Now I think it doesn't fit D. async/await wasn't made for performance, but for conservation of thread resources; async calls are rather expensive, which doesn't fit in D if we prefer raw performance. Also I found another shortcoming: it doesn't interoperate well with cache: cache flip-flops between synchronous and asynchronous operation: when you hit cache it's synchronous, when you miss it it performs IO. Actually I think it fits perfectly with D, not for reason of performance, but for reason of flexibility. D is a polyglot language, with by far the most number of methodologies supported in a single language that I've ever encountered. Additionally, MSFT/C# fully recognizes that the benefits of Async/Await have never been and never were intended to be for performance. Async/Await trades raw performance for an ability to handle a truly massive number of simultaneous tasks. And it is easy to implement both blocking and non-blocking calls side-by-side (MSFT appends 'Async' to the non-blocking call name). Here is the thing. Many projects (particularly web-scale) are not all that sensitive to latency. Adding 10ms to the total call duration isn't going to affect the user experience much when you've got 500ms of IO calls to make. But blocking calls will lock up a thread for those 500ms. That can be disastrous when you have thousands of calls coming in every second to each machine. On the flip side, if you're a financial service corp with millions to throw at hardware and an extreme latency sensitivity, you'll go for the blocking calls, because they absolutely do cost less in overall milliseconds. And you'll make up for the thread blockages by throwing an obscene amount of hardware at the problem. 
Because hey, you're a multi-billion dollar corp, you'll make back the few million you spent on over-provisioning hardware in a day or two. The point is that not everyone wants, or needs, maximum raw performance per individual task. In the spirit of flexibility, D needs to provide the other choice, because it's not our job to tell our users how to run their business. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D for microservices
On 10/23/17 05:08, Jacob Carlborg wrote: * Database drivers for the common databases (PostgreSQL, MySQL, SQLite) compatible with vibe.d * Database driver abstraction on top of the above drivers, perhaps some lightweight ORM library I've been looking pretty extensively at these two items recently. If the database drivers are compatible with Vibe.d AND we wish to provide a common abstraction layer for them (presumably via Phobos), then in order for the abstraction layer to be aware of whether the driver is making a blocking or non-blocking call, we must include Vibe.D in the abstraction layer. Ergo, we must include at least the vibe-core package in Phobos, or more preferably, DRT. I had heard noises about that a few months ago. Anything happening on that front? An event loop is a key piece of a project (Async/Await) that I want to work on, and having it in DRuntime would make that project fantastically simpler. IMHO, the bulk of the time required is in getting an event loop into DRT, the rest is a LOT of relatively straightforward compiler lowering. IMHO, DRT is in significant need of an event loop system. This would allow us to simplify a large number of problems (Async/Await, GUIs, IO, etc). As near as I can tell, the problem isn't so much doing the work, but getting the required sign-offs for inclusion into DRT. Another problem that I've been made aware of is that vibe-core may not be ideal in certain situations. As this would be landed in DRT itself this would obviously need to be addressed. What would the appetite be for working together to come up with a reasonably generic event loop for DRT that vibe and other systems could then leverage? -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/21/17 11:52, bitwise wrote: On Wednesday, 18 October 2017 at 08:56:21 UTC, Satoshi wrote: async/await (vibe.d is nice but useless in comparison to C# or js async/await idiom) Reference counting when we cannot use GC... If I understand correctly, both of these depend on implementation of 'scope' which is being worked on right now. I think reference counting needs 'scope' to be made safe. RC also benefits from scope in that many of the increments/decrements of the ref count can be elided. The performance gain can be significant, and even more so when you use atomic reference counting (like C++ shared_ptr) for thread safety. Async/Await needs to allocate state for the function's local variables. When it's detected that the function's state/enumerator won't escape its current scope, it can be put on the stack, which is a pretty big optimization. I should also note that RC has been formally acknowledged as a future goal of D, but as far as I know, async/await has not. Walter has stated numerous times both here and at conferences that Async/Await is definitely a goal. However, it's not as high a priority as the @safe/@nogc work so it hasn't made it to any official vision statement. Also I just talked to him offline about it, and he would need some serious help with it. He doesn't know how to do the compiler rewrite, and there are a number of tricky situations one has to deal with. As much as I like Async/Await, I agree that the current plan has higher priority. I'll probably start poking around Async/Await when I can clear the decks a bit of paid work. But that could be a while. :( -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: My two cents
On 10/20/17 04:04, Jonathan M Davis wrote: On Friday, October 20, 2017 02:49:34 Adam Wilson via Digitalmars-d wrote: Here is the thing that bothers me about that stance. You are correct, but I don't think you've considered the logical conclusion of the direction your argument is headed. Pray tell, why must we stop adding syntactic sugar? Why does there need to be an (by your own admission) arbitrary limit? If we look at any moderately successful language (C++/C#/Java/Python/etc.) you will find that all of them have accumulated syntax sugar over the years at a fairly constant rate. For example, C# has added point releases to its release schedule for which the express purpose is adding syntax sugar. It simply doesn't make sense to add every stray feature that folks ask for. Maybe some additional syntactic sugar will be added to D in the future, but there are way too many feature requests for it to make sense to implement even a small fraction of them. Some things are worth adding, but many really aren't. We have to say no frequently, or we'll be drowned in stray features to implement. And only so much can be added to a language before it becomes completely unwieldy. For instance, how many people are actually experts on C++ and understand all of its ins and outs? _Very_ few people. It's simply too complicated a language. Other languages aren't necessarily as bad off in that regard, but if you add enough to any language, it will get there. Every time you add something to the language, you stand the chance of improving some aspect of the language, but it comes at the cost of additional complexity. Sometimes the addition is worth that complexity, and sometimes it isn't. Knowing when to add something and when not to is often a tough question, but it's still true that you can't add everything. And inevitably, some of the things you leave out will annoy some people by their absence, just as some of the things you add will annoy some folks by being there. 
For better or worse, Walter and Andrei's current stance is essentially that if something can be reasonably done already in the language as it is, they're not adding a feature to do it. D is already insanely powerful as it is, and too often folks are looking to add a feature to the language when it's trivial to do it with a library solution. That certainly doesn't mean that nothing new is going to be added, but we have far more important features to worry about than saving someone from having to type a few extra characters because they want to use a couple of ?'s instead of typing out the code themselves or using a function call to encapsulate the behavior - e.g. finishing sorting out stuff like scope, @safety, and shared are far more critical. If someone has a really strong argument for why something is worth adding, then they're free to create a DIP for it, and if they can convince Walter and Andrei, it can make it into the language. But at this point in D's development, syntactic sugar really isn't high on the list of things that they consider to be of value. That doesn't mean that they're always right, but on the whole, I agree with them. - Jonathan M Davis I never said "every stray feature" should be added. What I said is that "common idioms" should be added. Preferably without having to perform herculean feats of persuasion. And let's be honest, as a community we have a pretty good handle on what the common idioms are, seeing as how books have been written on the subject. For example, ?? and ?. are ridiculously common idioms that we all perform every day in our D code. And as Mr. Ruppe rightly pointed out, it'd probably take about an hour each to knock together a complete PR for these features. But we have spent years arguing over them because somebody at the top said "no more syntax sugar". Grammandos *adore* these types of proclamations because they give grammandos a line in the sand that they can defend at all costs. 
For example, in the US, we are taught that "ain't" is not an English word and should never be used. However, it has a long and storied history as an English word and has a specific use. But grammandos took their grammar teacher's word as iron-clad law and have enforced it mercilessly, resulting in the death of a legitimate word. All because some British royalty didn't like it when the peasants started using the word 200 years ago. So far I have seen three arguments proffered for the ban on syntax sugar. The first is "Walter/Andrei doesn't have the time." This is prima facie ridiculous as it presumes that Walter/Andrei must be the one to implement it. If this is the case then D dies in the same moment that Walter/Andrei does. Since that is obviously not the case, this argument can be safely invalidated by simply observing the environment in which D is created. Every time I talk to Walter in-person about these sorts of proclamations he routinely says something a
Re: My two cents
On 10/20/17 01:32, Jonathan M Davis wrote: On Friday, October 20, 2017 08:09:59 Satoshi via Digitalmars-d wrote: On Friday, 20 October 2017 at 04:26:24 UTC, Jonathan M Davis wrote: On Friday, October 20, 2017 02:20:31 Adam D. Ruppe via Digitalmars-d wrote: On Friday, 20 October 2017 at 00:26:19 UTC, bauss wrote: return foo ? foo : null; where return foo ?? null; would be so much easier. return getOr(foo, null); That's really easy to do generically with a function. I wouldn't object to the ?? syntax, but if it really is something you write all over the place, you could just write the function. return foo ? foo.bar ? foo.bar.baz ? foo.bar.baz.something : null : null : null; Which could just be: return foo?.bar?.baz?.something; In dom.d, since I use this kind of thing somewhat frequently, I wrote a function called `optionSelector` which returns a wrapper type that is never null on the outside, but propagates null through the members. So you can do foo.optionSelector("x").whatever.you.want.all.the.way.down and it handles null automatically. You can do that semi-generically too with a function if it is something you use really frequently. For better or worse, solutions like this are the main reason that a number of things folks ask for don't get added to the language. It's frequently the case that what someone wants to do can already be done using the language as-is; it just may not be as syntactically pleasing as what the person wants, and they may not know D well enough yet to have come up with the solution on their own. - Jonathan M Davis Yeah, but if it can be done by stuff like you said, that's not a reason to not implement syntactic sugar for it. array[0 .. 42] can be substituted by array.slice(0, 42) too, but it's not. It's more handy to write void foo(int? a, string? b); than void foo(Maybe!int a, Maybe!string b); same for return a ?? 
null; than return getOr(a, null); foo.optionSelector("x").whatever.you.want.all.the.way.down it's not clear if you are able or not able to hit the null. foo?.x?.whatever?.you?.want; is more clear and doesn't need any boilerplate. it doesn't need to be implemented in code. Yes, there is syntactic sugar in the language, and yes, there could be more, but it reached the point a while ago where Walter and Andrei seem to have decided that additional syntactic sugar isn't worth it. For something to be added to the language, it generally has to add actual capabilities or solve a problem that is just unreasonable or impossible to solve with a library solution. And honestly, where to draw the line on syntactic sugar is highly subjective. Something that one person might think makes the code nicely concise might seem annoyingly cryptic to someone else. And obviously, not everything can have syntactic sugar. Not everything can be built into the language. A line has to be drawn somewhere. It's just a question of where it makes the most sense to draw it, and that's not at all obvious. There's bound to be disagreement on the matter. D is an extremely powerful language, and for now at least, the conclusion seems to be that its power is being underutilized and that it simply isn't worth adding things to the language if they can be easily done with a library solution. Obviously, there are things that some folks would like to be in the language that aren't, but there's always going to be something that someone wants to be in the language but isn't, and the guys in charge seem to have decided that D is now featureful enough and powerful enough that it's not getting new features unless it actually needs them. Simply making some syntax look prettier isn't enough. A feature has to actually add capabilities that we're missing. There may be exceptions to that, but that's generally where things sit. And honestly, they were going to have to take this stance at some point. 
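Jonathan's point that ?? and ?. are expressible as a library today can be illustrated with a small wrapper in the spirit of dom.d's optionSelector. This is a hypothetical Python sketch, not the actual dom.d API:

```python
class Maybe:
    """An optionSelector-style wrapper: attribute access propagates None
    instead of raising, so Maybe(foo).bar.baz.get() mimics foo?.bar?.baz."""
    def __init__(self, value):
        self._value = value
    def __getattr__(self, name):
        # Only called for names not defined on Maybe itself.
        if self._value is None:
            return Maybe(None)
        return Maybe(getattr(self._value, name, None))
    def get(self, default=None):
        # The ?? (null-coalescing) half: unwrap, or fall back to a default.
        return self._value if self._value is not None else default

# Hypothetical object graph standing in for foo.bar.baz.something.
class Baz:
    something = 42
class Bar:
    baz = Baz()
class Foo:
    bar = Bar()

print(Maybe(Foo()).bar.baz.something.get())   # 42
print(Maybe(None).bar.baz.something.get(0))   # 0 (null propagated safely)
```

The chained form is more verbose than foo?.bar?.baz?.something, which is exactly the trade-off being debated: the capability exists as a library; only the syntax is missing.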
We can't keep adding syntactic sugar forever. They just happen to have stopped adding syntactic sugar before they added some syntactic sugar that you'd like D to have. If you can make a really good argument in a DIP as to why D really needs a feature that you want, then it may yet get added (even if it's only syntactic sugar), but it's going to have to be a really compelling argument that is likely going to need to show why the feature is objectively better and thus worth having rather than simply saving you a bit of typing or making the code look prettier. It's probably going to have to clearly reduce bugs or provide capabilities that can't reasonably be done with a library. D is long past the point where the language was in flux and we were constantly adding new features. Features do still get added, but they really have to pull their own weight rather than being a nice-to-have. - Jonathan M Davis Here is the thing that bothers me about that stance. You are correct, but I don't think you've considered the logical conclusion of the direction your argument is headed. Pray tell, why must we stop adding
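Jonathan's point that ?? and ?. can be approximated in a library is easy to demonstrate. Below is a minimal sketch in D: `getOr` is the fallback function he names, while `opt` is a hypothetical null-propagating helper of my own naming (it is not dom.d's actual `optionSelector`, just the same idea in miniature):

```d
// getOr: the generic fallback function Jonathan mentions (?? in spirit).
T getOr(T)(T value, T fallback)
{
    return value !is null ? value : fallback;
}

// opt: a hypothetical ?.-style helper; propagates null instead of
// dereferencing it. The member is named as a string and mixed in.
auto opt(string member, T)(T obj)
{
    return obj is null ? null : mixin("obj." ~ member);
}

class Baz { string something = "deep value"; }
class Bar { Baz baz; this() { baz = new Baz; } }
class Foo { Bar bar; this() { bar = new Bar; } }

void main()
{
    auto foo = new Foo;
    Foo missing = null;

    // return foo ?? fallback  ==>  return getOr(foo, fallback);
    assert(getOr(missing, foo) is foo);

    // return foo?.bar?.baz?.something  ==>  chained opt calls (via UFCS).
    assert(foo.opt!"bar".opt!"baz".something == "deep value");

    // A null anywhere in the chain propagates out instead of crashing.
    assert(missing.opt!"bar".opt!"baz" is null);
}
```

Whether this reads better or worse than the proposed operator syntax is, of course, exactly the subjective question the thread is arguing about.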
Re: My two cents
On 10/18/17 23:50, Fra Mecca wrote: [snip] The problem in my opinion is the ecosystem. We miss a build system that is tailored towards enterprises and there is so much work to do with libraries (even discovery of them) and documentation by examples. Indeed ... :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D on quora ...
On 10/15/17 13:40, Laeeth Isharc wrote: On Saturday, 14 October 2017 at 22:43:33 UTC, Adam Wilson wrote: On 10/7/17 14:08, Laeeth Isharc wrote: In a polyglot environment, D's code generation and introspection abilities might be quite valuable if it allows you to write core building blocks once and call them from other languages without too much boilerplate and bloat. One could use SWIG, but... Oh dear, I seem to have accidentally set off a firestorm. Personally, I think there are better places to focus our energy than worrying about what the users of other languages think. We like D; that should be enough for us. The last line was somewhat tongue-in-cheek. There is no way we're going to convert C#/Java users either, and unlike C/C++ we cannot easily inter-operate with them. If we can convert Pascal users, why won't some C# and Java programmers be receptive to D? Plenty of people have approached D from Java: Indeed. I am an example of just such a convert (from C#). :) But it is much more difficult to inter-operate. The easiest path that I can see is micro-services. Hide your different languages behind a REST API, such that your components are not talking C#<->D anymore, but HTTP<->HTTP. Then the language matters a LOT less, and you can convert individual services to a new language whenever business needs dictate. https://dlang.org/blog/2017/09/06/the-evolution-of-the-accessors-library/ https://dlang.org/blog/2017/08/11/on-tilix-and-d-an-interview-with-gerald-nunn/ https://github.com/intellij-dlanguage/intellij-dlanguage/ (Kingsley came from Java). Why can't we easily interop with Java and C#? I didn't find interop with Java so bad for what I was doing (embedding a Java library via the JVM with callbacks to D code), and Andy Smith found similar. http://dconf.org/2015/talks/smith.pdf (towards the end) C# interop for what I am doing seems easy enough. (I will generate C# wrappers for D structs and functions/methods.)
This work wasn't open-sourced, nor did Microsoft send out a press release about their use of D in the COM team. But I spoke to the author in Berlin (maybe you did too), and it wasn't so much work to make it useful: http://www.lunesu.com/uploads/ModernCOMProgramminginD.pdf Very cool. I had no idea MSFT was doing that. I didn't talk to him, but I was fighting a bad cold at DConf this year. :( I avoided talking to a lot of people. Even so, I was more commenting on the fact that D has built-in support for inter-operating with C/C++. It's not that it's impossible to inter-operate with C#/Java/etc., but it is significantly more work. And that can be a significant barrier to conversion when the ecosystem they are coming from comfortably provides everything they need already. :) Instead of worrying about how to get more people to come from a specific language: people will come if they see an advantage in D, so let's try to provide as many advantages as possible. :) Yes - agree with this. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D on quora ...
On 10/15/17 22:20, Dmitry Olshansky wrote: On Sunday, 15 October 2017 at 20:24:02 UTC, Adam Wilson wrote: database access (MySQL, PostgreSQL, Aerospike) libraries are available, That is important actually. So important that it should be a Priority 0 Must Have. Luckily it should also be quite straightforward to write them. At minimum there would be a C, Go, Java and Python “driver” to look at. Also, surprisingly, D has a superior driver for MySQL that talks the native binary protocol, while most other languages use the text mode. The most important SQL DBs all have high-quality C libraries. Why not leverage those? I don't understand the obsession with having everything written in native D. To be bluntly honest, I find that VisualD suffers greatly from not being written as a C# MEF component (Visual Studio's plugin system). There is no demonstrable advantage to writing an IDE plugin in D (as opposed to using the native language of the IDE) other than proving that it can be done. Also, I would caution that MSFT is working *very* hard to retire the COM-based plugin system for Visual Studio (here is the direction they are heading: https://github.com/dotnet/project-system). Honestly we could probably knock together an integrated D plugin based on the Common Project System fairly quickly, but we'd have to do it in C#. Technically you could write a P/Invoke wrapper out to D, but ... why? preferably as a standard library (like in Dart and Go). Can’t do that. And it’s not the standard library in Go and Dart but packages; dub should work for that. I've been thinking about this question a LOT, and I'm not convinced it's impossible to put the DB libs into the standard library. Problem is, the development of std has a glacial pace. Even if you put, say, Aerospike in std, I don’t think it’s in our best interest to have stagnated DB libs. DBs add features and ship new versions all the time. What might work is a JDBC-style approach - having a common interface in std with implementations in 3rd party.
SQL might work this way. I've been (slowly) working on exactly this. I started with a concept loosely based on ADO.NET but lately it's more D idiomatic than that. Also, because SQL is what it is (a text based query language) it would actually be possible to include reference implementations of the major DB vendors for the common interface. (MySQL, PgSQL, etc.) NoSQL though is highly irregular and benefits primarily from features unique to a particular vendor. Completely agree. However, some types of NoSQL DB's could be hidden behind generic interfaces, for example key-value stores. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
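The JDBC-style split discussed above (common interface in std, vendor drivers as third-party packages) can be sketched in a few lines of D. Every name below is hypothetical and only loosely ADO.NET-flavored, as Adam describes his direction; this is not code from std.experimental.database.sql:

```d
// Hypothetical common interface a standard library could own.
interface Connection
{
    Command createCommand(string sql);
    void close();
}

interface Command
{
    Row[] execute();
}

struct Row
{
    string[string] columns;
}

// A vendor package (a MySQL or PostgreSQL driver) would supply the
// concrete classes; a toy in-memory stand-in is used here.
class ToyConnection : Connection
{
    Command createCommand(string sql) { return new ToyCommand(sql); }
    void close() {}
}

class ToyCommand : Command
{
    private string sql;
    this(string sql) { this.sql = sql; }
    Row[] execute()
    {
        // Pretend the query matched exactly one row.
        return [Row(["name": "example"])];
    }
}

void main()
{
    Connection db = new ToyConnection; // would come from a driver package
    auto rows = db.createCommand("SELECT name FROM t").execute();
    assert(rows.length == 1 && rows[0].columns["name"] == "example");
    db.close();
}
```

The point of the split is visible in `main`: application code only names the interfaces, so swapping the driver package does not touch it, while the drivers can ship on their own release cadence.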
Re: D on quora ...
database access (MySQL, PostgreSQL, Aerospike) libraries are available, That is important actually. So important that it should be a Priority 0 Must Have. preferably as a standard library (like in Dart and Go). Can’t do that. And it’s not standard in Go and Dart but packages, dub should work for that. I've been thinking about this question a LOT, and I'm not convinced it's impossible to put the DB libs into the standard library... -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D on quora ...
On 10/7/17 14:08, Laeeth Isharc wrote: On 10/6/2017 10:19 PM, Adam Wilson via Digitalmars-d wrote: What if we stop focusing on the C/C++ people so much? They like their tools and have no perceivable interest in moving away from them (Stockholm Syndrome much?). The arguments they use are primarily meant as defensive ploys, because they compare everything to C/C++, and when it doesn't match in some way or another, the other language must be deficient. They've already decided that C/C++ is the meter stick against which all other languages are to be judged. Unsurprisingly, nothing that is NOT C/C++ meets their exacting demands. Yes - as Andrei said in a talk, people are looking for an excuse not to have to take the time to learn something new. There's an inordinate strength of feeling in discussions in relation to D. If it's not your cup of tea, why do you care so much? The raising of spurious objections is much better than indifference at this point, and you can see this year that there has been a shift. People got downvoted for repeating the same old talking points that are now demonstrably not correct, whereas before it was a matter of judgement and people could reasonably differ. On Friday, October 06, 2017 23:19:01 Brad Roberts via Digitalmars-d wrote: Or recognize that painting huge diverse groups as if there's a single brush with which to do so is a huge fallacy. Consider that the two leaders, as well as a large number of the contributing developers, come from the C++ community, and that's not a bad thing, but rather a big part of _why_ they came to D. As always, focusing on the users of the language tends to pay a lot more dividends than focusing on naysayers. Luckily, that's how things tend to proceed here, so yay for that. Long tail. We really couldn't care less about what most people think. In fact it's good that most people won't try D because what they will be expecting isn't yet what we offer (glossiness and hand-holding).
At any one time there is a proportion of people who are unhappy with their existing choices and looking for something better. It's easy to grow in the early days - you don't need to appeal to most programmers. You just need to have a slightly stronger appeal to those who are already predisposed to like you (and to those who are already using your language and would like to use it for more things). And people conceive of the market in too small a way. People talk as if languages are in a battle to the death with each other - D can't succeed because it's too late, or because Rust has momentum. But the world is a very big place, and the set of people who talk much to a broad audience about what they are doing is a small one and thinking it reflects reality leads to distorted perceptions. Who would have thought that probably one of the larger code bases (500k SLOC) to be rewritten/converted in D might come from Pascal, with D competing with ADA? I don't know what has been decided there, but whatever the outcome, that's highly interesting, because it's probably true of others. Similarly Weka came from nowhere. A random tweet by Fowler led to them building their entire company on a language the founders hadn't used before. Because D is a highly ambitious language that isn't confined to a single domain, and that's a potential replacement in some domains for many other languages, most people won't know anyone who uses D because use is much more spread out. Enterprise users outside of web + big tech/startups don't talk about their choices that much. Microsoft didn't send out a press release when they used D in their COM team, for example - someone there just figured it was a good tool for the job and got on with it. Similarly the company that Andy Smith worked for (a money management company) didn't say anything, and it only happened that he talked about it because I suggested it and he happened to have time. 
Inordinately at this point in its development D is going to appeal to principals over agents, to people who have at least local authority to make decisions and don't have to persuade others, because the dynamics of how you persuade a committee are based on quite different things than making a decision and living or dying by how it turns out. That's okay. If people don't feel they can use D at work, they shouldn't. Some people will be able to, and as they have some success with it, others will start to imitate them. And in the meantime maybe it will be an advantage of sorts for the earlier adopters. It matters who your customers are, because that shapes what you become. And it's better early on to have people who make decisions on technical merits and who have the authority to do so, than to have a representative enterprise buyer! On Saturday, 7 October 2017 at 08:36:01 UTC, Jonathan M Davis wrote: Honestly, I've gotten to the point that I don't care much about trying to appeal to folks who complain about D. If we j
Re: D on quora ...
On 10/6/17 23:19, Brad Roberts wrote: On 10/6/2017 10:19 PM, Adam Wilson via Digitalmars-d wrote: What if we stop focusing on the C/C++ people so much? They like their tools and have no perceivable interest in moving away from them (Stockholm Syndrome much?). The arguments they use are primarily meant as defensive ploys, because they compare everything to C/C++, and when it doesn't match in some way or another, the other language must be deficient. They've already decided that C/C++ is the meter stick against which all other languages are to be judged. Unsurprisingly, nothing that is NOT C/C++ meets their exacting demands. I say we ditch the lot and focus on the large languages where D can get some traction (C#/Java). Or recognize that painting huge diverse groups as if there's a single brush with which to do so is a huge fallacy. Consider that the two leaders, as well as a large number of the contributing developers, come from the C++ community, and that's not a bad thing, but rather a big part of _why_ they came to D. As always, focusing on the users of the language tends to pay a lot more dividends than focusing on naysayers. Luckily, that's how things tend to proceed here, so yay for that. I'll admit the last sentence was an error. However, my main point is that we really need to stop worrying about other languages, and specifically C/C++. That horse has been beaten to death. I enjoy using D and find it superior for many types of projects. That's good enough for me. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Release D v2.076.1
On 10/12/17 19:50, Jonathan M Davis wrote: On Thursday, October 12, 2017 14:39:27 b4s1L3 via Digitalmars-d-announce wrote: Also I'd like to say that the policy that regression fixes are committed on stable, and the fact that they only come to master in a "sync operation", is a problem. In the Travis YAML we have to test dmd, dmd beta (stable, not yet released), and finally dmd master (current working tree). The policy should be changed so that regression fixes are committed to both master and stable, allowing us to decrease CI complexity. The problem is mainly that testing a project with dmd beta is pointless. It's only useful 1 or 2 weeks before a release, which happens, let's say, four times per year, leading to a waste of computing resources at the CI service. With regression fixes put in master at the same time as in stable, testing three versions of the DMD compiler would not be necessary anymore, I think. I don't know what the best way to handle committing regression fixes is, but I did find it annoying recently when I ran into a bug on master that I'd fixed on stable, but the fix hadn't been merged over yet. The fact that the fixes are delayed on master makes master buggier than it would be otherwise, and a number of us use master as our primary compiler. - Jonathan M Davis At work we branch out of stable (not yet released) and fix the bug. Merge the branch to stable and then merge the branch to master. Works pretty well. As long as you don't merge things into stable that you never intend for master. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: D on quora ...
On 10/6/17 14:12, Rion wrote: [snip] Nearly every new language besides Rust or Zig is GC'd, yet that same "flaw" is not looked upon as an issue. It seems that D simply carries this GC stigma because the people mentioning it are C++ developers, the same people D targets as a potential user base. D can have more success targeting people from scripting languages like PHP, Ruby, and Python, where the GC is not looked upon as a negative. The same effect can be seen in Go's popularity at drawing developers from scripting languages, despite that not being its intention. I always felt that D positions itself as a higher-level language and in turn scares people away, while the people it wants to attract are mostly already set in their ways and looking for excuses to discredit D. The whole "C++ has all of D's features and does not need a GC / the GC is bad" excuse we see in the quora.com posting fits that description (and not only there, also on other sites). What if we stop focusing on the C/C++ people so much? They like their tools and have no perceivable interest in moving away from them (Stockholm Syndrome much?). The arguments they use are primarily meant as defensive ploys, because they compare everything to C/C++, and when it doesn't match in some way or another, the other language must be deficient. They've already decided that C/C++ is the meter stick against which all other languages are to be judged. Unsurprisingly, nothing that is NOT C/C++ meets their exacting demands. I say we ditch the lot and focus on the large languages where D can get some traction (C#/Java). -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Simple web server benchmark - vibe.d is slower than node.js and Go?
On 9/21/17 11:49, bitwise wrote: On Thursday, 21 September 2017 at 08:01:23 UTC, Vadim Lopatin wrote: There is a set of simple web server apps written in several languages (Go, Rust, Scala, Node.js): https://github.com/nuald/simple-web-benchmark I've sent a PR to include a D benchmark (vibe.d). I was hoping it could show performance at least not worse than the other languages, but it appears to be slower than Go and even Node.js. Are there any tips to achieve better performance in this test? Under Windows, I can get vibe.d configured to use libevent to show results comparable with Go. With other configurations, it runs two times slower. Under Linux, vibe.d shows 45K requests/second, and Go - 50K. The only advantage of D here is CPU load - 90% vs 120% in Go. I'm using DMD. Probably ldc could speed it up a bit. Probably it's caused by a single-threaded async implementation while other languages are using parallel handling of requests? Doesn't vibe-d use Fibers? I tried to build a simple web server with a fiber-based approach once - it was horribly slow. I hope C# (and soon C++) style stackless resumable functions will eventually come to D. The purpose of Async/Await in C# is not to improve performance but to free up the thread while some long-running IO operation is taking place (such as talking to a remote server). In C# the biggest use case is ASP.NET/Core, which allows the server to process many times the number of incoming requests (threads) than there are physical cores on the device. This works because another request is often doing some other work behind the scenes (DB query, HTTP call to a remote service, etc.)
In fact MSFT says that Async/Await will decrease the performance of a single instance of execution and is not to be used in situations where the delay is less than about 50ms (as of 2011; I've heard that it could be even less with newer versions of the compiler), as it can actually take more time to dehydrate/rehydrate the thread than the blocking operation would've taken. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
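The cooperative suspend/resume that vibe.d's fibers build on can be shown with druntime's raw Fiber API. This is a minimal sketch of the pattern, not vibe.d's actual event loop: the fiber yields where real code would wait on I/O, the calling thread does other work, then resumes it:

```d
import core.thread : Fiber;

// Shared (thread-local) log so the interleaving is observable.
string[] log;

// A "request handler" that pauses mid-way, as if waiting on a DB query.
void handleRequest()
{
    log ~= "start request";
    Fiber.yield();          // suspend: pretend the I/O is in flight
    log ~= "finish request";
}

void main()
{
    auto f = new Fiber(&handleRequest);
    f.call();               // runs until the yield
    log ~= "thread does other work";
    f.call();               // resumes after the "I/O" completes
    assert(log == ["start request",
                   "thread does other work",
                   "finish request"]);
}
```

The thread is never blocked inside `handleRequest`; whether that wins or loses against stackless async in a benchmark is exactly the scheduling-overhead question raised above.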
Re: Dynamic binding to the Mono runtime API
On 6/4/17 04:15, Jakub Szewczyk wrote: On Sunday, 4 June 2017 at 09:43:23 UTC, Adam Wilson wrote: On 6/4/17 01:18, Jakub Szewczyk wrote: This is an interface to the Mono libraries, D/CLI would [...] My interest is less in code ports than bindings to the actual code. My experience with code ports or translations is that often subtle bugs creep in during translation due to the fact that each language has different idioms. What I am thinking about is a tool that loads an assembly, examines its types and methods via this API, and emits D code that directly interfaces into the .NET types via this API. The tricky part here is mapping the .NET dependencies into D. The moment the library exposes a type from a dependency, that dependency ALSO needs to be included somehow. All libraries reference "mscorlib", AKA the BCL, so we'd have to provide a "mono-bcl" package on DUB. That's what I actually meant, "porting" was a misused term on my part, "binding" would be a better word, sorry for that. As for the dependency problem - I think that a linking layer generator would accept a list of input assemblies (and optionally, specific classes) to which it should generate bindings, the core Mono types could be automatically translated to D equivalents, and the rest could be left as an opaque reference, like MonoObject* in C, also providing support for very basic reflection through the Mono methods if it turned out to be useful for anyone. Mono actually supports some kind of GC bridging as far as I understand, [...] On the GC side I was mostly thinking about GC Handles so that the objects don't get collected out from underneath us. That is something that is trivial to code-gen. As for exceptions, I like the catch->translate->rethrow mechanism. And if the exception is unknown we could simply throw a generic exception. The important thing is to get close to the D experience, not try to map it perfectly.
Yes, GCHandles to keep Mono objects alive in D, and a wrapper based on that GC bridge to keep D references from being collected by Mono. I have previously implemented a very similar mechanism for Lua in a small wrapper layer, and it worked perfectly. I can make a static library version, [...] Thank you for this! I find static libraries easier to deal with. I'm sure other people have differing opinions, so having both would make everyone happy. It's now public as v1.1.0. I've tested that it works with the tiny sample; the only important part is that the library to link must be specified by the project using this binding, because those paths may vary across systems, and they cannot be specified in code like the dynamically linked ones. However, a simple "libs":["mono-2.0"] entry in dub.json should be enough for most use cases. Thank you for this, I've tried it and it works! I did some in-depth research and prototyping in D, and it looks like the only way to enumerate the types in an assembly is to use the Metadata Table APIs and map everything that way. That's a little beyond the scope of my free time, so I'll have to shelve the idea for now. :( -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Dynamic binding to the Mono runtime API
On 6/4/17 01:18, Jakub Szewczyk wrote: This is an interface to the Mono libraries, D/CLI would require quite a lot of compiler changes, both on the front-end and back-end side, but thanks to metaprogramming a wrapper library can get very close to such an interface. I plan on making an automated D to Mono bridge, akin to LuaD, so that it can be used as a scripting platform for e.g. games. I haven't thought about doing it the other way around, but now it seems like a very interesting idea! Unfortunately no XAML (http://www.mono-project.com/docs/gui/wpf/), but many other libraries, such as XWT (https://github.com/mono/xwt) could be ported this way, I'll certainly look into it. My interest is less in code ports than bindings to the actual code. My experience with code ports or translations is that often subtle bugs creep in during translation due to the fact that each language has different idioms. What I am thinking about is a tool that loads an assembly, examines its types and methods via this API, and emits D code that directly interfaces into the .NET types via this API. The tricky part here is mapping the .NET dependencies into D. The moment the library exposes a type from a dependency, that dependency ALSO needs to be included somehow. All libraries reference "mscorlib", AKA the BCL, so we'd have to provide a "mono-bcl" package on DUB. One solution is to simply include the exposed dependency in the generated code. This would work because while the D code would have distinct types for the same class, the underlying .NET type is the same. The drawback with this approach is that you can't share the instances across D interfaces as the types are different. For example, Library1.B and Library2.C both rely on Dependency.A. In D you would have Library1.B.A and Library2.C.A, and these two types, while the same in practice, are different types to the compiler. Maybe a clever use of alias can solve this at code-gen time... more research is required there.
The other solution is to have the code-gen throw an error when it encounters a type from a dependency. This ensures that the types are the same across libraries in D, but at the cost of increased complexity for the developer when running the tool (specifying extra deps) and when building their code (ensuring all relevant packages are built/referenced). I'm not sold on either one. And it'd be best if it's possible to support both somehow. As for the WPF remark, my brain immediately jumped to trying to interface to .NET/Core using a similar mechanism, but alas some research time indicates that this is not possible outside of COM. *le sigh* My apologies for the random side-track. Mono actually supports some kind of GC bridging as far as I understand, as there is a sgen-bridge.h header just for that, and it has apparently been used in the C#-Java Xamarin interface on Android. As for exceptions - D functions have to be wrapped in a nothrow wrapper, but that can be completely automated with templates. The other way around is also quite simple - when invoking a Mono function a pointer can be given that will set an exception reference if one is thrown from the .NET side, and a wrapper can easily rethrow it back to D. On the GC side I was mostly thinking about GC Handles so that the objects don't get collected out from underneath us. That is something that is trivial to code-gen. As for exceptions, I like the catch->translate->rethrow mechanism. And if the exception is unknown we could simply throw a generic exception. The important thing is to get close to the D experience, not try to map it perfectly. I can make a static library version, it's just some regex substitutions done on the functions file actually, most probably I'll publish it today. It will be a dub subconfiguration, as is the case for GLFW3.
Btw, I've manually ported the basic and configuration headers, so that no mistakes are made, and then used DStep and a modified DStep to generate the rest of the headers - my modification was only to change the way function declarations are generated, to make them in the Derelict form of alias da_function = void function(...);, and the rest was done with quite a lot of editor (VSCode) shortcuts. Thank you for this! I find static libraries easier to deal with. I'm sure other people have differing opinions, so having both would make everyone happy. I am very excited about this! -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
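The catch->translate->rethrow mechanism discussed above is easy to sketch in pure D, with no Mono dependency. Everything here is hypothetical naming for illustration (in a real binding, `MonoException` would be built from the exception reference the Mono call hands back, and the boundary wrapper would be code-generated):

```d
// Stand-in for an exception surfaced from the .NET side; in a real
// binding this would carry the MonoObject* exception reference.
class MonoException : Exception
{
    this(string dotNetType) { super(dotNetType); }
}

// Hypothetical translation table: known .NET exception types map to
// idiomatic D exceptions, unknown ones to a generic fallback, as
// suggested in the thread.
Exception translate(MonoException e)
{
    switch (e.msg)
    {
        case "System.ArgumentNullException":
            return new Exception("argument was null (translated)");
        default:
            return new Exception("unhandled .NET exception: " ~ e.msg);
    }
}

// catch -> translate -> rethrow at the language boundary.
T crossBoundary(T)(T delegate() call)
{
    try
        return call();
    catch (MonoException e)
        throw translate(e);
}

void main()
{
    string seen;
    try
        crossBoundary!void(() {
            throw new MonoException("System.ArgumentNullException");
        });
    catch (Exception e)
        seen = e.msg;
    assert(seen == "argument was null (translated)");
}
```

The other direction (wrapping D callbacks in a nothrow shell before handing them to Mono) is the mirror image: catch everything, stash it, and re-raise it on the .NET side.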
Re: Dynamic binding to the Mono runtime API
On 6/3/17 10:30, Jakub Szewczyk wrote: The Mono runtime is a cross-platform, open-source alternative to Microsoft's .NET framework [1], and it can be embedded in other applications as a "scripting" VM, but with JIT-compilation enhanced performance and support for many languages such as C#, F# or IronPython [2]. It provides a C API, so I've bound it to D as a Derelict-based project, available at https://github.com/kubasz/derelict-mono, and as a DUB package (http://code.dlang.org/packages/derelict-mono). It currently wraps the Mono 5.0 API. There's also a simple example of calling a C# main from D code, and of C# code calling a native function implemented in D. PS: Because I don't own a Mac I have no idea what the correct paths to the Mono shared library are, so it'd be great if someone could post/create a PR of them. [1] http://www.mono-project.com/ [2] http://www.mono-project.com/docs/advanced/embedding/scripting/ I work with C# professionally and this is some SERIOUSLY cool work. Thank you for this! I've looked over the code a bit and I have a couple of questions. This appears to be an interface to the runtime itself, not a BCL interface, correct? Could this be used to read in the Mono Class Libraries, and if so, would some sort of automated code generation tool be required? It looks to me like the binding will be non-trivial, with GC, exceptions, etc. that all need to be handled at the call-site. Can we get a static library version of this, or is there a dependency on dynamic libraries? I have to admit I am very impressed. I have spent a lot of time building code generators before and I have to admit that the idea of binding to arbitrary .NET libraries via code generation is extremely appealing to me. I am seriously tempted to take this and start building a binding generator... I seriously need more free time! Way too many cool and useful things happening in D for my limited free time. A D binding for XAML ...
THAT would be a sight to behold! -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Thoughts on some code breakage with 2.074
On 5/9/17 20:23, Patrick Schluter wrote: On Tuesday, 9 May 2017 at 17:34:48 UTC, H. S. Teoh wrote: On Tue, May 09, 2017 at 02:13:34PM +0200, Adam Wilson via Digitalmars-d wrote: > [...] [...] [...] [...] I don't represent any company, but I have to also say that I *appreciate* breaking changes that reveal latent bugs in my code. In fact, I even appreciate breakages that eventually force me to write more readable code! A not-so-recent example: [...] The code breakage annoyance has more to do with 3rd party libraries not very actively maintained than with active codebases imho. *cough* Umm, I think that's a false pointer. If it's not actively maintained, should you really be relying on it? Where I work, current maintenance is one of the first questions we ask, followed immediately by determining whether or not we are able to maintain it ourselves should it go unmaintained. If you're going to take on maintenance yourself, the library is already missing features and you're responsible for fixing its existing implementation bugs anyway, so you might as well do the work of upgrading it while you're at it. This is the point of Open Source: we have the opportunity to take unmaintained code and start maintaining it again. Either way, all I hear about is corp users not liking breaking changes. That has been demonstrated as a false concern time and time again. If it's a matter of unmaintained libraries, those libraries probably have bigger problems than breaking compiler changes; fork and upgrade them or write your own, because those have always been the only two choices you've ever had in practice anyway. Telling the world that we can't make breaking changes to the compiler because it might break an unmaintained library is an irrational and extreme position to take. It will *not* win us hearts and minds. Let's stop hiding behind our misplaced fears over corp users and unmaintained libraries so that we can start improving D for everyone who is using it today.
-- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: Thoughts on some code breakage with 2.074
On 5/8/17 20:33, Brian Schott wrote: Recently the EMSI data department upgraded the compiler we use to build our data processing code to 2.074. This caused several of the thousands of processes to die with signal 8 (floating point exceptions). This was caused by the fix to issue 17243. This is a good thing. We need more breaking changes like this. Now that the floating point exceptions are properly enabled we were able to track down some issues that were being silently ignored. WUT. All I hear on these forums is the abject terror of breaking changes making companies run screaming from D. You mean to say that companies don't actually mind breaking changes if it solves long-standing issues. I'm shocked. SHOCKED I SAY! ;-) Can we PLEASE get more of this? I'm not saying up-end the language, but let's solve some problems. I doubt our corporate users will be very angry. I suspect that most reactions will fall between "minor irritation" and this one. /me looks sideways at shared -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
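For anyone wanting to see what the fix surfaced at EMSI: std.math exposes the IEEE floating-point status flags, and `FloatingPointControl.enableExceptions` from the same module is what turns a condition like divide-by-zero into an actual signal 8. The sketch below checks the sticky flags instead of enabling traps, so it observes the conditions without killing the process:

```d
import std.math : ieeeFlags, resetIeeeFlags;

void main(string[] args)
{
    // A runtime value, so the compiler can't constant-fold the divisions.
    double zero = args.length > 1 ? 1.0 : 0.0;

    resetIeeeFlags();
    double a = 1.0 / zero;      // sets the sticky divide-by-zero flag
    assert(ieeeFlags.divByZero);

    resetIeeeFlags();
    double b = zero / zero;     // 0.0/0.0 raises "invalid", not divByZero
    assert(ieeeFlags.invalid);
}
```

With exceptions enabled via `FloatingPointControl` instead, either division would deliver SIGFPE at the offending instruction, which is exactly how the silently ignored issues became loud.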
Re: DLang quarterly EU?
On 5/7/17 12:57, Seb wrote: On Sunday, 7 May 2017 at 06:58:51 UTC, Adam Wilson wrote: On 5/7/17 07:41, Walter Bright wrote: Dang, I wish I could participate in that! Well, technically you could, but it involves a set of rather grueling flights. Depending on the day it's held I might be able to attend once a year. If it's on the weekend, I can make a long weekend out of it. +1 - maybe it's worth considering making it two days (= one weekend), so that the flight is not longer than the meetup? That can work. It would be two or three days of vacation depending on flight schedules. It is certainly workable. And it would be a lot of fun to get together to discuss D and hack on projects. Not to mention a cool way to see new cities if it moves around. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: DLang quarterly EU?
On 5/7/17 07:41, Walter Bright wrote: Dang, I wish I could participate in that! Well, technically you could, but it involves a set of rather grueling flights. Depending on the day it's held I might be able to attend once a year. If it's on the weekend, I can make a long weekend out of it. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: DConf hackathon: idea list
On 5/4/17 16:33, rikki cattermole wrote: On 04/05/2017 3:22 PM, Adam Wilson wrote: On 5/4/17 15:32, Seb wrote: Hi all, the DConf hackathon isn’t a hackathon in the traditional sense. It is intended as a day for _collaboratively_ focusing on long-lasting problems and pain points in the D ecosystem, planning upcoming features or DIPs, and creation of a rough roadmap for the next months. Of course, any D hackers who wish to simply progress their own personal projects are welcome too! Experience has shown that in large groups too much time is wasted on giving a voice to everyone, whereas for tiny groups chances are that it takes too long to get the ball rolling. Hence, a group size of four or five D hackers is recommended. Below you can find a list of themes with a short abstract and a couple of ideas. The abstracts and ideas are intended to get you started and guide you. Please feel free to _add your own ideas_ and _add your names_ next to them so that people can ping you (IRC, email, and other IM handles might be handy as well). Of course, you can add your name to multiple projects. On Sunday the first half an hour will be used to finalize the group forming. All existing groups and persons with an idea, but without a group, can pitch their idea shortly (one minute max, no slides) and thus find other motivated D hackers. https://docs.google.com/document/d/1L5edu6LLj3Afa3tPgqk-aX-fErwr7sPj37Dt5avoc5w/edit# From the Phobos wishlist: I am working on a generic SQL database interface. If anybody is interested in helping out I have a small amount of code that shows the general design direction I've taken so far. We can discuss the design and collaboratively hack out a prototype. The current code is here: https://github.com/LightBender/std.experimental.database.sql Looking at that, I think focusing on describing tables etc. would be a good first step. I worry that it won't be very flexible memory management or serialization wise.
It's definitely the hardest problem to solve. Obviously we could start with a brute force approach (box everything like ADO.NET does) for the sake of getting started and then work on improving it later. For now I compromised and used Variant, which should be sufficient for most cases right now. (See latest commit) I am using Classes and Dynamic Arrays so it's not @nogc yet, but this is something that can be improved over time. There is no reason that this could not use the new Allocators API at a later date. I'm not looking for inclusion into Phobos during the DConf Hackathon, but I would like to block out something that we can start using. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
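To make the Variant compromise concrete, here is a hypothetical sketch of the boxing approach described above; the `Row` type and its methods are illustrative only, not the actual std.experimental.database.sql API:

```d
// Sketch of a Variant-boxed result row: every column value is boxed
// in a Variant, trading @nogc-ability for type flexibility, much
// like ADO.NET boxes column values as object.
import std.variant : Variant;

class Row
{
    private Variant[string] columns;

    this(Variant[string] cols) { columns = cols; }

    // Untyped access returns the boxed Variant directly.
    Variant opIndex(string name) { return columns[name]; }

    // Typed access unboxes; throws if the stored type doesn't match.
    T get(T)(string name) { return columns[name].get!T; }
}

void main()
{
    auto row = new Row(["id": Variant(42), "name": Variant("dlang")]);
    assert(row.get!int("id") == 42);
    assert(row.get!string("name") == "dlang");
}
```

The class-plus-dynamic-array design is deliberately not @nogc; the surface API could stay the same while an allocator-aware storage strategy is swapped in later.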
Re: DConf hackathon: idea list
On 5/4/17 15:32, Seb wrote: Hi all, the DConf hackathon isn’t a hackathon in the traditional sense. It is intended as a day for _collaboratively_ focusing on long-lasting problems and pain points in the D ecosystem, planning upcoming features or DIPs, and creation of a rough roadmap for the next months. Of course, any D hackers who wish to simply progress their own personal projects are welcome too! Experience has shown that in large groups too much time is wasted on giving a voice to everyone, whereas for tiny groups chances are that it takes too long to get the ball rolling. Hence, a group size of four or five D hackers is recommended. Below you can find a list of themes with a short abstract and a couple of ideas. The abstracts and ideas are intended to get you started and guide you. Please feel free to _add your own ideas_ and _add your names_ next to them so that people can ping you (IRC, email, and other IM handles might be handy as well). Of course, you can add your name to multiple projects. On Sunday the first half an hour will be used to finalize the group forming. All existing groups and persons with an idea, but without a group, can pitch their idea shortly (one minute max, no slides) and thus find other motivated D hackers. https://docs.google.com/document/d/1L5edu6LLj3Afa3tPgqk-aX-fErwr7sPj37Dt5avoc5w/edit# From the Phobos wishlist: I am working on a generic SQL database interface. If anybody is interested in helping out I have a small amount of code that shows the general design direction I've taken so far. We can discuss the design and collaboratively hack out a prototype. The current code is here: https://github.com/LightBender/std.experimental.database.sql -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Re: pointer not aligned
On 3/30/17 10:47 PM, H. S. Teoh via Digitalmars-d-learn wrote: On Fri, Mar 31, 2017 at 04:41:10AM +, Joel via Digitalmars-d-learn wrote: Linking... ld: warning: pointer not aligned at address 0x10017A4C9 (_D30TypeInfo_AxS3std4file8DirEntry6__initZ + 16 from .dub/build/application-debug-posix.osx-x86_64-dmd_2072-EFDCDF4D45F944F7A9B1AEA5C32F81ED/spellit.o) ... and this goes on forever! More information, please. What was the code you were trying to compile? What compile flags did you use? Which compiler? T I see this on OSX as well. Any code referencing Phobos appears to produce this. It appeared after updating the Xcode command line tools. It does not appear to affect program execution, but the pages of warnings are really quite annoying. DMD 2.073.2 -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
DConf Hackathon
Hello fellow DConfers! In the spirit of "the DConf 2017 hackathon isn't a hackathon in the traditional sense as most of the time and focus will hopefully be spent discussing, planning and developing future D projects", I was thinking that it might be beneficial to pull together a list of areas in the D ecosystem that DConf attendees would be interested in hacking on. These areas could then be used to create a number of "hacking groups" where like-minded colleagues could sit together, discuss, and even hack on that topic if they wish. Please reply with your vote and whether or not you are willing to organize those groups. I would be interested in participating in the following areas: Security Libraries (Morning) - Willing to Organize; Database Interfaces (Afternoon) - Willing to Organize. I am looking forward to seeing everyone there! -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
OpenSSL to switch licenses to Apache 2.0
Hi Everyone, I know that the licensing around OpenSSL has been a somewhat controversial topic around the D world. So I thought that you might find this bit of news interesting: https://www.openssl.org/blog/blog/2017/03/22/license/ -- Adam Wilson IRC: LightBender import quiet.dlang.dev;