Building a wasm library, need to override .object._d_newitemT!T
Hello, I've been developing a library[1] based on spasm for which I've implemented the druntime, and it currently compiles web apps properly with TypeInfo, no GC, and even diet templates. I'm having a problem implementing the `new` keyword so that I can start importing more libraries with minimal changes. However, LDC calls `.object._d_newitemT!T` from the original druntime - which I need for compile-time function execution, but my implementation in `module object` doesn't override it in the compiler, and the original implementation tries to import `core.stdc.time`, which errors out in wasm (with good reason). Is there a compiler flag that I can use to override module templates? Thanks in advance. [1] https://github.com/etcimon/libwasm
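For context, the override being attempted looks roughly like this - a malloc-backed replacement for the template that `new T` lowers to. The name `wasmNewItem` below is hypothetical; in the real attempt the body would be the `_d_newitemT(T)()` template declared inside the custom `module object`, and the malloc-based implementation is an assumption for a no-GC wasm target:

```d
import core.stdc.stdlib : malloc;

// Hypothetical sketch of a no-GC allocation helper for a wasm druntime.
// In a replacement runtime this would be the `_d_newitemT(T)()` template
// in `module object`; this simplified version ignores destructors and
// types that need more than a blit of their initializer.
T* wasmNewItem(T)() @trusted
{
    auto p = cast(T*) malloc(T.sizeof);
    *p = T.init;  // start the item in its .init state
    return p;
}

struct Point { int x = 3; int y = 4; }
```

The reported problem remains that even with such a template present in a replacement `object` module, the compiler resolves the symbol in the shipped druntime instead.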
Re: vibe.d benchmarks
On Wednesday, 6 January 2016 at 08:24:10 UTC, Atila Neves wrote: On Tuesday, 5 January 2016 at 14:15:18 UTC, rsw0x wrote: On Tuesday, 5 January 2016 at 13:09:55 UTC, Etienne Cimon wrote: On Tuesday, 5 January 2016 at 10:11:36 UTC, Atila Neves wrote: [...] The Rust mio library doesn't seem to be doing any black magic. I wonder how libasync could be optimized to match it. Have you used perf (or similar) to attempt to find bottlenecks yet? Extensively. I optimised my D code as much as I know how to. And that's the same code that gets driven by vibe.d, boost::asio and mio. Nothing stands out anymore in perf. The only main difference I can see is that the vibe.d version has far more cache misses. I used perf to try and figure out where those came from and included them in the email I sent to Soenke. Perf is a bit hard to understand if you've never used it before, but it's also very powerful. Oh, I know. :) Atila It's possible that those cache misses will be irrelevant when the requests actually do something, is it not? When a lot of different requests are competing for cache lines, I'd assume it's shuffling things enough to change these readings.
Re: vibe.d benchmarks
On Wednesday, 6 January 2016 at 08:21:00 UTC, Atila Neves wrote: On Tuesday, 5 January 2016 at 13:09:55 UTC, Etienne Cimon wrote: On Tuesday, 5 January 2016 at 10:11:36 UTC, Atila Neves wrote: [...] The Rust mio library doesn't seem to be doing any black magic. I wonder how libasync could be optimized to match it. No black magic, it's a thin wrapper over epoll. But it was faster than boost::asio and vibe.d the last time I measured. Atila You tested D+mio, but the equivalent would probably be D+libasync, as it is also a standalone library and a thin wrapper around epoll.
Re: vibe.d benchmarks
On Tuesday, 5 January 2016 at 10:11:36 UTC, Atila Neves wrote: On Thursday, 31 December 2015 at 08:23:26 UTC, Laeeth Isharc wrote: [...] vibe.d _was_ faster than Go. I redid the measurements recently once I wrote an MQTT broker in Rust, and it was losing to boost::asio, Rust's mio, Go, and Java. I told Soenke about it. I know it's vibe.d and not my code because after I got the disappointing results I wrote bindings from both boost::asio and mio to my D code and the winner of the benchmarks shifted to the D/mio combo (previously it was Rust - I figured the library was the cause and not the language and I was right). I'd've put up new benchmarks already, I'm only waiting so I can show vibe.d in a good light. Atila The Rust mio library doesn't seem to be doing any black magic. I wonder how libasync could be optimized to match it.
Re: vibe.d benchmarks
On Monday, 4 January 2016 at 10:32:41 UTC, Daniel Kozak wrote: On Sat, 02 Jan 2016 03:00:19 +, Etienne Cimon via Digitalmars-d <digitalmars-d@puremagic.com> wrote: On Friday, 1 January 2016 at 11:38:53 UTC, Daniel Kozak wrote: On Thursday, 31 December 2015 at 18:23:17 UTC, Etienne Cimon wrote: [...] ? With libasync, you can run multiple instances of your vibe.d server and the linux kernel will round robin the incoming connections. Yes, but I'm speaking about one instance of vibe.d with multiple workerThreads, which performs really badly with libasync. Yes, I will investigate this.
Re: vibe.d benchmarks
On Sunday, 3 January 2016 at 22:16:08 UTC, Nick B wrote: can someone tell me what changes need to be committed, so that we have a chance at getting some decent (or even average) benchmark numbers? Considering that the best benchmarks are from tools that have all the C calls inlined, I think the best optimizations would come from pragma(inline, true), even inlining the fiber context switches.
Re: vibe.d benchmarks
On Saturday, 2 January 2016 at 10:05:56 UTC, Sebastiaan Koppe wrote: That is nice. Didn't know that. That would enable zero-downtime updates, right? Yes, although you might still break existing connections unless you can make the previous process wait for its existing connections to close after killing it. I use docker a lot, so normally I run a proxy container in front of the app containers and have it handle ssl and virtual host routing. I haven't needed to migrate off my linux server yet (12c/24t, 128gb) but when I do, I'll just add another one and go for DNS round robin. I use cloudflare currently, and in practice you can add/remove A records and it'll round robin through them. If your server application is capable of running as multiple instances, it's only a matter of having the database/cache servers accessible from another server, and you've got very efficient load balancing that doesn't require any proxies.
Re: vibe.d benchmarks
On Friday, 1 January 2016 at 11:38:53 UTC, Daniel Kozak wrote: On Thursday, 31 December 2015 at 18:23:17 UTC, Etienne Cimon wrote: On Thursday, 31 December 2015 at 13:29:49 UTC, Daniel Kozak wrote: On Thursday, 31 December 2015 at 12:09:30 UTC, Etienne Cimon wrote: [...] When I use HTTPServerOption.distribute with libevent I get better performance, but with libasync it drops from 2 req/s to 80 req/s. So maybe it's some other performance problem. I launch libasync programs as multiple processes, a bit like postgresql. The TCP listening is done with REUSEADDR, so the kernel can distribute it and it scales linearly without any fear of contention on the GC. My globals go in redis or databases. ? With libasync, you can run multiple instances of your vibe.d server and the linux kernel will round robin the incoming connections.
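The multi-process listening setup described can be sketched with std.socket. Note one hedge: on Linux, per-connection balancing across processes is usually credited to SO_REUSEPORT rather than SO_REUSEADDR, so the exact option used here (mirroring the post) should be treated as an assumption:

```d
import std.socket;

// Sketch of the described setup: every process creates its own listener
// on the same port with the reuse option set, and the kernel hands out
// incoming connections among them - no GC contention between processes.
Socket makeListener(ushort port)
{
    auto listener = new TcpSocket();
    listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, true);
    listener.bind(new InternetAddress(port));  // binds ADDR_ANY:port
    listener.listen(128);
    return listener;
}
```

Each launched process would call `makeListener` with the same port and then run its own accept loop.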
Re: vibe.d benchmarks
On Thursday, 31 December 2015 at 08:51:31 UTC, yawniek wrote: On Thursday, 31 December 2015 at 08:23:26 UTC, Laeeth Isharc wrote: Isn't there a decent chance the bottleneck is vibe.d's JSON implementation rather than the framework as such? We know from Atila's MQTT project that vibe.d can be significantly faster than Go, and we also know that its JSON implementation isn't that fast. Replacing it with FastJSON might be interesting. Sadly I don't have time to do that myself. This is not the same benchmark discussed elsewhere; this one is a simple echo thing, no json. It just shows that there is some overhead across various layers, so its testimony is very limited. From a slightly more distant view you can thus argue that 50k rps vs 150k rps basically just means that the framework will most probably not be your bottleneck. Nonetheless, getting ahead in the benchmarks would help to attract people who are then pleasantly surprised how easy it is to make full-blown services with vibe. The libasync problem seems to be because of TCP_NODELAY not being deactivated for local connections. That would be the other way around: TCP_NODELAY is not enabled on the local connection, which makes a ~20-30ms difference per request on keep-alive connections and is the bottleneck in this case. Enabling it makes the library competitive in these benchmarks.
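The fix described - making sure Nagle's algorithm is disabled on keep-alive connections - looks like this in plain std.socket (shown only for illustration; libasync sets the option through its own API):

```d
import std.socket;

// Disabling Nagle's algorithm: without TCP_NODELAY, small writes on a
// keep-alive connection can sit waiting on delayed ACKs, which matches
// the ~20-30ms per request described above.
void enableNoDelay(Socket conn)
{
    conn.setOption(SocketOptionLevel.TCP, SocketOption.TCPNODELAY, true);
}
```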
Re: vibe.d benchmarks
On Thursday, 31 December 2015 at 13:29:49 UTC, Daniel Kozak wrote: On Thursday, 31 December 2015 at 12:09:30 UTC, Etienne Cimon wrote: That would be the other way around: TCP_NODELAY is not enabled on the local connection, which makes a ~20-30ms difference per request on keep-alive connections and is the bottleneck in this case. Enabling it makes the library competitive in these benchmarks. When I use HTTPServerOption.distribute with libevent I get better performance, but with libasync it drops from 2 req/s to 80 req/s. So maybe it's some other performance problem. I launch libasync programs as multiple processes, a bit like postgresql. The TCP listening is done with REUSEADDR, so the kernel can distribute it and it scales linearly without any fear of contention on the GC. My globals go in redis or databases.
Re: LDC with Profile-Guided Optimization (PGO)
On Tuesday, 22 December 2015 at 14:49:51 UTC, Johan Engelen wrote: On Tuesday, 15 December 2015 at 23:05:38 UTC, Johan Engelen wrote: Hi all, I have been working on adding profile-guided optimization (PGO) to LDC [1][2][3]. At this point, I'd like to hear your input and hope you can help with testing! Unfortunately, to try it out, you will need to build LDC with LLVM 3.7 yourself. PGO should work on OS X, Linux, and Windows. Would it help if binaries were available? Or is general interest low? -Johan Sorry, I don't read the forums often. This is definitely going to be a game changer for me: I need PGO to help with Botan performance issues, and I'm going to be developing an embedded server on Intel Edison with vibe.d/botan/http2 soon, so this compiler could make quite the difference. I'll be testing it once I get my prototype!
Re: DMD is slow for matrix maths?
On Monday, 26 October 2015 at 20:30:51 UTC, rsw0x wrote: On Monday, 26 October 2015 at 11:37:17 UTC, Etienne Cimon wrote: On Monday, 26 October 2015 at 04:48:09 UTC, H. S. Teoh wrote: On Mon, Oct 26, 2015 at 02:37:16AM +, Etienne Cimon via Digitalmars-d wrote: If you must use DMD, I recommend filing an enhancement request and bothering Walter about it. T I'd really like the performance benefits to be available to DMD users as well. I think I'll have to write it all with inline assembler just to be sure... dmd will never reach gdc/ldc performance; gcc and LLVM have entire teams of people that actively contribute to their compilers. LDC couldn't inline it either. My only options at this point are to write the assembly or link to a C library.
Re: DMD is slow for matrix maths?
On Tuesday, 27 October 2015 at 18:18:36 UTC, Etienne Cimon wrote: On Monday, 26 October 2015 at 20:30:51 UTC, rsw0x wrote: On Monday, 26 October 2015 at 11:37:17 UTC, Etienne Cimon wrote: On Monday, 26 October 2015 at 04:48:09 UTC, H. S. Teoh wrote: [...] I'd really like the performance benefits to be available to DMD users as well. I think I'll have to write it all with inline assembler just to be sure... dmd will never reach gdc/ldc performance; gcc and LLVM have entire teams of people that actively contribute to their compilers. LDC couldn't inline it either. My only options at this point are to write the assembly or link to a C library. Btw, DMD and LDC had similar performance.
Re: DMD is slow for matrix maths?
On Monday, 26 October 2015 at 04:48:09 UTC, H. S. Teoh wrote: On Mon, Oct 26, 2015 at 02:37:16AM +, Etienne Cimon via Digitalmars-d wrote: If you must use DMD, I recommend filing an enhancement request and bothering Walter about it. T I'd really like the performance benefits to be available to DMD users as well. I think I'll have to write it all with inline assembler just to be sure...
DMD is slow for matrix maths?
I've been playing around with perf and my web server and found that the bottleneck is by far the math module of Botan: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d I'm probably a bit naive, but I was hoping for some inlining to happen. I see LOTS of CPU time spent on "pop" instructions returning from a simple multiply function, and pragma(inline, true) was refused on all of these. So, should I wait for inlining to improve? Should I import another library? Should I rewrite all the maths in assembly manually for each processor? Should I write another library that must be compiled with LDC in release mode for maths? I think the best option would be an inline feature in DMD that works, but I'm wondering what the current stance on the subject is.
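For reference, the refused form looks like this (the function here is a toy stand-in for the mp_core routines, not the actual Botan code):

```d
// pragma(inline, true) demands inlining and makes the compiler report
// when it cannot comply - which is the refusal described above.
pragma(inline, true)
ulong mulAdd(ulong a, ulong b, ulong carry)
{
    return a * b + carry;
}
```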
Re: Invalid assembler comparison
On Friday, 23 October 2015 at 15:17:43 UTC, Etienne Cimon wrote: Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during the cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP command, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes? I found out that jnl was performing a signed comparison behind the scenes; I had to use jnb instead: http://stackoverflow.com/questions/27284895/how-to-compare-a-signed-value-and-an-unsigned-value-in-x86-assembly
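The effect can be reproduced without any assembler: the register contents were never corrupted, but `jnl` branches on the signed interpretation of the bits while `jnb` uses the unsigned one. A sketch with the values from the original post:

```d
// Values from the debugger session above:
enum ulong rdx = 7549031027420429441UL;
enum ulong r15 = 17850297365717953652UL;  // top bit set: negative when signed

// What jnb ("not below") tests - the unsigned view of the bits.
bool unsignedGE(ulong a, ulong b) { return a >= b; }

// What jnl ("not less") tests - the signed view, where r15's set top
// bit makes it compare as a negative number.
bool signedGE(ulong a, ulong b) { return cast(long) a >= cast(long) b; }
```

So the two jumps disagree whenever one operand has its top bit set, which is exactly the corruption-like behaviour observed in windbg.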
Invalid assembler comparison
Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP command, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes?
Re: Experience: Developing Cloud Foundry applications with D
On Tuesday, 6 October 2015 at 09:36:42 UTC, Marc Schütz wrote: On Tuesday, 6 October 2015 at 05:45:18 UTC, Andre wrote: vagrant@vagrant-ubuntu-trusty-64:~/projects/tests/vibed_test$ dub Target memutils 0.4.1 is up to date. Use --force to rebuild. Target libasync 0.7.5 is up to date. Use --force to rebuild. Target vibe-d 0.7.25 is up to date. Use --force to rebuild. Building vibed_test ~master configuration "debug", build type debug. Compiling using dmd... Enhanced memory security is enabled. Using Linux EPOLL for events Linking... Running ./bin/app Listening for HTTP requests on :::8080 Listening for HTTP requests on 0.0.0.0:8080 E: Could not mlock 65536 bytes Does it keep running? AFAIK, the last line is just a warning from the botan library, which attempts to allocate non-swappable memory for holding secret keys etc. The error is with mlock: the ulimit for locked memory is too low for non-root user accounts, so it falls back to simply zeroing swappable memory.
Re: Experience: Developing Cloud Foundry applications with D
On Monday, 5 October 2015 at 06:24:44 UTC, Andrei Alexandrescu wrote: On 10/5/15 1:34 AM, Rikki Cattermole wrote: Vibe.d has a provider called libasync. Libasync is fully implemented in D. You probably should have tried that at least. Although I still would recommend trying it ;) It's a lot better than what we have in Phobos. Cue choir asking for porting of libasync to phobos. I've first asked this a couple of years ago. -- Andrei To be fair, libasync was released only last October :) I will work on adding libasync to Phobos once I'm finished adding a few more features to my main project (3-4 months). I will need to strip out memutils - will you have std.allocator ready? Anyone is free to pick up this project and move libasync to phobos if they're ready before me. I've gotten this all-D web framework (vibe.d, botan-D TLS, D libhttp2, etc.) into production on my end and it works great!
Re: dmd codegen improvements
On Tuesday, 18 August 2015 at 10:45:49 UTC, Walter Bright wrote: So if you're comparing code generated by dmd/gdc/ldc, and notice something that dmd could do better at (1, 2 or 3), please let me know. Often this sort of thing is low hanging fruit that is fairly easily inserted into the back end. I think someone mentioned how other compilers unroll loops at more than 2 levels. Other than that, there was a recent Java vs D thread which showed Java orders of magnitude faster on vtable calls. So I think the most amazing feature would be sample-based profiling: compile with samples and select which functions to inline, or do some magic around vtable pointers like what Java is doing. Finally, I'm going to write this down here even though I haven't had time to look more into it: I've never been able to compile Botan with optimizations on DMD64 Win64 VS2013 (https://github.com/etcimon/botan). It's really strange having a crypto library that you can't optimize; building with -O -g also gives me a ccog.c ICE error. I think it might be something about `asm pure` blocks that use some locals - does that eliminate the function call parameters?
Re: dmd codegen improvements
On Tuesday, 18 August 2015 at 12:32:17 UTC, Etienne Cimon wrote: a crypto library that you can't optimize, building -O -g also gives me a ccog.c ICE error. I think it might be something about `asm pure` that uses some locals, does that eliminate the function call parameters? Sorry, that was cgcod.c: Internal error: backend\cgcod.c 2311 FAIL .dub\build\__test__full__-unittest-windows-x86_64-dmd_2068-8073079C502FEEB927744150233D4046\ __test__full__ executable I'll try and file a bugzilla about this. I think stability should be the first concern.
Re: D fund
On Sunday, 9 August 2015 at 09:15:16 UTC, ref2401 wrote: Does the fund exist? Are there sponsors? How can one donate some money to D? Ask a D developer you appreciate (in private) to give you his paypal/email, and pay him directly like he's a musician on the side of the road. He will be motivated by this simple gesture of receiving a few dollars, his happiness is going to reflect through his open source work, and the D community will benefit as a whole. It would probably help if D developers put a donations paypal link on their README.md, but I think most people have such a grim outlook on making money from open source work that they don't even take the time to put that link on there.
Re: Why Java (server VM) is faster than D?
On Monday, 3 August 2015 at 17:33:30 UTC, aki wrote: On Monday, 3 August 2015 at 16:47:58 UTC, John Colvin wrote: changing two lines: final class SubFoo : Foo { int test(F)(F obj, int repeat) { I tried it. DMD is no change, while GDC gets an acceptable score. D(DMD 2.067.1): 2.445 D(GDC 4.9.2/2.066): 0.928 Now I got a hint how to improve the code by hand. Thanks, John. But the original Java code that I'm porting is about 10,000 lines of code. And the performance is about 3 times different. Yes! Java is 3 times faster than D in my app. I hope the future DMD/GDC compiler will do a similar optimization automatically, not by hand. Aki. LLVM might be able to achieve Java's optimization for your use case using profile-guided optimization. In principle, it's hard to choose which functions to inline without the function call counts, but LLVM has a back-end with sampling support. http://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization Whether or not this is or will be available soon for D in LDC is a different matter.
Re: DMD on WIndows 10
On Saturday, 1 August 2015 at 07:56:34 UTC, John Chapman wrote: On Friday, 31 July 2015 at 22:02:13 UTC, Paul D Anderson wrote: I'm waiting to upgrade from Windows 7 to Windows 10 to avoid the inevitable just-released bugs, but does anyone have any info about D on Windows 10? Has anyone tried it? I'm on Windows 10, and my DMD-built programs run just fine. Did you get a sort of freeze on Win10 when trying to open some executables downloaded from the internet? I was a bit annoyed when re-downloading all my apps; the installer just appeared out of nowhere 5 minutes later.
Re: Arrays and struct assignment, pt. 2
On Sunday, 2 August 2015 at 02:49:03 UTC, Jonathan M Davis wrote: On Sunday, 2 August 2015 at 01:50:50 UTC, David Nadlinger wrote: Again, am I missing something obvious here? I can't quite believe that struct lifetime would have been quite as broken for so long. I suspect that what it comes down to is that opAssign doesn't get used all that frequently. Most structs simply don't need it, so code which would hit the bug probably isn't all that common. Obviously, such code exists, but it requires using both opAssign and then putting those structs in arrays - and then catching the resulting bug (which you would hope would happen, but if the difference is subtle enough, it wouldn't necessarily be caught). And if structs with opAssign normally also define a postblit, then it's that much less likely that the problem would be hit. - Jonathan M Davis I couldn't get reference counted types to work as struct members, for some hard-to-track reason, and am actively avoiding them right now as a result. Maybe we've found a cause here? There might be a lot of people like me that gave up trying to track it down, and are simply avoiding error-prone uses of structs.
Re: D for Android
On Thursday, 30 July 2015 at 19:38:12 UTC, Joakim wrote: On Monday, 25 May 2015 at 20:08:48 UTC, Joakim wrote: On Monday, 18 May 2015 at 15:47:07 UTC, Joakim wrote: Sure, have fun with your new devices. :) Hopefully, I'll get Android/ARM working before then, but I don't and won't have any AArch64 devices to test. Not that it matters, as 64-bit ARM has even less share than x86 right now. Earlier this week, I stumbled across a way to get TLS working with ldc for Android/ARM, similar to the approach used for Android/x86 so far. Exception-handling on ARM for ldc is currently unfinished (https://github.com/ldc-developers/ldc/issues/489), so if I disable a handful of tests related to that, I get 36 of 42 druntime modules' unit tests and around 31 of 70 phobos modules' unit tests to pass. All tests were run from the command line on my Android tablet. It appears there are issues related to unicode and the GC causing many of the remaining failures. Some good news, I've made progress on the port to Android/ARM, using ldc's 2.067 branch. Currently, all 46 modules in druntime and 85 of 88 modules in phobos pass their tests (I had to comment out a few tests across four modules) when run on the command-line. There is a GC issue that causes 2-3 other modules to hang only when the tests are run as part of an Android app/apk, ie a D shared library that's invoked by the Java runtime. I've compiled an Android/ARM app that will run the remaining majority of tests on Android 5 Lollipop or newer, which you can download and try out on your Android 5 devices: https://github.com/joakim-noah/android/releases/tag/apk All tests run on my Android 5.1 device, while the last two modules tested by this app hang on an Android 5.0 device I tested. All patches used are linked from the above release. Thanks, I didn't remember you were the one working on this. I've been following this and I'm just as eager to start testing my libraries with it. 
I think Android could also use a cross-platform web plugin framework. I've started to refactor the idea, and just being able to enhance a website with native code on any platform would be great; it would really make up for being forced into doing everything in javascript when writing the UI in HTML5/CSS right now.
Re: D Web Services Application Potential?
On Wednesday, 29 July 2015 at 11:06:03 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 29 July 2015 at 10:39:54 UTC, yawniek wrote: sorry typo. i meant we now can have stateful apis. Ok, then I get it. ;) and i disagree on the limited usefulness. do you have a REST api in native apps? i don't see much reason why we should not develop web applications the way we develop native apps. The goal should be to keep the server side simple, robust, transactional and generic. Then push all the fickle special-casing to the client side. Why do work on the server when you can do almost everything on the client, caching data in Web Storage/IndexedDB? There's a really minimal amount of code on web servers nowadays, with javascript frameworks and databases doing all the work. I actually use the small size of a vibe.d application (2mb) to my advantage to produce a plugin that will overload certain requests on the client's computer (via a windows service or launchd daemon and a reverse proxy). This allows much more extensive use of local resources, which is a really untapped way of developing web applications at the moment; it really lets your imagination fly.
Re: D Web Services Application Potential?
On Wednesday, 29 July 2015 at 00:12:21 UTC, Brandon Ragland wrote: For actual web applications, and front-end development currently done in your more traditional languages, D could be used in a style similar to Java's JSP, JSTL, and EL. Just without the notion of scripts in the pages themselves, as this would mean writing an on-the-fly interpreter, or compiling whole pages, which surely isn't an option for a compiled, performant language if we want it to be readily adopted. Apologies if I jumped around a lot, and misspelled things. It's more difficult than I thought, typing from my phone. Most developers nowadays are having a lot of success building web apps with an AngularJS MVC Vibe.d, rather than rendering the page entirely from the back-end. Heck, they can even build android or ios native apps with this architecture (see the Ionic framework). So I think this makes more sense than rendering pages in the back-end, even if most legacy web stuff did that.
Re: D Web Services Application Potential?
On Wednesday, 29 July 2015 at 01:23:54 UTC, Etienne Cimon wrote: Most developers nowadays are having a lot of success building web apps with an AngularJS MVC Vibe.d, rather than rendering Sorry, I meant with an AngularJS MVC Web services
Re: std.data.json formal review
On Tuesday, 28 July 2015 at 18:45:51 UTC, Sönke Ludwig wrote: Am 28.07.2015 um 17:19 schrieb Etienne Cimon: On Tuesday, 28 July 2015 at 14:07:19 UTC, Atila Neves wrote: Start of the two week process, folks. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ Atila This is cool: https://github.com/s-ludwig/std_data_json/blob/aac6d846d596750623fd5c546343f4f9d19447fa/source/stdx/data/json/value.d#L183 I was getting tired of programmatically checking for null, then checking for object type, before moving along in the object and doing the same recursively. Not quite as intuitive as the optional chaining ?. operator in swift but it gets pretty close https://blog.sabintsev.com/optionals-in-swift-c94fd231e7a4#5622 An idea might be to support something like this: json_value.opt.foo.bar[2].baz or opt(json_value).foo.bar[2].baz opt (name is debatable) would return a wrapper struct around the JSONValue that supports opDispatch/opIndex and propagates a missing field to the top gracefully. It could also keep track of the complete path to give a nice error message when a non-existent value is dereferenced. I like it quite well. No, actually, a lot. Thinking about it some more... this could end up being the most convenient feature ever known to mankind and would likely push it towards a new age of grand discoveries, infinite fusion power and space colonization. Let's do it.
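A rough sketch of the `opt` wrapper being proposed, written against std.json rather than stdx.data.json for self-containment (all names here are illustrative, and this minimal version handles only object field access, not `opIndex` or path tracking):

```d
import std.json;

// Illustrative sketch: a wrapper that propagates "missing" instead of
// throwing, so a whole chain can be checked once at the end.
struct Opt
{
    private const(JSONValue)* value;  // becomes null once any link is missing

    Opt opDispatch(string name)() const
    {
        if (value is null || value.type != JSONType.object)
            return Opt(null);
        return Opt(name in value.object);  // null pointer if the field is absent
    }

    bool exists() const { return value !is null; }
}

Opt opt(ref const JSONValue v) { return Opt(&v); }
```

Usage would look like `opt(json_value).foo.bar.exists`, which is false if any link in the chain is missing, without ever throwing.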
Re: std.data.json formal review
On Tuesday, 28 July 2015 at 15:07:46 UTC, Rikki Cattermole wrote: On 29/07/2015 2:07 a.m., Atila Neves wrote: Start of the two week process, folks. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ Atila Right now, my view is no. Unless there is some sort of proof that it will work with allocators. I have used the code from vibe.d days, so it's not an issue of how well it works, nor nit-picking. Just: can I pass it an allocator (optionally) and have it use that for all memory usage? After all, I really would rather be able to deallocate all memory allocated during a request than, you know, rely on the GC. I totally agree with that, but shouldn't it be consistent across Phobos? I don't think it's possible to make an interface for custom allocators right now, because that question simply hasn't been ironed out along with std.allocator. So anything related to allocators belongs in another thread imo, and the review process here should be about the actual json interface.
Re: std.data.json formal review
On Tuesday, 28 July 2015 at 14:07:19 UTC, Atila Neves wrote: Start of the two week process, folks. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ Atila This is cool: https://github.com/s-ludwig/std_data_json/blob/aac6d846d596750623fd5c546343f4f9d19447fa/source/stdx/data/json/value.d#L183 I was getting tired of programmatically checking for null, then checking for object type, before moving along in the object and doing the same recursively. Not quite as intuitive as the optional chaining ?. operator in swift but it gets pretty close https://blog.sabintsev.com/optionals-in-swift-c94fd231e7a4#5622
Re: std.data.json formal review
On Tuesday, 28 July 2015 at 15:55:04 UTC, Brad Anderson wrote: On Tuesday, 28 July 2015 at 15:07:46 UTC, Rikki Cattermole wrote: On 29/07/2015 2:07 a.m., Atila Neves wrote: Start of the two week process, folks. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ Atila Right now, my view is no. Just a reminder that this is the review thread, not the vote thread (in case anyone reading got confused). Unless there is some sort of proof that it will work with allocators. I have used the code from vibe.d days, so it's not an issue of how well it works, nor nit-picking. Just: can I pass it an allocator (optionally) and have it use that for all memory usage? After all, I really would rather be able to deallocate all memory allocated during a request than, you know, rely on the GC. That's a good point. This is the perfect opportunity to hammer out how allocators are going to be integrated into other parts of Phobos. From what I see of std.allocator, there's no Allocator interface? I think this would require changing the type to `struct JSONValue(Allocator)`, unless we see an actual interface implemented in Phobos.
Re: D Web Services Application Potential?
On Sunday, 26 July 2015 at 03:04:21 UTC, Brandon Ragland wrote: On Sunday, 26 July 2015 at 02:53:12 UTC, Etienne Cimon wrote: On 2015-07-25 22:35, Brandon Ragland wrote: On Sunday, 26 July 2015 at 00:46:58 UTC, Etienne Cimon wrote: [...] In relation to DDB: Have you seen: https://github.com/buggins/ddbc It's most similar to the JDBC driver in Java. Currently supports MySQL, PostgreSQL and SQLite. That might be a good starting point to expand the SQL driver support for a web framework. I dug around some of your repos; too early to comment, but I'll sift through more of it as time allows and see if I can't offer anything towards your current goals in the near future. I fully agree that D would be a great fit for web development. Thanks for the reply. Yes, the goal is to avoid libpq. A typical vibe.d TCP connection is based on what you know as green threads, called Tasks/Fibers in D. It means you have to avoid any library that uses thread-blocking I/O, because you're using 1 thread to handle all requests. That would make sense then. Was unaware vibe.d was using green threading. The JVM dropped green threads circa 1.2, a long time ago. I suppose asynchronous I/O was never implemented due to its complexity, to avoid these blocking issues with fibers/tasks? There's a way to avoid the blocking by spawning more threads (through worker tasks), but it's so much more efficient to use native protocol implementations. After all, this is how D and vibe.d can get the most req/s compared to the other native frameworks. Somebody took the time to write a very elaborate shootout and it shows a pretty accurate picture of it: https://atilanevesoncode.wordpress.com/2013/12/05/go-vs-d-vs-erlang-vs-c-in-real-life-mqtt-broker-implementation-shootout/
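The green-thread primitive being described is exposed directly by druntime's `core.thread.Fiber`, which is what vibe.d's tasks build on; a minimal illustration:

```d
import core.thread : Fiber;

// A fiber suspends itself mid-function and is resumed explicitly by its
// caller. vibe.d parks a task's fiber like this while I/O is pending and
// resumes it from the event loop, so one thread can serve many
// connections - as long as nothing inside blocks the whole thread.
int runFiberDemo()
{
    int step = 0;

    auto fib = new Fiber({
        step = 1;
        Fiber.yield();  // cooperative suspend, like a task waiting on I/O
        step = 2;
    });

    fib.call();  // runs until the yield
    assert(step == 1);
    fib.call();  // resumes after the yield
    return step;
}
```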
Re: D Web Services Application Potential?
On 2015-07-25 18:47, Brandon Ragland wrote: Hi All, Not entirely certain if there is a decent D web application server implementation as of yet, but if there is a project going on, I'd love to have a gander. On the off-chance there isn't one, who would be interested in going at it - call it a 'group' project? I've been yearning for a D web app server for a while, as most of my day-to-day work is done on Java EE containers (think Glassfish, Weblogic, etc., Java Beans, lalala) and the insane system usage has bothered me from day one. There's Wt for C++, although I don't see much coming from that, though the concept is grand. Rust has a few up-and-coming web server frameworks as well. D could really excel here. -Thoughts? Am I crazy (probably)? Not crazy. I've been working towards exactly that as hard as I could (see https://github.com/etcimon/), seeing how D would be the best language to write any web backend with. I've since worked on writing all the architecture in D: a new TLS library (Botan), along with libhttp2 for HTTP/2 support and an async event loop (TCP/UDP/FileSystem/FileWatcher/DNS/Timers) library. I've written all the glue code to have it in the vibe.d framework and tested it as thoroughly as I could. I now consider it an achievement and use it in my web applications. I'm currently concentrating on improving an async postgresql driver called DDB and adding transactions, Json, TLS, Listen/Notify, etc. I think my next priority would be to rewrite the back-end of http://www.cosmocms.org with D and the vibe.web.web Web Interface, plus Redis+Postgresql (sessions in redis). It's about 4k-5k LOC. I think a good and necessary library would be cross-platform async DNS. I've been looking at this one in particular: https://github.com/miekg/dns
Re: D Web Services Application Potential?
On 2015-07-25 22:35, Brandon Ragland wrote: On Sunday, 26 July 2015 at 00:46:58 UTC, Etienne Cimon wrote: On 2015-07-25 18:47, Brandon Ragland wrote: Hi All, Not entirely certain if there is a decent D web applications server implementation as of yet, but if there is a project going on, I'd love to have a gander. On the off-chance there isn't one, who would be interested in going at it, call it, a 'group' project. I've been yearning for a D web app server for a while, as most of my day to day work is done on Java EE containers (think Glassfish, Weblogic, etc. Java Beans, lalala) and the insane system usage has bothered me from day one. There's Wt for C++, although I don't see much coming from that, though the concept is grand. Rust has a few up and coming web server frameworks as well. D could really excel here. -Thoughts? Am I crazy (probably)? Not crazy. I've been working towards exactly that as hard I could (see on https://github.com/etcimon/), seeing how D would be the best language to write any web backend with. I've since worked on writing all the architecture in D: a new TLS library Botan, along with libhttp2 for HTTP/2 support and an async event loop (TCP/UDP/FileSystem/FileWatcher/DNS/Timers) library. I've written all the glue code to have it in the vibe.d framework and tested it as thoroughly as I could. I now consider it an achievement and use it in my web applications. I'm currently concentrating on improving an async postgresql driver called DDB and adding transactions, Json, TLS, Listen/Notify, etc. I think my next priority would be to rewrite the back-end of http://www.cosmocms.org with D and the vibe.web.web Web Interface, and Redis+Postgresql (sessions in redis). It's about 4k-5k LOC I think a good and necessary library would be for cross-platform, async DNS. I've been looking at this one in particular: https://github.com/miekg/dns In relation to DDB: Have you seen: https://github.com/buggins/ddbc It's most similar to the JDBC driver in Java. 
Currently supports MySQL, PostgreSQL and SQLite. That might be a good starting point to expand the SQL driver support for a web framework. I dug around some of your repos, too early to comment but I'll sift through more of it as time allows, see if I can't offer anything towards your current goals in the near future. I fully agree that D would be a great fit for web development. Thanks for the reply. Yes, the goal is to avoid libpq. A typical Vibe.d TCP Connection is based on what you know as Green Threads, it's called Tasks/Fibers in D. It means you have to avoid any library that uses thread-blocking I/O because you're using 1 thread to handle all requests.
Re: Dangular - D Rest server + Angular frontend
On Sunday, 19 July 2015 at 19:54:31 UTC, Jarl André Hübenthal wrote: Hi I have created a personal project that aims to learn myself more about D/vibe.d and to create a simple and easy to grasp example on Mongo - Vibe - Angular. Nice! I'm also working on a project like this, using some paid angularjs admin template from themeforest, although I'm powering it with Vibe.d / Redis and PostgreSQL 9.4 with its new json type. controllers, but it works as a bootstrap example. I am thinking to create another view that uses ReactJS because its much much more better than Angular. ReactJS has been very promising and I hear a lot of hype around it. However, I believe Angular lived through its hype and is now more mature in plenty of areas; for example, its Ionic framework for cross-mobile apps is reaching its golden age with seemingly fluid performance on every device! With vibe.d being a cross-platform framework, you'd even be able to build a Web Application that communicates with a client-side OS API, effectively closing the gap between web dev and software dev. So, there is two structs, but I really only want to have one. Should I use classes for this? Inheritance? Vibe.d is famous for its compile-time evaluation, understanding structures as if through reflection but producing the most optimized machine code possible. You won't be dealing with interfaces in this case; you should look at the UDA API instead: http://vibed.org/api/vibe.data.serialization/. For example, if your field might not always be in the JSON, you can mark it @optional.
struct PersonDoc { @optional BsonObjectID _id; ulong id; string firstName; string lastName; } You can also compile-time override the default serialization/deserialization instructions for a struct by defining the function signatures specified here: http://vibed.org/api/vibe.data.json/serializeToJson or the example here: https://github.com/rejectedsoftware/vibe.d/blob/master/examples/serialization/source/app.d This being said, your questions are most likely to be answered if you ask at http://forum.rejectedsoftware.com
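The @optional lookup works through compile-time reflection over UDAs. Here is a minimal, self-contained sketch of the mechanism; the `optional` enum and `isOptionalField` helper are stand-ins I made up for illustration, not vibe.d's actual internals, and `_id` is typed `string` here to avoid the vibe.d `BsonObjectID` dependency:

```d
import std.traits : hasUDA;

enum optional; // stand-in for vibe.data.serialization's @optional attribute

struct PersonDoc {
    @optional string _id; // may legitimately be absent from the incoming JSON
    ulong id;
    string firstName;
    string lastName;
}

// hypothetical helper: may this field be missing during deserialization?
enum isOptionalField(T, string member) =
    hasUDA!(__traits(getMember, T, member), optional);

static assert(isOptionalField!(PersonDoc, "_id"));
static assert(!isOptionalField!(PersonDoc, "id"));
```

A deserializer can loop over the struct's members at compile time and skip the "missing field" error for any member where this trait is true, which is the general shape of what UDA-driven serialization does.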
Re: Where will D sit in the web service space?
On Saturday, 18 July 2015 at 11:19:45 UTC, Ola Fosheim Grøstad wrote: StackOverflow has become the de-facto documentation resource for software engineers. It saves me insane amounts of time, many other programmers say the same thing. Google has been known to shut down its own support forums in order to get higher activity on StackOverflow. StackOverflow is an excellent resource; I've had trouble finding answers on it for D, though, because the D.learn forums contain all the Q&A. I wish we could mirror those on Stack Overflow or even channel it there instead. We're stuck in the 90's using NNTP+forums. I see basically 4 reasons to use languages like C++/D/Rust: 1. Low level hardware/OS access 2. Throughput 3. Lowered memory usage 4. Detailed control over execution patterns. We're done with desktop UI. The problem domain has shifted with the SPA (single-page application) revolution on the web and angularjs. Nothing is as elegant and completely featured as D in the natively compiled world. I say natively, because that's the only way to resolve the interpreter war, seeing your interpreter banned just like Firefox did to Flash. Other languages have too much legacy to carry or have started with the wrong language design, and will eventually be dropped just like Lisp and Perl. WebRTC, Bitcoin and torrenting have only scratched the surface for future web applications. I can guarantee you that there will be an era where desktop applications are p2p downloaded, installed, displayed in a browser, and all resources shared over p2p. Browsers will never be appropriate because they will always have to slow down the applications and filter everything for security. With time, over the years, we will see these primitives being developed, and the world will turn to the language that allows them to accomplish this, because it will be the only solution to web neutrality.
Re: Where will D sit in the web service space?
On Saturday, 18 July 2015 at 15:11:30 UTC, Ola Fosheim Grøstad wrote: However, I currently don't see much advantage in having the same language on client and server, so I'll probably stick to TypeScript/Dart, Angular2/Polymer in the near future because of debugging and tooling. I think these are very good choices. I prefer to really invest in learning and developing on D simply because the resulting code is more easily redistributable, because you get more bang for the buck when optimizing it, because the developers are generally better coming from the C++ world and being hobbyists, etc. And also, D is more promising. A lot of things can happen to deprecate Dart or TypeScript development completely. Nobody/nothing's ever going to deprecate D; if anything, you'll only see the smarter devs being less afraid to pick it up and bring it further. over p2p. Browsers will never be appropriate because it will always have to slow down the applications and filter everything for security. IMHO: In the long term, time-consuming tasks might be offloaded to some simplified replacement for OpenCL. I was talking more about being able to operate a website or web application that has been censored or sabotaged. If something happens in the coming years to the free web as we know it, people will have to turn to custom computer programs and p2p to help open up their web services. I'm not talking about the NSA censoring stuff. I'm talking about companies being anti-competitive. This seems to be becoming more and more likely as they (GoDaddy, Google, Mozilla, Facebook, Amazon, Apple, Microsoft, Oracle, ISPs etc) become greedy and start to play rough with each other and newcomers, using security as an excuse. It has become way too easy to flip the switch on a website or technology (e.g. the Flash Player incidents over the years). The only solution I see is to stop relying on them so much!
Re: Where will D sit in the web service space?
On Sunday, 12 July 2015 at 21:13:35 UTC, Ola Fosheim Grøstad wrote: On Sunday, 12 July 2015 at 20:36:26 UTC, Etienne Cimon wrote: backend being a natively compiled service in the computer. With a 2-megabyte packed PE that is. It may not seem useful, but to me it's revolutionary. What is PE? The idea here is a portable executable (PE), in vibe.d, as a service with daemonize, that compiles packed to 2MB with TLS and a single TCP link using HTTP/2. I haven't tried Golang, but I'm sure if I did it would be a close call. The lack of template meta-programming / generics makes it much less convenient to use though, so it's a no-go for me :) C++ could also do it if a framework existed for it, but even then, the language isn't as safe/convenient. So, I guess D wins here, where a simple dub build with the right packages will dump a nice executable with everything you need for a desktop webUI application.
Re: goroutines vs vibe.d tasks
On Wednesday, 1 July 2015 at 18:09:19 UTC, Mathias Lang wrote: On Tuesday, 30 June 2015 at 15:18:36 UTC, Jack Applegame wrote: Just creating a bunch (10k) of sleeping (for 100 msecs) goroutines/tasks. Compilers go: go version go1.4.2 linux/amd64 vibe.d: DMD64 D Compiler v2.067.1 linux/amd64, vibe.d 0.7.23 Code go: http://pastebin.com/2zBnGBpt vibe.d: http://pastebin.com/JkpwSe47 go version built with go build test.go vibe.d version built with dub build --build=release test.d Results on my machine: go: 168.736462ms (overhead ~ 68ms) vibe.d: 1944ms (overhead ~ 1844ms) Why is creating vibe.d tasks so slow (more than 10 times)??? In your dub.json, can you use the following: "subConfigurations": { "vibe-d": "libasync" }, "dependencies": { "vibe-d": "~0.7.24-beta.3" }? Turns out it makes it much faster on my machine (371ms vs 1474ms). I guess it could be a good thing to investigate if we can make it the default in 0.7.25. I don't benchmark my code frequently, but that's definitely flattering :) I hope we can see a release of LDC 2.067.0 soon so that I can optimize the code further. I gave up on 2.066 a while back.
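For reference, written out as a complete dub.json (quoting restored; the version specifier is kept exactly as given in the post, so check it against current dub syntax before copying):

```json
{
    "name": "test",
    "dependencies": {
        "vibe-d": "~0.7.24-beta.3"
    },
    "subConfigurations": {
        "vibe-d": "libasync"
    }
}
```

The "name" field is an assumption on my part (dub requires one); the two keys that matter for the benchmark are the dependency version and the libasync sub-configuration.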
Re: Announcing libasync, a cross-platform D event loop
On Sunday, 28 June 2015 at 16:57:44 UTC, Suliman wrote: Also next code that I take from example after run absolutely do not do nothing: This code will register a directory watcher in the working directory, on the thread's event loop, and then use timers to create file/folder activity to trigger this directory watcher. The only thing needed to make this work is r/w access to the working directory and a running event loop. So, adding the line: g_evl.run(10.seconds); If you don't have an event loop running, your application is basically not going to receive callback events.
Re: Announcing libasync, a cross-platform D event loop
On Sunday, 28 June 2015 at 17:10:16 UTC, Etienne Cimon wrote: g_evl.run(10.seconds); Hmm, sorry that would be g_evl.loop(10.seconds) or getThreadEventLoop().loop(10.seconds). I use the vibe.d driver most often.
Re: Announcing libasync, a cross-platform D event loop
On Sunday, 28 June 2015 at 19:34:46 UTC, Suliman wrote: void main() { void dirWatcher() { auto g_watcher = new AsyncDirectoryWatcher(getThreadEventLoop()); g_watcher.run({ DWChangeInfo[1] change; DWChangeInfo[] changeRef = change.ptr[0..1]; while(g_watcher.readChanges(changeRef)){ writeln(change); } }); g_watcher.watchDir("."); getThreadEventLoop(); } dirWatcher(); //destroyAsyncThreads(); } Is this code enough to monitoring folder? If I run it it's terminating, it's seems that eventloop not starting or I placed it in wrong place? You should do getThreadEventLoop().loop(); and it will monitor the changes forever. Try making changes in the filesystem manually :)
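A cleaned-up sketch of the snippet, using only the libasync names that appear in this thread (AsyncDirectoryWatcher, getThreadEventLoop, run, readChanges, watchDir, loop). This is a hedged reconstruction, not verified against libasync's actual API, and it requires the libasync dub package, so it will not compile standalone:

```d
import libasync;               // dub dependency, not part of Phobos
import std.stdio : writeln;

void main() {
    auto evl = getThreadEventLoop();
    auto watcher = new AsyncDirectoryWatcher(evl);

    DWChangeInfo[1] change;
    watcher.run({
        DWChangeInfo[] changeRef = change.ptr[0 .. 1];
        while (watcher.readChanges(changeRef))
            writeln(change); // change.path holds the affected file's path
    });
    watcher.watchDir(".");     // note the quotes: watch the working directory

    evl.loop();                // blocks and dispatches events; without this,
                               // main returns and no callbacks ever fire
}
```

The key difference from the original snippet is the final loop() call: calling getThreadEventLoop() by itself only fetches the loop object, it doesn't run it.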
Re: Announcing libasync, a cross-platform D event loop
On Sunday, 28 June 2015 at 18:33:28 UTC, Suliman wrote: Hmm, sorry that would be g_evl.loop(10.seconds) or getThreadEventLoop().loop(10.seconds). I use the vibe.d driver most often. I changed paths to hardcoded to prevent issue with them, but next code after run only create hey folder and nothing more. No files in it: Try putting the timers outside the callbacks in the meantime. I'll test it tomorrow when I'm at the office =) -- And how to detect if some there was some manipulation inside monitoring folder? How I can recive name of file that was added to folder? There's a path in the DWFileChange change structure, you can access it with change.path. You'll have to put it in a path parser to access the filename, such as std.path
Re: Question about Walter's Memory DisAllocation pattern
On 2015-06-27 01:06, thedeemon wrote: On Saturday, 27 June 2015 at 03:10:51 UTC, Etienne Cimon wrote: What you're asking for is probably type inference. Notice the auto return type? This keyword is actually very advanced and bleeding edge in natively compiled languages. Inference of return type was in ML since 1973. Such bleeding, so edge. ;) Is there some other natively compiled language that implemented the auto keyword before? Or are you only talking about the theory?
Re: Heisenbug involving Destructors GC - Help Needed
On 2015-06-26 14:27, Maxime Chevalier-Boisvert wrote: I seem to have run into a heisenbug involving destructors and the GC. I'm kind of stuck at this point and need help tracking down the issue. I put the broken code in a branch called heisenbug on github: https://github.com/higgsjs/Higgs/tree/heisenbug The problem manifests itself on runs of `make test` (my unittests), but only some of the time. I wrote a script to run `make test` repeatedly to try and find a solution: https://github.com/higgsjs/Higgs/blob/heisenbug/source/repeatmaketest.py The problem usually manifests itself after 5 to 15 runs on my machine. I get a segmentation fault, not always in the same place. The randomness seems to stem from address space randomization. It seems the issue is caused by my freeing/reinitializing the VM during unit tests. More specifically, commenting out this line makes the problem go away: https://github.com/higgsjs/Higgs/blob/heisenbug/source/runtime/vm.d#L741 Higgs can run all of my benchmarks without ever failing, but restarting the VM during `make test` seems to be causing this problem to happen. It's not impossible that there could be another underlying issue, such as the JITted code I generate corrupting some memory location, but it would seem that if this were the case, the issue would likely show up outside of unit tests. Any help would be appreciated. This might come as a surprise to you as much as it did to me at the time, but when you have GCRoot* root; where GCRoot is a struct, if you destroy(root), you're setting your local pointer to null. You're not actually calling the destructor on the struct. Also, I would avoid throwing of any type in a destructor. https://github.com/higgsjs/Higgs/blob/0b48477120c4acce46a01b05a1d4b035aa432550/source/jit/codeblock.d#L157
Re: Question about Walter's Memory DisAllocation pattern
On 2015-06-26 22:45, Parke via Digitalmars-d wrote: Hi, I have a question about Walter's DConf keynote and the Memory DisAllocation pattern. http://dconf.org/2015/talks/bright.html The following example is from the slides of Walter's talk. auto toString(uint u) { static struct Result { this(uint u) { idx = buf.length; do { buf[--idx] = (u % 10) + '0'; u /= 10; } while (u); } @property bool empty() { return idx == buf.length; } @property char front() { return buf[idx]; } void popFront() { ++idx; } char[uint.sizeof * 3] buf; size_t idx; } return Result(u); } import std.stdio; void main() { writeln(toString(28)); } My question is: Does use of this pattern in D require that the size of the Result struct be known by the compiler at compile time? Or, perhaps more precisely: Does the caller of toString need to know the size of the struct that toString will return? In the above example, buf's length is uint.sizeof * 3. But what if buf's length was a function of u (and therefore only known at run-time), rather than a function of uint.sizeof? Thanks! -Parke What you're asking about is probably type inference. Notice the auto return type? This keyword is actually very advanced and bleeding edge in natively compiled languages. Internally, when the function is processed by the compiler, all types are evaluated before the instructions are even looked at, and the final sizes of all types are computed and put in place automatically. Afterwards, the instructions must obey consistency, i.e. the caller can't use anything but auto or ReturnType!toString to accept an auto type, because there's no way to guess the mangling/size/etc. otherwise. This also means you can choose a ridiculous name, or change the size of a stack-allocated object at any time, without breaking an interface. E.g. you could change this without breaking the API to something like: auto toString(T)(T u) if (isNumeric!T) { static struct GenericResult { this(T u) { ... char[T.sizeof * 3] buf; ... }
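As transcribed, the slide code won't compile because `(u % 10) + '0'` is an int and needs an explicit narrowing cast to char. A compilable version of the same pattern (renamed toStr here only to avoid confusion with object.toString; the rename is my choice, not the slide's):

```d
// Walter's "Memory DisAllocation" pattern: return a stack-allocated
// voldemort range instead of allocating a string on the GC heap.
auto toStr(uint u) {
    static struct Result {
        this(uint u) {
            idx = buf.length;
            do {
                buf[--idx] = cast(char)((u % 10) + '0'); // cast added
                u /= 10;
            } while (u);
        }
        @property bool empty() const { return idx == buf.length; }
        @property char front() const { return buf[idx]; }
        void popFront() { ++idx; }
        char[uint.sizeof * 3] buf; // large enough for any uint in decimal
        size_t idx;
    }
    return Result(u); // caller only ever sees it through `auto`
}

unittest {
    import std.algorithm.comparison : equal;
    assert(toStr(28).equal("28"));
    assert(toStr(0).equal("0"));
}
```

Because the caller can only name the type via auto or ReturnType!toStr, the buffer size and even the struct name can change later without breaking any caller, which is exactly the point being made in the reply.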
Re: Heisenbug involving Destructors GC - Help Needed
On 2015-06-26 23:44, Temtaime wrote: Disagree. Destroy on a pointer calls dtor of a struct. Why it should be an error ? Exactly what I assumed too. Can you imagine all of the random errors that stem from such a basic, low-level assumption? I carry a scar for every day I've spent in the debugger, and this bug has given me its load of torments. Of course, I'm the only one to blame, I didn't know: import std.stdio; void main() { struct A { ~this() { writeln("Dtor"); } } A* a = new A; destroy(a); writeln("Done"); } output: Done
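The fix is to dereference before calling destroy, so the struct overload (which does run the destructor) is selected instead of the generic overload for pointers (which only resets the pointer to null). A small check, with a static flag added by me so the destructor call is observable:

```d
struct A {
    static bool destructed; // added for observation; not in the original post
    ~this() { destructed = true; }
}

void main() {
    A* a = new A;
    destroy(a);          // pointer overload: sets `a` to null, dtor NOT called
    assert(a is null);
    assert(!A.destructed);

    A* b = new A;
    destroy(*b);         // struct overload: runs ~this and resets *b to A.init
    assert(A.destructed);
}
```

This matches the "output: Done" shown above: with destroy(a) the "Dtor" line never prints, because only the local pointer was cleared.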
Re: PHP verses C#.NET verses D.
On Tuesday, 23 June 2015 at 06:26:39 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: So now I can build a full web application/server executable in less than 2 MB packed, and it runs faster than anything out there. Etienne Do you have any performance numbers as to how fast your web application/server is, or is this based on your personal experience? Nick I don't have current performance results because I've been focused on adding features, but these results were taken on a previous version: https://atilanevesoncode.wordpress.com/2013/12/05/go-vs-d-vs-erlang-vs-c-in-real-life-mqtt-broker-implementation-shootout/
Re: PHP verses C#.NET verses D.
On Sunday, 21 June 2015 at 03:16:31 UTC, Nick B wrote: On Friday, 19 June 2015 at 11:28:30 UTC, Etienne Cimon wrote: On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: Will you explain how it is different to Vibe.d ? It has HTTP/2, a new encryption library, it uses a native TCP event library, lots of refactoring. In short, the entire thing is in D rather than linking with OpenSSL and libevent. Etienne Can you explain the benefits of writing these libraries in D, as against just linking to these libraries. Is it for faster execution, or better debugging, or some other reason ? When writing bindings, you need to write unit tests for your bindings and an interface. You're adding a layer of software with its own propensity for errors. You also don't have access to the underlying implementation in the IDE. You also can't add features to facilitate your own programs unless you control the repository. Honestly, I can't stand having layers and layers of garbage to make up for language differences. I like the cleanliness of it all and it probably ended up taking me half the time because I avoided debugging new code.
Re: Future(s) for D.
On Sunday, 21 June 2015 at 19:08:47 UTC, Jacob Carlborg wrote: On 20/06/15 16:00, Etienne wrote: Yep, looks like we already have better. I don't understand how D hasn't fully picked up in Web Dev at this point. Are they expecting an e-commerce/blogging/cms platform to go with it? My biggest reason is the lack of libraries. There's probably a lack of a Joomla/Wordpress/Magento platform in D with plugin support. Libraries would pour into D if there was a similar tool.
Re: PHP verses C#.NET verses D.
On Friday, 19 June 2015 at 11:29:58 UTC, Etienne Cimon wrote: On Friday, 19 June 2015 at 11:28:30 UTC, Etienne Cimon wrote: On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: [...] Also, the HTTP client has a cookie jar and more settings. Hmm, I think I can mention the capture debugger. I'm still developing it, but it's becoming quite complete. It's a runtime tool that shows the HTTP client/server request/response headers, form files/fields and JSON input/output for specific request paths, and it works in builds without debug info. The trace library that's used for it also maintains a custom call-stack trace that works in builds without debug info. https://htmlpreview.github.io/?https://github.com/etcimon/vibe.d/blob/master/views/capture.html
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. I would say 3 months. So it'll probably be a year considering how off my last estimates were. Etienne - Interesting back story. Will this be under a Boost licence? Will you provide a link? Even the vibe.d library was much more advanced than what I could find with an open source license that allowed static compilation at the time (a year and a half ago), so I went forward with that and worked my way through. [snip] So now I can build a full web application/server executable in less than 2 MB packed, and it runs faster than anything out there. It's standalone, works cross-platform, etc. Will you explain how it is different to Vibe.d? It has HTTP/2, a new encryption library, it uses a native TCP event library, lots of refactoring. In short, the entire thing is in D rather than linking with OpenSSL and libevent. It's MIT licensed. I have it here: https://github.com/etcimon/vibe.d The dub.json uses relative paths though while I'm developing. You're free to adjust the file and try it; we can consider it stable.
Re: PHP verses C#.NET verses D.
On Friday, 19 June 2015 at 11:28:30 UTC, Etienne Cimon wrote: On Thursday, 18 June 2015 at 05:23:25 UTC, Nick B wrote: On Thursday, 18 June 2015 at 03:44:08 UTC, Etienne Cimon wrote: [...] Etienne - Interesting back story. Will this be under a Boost licence? Will you provide a link? [...] [snip] [...] It has HTTP/2, a new encryption library, it uses a native TCP event library, lots of refactoring. In short, the entire thing is in D rather than linking with OpenSSL and libevent. Also, the HTTP client has a cookie jar and more settings.
Re: Workaround for typeid access violation
On Thursday, 18 June 2015 at 11:43:18 UTC, ketmar wrote: On Wed, 17 Jun 2015 22:35:12 +, Etienne Cimon wrote: e.g. __gshared MyObj g_obj1; Thread 1: g_obj1 = new MyObj; Thread 2: g_obj1.obj2 = new MyObj; Thread 3: write(g_obj1.obj2); -- access violation (probably) so no way to anchor the whole object tree by assigning it to __gshared root? sure, this will never make it into mainline. __gshared is a little sketchy. We can have TLS GC by default by piping `new shared` to a shared GC. It's even safer, because then we have the type system helping out. e.g. shared MyObj g_obj1; Thread 1: g_obj1 = new shared MyObj; Thread 2: g_obj1.obj2 = new shared MyObj; Thread 3: write(g_obj1.obj2); -- success!
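A compilable sketch of the type-system guarantee being described; `publish` is a made-up stand-in for any code that stores a reference where other threads (or a hypothetical shared GC) could see it:

```d
class MyObj {}

// hypothetical sink representing "escapes to the shared heap"
void publish(shared MyObj o) {}

void main() {
    publish(new shared MyObj); // OK: typed shared from the moment of allocation
    // A thread-local allocation is rejected without an explicit cast,
    // so nothing slips into the shared heap by accident:
    static assert(!__traits(compiles, publish(new MyObj)));
}
```

This is the sense in which `new shared` is safer than __gshared: __gshared bypasses the type system entirely, while shared forces every cross-thread publication to be visible in the types.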
Re: Workaround for typeid access violation
On Thursday, 18 June 2015 at 15:43:10 UTC, Wyatt wrote: On Thursday, 18 June 2015 at 15:19:19 UTC, Etienne wrote: On Thursday, 18 June 2015 at 15:09:46 UTC, Wyatt wrote: This comes to mind, along with the citations: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/local-gc.pdf -Wyatt That's exactly what we're talking about, but we do the work that was done by the globalisation policy by using the `new shared` keyword. Maybe I misunderstood. My interpretation was you were disallowing references from shared heap to local heap and Exactly, unless you cast(shared) before you copy into the global heap - the local object must be kept trackable in the local heap or space. In any case, the shared is recursive and disallows this, so it's perfectly suitable to keep a shared GC consistent. obviating the whole globalisation problem. No migrations; just explicit marking of global objects with new shared. (Though I'm curious how you handle liveness of the global heap; it seems like that depends on information from each thread's allocator?) The global heap and every thread will be scanned in stop the world setting just like the current GC. However, the idea is to use it less by making it exclusively for objects that were created with `new shared` `new immutable` `.idup` or `.sdup` (so that we can duplicate local objects into the shared GC conveniently) In all of my security library (Botan) or TCP library (libasync) or even the vibe.d implementation at https://github.com/etcimon/vibe.d I haven't encountered a single instance of an object that was moved to another thread. The shared GC will be known to be very rarely useful.
Re: Workaround for typeid access violation
On Wednesday, 17 June 2015 at 22:21:21 UTC, Laeeth Isharc wrote: On Wednesday, 17 June 2015 at 21:35:34 UTC, Etienne wrote: I am no compiler/runtime guru, but it sounds like if it were possible to make this a swappable option for D this might be very important. Is this true, and how much work is involved in making this industrial strength? To me this is 100 times more stable simply because destructors can no longer be called from any thread. I'm also going to keep this implementation for its guarantee that destructors will be called. The absence of locking and stop-the-world pauses makes it a very high performance GC. Of course, the tradeoff is the absence of support for moving objects between threads. That's really an insignificant limitation compared to the above points. I don't expect it to be merged simply because nobody is familiar enough with this type of GC to have a say, except for what a GC shouldn't guarantee in general. And I'm really tired of people telling me what I can't or shouldn't do when I'm brainstorming on this forum. Do you have any links to reading material on this type of GC? I appreciate that it may in the general case be more stable, but in the particular implementation usually there is some distance to go from something good enough for personal use to something that others can depend on without having to understand the nitty-gritty of the implementation. So that is what I was asking... No reading material. It's straightforward from my train of thought: your object will be collected unless it's reachable from the immediate global namespace, the current thread's stack, the current thread's local namespace, or objects that were allocated by the current thread's GC. People saying that this will fail in basic message-passing or multi-threading are right. If you move your object into another thread's object, even though the other thread's object is in the global namespace, you can't expect it to be tracked by the thread-local GC. e.g.
__gshared MyObj g_obj1; Thread 1: g_obj1 = new MyObj; Thread 2: g_obj1.obj2 = new MyObj; Thread 3: write(g_obj1.obj2); -- access violation (probably) That's the moving objects between threads limitation I'm talking about. You need to use manual memory management and containers if you're going to use it for this purpose.
Re: PHP verses C#.NET verses D.
On Thursday, 18 June 2015 at 02:01:33 UTC, Nick B wrote: Yes I too would be interested in more background as to your opinion, as to why it's 20 years ahead of everything else out there. Natively compiled: Moore's law predicts that the burden of advancing computing speed will migrate into software. This may be enough to increasingly rule out managed languages for developments where cost is important and more than one server will be needed. Compiling vs interpreting can make all the difference between requiring 1000 servers vs 10 servers. Template metaprogramming: This is the reason I chose to use D in the first place. The idea that I could write code that writes code, and make it statically typed and safe. C++ has this, but the errors are insane, static if is not there, CTFE is just starting to pick up, and there are no traits, only very limited compile-time reflection (i.e. static if(__traits(compiles, { some_operation(); }))). Compile times are also much slower; there are simply too many legacy features in the language that have made it suffer in the long run. Even a package manager like dub is something nobody can agree on, because the community is so divided. Overall, it would take decades for C++, the most powerful contender, to reach the current state of D in terms of compile-time capabilities. This is important because preprocessors are the only alternative, and they suck for larger projects. I won't cover again everything that the dlang site can say about the language, and I could go on about how D has the entire web stack (I haven't released it fully yet), but that would be throwing myself flowers :P
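The __traits(compiles, ...) idiom mentioned above, in runnable form; this is a generic illustration of the technique, not code from any library discussed here:

```d
// Probe at compile time whether an expression is valid for a type,
// then branch with static if -- no preprocessor involved.
enum hasLength(T) = __traits(compiles, (T t) { size_t n = t.length; });

size_t count(T)(T x) {
    static if (hasLength!T)
        return x.length; // arrays, strings, containers
    else
        return 1;        // scalar fallback
}

static assert(hasLength!(int[]));
static assert(!hasLength!int);

unittest {
    assert(count([1, 2, 3]) == 3);
    assert(count(42) == 1);
}
```

The probe, the branch, and the checks all happen during compilation, so the generated machine code for each instantiation contains only the branch that applies.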
Re: PHP verses C#.NET verses D.
On Wednesday, 17 June 2015 at 18:40:01 UTC, Laeeth Isharc wrote: Any idea how far away it might be from being something that someone could use in an enterprise environment simply, in the same kind of way that vibed is easy? I appreciate that making it broadly usable may not be what interests you, and may be a project for someone else. I would say 3 months. So it'll probably be a year considering how off my last estimates were. Of course, I never factored in any help (and haven't gotten any, really). Any chance you could write a bit more on this? Your personal story and why you believe this. We could post on the Wiki as part of a series of narratives on people who have found D helpful. Stories are a powerful complement to just ticking off features. I started off as a C, C#, JavaScript and PHP programmer with 6 years of experience, building mostly e-commerce and information systems on a contractual basis. One day, I decided I'd had enough and wanted to invest my time in a faster web engine, because I was tired of seeing all those slow and bloated libraries that can barely serve 10 requests per second when they're put to any practical use. I decided to go for C++ and learn everything I could about writing an integrated solution. I found interesting libraries, but every time I wanted to add a feature, I'd have to import another library, and it again became bloated, this time in terms of code base. Nothing seemed to work together (Qt and Boost?). The D programming language came up frequently in search results when looking for C++-related concepts, and I saw everything I needed in Phobos. The language features seemed similar at first, but I quickly realized how much more convenient the language was as a whole, and it felt much like a managed language overall. Even the vibe.d library was much more advanced than what I could find with an open source license that allowed static compilation at the time (a year and a half ago), so I went forward with that and worked my way through.
The most interesting part is that every time I had a problem, I never had to google the error from the compiler, because it was quite straightforward. I did have to debug the memory a lot, but all the tools from C/C++ work for that. So now I can build a full web application/server executable in less than 2 MB packed, and it runs faster than anything out there. It's standalone, works cross-platform, etc. I really have to credit the language for the speed at which this very large project was finished. I completed the STL-equivalent memory library, the TLS/crypto security library, the low-level async TCP library, the HTTP/2 implementation and the integration of everything in a web application framework within about 10 months. I spent about 6 months learning the language along the way, coming from a background more familiar with managed languages. I can't say D isn't meant for large projects; it's a really fucking solid language that was built for the future.
Re: Workaround for typeid access violation
On Wednesday, 17 June 2015 at 14:19:33 UTC, ketmar wrote: this is an implementation detail. nothing guarantees that it will stay like that even for one more commit. relying on this means that your code is bugged and can break at any time, without warning. more than that, if user pulled in another GC implementation, completely adhering to specs, your code simply goes nuts, forcing the poor user to guess what's wrong. so as other people already wrote, simply don't do it. there will be no way to do what you want here until GC requirements in specs will be changed (and this is unlikely to happen).

If we can freeze the druntime interface anytime soon (while adding said support to the interface), the library can be subject to micro-optimizations by deferring the linking process and allowing it to be compiled and linked through dub. This allows applications to use whatever GC is best for them: parallel, precise, thread-local, you name it. This being said, I know my use of the core.gc implementation carries no forward guarantees, so I might end up dragging it along for a while and merging only certain parts of druntime in the future.
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 16:16:09 UTC, berlin wrote: well, read something to your world situation. take it from an old kufr that does not want to live under islamic law: http://www.jihadwatch.org/ http://www.thereligionofpeace.com/ http://www.barenakedislam.com/ http://schnellmann.org/Understanding_Muhammad_Contents.html you might also want to take a closer look at taqiyya - that is why nobody can trust a muslim.

I know all about that. It's a little hard to assimilate people, though. I'm from a French city of 700,000 (Quebec) surrounded by 300 million English speakers for over 300 years, and we're not English (yet).
Re: Workaround for typeid access violation
On Tuesday, 16 June 2015 at 22:21:28 UTC, Steven Schveighoffer wrote: If you want to manage memory from a GC'd object, you can use C malloc and C free. Or, you can write your own GC that solves this problem that Sun/Oracle couldn't :) In all seriousness, you could potentially have exceptions to this rule, but it would take a lot of cajoling of druntime and some funky @UDA magic. -Steve

Yeah, I'm going to make a personal branch and develop a thread-local GC, and ensure memory/vtbl is left intact during collection/finalization. It's a good experiment and I believe it will be a way to solve the issue in the near future. Of course, no support for moving shared objects in a foreign thread anymore... like that was any use in the first place :P
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 02:16:37 UTC, Andrei Alexandrescu wrote: Hello, Martin has not replied to any communication for more than two weeks now, and I'm starting to fear something might have happened to him. If anyone in Berlin could get in touch with him and let me/us know he's alright, I'd appreciate it.

His activity feed is at https://github.com/MartinNowak?tab=activity and I see a message from 5 days ago there. I think he's fine, but probably got tangled up in something else at the moment.
Re: Workaround for typeid access violation
On Tuesday, 16 June 2015 at 22:31:38 UTC, Etienne Cimon wrote: Yeah, I'm going to make a personal branch and develop a thread-local GC, and ensure memory/vtbl is left intact during collection/finalization. It's a good experiment and I believe it will be a way to solve the issue in the near future. Of course, no support for moving shared objects in a foreign thread anymore... like that was any use in the first place :P

My libraries run fine on a TLS GC so far: https://github.com/etcimon/druntime/commit/7da3939637bd1642400dbed83e3b0ff2844386ac The only error was with a signal handler trying to allocate on the GC. I think it'll be worth it for me to use this from now on. There's no locking, and possibly no stop-the-world.
Re: CPU cores threads fibers
On 2015-06-14 08:35, Robert M. Münch wrote: Hi, just to cross-check that I have the correct understanding: fibers = look parallel, are sequential = use 1 CPU core; threads = look parallel, are parallel = use several CPU cores. Is that right?

Yes, however nothing really guarantees multi-threading = multi-core. The kernel reserves the right, and will most likely do everything possible, to keep your process core-local to use caching efficiently. There are a few ways around that, though:

https://msdn.microsoft.com/en-us/library/windows/desktop/ms686247%28v=vs.85%29.aspx
http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
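To make the distinction concrete, here is a minimal D sketch (mine, not from the post) contrasting the two: a fiber only runs when its owner calls it, always on the owner's thread, while a thread is scheduled by the kernel, possibly on another core.

```D
import core.thread;

void main()
{
    int counter;

    // Fibers: cooperative and sequential, all on the current thread.
    auto f = new Fiber({
        counter++;
        Fiber.yield();   // hand control back to the caller
        counter++;
    });
    f.call();            // runs until the yield
    assert(counter == 1);
    f.call();            // resumes after the yield
    assert(counter == 2);

    // Threads: preemptive, potentially running on another core.
    int seen;
    auto t = new Thread({ seen = counter; });
    t.start();
    t.join();
    assert(seen == 2);
}
```

Whether the thread actually lands on another core is up to the kernel, which is exactly what the affinity APIs linked above let you influence.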
Re: 64-bit DMD .exe for windows?
On Sunday, 14 June 2015 at 04:09:56 UTC, E.S. Quinn wrote: I've got a project that, due to extensive use of LuaD conversions, templates with a lot of parameters, and CTFE, has managed to require 4gb of ram to compile. Which means that, for the moment, I can't build on windows as the dmd compiler is a 32-bit executable and throws an out of memory error. Is there any chance that we could publish a win64 build of dmd.exe? Yes, I had the same problem. Instructions are here: https://github.com/etcimon/botan/blob/master/dmd64_build_instructions.txt
Re: version: multiple conditions
On Saturday, 13 June 2015 at 21:51:43 UTC, bitwise wrote: I shouldn't have to add another version just for that last dlopen block. It's not fine-grained control, it's cruft. Bit

It works with constants definition files: https://github.com/etcimon/botan/blob/master/source/botan/constants.d The Mono-D IDE doesn't always like it, but it's the best I've got.
Re: Adding pclmulqdq assembly instruction to dlang asm.
On Saturday, 13 June 2015 at 19:48:07 UTC, ☃ wrote: pclmulqdq is an assembly instruction on Intel CPUs that was introduced together with the AES instructions. pclmulqdq provides multiplication on binary fields and is very useful for implementing fast and timing-attack-resistant cryptographic algorithms (e.g. GCM). The D asm supports all AES instructions but not pclmulqdq. How can I add support for this instruction? Could I write a patch myself? Compiler development is unknown land to me and I'd be very glad to get some help!

You mean like this? https://github.com/etcimon/botan/blob/master/source/botan/modes/aead/gcm.d#L437
Re: I finally got a stack trace on my InvalidMemoryOperationError
On 2015-06-04 21:12, Vladimir Panteleev wrote: On Friday, 5 June 2015 at 01:07:28 UTC, Etienne wrote: I mean come on here, I made a fatal error and my application is overdue for crashing every thread and D is so broken that it adds a deadlock on top of that, and you're telling me you'll feel guilty for allocating the stack trace on the GC because it's in an invalid state. There probably wouldn't be any problem with using the C heap or something. Using the GC is just making it likely to crash while trying to allocate it. Something is simply wrong with the community culture as a whole if that's the case. http://forum.dlang.org/post/nkwpjnydlqnnpsxst...@forum.dlang.org Nice, this is RESOLVED WONTFIX I better keep a druntime patch aside then.
Re: Throwing InvalidMemoryOperationError
On Thursday, 4 June 2015 at 17:43:35 UTC, Adam D. Ruppe wrote: If int 3 doesn't work for some reason btw, you could always just deliberately write to a null pointer and trigger a segfault in the overridden function, would have the same result in the debugger. I feel like this onError thing is meant to be overridable by importing core.exception too, but I don't see that in the source. The linker-based override definitely works today though!

So far I've tried the null pointer to get a segmentation fault. It failed. I'm trying to rebuild gdb, because this error is what I got:

```
Message: Process 61701 (gdb) of user 0 dumped core.

Stack trace of thread 61701:
#0  0x00629bdf make_vector_type (gdb)
#1  0x00670a18 read_type_die (gdb)
#2  0x0066ee87 lookup_die_type (gdb)
#3  0x0067002a read_type_die (gdb)
#4  0x0066ee87 lookup_die_type (gdb)
#5  0x00672aeb new_symbol_full (gdb)
#6  0x00674e1f process_die (gdb)
#7  0x00674a2b process_die (gdb)
#8  0x00675031 process_die (gdb)
#9  0x00678e57 dw2_do_instantiate_symtab (gdb)
#10 0x00679f38 dwarf2_read_symtab (gdb)
#11 0x005e4451 psymtab_to_symtab (gdb)
#12 0x005e54d3 find_pc_sect_symtab_from_partial (gdb)
#13 0x005e0143 find_pc_sect_symtab (gdb)
#14 0x005dc3dd blockvector_for_pc_sect (gdb)
#15 0x005dc57d block_for_pc (gdb)
#16 0x006e70bb inline_frame_sniffer (gdb)
#17 0x006e52c6 frame_unwind_try_unwinder (gdb)
#18 0x006e567f frame_unwind_find_by_frame (gdb)
#19 0x006e1d2b get_prev_frame_if_no_cycle (gdb)
#20 0x006e4029 get_prev_frame_always (gdb)
#21 0x006e4761 get_prev_frame (gdb)
#22 0x006e4a3c unwind_to_current_frame (gdb)
#23 0x00610b71 catch_exceptions_with_msg (gdb)
#24 0x006e1e40 get_current_frame (gdb)
#25 0x00603909 handle_inferior_event.part.32 (gdb)
#26 0x006055ee fetch_inferior_event (gdb)
#27 0x0061c7f2 inferior_event_handler (gdb)
#28 0x0061a7d1 process_event (gdb)
#29 0x0061abca gdb_do_one_event (gdb)
#30 0x0061ae3e start_event_loop (gdb)
#31 0x00613c13 captured_command_loop (gdb)
#32 0x00610d3a catch_errors (gdb)
#33 0x00615526 captured_main (gdb)
#34 0x00610d3a catch_errors (gdb)
#35 0x0061568b gdb_main (gdb)
#36 0x004604a5 main (gdb)
#37 0x7f3d3893ffe0 __libc_start_main (libc.so.6)
#38 0x004604e8 _start (gdb)
```
Re: Throwing InvalidMemoryOperationError
On Thursday, 4 June 2015 at 17:58:34 UTC, Adam D. Ruppe wrote: On Thursday, 4 June 2015 at 17:51:31 UTC, Etienne Cimon wrote: I'm trying to rebuild gdb because this error is what I got: wow that's messed up. Did you try it with dmd -gc too? Or a non-debug version of the program entirely? Maybe your version of gdb has a bug in reading D debugging info. With a non-debug build, you won't get line numbers in the stack trace, but the mangled function name should still really narrow down your search. (There's a ddemangle program that comes with dmd that can translate it, or reading by eyeball isn't bad either; you should see your class name in there.)

Yeah, obviously I had to use exec-file to avoid symbols, because dub test compiles with symbols. It took some time to remember, but this is basically how I proceeded to debug the whole Botan library a few months ago, using addr2line or a backtrace library. It's nice not having to do this for all my projects, but having to follow all these steps is obviously unfriendly for a state-of-the-art language like D. Here's what I get from the `asm { int 3; }` type of breakpoint:

```
Program received signal SIGUSR1, User defined signal 1.
0x00ccd8f7 in ?? ()
(gdb) bt
#0  0x00ccd8f7 in ?? ()
#1  0x in ?? ()
(gdb) c
Continuing.

Program received signal SIGUSR2, User defined signal 2.
0x77122cc7 in sigsuspend () from /lib64/libc.so.6
(gdb) c
Continuing.
D: Error: Invalid memory operation
[Thread 0x76c6c700 (LWP 67283) exited]
^C
Program received signal SIGINT, Interrupt.
0x779c9f1d in __lll_lock_wait () from /lib64/libpthread.so.0
(gdb) bt
#0  0x779c9f1d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x779c4906 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x00dbccb1 in ?? ()
#3  0x0119c1b0 in ?? ()
#4  0x7fffc470 in ?? ()
#5  0x in ?? ()
(gdb) q
A debugging session is active.

[root@localhost build]# addr2line -e __test__full__ 0x0ccd8f7
/home/devpriv/botan/source/botan/math/numbertheory/numthry.d:744
```
Re: Throwing InvalidMemoryOperationError
On Thursday, 4 June 2015 at 16:20:28 UTC, Adam D. Ruppe wrote: On Thursday, 4 June 2015 at 16:12:54 UTC, Etienne Cimon wrote: On another note, considering the unimaginable amount of bugs that can stem from throwing in a constructor Throwing from a constructor is kinda important as it is the only way to signal failure on its input... Wouldn't that be with `this() in { assert() }` ? My concern is the fact that the destructor won't be called. Or will it?
Re: Throwing InvalidMemoryOperationError
On another note, considering the unimaginable amount of bugs that can stem from throwing in a constructor or destructor, I don't see why D shouldn't just enforce a nothrow on them.
Re: Throwing InvalidMemoryOperationError
On Thursday, 4 June 2015 at 16:49:07 UTC, Adam D. Ruppe wrote: On Thursday, 4 June 2015 at 16:32:39 UTC, Etienne Cimon wrote: Wouldn't that be with `this() in { assert() }`? Not necessarily. Consider something like a file wrapper: if fopen returns null inside the ctor, you'd generally throw on that. My concern is the fact that the destructor won't be called. Or will it? It won't be for deterministic objects (structs on the stack) but is for GC'd objects (eventually). I don't think it should be called, since if the constructor fails, the object doesn't really exist and there's nothing to destroy. You can reliably clean up intermediate things in a constructor using scope(failure):

```D
struct Foo {
    FILE* file, file2;

    this(something somename) {
        file = fopen(somename);
        if (file is null)
            throw FileException(somename);
        scope(failure) { fclose(file); file = null; }

        file2 = fopen(somename2);
        if (file2 is null)
            throw FileException(somename2);
    }

    ~this() {
        if (file !is null) fclose(file);
        if (file2 !is null) fclose(file2);
    }
}
```

That'd work whether the destructor is called automatically or not, and isn't too hard to write since scope(failure) is pretty convenient.

Nice, I'll try and use that once I find the reason I get this error: https://travis-ci.org/etcimon/botan/jobs/65410185#L426 Somewhere random in a 100k-line code base, a deadlock is triggered in the GC by some object's destructor. :/
Re: Throwing InvalidMemoryOperationError
Well, I think the error is that the GC is not using the TLS matching the corresponding object's destructors. Could this be possible?
Re: std.allocator: FreeList uses simple statistics to control number of items
What you could do is calculate the average allocation size and standard deviation in a moving window, plus the z-score for each freelist, and use this lookup table: https://www.stat.tamu.edu/~lzhou/stat302/standardnormaltable.pdf If P < 0.10 (maybe use this as a setting), this means the probability of the next allocations going through this freelist is too low, and you can decide to tighten the freelist and let the deallocations fall through to the underlying allocator.
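A rough D sketch of that policy (the window contents and the 0.10 threshold are illustrative; `normalDistribution` from `std.mathspecial` stands in for the printed lookup table):

```D
import std.algorithm : map, sum;
import std.math : sqrt;
import std.mathspecial : normalDistribution;

double mean(double[] w) { return w.sum / w.length; }

double stdDev(double[] w)
{
    immutable m = mean(w);
    return sqrt(w.map!(x => (x - m) * (x - m)).sum / w.length);
}

// Probability, under a normal fit of the window, that the next
// allocation is at least `size` bytes: P = 1 - Phi(z).
double tailProb(double[] window, double size)
{
    immutable z = (size - mean(window)) / stdDev(window);
    return 1.0 - normalDistribution(z);
}

void main()
{
    // Moving window of recent allocation sizes (illustrative data).
    double[] window = [32, 48, 32, 64, 40, 32, 48, 56];

    // The 512-byte size class lies far outside the window's distribution,
    // so its tail probability is ~0 and that freelist would be tightened.
    bool tighten = tailProb(window, 512) < 0.10;
    assert(tighten);
}
```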
Re: std.allocator: nomenclature needed
On 2015-05-14 23:27, Andrei Alexandrescu wrote: Also, I need two more good names: one for what's now called porcelain - high-level typed interface for allocators, and one for best of Beatles (not defined yet) - a module collecting a number of canned good allocator designs by connecting together components.

This may sound like a request, but it's probably more of a direction. I intended to start writing my D software on top of a ScopedPool stack. I'll explain... It's a manual memory management strategy I devised because I needed to optimize heavy use of XML/JSON/DOM/tree-based structures. The idea is to allocate in the top-most pool of the caller and deallocate when the pool goes out of scope. A pool is nice because it's meant to trash all the objects instantly; it's much faster and more efficient, just like when a process crashes. You simply use `alloc!T` instead of `new T`. The scope is created with `auto pool = ScopedPool(1024*16);`, which basically allocates pools in 16 KB increments. You can use `pool.freeze()` and then it will temporarily pop the pool from the stack so that `alloc!T` uses the pool one level lower. The lowest level is the GC.

Example: https://github.com/etcimon/spd Implementation source: https://github.com/etcimon/memutils/blob/master/source/memutils/scoped.d

If you like the idea, let this be a permission to license it to Boost =)
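The memutils source linked above is the real implementation; as a self-contained illustration of the underlying idea (bump-allocate from one block, free everything at once when the scope ends), here is a toy sketch of mine, deliberately not the memutils API:

```D
import core.stdc.stdlib : malloc, free;

struct ToyPool
{
    ubyte* block;
    size_t used, capacity;

    @disable this(this); // copying would double-free the block

    this(size_t cap)
    {
        capacity = cap;
        block = cast(ubyte*) malloc(cap);
    }

    // Carve the next T.sizeof bytes out of the block; no per-object free.
    T* alloc(T)()
    {
        assert(used + T.sizeof <= capacity, "pool exhausted");
        auto p = cast(T*)(block + used);
        used += T.sizeof;
        *p = T.init;
        return p;
    }

    // Trash every object at once, much like a crashing process would.
    ~this() { free(block); }
}

void main()
{
    auto pool = ToyPool(16 * 1024); // one 16 KB increment
    auto node = pool.alloc!int();
    *node = 42;
    assert(*node == 42);
} // pool leaves scope: all of its allocations are released in one shot
```

The real ScopedPool additionally maintains a stack of pools per thread/fiber and falls back to the GC at the bottom, which this toy omits.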
Re: mscoff x86 invalid pointers
On 2015-05-10 03:54, Baz wrote: On Sunday, 10 May 2015 at 04:16:45 UTC, Etienne Cimon wrote: On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configurations. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure: a global variable that already has a pointer value for some reason. I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! you can always use travis-ci to do such a job for you ;) doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak... Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again. Could you tell me the way to compile druntime/phobos as 32-bit COFF? Would you have some custom win32.mak to share? Thanks.

I edited win64.mak: you need to change it to MODEL=32mscoff and remove all occurrences of amd64/ in the file (there are 3), for both druntime and phobos. Save this as win32mscoff.mak. You need to place the resulting phobos32mscoff.lib into dmd2/windows/lib32mscoff/ (the folder doesn't exist yet).
Re: mscoff x86 invalid pointers
On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configurations. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure: a global variable that already has a pointer value for some reason. I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! you can always use travis-ci to do such a job for you ;) doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak...

Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again.
Re: Refactoring D as an MSc Project
On 2015-03-02 16:50, Jamie wrote: Hello all! This is my first post on the forums. I'm interested in possibly creating a refactoring tool for D for my MSc dissertation (I notice there is currently no refactoring in D?). I wanted to ask if this is possible with D's current state? More specifically: - Can I easily access things such as the AST? If you can write C#, it would be interesting to add it as a feature in Mono-D https://github.com/aBothe/Mono-D/ It's an IDE engine, there is also a parser (written in C#) called D_Parser as a submodule. I use mono-d all the time to work on my libraries. Currently the refactoring feature in monodevelop is pretty much useless because nobody implemented it I guess.
Re: Botan Crypto and TLS for D
On 2015-02-18 11:41, Andrei Alexandrescu wrote: I'd love to add libasync to Phobos! -- Andrei Even as I add this as a dependency? : https://github.com/etcimon/memutils Instead of a single ScopedFiberPool, I intend to have ScopedPools with one stack in fiber, another in thread, and using the GC as a fallback. You can find a code example of the idea here: https://github.com/rejectedsoftware/vibe.d/issues/978#issuecomment-73819358
Re: Botan Crypto and TLS for D
On 2015-02-18 07:17, Jacob Carlborg wrote: On 2015-02-18 02:14, Etienne Cimon wrote: My favorite part is: vibe.d projects now compiles the entire software stack into a fully-featured standalone executable without any license issues. Isn't libevent required? Not anymore. I also wrote libasync and a vibe.d driver for it https://github.com/etcimon/libasync
Re: Botan Crypto and TLS for D
On 2015-02-18 05:22, ketmar wrote: On Wed, 18 Feb 2015 06:35:08 +0000, Joakim wrote: accompanied by benchmarks of the C++ and D code it's better to keep silence. dmd was never very good in optimising code. ;-)

Not really; most of the sensitive code is optimized via native instructions, so the crypto algorithms should be all the same. If you count the seconds for the unit test to run, powermod (public key cryptography) was equal in debug mode. I didn't check release though, but debug was 11 seconds and GCC optimizes the C++ version to 3 seconds :-p The sensitive parts are AES-NI and GCM, where the processor does the encryption, and I handled those native instructions with care, so that should be 600 MB/s - 1 GB/s regardless of the compiler.

As for the learning experience, I spent most of the time doing search and replace from C to D types and names, writing utils/SIMD instructions (__m128i, __m256i, xmmintrin.h, etc.), and writing memutils (~= STL) because I needed proper containers and allocators to work with.
Re: Botan Crypto and TLS for D
On 2015-02-18 01:35, Joakim wrote: Good work. You should write up a post about the experience, perhaps accompanied by benchmarks of the C++ and D code. It will help publicize your project and let others learn from your effort. Sure, if you can somehow push this in DMD: http://forum.dlang.org/thread/m9lvn5$28cr$1...@digitalmars.com The release build won't work without it, I get an ICE and I didn't have time to isolate this :/ I won't have an LDC/GDC version for a few months, this project is like a big unit test of failing cases for those compilers
Re: Botan Crypto and TLS for D
On 2015-02-18 14:50, Andrei Alexandrescu wrote: This is integration tactics that will need to be resolved. I don't see them as showstoppers. -- Andrei You're right it did sound like that. It was partly preference and partly a need for the circular buffer to solve futures and promises following this issue: https://github.com/etcimon/libasync/pull/11#issue-57401462 I'm going to remove the custom memory stuff and start moving the library into Phobos after I'm done debugging the linking issues for Botan on Windows.
Re: Botan Crypto and TLS for D
On 2015-02-17 19:11, ketmar wrote: so you did it. great! so maybe vibe.d can drop that OpenSSL dependency soon. ;-)

Apart from the debugging experience, there's something empowering about having all the low-level stuff available in Mono-D when writing a website =) Other than that, it's also easier to customize (through inheritance). It also compiles faster: 6-7 seconds (D) vs 70 seconds (C++). My favorite part is: vibe.d projects now compile the entire software stack into a fully-featured standalone executable without any license issues. I'll be working on HTTP/2 with websocket-style full duplex communications once this is done, and then a CMS that has a Windows Explorer-like desktop front-end with a Redis filesystem and distributed node management. So many nice projects :D
Re: Botan Crypto and TLS for D
On 2015-02-17 20:54, Rikki Cattermole wrote: On 18/02/2015 10:00 a.m., Etienne wrote: I'd like to announce the first release of Botan, which implements all features of v1.11.10 of the C++ library. I gave special attention to properly translating it to correct D code. It only runs with DMD master for now, has only been tested on Linux x86 and x86_64, and it uses a custom allocator library called memutils which must be placed in ../ I'd also like to underline that Alexander Bothe from Mono-D put special attention into making sure the IDE runs smoothly with Botan. All tests are passing at the time of this writing (which is thousands of tests for all algorithms, incl. x509, pubkeys, TLS and so on). I'll let the wiki/API docs/code talk for me; I'm off to writing a TLS driver for vibe.d now. Have fun! I'm quite excited by this. I do hope however that we get an ssh library now. Maybe git + mercurial + svn as well. We could do so much with that!

It looks like this library (using Botan C++) could simply be translated to D code: https://github.com/cdesjardins/ne7ssh The only problem I see is that it's licensed QPL. However, the maintainer is missing, and I think the library is simple enough to use as a guideline/reference (along with other RFCs and libraries) to re-write an SSH library from scratch, get something new and original out of it, and possibly use a more open license.
Re: Botan Crypto and TLS for D
On 2015-02-17 23:17, Rikki Cattermole wrote: I saw that, I was worried about the license as well. I'll ping Craig. Maybe there is still time for somebody to take it on for GSOC? One thing for sure, on this one the answer books are open. I'm glad I chose Botan. For HTTP/2 I'll probably use nghttp2, I was hesitating with the Go implementation.
D semantics, shared as a heap storage specifier
This is the only issue preventing a truly thread-local GC for better multi-core scalability in D applications. From: https://github.com/D-Programming-Language/druntime/pull/1057#issuecomment-65904128

The best way to achieve a thread-local GC would be to improve and enforce `shared`-correctness in Phobos/druntime (at first). We need to start considering `shared` as a heap storage attribute as well, for consistency. An optional compiler warning (through a flag) would be a start. If even a 30% speedup is possible down the line, it's worth it. The more threads, the more improvement. There are also some new opportunities with this. Here's an example that uses TLS data to influence the behavior of shared objects, without using a global `T[Thread]` hashmap:

```D
shared class A {
    private bool m_init; // different on every thread

public shared:
    AA m_impl;

    synchronized void init() {
        if (!m_init)
            m_impl.add(Thread.getThis());
    }
    ...
}
```
Re: jsnode crypto createHmac createHash
Keep an eye on this one: Botan in D, https://github.com/etcimon/botan Should be finished in a couple of weeks. E.g. from the TLS module:

```D
auto hmac = get_mac("HMAC(SHA-256)");
hmac.set_key(secret_key);
hmac.update_be(client_hello_bits.length);
hmac.update(client_hello_bits);
hmac.update_be(client_identity.length);
hmac.update(client_identity);
m_cookie = unlock(hmac.flush());
```
Re: Is someone still using or maintaining std.xml2 aka xmlp?
On 2014-11-28 15:15, Tobias Pankrath wrote: Old project link is http://www.dsource.org/projects/xmlp The launchpad and dsource repositories are dead for two years now. Anyone using it? Nope. I found kXML while searching for the same, it has everything I've needed up to spec. I'm maintaining a fork that works with DMD 2.066+ here: https://github.com/etcimon/kxml
Re: Reducing Pegged ASTs
On 2014-11-25 10:12, Nordlöw wrote: Is there a way to (on the fly) reduce Pegged parse results such as

I've made an ASN.1 parser using Pegged; it's not so complex and it does the reducing as well: https://github.com/globecsys/asn1.d Most of the meat is in asn1/generator/.

In short, it's much easier when you put all the info in the same object; in this case it's an AEntity: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L239 When the whole tree is done that way, you can easily traverse it and move nodes like a linked list. I've made a helper function here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L10 You can see it being used here: https://github.com/globecsys/asn1.d/blob/38bd1907498cf69a08604a96394892416f7aa3bd/asn1/generator/asntree.d#L109 and then here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/generator.d#L500

Also, the garbage collector really makes it easy to optimize memory usage, i.e. when you use a node in multiple places and need to re-order the tree elements. I still have a bunch of work to do, and I intend on replacing Botan's ASN.1 functionality with this and a DER serialization module. Beware: the Pegged structure isn't insanely fast to parse, because of the recursion limits I implemented very inefficiently (I was too lazy to translate the standard ASN.1 BNF into PEG). Also, the bigger bottleneck would be error strings. For 1-2 months of work (incl. learning ASN.1), I'm very satisfied with the technology involved and would recommend intermediate structures with traversal helpers.
Re: A different, precise TLS garbage collector?
On 2014-11-16 10:21, Xinok wrote: How about immutable data which is implicitly shareable? Granted you can destroy/free the data asynchronously, but you would still need to check all threads for references to that data. Immutable data would proxy through malloc and would not be scanned as it can only contain immutable data that cannot be deleted nor scanned. This is also shared by every thread without any locking. Currently, immutable data is global in storage but may be local in access rights I think? I would have assumed it would automatically be in the .rdata process segments.
Re: A different, precise TLS garbage collector?
On 2014-11-16 10:20, Sean Kelly wrote: We'll have to change the way immutable is treated for allocations. Which I think is a good thing. Just because something can be shared doesn't mean that I intend to share it.

Exactly. I'm not sure how DMD currently handles immutable, but it should automatically be mangled into the global namespace in the application data. If this seems feasible to everyone, I wouldn't mind forking the precise GC into a thread-local library, without any stop-the-world slowdown. A laptop with 4 cores in a multi-threaded application would (theoretically) run through the marking/collection process 4 times faster, and allocate unbelievably faster due to no locks :) The only problem is having to manually allocate shared objects, which seems fine because most of the time they'd be deallocated in shared static ~this anyway.
Re: A different, precise TLS garbage collector?
This GC model also seems to work fine for locally-allocated __gshared objects. Since they're registered locally but available globally, they'll be collected once the thread that created them is gone. Also, when an object is cast(shared) before being sent to another thread, it's usually still in scope once the other thread returns. So there seem to be very thin chances that existing code would be broken by a thread-local GC.
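As a concrete instance of that last scenario (a hypothetical sketch of mine, not code from the thread): the owning thread allocates, casts to shared, and stays alive until the worker is done, so a thread-local GC never sees the object as garbage while the other thread uses it.

```D
import std.concurrency : spawn;
import core.thread : thread_joinAll;

void worker(shared(int)* p)
{
    // Reads memory that belongs to the spawning thread's GC.
    assert(*p == 42);
}

void main()
{
    auto x = new int;            // registered with this thread's GC
    *x = 42;
    spawn(&worker, cast(shared) x);
    thread_joinAll();            // x stays in scope until the worker returns,
                                 // so a thread-local collection can't free it
}
```

The fragile case is the opposite pattern: the owner dropping its reference (or exiting) while another thread still holds the cast(shared) view, which is exactly what the thread-local model would no longer support.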
Re: A different, precise TLS garbage collector?
On 2014-11-16 19:32, Ola Fosheim Grøstad ola.fosheim.grostad+dl...@gmail.com wrote: Does the allocated object belong to a global database, a thread local database or a fiber cache which is flushed automatically when moving to a new thread? Or is it an extension of the fiber statespace that should be transparent to threads? I'm not sure what this means, wouldn't the fiber stacks be saved on the thread-local space when they yield? In turn, they become part of the thread-local stack space I guess. Overall, I'd put all the GC allocations through malloc the same way it is right now. I don't see anything that needs to be done other than make multiple thread-local GC instances and remove the locks. I'm sure I'll find obstacles but I don't see them right now, do you know of any that I should look out for?
Re: Pragma mangle and D shared objects
On 2014-10-26 14:25, Etienne Cimon wrote: On 2014-10-25 23:31, H. S. Teoh via Digitalmars-d-learn wrote: Hmm. You can probably use __traits(getAllMembers...) to introspect a library module at compile-time and build a hash based on that, so that it's completely automated. If you have this available as a mixin, you could just mixin(exportLibrarySymbols()) in your module to produce the hash. Exactly, or I could also make it export specific functions into the hashmap, a little like a router. It seems like a very decent option. I found an elegant solution for dealing with dynamic libraries: https://github.com/bitwise-github/D-Reflection
Re: What IDE/EDITOR do you use for D?
On 2014-10-29 15:38, dan wrote: What IDE/EDITOR do you use for D? What plugins if you use Vim? Mono-D. It has a light, fast UI and auto-complete, and integrates perfectly with dub and git: http://wiki.dlang.org/Mono-D
Re: Dart bindings for D?
On 2014-10-29 18:12, Laeeth Isharc wrote: Rationale for using Dart in combination with D is that I am not thrilled about learning or writing in Javascript, yet one has to do processing on the client in some language, and there seem very few viable alternatives for that. It would be nice to run D from front to back, but at least Dart has C-like syntax and is reasonably well thought out. I actually thought this over in the past and posted my research here: http://forum.dlang.org/thread/ll38cn$ojv$1...@digitalmars.com It would be awesome to write front-end tools in D. However, there won't be much browser support unless you're backed by Google or Microsoft. What's going to replace javascript? Will it be typescript? asm.js? dart? PNaCl? The solution is obviously to compile from D to the target language. But what's the real advantage? Re-using some back-end MVC libraries? All the communication is actually done through sockets, there's never any real interaction between the back-end/front-end. Also, you realize the front-end stuff is so full of community contributions that you're actually shooting yourself in the foot if you divert away from the more popular language and methodologies. So, I settle with javascript, and I shop for libraries instead of writing anything at all. There's so much diversity in the front-end world, a few hundred lines of code at most are going to be necessary for an original piece of work. Heh.