Re: I'll be presenting at NWCPP on Jan 21 at Microsoft
On 1/5/2015 5:39 PM, Jeremy DeHaan wrote: That's really funny that this is your topic. I was planning on doing a blog post on almost the exact same thing. I really wish I could come and see it, but I don't know how bad busing out there would be. :( There is good bus service to the Microsoft campus (I haven't tried it myself, but there seem to be legions of buses around there).
[Issue 10989] [CTFE] Uncaught exception messages are not pretty printed if message wasn't literal
https://issues.dlang.org/show_bug.cgi?id=10989 --- Comment #7 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/D-Programming-Language/dmd https://github.com/D-Programming-Language/dmd/commit/73e48f02b6eca202f7bacc0023555d5608f7019c fix Issue 10989 - [CTFE] Uncaught exception messages are not pretty printed if message wasn't literal Fix the remained case. https://github.com/D-Programming-Language/dmd/commit/94db1fa80e004a456a68b014a9f0498c216a77ed Merge pull request #4252 from 9rnsr/fix10989 Issue 10989 - [CTFE] Uncaught exception messages are not pretty printed if message wasn't literal --
Re: An idea for commercial support for D
On Monday, 5 January 2015 at 22:51:25 UTC, Joseph Rushton Wakeling via Digitalmars-d wrote: On 05/01/15 21:57, Joakim via Digitalmars-d wrote: If you're not paying, you're not a customer. The alternative is to use the bug-ridden OSS implementation you're using now for free, and not have a paid version for those who want those bugs fixed. I don't doubt that some irrational people interpret the existence of a paid version in the way you laid out, and in extreme cases that _can_ happen (just as there are OSS vendors who write bad OSS code just so they can make more money off your favored support model), but that's more an issue with their sloppy thinking than anything else. See, this is where I find _your_ point of view irrational, because you fail to see how straightforwardly damaging closed source can be to adoption. The fact of the matter is that for a great many users, and particularly for a great many corporate adopters of development toolchains, today it matters hugely that the toolchain is free-as-in-freedom. Not free 6 months down the line -- free, now, in its entirety. Non-free code (even temporarily), secret development, etc., are simply deal-breakers for a great many people. A smart business model will engage with this fact and find a way to drive money to development without closing things up. I don't think such people matter, ie they're a very small but vocal minority. Also, these people are deeply irrational, as every piece of hardware they're using comes with many closed binary blobs. They are either ignorant of this fact or just choose to make silly demands anyway. There are also fully open source languages which are fully commercially supported. How do your managers wrap their minds around such a paradox? ;) See, if I was in your shoes, I'd be trying to take on board the feedback about why your proposed model would be unattractive to his managers, rather than making sarcastic points that don't actually identify a conflict with their position. 
Heh, the whole point of the sarcastic comment was to point out the obvious conflict in their position. :) Most commercial adopters are going to consider it very important to have a support option that says, If you have a serious blocker, you can pay us money to guarantee that it gets fixed. They are not going to be at all happy about a support option that says, If we develop a fix, then you are not going to get it in a timely manner unless you pay. Understanding that distinction is very important. Haha, you do realize that those two quotes you laid out are the exact same option? In the first option, you pay for a fix. In the second option, you pay for a fix. What distinction you're hoping to draw has not been made. My point is that such artificial distinctions are silly, whether because of the amount of support or source available. The alternative to paid bug fixes is not that all the bugs you want fixed get done for free: it's _no_ bug fixes, as we see today. For example, selective imports at module scope has been broken for more than eight years now, as those symbols are leaked into any module that imports the module with the selective import. There are many more bugs like that, that could actually be fixed much faster if there were more paid devs working on D. You're talking about the alternative to paid bug fixes as if the only way of having paid bug fixes is to follow your model of locking them away from the wider community. That's simply not true. I wait with bated breath for your model of paid bug fixes that doesn't involve closing the code for the bug fixes at all. You must have discovered some billion-dollar scheme, because every software company in the world is waiting to copy your brilliant method. Having both paid and free versions available is not a paywall on a language. Unless those versions are identical, yes it is. No, it isn't. Your being able to use the always OSS dmd/gdc for free means the language is always available to you. 
Just because someone else is using an enhanced version of ldc doesn't make the free version any less available to you. To suggest otherwise is to distort the language to make your argument, ie flat out lying. A company is not going to just write a bunch of patches and open source all of them unless they have some complementary business model to go with it, whether google making more mobile revenue off Android or Apple providing clang as the system compiler on OS X and making money off the bundled Mac. So why not focus on creating those complementary business models? If you have a complementary business model for a D compiler, feel free to suggest one and get people to use it. I don't think complementary business models are generally a good idea, because the people making money are usually going to focus on the place they're making money. This is why google doesn't care that much if AOSP and
Re: Questions about TDPL book
On Tuesday, 6 January 2015 at 03:20:27 UTC, weaselcat wrote: Is it still worth buying TDPL since it's almost 5 years old? I realize classics like K&R C are near timeless, but D has seen a lot of changes. Has the ebook version been updated at all (i.e., with the errata)? How is the physical quality of the print book? Thanks. - I'd definitely recommend reading it. The vast majority of it is still very accurate, and it's just a great read. You may find this handy: http://wiki.dlang.org/Differences_With_TDPL My Kindle version had not been updated with errata changes the last time I looked. Andrei has mentioned that there would be another printing with changes, but that was a while back, so I'm not sure if that's still planned.
decodeReverse
For my particular project (it binds with something like a finite state machine) I will write a counterpart of the decode function from std.utf. The new function will decode a string backward, returning a dchar and updating an index passed by reference. Would the community be interested in my implementing this feature in a general way, targeting Phobos for the future?
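The core idea can be sketched in C (a hypothetical `decode_reverse`, not the eventual Phobos API): step backward over UTF-8 continuation bytes (`10xxxxxx`) to find the lead byte, then decode forward as usual.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Decode the code point that ends at index *i (exclusive), moving *i
 * backward past it. Returns the code point, or U+FFFD on malformed input.
 * Hypothetical sketch; not the std.utf API. */
static uint32_t decode_reverse(const unsigned char *s, size_t *i)
{
    size_t end = *i;
    size_t start = end - 1;
    /* Step back over up to three continuation bytes (10xxxxxx). */
    while (start > 0 && (s[start] & 0xC0) == 0x80 && end - start < 4)
        start--;

    size_t len = end - start;
    uint32_t cp;
    if (len == 1 && s[start] < 0x80)
        cp = s[start];
    else if (len == 2 && (s[start] & 0xE0) == 0xC0)
        cp = ((uint32_t)(s[start] & 0x1F) << 6) | (s[start + 1] & 0x3F);
    else if (len == 3 && (s[start] & 0xF0) == 0xE0)
        cp = ((uint32_t)(s[start] & 0x0F) << 12)
           | ((uint32_t)(s[start + 1] & 0x3F) << 6) | (s[start + 2] & 0x3F);
    else if (len == 4 && (s[start] & 0xF8) == 0xF0)
        cp = ((uint32_t)(s[start] & 0x07) << 18)
           | ((uint32_t)(s[start + 1] & 0x3F) << 12)
           | ((uint32_t)(s[start + 2] & 0x3F) << 6) | (s[start + 3] & 0x3F);
    else {
        cp = 0xFFFD;            /* replacement character */
        start = end - 1;        /* consume one bad byte */
    }

    *i = start;
    return cp;
}
```

A Phobos version would additionally validate continuation bytes and reject overlong encodings, as std.utf.decode does in the forward direction.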
Re: http://wiki.dlang.org/DIP25
On 1/5/2015 2:04 PM, Steven Schveighoffer wrote: To give you an example of why that sucks, imagine that your accessor for member_x is nothrow, but your setter is not. This means you either make an exception, or you just split up obvious file-mates into separate corners. Source control gets confused if one of those attributes changes. Nobody is happy. Grouping by attributes is probably one of the worst ways to have readable/maintainable code. One of the most important reasons why unittests are so successful is that you can just plop the code that tests a function right next to it. So easy to find the code, so easy to maintain when you change the target of the test. Making some way to bundle attributes, or be able to negate currently one-way attributes would go a long way IMO. I know and agree. I was just responding to the 'impossible' characterization.
Re: For the lulz: ddmd vs libdparse lexer timings
On Monday, 5 January 2015 at 00:50:57 UTC, Brian Schott wrote: Looks like it's time to spend some more time with perf: http://i.imgur.com/k50dFbU.png X-axis: Meaningless (Phobos module file names) Y-axis: Time in hnsecs (lower is better) I had to hack the ddmd code to get it to compile (more 1337 h4x were required to compile with LDC than with DMD), so I haven't uploaded the code for the benchmark to Github yet. Both tests were in the same binary and thus had the same compiler flags. Now with more copy-paste inlining! http://i.imgur.com/D5IAlvl.png I'm glad I could get this kind of speedup, but I'm not happy with how ugly the changes were.
Re: lint for D
On Sunday, 4 January 2015 at 00:05:51 UTC, Martin Nowak wrote: https://github.com/Hackerpilot/Dscanner Brilliant thanks - I've successfully integrated it into my IntelliJ plugin
Re: For the lulz: ddmd vs libdparse lexer timings
On 2015-01-05 05:04, Brian Schott wrote: Getting dub to turn on optimizations is easier than getting it to turn off debugging. dub build --build=release ? -- /Jacob Carlborg
Re: DlangUI project update
On Tuesday, 30 December 2014 at 10:37:14 UTC, ketmar via Digitalmars-d-announce wrote: p.s. there is small glitch with checked checkboxes though: image is not transparent. I've created a pull request for dlib adding support for transparency in indexed-color PNGs. The issue with non-transparent buttons will be fixed once the pull request is integrated.
Re: DlangUI project update
On 26 December 2014 at 22:33, Vadim Lopatin via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: Hello! DlangUI project is alive and under active development. https://github.com/buggins/dlangui Recent changes: - new controls: ScrollWidget, TreeView, ComboBox, ... - new dialogs: FileOpenDialog, MessageBox - a lot of bugfixes - performance improvements in software renderer - killer app: new example - Tetris game :) Try Demos: # download sources git clone https://github.com/buggins/dlangui.git cd dlangui # example 1 - demo for most of widgets dub run dlangui:example1 --build=release # tetris - demo for game development dub run dlangui:tetris --build=release DlangUI is a cross-platform GUI library written in D. Main features: - cross platform: uses SDL for linux/macos, Win32 API or SDL for Windows - hardware acceleration: uses OpenGL for drawing when built with version USE_OPENGL - easy to extend: since it's a native D library, you can add your own widgets and extend functionality - Unicode and internationalization support - easy to customize UI - look and feel can be changed using themes and styles - API is a bit similar to Android - two phase layout, styles Screenshots (a bit outdated): http://buggins.github.io/dlangui/screenshots.html See project page for details. I would like to get any feedback. Will be glad to see advice, bug reports, and feature requests. Best regards, Vadim Is there any chance of supporting user-supplied rendering primitives? If this were a library that lived above some application-supplied rendering primitives, then I could make use of this. What rendering primitives are required? Pixel buffers? Any vertex processing happening? Text I imagine is a tough one...
Re: D and Nim
What is the killer feature of Nim? D is the successor of C++, but Nim? The successor of Python?
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 09:23, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Iain Buclaw via Digitalmars-d wrote in message news:mailman.4141.1420448690.9932.digitalmar...@puremagic.com... void foo(int bar, ...) { va_list* va = void; va_list[1] __va_argsave; va = __va_argsave; ... } The above being compiler generated by DMD. Should that be va = __va_argsave[0] ? Yes. More or less, both should do the same. :-) So what _should_ DMD be generating? That depends on how we agree to go forward with this. From memory, we each do / did things differently. I have no doubt that the way I've done it is a kludge at best, but I'll explain it anyway. GDC *always* uses the real va_list type, our type-strict backend demands at least that from us. So when it comes down to the problem of passing around va_list when it's a static array (extern C expects a ref), I rely on people using core.vararg/gcc.builtins to get the proper __builtin_va_list before importing modules such as core.stdc.stdio (printf and friends) - as these declarations are then rewritten by the compiler from: int vprintf(__builtin_va_list[1] va, in char* fmt, ...) to: int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...) This is an *esper* workaround, and ideally, I shouldn't be doing this...
Re: simple dub question - avoiding creating a vibed project
On 5/01/2015 11:42 p.m., Laeeth Isharc wrote: Figured out a fix: versions: [VibeCustomMain], It is still mysterious as to why it is pulling in vibed though (I don't import it, and I didn't think ddbc did). https://github.com/mysql-d/mysql-native/blob/master/package.json
Re: call for GC benchmarks
On Sunday, 4 January 2015 at 05:38:06 UTC, Martin Nowak wrote: I'd like to have a few more real world GC benchmarks in druntime. The current ones are all rather micro-benchmarks, some of them don't even create garbage. So if someone has a program that is heavily GC limited, I'd be interested in seeing that converted to a benchmark. Made the start with one https://github.com/D-Programming-Language/druntime/pull/1078 that resembles a mysql to mongodb importer I wrote recently. You could try building really old versions of DCD. I converted my entire D parsing library to allocators several months ago and got a huge speed boost.
Re: D and Nim
Daniel Murphy: Every C++ programmer has hit this bug at some point: struct S { int a; S(int a) { a = a; } }; I have a bug report for something like that [TM]: https://issues.dlang.org/show_bug.cgi?id=3878 Bye, bearophile
Re: simple dub question - avoiding creating a vibed project
On Monday, 5 January 2015 at 10:46:17 UTC, Rikki Cattermole wrote: On 5/01/2015 11:42 p.m., Laeeth Isharc wrote: Figured out a fix: versions: [VibeCustomMain], It is still mysterious as to why it is pulling in vibed though (I don't import it, and I didn't think ddbc did). https://github.com/mysql-d/mysql-native/blob/master/package.json aha. isn't this a poor default for dub though? ie if your parent project itself does not depend on vibed, the default should be that you take care of main yourself, and it does not try and use vibed's, no ?
Re: D and Nim
Ary Borenszweig: Are there proofs of the percentage of bugs caused by incorrectly mutating variables that were supposed to be immutable? I don't know, probably not, but progress in language design is still in its pre-quantitative phase (note: I think Rust variables are constant by default, and mutable on request with mut). It's not just a matter of bugs; it's also a matter of making the code simpler, so you can understand better/faster what a function is doing and how. I don't remember having such a bug in my life. Perhaps you are very good, but a language like D must be designed for more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond Hettinger. Bye, bearophile
Re: D and Nim
On Monday, 5 January 2015 at 11:01:51 UTC, bearophile wrote: I don't remember having such bug in my life. Perhaps you are very good, but a language like D must be designed for more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond Hettinger. Bye, bearophile kapow!
Re: simple dub question - avoiding creating a vibed project
On 5/01/2015 11:55 p.m., Laeeth Isharc wrote: On Monday, 5 January 2015 at 10:46:17 UTC, Rikki Cattermole wrote: On 5/01/2015 11:42 p.m., Laeeth Isharc wrote: Figured out a fix: versions: [VibeCustomMain], It is still mysterious as to why it is pulling in vibed though (I don't import it, and I didn't think ddbc did). https://github.com/mysql-d/mysql-native/blob/master/package.json aha. isn't this a poor default for dub though? ie if your parent project itself does not depend on vibed, the default should be that you take care of main yourself, and it does not try and use vibed's, no ? https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/appmain.d#L28
Re: D and Nim
On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote: What is the killer feature of Nim? D is the successor of C++, but Nim? The successor of Python? A C++ successor is any language that earns its place in an OS vendor's SDK as the officially supported language for all OS layers. Which one it will be is still an open game.
Re: call for GC benchmarks
Am 04.01.2015 um 06:37 schrieb Martin Nowak: I'd like to have a few more real world GC benchmarks in druntime. The current ones are all rather micro-benchmarks, some of them don't even create garbage. So if someone has a program that is heavily GC limited, I'd be interested in seeing that converted to a benchmark. Made the start with one https://github.com/D-Programming-Language/druntime/pull/1078 that resembles a mysql to mongodb importer I wrote recently. I have a 3D space shooter implemented in D. Before I transitioned it over to completely manual memory management, the GC was the biggest bottleneck. Would you be interested in something like that as well, or are smaller applications with a command-line interface preferred? If you are interested, I might be able to branch off an old revision and make it compile with the latest dmd again. Kind Regards Benjamin Thaut
simple dub question - avoiding creating a vibed project
Hi. I am building an example for hibernated (I put a main around the sample code extracted from the website). How do I stop dub trying to build a vibe.d project? Here is my dub.json:

{
    "name": "ddbc example",
    "description": "example for DB Connector for D language, similar to JDBC",
    "authors": ["Vadim Lopatin", "Laeeth Isharc"],
    "homepage": "https://github.com/buggins/ddbc",
    "license": "Boost Software License (BSL 1.0)",
    "dependencies": {
        "mysql-native": ">=0.0.12",
        "ddbc": ">=0.2.16"
    },
    "targetType": "executable",
    "libs-posix": ["sqlite3", "pq"],
    "libs-windows": ["sqlite3", "libpq"],
    "copyFiles-windows-x86": ["libs/win32/sqlite3.dll", "libs/win32/libpq.dll", "libs/win32/intl.dll"],
    "sourceFiles-windows-x86": ["libs/win32/sqlite3.lib", "libs/win32/libpq.lib"],
    "targetPath": "."
}
[Issue 10989] [CTFE] Uncaught exception messages are not pretty printed if message wasn't literal
https://issues.dlang.org/show_bug.cgi?id=10989 e10s electrolysis.j...@gmail.com changed:

           What       |Removed    |Added
           Status     |RESOLVED   |REOPENED
           CC         |           |electrolysis.j...@gmail.com
           Resolution |FIXED      |---

--- Comment #4 from e10s electrolysis.j...@gmail.com --- This bug is still alive, at least around the first test case, though Don's reduced one shows a nicer message. Result: ctfe_ex.d(4): Error: uncaught CTFE exception object.Exception(['S', 'o', 'm', 'e', 't', 'h', 'i', 'n', 'g', ' ', '4', '2', ' ', 'w', 'i', 'c', 'k', 'e', 'd', ' ', 'h', 'a', 'p', 'p', 'e', 'n', 'e', 'd', '!'][0..29]) ctfe_ex.d(6): called from here: (*() = 0)() --
Re: simple dub question - avoiding creating a vibed project
Figured out a fix: versions: [VibeCustomMain], It is still mysterious as to why it is pulling in vibed though (I don't import it, and I didn't think ddbc did).
Re: For the lulz: ddmd vs libdparse lexer timings
Iain Buclaw via Digitalmars-d wrote in message news:mailman.4143.1420452193.9932.digitalmar...@puremagic.com... That depends on how we agree to go forward with this. From memory, we each do / did things differently. I have no doubt that the way I've done it is a kludge at best, but I'll explain it anyway. GDC *always* uses the real va_list type, our type-strict backend demands at least that from us. So when it comes down to the problem of passing around va_list when it's a static array (extern C expects a ref), I rely on people using core.vararg/gcc.builtins to get the proper __builtin_va_list before importing modules such as core.stdc.stdio (printf and friends) - as these declarations are then rewritten by the compiler from: int vprintf(__builtin_va_list[1] va, in char* fmt, ...) to: int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...) This is an *esper* workaround, and ideally, I shouldn't be doing this... I just read the discussion in https://github.com/D-Programming-Language/dmd/pull/3568 and I think I finally get it, lol. AIUI your solution won't work for user C++ functions that take va_list, because either type or mangling will be correct, but never both. Is that correct? Can gdc compile the tests in 3568? I'm going to have a look at turning va_list into a magic type that the compiler will pass by reference when necessary and always mangle correctly.
Re: For the lulz: ddmd vs libdparse lexer timings
David Nadlinger wrote in message news:qlzdmlnzlklofmlkq...@forum.dlang.org... It is. It breaks vararg cross-platform compatibility (e.g. Linux x86 vs. Linux x86_64) and GDC/LDC will never need it. It's something that we really need to have fixed sooner rather than later. The only reason why the current situation is bearable is that C varargs are rarely ever used in D-only code. Do you know how to fix it in dmd? I don't know why it's there in the first place.
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 08:28, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: David Nadlinger wrote in message news:qlzdmlnzlklofmlkq...@forum.dlang.org... It is. It breaks vararg cross-platform compatibility (e.g. Linux x86 vs. Linux x86_64) and GDC/LDC will never need it. It's something that we really need to have fixed sooner rather than later. The only reason why the current situation is bearable is that C varargs are rarely ever used in D-only code. Do you know how to fix it in dmd? I don't know why it's there in the first place. It is there because you still use a synthetic pointer for C varargs on x86_64. This synthetic pointer needs to be initialised to point to a static array, otherwise bad things happen when you pass it on to C. Enter __va_argsave to the rescue.

void foo(int bar, ...)
{
    va_list* va = void;
    va_list[1] __va_argsave;
    va = __va_argsave;
    ...
}

The above being compiler generated by DMD.
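For reference, here is roughly how the same machinery looks in plain C (the wrapper name is illustrative): on x86_64 System V, va_list is a one-element array type, so handing it to vsnprintf passes a pointer to it, which is the by-reference behaviour the __va_argsave rewrite emulates.

```c
#include <stdarg.h>
#include <stdio.h>

/* A typical C wrapper over a v-variant function: the va_list is created
 * with va_start and handed to vsnprintf. Because va_list is an array
 * type on x86_64, it decays to a pointer at the call, i.e. it is
 * effectively passed by reference. */
static int format_into(char *buf, size_t n, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);   /* ap now refers to the saved argument area */
    int written = vsnprintf(buf, n, fmt, ap);
    va_end(ap);
    return written;
}
```

On x86 (32-bit), by contrast, va_list is a plain pointer into the stack, which is why code written against one ABI's representation breaks on the other.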
Re: Gource visualisations of various D repositories
On Tuesday, 23 December 2014 at 17:33:07 UTC, Gary Willoughby wrote: For a bit of fun and prompted by a thread requesting such, i've created a few visualisation videos generated from D repositories by Gource. I would love to see a graph with all the blocked issues and their dependencies.
Re: D and Nim
Thanks everyone for the insight so far! Reading between the lines, I gather most think that both languages are similar in their positioning/objectives yet differ in certain domains (e.g. generic/template capabilities) and qualities (e.g. Nim's opinionated choice of scope delimiters). Does that sound logical? This was kind of the thing I was fishing for when I wrote the post.
Re: For the lulz: ddmd vs libdparse lexer timings
Iain Buclaw via Digitalmars-d wrote in message news:mailman.4141.1420448690.9932.digitalmar...@puremagic.com... void foo(int bar, ...) { va_list* va = void; va_list[1] __va_argsave; va = __va_argsave; ... } The above being compiler generated by DMD. Should that be va = __va_argsave[0] ? So what _should_ DMD be generating?
Re: DlangUI project update
On Monday, 5 January 2015 at 09:43:28 UTC, Manu via Digitalmars-d-announce wrote: On 26 December 2014 at 22:33, Vadim Lopatin via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: Hello! DlangUI project is alive and under active development. https://github.com/buggins/dlangui Is there any chance of supporting user-supplied rendering primitives? If this were a library that lived above some application-supplied rendering primitives, then I could make use of this. What rendering primitives are required? Pixel buffers? Any vertex processing happening? Text I imagine is a tough one... Not sure what you mean by user-supplied rendering primitives. If you want to render the UI into a custom rendering buffer, you can define a DrawBuf-based class. It requires the following drawing primitives to be implemented: - fill the whole buffer with a solid color - fill a rectangle with a solid color - draw a font glyph (8-bit alpha image) - draw a 32-bit RGBA image If your app is OpenGL based, there is already GLDrawBuf, which draws into OpenGL. As well, the UI can be drawn into a ColorDrawBuf - a 32-bit RGBA buffer - and then transferred to your surface. For embedding into a third-party framework, dlangui needs external mouse and key events translated into its own events.
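For illustration only (these names are hypothetical, not DlangUI's actual API, which is the D DrawBuf class hierarchy), the four-primitive contract described above could be modeled as a struct of function pointers, which is how an application would plug its own renderer underneath:

```c
#include <stdint.h>

/* Hypothetical back-end interface mirroring the four primitives listed
 * above. Names are illustrative; DlangUI's real API is a D class. */
typedef struct Rect { int x, y, w, h; } Rect;

typedef struct RenderBackend {
    void (*fill)(void *ctx, uint32_t rgba);                    /* whole buffer  */
    void (*fill_rect)(void *ctx, Rect r, uint32_t rgba);       /* solid rect    */
    void (*draw_glyph)(void *ctx, Rect r, const uint8_t *a8);  /* 8-bit alpha   */
    void (*draw_image)(void *ctx, Rect r, const uint32_t *px); /* 32-bit RGBA   */
    void *ctx;                                                 /* renderer state */
} RenderBackend;

/* A trivial 1x1-pixel in-memory implementation of the fill primitive,
 * just to show how a backend slots in. */
static void mem_fill(void *ctx, uint32_t rgba)
{
    *(uint32_t *)ctx = rgba;
}
```

An app could fill in these callbacks with its own GPU or software rasterizer, leaving layout, styling, and event routing to the library.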
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 11:21, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Iain Buclaw via Digitalmars-d wrote in message news:mailman.4143.1420452193.9932.digitalmar...@puremagic.com... That depends on how we agree to go forward with this. From memory, we each do / did things differently. I have no doubt that the way I've done it is a kludge at best, but I'll explain it anyway. GDC *always* uses the real va_list type, our type-strict backend demands at least that from us. So when it comes down to the problem of passing around va_list when it's a static array (extern C expects a ref), I rely on people using core.vararg/gcc.builtins to get the proper __builtin_va_list before importing modules such as core.stdc.stdio (printf and friends) - as these declarations are then rewritten by the compiler from: int vprintf(__builtin_va_list[1] va, in char* fmt, ...) to: int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...) This is an *esper* workaround, and ideally, I shouldn't be doing this... I just read the discussion in https://github.com/D-Programming-Language/dmd/pull/3568 and I think I finally get it, lol. AIUI your solution won't work for user C++ functions that take va_list, because either type or mangling will be correct, but never both. Is that correct? Can gdc compile the tests in 3568? That is correct for user code, but not druntime C bindings. GDC can compile the test in 3568 thanks to the GCC backend providing the va_list struct a name (__va_list_tag). However it for sure cannot run the program though. Only body-less declarations in core.stdc.* are rewritten to ref va_list.
Re: GSOC - Holiday Edition
On Monday, 5 January 2015 at 03:33:15 UTC, Mike wrote: On Sunday, 4 January 2015 at 17:25:49 UTC, Martin Nowak wrote: Exceptions on MC sounds like a bad idea, That is a bias of old. It is entirely dependent on the application. Many modern uses of microcontrollers are not hard real-time, and while my work was primarily on ARM microcontrollers, my previous comments were about using D for bare-metal and systems programming in general. Last time I built an embedded ARM project the resulting D binary was as small as the C++ one. Yes, my Hello World! was 56 bytes, but it's not only about getting something to work. A group of people that builds the infrastructure is needed. I can't strictly follow your conclusion that half of the language needs to be changed. The only thing I needed to do last time was to disable ModuleInfo generation in the compiler. My conclusion is not that half the language needs to change. As I said in a previous post, the changes needed are likely few, but fundamental, and can't be implemented in infrastructure alone if you want the result to be more than Hey, I got it to work. The original thread prompting this discussion was about having a bare-metal GSOC project. As I and others have shown, such a project is possible, interesting, entertaining and educational, but it will always be just that without language/compiler/toolchain support. A more worthwhile GSOC project would be to add those few, yet fundamental, language/compiler/toolchain changes to make the experience feel like the language was designed with intent for the purpose of systems programming. But I don't think that will be of much interest to embedded/kernel/bare-metal programmers, but rather more to those with an interest in language and compiler design. Mike Personally I would choose Netduino and MicroEJ capable boards if I ever do any electronics again as a hobby. Given your experience, wouldn't D be capable of targeting such systems as well? .. Paulo
Re: D and Nim
On Monday, 5 January 2015 at 00:01:34 UTC, Walter Bright wrote: D: printf("%d LANGUAGE D %d\n", len, sw.peek().msecs); Correctly written D: writeln(len, " LANGUAGE D ", sw.peek().msecs); Just a note that the reason it uses printf is because, when ldc was working on ARM, writeln produced gibberish characters. On Sunday, 4 January 2015 at 21:46:09 UTC, Ary Borenszweig wrote: There was a time I liked D. But now to make the code fast you have to annotate things with pure nothrow @safe to make sure the compiler generates fast code. This leads to code that's uglier and harder to understand. For this particular benchmark I noticed little effect on the speed of the program from these annotations; I just originally added them for that warm fuzzy feeling that comes from marking things immutable/pure. Also, in relation to comments about -boundscheck=off (aka noboundscheck), it's interesting to check out the latest Rust version. Previously, it was a bit slower than C++, D and Nimrod. Now, it matches them... by converting the code to use a tree in order to avoid bounds checks! https://github.com/logicchains/LPATHBench/blob/master/rs.rs. Personally I prefer D's approach.
Compile for other OS's on Windows?
Is it possible to compile for other OS's on Windows using dmd?
Re: For the lulz: ddmd vs libdparse lexer timings
Daniel Murphy wrote in message news:m8dv1g$1cg4$1...@digitalmars.com... Druntime and phobos rely on va_list converting to void*. Should this a) be allowed on platforms where va_list is a pointer b) always be allowed c) never be allowed ??? And what about explicit casts?
Re: For the lulz: ddmd vs libdparse lexer timings
Iain Buclaw via Digitalmars-d wrote in message news:mailman.4146.1420457999.9932.digitalmar...@puremagic.com... That is correct for user code, but not druntime C bindings. GDC can compile the test in 3568 thanks to the GCC backend providing the va_list struct a name (__va_list_tag). However it for sure cannot run the program though. Only body-less declarations in core.stdc.* are rewritten to ref va_list. Druntime and phobos rely on va_list converting to void*. Should this a) be allowed on platforms where va_list is a pointer b) always be allowed c) never be allowed ???
Re: http://wiki.dlang.org/DIP25
On 1/5/15 8:06 AM, deadalnix wrote: On Monday, 29 December 2014 at 20:26:27 UTC, Steven Schveighoffer wrote: On 12/29/14 2:50 PM, Walter Bright wrote: On 12/29/2014 5:53 AM, Steven Schveighoffer wrote: On 12/28/14 4:33 PM, Walter Bright wrote: inout is not transitive, so a ref on the container doesn't apply to a ref on the contents if there's another level of indirection in there. I'm not sure what you mean by this, but inout as a type modifier is definitely transitive. As a type modifier, yes, it is transitive. As transferring lifetime to the return value, it is not. I strongly suggest not to use inout to mean this. This idea would be a disaster. On the other hand, inout IS a disaster, so why not ? I strongly disagree :) inout enables so many things that just aren't possible otherwise. Most recent example: https://github.com/D-Programming-Language/druntime/pull/1079 inout only gets confusing when you start using inout delegates. -Steve
Re: GSOC - Holiday Edition
On 01/05/2015 04:50 AM, Mike wrote: Exactly, that's good example. Can we please file those as betterC bugs in https://issues.dlang.org/. If we sort those out, it will be much easier next time.
Re: GSOC - Holiday Edition
On Saturday, 3 January 2015 at 16:17:44 UTC, Mathias LANG wrote: On Wednesday, 31 December 2014 at 03:25:53 UTC, Craig Dillabaugh wrote: I was hoping folks to take a brief break from bickering about features, and arguing over which posters have been naughty, and which have been nice, to get a bit of input on our 2015 Google Summer of Code Proposal ... :o) Thanks for doing this, we definitely need more manpower. I would be willing to mentor something related to Vibe.d, however I don't have anything to propose ATM. But if you find something, feel free to email me. There was a discussion about redesigning dlang.org. It looks like there's some WIP ( https://github.com/w0rp/new-dlang.org ), but I didn't follow the discussion closely enough (and it's now around 400 posts). Could it be a possible project, provided that such a project would have to be done in D? Rikki wants to do D web development (see this thread), and his project is using Vibe.d. Perhaps you can check it out. Do you think you might be interested in serving as the backup mentor for that one? As for the web page, that would possibly be a tough sell to Google if they consider it more of a 'documentation' project than a 'coding' project, since they explicitly state that documentation projects are not allowed (I was considering suggesting a Phobos documentation project submission, so did a bit of research on that). However, there has been some talk of improvements to DDOC around here; maybe something could be cooked up there ... we still have a bit more than a month to get projects lined up.
Re: GSOC - Holiday Edition
On 01/05/2015 04:38 AM, Mike wrote: I forgot to mention in my last post your proposal for moving TypeInfo to the runtime [1] is also one of the changes I had in mind. It would be an excellent start, an important precedent, and would avoid the ridiculous TypeInfo-faking hack necessary to get a build. And again, you have a good chance to convince people that -betterC shouldn't generate TypeInfo.
Re: Compile for other OS's on Windows?
On Monday, 5 January 2015 at 11:49:32 UTC, Bauss wrote: Is it possible to compile for other OS's on Windows using dmd? This is what's known as cross compiling and is not currently supported by DMD at this time.
Re: simple dub question - avoiding creating a vibed project
On Monday, 5 January 2015 at 10:27:06 UTC, Laeeth Isharc wrote: Hi. I am building an example for hibernated (I put a main around the sample code extracted from the website). How do I stop dub trying to build a vibe.d project? Here is my dub.json:

~~~
{
    "name": "ddbc example",
    "description": "example for DB Connector for D language, similar to JDBC",
    "authors": ["Vadim Lopatin", "Laeeth Isharc"],
    "homepage": "https://github.com/buggins/ddbc",
    "license": "Boost Software License (BSL 1.0)",
    "dependencies": {
        "mysql-native": ">=0.0.12",
        "ddbc": ">=0.2.16"
    },
    "targetType": "executable",
    "libs-posix": ["sqlite3", "pq"],
    "libs-windows": ["sqlite3", "libpq"],
    "copyFiles-windows-x86": ["libs/win32/sqlite3.dll", "libs/win32/libpq.dll", "libs/win32/intl.dll"],
    "sourceFiles-windows-x86": ["libs/win32/sqlite3.lib", "libs/win32/libpq.lib"],
    "targetPath": "."
}
~~~

I opened an issue about this last year: https://github.com/mysql-d/mysql-native/issues/44
Re: What exactly shared means?
On Saturday, 3 January 2015 at 23:11:08 UTC, Jonathan M Davis via Digitalmars-d-learn wrote: Ideally, you would never cast away shared, and it would be cast away for you by the compiler in sections of code where it can guarantee that it's safe to do so (that was part of the idea behind synchronized classes). But that's incredibly difficult to do, particularly in a useful way, so we don't currently have it. And yes, that sucks, and we definitely want to fix it, but I still think that it's far better than having everything be shared by default like you get in languages like C++ and Java. Efficient automatic synchronization is difficult, yes. You can try to tie groups of entities to a lock, but that will only work in some scenarios. To me it sounds like having everything shared by default is the most conservative (safest) approach, and that it would make sense to put restrictions on parameters when you need more performance. If D's approach is to make sense, the compiler would have to be allowed to elide atomics on members of an object when the reference to the object is not marked as shared. That can easily go horribly wrong. I am also not overly happy with D making TLS the default. That means new threads instantiate a lot of unused memory if the workload is heterogeneous (different threads do different types of work). TLS only makes sense for things that all threads need.
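A minimal D sketch of the manual approach Jonathan describes (names hypothetical): casting away `shared` is only sound under an external guarantee, such as a lock, that the compiler cannot verify for you.

```d
import core.atomic : atomicOp;
import core.sync.mutex : Mutex;

shared int hits;          // visible to all threads
__gshared Mutex hitsLock; // hypothetical lock guarding `hits`

void bump()
{
    atomicOp!"+="(hits, 1); // fine: atomic read-modify-write on shared data
}

void bulkUpdate()
{
    hitsLock.lock();
    scope (exit) hitsLock.unlock();
    // Sound only because hitsLock serializes all access to `hits`.
    // This is exactly the guarantee that synchronized classes were
    // meant to let the compiler establish automatically.
    int* p = cast(int*) &hits;
    *p += 10;
}
```

The point of the sketch: the cast itself compiles either way; whether it is correct depends entirely on a locking convention the type system never sees.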
Re: D and Nim
On Monday, 5 January 2015 at 10:21:12 UTC, Paulo Pinto wrote: On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote: What is the killer feature of Nim? D is the successor of C++, but Nim? The successor of Python? A C++ successor is any language that earns its place in an OS vendor's SDK as the officially supported language for all OS layers. Which one it will be is still an open game. But C++ gained traction before any OS officially supported it (sans BeOS)? Without an ABI, I think C++ will be its own successor. And I think key C++ people know this and will avoid creating an ABI... Besides that, I don't think there will be a single replacement. It will more likely be several languages aiming at different domains where you have different hardware requirements (HPC, embedded, servers, interactive apps...). D needs to pick one area, and do well there. * If D is aiming for conserving memory and realtime apps, then it needs a better memory model/reference type system; * if D is aiming for the convenience-oriented server programmer (who can afford to waste memory), then it needs to tune the language for better garbage collection. With no tuning... other languages will surpass it. Be it Rust, Chapel, Go, Nim or one of the many budding language projects that LLVM has inspired...
Re: http://wiki.dlang.org/DIP25
On Wednesday, 31 December 2014 at 21:08:29 UTC, Dicebot wrote: This mostly matches my current opinion of DIP25 + DIP69 as well. It is not so much a problem of lacking power as of utterly breaking the KISS principle - too many special cases to remember, too many concepts to learn. The path of minimal necessary change is tempting, but it is the path to C++. Yes, especially when this path creates non-orthogonal features, which inevitably cause a complexity explosion down the road. This is the very old simple vs. easy problem. Easy is tempting, but simple is what we want, and they sometimes are very different things.
Re: simple dub question - avoiding creating a vibed project
I opened an issue about this last year: https://github.com/mysql-d/mysql-native/issues/44 Thanks. Laeeth.
Re: D and Nim
On Monday, 5 January 2015 at 13:13:43 UTC, Ola Fosheim Grøstad wrote: On Monday, 5 January 2015 at 10:21:12 UTC, Paulo Pinto wrote: On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote: What is the killer feature of Nim? D is the successor of C++, but Nim? The successor of Python? A C++ successor is any language that earns its place in an OS vendor's SDK as the officially supported language for all OS layers. Which one it will be is still an open game. But C++ gained traction before any OS officially supported it (sans BeOS)? Yes. It was almost immediately adopted by C compiler vendors, given it came from AT&T and was compatible with C. UNIX vendors jumped on it for CORBA and telecommunications (C++'s original field), so by the early 90s pretty much all UNIXes had some form of C++ support. Walter's work was also an influence, given it was the first C++ compiler to directly produce native code. Epoch/Symbian and later OS/400 revisions were also done in C++. On MS-DOS I was already using C++ back in 1993. Without an ABI, I think C++ will be its own successor. And I think key C++ people know this and will avoid creating an ABI... A C ABI only works on OSes that happen to be written in C. There are a few where this is not the case; OS/400 is such an example. For C++ there is the Itanium ABI, COM/WinRT on Windows, and the upcoming C++17 ABI. Besides that, I don't think there will be a single replacement. It will more likely be several languages aiming at different domains where you have different hardware requirements (HPC, embedded, servers, interactive apps...). D needs to pick one area, and do well there. * If D is aiming for conserving memory and realtime apps, then it needs a better memory model/reference type system; * if D is aiming for the convenience-oriented server programmer (who can afford to waste memory), then it needs to tune the language for better garbage collection. With no tuning... other languages will surpass it.
Be it Rust, Chapel, Go, Nim or one of the many budding language projects that LLVM has inspired... Yes, there are lots of options; still, the ones that live longest as system programming languages are the ones that get OS vendor adoption. So far, it has always been the case. -- Paulo
Re: http://wiki.dlang.org/DIP25
On Monday, 29 December 2014 at 20:26:27 UTC, Steven Schveighoffer wrote: On 12/29/14 2:50 PM, Walter Bright wrote: On 12/29/2014 5:53 AM, Steven Schveighoffer wrote: On 12/28/14 4:33 PM, Walter Bright wrote: inout is not transitive, so a ref on the container doesn't apply to a ref on the contents if there's another level of indirection in there. I'm not sure what you mean by this, but inout as a type modifier is definitely transitive. As a type modifier, yes, it is transitive. As transferring lifetime to the return value, it is not. I strongly suggest not to use inout to mean this. This idea would be a disaster. -Steve On the other hand, inout IS a disaster, so why not ?
Re: I'll be presenting at NWCPP on Jan 21 at Microsoft
On Monday, 5 January 2015 at 07:46:20 UTC, Walter Bright wrote: http://nwcpp.org/ All are invited. Now I just have to write the presentation :-( Congratulations. I hope the talk goes well. Will audio be available afterwards? At a slight tangent, has anything more recent been written on the C++ interface? I understand it is more complete than what is described on the Wiki/at dlang.org and have not been able to find a write-up of this. Thanks. Laeeth.
Re: For the lulz: ddmd vs libdparse lexer timings
Daniel Murphy wrote in message news:m8dv49$1cgs$1...@digitalmars.com... And what about explicit casts? Oh yeah, and how does __va_argsave work, why do we need it? Looking at the druntime and phobos code, I'm not sure which stuff is correct, which stuff needs to have the X86_64 version deleted, and which should be moved to va_arg.
Re: D and Nim
On Monday, 5 January 2015 at 08:13:29 UTC, Jonathan wrote: Thanks everyone for the incite so far! Reading between the lines, I gather most thoughts are that both languages are similar in their positioning/objectives yet differ in certain domains (e.g. generic/template capabilities) and qualities (e.g. Nim opinionated choice of scope delimiters). Does that sound logical? This was kind of the thing I was fishing for when thinking of the post. First sentence ... did you mean 'insight' or was that some sort of Freudian slip :o)
Re: D and Nim
On Monday, 5 January 2015 at 13:47:24 UTC, Paulo Pinto wrote: For C++ there is the Itanium ABI, COM/WinRT on Windows and the upcoming C++17 ABI. If there will be a C++17 ABI and it is adopted, then that will be the beginning of the end for C++ IMO. (Wishful thinking... ;-) Yes there are lots of options, still the ones that live longer as system programming languages, are the ones that get OS vendor adoption. So far, it has always been the case. By my definition of system level programming the only adopted system level programming language since the 1980s has been C (and C++ only as C-with-bells-and-whistles). Then you have some fringe languages such as Ada, and now probably also Rust as it is approaching version 1.0. I cannot really see Nim or D taking that slot. They appear to have too wide a scope. I think only a focused language that can bring along better optimization and manual memory handling has a chance against C/C++ in system programming. (We have to remember that C/C++ are moving too with various extensions that also are gaining traction: OpenMP, Cilk...)
Re: D and Nim
On 1/5/15 8:01 AM, bearophile wrote: Ary Borenszweig: Are there proofs of the percentage of bugs caused by incorrectly mutating variables that were supposed to be immutable? I don't know, probably not, but the progress in language design is still in its pre-quantitative phase (note: I think Rust variables are constant by default, and mutable on request with mut). It's not just a matter of bugs, it's also a matter of making the code simpler, so you can understand better and faster what a function is doing and how. You said Computer Science has found that the right default for variables is to have them immutable. I don't think Rust == Computer Science. Otherwise their compiler would be fast (Computer Science knows how to do fast compilers). At least I like that they are introducing a new feature to their language that none other has: lifetimes and borrows. But I find it very hard to read their code. Take a look for example at the lerp function defined in this article: http://www.willusher.io/2014/12/30/porting-a-ray-tracer-to-rust-part-1/ Rust:

~~~
pub fn lerp<T: Mul<f32, T> + Add<T, T> + Copy>(t: f32, a: &T, b: &T) -> T {
    *a * (1.0 - t) + *b * t
}
~~~

C++:

~~~
template<typename T>
T lerp(float t, const T &a, const T &b) {
    return a * (1.f - t) + b * t;
}
~~~

I don't remember having such a bug in my life. Perhaps you are very good, but a language like D must be designed for more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond Hettinger. I don't think those are common programmers :-)
Re: Compile for other OS's on Windows?
On Monday, 5 January 2015 at 12:54:00 UTC, Gary Willoughby wrote: On Monday, 5 January 2015 at 11:49:32 UTC, Bauss wrote: Is it possible to compile for other OS's on Windows using dmd? This is what's known as cross compiling and is not currently supported by DMD at this time. Any alternatives?
Re: GSOC - Holiday Edition
On 01/05/2015 02:59 AM, Craig Dillabaugh wrote: Do you feel the current posting on the Wiki accurately best reflects what work needs to be done on this project. Yeah, it's pretty good. I've thrown out the hosted ARM project (AFAIK gdc and ldc are almost done) and filled in some details for the bare-metal project.
Re: D and Nim
On Monday, 5 January 2015 at 14:40:18 UTC, Ary Borenszweig wrote: You said Computer Science has found that the right default for variables is to have them immutable. I don't think Rust == Computer Science. Otherwise their compiler would be fast (Computer Science knows how to do fast compilers). FWIW, proper computer scientists do not care about making fast compilers... They care about proving properties such as why and when algorithm 1 is faster than algorithm 2 for data sets that approach infinite size, given infinite memory... Computer Science is the stepchild of Discrete Mathematics. Highly impractical, but very useful.
Re: D and Nim
On Monday, 5 January 2015 at 14:22:04 UTC, Ola Fosheim Grøstad wrote: On Monday, 5 January 2015 at 13:47:24 UTC, Paulo Pinto wrote: For C++ there is the Itanium ABI, COM/WinRT on Windows and the upcoming C++17 ABI. If there will be a C++17 ABI and it is adopted, then that will be the beginning of the end for C++ IMO. (Wishful thinking... ;-) For your reference, http://isocpp.org/files/papers/n4028.pdf Yes there are lots of options, still the ones that live longer as system programming languages, are the ones that get OS vendor adoption. So far, it has always been the case. By my definition of system level programming the only adopted system level programming language since the 1980s has been C (and C++ only as C-with-bells-and-whistles). Then you have some fringe languages such as Ada, and now probably also Rust as it is approaching version 1.0. Yes, C, C++, Ada have all been adopted by OS vendors for systems programming (bare metal/full OS stack). I cannot really see Nim or D taking that slot. They appear to have too wide a scope. I think only a focused language that can bring along better optimization and manual memory handling has a chance against C/C++ in system programming. (We have to remember that C/C++ are moving too with various extensions that also are gaining traction: OpenMP, Cilk...) Sadly me neither. I think C++11/14 has improved the language quite a lot. For those willing to wait until 2017, it will look even better, assuming modules and concepts lite get in. Clang/XCode also brought the .NET/JVM tooling capabilities to C++, which is being adopted by other vendors (JetBrains, Microsoft, ...). However, the majority of C++ code out there is mostly pre-C++98 in style. So what I got to learn from CppCon 2014 videos is that I should not miss my C++ days at work. It also remains to be seen what Apple and Microsoft do with their new babies (Swift, .NET Native, Dafny). -- Paulo
Re: call for GC benchmarks
On 01/05/2015 11:26 AM, Benjamin Thaut wrote: If you are interested I might be able to branch off an old revision and make it compile with the latest dmd again. I'm interested in realistically simulating your allocation patterns. That includes types and allocation sizes, allocation order, lifetime and connectivity. Definitely sounds interesting.
Re: GSOC - Holiday Edition
On Monday, 5 January 2015 at 14:46:25 UTC, Martin Nowak wrote: On 01/05/2015 02:59 AM, Craig Dillabaugh wrote: Do you feel the current posting on the Wiki accurately best reflects what work needs to be done on this project. Yeah, it's pretty good. I've thrown out the hosted ARM project (AFAIK gdc and ldc are almost done) and filled in some details for the bare-metal project. Thanks.
Re: http://wiki.dlang.org/DIP25
On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer wrote: On 1/5/15 8:06 AM, deadalnix wrote: On Monday, 29 December 2014 at 20:26:27 UTC, Steven Schveighoffer wrote: On 12/29/14 2:50 PM, Walter Bright wrote: On 12/29/2014 5:53 AM, Steven Schveighoffer wrote: On 12/28/14 4:33 PM, Walter Bright wrote: inout is not transitive, so a ref on the container doesn't apply to a ref on the contents if there's another level of indirection in there. I'm not sure what you mean by this, but inout as a type modifier is definitely transitive. As a type modifier, yes, it is transitive. As transferring lifetime to the return value, it is not. I strongly suggest not to use inout to mean this. This idea would be a disaster. On the other hand, inout IS a disaster, so why not ? I strongly disagree :) inout enables so many things that just aren't possible otherwise. Most recent example: https://github.com/D-Programming-Language/druntime/pull/1079 inout only gets confusing when you start using inout delegates. -Steve IMO, inout (and const/immutable to a degree) is a failure for use with class/struct methods. This became clear to me when trying to use it for the toString implementation of Nullable.
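For readers following along, a reduced D sketch of the pattern Steven is defending (names hypothetical): `inout` lets one function body serve mutable, const, and immutable receivers, carrying the caller's qualifier through to the return type.

```d
struct Container
{
    int[] data;

    // One body instead of three overloads: `inout` stands in for the
    // qualifier of `this` and transfers it to the returned pointer.
    inout(int)* find(int needle) inout
    {
        foreach (ref e; data)
            if (e == needle)
                return &e;
        return null;
    }
}

void main()
{
    auto m = Container([1, 2, 3]);
    const c = Container([1, 2, 3]);
    int* pm = m.find(2);        // mutable in, mutable out
    const(int)* pc = c.find(2); // const in, const out
    assert(*pm == 2 && *pc == 2);
}
```

Without `inout`, `find` would need separate mutable, `const`, and `immutable` overloads with identical bodies, which is the duplication the feature exists to remove.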
Re: D and Nim
On Monday, 5 January 2015 at 14:52:00 UTC, Paulo Pinto wrote: For your reference, http://isocpp.org/files/papers/n4028.pdf Yeah, I saw that one, but when ABI was brought up in one of the CppCon videos I perceived a lack of enthusiasm among the other committee members. Maybe I got the wrong impression, we'll see... It also remains to be seen what Apple and Microsoft do with their new babies (Swift, .NET Native, Dafny). Yes. I bet the management of big corporations often focus more on what the competitors are doing than pure technical merits. So I am pretty sure that Microsoft is keen to have something like Swift, just in case. I guess that means increased internal pressure to make C# shine in the C++ domain...
Re: Phobos colour module?
On Monday, 5 January 2015 at 15:57:32 UTC, Ola Fosheim Grøstad wrote: But I agree that colour theory is solid enough to be considered stable and that it would be a great benefit to have a single library used across multiple projects. It is also very suitable for templated types. Yeah, in my misc repo, there used to be standalone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical. I went with struct Color { ubyte r,g,b,a; } - not perfect, probably not good enough for something like Photoshop, and sometimes the bytes need to be shuffled for different formats, but eh, it works for me.
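The byte shuffling mentioned above is cheap to layer on top of such a struct. A hedged sketch (the helper name is hypothetical, not taken from Adam's color.d):

```d
struct Color
{
    ubyte r, g, b, a;

    // Pack into the 32-bit BGRA layout that many Windows/GDI-style
    // APIs expect; other formats are just different shift orders.
    uint toBGRA() const
    {
        return (cast(uint) b)
             | (cast(uint) g << 8)
             | (cast(uint) r << 16)
             | (cast(uint) a << 24);
    }
}

void main()
{
    auto red = Color(255, 0, 0, 255);
    assert(red.toBGRA() == 0xFFFF_0000);
}
```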
Re: http://wiki.dlang.org/DIP25
On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer wrote: I strongly disagree :) inout enables so many things that just aren't possible otherwise. Most recent example: https://github.com/D-Programming-Language/druntime/pull/1079 inout only gets confusing when you start using inout delegates. -Steve You are arguing that inout is useful. That simply makes it a useful disaster :)
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 12:11, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Iain Buclaw via Digitalmars-d wrote in message news:mailman.4146.1420457999.9932.digitalmar...@puremagic.com... That is correct for user code, but not druntime C bindings. GDC can compile the test in 3568 thanks to the GCC backend giving the va_list struct a name (__va_list_tag). However, it certainly cannot run the program. Only body-less declarations in core.stdc.* are rewritten to ref va_list. Druntime and phobos rely on va_list converting to void*. Should this a) be allowed on platforms where va_list is a pointer, b) always be allowed, or c) never be allowed? For consistency, I would go with (c), as va_list could be anything, even a struct (PPC). That, and people shouldn't (*!*) be manipulating va_list directly, though unfortunately we do for std.format, etc. The only realistic option would be (a).
Re: Phobos colour module?
On Thursday, 1 January 2015 at 06:38:41 UTC, Manu via Digitalmars-d wrote: I've been working on a pretty comprehensive module for dealing with colours in various formats and colour spaces and conversions between all of these. It seems like a hot area for duplicated effort, since anything that deals with multimedia will need this, and I haven't seen a really comprehensive implementation. Indeed, I'll stop you right there: I did one as well in the past, but it was definitely not high enough quality to be interesting to third parties. Does it seem like something we should see added to phobos? Yes.
Re: GSOC - Holiday Edition
On Monday, 5 January 2015 at 11:38:17 UTC, Paulo Pinto wrote: Personally I would chose Netduino and MicroEJ capable boards if I ever do any electronics again as hobby. Given your experience wouldn't D be capable to target such systems as well? Yes, D is perfectly capable of targeting those boards using GDC and potentially even LDC, although LDC still has a few strange bugs [1]. In fact, with the right hackery, I assume D will generate far better code (smaller and faster) than the .Net Micro Framework or MicroEJ. Another interesting offering is the Intel Edison/Galileo boards [2]. I'm under the impression that DMD would be able to generate code for those boards as well. Although those boards are less like microcontrollers and more like micro PCs (e.g. Raspberry Pi, BeagleBone Black) As a hobby, I highly recommend anyone interested getting themselves a board and trying it out. The boards are surprisingly inexpensive. With the right knowledge, it takes very little to get started, and can be quite rewarding to see the hardware come alive with your code. 1. Get yourself a GDC cross-compiler [3], and whatever tools are needed to interface a PC to your board (OpenOCD, or vendor-supplied tools). 2. Throw out Phobos and D Runtime, and create a small object.d with a few stubs as your runtime. 4. Write a simple program (e.g. blinky, semi-hosted hello world [4]) 5. Create a linker script for your board. This can be difficult the first time as you need an intimate understanding of your hardware and how the compiler generates code. 6. Use OpenOCD or your vendor's tools to upload the binary to your board, and bask in the satisfaction of bringing the board to life. You won't be able to use classes, dynamic arrays, and a multitude of other language features unless you find a way to implement them in your runtime, but you will be able to write C-like code only with added bonuses like CTFE, templates, and mixins. 
I'm sure those that actually take the plunge will find it to be a fun, educational, and rewarding exploration. Mike [1] - https://github.com/ldc-developers/ldc/issues/781 [2] - http://www.intel.com/content/www/us/en/do-it-yourself/maker.html [3] - http://wiki.dlang.org/Bare_Metal_ARM_Cortex-M_GDC_Cross_Compiler [4] - http://wiki.dlang.org/Minimal_semihosted_ARM_Cortex-M_%22Hello_World%22
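For step 2 above, the stub runtime really can be tiny. A hedged sketch of a minimal `object.d` (exactly what the compiler requires varies by compiler and version, so treat this as a starting point rather than a spec):

```d
module object;

// Aliases the compiler expects to find in object.d.
alias size_t = typeof(int.sizeof);
alias ptrdiff_t = typeof(cast(void*) 0 - cast(void*) 0);
alias string = immutable(char)[];

// Classes, associative arrays, array concatenation, etc. each pull in
// further runtime hooks; leave them out and the linker errors will
// tell you exactly which features your code still depends on.
```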
Re: Compile for other OS's on Windows?
On Monday, 5 January 2015 at 15:00:05 UTC, Bauss wrote: On Monday, 5 January 2015 at 12:54:00 UTC, Gary Willoughby wrote: On Monday, 5 January 2015 at 11:49:32 UTC, Bauss wrote: Is it possible to compile for other OS's on Windows using dmd? This is what's known as cross compiling and is not currently supported by DMD at this time. Any alternatives? You might be able to lightly tweak ldc to do it: I was able to cross-compile druntime/phobos, their unit tests, and some small sample apps on a linux/x86 host to run on a linux/ARM target. The problem isn't really the D compiler so much as the other needed tools and environment. Dmd and the other D compilers are automatically configured to use your system linker and link against the system's C standard library. Well, optlink or the Microsoft linker on Windows don't know how to link for linux or OS X! So you have to set up linkers and C libraries for every other OS you want to build for on Windows. It's possible: the Android NDK can be installed on Windows with Cygwin and compile C/C++ code for the various Android architectures. But none of the D compilers have gone to all the trouble to provide that cross-compiling support out of the box for all the various OSs they support. It's easier to just run each OS in a VM on top of Windows, as Colin said.
Re: GSOC - Holiday Edition
On Saturday, 3 January 2015 at 03:33:29 UTC, Rikki Cattermole wrote: On 3/01/2015 3:59 p.m., Craig Dillabaugh wrote: On Saturday, 3 January 2015 at 00:15:42 UTC, Rikki Cattermole wrote: On 3/01/2015 4:30 a.m., Craig Dillabaugh wrote: On Thursday, 1 January 2015 at 06:19:14 UTC, Rikki Cattermole wrote: clip 10) Rikki had mentioned a 'Web Development' project, but I don't have enough to post on the project ideas page. Are you still interested in doing this? Yes I am. I don't know what I'm doing in the near future (need a job) so I can't explore this too much. But I know I will be able to mentor for it. Hope that everyone has a great 2015, and I look forward to your feedback. Cheers, Craig It would be great to have you as a mentor, but we definitely need fairly solidly defined projects. Any chance you can come up with something by the end of January? Craig Indeed. I created a list for Cmsed https://github.com/rikkimax/Cmsed/wiki/Road-map#what-does-other-web-service-frameworks-offer Right now it basically comes down to e.g. QR code, bar code, PDF. QR and bar codes aren't that hard; not really a GSOC project. PDF definitely is worthy. PDF is an interesting case: it needs e.g. PostScript support, and preferably image and font loading/exporting. So it might be a worthwhile project, as it expands out into numerous other projects. Thanks. Would you like to add something to the Wiki, or would you prefer if I did so? Also, what license are you using? Cheers, Craig When it comes to my open source code bases I have two rules. - If you use it commercially, at the very least donate what it's worth to you. - For non-commercial use, as long as I'm not held liable, you are free to use it in any way you want. At the very least, get involved, e.g. PRs, issues. So liberal licenses like MIT, BSD. Which are compatible with e.g. BOOST. Please do write up a skeleton for me on the wiki. I can pad it out. Will help to keep things consistent.
I will try to add something in the coming days (hopefully by mid-week). However, I believe you have to pick a specific OSI approved license for the project for it to be considered for GSOC.
Re: call for GC benchmarks
On 05.01.2015 at 17:02, Kiith-Sa wrote: On Monday, 5 January 2015 at 14:52:36 UTC, Martin Nowak wrote: On 01/05/2015 11:26 AM, Benjamin Thaut wrote: If you are interested I might be able to branch off an old revision and make it compile with the latest dmd again. I'm interested in realistically simulating your allocation patterns. That includes types and allocation sizes, allocation order, lifetime and connectivity. Definitely sounds interesting. Maybe make a proxy GC, record all allocations to a file, then replay those allocations as a benchmark? That won't work: not only the allocations are important, but also the pointers between them. Your proposed solution would only work if all pointers within a D program were known and could be recorded.
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 12:13, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Daniel Murphy wrote in message news:m8dv1g$1cg4$1...@digitalmars.com... Druntime and phobos rely on va_list converting to void*. Should this a) be allowed on platforms where va_list is a pointer, b) always be allowed, or c) never be allowed? And what about explicit casts? Casts should always be explicit. I think it would be best if va_list were treated as a type distinct from all others, even if the underlying type is a char* (x86) or void* (ARM OABI).
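A small D illustration of why treating `va_list` as a distinct type is the safe stance: portable code only ever touches it through the `va_start`/`va_arg`/`va_end` primitives, because the underlying representation differs per ABI (a `char*` on x86, a one-element `__va_list_tag` array on x86_64, a struct on PPC).

```d
import core.stdc.stdarg;

// C-style variadic; only the va_* primitives are portable across ABIs,
// never pointer arithmetic on ap itself.
extern (C) int sum(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int total = 0;
    foreach (_; 0 .. count)
        total += va_arg!int(ap);
    va_end(ap);
    return total;
}

void main()
{
    assert(sum(3, 1, 2, 3) == 6);
}
```

Any code that instead casts `ap` to `void*` bakes in one ABI's representation, which is exactly the portability trap under discussion.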
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 13:37, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Daniel Murphy wrote in message news:m8dv49$1cgs$1...@digitalmars.com... And what about explicit casts? Oh yeah, and how does __va_argsave work, why do we need it? Looking at the druntime and phobos code, I'm not sure which stuff is correct, which stuff needs to have the X86_64 version deleted, and which should be moved to va_arg. IIRC, there is some minor duplication between std.format and core.stdc.stdarg, but I think that we really should be able to get things working without changing druntime or phobos.
Re: GSOC - Holiday Edition
On 5 January 2015 at 14:46, Martin Nowak via Digitalmars-d digitalmars-d@puremagic.com wrote: On 01/05/2015 02:59 AM, Craig Dillabaugh wrote: Do you feel the current posting on the Wiki accurately best reflects what work needs to be done on this project. Yeah, it's pretty good. I've thrown out the hosted ARM project (AFAIK gdc and ldc are almost done) and filled in some details for the bare-metal project. Around the time of Dconf 2013, gdc's ARM port was passing the (as of then) D2 testsuite. Things might have changed since though. Regards Iain
Re: http://wiki.dlang.org/DIP25
On Sunday, 4 January 2015 at 01:12:14 UTC, Manu via Digitalmars-d wrote: It's like this: ref is a massive problem when it finds its way into meta. ref is relatively rare today... so the problem is occasional. scope on the other hand will be epic compared to ref. If we infer scope (which we'll probably need to), chances are the vast majority of functions will involve scope. We can't have the trouble with ref (read: trouble with 'storage class') applied to the majority of functions. Hey Manu, I think it would still be a good idea to provide code examples of your points right in the forums. I was able to look at the file from LuaD and see how the problems were occurring, but it would hasten my understanding just to see several 'reduced test cases' of that example and others, if possible.
Re: http://wiki.dlang.org/DIP25
On 1/5/15 4:10 PM, Walter Bright wrote: On 12/30/2014 4:14 AM, Steven Schveighoffer wrote: But I agree. The problem is, most times, you WANT to ensure your code is @safe pure nothrow (and now @nogc), even for template functions. That's a lot of baggage to put on each signature. I just helped someone recently who wanted to put @nogc on all the std.datetime code, and every signature had these 4 attributes except a few. I tried to have him put a big @safe: pure: nothrow: @nogc: at the top, but the occasional exceptions made this impossible. The way to do it is one of: 1. reorganize the code so the non-attributed ones come first 2. write the attributes as: @safe pure nothrow @nogc { ... functions ... } ... non attributed functions ... @safe pure nothrow @nogc { ... more functions ... } To give you an example of why that sucks, imagine that your accessor for member_x is nothrow, but your setter is not. This means you either make an exception, or you just split up obvious file-mates into separate corners. Source control gets confused if one of those attributes changes. Nobody is happy. Grouping by attributes is probably one of the worst ways to have readable/maintainable code. One of the most important reasons why unittests are so successful is that you can just plop the code that tests a function right next to it. So easy to find the code, so easy to maintain when you change the target of the test. Making some way to bundle attributes, or be able to negate currently one-way attributes would go a long way IMO. -Steve
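A reduced sketch of the accessor/setter case Steven describes (type and member names hypothetical), showing how attribute blocks force related functions apart:

```d
import std.exception : enforce;

struct Gadget
{
    private int _x;

    @safe pure nothrow @nogc
    {
        // The getter qualifies for every attribute in the block...
        int x() const { return _x; }
    }

    // ...but the setter validates its input and may throw (and
    // allocate the exception), so it cannot share the nothrow/@nogc
    // block and ends up stranded outside it.
    @safe void x(int v)
    {
        enforce(v >= 0, "negative value");
        _x = v;
    }
}
```

Scale that to a whole module and either the blocks multiply or the getter and setter drift to opposite ends of the file, which is the maintainability cost being argued here.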
Re: @api: One attribute to rule them All
On Monday, 5 January 2015 at 21:25:01 UTC, Daniel N wrote: An alternative could be to use the already existing 'export'. 'extern'. Yeah, something like 'extern (noinfer):'.
Re: @api: One attribute to rule them All
On Monday, 5 January 2015 at 22:00:40 UTC, Zach the Mystic wrote: On Monday, 5 January 2015 at 21:25:01 UTC, Daniel N wrote: An alternative could be to use the already existing 'export'. 'extern'. Yeah, something like 'extern (noinfer):'. Err, yeah, whatever works!
Re: An idea for commercial support for D
On 05/01/15 21:57, Joakim via Digitalmars-d wrote: If you're not paying, you're not a customer. The alternative is to use the bug-ridden OSS implementation you're using now for free, and not have a paid version for those who want those bugs fixed. I don't doubt that some irrational people interpret the existence of a paid version in the way you laid out, and in extreme cases that _can_ happen (just as there are OSS vendors who write bad OSS code just so they can make more money off your favored support model), but that's more an issue with their sloppy thinking than anything else. See, this is where I find _your_ point of view irrational, because you fail to see how straightforwardly damaging closed source can be to adoption. The fact of the matter is that for a great many users, and particularly for a great many corporate adopters of development toolchains, today it matters hugely that the toolchain is free-as-in-freedom. Not free 6 months down the line -- free, now, in its entirety. Non-free code (even temporarily), secret development, etc., are simply deal-breakers for a great many people. A smart business model will engage with this fact and find a way to drive money to development without closing things up. There are also fully open source languages which are fully commercially supported. How do your managers wrap their minds around such a paradox? ;) See, if I was in your shoes, I'd be trying to take on board the feedback about why your proposed model would be unattractive to his managers, rather than making sarcastic points that don't actually identify a conflict with their position. Most commercial adopters are going to consider it very important to have a support option that says, If you have a serious blocker, you can pay us money to guarantee that it gets fixed. They are not going to be at all happy about a support option that says, If we develop a fix, then you are not going to get it in a timely manner unless you pay. 
Understanding that distinction is very important. My point is that such artificial distinctions are silly, whether because of the amount of support or source available. The alternative to paid bug fixes is not that all the bugs you want fixed get done for free: it's _no_ bug fixes, as we see today. For example, selective imports at module scope has been broken for more than eight years now, as those symbols are leaked into any module that imports the module with the selective import. There are many more bugs like that, that could actually be fixed much faster if there were more paid devs working on D. You're talking about the alternative to paid bug fixes as if the only way of having paid bug fixes is to follow your model of locking them away from the wider community. That's simply not true. Having both paid and free versions available is not a paywall on a language. Unless those versions are identical, yes it is. A company is not going to just write a bunch of patches and open source all of them unless they have some complementary business model to go with it, whether google making more mobile revenue off Android or Apple providing clang as the system compiler on OS X and making money off the bundled Mac. So why not focus on creating those complementary business models? That community involvement would still be there for the OSS core with D, but you would get support for a closed patch from the developer who wrote it. ... There is essentially nothing different from this situation and the hybrid model I've described, in terms of the product you'd be using. The only difference is that it wouldn't be a company, but some selection of independent devs. Bottom line: if some individual or group of devs want to try and make a business selling proprietary patches to the DMD frontend, or phobos, the licensing allows them to do that. Good luck to them, and if they want to submit those patches to D mainline in future, good luck to them again. 
However, I don't see it making any sense for a company to invest in proprietary patches to a toolchain, because 99% of the time, when you need a patch written, it's a bugfix. And when you want a bugfix, you don't want a patch that applies only to your version of the toolchain and which you (or your friendly proprietary-patch-writing consultant) have to keep rebasing on top of upstream for the next 6 months -- you want upstream fixed. Otherwise you'll wind up paying far more merely for maintenance of your proprietary extensions, than you would have just to get someone to write a patch and get it straight into the open-source upstream. I also think you place far too much value on privileged/early access to bugfixes. A bug in a programming language toolchain is either a commercial problem for you or it isn't. If it's a commercial problem, you need it fixed, and that fix in itself has a value to you. There is not really any comparable change in value if that
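A minimal two-module sketch (module names illustrative) of the selective-import leak described above, where a symbol selectively imported at module scope becomes visible to importers of that module:

```d
// a.d -- module-scope selective import
module a;
import std.stdio : writeln; // intended to be private to this module

// b.d
module b;
import a;

void main()
{
    // Because of the long-standing bug, writeln leaks through a's
    // selective import and is visible here, even though b never
    // imported std.stdio itself:
    writeln("this should not compile, but it does");
}
```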
Re: http://wiki.dlang.org/DIP25
On Monday, 5 January 2015 at 19:18:34 UTC, Steven Schveighoffer wrote: On 1/5/15 11:51 AM, deadalnix wrote: On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer wrote: I strongly disagree :) inout enables so many things that just aren't possible otherwise. Most recent example: https://github.com/D-Programming-Language/druntime/pull/1079 inout only gets confusing when you start using inout delegates. You are arguing that inout is useful. That simply makes it a useful disaster :) I guess you and me have different ideas of what a disaster is :) -Steve Nop. Great usefulness makes it almost impossible to get rid of in its current form.
Re: Bad error message example
Am 05.01.2015 um 18:51 schrieb Daniel Murphy: Benjamin Thaut wrote in message news:m8eian$21nu$1...@digitalmars.com... Today I had a bad template error message and I though I might post it here so something can be done about it, the error message was: Please report in bugzilla: http://d.puremagic.com/issues/ Done: https://issues.dlang.org/show_bug.cgi?id=13942
Re: Phobos colour module?
On 6 January 2015 at 04:11, via Digitalmars-d digitalmars-d@puremagic.com wrote: On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote: Yeah, in my misc repo, there used to be standalone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical. Yes, that too. I was more thinking about the ability to create an adapter that extracts colour information from an existing data structure and adds context information such as gamma. Then let you build a function that, say, reads floats from 3 LAB pointers and finally returns a tuple with a 16 bit RGB pixel with gamma correction and the residue in a specified format suitable for dithering... ;-] It is quite a common error to do computations on colours that are ignorant of gamma (or do it wrong), which results in less accurate imaging. E.g. when dithering you need to make sure that the residue that is left when doing bit truncation is added to the neighbouring pixels in a linear addition (without gamma). Making stuff like that less tedious would make it a very useful library. I have thought about how to handle residue from lossy-encoding, but I haven't thought of an API I like for that yet. Dithering operates on neighbourhoods of pixels, so in some ways I feel it is beyond the scope of colour.d, but residue is an important detail to enable dithering that should probably be expressed while encoding. Currently, I have a colour template which can be arbitrarily typed and components defined in some user-specified order. It binds the colourspace to colours. 'CTo to(CTo, CFrom)(CFrom colour)' is defined and performs arbitrary conversions between colours. I'm finding myself at a constant struggle between speed and maximising precision.
I feel like a lib should maximise precision, but the trouble then is that it's not actually useful to me... Very few applications care about colour precision beyond ubyte, so I feel like using double for much of the processing is overkill :/ I'm not sure what the right balance would look like exactly. I can make fast-paths for common formats; e.g. ubyte conversions between sRGB/linear can use tables. Performing colourspace conversions in fixed point (where both sides of the conversion are integer types) might be possible without significant loss of precision, but it's tricky... I just pipe through double now, and that's way overkill. I'll make a PR tonight some time for criticism.
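As a sketch of the table-driven fast path mentioned (the function name is illustrative, not from the proposed module), the standard sRGB transfer curve can be precomputed once for all 256 ubyte values, avoiding any floating-point work per pixel:

```d
import std.math : pow;

// Precompute ubyte sRGB -> ubyte linear using the standard sRGB
// transfer function; run once (or at compile time via CTFE).
ubyte[256] buildSRGBToLinearTable()
{
    ubyte[256] table;
    foreach (i; 0 .. 256)
    {
        double c = i / 255.0;
        double lin = c <= 0.04045 ? c / 12.92
                                  : pow((c + 0.055) / 1.055, 2.4);
        table[i] = cast(ubyte)(lin * 255.0 + 0.5); // round to nearest
    }
    return table;
}
```

Per-pixel conversion then becomes a single table lookup instead of a pow call.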
Re: I'll be presenting at NWCPP on Jan 21 at Microsoft
On Monday, 5 January 2015 at 07:46:20 UTC, Walter Bright wrote: http://nwcpp.org/ All are invited. Now I just have to write the presentation :-( That's really funny that this is your topic. I was planning on doing a blog post on almost the exact same thing. I really wish I could come and see it but I don't know how bad busing out there would be. :(
Re: @api: One attribute to rule them All
On 05/01/15 22:14, Zach the Mystic via Digitalmars-d wrote: I get a compiler error. The only way to stop it is to add unnecessary visual noise to the first function. All of these attributes should be something that you *want* to add, not something that you *need*. The compiler can obviously figure out if the function throws or not. Just keep an additional internal flag for each of the attributes. When any attribute is violated, flip the bit and boom, you have your implicit function signature. Bear in mind one quite important factor -- all that alleged noise isn't simply about getting stuff to work, it's about promises that the function makes to downstream users. You do touch on this yourself, but I think you have missed how your @api flag could go wrong. I suggest a new attribute, @api, which does nothing more than to tell the compiler to generate the function signature and mangle the name only with its explicit attributes, and not with its inferred ones. Inside the program, there's no reason the compiler can't continue to use inference, but with @api, the exposed interface will be stabilized, should the programmer want that. Simple. IMHO if anything like this is to be implemented, the extra flag should be to indicate that a function is _not_ intended to be part of the API and that therefore it is OK to infer its attributes. Here's the rationale. Suppose that I have a bunch of functions that are all intended to be part of the public API of my project. I accidentally forget to tag one of them with the @api attribute, so its attributes will be auto-inferred, but the function is still public, so downstream users will wind up using it. 3 months later, I realize my mistake, and add the @api attribute -- at which point downstream users' code will break if their code was relying on the unintended inferred attributes. 
If on the other hand you take the assumption that attributes should by default _not_ be auto-inferred, and you accidentally forget to tag a function to auto-infer its attributes, that can be fixed without breaking downstream. It's quite analogous in this respect to the argument about final vs. virtual by default for class methods.
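A small illustration of the breakage scenario (names hypothetical), using an auto function, which today's D does infer attributes for:

```d
// Library code: the author promises nothing; attributes are inferred
// because the return type is auto.
auto answer() { return 42; } // inferred @safe pure nothrow @nogc

// Downstream user code relying (perhaps unknowingly) on that inference:
int twice() pure { return answer() * 2; }

// If the author later makes answer() impure (e.g. adds logging), or a
// hypothetical @api tag stops the inference, twice() no longer compiles:
// it depended on an attribute that was never explicitly promised.
void main()
{
    assert(twice() == 84);
}
```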
Re: Template function type inference with default arguments
On Sunday, 4 January 2015 at 00:22:01 UTC, ixid wrote: Why don't templates take a type from the default argument if nothing else is supplied? https://issues.dlang.org/show_bug.cgi?id=2803
Re: @api: One attribute to rule them All
On 06/01/15 00:48, Joseph Rushton Wakeling via Digitalmars-d wrote: IMHO if anything like this is to be implemented, the extra flag should be to indicate that a function is _not_ intended to be part of the API and that therefore it is OK to infer its attributes. Hmm. On thinking about this some more, it occurs to me that this might be fundamentally about protection. If it were forbidden to auto-infer attributes for a non-templated public function, then quite a few of my objections above might go away.
Re: @api: One attribute to rule them All
On Monday, 5 January 2015 at 23:48:17 UTC, Joseph Rushton Wakeling via Digitalmars-d wrote: Here's the rationale. Suppose that I have a bunch of functions that are all intended to be part of the public API of my project. I accidentally forget to tag one of them with the @api attribute, A more likely scenario is that your library starts small enough not to need the @api attribute, then at some point it gets really, really huge. Then in one fell swoop you decide to @api: your whole file so that the public interface won't change so often. I'm picking the most extreme case I can think of, in order to argue the point from a different perspective. so its attributes will be auto-inferred, but the function is still public, so downstream users will wind up using it. 3 months later, I realize my mistake, and add the @api attribute -- at which point downstream users' code will break if their code was relying on the unintended inferred attributes. Attribute inference provides convenience, not guarantees. If a user was relying on the purity of a function which was never marked 'pure', it's only convenience which allows him to do it, both on the part of the user, for adding 'pure', and the library writer, for *not* adding it. Adding @api (or 'extern (noinfer)') cancels that convenience for the sake of modularity. It's a tradeoff. The problem itself is solved either by the library writer marking the function 'pure', or the user removing 'pure' from his own function. Without @api, the problem only arises when the library writer actually does something impure, which makes perfect sense. It's @api (and D's existing default, by the way) which adds the artificiality to the process, not my suggested default. It's quite analogous in this respect to the argument about final vs. virtual by default for class methods. I don't think so, because of so-called covariance. Final and virtual each have their own advantages and disadvantages, whereas inferring attributes only goes one way. 
There is no cost to inferring in the general case. My suggestion, (I now prefer 'extern(noinfer)'), does absolutely nothing except to restore D's existing default, for what I think are the rare cases it is needed. I could be wrong about just how rare using extern(noinfer) will actually be, but consider that phobos, for example, just doesn't need it, because it's too small a library to cause trouble if all of a sudden one of its non-templated functions becomes impure. A quick recompile, a new interface file, and now everyone's using the new thing. Even today, it's not even marked up with attributes completely, thus indicating that you never even *could* have used it for all it's worth. Have I convinced you?
Re: Questions about TDPL book
On Tuesday, 6 January 2015 at 03:20:27 UTC, weaselcat wrote: Is it still worth buying TDPL since it's almost 5 years old? I realize classics like K&R C are near timeless, but D has seen a lot of changes. Has the ebook version been updated at all (i.e., with the errata)? How is the physical quality of the print book? Thanks. - Book quality is fine, although the paper is quite thin. I don't think there is that much outdated information in TDPL. I don't have any ideas about the ebook version.
Questions about TDPL book
Is it still worth buying TDPL since it's almost 5 years old? I realize classics like K&R C are near timeless, but D has seen a lot of changes. Has the ebook version been updated at all (i.e., with the errata)? How is the physical quality of the print book? Thanks. -
Re: An idea for commercial support for D
Joseph Rushton Wakeling via Digitalmars-d wrote in message news:mailman.4177.1420498284.9932.digitalmar...@puremagic.com... A company is not going to just write a bunch of patches and open source all of them unless they have some complementary business model to go with it, whether google making more mobile revenue off Android or Apple providing clang as the system compiler on OS X and making money off the bundled Mac. However, I don't see it making any sense for a company to invest in proprietary patches to a toolchain, because 99% of the time, when you need a patch written, it's a bugfix. And when you want a bugfix, you don't want a patch that applies only to your version of the toolchain and which you (or your friendly proprietary-patch-writing consultant) have to keep rebasing on top of upstream for the next 6 months -- you want upstream fixed. Otherwise you'll wind up paying far more merely for maintenance of your proprietary extensions, than you would have just to get someone to write a patch and get it straight into the open-source upstream. This is very important - upstreaming your patches means that the community will maintain them for you. This is why it's useful for a company to develop their own patches and still contribute back upstream.
Conditional functions
Is it possible to use static if in a template structure to have some member functions only for specific types? E.g.: struct Foo(T) { ... T get() { ... } static if(isMutable!T) { void set(T x) { ... } } }
Re: Conditional functions
On Mon, 05 Jan 2015 17:47:09 +, Dominikus Dittes Scherkl wrote: Is it possible to use static if in a template structure to have some member functions only for specific types? Yep. This is actually a frequently used pattern in functions that return ranges.
Re: Conditional functions
On Monday, 5 January 2015 at 17:55:49 UTC, Justin Whear wrote: On Mon, 05 Jan 2015 17:47:09 +, Dominikus Dittes Scherkl wrote: Is it possible to use static if in a template structure to have some member functions only for specific types? Yep. This is actually a frequently used pattern in functions that return ranges. Cool. Every day I'm astonished anew at how cool D really is.
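A self-contained version of the pattern from the question, filling in the elided bodies with the obvious minimal implementations:

```d
import std.traits : isMutable;

struct Foo(T)
{
    private T value;

    T get() { return value; }

    // set() only exists when T is mutable:
    static if (isMutable!T)
    {
        void set(T x) { value = x; }
    }
}

void main()
{
    Foo!int a;
    a.set(3);
    assert(a.get() == 3);

    // For an immutable/const T, set() is simply never generated:
    static assert(!__traits(hasMember, Foo!(const int), "set"));
}
```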
Re: Phobos colour module?
On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote: Yeah, in my misc repo, there used to be standalone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical. Yes, that too. I was more thinking about the ability to create an adapter that extracts colour information from an existing data structure and adds context information such as gamma. Then let you build a function that, say, reads floats from 3 LAB pointers and finally returns a tuple with a 16 bit RGB pixel with gamma correction and the residue in a specified format suitable for dithering... ;-] It is quite a common error to do computations on colours that are ignorant of gamma (or do it wrong), which results in less accurate imaging. E.g. when dithering you need to make sure that the residue that is left when doing bit truncation is added to the neighbouring pixels in a linear addition (without gamma). Making stuff like that less tedious would make it a very useful library.
Re: D and Nim
On Monday, 5 January 2015 at 04:10:41 UTC, H. S. Teoh via Digitalmars-d wrote: On Sun, Jan 04, 2015 at 07:25:28PM -0800, Andrei Alexandrescu via Digitalmars-d wrote: On 1/4/15 5:07 PM, weaselcat wrote: Why does reduce! take the seed as its first parameter btw? It sort of messes up function chaining. Mistake. -- Andrei When are we going to fix this? T monarch dodra tried to make a reduce that was backward compatible while moving the seed but it didn't work out in the end. He was working on a fold which was basically reduce with the arguments swapped but I'm not sure what happened to it.
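For reference, a short sketch of the asymmetry being discussed: reduce takes the seed before the range, so the call can't sit naturally at the end of a UFCS chain.

```d
import std.algorithm : map, reduce;
import std.range : iota;

void main()
{
    auto doubled = iota(5).map!(x => x * 2); // 0, 2, 4, 6, 8

    // Seed first -- the range gets pushed out of chaining position:
    auto sum = reduce!((a, b) => a + b)(100, doubled);
    assert(sum == 120);

    // The chaining-friendly order (range first, seed last) would read:
    // auto sum2 = iota(5).map!(x => x * 2).fold!((a, b) => a + b)(100);
}
```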
Re: For the lulz: ddmd vs libdparse lexer timings
Iain Buclaw via Digitalmars-d wrote in message news:mailman.4157.1420479008.9932.digitalmar...@puremagic.com... For consistency? I would go with (c) as va_list could be anything, even a struct (PPC). That and people shouldn't (*!*) be manipulating va_list directly, though unfortunately we do for std.format, etc. The only realistic option would be (a). I think the only code that needs to manipulate va_list directly is low-level enough that forcing use of a union or *cast(void**)va is reasonable. I think I've got a handle on this, sort of. I've moved the declaration of __va_argsave into the glue layer, and added intrinsic detection for va_start/va_end/va_arg (the two-arg form). I've implemented them in the backend for win32 and they have passed a simple test! I'll run some more extensive tests tomorrow, and then have a look at some other platforms. Do you think we can change _all_ the druntime and phobos code to just use va_arg directly? It would be nice to have it all portable like that.
Bad error message example
Today I had a bad template error message and I thought I might post it here so something can be done about it; the error message was: /usr/include/dlang/dmd/std/conv.d(278): Error: template instance isRawStaticArray!() does not match template declaration isRawStaticArray(T, A...) I was not using isRawStaticArray anywhere in my code. And the error was generally not helpful at all, and yes, that was all of it. So after staring at my source file for 10 minutes I finally found the line that caused the error: s.amount = to!double(); I accidentally deleted the contents between the two parentheses. It would be great if dmd would give a more meaningful error message in this case. Why are the 'instantiated from' messages sometimes given and sometimes not? Kind Regards Benjamin Thaut
Re: Bad error message example
Benjamin Thaut wrote in message news:m8eian$21nu$1...@digitalmars.com... Today I had a bad template error message and I thought I might post it here so something can be done about it, the error message was: Please report in bugzilla: http://d.puremagic.com/issues/
Re: For the lulz: ddmd vs libdparse lexer timings
On 5 January 2015 at 17:44, Daniel Murphy via Digitalmars-d digitalmars-d@puremagic.com wrote: Iain Buclaw via Digitalmars-d wrote in message news:mailman.4157.1420479008.9932.digitalmar...@puremagic.com... For consistency? I would go with (c) as va_list could be anything, even a struct (PPC). That and people shouldn't (*!*) be manipulating va_list directly, though unfortunately we do for std.format, etc. The only realistic option would be (a). I think the only code that needs to manipulate va_list directly is low-level enough that forcing use of a union or *cast(void**)va is reasonable. I think I've got a handle on this, sort of. I've moved the declaration of __va_argsave into the glue layer, and added intrinsic detection for va_start/va_end/va_arg (the two-arg form). I've implemented them in the backend for win32 and they have passed a simple test! I'll run some more extensive tests tomorrow, and then have a look at some other platforms. Do you think we can change _all_ the druntime and phobos code to just use va_arg directly? It would be nice to have it all portable like that. Oh, yeah, do it! You have references to __va_argsave in phobos, don't you?
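A sketch of the portable direction being suggested: D's typesafe variadics with core.vararg's va_arg, never touching the underlying va_list representation directly.

```d
import core.vararg;

// Sums any number of int arguments using only va_arg, with no
// platform-specific manipulation of the va_list itself:
int sumInts(...)
{
    int total = 0;
    foreach (i; 0 .. _arguments.length)
    {
        assert(_arguments[i] == typeid(int)); // this sketch expects only ints
        total += va_arg!int(_argptr);
    }
    return total;
}

void main()
{
    assert(sumInts(1, 2, 3) == 6);
}
```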
Re: What exactly shared means?
On Monday, January 05, 2015 12:59:26 via Digitalmars-d-learn wrote: I am also not overly happy with D making TLS default. That means new threads instantiate a lot of unused memory if the workload is heterogeneous (different threads do different type of work). TLS only make sense for things that all threads need. Well, if you don't like the choice of TLS by default, then you're going to be unhappy with shared and its related issues regardless. Personally, I think that having TLS be the default is a fantastic improvement over C++ and that it results in much cleaner and safer code, especially since the vast majority of code only lives on one thread anyway if you're dealing with threads cleanly. But it's definitely true that what we're up to is an experiment in how to handle TLS and shared storage, and by no means have we gotten it perfect. - Jonathan M Davis
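A small illustration of the storage classes in question: D globals are thread-local by default, with __gshared and shared as the explicit opt-outs.

```d
import core.thread : Thread;

int perThread;        // TLS by default: each thread gets its own copy
__gshared int global; // classic C-style global, unsynchronized
shared int sharedVal; // global, and the type system knows it's shared

void main()
{
    perThread = 1;
    auto t = new Thread({
        assert(perThread == 0); // fresh copy in the new thread
        perThread = 42;         // does not affect main's copy
    });
    t.start();
    t.join();
    assert(perThread == 1); // main's copy untouched by the other thread
}
```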