Re: Fastest JSON parser in the world is a D project
Am Fri, 13 Jul 2018 18:14:35 + schrieb iris : > Any idea about the performance of this json parser? > https://jsonformatter.org/json-parser ? That one is implemented in client side JavaScript. I didn't measure it, but the closest match in Kostya's benchmark could be the Node JS entry that is an order of magnitude slower. -- Marco
Re: DIP 1016--ref T accepts r-values--Community Review Round 1
Am Sat, 21 Jul 2018 19:22:05 + schrieb 12345swordy : > On Saturday, 21 July 2018 at 08:55:59 UTC, Paolo Invernizzi wrote: > > > Frankly speaking, my feeling is that D is becoming a horrible > > mess for the programmer... > > > /Paolo > How!? Please Explain. You need to demonstrate evidence instead of > appeal to emotional fallacy by resorting to "feels". > > -Alexander

The DIP increases consistency, recalling that rvalues are already accepted:

- for the implicit 'this' parameter in methods
- in foreach loop variables declared as ref

No more special rules: rvalues are implicitly promoted to lvalues where needed. The feeling probably comes from the inevitable realization that the community is pluralistic and Dlang acquired a lot of features that go towards someone else's vision of a good PL. Some want a relaxed stance towards breaking changes, some want C++ or ObjC compatibility, some want to know what assembly a piece of code compiles to or have soft realtime constraints that don't work with a system language's mark-and-sweep GC. Is D2 messier than D1? Sure it is, and it caters to more use cases, too. As soon as you substantiate which exact feature is adding to the horrible mess, someone (often a group) will jump to defend it, because they have a good use case or two. It is kind of ironic that in order to do better than C++ you have to support most of what modern C++ compilers offer and end up having tons of unrelated features that make the language just as bloated as C++ after a decade of community feedback. It is a system PL. I think it needs to be this way, and it is a lot cleaner with basic data types and more expressive still, lacking a lot of C++'s legacy. -- Marco
Re: DIP 1016--ref T accepts r-values--Community Review Round 1
Am Fri, 20 Jul 2018 10:33:56 -0600 schrieb Jonathan M Davis : > On Friday, July 20, 2018 15:50:29 meppl via Digitalmars-d wrote: > > On Friday, 20 July 2018 at 13:21:11 UTC, Jonathan M Davis wrote: > > > On Friday, July 20, 2018 05:16:53 Mike Parker via Digitalmars-d > > > > > > wrote: > > >> ... > > > > > > ... > > > Allowing ref to accept rvalues goes completely against the idea > > > that ref is for passing an object so that it can be mutated and > > > have its result affect the caller. With this DIP, we'd likely > > > start seeing folks using ref all over the place even when it > > > has nothing to do with having the function mutating its > > > arguments, and that's not only error-prone, but it obfuscates > > > what ref was originally intended for. > > > ... > > > > So, if `immutable` would be the default in D, you would be okay > > with "DIP 1016"? Because then people would have to write `ref > > mutable` for mutation > > No. I don't see why that would help at all. > > Honestly, if D had immutable by default, I'd probably quit D, because it > would make the language hell to use. Some things make sense as immutable but > most don't. If I wanted that kind of straitjacket, I'd use a language like > Haskell. > > But regardless, even if I could put up with such a default, it wouldn't > help, because the use case that Manu is trying to solve would be using ref > mutable just like the use cases that ref is normally used for now. There > needs to be a distinction between the case where ref is used because the > intent is to mutate the object and the case where the intent is to avoid > having to copy lvalues and mutation is acceptable but not the goal. C++ > solves this by using const, since once const is used, mutation is no longer > an issue, so the refness is clearly to avoid copying, but with how > restrictive const is in D, it probably wouldn't solve much, because many use > cases couldn't use const. We really do need a mutable equivalent to C++'s > const&.
But Manu's solution unnecessarily destroys the distinction between > ref being used as a means to mutate the argument and ref being used to avoid > copying the argument. Since we can't use const for that, we really need a > new attribute. > > - Jonathan M Davis

I understand the distinction you make, but I don't feel strongly about it. Vector math can use const in D just fine as there are no indirections involved, so my functions would have `ref const` arguments all over the place to discriminate between "for performance" and "for mutation". That said, I'm not opposed to some other keyword or @tribute either. -- Marco
Re: Dscanner - DCD - Dfix ... Editor support or the lack of it.
Am Sat, 27 Jan 2018 07:54:37 -0500 schrieb Steven Schveighoffer: > If I had to write swift code without xcode, it would take me so much > extra time, because there are things you just aren't going to get done > without the tools. Swift's libraries are also vast and IMO confusingly > named.

Same thing with Java. Without an IDE you see ridiculously long names and a lot of typing. But they do follow conventions that are understood by Java IDEs. The dummy implementation of an interface, for example, is always called ...Adapter and can be auto-generated. All byte streams end in ...Stream, and so on. This makes it easy to have mnemonics handy: "I'm looking for a buffered input stream". So you type BIS, auto-complete, and the IDE expands that to BufferedInputStream and takes care of the necessary import. Some languages are developed with IDE support in mind, but are then limited in expressiveness and not friendly to plain editors. -- Marco
Re: Dscanner - DCD - Dfix ... Editor support or the lack of it.
Am Sat, 27 Jan 2018 14:58:27 -0800 schrieb "H. S. Teoh": > I use ctags with vim, and it's amazingly efficient: two keystrokes and > I'm right at the right file in the right place on top of the definition > of an identifier. Less than 1 second. Yet when I work with my coworker, > who uses a fancy GUI-based IDE, he has to pull up the search function, > re-type the identifier that the cursor is already sitting on, then wait > for the thing to slowly churn through 50,000 source files looking for a > pattern match, then reach for the mouse and drag the scrollbar down a > long list of possible matches, then open the file, then navigate to the > right place in the file. An order of magnitude slower.

His IDE doesn't seem all that fancy. All the ones I came across had "Jump to definition" (Visual Studio, IntelliJ, NetBeans, Eclipse, MonoDevelop, Turbo Delphi, Qt Creator, CoEdit):

F3 in Mono-D
Ctrl+Shift+Up in CoEdit

In Mono-D, when you type the opening '(' of a function call, it pops up the documentation and shows you which argument you are currently at. The arrow keys can be used to pick another function overload, with the current one displayed as e.g. 1/3 in the corner.

> As for renaming files, what has that got to do with Vim? It's just > ctrl-Z, `mv orig.d dest.d`. Maybe followed by `git add dest.d`. Two > seconds max. Again, being unable to work with the OS efficiently is not > a sign of an inherent flaw of the OS, just the inexperience of the user.

In a well working IDE like Eclipse for Java, you'd click on the file, press F2 or whatever the key for a rename is, type the new name (without extension), press Enter, and it would handle the file rename, the git add and updating all references to the module/class in the loaded projects. Eat that, editor user. :p I for one can't do Ctrl+Z, "mv orig.d dest.d" and "git add dest.d" in 2 seconds. I'm a slow typist. :( And that still doesn't update all my imports referring to orig.d.

> T

-- Marco
Re: Trello group for build tools, IDEs, OS integration?
Am Mon, 23 Apr 2018 16:02:18 + schrieb Seb: > I agree that the current Bugzilla instance is suboptimal for > collaboration. > > FYI: there's already a DLang Trello board, but it wasn't actively > used: > > https://trello.com/b/XoFjxiqG/active > https://trello.com/b/jGdlx9vZ/backlog > > GitHub Projects seem to be more successful so far, but not many > people use them either: > > https://github.com/dlang/phobos/projects > https://github.com/dlang/dmd/projects

Anyway, I was thinking less of splitting work into units and assigning it to milestones, and more of a special interest group that just talks and does nothing except make decisions and write them down in a place where all tool devs and package maintainers would look when something they want to work on may be interesting to others in the group. None of the technical solutions we use has convinced me so far. -- Marco
Trello group for build tools, IDEs, OS integration?
I am not familiar with Trello (I have no account there), but I noticed that it is easier to keep track of important issues over there than on a newsgroup. I was thinking that the group of people who work on the language, runtime and Phobos have only a small overlap with those working on what's in the subject line, Martin being a prominent exception. People involved with the ecosystem have or _have had_ their own pet peeves. To name a few:

* Installation paths
  https://forum.dlang.org/thread/20131112205019.12585bbd@marco-leise
* What files to include in the binary releases
  https://forum.dlang.org/post/aymaziydrfhapuiur...@forum.dlang.org
* dub and existing tooling or system package managers
  https://github.com/dlang/dub/issues/845
  https://github.com/dlang/dub/issues/342
* New `version` identifiers for build tools (like dub's Have_*)
  https://forum.dlang.org/thread/scszysuevvjrjhahr...@forum.dlang.org

It often revolves around establishing standards for a better user experience or better separating responsibilities. People in the group would include package maintainers for various OSs, IDE developers and tool developers. The discussions need to remain visible for future reference, and the outcomes should be collected to become something like a code style guide, just for those involved with the ecosystem. Bugzilla could be put to use for this, but I think it is less accessible.

1. Is this a good idea at all?
2. If yes, could affairs be split up enough in the current Trello group or would a separate group be ideal?
3. I'm thinking that the takeaway of the discussions should end up on wiki.dlang.org in an "ecosystem" category. Maybe start with one page and then move things to sub pages as sections grow.

The goal is to have everyone on the same page, not everything. -- Marco
Re: Is sorted using SIMD instructions
Am Thu, 12 Apr 2018 09:37:58 + schrieb Stefan Koch: > On Thursday, 12 April 2018 at 07:25:27 UTC, Per Nordlöw wrote: > > Neither GCC, LLVM nor ICC can auto-vectorize (and use SIMD) the > > seemingly simple function
> >
> > bool is_sorted(const int32_t* input, size_t n) {
> >     if (n < 2) {
> >         return true;
> >     }
> >
> >     for (size_t i = 0; i < n - 1; i++) {
> >         if (input[i] > input[i + 1])
> >             return false;
> >     }
> >
> >     return true;
> > }
> >
> > Can D's compilers do better? > > > > See http://0x80.pl/notesen/2018-04-11-simd-is-sorted.html > > I highly doubt it. > This cannot be auto-vectorized. > There is no way to prove the loop dimensions. > > It'd be a different story if you unrolled the loop by hand. > I guess then you'd see gcc and clang putting some simd in there > maybe.

I had it triggered in gcc one day, when I changed from 3 ubyte color components to 4 in a struct, and I do believe in both cases it was padded to 4 bytes. One should probably read a bit into what the respective backends can detect and modify code to match that. D compilers are ultimately limited by what the backends offer in terms of auto-vectorizing non-SIMD code. It is easy to vectorize a loop without pointer aliasing that performs C=A+B, but your code above requires complex logic. There is no SIMD instruction in the SSE series that would check if an array is sorted as far as I know. Instead you'd load one vector starting at index 0 and another starting at index 1 of the same "input" array and compare them piece-wise. That's a difficult problem for the compiler:

* one of the vectors is always going to be an unaligned load, which may degrade performance
* most of the data is loaded twice

You could alternatively load only one vector and shuffle that to compare N-1 items at a time, but at that point it feels like asking the compiler to automatically create source code for your problem. As if one just says: "Hey, I need to print all primes up to 100." And the compiler understands what to do.
-- Marco
Re: D compiles fast, right? Right??
Am Tue, 3 Apr 2018 16:15:35 -0700 schrieb "H. S. Teoh": > On Tue, Apr 03, 2018 at 10:59:13PM +, Atila Neves via Digitalmars-d wrote: > > That sentence might as well be fingernails on a blackboard for me! I > > save compulsively. Whenever I stop typing, C-x C-s it is for me. > > […] I can't imagine not saving for long periods of time.

Me too, but for me it was Delphi 2005/2006 with frequent IDE crashes that got me started doing this, and dmd eating into swap memory when I write messy CTFE code that keeps me doing Ctrl+S before any action that has any likelihood of stalling the system. I also don't trust my own code. -- Marco
Re: Deprecating this(this)
Am Mon, 2 Apr 2018 11:57:55 -0400 schrieb Andrei Alexandrescu: > Problem is we don't have head-mutable in the language. Yes, for built-in > slices the mechanism is simple - just change qualifier(T[]) to > qualifier(T)[]. For a struct S, there is no way to convert from > qualifier(S) to tailqualifier(S). > > I plan to attack this directly in the DIP - provide a way for structs to > express "here's what implicit conversion should be applied when doing > template matching". > > Andrei

You are hitting a prominent type system flaw here. What may look like a hurdle on the path to fixing this(this) is also at the core of getting "shared" into a good shape, and it probably affects how we will discuss "immutable destructors" and their kin in the future. The question is: "How transitive is a qualifier when we strip it top-level on an aggregate?" In https://issues.dlang.org/show_bug.cgi?id=8295 I've been arguing for removing all qualifiers on shallow copies, and the case you mentioned, where top-level qualifiers are stripped for template matching, reconfirms that there is generally some merit to that semantic, which should be explored. Shared structs need elaborate code to be copied, that's for sure. There may be a mutex to be used, or values may be copied using atomic loads. The result would be what you dubbed "tailqualifier(S)", i.e. in the case of shared, a thread-local copy of the fields that make up the struct. But then it starts to become messy:

* Are there cases where we want references contained in the struct to become unshared, too?
* If yes, what if these references were marked shared themselves in the struct's definition?
* If all fields become unshared, shouldn't the now superfluous mutex be removed from the struct?

If so, what started out as a bit blit now produces a different type entirely. I'm interested to hear more of your thoughts on "tailqualifier(S)". -- Marco
Re: Why think unit tests should be in their own source code hierarchy instead of side-by-side
I understand your opinion and I think it is all reasonable. You talk about longer compile times since every D module is like a C++ header. That touches one of my pet peeves with the language or ecosystem as it stands, and I wonder if you would agree with me on the following: Libraries should be tied into applications using interface files (*.di) that are auto-generated by the compiler for the _library author_ with inferred function attributes. If after a code change, a declaration in the *.di file changes, the library's interface changed and a new minor version must be released. The language must allow explicitly declaring a function or method as @gc, impure, etc. so the auto-inferred attributes don't later become an issue when the implementation changes from e.g. a pure to an impure one. Opaque struct pointers as seen in most C APIs should also be considered for *.di files to reduce the number of imports for member fields. That means:

* No more fuzziness about whether a library function will remain @nogc, @safe, etc. in the next update.
* Explicit library boundaries that don't recursively import the world.

-- Marco
Re: Efficient way to pass struct as parameter
Am Wed, 3 Jan 2018 10:57:13 -0800 schrieb Ali Çehreli: > On 01/03/2018 10:40 AM, Patrick Schluter wrote: > > On Tuesday, 2 January 2018 at 23:27:22 UTC, H. S. Teoh wrote: > >> > >> When it comes to optimization, there are 3 rules: profile, profile, > >> profile. I used to heavily hand-"optimize" my code a lot (I come from > >> a strong C/C++ background -- premature optimization seems to be a > >> common malady among us in that crowd). > > > > That's why I always tell that C++ is premature optimization oriented > > programming, aka as POOP. > > […] > > That's why I like producer functions that return values:
>
> vector<int> makeInts(some param) {
>     // ...
> }
>
> And if they can be 'pure', D allows them to be used to initialize > immutable variables as well. Pretty cool! :) > > Ali

May I add, this is also optimal performance-wise. The result variable will be allocated on the caller stack and the callee writes directly to it. So even POOPs like me do it. -- Marco
Re: functions allowed to overload on const int vs int vs immutable int? + spec is not accurate
Am Fri, 26 Jan 2018 19:45:54 + schrieb timotheecour: > this compiles, but equivalent in C++ (const int vs int) would > give a > compile error (error: redefinition of 'fun'); what's the > rationale for > allowing these overloads? > > ``` > void fun(int src){ writeln2(); } > void fun(immutable int src){ writeln2(); } > void fun(const int src){ writeln2(); } There is also `inout` and `shared`. Granted, for basic types it makes no difference, but once references are involved you benefit from keeping the qualifier. A const or mutable string must be assumed to change after the function returns. Immutable strings on the other hand can be copied by reference. Immutable stuff is also implicitly shared, so can be used by multiple threads without synchronization. Optimizations for mutable versions only are also possible, like in-place editing. If your implementation is the same for immutable and mutable qualifiers, just provide the const version, but keep in mind that the type qualifiers are lost then. Any functions you call further down will also use their "const" implementation. Doing this as an overload of course helps with generic programming and maintenance. You can migrate from "const" only to providing multiple implementations without inventing new names or breaking code. It may also have to do with how C++'s const is not much more than a static check, while D's immutable is both transitive and supposed to give strong guarantees that the object never changes. That assumption is somewhat tied to the idea that immutable objects are either in read-only sections of the executable or that the garbage collector will keep them alive until all references are gone. -- Marco
Re: Lazily parse a JSON text file using stdx.data.json?
Am Sun, 17 Dec 2017 10:21:33 -0700 schrieb David Gileadi: > On 12/17/17 3:28 AM, WebFreak001 wrote: > > On Sunday, 17 December 2017 at 04:34:22 UTC, David Gileadi wrote: > > uh I don't know about stdx.data.json but if you didn't manage to succeed > > yet, I know that asdf[1] works really well with streaming json. There is > > also an example how it works. > > > > [1]: http://asdf.dub.pm > > Thanks, reading the whole file into memory worked fine. However, asdf > looks really cool. I'll definitely look into it next time I need to deal > with JSON.

There is also the JSON parser from https://github.com/mleise/fast if you need to parse 2x faster than RapidJSON ;) -- Marco
Re: Advertise D's great compatibilty with JavaScript
Am Fri, 13 Oct 2017 17:57:12 + schrieb John Gabriele: > Why do you choose Lua? Whatever replaces Javascript (and compiles > to wasm) will be used for large apps, like how Javascript is > currently used. My understanding is that Lua is not particularly > well suited for building large apps.

I agree with that. Whenever the time comes to make adjustments to the Lua code, I miss the good old "the compiler will tell me where a type became incompatible" style of refactoring. As is common in scripting languages, nothing stops you from having a typo in the name of a property you want to assign a new value to. It'll just create a new one. -- Marco
Re: Beta 2.076.1
P.S.: The directory layout could be improved as well. Currently there is:

src\
+-dmd
+-druntime
+-phobos

But druntime in posix.mak:10 expects a src directory inside the dmd directory:

dmd\
+-src

So effectively the directory names have to be swapped for that to work. At that point the superfluous directories for the other operating systems, containing only dmd.conf, could be removed as well. Other than that I'm happy with the package, as it provides the man pages, pre-built HTML documentation and a binary to bootstrap dmd on systems that lack a D compiler. (The use case being compilation from source for Gentoo Linux.) -- Marco
Re: Beta 2.076.1
Could you include the "default_ddoc_theme.ddoc" and "config.sh" in the source releases? The sources cannot be compiled without them. -- Marco
Re: @safe(bool)
Am Sun, 20 Aug 2017 00:29:11 + schrieb Nicholas Wilson: > On Saturday, 19 August 2017 at 17:10:54 UTC, bitwise wrote: > > I'm still concerned about having to read code that's laced full > > of custom attributes, the resolution of which may span several > > files, templates, etc. > > > > I also think this type of thing could have a detrimental effect > > on modularity when you end up having to include "myAttribs.d" > > in every single file you want to work on. I would much rather > > have a flexible in-language solution, or a solution that didn't > > require me to define my own attributes. > > Having worked on a project with a lot of attributes, my > suggestion would be to import it via a package.d, you'll be > importing that anyway. +1 A bigger project /usually/ has some default imports. Typical use cases are unifying compiler versions and architectures, back-porting new Phobos features and custom error handling and logging. For example, in dub the modules in the "internal" package are imported into most of the bigger modules. You can also create a file template with documentation header, license and default imports if you need to create a lot of modules. -- Marco
Re: Dynamic array leak?
Am Fri, 11 Aug 2017 18:44:56 + schrieb bitwise: > […]

That can't work, and here is why: Druntime employs a conservative GC that will treat several things as potential pointers to GC memory. Off the top of my head: the entire stack, as well as void[] arrays and unions that contain pointers. Some integer variable on the stack or a chunk of a void[] that happens to have the same value as your GC pointer will keep it alive. Same for a union of an integer and a pointer where you have set the integer part to the address of your GC memory chunk. These misidentifications (false pointers) can become a real problem on 32-bit systems, where due to the small address space many things can look like valid pointers and keep GC memory alive that should long since have been recycled. P.S.: Also keep in mind that if you were to run multi-threaded, the ptr you test for could have been recycled and reassigned between GC.collect() and GC.addrOf(). Some unittesting frameworks for example run the tests in parallel. -- Marco
Re: __dtor vs __xdtor
Am Fri, 11 Aug 2017 17:10:14 + schrieb bitwise: > Ok thanks. > > I don't understand why you would ever want to call __dtor > then...is it possible to have only __dtor without also having > __xdtor? Like, if I want to call a struct's destructor, do I have > to check for both, or can I just always check for, and call > __xdtor?

I think it was simply that all the special methods needed a symbol name, so this() was called __ctor and ~this() was called __dtor. It was never supposed to cover field destruction, mixed-in destructors or inheritance in classes. User code was not expected to call these directly anyway. Not very long ago, __xdtor and __xpostblit were introduced that wrap up the entire finalization and copy operation. __dtor will remain as the 1:1 representation of the ~this() method. -- Marco
Re: readText with added null-terminator that enables sentinel-based search
Am Tue, 08 Aug 2017 20:48:39 + schrieb Nordlöw: > Has anybody written a wrapper around `std.file.readText` (or > similar) that appends a final zero-byte terminator in order to > realize sentinel-based search in textual parsers. What do you mean by similar? There are many ways to load a file into memory before appending \0. In fast.json I used a memory mapped file. On some OSs you can read past the end of such mappings safely to generate extra \0 bytes. 16 zero-bytes is a good amount if you want to use the SSE4.2 string instruction. https://github.com/mleise/fast/blob/master/source/fast/json.d#L1464 -- Marco
Re: returning D string from C++?
Am Sat, 05 Aug 2017 20:17:23 + schrieb bitwise:

> virtual DString getTitle() const {
>     DString ret;
>     ret.length = GetWindowTextLength(_hwnd) + 1;
>     ret.ptr = (const char*)gc_malloc(ret.length, 0xA, NULL);
>     GetWindowText(_hwnd, (char*)ret.ptr, ret.length);
>     return ret;
> }

In the interest of due diligence: you are casting an ANSI string into a UTF-8 string, which will result in broken Unicode for non-ASCII window titles. In any case it is better to use the wide-character versions of Windows API functions nowadays (those ending in 'W' instead of 'A'). Starting with Windows 2000, the core was upgraded to UTF-16[1], which means you don't have to implement the lossy conversion to ANSI code pages and end up like this ...

[information loss]
UTF-8 <-> Windows codepage <-> UTF-16
  |                              |
in your code              inside Windows

... but instead directly pass and get Unicode strings like this ...

UTF-8 <-> UTF-16
  |
in your code

string to zero terminated UTF-16: http://dlang.org/phobos/std_utf.html#toUTF16z
zero terminated UTF-16 to string: ptr.to!string() or just ptr[0..len] if known

Second, I'd like to mention that you should have set

ret.length = GetWindowText(_hwnd, (char*)ret.ptr, ret.length);

Currently your length is anything from 1 to N bytes longer than the actual string[2], which is not obvious because any debug printing or display of the string stops at the embedded \0 terminator.

[1] https://en.wikipedia.org/wiki/Unicode_in_Microsoft_Windows
[2] https://msdn.microsoft.com/de-de/library/windows/desktop/ms633521(v=vs.85).aspx

-- Marco
Re: all OS functions should be "nothrow @trusted @nogc"
Am Tue, 1 Aug 2017 10:50:59 -0700 schrieb "H. S. Teoh via Digitalmars-d": > On Tue, Aug 01, 2017 at 05:12:38PM +, w0rp via Digitalmars-d wrote: > > Direct OS function calls should probably all be treated as unsafe, > > except for rare cases where the behaviour is very well defined in > > standards and in actual implementations to be safe. The way to get > > safe functions for OS functionality is to write wrapper functions in D > > which prohibit unsafe calls. > > +1.

I think I got it now!

size_t strlen_safe(in char[] str) @trusted
{
    foreach (c; str)
        if (!c)
            return strlen(str.ptr);
    return str.length;
}

:o) -- Marco
Re: newCTFE Status July 2017
Am Sun, 30 Jul 2017 14:44:07 + schrieb Stefan Koch: > On Thursday, 13 July 2017 at 12:45:19 UTC, Stefan Koch wrote: > > [ ... ] > > Hi Guys, > > After getting the brainfuck to D transcompiler to work, > I now made its output compatible with newCTFE. > > See it here: > https://gist.github.com/UplinkCoder/002b31572073798897552af4e8de2024 > > Unfortunately the above code does seem to get mis-compiled, > As it does not output Hello World, but rather: > > Funny, it is working and mis-compiling at the same time.

I figure with such complex code, it is working if it ends up *printing anything* at all and not segfaulting. :) -- Marco
Dlang + compile-time contracts
Coming from D.learn, where someone asked for some automatism to turn runtime format strings to `format()` into the equivalent `format!()` form to benefit from compile-time type checks, I started wondering... The OP wasn't looking for benefits of the template version other than argument checking and didn't consider the downsides either. So maybe there is room for improvement using runtime arguments. So let's add some features:

1) a compile-time "in" contract, run on the argument list
2) functionality to promote runtime arguments to compile-time

string format(string fmt)
in(ctfe)
{
    // Test if argument 'fmt' is based off a compile-time
    // readable literal/enum/immutable
    static if (__traits(isCtfeConvertible, fmt))
    {
        // Perform the actual promotion
        enum ctfeFmt = __traits(ctfeConvert, fmt);
        static assert(ctfeFmt == "%s", "fmt string is not '%s'");
    }
}
body
{
    return "...";
}

Note that this idea is based on existing technology in the front-end. Compare how an alias can stand in for a CT or RT argument at the same time:

void main()
{
    const fmt1 = "%x";
    auto fmt2 = "%s";
    aliasTest!fmt1;
    aliasTest!fmt2;
}

void aliasTest(alias fmt)()
{
    import std.stdio;
    static if (__traits(compiles, { enum ctfeFmt = fmt; }))
        // "Promotion" to compile time value
        enum output = "'fmt' is '" ~ fmt ~ "' at compile-time";
    else
        string output = "'fmt' is '" ~ fmt ~ "' at runtime";
    writeln(output);
}

This prints:

'fmt' is '%x' at compile-time
'fmt' is '%s' at runtime

For technical reasons a compile-time "in" contract cannot work in nested functions, so all the CTFE contracts need to be in top-level, user-facing code. That means in practice, when there are several formatting functions, they'd extract the implementation of the compile-time contract into separate functions. I have no idea how exactly that scales, as the `static if (__traits(isCtfeConvertible, …))` stuff has to remain in the contract. (It's probably ok.)
Extending the CTFE promotion to any variables that can be const-folded is not part of this idea, as it leaves a lot of fuzziness in language specification documents and results in code that produces errors in one compiler, but not in another. Since some people will still find it beneficial, it should be a compiler vendor extension and print a warning only on contract violations. -- Marco
Re: Compile Time versus Run Time
Am Mon, 31 Jul 2017 15:43:21 + schrieb Martin Tschierschke: > As a rookie in D programming I try to understand the power of > templated functions with compile time parameters. With DMD 2.074 > a compile time format > (auto output = format!("Print this %s")(var);) > > was introduced, now we all know that very many of this format > strings are immutable, so wouldn't it be cool to automatically > detect this and use the compile time version? > > Without the need to think about it and to use an other syntax? > Is this theoretically possible? I see no way to accomplish this. For the compiler to see the contents of the format string it needs to be a template argument and as soon as you want to also allow runtime values to be accepted there, you need to use an alias, which in turn precludes the use of function results or concatenation. Believe me I've spent quite some time on trying something like this for format strings. > Regards mt. As far as using template arguments for code optimizations go, I know that at least GCC will turn runtime arguments into template arguments of sorts internally, thereby creating duplicates of the function with one or more arguments optimized out. On the other hand, you don't want to drive this too far. While it is nice to have compile-time checks, templates are actually troublesome on some levels. For example, the duplicated code makes it hard to cache the formatting function in the CPU and when writing libraries you always have to provide the full implementation that will get linked into the host application, which is a concern under certain licensing schemes. The benefits of shared libraries, like loading the code into memory once and use it by multiple processes or fixing security issues in one central place and have all programs use the new code without recompilation are also void. I.e. 
without template arguments, `format()` and all its dependencies (templates as well as regular functions) are compiled right into the Phobos shared library for all programs to use. If there is a security issue, it can be replaced with a patched version. With template arguments, `format!()` is compiled into each Dlang application, once for every distinct format string, and security fixes cannot be applied without recompiling them all. -- Marco
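To make the distinction concrete, here is a minimal sketch contrasting the two call styles (the compile-time variant exists since DMD 2.074):

```d
import std.format : format;

void main()
{
    // Runtime format string: parsed and checked only when the call executes.
    string a = format("Print this %s", 42);

    // Compile-time format string: passed as a template argument, so the
    // compiler can verify it against the argument types - and a copy of
    // format!() is instantiated per distinct format string.
    string b = format!"Print this %s"(42);

    assert(a == b && a == "Print this 42");
}
```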
Re: Error 1: Previous Definition Different : _D3gtk3All12__ModuleInfoZ (gtk.All.__ModuleInfo)
Am Fri, 28 Jul 2017 22:53:52 + schrieb FoxyBrown: > After upgrading to latest dmd and having to rebuild gtk, I now > get the following error > > Error 1: Previous Definition Different : > _D3gtk3All12__ModuleInfoZ (gtk.All.__ModuleInfo) > > > in my apps that were previously working(no changes, opened up old > app and tried to build it and it didn't work). All I did was > upgrade dmd2. Expect potential changes to the ABI in every Dlang version. After a compiler upgrade you have to rebuild all libraries. That's why for the Gentoo Linux packages I maintain, there is a separate library path for each Dlang version, compiler vendor and architecture (i.e. 32-bit/64-bit). That way an upgrade doesn't affect existing apps at the cost of manually maintaining the list of compilers (and versions) you want to install GtkD for. > So tired of D and it's crap ;/ So unstable in so many ways. About > 10% as productive overall than other languages I've used. It's > error messages are about as helpful as a rock. Error messages did get better, but generic functions will always end up looking more verbose than e.g. errors for C function calls. -- Marco
Re: @safe and null dereferencing
Am Thu, 27 Jul 2017 17:59:41 + schrieb Adrian Matoga: > On Thursday, 27 July 2017 at 17:43:17 UTC, H. S. Teoh wrote: > > On Thu, Jul 27, 2017 at 05:33:22PM +, Adrian Matoga via > > Digitalmars-d wrote: [...] > >> Why can't we just make the compiler insert null checks in > >> @safe code? > > > > Because not inserting null checks is a sacred cow we inherited > > from the C/C++ days of POOP (premature optimization oriented > > programming), and we are loathe to slaughter it. :-P We > > should seriously take some measurements of this in a large D > > project to determine whether or not inserting null checks > > actually makes a significant difference in performance. > > That's exactly what I thought. A good non-synthetic worst case would be a test suite that happens to trigger a lot of null checks. (The inserted check could initially be a function call that counts checks per run of the executable, to help pick a suitable project to measure.) -- Marco
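To illustrate what would be measured, here is a hand-written sketch of the kind of guard the compiler would insert (the explicit check below is hypothetical and written by hand, not an actual DMD lowering):

```d
int deref(int* p) @safe
{
    // hypothetical compiler-inserted guard before the dereference:
    if (p is null)
        assert(0, "null dereference in @safe code");
    return *p;
}

void main() @safe
{
    auto p = new int;
    *p = 7;
    assert(deref(p) == 7);
}
```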
Re: An Issue I Wish To Raise Awareness On
Am Thu, 20 Jul 2017 08:56:57 + schrieb Kagamin: > On Wednesday, 19 July 2017 at 12:56:38 UTC, Marco Leise wrote: > > That's exactly what I was opposing in the other post. These > > handles are opaque and never change their value. Within the > > Dlang language barrier they can be immutable and as such, > > implicitly shared. > > Given transitivity of immutability the handle should have the > same immutability as the resource it represents. I understand that you apply D keywords to C types in a best-fit fashion, to get some errors from the compiler when you use them in the wrong context. It should work alright in some APIs (maybe you can show me an example). But since C does not have these qualifiers, one function may be used for shared and unshared resources, or the library may even be compiled with or without thread-safety enabled, and you have to query that at *runtime*, where you have no help from the type-system. So I believe relying on the pattern will at times be frustrating, as it is not general enough. What I seek to achieve by slapping immutable on things like file descriptors and opaque types is the use as hash table keys. As the hashed part of hash table keys must not change, this approach enables us to use these types as keys and statically verify immutability, too. Because opaque structs and integer handles are used in C APIs to *hide* the implementation details, the compiler is deliberately left blind. That FILE* could as well be an integer as far as transitivity of `immutable` goes. Nothing the compiler *can see of it* will ever change. And that's all the type system will ever care about really! Even if part of that hidden structure is actually returned mutably by some function, it is just an implementation detail. Whether you slap `immutable` on anything there is mostly cosmetic. Now what does that mean for type checks in practice? 1) Looking at POSIX' C `fgetc()` function, the stream is a plain FILE*: `int fgetc(FILE *stream)`. 
Since I/O is thread-safe[1], it should ideally be `shared` the way you put it. And the way I look at it, it should be `immutable` on *our* side of the language barrier. (I.e. Dlang won't be able to change its contents or see the contents change.) 2) You can look up items by file descriptor or FILE* in hash table implementations with immutable keys. So we can take this away: * Making a struct opaque is implicitly making it immutable, because its contents can neither be modified nor read directly - the compiler cannot reason about them. * Now we also have a layman's head-const, making resource pointers usable as immutable hash table keys. As you can see, my thinking revolves around the idea that hash table keys must be immutable, and that stems from the idea that once hashed and sorted into a table, the hashed data must not change. There are other approaches and druntime's AAs simply allow mutable keys:

void main()
{
    struct S { int* i; }
    int a = 1, b = 2;
    uint[S] aa;
    aa[S()] = 42;
    foreach (ref s_key; aa.byKey())
        s_key.i = &b; // Allows "innocent" changes to the keys
    foreach (ref s_key; aa.byKey())
    {
        uint u = aa[s_key]; // AA doesn't find the key it just returned! Range violation.
    }
}

-- Marco
Re: proposed @noreturn attribute
Am Wed, 19 Jul 2017 12:13:40 + schrieb Moritz Maxeiner: > On Wednesday, 19 July 2017 at 11:35:47 UTC, Timon Gehr wrote: > > a value of type bottom can be used to construct a value for any > > other type. > > AFAIK from type theory, bottom is defined as having no values (so > one can't reason about the relationship of such non-existent > value(s) to values of other types). 2018, Dlang is now an esoteric language. After a long bike-shedding the "bottom type" has been named "nirvana" and assigning it to a variable of any other type signifies intent to give the program a reincarnation. On Posix this was efficiently implemented via fork and exec; the Windows implementation is still suffering from bad vibes (bugs). Phobos comes in several flavors now, because it was discovered that one Phobos can never be enough to capture all the world's paradigms and was considered the main offender to peace on the forums. So there is now an assembly-optimized fast Phobos for performance fans with neither safety nor GC; a type theory Phobos that tries hard to hide the fact that structs have a fixed data layout and makes types first class citizens, but doesn't interop with C at all; an auto-decoding Phobos; and a batteries-included Phobos with database drivers, audio, image and GUI bindings. -- Marco
Re: An Issue I Wish To Raise Awareness On
Am Wed, 19 Jul 2017 08:50:11 + schrieb Kagamin: > On Tuesday, 18 July 2017 at 19:24:18 UTC, Jonathan M Davis wrote: > > For full-on value types, it should be a non-issue though. > > Not quite. Value types include resource identifiers, which may > have threading requirements, e.g. GUI widget handles and OpenGL > handles, assuming they are thread-safe and making them implicitly > shared would be incorrect. That's exactly what I was opposing in the other post. These handles are opaque and never change their value. Within the Dlang language barrier they can be immutable and as such, implicitly shared. Your thinking is less technical, trying to find a best fit between type system and foreign API, so that only handles with a thread-safe API may become `shared`. I like the idea, but it is impractical. It sometimes depends on whether a library was compiled with multi-threading support or not, and a value type can be copied from and to shared anyway, rendering the safety argument void:

int x;
shared int y = x;
int z = y;

-- Marco
Re: An Issue I Wish To Raise Awareness On
Am Tue, 18 Jul 2017 18:10:58 + schrieb Atila Neves: > Now I've read your post properly: there is only one destructor. > With the fix I mentioned, just don't defined the `shared` > version, there's no need to. Postblit is still a problem, however. > > Atila The issue is wider than just `shared` by the way: https://issues.dlang.org/show_bug.cgi?id=13628 Some may jump to say that an immutable struct can't be destructed, but my perspective here is that immutable only applies to what the compiler can introspect. A file descriptor or an opaque struct pointer from a C API are just flat values and escape the compiler. They can be stored in an immutable struct and still need `close()` called on them. Layman's head-const :p -- Marco
Re: dmd and Archlinux
Am Tue, 11 Jul 2017 06:21:33 -0600 schrieb Jonathan M Davis via Digitalmars-d: > On Tuesday, July 11, 2017 12:00:51 PM MDT Seb via Digitalmars-d wrote: > > @mleise: OP is using the testing repos where the PIE enforcement > > already landed [1], but libphobos.a isn't built with -fPIC on > > Arch yet. > > The Arch developers, however, are aware of the need to rebuild > > libphobos [2], but maybe we can simply build Phobos with -fPIC by > > default on Posix [3]. > > > > [1] https://www.archlinux.org/todo/pie-rebuild > > [2] https://bugs.archlinux.org/task/54749 > > [3] https://github.com/dlang/phobos/pull/5586 > > PIC and PIE may make sense from security perspective, but they've sure made > dealing with some the Linux distros over the last year or so a bit of a > pain. > > - Jonathan M Davis It adds to security and adds another shovel of dirt on the grave of x86. X86 needs to emulate access to global constants in PIE, while amd64 is using offsets relative to the instruction pointer. RIP x86. (pun intended!) -- Marco
Re: dmd and Archlinux
Am Sun, 09 Jul 2017 18:35:09 + schrieb Antonio Corbi: > Hi! > > Are there any news about the status of packaging dmd for > archlinux? > > The last dmd compiler packaged is 2.074.0 and since the last > batch of updated packages in archlinux, dmd generated objects > fail to link with libphobos with erros like these: > > /usr/bin/ld: /usr/lib/libphobos2.a(object_a_66e.o): relocation > R_X86_64_32 against `.rodata.str1.1' can not be used when making > a shared object; recompile con -fPIC > /usr/bin/ld: /usr/lib/libphobos2.a(object_b_58c.o): relocation > R_X86_64_32 against `.rodata.str1.1' can not be used when making > a shared object; recompile con -fPIC > /usr/bin/ld: /usr/lib/libphobos2.a(object_c_7f4.o): relocation > R_X86_64_32 against `.rodata.str1.1' can not be used when making > a shared object; recompile con -fPIC > /usr/bin/ld: /usr/lib/libphobos2.a(object_d_a07.o): relocation > R_X86_64_32 against `.rodata.str1.1' can not be used when making > a shared object; recompile con ... > > A. Corbi The linker gold (since binutils 2.27) is problematic with DMD and would emit such errors. (Is ld a symlink to ld.gold in this case?) Also binutils 2.28 breaks how DMD's druntime loads shared libraries. You have to either downgrade binutils or use the hack of adding '-fPIC' when compiling executables that link against shared libraries. binutils 2.26 should be the most compatible version for the time being. -- Marco
Re: Types: The Next Generation (Was: Why is phobos so wack?)
Am Sun, 9 Jul 2017 16:22:16 -0400 schrieb "Nick Sabalausky (Abscissa)": > […] a sufficiently-smart compiler could conceivably even > choose "runtime" vs "compile-time" (or even, "it varies") > based on optimization priorities. GCC already does this, i.e. find runtime arguments of constant value and generate a second instance of the function with that argument optimized out. -- Marco
Re: proposed @noreturn attribute
Am Sat, 8 Jul 2017 03:15:39 -0700 schrieb Walter Bright: > […] > > Having an @noreturn attribute will take care of that: > > @noreturn void ThisFunctionExits(); > > Yes, it's another builtin attribute and attributes are arguably a failure in > language design. The 'none' return type sounds interesting. Because a @noreturn function is also a void function, it would be practical to implement this as a void sub-type or a compiler-recognized, druntime-defined "type intrinsic". On the other hand, the attribute solution has worked well for the existing compilers in practice, and such a rarely used tag doesn't add significantly to the meticulous D programmer's list: "pure @safe nothrow @nogc". -- Marco
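For reference, a sketch of how the proposed attribute would read in code (hypothetical: @noreturn is not part of the language, so this does not compile today):

```d
@noreturn void fail(string msg)
{
    throw new Exception(msg);
}

int positive(int x)
{
    if (x <= 0)
        fail("x must be positive");
    // With @noreturn the compiler knows the branch above never falls
    // through, so flow analysis accepts the single return below.
    return x;
}
```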
Re: Checked vs unchecked exceptions
Am Thu, 06 Jul 2017 13:16:23 + schrieb Moritz Maxeiner: > On Thursday, 6 July 2017 at 11:01:26 UTC, Marco Leise wrote: > > Am Thu, 06 Jul 2017 01:31:44 + > > schrieb Moritz Maxeiner : > > > >> But to be clear (and the title and description of any DIP > >> addressing this should reflect this): > >> These are not checked exceptions, because checked exceptions > >> would require bar to declare its exception set manually. > > > > Yep, absolutely clear. Just like "auto a = 1" does not declare > > a variable as we all know declarations start with a type. > > Red herring. > […] > > --- > void foo() throws AExp throws BExc { ... } > void bar1() { foo(); } // Checked exceptions require this to > result in a compilation error > void bar2() throws AExc throws BExc { foo(); } // this must be > used for checked exceptions > --- You are right, it was a red herring. The code example makes it very obvious that inference means letting the exceptions slip through unchecked and out of main() in the wildest case. > Invalid premise. The definition of checked exceptions is de facto > fixed by Java [1], because it not only coined the term but > remains the only major PL to use them. That's right, but still one can distill general ideas and leave implementation details aside. Pretty much like the Platonic Ideal. Then you look at what the complaints are with the current implementation and see if you can satisfy all sides. 
I don't know if this is any good beyond an example of a different implementation of checked exceptions, but here is one option against the "every function in the call chain accumulates more and more 'throws' declarations":

/**
 * Throws:
 *   ZeroException when i is 0
 *   NonZeroException when i is not 0
 */
void foo(int i)
{
    if (i == 0)
        throw new ZeroException();
    throw new NonZeroException();
}

void bar(int i) @check_exceptions
{
    foo(i); // Error: The following exceptions are not handled:
            //   ZeroException thrown from foo() when i is 0
            //   NonZeroException thrown from foo() when i is not 0
}

I.e. everything stays the same until a programmer needs a verification of what (s)he should/could handle right away, what needs to be wrapped and what can be passed further up the call chain. That's close to impossible now in deeply nested code. Resource unavailability prone to race conditions can often be handled by asking the user to fix the issue and continue, for example (including network, disk space, RAM, video encoding hardware slots, exclusive microphone use). In other cases an exception is only thrown when an incorrect argument is passed. Knowing (statically) that you pass only good values, you can catch the exception and turn it into an assert instead of passing it up the call chain, potentially allowing the caller to be nothrow. -- Marco
Re: GtkD nothing
Am Thu, 06 Jul 2017 03:49:04 + schrieb FoxyBrown: > Unfortunately, importing that module seems to throw an error for > some insane reason. > > Error 42: Symbol Undefined _D3gtk6All12__ModuleInfoZ > (gtk.AllGTK.__ModuleInfo) > > without importing it in to the project(but directly importing all > the other modules works fine, e.g., copying and pasting). > Sounds like there is no compiled version of gtk.All in the GtkD lib. You could compile it together with your application as a workaround. -- Marco
Re: gdc is in
Am Wed, 21 Jun 2017 15:11:39 + schrieb Joakim: > the gcc tree: > > https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html > > Congratulations to Iain and the gdc team. :) > > I found out because it's on the front page of HN right now, where > commenters are asking questions about D. I missed this thread. So the persistence paid off in the end. Congratulations from me, too. Frankly, I wasn't sure if this day would ever come. -- Marco
Re: Checked vs unchecked exceptions
Am Thu, 06 Jul 2017 01:31:44 + schrieb Moritz Maxeiner: > But to be clear (and the title and description of any DIP > addressing this should reflect this): > These are not checked exceptions, because checked exceptions > would require bar to declare its exception set manually. Yep, absolutely clear. Just like "auto a = 1" does not declare a variable, as we all know declarations start with a type. Instead of defining checked exceptions however it best fits all your posts in this thread, why not say "Checked exceptions as implemented in Java were a good idea, had they allowed the compiler to infer them where possible." Of course we can still call those inferred checked exceptions something else. ;o) -- Marco
Re: Checked vs unchecked exceptions
In general, I'm of the opinion that a DLL API should be fully explicit. It's the only way you can reasonably provide ABI stability. That means no templates, no attribute inference and explicit exception lists, if the compiler were to use them. -- Marco
Re: Checked vs unchecked exceptions
Am Wed, 28 Jun 2017 11:04:58 + schrieb Moritz Maxeiner: > One could also make an exception for bodyless functions and allow > specification of the exception set *only* there, e.g. > > --- > // Allow "checked exceptions" for stubs only > void foo() throws AException throws BException; > --- Ah, come on, now that you see the need you can also embrace it as an option for people who want to add some documentation:

void foo()
    throws AException  // When the ant's life ends prematurely
    throws BException; // When the ant gets distracted by a lady bug

-- Marco
Re: Linux linker errors with binutils >= 2.27
This is the bug report for binutils 2.28 and the "module is already defined in" error: https://issues.dlang.org/show_bug.cgi?id=17375 Basically you need to add -fPIC to the command line for the executable when linking against shared objects. One way to do so is to add that switch to dmd.conf. Now that x86 32-bit is dying I feel that's acceptable in most situations. -- Marco
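For example, on a 64-bit system the relevant dmd.conf section could be extended like this (the include and library paths are illustrative and vary by installation; only the trailing -fPIC is the actual change):

```
[Environment64]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -L-L/usr/lib64 -fPIC
```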
Re: Linux linker errors with binutils >= 2.27
Am Wed, 5 Jul 2017 22:14:47 +0300 schrieb ketmar: > Marco Leise wrote: > > > Since binutils 2.27 I can not compile dmd with the `gold` > > linker any more. On my Gentoo Linux amd64 host, when linking > > the 32-bit version of druntime the following error is > > repeatedly output for all kinds of symbols: > > > > libdruntime.so.a(libdruntime.so.o): > > relocation R_386_GOTOFF against preemptible symbol > mangled name> cannot be used when making a shared object > > > > Now starting with binutils 2.28 - in addition to the above - > > dynamically linked Dlang executables error out in druntime > > with an error message of this type: > > > > Fatal Error while loading '/usr/lib64/libphobos2.so.0.74': > > The module '' is already defined in ''. > > > > Does anyone have solutions to one or both of these issues at > > hand? For Gentoo packages I made shared linking the default > > and unless my configuration is a unique snowflake this is > > right now affecting all DMD users there. > > From reading `checkModuleCollisions()` > > [https://github.com/dlang/druntime/blob/master/src/rt/sections_elf_shared.d#L891] > > it would seem that changes in copy-relocations have something > > to do with the second issue it. > > 1. patch your compiler to use old ELF_COMDAT hack in elfobj.c: > #define ELF_COMDAT TARGET_LINUX > +#undef ELF_COMDAT > +#define ELF_COMDAT 0 > > 2. downgrade to old binutils, new binutils are not working with dmd .so's. > > step "2" is essential. alas. That information is gold, ketmar! -- Marco
Linux linker errors with binutils >= 2.27
Since binutils 2.27 I cannot compile dmd with the `gold` linker any more. On my Gentoo Linux amd64 host, when linking the 32-bit version of druntime the following error is repeatedly output for all kinds of symbols: libdruntime.so.a(libdruntime.so.o): relocation R_386_GOTOFF against preemptible symbol cannot be used when making a shared object Now starting with binutils 2.28 - in addition to the above - dynamically linked Dlang executables error out in druntime with an error message of this type: Fatal Error while loading '/usr/lib64/libphobos2.so.0.74': The module '' is already defined in ''. Does anyone have solutions to one or both of these issues at hand? For Gentoo packages I made shared linking the default, and unless my configuration is a unique snowflake this is right now affecting all DMD users there. From reading `checkModuleCollisions()` [https://github.com/dlang/druntime/blob/master/src/rt/sections_elf_shared.d#L891] it would seem that changes in copy-relocations have something to do with the second issue. -- Marco
Re: {OT} My machines are dead
Am Tue, 06 Jun 2017 12:46:21 + schrieb Stefan Koch: > Hi Guys, > > bad news my dev machine and the backup machine are dead. > > The replacement hard-drive I ordered a week ago came today, but > that's no longer the issue. > If someone in the North Rhine-Westphalia area, wants get rid of a > working computer, please contact me :) > > Cheers, > Stefan It's not 1st of April! You can't just say that your hard-drive and your backup failed at the same time and all your CTFE work is now up in the clouds! -- Marco
[OT] I-frame cutting in H.264
Am Sat, 27 May 2017 22:19:11 + schrieb Era Scarecrow: > Only if you have to recompress it. Some tools like VirutalDub > allow you to chop and copy without altering the data stream (it's > good for taking out commercials or shortening clips). Although I > wouldn't be surprised if you wanted to add a logo, do some fading > or some fancy stuff, at which point direct stream copying won't > work. Ok, but even then your source material would ideally have to be encoded with the same codec and parameters. A different resolution would not work, while a change in frame rate is tolerable. H.264 also makes cutting on I-frames more difficult than previous codecs as they don't clear the reference frame buffer. So following frames could still reference frames before the cut, resulting in artifacts. An actual keyframe for the start of the video, jumping to chapters or seeking is now called IDR (Instantaneous Decoder Refresh), which is also an I-frame, hence why old cutting tools like VirtualDub don't see the difference. Another low cost video editor that offers "smart rendering" without re-encoding is PowerDirector. They provide an option to use only IDR-frames or any I-frame for cuts (as you would do in VirtualDub), for cases where in your source material they are _actually_ proper keyframes with no cross-references. I tried the latter on footage shot with a Sony digicam and it resulted in a broken video stream. So unless you know that every I-frame is an IDR in your sources it is not advisable to use them as cut points. -- Marco
Re: avoid extra variable during void pointer cast
Am Mon, 15 May 2017 19:30:00 + schrieb Bauss: > pragma(inline, true); doesn't actually do what you think it does. > In lining is always done whenever possible and that only tells > the compiler to spit out an error if it can't inline it. A compiler doesn't simply inline whenever it can. A big function that's called often would lead to massive code duplication in that case. What I meant pragma(inline, true) to do is overrule this cost calculation. Since the OP asked for no extra function calls, the error on failure to inline seemed appropriate. Cross-module inlining may fail for example on some compiler(s) or with separate compilation. -- Marco
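A minimal sketch of the pragma in question; with DMD, failure to inline a pragma(inline, true) function is reported instead of silently falling back to a call:

```d
pragma(inline, true)
int addOne(int x)
{
    return x + 1; // trivial body, so inlining should always succeed
}

void main()
{
    assert(addOne(41) == 42);
}
```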
Re: D on AArch64 CPU
Am Sun, 14 May 2017 15:11:09 + schrieb Richard Delorme: > Or should I wait for an offcial support of this architecture? You ARE the official support now. :) -- Marco
Re: avoid extra variable during void pointer cast
Am Sun, 14 May 2017 20:18:24 + schrieb Kevin Brogan: > I have a piece of code that takes a callback function. > > The callback has the signature void callback(void* state, void* > data) > > There are several of these functions. All of them use state and > data as differing types. > > As an example, let's look at one that uses both of them as int*. > > addInt(void* state, void* data) > { > *cast(int*)state += *cast(int*)data; > } > > Is it not possible to specify the cast as an alias so that I can > declare the cast once at the beginning of the function? > > Something like this? > > addInt(void* state, void* data) > { > alias _state = cast(int*)state; // Error: basic type > expected, not cast > alias _data = cast(int*)data; // Error: basic type expected, > not cast > > *_state += *_data; > } No, that is not possible. An alias can only be assigned a symbol. > I can always do this: > > addInt(void* state, void* data) > { > int* _state = cast(int*)state; > int* _data = cast(int*)data; > > *_state += *_data; > } > > But I don't want to create a new variable and assign it everytime > I call the function. The examples I'm using are contrived, but in > the c code I am porting this from, the callback gets called > thousands of times a second, every optimization matters, and the > variables are used many times per function. I don't want to > riddle the code with casts if i can avoid it and I don't want to > create and destroy useless proxy variables every time the > function is called. Let the compiler optimize the assignment away and don't worry much about it. Inlining also works well within the same module. 
In this case here I would probably use "consume" functions as I dub them:

import std.traits;

pragma(inline, true) /* Not really needed I hope ;) */
ref T consume(T)(ref void* data) if (!hasIndirections!T)
{
    T* ptr = cast(T*)data;
    data += T.sizeof;
    return *ptr;
}

You can then rewrite your addInt function like this:

void add(T)(void* state, void* data) if (isNumeric!T)
{
    state.consume!T += data.consume!T;
}

-- Marco
Re: "I made a game using Rust"
Am Fri, 12 May 2017 05:42:58 + schrieb Lewis: > Ah okay. If I understand correctly, the "game" itself is just the > two .d files in the Scripts folder, which get compiled then > linked with the prebuilt .lib for the engine. If so, a 10-12s > compile just for those two files still sounds really long to me. > Either I'm misunderstanding, or there's something bizarre going > on that's causing those build times. You can't tell how long a compile takes from looking at the size of the modules you pass to the linker. Unlike C++, Dlang lacks header files that could short-circuit import chains. What this means is that all the imports in a module (except inside templates) are analyzed recursively. In the game at hand this only means parsing the entire engine code base in addition to the two modules, but projected onto LibreOffice or other big projects it would parse the code of so many libraries that compilation times would sky-rocket. (I told this story here once or twice before, but haven't actively done anything to improve the situation.) Things you can do today:

- If an import is only used in templates, copy it inside of them. This works today and stops the import from being analyzed unless the template is instantiated during the compilation. Multiple instances won't pessimize compile times.
- Move imports from the top level into the functions that use them. There is no other way to say "This import is not needed by our public interface".
- Don't turn everything and the dog into a template. Nested templates stretching over several modules require analysis of all those modules along the way, while a regular function may stop right there.
- Following all the above, generate .di files for your library's exported modules. All the imports inside non-template functions will be removed and only what's needed by the public interface (i.e. argument types and return types) is kept.

Several ideas spread in the past. 
Off the top of my head:

- Walter had the idea to make imports evaluate lazily, and improve the situation without resorting to .di files.
- Some issues with the .di file generation stop it from being widely used, for example by dub when installing libraries.
- The "export" keyword could be narrowed down to be the only visibility level that goes into the .di files created for a library. (Removing public and package visibility symbols and the imports pulled in by them in turn.)

Last but not least, with single-file compilation like in C++ you may still shoot yourself in the foot in Dlang, because when you update a template and forget to recompile all modules that instantiate it, it is left in the old state there and causes conflicts at link time or runtime errors. Disclaimer: If you find mistakes in the above or something simply isn't true any more, please give an update on the current situation. -- Marco
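To illustrate the "move imports into functions and templates" advice from the post above (module and symbol names are made up for illustration):

```d
// A top-level import is analyzed for every module that imports this one:
import std.stdio : writeln;

string stringify(T)(T value)
{
    // A local import inside a template is only analyzed when the
    // template is actually instantiated somewhere.
    import std.conv : to;
    return value.to!string;
}

void main()
{
    assert(stringify(42) == "42");
    writeln(stringify(42));
}
```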
Re: dmd: can't build on Arch Linux or latest Ubuntu
Am Wed, 10 May 2017 20:59:33 +0300 schrieb ketmar: > > On Wednesday, 10 May 2017 at 11:51:03 UTC, Atila Neves wrote: > >> I can't build dmd on Arch Linux anymore. I'm told it's because of a > >> binutils update. Annoying. > > > yeah, the great thing. i was hit by that bus too, and had to mutilate > druntime to make unittests work again. lucky me, i'm using static phobos. > don't even want to think about the scale of the disaster for people with > libphobos2.so... O.O Like what will happen to us on Gentoo with libphobos2.so when binutils 2.28 moves to stable here? And can it all be solved by compiling with -fPIC, as is required for hardened installations anyway, or are there dmd versions that flat out won't work any longer? -- Marco
Re: The D ecosystem in Debian with free-as-in-freedom DMD
Am Wed, 03 May 2017 01:02:38 + schrieb Moritz Maxeiner: > On Tuesday, 2 May 2017 at 23:27:28 UTC, Marco Leise wrote: > > Am Tue, 02 May 2017 20:53:50 + > > schrieb Moritz Maxeiner : > > > >> On Tuesday, 2 May 2017 at 19:34:44 UTC, Marco Leise wrote: > >> > > >> > I see what you're doing there, but your last point is > >> > wishful thinking. Dynamically linked binaries can share > >> > megabytes of code. Even Phobos - although heavily templated > >> > - has proven to be very amenable to sharing. For example, a > >> > "Hello world!" program using `writeln()` has these sizes > >> > when compiled with `dmd -O -release -inline`: > >> > > >> > static Phobos2 : 806968 bytes > >> > dynamic Phobos2 : 18552 bytes > >> > > >> > That's about 770 KiB to share or 97.7% of its total size! > >> > Awesome! > >> > >> Is all of that active code, or is some of that (statically > >> knowable) never getting executed (as in could've been removed > >> at compile/link time)? > > > > I guess David gave you the answer. So it's just 95.4% of its > > total size. :p > > Under the assumption that ldc2 produces no dead code in the > output; is that a reasonable assumption (I'm not sure)? > > > By the way, is the fully dynamic linking version possible with > > ldc2 now as well ? > > I did have a modified ebuild to try that out a while back and it > seemed to work fine in my limited testing scope. Since I quite > often change installed d compilers and don't want my programs > (like tilix) to stop working (or have old d compiler versions > being kept installed *just* because some programs were built with > them), I generally link against phobos statically, anyway, so I > my tests weren't exhaustive. The cmake flag to to use in the > ebuild would be BUILD_SHARED [1]. > > [1] > https://github.com/ldc-developers/ldc/blob/v1.2.0/CMakeLists.txt#L522 I remember seeing this line and thinking "aww .. no way to build both versions?". I.e. 
enable the "static-libs" USE flag to _also_ build static libraries accompanying the shared ones. I'll create an issue asking about that. -- Marco
Re: The D ecosystem in Debian with free-as-in-freedom DMD
Am Tue, 02 May 2017 20:53:50 + schrieb Moritz Maxeiner: > On Tuesday, 2 May 2017 at 19:34:44 UTC, Marco Leise wrote: > > > > I see what you're doing there, but your last point is wishful > > thinking. Dynamically linked binaries can share megabytes of > > code. Even Phobos - although heavily templated - has proven to > > be very amenable to sharing. For example, a "Hello world!" > > program using `writeln()` has these sizes when compiled with > > `dmd -O -release -inline`: > > > > static Phobos2 : 806968 bytes > > dynamic Phobos2 : 18552 bytes > > > > That's about 770 KiB to share or 97.7% of its total size! > > Awesome! > > Is all of that active code, or is some of that (statically > knowable) never getting executed (as in could've been removed at > compile/link time)? I guess David gave you the answer. So it's just 95.4% of its total size. :p By the way, is the fully dynamic linking version possible with ldc2 now as well ? -- Marco
Re: The D ecosystem in Debian with free-as-in-freedom DMD
Am Tue, 11 Apr 2017 15:03:36 + schrieb qznc: > On Tuesday, 11 April 2017 at 12:56:59 UTC, Jonathan M Davis wrote: > > But if we just use dub - which _is_ the official packaging and > > build tool - then we avoid these issues. Ideally, the compiler > > and dub would be part of the distro, but libraries don't need > > to be. And it sounds like that's basically how the Go and Rust > > folks want to function as well. So, it would make sense for > > these languages to simply not have their libraries be included > > in distros. The build tools are plenty. > > This is not compatible with Debian. Debian requires to include > *everything*. You must be able to build every package without > network access only from source packages. > > Essentially, somebody must fix this dub issue: > https://github.com/dlang/dub/issues/838 I created a similar issue 3 years ago: https://github.com/dlang/dub/issues/342 A key point is how the environment variables everyone agreed upon made "make" a flexible build tool: it can use a user-specified compiler, installation directory, compiler flags and more. This flexibility is relied on by Linux packaging tools to produce binaries for a specific target. Then there is the problem that at some point you rely on non-Dlang libraries that only the system package manager can provide and update. In the past this always ended in the system package manager winning over the language tool as the common program to update and clean the system. Dicebot and bioinfornatics (Fedora) felt that dub was for developers and wasn't tailored to system administration at that time. p0nce, s-ludwig and markos (Debian) also gave input and the net result was: A Makefile generator for dub shall be written that exposes the desired environment variables and can be used by existing Linux package managers. -- Marco
Re: The D ecosystem in Debian with free-as-in-freedom DMD
Am Tue, 11 Apr 2017 07:40:12 -0700 schrieb Jonathan M Davis via Digitalmars-d: > It could always just be distributed as a static library. There arguably > isn't much point in distributing it as a shared library anyway - > particularly when it's not ABI compatible across versions. I actively avoid > having phobos as a shared library on my systems, because it just causes > versioning problems when programs are built against it. All of those > problems go away with a static library. And so much of Phobos is templated > anyway that there isn't even much to share. > > - Jonathan M Davis I see what you're doing there, but your last point is wishful thinking. Dynamically linked binaries can share megabytes of code. Even Phobos - although heavily templated - has proven to be very amenable to sharing. For example, a "Hello world!" program using `writeln()` has these sizes when compiled with `dmd -O -release -inline`: static Phobos2 : 806968 bytes dynamic Phobos2 : 18552 bytes That's about 770 KiB to share or 97.7% of its total size! Awesome! Another package we have on Gentoo now is the tiling terminal emulator Tilix, compiled with its default `dmd -O`: static Phobos2+GtkD : 14828272 bytes dynamic Phobos2+GtkD : 6126464 bytes So here we get ~8.3 megabytes in binary size reduction due to sharing. (Though 6 MiB for a terminal is still a lot - are we slim yet? :P ) Some forms of proprietary code and GPL code need clean shared library interfaces, free from templates and attribute inference. I mentioned this before: As far as feasible have full attribute inference for non-exported functions and reliance on only explicit attributes for exported functions, so that code changes can't break the public API (due to mangling changes). When Dlang's ABI changes, release a major version and make a public announcement so that everyone can schedule the system update to a convenient time as is done with GCC C++ ABI changes. -- Marco
Re: The D ecosystem in Debian with free-as-in-freedom DMD
Am Wed, 12 Apr 2017 07:42:42 + schrieb Martin Nowak: > Our point releases might also contain small ABI > incompatibilities, so they aren't really eligible as patch > version. I've actually been hit by this in one point release on Gentoo, where I used dynamic linking for Dlang as soon as it was possible. Programs would suddenly complain about a mismatch between the size of some compiler generated item after a point release update of dmd. Just posting this to give the discussion a bit more substance. :) -- Marco
Re: What are we going to do about mobile?
Am Sun, 09 Apr 2017 12:44:15 + schrieb Nick B: > > I'd say we just have /more/ fully capable computers around us > > nowadays. I'd probably roughly split it into > > - web/cloud server machines, often running VMs > > - scientific computation clusters > > - desktops (including notebooks) > > - smart phones > > - embedded devices running Linux/Android (TVs, receivers, > > refrigerators, photo boxes, etc...) > > perhaps we need need real data as to what markets are really > growing ? Maybe. That raises the question whether growing markets are naturally better than markets of a stable size that can be expected to exist for the next 25 years or so. Otherwise my point was that embedded developers often don't need much of an "eco system" to get started. -- Marco
Re: Dlang forum: some feature requests
Am Mon, 10 Apr 2017 12:24:15 + schrieb Vladimir Panteleev: > On Monday, 10 April 2017 at 09:35:30 UTC, crimaniak wrote: > > IMHO, it's better to do the same as with HTML letters: > > text/markdown body + text/plain body for clients not supporting > > markdown. > > Multipart messages have their issues. > > Are there any clients which will render a text/markdown part as > Markdown? I'm not aware of any. Claws Mail renders it the same as text/plain. I don't know what other news clients do, but the idea to render all text/* as text/plain goes a long way until a proper plugin is available. (text/html without a plugin is really horrible though) I think once a big player starts to support Markdown the rest will follow, but it makes no sense for the Dlang news groups to try and get the ball rolling. (A problem with HTML mail is the great flexibility. Some mails are unreadable on a dark color scheme or the different fonts and sizes drive you mad.) -- Marco
Re: Dlang forum: some feature requests
Am Sat, 08 Apr 2017 12:34:56 + schrieb Vladimir Panteleev: > This is planned in a future update, as part of Markdown support. That will be a web front-end rendering feature only, I guess. _Someone_ should write Markdown rendering plugins for popular news readers. What mime type will the Markdown enhanced messages use? text/plain or text/markdown as per https://de.wikipedia.org/wiki/Markdown ? -- Marco
Re: What are we going to do about mobile?
Am Thu, 06 Apr 2017 05:24:07 + schrieb Joakim: > D is currently built and optimized for that dying PC platform. As long as the world still needs headless machines running web sites, simulations, cloud services, ...; as long as we still need to edit office documents, run multimedia software to edit photos and video, play AAA video games the "PC master race" way; I'm confident that we have a way to go until all the notebooks, PCs and Macs disappear. :) I'd say we just have /more/ fully capable computers around us nowadays. I'd probably roughly split it into - web/cloud server machines, often running VMs - scientific computation clusters - desktops (including notebooks) - smart phones - embedded devices running Linux/Android (TVs, receivers, refrigerators, photo boxes, etc...) When targeting smart phones you have to comply with every manufacturer's frameworks and security measures. On the other hand you can directly sell software and services. The embedded device I know best is my TV receiver, which boots into Linux and then starts a statically compiled executable that handles GUI rendering, remote control input and communication with the hardware. If you knew the protocols you could replace it with something written in Dlang. These devices are not as prominent as phones, but the barrier to entry is relatively low for many applications once you have bindings to a couple of frequently needed C libraries such as freetype, ffmpeg or opencv. > What needs to be done? Same as anything else, we need people to > try it out and pitch in, like this guy who's now trying ldc out > on an embedded device with an old ARMv5 core: > >[…] > > I realize D is never going to have a polished devkit for mobile > unless a company steps up and charges for that work. But we can > do a lot better than the complacency from the community we have > now. 
As you can use mostly the same compiler targets for embedded as for phones, your best bet to stabilize the ldc targets is probably the embedded developers, because they can see the immediate benefit for their projects and their knowledge of the underlying hardware can help track down bugs. -- Marco
Re: Design to interfaces or Design to introspections
Am Fri, 07 Apr 2017 11:51:10 + schrieb سليمان السهمي (Soulaïman Sahmi): > […] > > Then I stumbled upon DIP84, which reminded me of the other GoF' > principle "Program to an interface, not an implementation". > And I started wondering why would I ever write code like this: > > auto someAlgorithm(SomeInterface obj) { /* ... */ } > > When I can do this: > > auto someAlgorithm(I)(I obj) if(isSomeInterface!I) { /* ... > /* } > > or more generically: > > auto someAlgorithm(I)(I obj) > if(satisfiesInterface!(I, SomeInterface, SomeOtherInterface > /*... etc */) > { /* body ... /* } > > This would be a modern design to introspection? that goes with > the modern design by introspection. > […] > > What do you think. Running the same algorithm on class and struct instances is a much desired feature, especially where programmers tend to fall into two camps where one mostly uses classes and the other mostly uses structs. But reasons for using one over the other slip into the concept of one-function-to-rule-them-all as well. And there is more to consider: - Interface variables hold references to the data, while for structs you have to explicitly add "ref" to the argument or else the algorithm will work on a (shallow) copy. - Wrapping structs in interfaces and using interfaces exclusively results in only one instance of the algorithm ending up in the executable instead of one per class and struct type, which helps with things like executable size, instruction cache hits and debug symbol size and readability. - Some forms of licensing break when using templates in the public API of libraries. Two examples: A proprietary software company sells programming libraries. They need to keep their source code private to prevent theft, but if their API used templates they'd have to provide sources for them and any templates used inside of them. 
On the other hand a GPL licensed open source library - to be usable in a proprietary project - must ensure that none of its code gets compiled into the target application. Again templates would break that. Then there are other general considerations in favor of interfaces or templates. - Template methods are not virtual functions that you can override in a child class. - Calling virtual methods on interfaces is slower (on some architectures more than others)[1] and there is no static introspection to decide some code paths at compile time. (Devirtualization is a hot topic.) - Templates increase complexity for the compiler and runtime. There have been a few subtle issues in the past, for example symbol name length explosion[2] and object files containing old code in separate compilation scenarios[3]. [1] http://eli.thegreenplace.net/2013/12/05/the-cost-of-dynamic-virtual-calls-vs-static-crtp-dispatch-in-c [2] http://forum.dlang.org/post/efissyhagontcungo...@forum.dlang.org [3] https://issues.dlang.org/show_bug.cgi?id=9922 -- Marco
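The interface-versus-introspection tradeoff can be sketched in a few lines of D (type and function names here are illustrative, not taken from any real library):

```d
import std.stdio;

// Interface version: one compiled instance, dynamic dispatch,
// classes only.
interface Drawable { void draw(); }
void render(Drawable d) { d.draw(); }

// Introspection version: one instance per type, static dispatch,
// works for structs too. Note the explicit `ref`, which avoids the
// shallow copy mentioned above.
enum isDrawable(T) = is(typeof((T t) => t.draw()));
void renderT(T)(ref T t) if (isDrawable!T) { t.draw(); }

class Circle : Drawable { void draw() { writeln("circle"); } }
struct Square { void draw() { writeln("square"); } }

void main()
{
    auto c = new Circle;
    render(c);   // via the interface: virtual call

    auto s = Square();
    renderT(s);  // via the template: struct passed by ref, no copy
}
```

Every additional type used with `renderT` adds another instantiation to the executable, which is the binary-size and licensing concern raised above.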
Re: Making preconditions better specified and faster
Am Thu, 15 Dec 2016 13:48:22 -0500 schrieb Andrei Alexandrescu: > https://issues.dlang.org/show_bug.cgi?id=16975 Here is what I understood. While currently a contract that has a failing assert would be treated differently from one that throws an exception, in the future all kinds of errors would make it return false instead of true. What's not explicitly mentioned is what happens from there on, but I assume that 'false' leads to some handler function in druntime being invoked? Also when you write that unrecoverable errors "cannot be the case" in contracts, what do you mean by that? A) Consider all contracts to have no side-effects. In case of 'false' return, throw a ContractFailedException and continue execution in the caller. B) That was meant to read "may not be the case". A failed assert would cause the contract to return 'false', while an Exception would be propagated to the outside. The former is unrecoverable, the latter is recoverable. (Note: That contradicts what I understood above.) C) Something else. (I bet that's the correct answer.) -- Marco
Re: x86 instruction set reference
Am Tue, 29 Nov 2016 03:53:06 -0800 schrieb Walter Bright: > http://www.felixcloutier.com/x86/ > > I find this easier to use for quick lookups than the Intel PDF files, because > any instruction is just 2 clicks away. You mean ... like that 3600-page "Intel® 64 and IA-32 Architectures Software Developer’s Manual" I linked in that bug report earlier today? Aside from being the complete and authoritative source on how Intel's CPUs operate, it really doesn't have much going for it. :D -- Marco
Re: Simplicity and complexity of dlang
Am Wed, 09 Nov 2016 14:09:14 + schrieb Chris: > On Wednesday, 9 November 2016 at 14:03:01 UTC, eugene wrote: > > On Wednesday, 9 November 2016 at 08:10:13 UTC, MGW wrote: > >> Having a huge project (approximately 8000 strings) and while > >> supporting it permanently I often find out that changes that > >> take place in new versions of the program (particularly in > >> 2.072)are too hard to perceive and use (especially the basic > >> notions have been replaced, for ex.:the sorting of strings). > >> Offer: > >> To fix the basic notions such as sorting etc. > >> It is possible to divide the language into levels. For > >> example, for beginners it will permanently be a simple and > >> immutable and for investigations it will be another separate > >> branch. > > > > why do you update your compiler if there are no issues with the > > current version? > > You have to upgrade your compiler in order to be able to keep on > using D in the future. Else your project will be frozen in time. The OP is looking for "a DMD for beginners, that is permanently immutable". I think the last release of DMD 1 would fit that shoe in a half serious, half joking way. -- Marco
Re: State of issues.dlang.org
Am Sat, 29 Oct 2016 17:12:54 + schrieb Jacob: > That doesn't make the point any less valid. If something is broke > you fix it or replace it. I agree with Vladimir and cannot really understand your urge to completely replace it. As long as people don't abuse their editing abilities it can be fine to make a title more specific, add a tag and so on. And the custom views help in getting the desired information out of it. It is extremely flexible in that aspect; more than I would expect from a much more modern and younger project. -- Marco
Re: Linus' idea of "good taste" code
Am Tue, 25 Oct 2016 15:53:54 -0700 schrieb Walter Bright: > It's a small bit, but the idea here is to eliminate if conditionals where > possible: > > https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a#.nhth1eo4e > > This is something we could all do better at. Making code a straight path > makes > it easier to reason about and test. > > Eliminating loops is something D adds, and goes even further to making code a > straight line. On a more controversial note, I sometimes replace nested blocks of conditionals and loops with flat spaghetti code and goto with verbose labels. There are situations where you can explain straightforwardly what needs to be done first, second and last, but special cases and loops make it hard to tell from "normal" code. if (valueTooBig) goto BreakBigValueDown; ProcessValue: ... return; // == // BreakBigValueDown: // makes big values manageable ... goto ProcessValue; There is a concise piece of code that handles 90% of the cases and another block for anything that requires special handling. It can in theory also avoid costly memory loads for rarely used code. > One thing I've been trying to do lately when working with DMD is to separate > code that gathers information from code that performs an action. (The former > can > then be made pure.) My code traditionally has it all interleaved together. Interesting. -- Marco
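Filled in as compilable D, the flat-label pattern above looks like this (the threshold and the "break down" step are made up for illustration):

```d
import std.stdio;

void process(int value)
{
    enum limit = 100;

    if (value > limit)
        goto BreakBigValueDown;

ProcessValue:
    // the concise 90% path reads top to bottom
    writeln("processing ", value);
    return;

BreakBigValueDown:
    // makes big values manageable, then rejoins the normal path
    value %= limit;
    goto ProcessValue;
}

void main()
{
    process(42);   // straight path
    process(250);  // special case takes the detour and comes back
}
```

The rare-case code sits out of the way at the bottom, which is also what can keep it off the hot path in the instruction cache.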
Re: gdc in Linux distros recommended?
Am Wed, 19 Oct 2016 19:25:39 + schrieb TheGag96: > On Wednesday, 19 October 2016 at 03:29:10 UTC, Marco Leise wrote: > > On the other hand LDC subjectively offers a couple more D > > specific enhancements, like turning GC allocations into stack > > allocations in trivial cases > > Whoa, seriously? I know it's a bit off-topic, but do you have a > code example of where this would happen? That's amazing! Sorry, I don't have a concrete example. David Nadlinger keeps emphasizing that the escape analysis is extremely simple. Try a function with only "auto test = new Object;" in it and extend from there using the "-vgc" switch to see when it starts to fail. -- Marco
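A minimal way to run the experiment suggested above (the stack-promotion claim is LDC's simple escape analysis as described in the post; which cases it catches may vary by compiler version):

```d
// Compile with `dmd -vgc stackalloc.d` to see reported GC allocation
// sites, or inspect LDC's codegen with `ldc2 -O -output-ll stackalloc.d`.
Object g;  // a global: storing into it makes an object escape

void trivialCase()
{
    auto test = new Object;  // never escapes: a candidate for stack promotion
}

void escapingCase()
{
    auto test = new Object;
    g = test;                // escapes: must remain a GC heap allocation
}

void main() {}
```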
Re: rvalue references
Am Wed, 19 Oct 2016 11:29:50 +0200 schrieb Timon Gehr: > Yes, the lack of rvalue references can be annoying but 'const' should be > orthogonal to any implemented solution. > > There should be a way to pass rvalues by reference that are not > prevented from being mutated, and it should be the same way as for const > references. (For example, one possible approach is to just allow rvalues > to bind to all 'ref' parameters (as is done for the implicit 'this' > reference), alternatively there could be some additional "allow rvalues > here" annotation that does not influence mutability.) Ok, got ya now! -- Marco
Re: gdc in Linux distros recommended?
Am Tue, 18 Oct 2016 16:02:28 -0700 schrieb Ali Çehreli: > I have a friend who has started writing a library in D. > > Although I recommended that he should use a recent dmd or ldc, he thinks > gdc is a better candidate because it's "available to the masses" through > Linux distros similar to how gcc is. Although he has a good point, the > gdc that came with his distro does not even support @nogc. > > Thoughts? Can you please tell him to change his mind! :p > > Ali If he is starting right *now*, missing fixes or language enhancements will cause confusion when he asks questions on the newsgroup or on IRC. (But he has you for that, right?) Back in the day I would have opted for GDC, too. It didn't lag far behind and I had hopes that it would get merged into GCC for good, meaning it would become the de facto D compiler on GNU systems. Nowadays I also see the large version gap and that it still hasn't been merged into mainline GCC. On the other hand LDC subjectively offers a couple more D-specific enhancements, like turning GC allocations into stack allocations in trivial cases or the long list of compiler flags. Also with the backend being a library it is more flexible in the context of updating the front-end independently from the backend, which fits Dlang's development cycle better IMO. I'd say start with DMD, as it comes practically free of dependencies and is the fastest compiler, which may be the most useful aspect when you start to learn the language and need to iterate often. -- Marco
Re: gdc in Linux distros recommended?
Am Wed, 19 Oct 2016 00:07:12 + schrieb bachmeier: > On Tuesday, 18 October 2016 at 23:31:42 UTC, bachmeier wrote: > > > That's not a very convincing argument IMO. DMD packages are > > available for download on this site. As I have learned the hard > > way, the experience isn't always the best when you rely on > > distro packagers. I once had to change distros because of a > > package maintainer that didn't care that things were broken. I > > would much rather rely on a project's own packages, because > > there is an incentive to make sure they work, they won't get > > abandoned, and the latest version is always available. There is > > no sense in which DMD is not available to the masses. > > According to this page > https://gdcproject.org/downloads/ > there are only distro packages for Ubuntu, Debian, and Arch. If > that's accurate, there really is no sense in which GDC is more > available than DMD. You'll have to check the distros themselves to get the full list. Usually, if the community is big enough, there will be one enthusiast that thinks this Dlang thing is cool and adds a package in some external package list. Gentoo: https://wiki.dlang.org/GDC#Linux_distribution_packages Mint: https://community.linuxmint.com/software/view/gdc-4.6 ... just google for " gdc" -- Marco
Re: rvalue references
Am Tue, 18 Oct 2016 22:43:01 +0200 schrieb Timon Gehr: > It wouldn't even be the same thing if it was allowed. D const is not C++ > const. Enforcing transitive read-only on rvalue references does not make > that much sense. For me using const ref vs. const is typically entirely a matter of avoiding a costly copy. The function body shouldn't care whether the data came from an rvalue or lvalue. It is exemplified in the following lines where doSomething takes a const ref and worldTransform returns by value: const worldTransform = someObj.worldTransform(); otherObj.doSomething(arg1, arg2, worldTransform); I'd prefer to write: otherObj.doSomething(arg1, arg2, someObj.worldTransform()); as the temporaries add significant noise in some functions and make the equivalent C++ code look cleaner. It may be noteworthy that the lack of rvalue references only really got annoying for me with matrix calculations, because there are so many medium sized temporaries. Some of them come directly from binary operators. Functions often need to perform a set of computations that bear a lot of similarity, where visual cues help the understanding. In a reduced form: calculate( matrix); // works calculate(2*matrix); // doesn't work, requires temporary calculate(3*matrix); // " Introducing temporaries makes the similarity of the calculation less evident. Making calculate take auto-ref would result in duplicate code and thrash the instruction cache (especially with 2 or 3 arguments). Dropping "ref" results in unnecessary matrix copies at this and other call sites. -- Marco
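For reference, the `auto ref` workaround dismissed above looks like this (`Matrix` and `calculate` are illustrative); each lvalue/rvalue combination of the arguments instantiates its own copy of the function, which is the code duplication being criticized:

```d
struct Matrix
{
    double[16] data = 0;

    // lets `2 * matrix` produce an rvalue temporary
    Matrix opBinaryRight(string op : "*")(double f) const
    {
        Matrix r = this;
        foreach (ref x; r.data) x *= f;
        return r;
    }
}

// `auto ref` requires a template: one instance per ref-ness pattern.
void calculate()(auto ref const Matrix m)
{
    // reads m; no copy for lvalues, the rvalue instance binds by value
}

void main()
{
    Matrix matrix;
    calculate(matrix);       // lvalue: binds by ref
    calculate(2 * matrix);   // rvalue: accepted, but separate instantiation
    calculate(3 * matrix);
}
```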
Re: Should r.length of type ulong be supported on 32-bit systems?
Am Sat, 1 Oct 2016 01:04:01 -0400 schrieb Andrei Alexandrescu: > On 09/30/2016 11:58 PM, Walter Bright wrote: > > On 9/30/2016 7:31 PM, Andrei Alexandrescu wrote: > >> https://github.com/dlang/phobos/pull/4827 still allows that but > >> specifies that > >> phobos algorithms are not required to. -- Andrei > > > > A couple instances where a length may be longer than the address space: > > > > 1. large files > > 2. sequences of mathematically generated values > > > > I don't see a particular advantage to restricting hasLength() to be > > size_t or ulong. Whether an algorithm requires the length to be within > > the addressable capabilities of the CPU or not should be up to the > > algorithm. > > > > There is also cause for r.length to return an int even on a 64 bit > > system - it can be more efficient. > > Indeed the "I have a dream" cases are quite the lure. The reality in the > field, however, is that Phobos thoroughly assumes length has type > size_t. The problem is defining hasLength in ways that are at the same > time general and that also work is extremely laborious. Just grep > through Consider: > > 1. std.algorithm.comparison:1007 (abridged) > > const rc = mulu(r, c, overflow); > if (_matrix.length < rc) > > So here we assume length is comparable for inequality with a const > size_t. That should probably rule out signed integrals. > > 2. ibid, 1694 (abridged) > > static if (hasLength!(Range1) && hasLength!(Range2)) > { > return r1.length == r2.length; > } > > Here we assume any two ranges satisfying hasLength have lengths > comparable for equality. That projects a rather tenuous definition of > hasLength: "the range must define a length that is comparable for > equality with any other range that also defines a length". Not > impossible, but definitely not what we have right now, de facto or de jure. > > 3. 
ibid, 634 (abridged) > > immutable len = min(r1.length, r2.length); > > So here two lengths must be comparable for ordering, have a common type, > and have an immutable constructor. > > 4. ibid, 656 (abridged) > > return threeWay(r1.length, r2.length); > > Here threeWay takes two size_t. So both lengths must be at least > convertible implicitly to size_t. > > I got to the end of the first file I scanned with grep, and I don't know > what a robust definition of hasLength should be - outside of size_t > itself. The more locations one examines, the more the capabilities > required get closer to those of size_t - or the more latent bugs exist > in phobos if we want to support something else. > > We have two options here. > > 1. Support a range of types (e.g.: any integral and any user-defined > type that supports the following list of primitives: ...) for length > with hasLength, and then have each range-based algorithm define its own > additional restrictions if they have such. That means right now a bunch > of phobos code is incorrect and needs to be fixed. > > 2. Require lengths to be size_t. > > I'm all for generality, but of the useful kind. It seems to me > supporting a complicated definition of length is a protracted > proposition with little upside. That's why I'm going with the > engineering solution in the PR. > > > Andrei Me too. I haven't looked into it, but I noticed that all the points you mentioned are resolved if length were required to be a natural number, i.e. an unsigned basic type. Based on these points alone, I tend to agree with Walter. Note: Point 4. is arguable, because `threeWay` is a local function designed to compare char types and only used once to compare lengths under the (silent) assumption that they are < int.max. I.e. it would be cleaner to replace that call with an explicit comparison. -- Marco
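In code, the "engineering solution" amounts to something like this (a sketch, not Phobos' actual definition of `hasLength`):

```d
// Require `length` to be exactly size_t, ruling out signed types and
// types wider than the address space in one stroke.
enum hasSizeTLength(R) = is(typeof(R.init.length) == size_t);

struct Sized   { size_t length; }
struct Signed  { int length; }    // signed: inequality comparisons get murky
struct Virtual { ulong length; }  // may exceed the address space on 32-bit

static assert( hasSizeTLength!Sized);
static assert(!hasSizeTLength!Signed);
// hasSizeTLength!Virtual is true on 64-bit targets (where ulong == size_t)
// and false on 32-bit ones -- exactly the portability question at issue.

void main() {}
```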
Re: Easy sockets - don't exist yet?
Just in case, here are the relevant docs: http://dlang.org/phobos/std_net_curl.html
Re: Easy sockets - don't exist yet?
Am Mon, 26 Sep 2016 23:40:10 + schrieb Vincent: > 1. Easy to use. No more stupid "UNIX sockets", "TCP types" and so > on. Just simple as this: > > // Client side > auto sock = new ClientSocket("google.com", 80); > sock.WriteLine("GET / HTTP/1.0"); > sock.WriteLine("Host: google.com"); > sock.WriteLine();// empty line sent Haha, this is not how I learned network layers at school. What you seem to want is on the ... Network Layer (3): a connection-based socket using the Internet Protocol; Transport Layer (4): a stateful connection using TCP; Application Layer (7): HTTP. Just that you don't ask for HTTP directly, but shoehorn a packet-based socket into sending microscopic strings. In this case I recommend cURL, which you can feed all the header data at once and which sends your complete request in one packet. That'll also handle most of the HTTP specialties. Not all data transmissions via IP are TCP either. A good bunch is sent via stateless UDP. That would not be considered a stream though. I'm just getting at the name "ClientSocket" here, which can entail more than TCP/IP streams. -- Marco
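The cURL route recommended above is available in Phobos as std.net.curl; the whole request, headers included, goes out in one call:

```d
import std.net.curl : get;
import std.stdio : writeln;

void main()
{
    // Sends a complete HTTP GET (headers and all) and handles
    // redirects, chunked transfer and other HTTP specialties for you.
    auto content = get("http://www.google.com/");
    writeln("received ", content.length, " characters");
}
```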
Re: How to debug (potential) GC bugs?
Am Sun, 25 Sep 2016 16:23:11 + schrieb Matthias Klumpp: > So, I would like to know the following things: > > 1) Is there any caveat when linking to C libraries and using the > GC in a project? So far, it seems to be working well, but there > have been a few cases where I was suspicious about the GC > actually doing something to malloc'ed stuff or C structs present > in the bindings. If you pass callbacks into the C code, make sure they never throw. Stack unwinding and exception handling generally don't work across language boundaries. A tracing garbage collector starts with the assumption that all the memory that it allocated is no longer reachable and then starts scanning the known memory for any pointers to allocations that falsify this assumption. What you malloc'ed is unknown to the GC and won't be scanned. Should you ever have GC memory pointers in your malloc'ed stuff, you need to call GC.addRange() to make those pointers keep the allocations alive. Otherwise you will get a "use after free" error: data corruption or access violations. A simple case would be a string that you constructed in D and stored in C as a pointer. The GC can automatically scan the stack and any globals/statics on the D side, but that's about it. I know of no tools similar to valgrind specially designed to debug the D GC. You can plug into the GC API and keep track of the allocation sizes. I.e. write a proxy GC. -- Marco
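A sketch of the GC.addRange() discipline described above; the C side is simulated here with malloc, and the struct name is made up:

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

struct CSide { immutable(char)* text; }  // stands in for a C struct

void main()
{
    auto c = cast(CSide*) malloc(CSide.sizeof);

    string s = "constructed in D".idup;  // GC-allocated copy
    c.text = s.ptr;

    // Without this, the GC never scans the malloc'ed block and may
    // collect the string while the C side still points at it:
    GC.addRange(c, CSide.sizeof);

    // ... hand c to the C library ...

    GC.removeRange(c);  // before freeing, unregister the range
    free(c);
}
```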
Re: [OT] Punctuation of "Params:" and "Returns:"
Am Thu, 22 Sep 2016 14:26:15 -0400 schrieb Andrei Alexandrescu: > Consider e.g. > http://dtest.thecybershadow.net/artifact/website-ff971425e306789aa308cfb07cba092cadd11161-d41523308b293d6e0a3d6da971a68db8/web/phobos-prerelease/std_algorithm_sorting.html#pivotPartition, > > where we can see the parameters table and the returns subsection. > > How to formulate these lingustically? You are not the only one scratching his head. I went through the same stages, from full sentences, to removing repeated words, to lowercase without punctuation. /Currently/ I'm using: Returns: pointer to the parent object; maybe `null` if this is the root object Anything needing a long explanation then needs to be written so that it briefly refers back to the description paragraph(s) above. Happy bike-shedding :) -- Marco
Re: ddoc latex/formulas?
Am Fri, 16 Sep 2016 12:30:37 +1000 schrieb Manu via Digitalmars-d: > > That strikes me as an inferior solution to what's available today, which is > > used at http://erdani.com/d/DIP1000-typing-baseline.html. > > I'm guessing that's mathjax? Strangely, it renders really badly on my > system. I can barely read it. > I'd still rather use latex to produce images for my own docs. I don't understand any of it, but it looks really good on Firefox. Everything is crisp and it even honors the system's subpixel hinting setting so it even surpasses the prerendered grayscale images from Wikipedia! -- Marco
Re: Ddoc macro syntax
Am Fri, 16 Sep 2016 13:16:35 +0200 schrieb Jacob Carlborg: > My biggest issue with the macros is not the syntax (I don't like that > either) but it's that one needs to use them too much. Same for me. I feel like this discussion is probably picking out the wrong enemy. Sure macros need some way of escaping, but I'm happy with anything that replaces macros in common use case scenarios with more readable syntax, just like the design goals stated back in the day: 1. It looks good as embedded documentation, not just after it is extracted and processed. 2. It's easy and natural to write, i.e. minimal reliance on tags and other clumsy forms one would never see in a finished document. The abundance of macros for common formatting tasks like emphasis, (un)ordered lists and - a while ago - inline code, contradicts point 2 when compared to a bottom-up approach, where you take a look at some plain text documents and ask yourself: If there is only ASCII, how do people use it creatively to convey the idea of formatting in a natural way and can we deduce rules from that to automatically transform text into PDF/HTML/CHM/... I want to think that markdown came into existence like this. Someone sat down and formalized a list of things people already do and slapped a name on it. -- Marco
Re: colour lib needs reviewers
Am Wed, 14 Sep 2016 23:44:16 -0700 schrieb Walter Bright: > I suspect that adding Markdown would help a lot (yes, I changed my mind about > that). +1 -- Marco
Re: colour lib needs reviewers
Am Wed, 14 Sep 2016 13:47:43 + schrieb Meta: > On Wednesday, 14 September 2016 at 13:28:23 UTC, Manu wrote: > > Cheers. > > Yeah, I need to do better with ddoc. > > > > ... I'm just gonna go on the record and say that I am really, > > really not enjoying ddoc ;) > > I can't say I'm a fan either. You can replace $(D …) with `…` nowadays. It made me a bit more of a fan. :) -- Marco
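For reference, the two spellings side by side on a made-up declaration (both render the identifier as inline code):

```d
/// Old macro form: returns $(D null) on failure.
int* oldStyle() { return null; }

/// New backtick form: returns `null` on failure.
int* newStyle() { return null; }
```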
Re: Null references and access violation
Am Wed, 14 Sep 2016 16:52:19 + schrieb Bauss:

> Can someone riddle me this, why D gives an access violation
> instead of ex. a null reference exception?

Access violations cost exactly 0. No one needs to do anything extra for this check that isn't done by the CPU already. The next step is an assertion (which I think is done in debug mode when you call a method on a null object). That's still not recoverable, just like out-of-memory situations in D. Compare for example in-contracts, where you assert for not-null. Those throw unrecoverable errors as well unless you turn them from

    assert(obj !is null);

into

    if (obj is null)
        throw new NullPointerException();

(And that's what the compiler in its current state would probably insert for you on every access to the object.) D is somewhat consistent in making null pointers and other contracts/assertions fatal errors that end program execution. In other words: everything that's a fault in the program logic gives you a rather harsh exit, while unforeseeable situations like network errors or incorrect user input are handled by exceptions. Walter mentioned that when a program is run inside a debugger, access violations are the easiest problem for the debugger, while D exceptions on Linux are not as easy to break on.

I understand the sentiment though. I've seen karaoke software throw exceptions because no item was selected in a list box. Had that been an access violation, you could not have acknowledged the OutOfBounds/NullPointer message, selected an item and continued. Depending on how and where the software is used, one or the other is a better default.

We have had some interesting proposals on not-null references (as NullPointerExceptions are seen as a mistake in retrospect by language designers [citation needed]) and "remembering" what line of code has safe access to the object. E.g. everything in "if (obj) { ... }" can safely access the object.

-- Marco
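A small sketch of the two reactions to a null object discussed above (types and names are mine, and `NullPointerException` from the text is hypothetical, so plain `enforce` stands in for it):

```d
import std.exception : enforce;

class Node { Node parent; }

// Contract style: null here is a programmer error; the failed assert ends
// the program, comparable in spirit to the access violation.
Node parentChecked(Node n)
in { assert(n !is null); }
body { return n.parent; }

// Exception style: null is treated as a recoverable condition that the
// caller may catch, acknowledge and move on from.
Node parentOrThrow(Node n)
{
    enforce(n !is null, "no object given");
    return n.parent;
}
```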
Re: colour lib needs reviewers
Am Wed, 14 Sep 2016 11:36:12 +1000 schrieb Manu via Digitalmars-d <digitalmars-d@puremagic.com>:

> On 14 September 2016 at 04:34, Marco Leise via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
> > JavaScript's canvas works that way for
> > example. I.e. the only pixel format is RGBA for simplicity's
> > sake and I'm not surprised it actually draws something if I
> > load it with a 24-bit graphic. ;)
>
> Given this example, isn't it the job of the image loader to populate
> the image with data?
> […]

I admit it was a constructed example. You can't load an image directly into a HTML5 canvas. Shame on me.

> You'll notice I didn't add arithmetic operators to the HSx type ;)
> If you have HSx colors, and want to do arithmetic, cast it to RGB first.

Now that you say it, yes, I was wondering how the arithmetic works since I couldn't find any opBinary.

> I went through a brief phase where I thought about adding an angle
> type (beside normint), but I felt it was overkill.
> I still wonder if it's the right thing to do though... some type that
> understands a circle, and making angle arithmetic work as expected.

I see. I'm also wondering what equality means for a color with floating point hue 10° and another with hue 730°. I guess it is ok the way it is now, because wrapping the values doesn't guarantee they map to the same value either. Floating point equality is flawed either way.

-- Marco
Re: iPhone vs Android
Am Tue, 13 Sep 2016 18:16:27 + schrieb Laeeth Isharc:

> Thanks you for the clear explanation. So if you don't have GC
> allocations within RC structures and pick one or the other, then
> the concern does not apply?

That's right. Often such structures contain collections of things, not just plain fields. And a list or a hash map working in a @nogc environment typically checks its contained type for any pointers with hasIndirections!T and, if so, adds its storage area to the GC-scanned memory to be on the safe side. That means every collection needs a way to exempt its contents from GC scanning and the user needs to remember to tell it so. A practical example of that are the EMSI containers, but other containers, e.g. in my own private code, look similar.

https://github.com/economicmodeling/containers

    struct DynamicArray(T, Allocator = Mallocator,
                        bool supportGC = shouldAddGCRange!T)
    {
        ...
    }

Here, when you use a dynamic array you need to specify the type and allocator before you get to the point of opting out of GC scanning. Many will prefer concise code, go with GC scanning to be "better safe than sorry" or don't want to fiddle with the options as long as the program works. This is no complaint, I'm just trying to draw a picture of how people end up with more GC-scanned memory than necessary. :)

-- Marco
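The pattern can be sketched in a few lines. `Buffer` is a made-up container, not the EMSI one; it only illustrates how `hasIndirections` drives the `GC.addRange` decision (copying/postblit is deliberately not handled in this sketch):

```d
import core.memory : GC;
import std.traits : hasIndirections;

struct Buffer(T, bool supportGC = hasIndirections!T)
{
    import core.stdc.stdlib : malloc, free;

    T* ptr;
    size_t length;

    this(size_t n)
    {
        ptr = cast(T*) malloc(n * T.sizeof);
        length = n;
        static if (supportGC)
            GC.addRange(ptr, n * T.sizeof); // let the GC scan this block
    }

    ~this()
    {
        static if (supportGC)
            GC.removeRange(ptr);
        free(ptr);
    }
}
```

For `Buffer!int` no range is registered at all; for `Buffer!(int*)` the default kicks in and the malloc'ed block becomes GC-scanned, unless the user explicitly passes `supportGC = false`.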
Re: colour lib needs reviewers
Am Tue, 13 Sep 2016 12:00:44 +1000 schrieb Manu via Digitalmars-d:

> What is the worth of storing alpha data if it's uniform 0xFF anyway?
> It sounds like you mean rgbx, not rgba (ie, 32bit, but no alpha).
> There should only be an alpha channel if there's actually alpha data... right?

I don't mean RGBX. JavaScript's canvas works that way for example. I.e. the only pixel format is RGBA for simplicity's sake and I'm not surprised it actually draws something if I load it with a 24-bit graphic. ;)

> > […] An additive one may be:
> >
> > color = color_dst + color_src * alpha_src
> > alpha = alpha_dst
>
> I thought about adding blend's, but I stopped short on that. I think
> that's firmly entering image processing territory, and I felt that was
> one step beyond the MO of this particular lib... no?
> Blending opens up a whole world.

I agree with that decision, and that it entails that arithmetic is undefined for alpha channels. :-(

Yeah, bummer. The idea that basic (saturating) arithmetic works on colors is a great simplification that works for the most part, but let's be fair, multiplying two HSV colors isn't exactly going to yield a well defined hue either, just as multiplying two angles doesn't give you a new angle. See: http://math.stackexchange.com/a/47880

> > […]
> […]
> From which functions? Link me?
> I'd love to see more precedents.

Yep, that's better than arguing :) So here are all graphics APIs I know and what they do when they encounter colors without alpha and need a default value:

SDL: https://wiki.libsdl.org/SDL_MapRGB
"If the specified pixel format has an alpha component it will be returned as all 1 bits (fully opaque)."

Allegro: https://github.com/liballeg/allegro5/blob/master/include/allegro5/internal/aintern_pixels.h#L59
(No docs, just source code that defaults to 255 for alpha when converting a color from a bitmap with non-alpha pixel format to an ALLEGRO_COLOR.)

Cairo: https://www.cairographics.org/manual/cairo-cairo-t.html#cairo-set-source-rgb
"Sets the source pattern within cr to an opaque color."

Microsoft GDI+: https://msdn.microsoft.com/de-de/library/windows/desktop/ms536255%28v=vs.85%29.aspx
"The default value of the alpha component for this Color object is 255."

Gnome GDK: https://developer.gnome.org/gdk-pixbuf/2.33/gdk-pixbuf-Utilities.html#gdk-pixbuf-add-alpha
"[…] the alpha channel is initialized to 255 (full opacity)."

Qt: http://doc.qt.io/qt-4.8/qcolor.html#alpha-blended-drawing
"By default, the alpha-channel is set to 255 (opaque)."

OpenGL: https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml
"glColor3 variants specify new red, green, and blue values explicitly and set the current alpha value to 1.0 (full intensity) implicitly."
(Note: The color can be queried and shows a=1.0 without blending operations setting it internally if needed.)

Java (AWT): https://docs.oracle.com/javase/7/docs/api/java/awt/Color.html#Color%28int,%20boolean%29
"If the hasalpha argument is false, alpha is defaulted to 255."

Apple's Quartz does not seem to provide color space conversions and always requires the user to give the alpha value for new colors, so there is no default: https://developer.apple.com/library/tvos/documentation/GraphicsImaging/Reference/CGColor/index.html#//apple_ref/c/func/CGColorCreate

One thing I noticed is that many of those strictly separate color spaces from alpha as concepts. For example in Quartz *all* color spaces have alpha. In Allegro color space conversions are ignorant of alpha. That raises the question of what should happen when you convert RGBA to a HLx color space and back to RGBA. Can you retain the alpha value? CSS3 for example has HSLA colors that raise the bar a bit.

-- Marco
Re: colour lib needs reviewers
Am Tue, 13 Sep 2016 00:37:22 +1000 schrieb Manu via Digitalmars-d:

> I flip-flopped on this multiple times.
> It's not so simple.
> 1. Alpha doesn't necessarily mean transparently, that's just one
> (extremely common) use
> 2. 1 is the multiplication identity, but 0 is the additive identity. I
> cant find either addition or multiplication to take precedence over
> eachother.
> 3. With those 2 points above already making me feel a bit worried
> about it, I think that appearing numbers out of nowhere is just
> generally bad, so I went with a=0.
> If alpha defaults to 1, then a*b works as expected (object does not
> become transparent by this operation).
> If alpha defaults to 0, then b-a works as expected (object does not
> become transparent by this operation).

Additive color components can work like that, but the alpha component just doesn't represent an amount of light. Don't try too hard to make the arithmetic make sense when the practical use lies in 100% alpha by default. Converting every color to fully transparent is not helpful when you want to save a 24-bit BMP as 32-bit PNG or get an RGBA color from an RGB texture.

I don't know who declared that 100% alpha means opaque. For what it's worth it could have been the other way around and all the (1-a) and (a) in blending functions swapped. And a blending function is what is needed here to give any meaning at all to arithmetic on alpha. An additive one may be:

    color = color_dst + color_src * alpha_src
    alpha = alpha_dst

> With that in mind, you realise alpha will *always* require explicit
> handling... so I think 0 is a better default at that stage. It may be
> possible to convince me otherwise ;)

I fully agree with the first line, I just think that for practical uses alpha is considered to be 100% in pixel formats without an alpha channel. Otherwise the most popular blending functions would do nothing on RGB colors.
SDL for example also uses 100% alpha by default: "If the specified pixel format has an alpha component it will be returned as all 1 bits (fully opaque)." > > Cool stuff there in the colorspace module, to get started with > > getting photos or DV/MPEG into a linear color space. Should > > BT.709 also have prefab conversion methods? It would seem HD > > video is more wide spread nowadays. > > BT.709's gamma func isn't changed from 601. Rec.2020 introduced a > higher precision version. > > There is a typo in "Function that converts a linear luminance > > to gamme space." (colorspace.d) > > Thx. But please log comments like this on the PR so I don't lose track > of them? :) Will do! > > It seems inconsistent that you can specify all kinds of > > exponents when the exponent is shared but are forced to 5 bit > > exponents when mixed into each component, while the mantissa > > can still be adjusted. Would m7e3 have worked? (With f* being > > the IEEE standard formats.) > > I was thinking that, and mentioned as such in the comments near those lines ;) > Thing is, there are no realtime small floats with exponent != 5bits > except the unique 7e3 format on xbox 360. > I can extend this in an unbreaking way. > I might: f## = ieee-like, s##e# = signed float, (u)##e# = unsgned float. > It's just pretty verbose is all: "rgb_11e5_11e5_10e5". I mean, it's > fine... but maybe people wouldn't find this intuitive? Many > programmers don't know what an 'exponent' or 'mantissa' is... :/ Yeah I thought that the common formats would be the IEEE ones, especially f16 and that anything less are very niche custom formats that come and go and need verbose descriptions. Well, either way works. > > The final package should contain a fair amount of > > documentation, in my opinion also on the topics of sRGB vs. > > linear and what "hybrid gamma" is. > > Yeah, but is it really my job to teach someone about colour space? > Wikipedia can do that much better than I can... 
:/ > I agree, but I don't really know where to draw the line, or how to organise > it. > > Perhaps people with strong opinions on how this should be presented > can create a PR? Alright, but hybrid gamma is really not something that can be googled. Or rather I end up at Toyota's Gamma Hybrid product page. :D And as always, don't let the turkeys get you down. -- Marco
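The argument about the default can be sketched in code. The `RGBA` type and `addBlend` below are mine (not from the library under review); they implement the additive blend quoted earlier, with alpha defaulting to fully opaque:

```d
// Hypothetical pixel type whose alpha defaults to 100% (opaque).
struct RGBA
{
    float r = 0, g = 0, b = 0;
    float a = 1.0f;
}

// color = color_dst + color_src * alpha_src; alpha = alpha_dst
RGBA addBlend(RGBA dst, RGBA src)
{
    return RGBA(dst.r + src.r * src.a,
                dst.g + src.g * src.a,
                dst.b + src.b * src.a,
                dst.a);
}
```

With `a = 1.0f` as the default, a source color converted from an alpha-less RGB texture contributes its full intensity; had the default been `a = 0`, the same blend would silently be a no-op.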
Re: colour lib needs reviewers
Am Mon, 12 Sep 2016 11:31:13 + schrieb Guillaume Chatelet:

> On Monday, 12 September 2016 at 04:14:27 UTC, Manu wrote:
> > I think I'm about as happy with my colour lib as I'm going to
> > be. It really needs reviews.
> >
> > I added packed-RGB support, including weird micro-float and
> > shared-exponent formats.
> > They're important for interacting with any real-time rendering
> > libraries.
> > There is only one texture format I'm aware of that isn't
> > supported,
> > and that is u7e3 floats, only available on Xbox360 ;)
> >
> > Chromatic adaptation functions are in.
> > I've done a tidy/reorg pass.
> >
> > I'm not sure what else should be in the scope of this lib.
> >
> > PR: https://github.com/dlang/phobos/pull/2845
> > dub: https://code.dlang.org/packages/color
> > docs:
> > http://dtest.thecybershadow.net/artifact/website-04a64e024cf75be39700bebd3a50d26f6c7bd163-7185c8ec7b15c9e785880cab6d512e6f/web/library-prerelease/std/experimental/color.html
> >
> > - Manu
>
> And also, do you handle limited (16-235) to full dynamic range
> (0-255) conversions?
>
> It's a setting some acquisition/graphic card allow.
> https://wiki.videolan.org/VSG:Video:Color_washed_out/
>
> [...]

Practically all video content is encoded with the limited range, from VHS to HDTV, and IIRC HiFi equipment uses it as well. Only computers are not so familiar with it, and when connected to a TV over HDMI one or the other may think that the other part thinks ... well, colors are often washed out, in effect. On the other hand this luminance compression can apply to any color format, so it is probably best to leave it to the user to compress colors if the output is a video file.

-- Marco
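The expansion itself is a one-liner. A minimal sketch of the limited-range (16-235) to full-range (0-255) mapping for a luma byte, with out-of-range inputs clamped (function name is mine):

```d
ubyte limitedToFull(ubyte y)
{
    import std.algorithm.comparison : clamp;

    // 16 maps to 0 and 235 maps to 255; anything outside is clamped.
    int full = (y - 16) * 255 / 219;
    return cast(ubyte) clamp(full, 0, 255);
}
```

The inverse (compressing full range back into 16-235 for video output) is the same formula solved for `y`, which is what a user would apply themselves under the division of labor suggested above.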
Re: colour lib needs reviewers
Am Mon, 12 Sep 2016 14:14:27 +1000 schrieb Manu via Digitalmars-d:

> I think I'm about as happy with my colour lib as I'm going to be.
> It really needs reviews.
>
> I added packed-RGB support, including weird micro-float and
> shared-exponent formats.
> They're important for interacting with any real-time rendering libraries.
> There is only one texture format I'm aware of that isn't supported,
> and that is u7e3 floats, only available on Xbox360 ;)
>
> Chromatic adaptation functions are in.
> I've done a tidy/reorg pass.
>
> I'm not sure what else should be in the scope of this lib.
>
> PR: https://github.com/dlang/phobos/pull/2845
> dub: https://code.dlang.org/packages/color
> docs:
> http://dtest.thecybershadow.net/artifact/website-04a64e024cf75be39700bebd3a50d26f6c7bd163-7185c8ec7b15c9e785880cab6d512e6f/web/library-prerelease/std/experimental/color.html
>
> - Manu

Nice work. What's u7e3? 3*10-bit-floats + 2-bit padding?

In the example for convertColor you show that converting from a color space without alpha (XYZ) to one with alpha (RGBA) turns the pixel transparent. I believe you meant this as something to look out for when converting colors. What's the reasoning for not setting maximum alpha as a more useful and natural default?

Cool stuff there in the colorspace module, to get started with getting photos or DV/MPEG into a linear color space. Should BT.709 also have prefab conversion methods? It would seem HD video is more widespread nowadays.

There is a typo in "Function that converts a linear luminance to gamme space." (colorspace.d)

I like how RGB color spaces are described as primaries in xyY + gamma functions.

It seems inconsistent that you can specify all kinds of exponents when the exponent is shared but are forced to 5 bit exponents when mixed into each component, while the mantissa can still be adjusted. Would m7e3 have worked? (With f* being the IEEE standard formats.)
The final package should contain a fair amount of documentation, in my opinion also on the topics of sRGB vs. linear and what "hybrid gamma" is. What is your stance on pixel formats that are arranged in planes? I assume that e.g. YUV is handled by a single channel "sRGB luminance" + 2 signed 8-bit color planes? It may be worth adding some hints in that direction. All in all good work with a scientific feel to it. -- Marco
[OT] Re: Let's kill 80bit real at CTFE
Am Sun, 11 Sep 2016 15:00:12 +1000 schrieb Manu via Digitalmars-d:

> On 9 September 2016 at 21:50, Stefan Koch via Digitalmars-d
> wrote:
> > Hi,
> >
> > In short 80bit real are a real pain to support cross-platform.
> > emulating them in software is prohibitively slow, and more importantly hard
> > to get right.
> > 64bit floating-point numbers are supported on more architectures and are
> > much better supported.
> > They are also trivial to use at ctfe.
> > I vote for killing the 80bit handling at constant folding.
> >
> > Destroy!
>
> I just want CTFE '^^'. Please, can we have that?
> It's impossible to CTFE any non-linear function. It's ridiculous, I
> constantly want to generate a curve at compile time!

I have experimented with a few iterative algorithms from around the web that are now in my module for "random stuff not in Phobos":

import std.math : abs, sqrt, PI, PI_2, LN2;

/***
 * Computes the arc tangent at compile time.
 **/
real ctfeAtan(real x)
in { assert(x == x && abs(x) != real.infinity); }
body
{
    if (abs(x) == 0) return x;

    // Reduce x to <0.5 for effective convergence of the Taylor series.
    x /= 1 + sqrt(1 + x * x);
    x /= 1 + sqrt(1 + x * x);

    // Sum up Taylor series to compute atan().
    immutable xSqr = -x * x;
    real mul = x;
    real div = 1;
    real x_old;
    do
    {
        x_old = x;
        mul *= xSqr;
        div += 2;
        x += mul / div;
    } while (x !is x_old);

    // Compensate for the initial reduction by multiplying by 4.
    return 4 * x;
}

/***
 * Computes the arc sine at compile time.
 **/
real ctfeAsin(real x)
in { assert(x.isWithin(-1, +1)); } // isWithin is a helper from the same module
body
{
    if (abs(x) == 0) return x;
    immutable div = 1 - x * x;
    return x / abs(x) * (div == 0 ? PI / 2 : ctfeAtan(sqrt(x * x / div)));
}

/***
 * Computes `x` to the power of `y` at compile-time.
 *
 * Params:
 *   x = The base value.
 *   y = The power.
 *
 * Source:
 *   http://stackoverflow.com/a/7710097/4038614
 **/
@safe @nogc pure nothrow
real ctfePow(real x, real y)
{
    if (y >= 1)
    {
        real temp = ctfePow(x, y / 2);
        return temp * temp;
    }
    real low = 0, high = 1;
    real sqr = sqrt(x);
    real acc = sqr;
    real mid = high / 2;
    while (mid != y)
    {
        sqr = sqrt(sqr);
        if (mid <= y)
        {
            low = mid;
            acc *= sqr;
        }
        else
        {
            high = mid;
            acc *= 1 / sqr;
        }
        mid = (low + high) / 2;
    }
    return acc;
}

/***
 * Computes the natural logarithm of `x` at compile time.
 **/
@safe @nogc pure nothrow
FloatReg ctfeLog(FloatReg x)
{
    if (x != x || x < 0) return FloatReg.nan;
    else if (x == 0) return -FloatReg.infinity;
    else if (x == FloatReg.infinity) return +FloatReg.infinity;

    uint m = 0;
    while (x <= ulong.max)
    {
        x *= 2;
        m++;
    }

    @safe @nogc pure nothrow
    static FloatReg agm(FloatReg x, FloatReg y)
    in { assert(x >= y); }
    body
    {
        real a, g;
        do
        {
            a = x;
            g = y;
            x = 0.5 * (a + g);
            y = sqrt(a * g);
        } while (x != a || y != g);
        return x;
    }

    return PI_2 / agm(1, 4 / x) - m * LN2;
}

Especially the `log` function seemed like a good compromise between execution speed and accuracy. FloatReg is "the native float register type", but the way CTFE works it should be `real` anyway. The code is mostly not by me, but from Stack Overflow comments and Wikipedia. Most of it is common knowledge to every mathematician.

-- Marco
Re: Struct default constructor - need some kind of solution for C++ interop
Am Fri, 09 Sep 2016 14:46:31 + schrieb Ethan Watson:

> […]
>
> First and foremost, resources are processed offline to match the
> ideal binary format for the target platform. The industry has
> been using DXT textures for over a decade now, and they've been
> supported on consoles. The overwhelming majority of textures are
> thus baked in to such a format. Whichever format is chosen, on
> disk the file will essentially represent the resource's final
> layout in memory.

I understand that.

> Second, file loading. You can't just go loading files any old
> time you want in a streaming-based, or even just straight up
> multithreaded, engine if you expect to keep within performance
> targets and not lock up every thread you've created. They need
> scheduling. Thus, resource creation needs to go through several
> steps:
>
> * Something requests a resource, goes to sleep
> * File loader schedules appropriately, notifies on load complete
> * Object gets resource load notification, does work to hook it up
> to whatever API needs it

...and the objects are probably created ahead of time in a pool, to avoid allocations? In such a scheme it is only natural not to have I/O in ctors. But what about the parts of the code that handle the game initialization before streaming starts? Is there no

    config = new GameConfig("settings.ini");

or

    db = new AssetDatabase("menu.pkg");

that performs I/O during construction and potentially displays an exception's error message?

-- Marco
Re: @nogc hash
Am Fri, 09 Sep 2016 11:35:58 + schrieb Guillaume Piolat:

> On Friday, 9 September 2016 at 11:16:19 UTC, Marco Leise wrote:
> > It does not /allocate/ with the GC, but the methods are
> > not /annotated/ @nogc, e.g. insert():
> > https://github.com/economicmodeling/containers/blob/master/src/containers/hashmap.d#L338
> > It's plain simple not possible out of the box* at the moment.
> >
> > * writing your own hash function that covers every type is NOT
> > "out of the box" :p
>
> But they are template functions.
> Maybe this is a case of
> https://p0nce.github.io/d-idioms/#Automatic-attribute-inference-for-function-templates
> then?

Sorry, I wasn't aiming at the lack of attribute inference for non-templated functions inside templates :p. Yes, that's why some of the methods in the EMSI containers are annotated, and I'm aware of that. The point is that TypeInfo_*.getHash() cannot be @nogc, because a class or struct may implement a non-@nogc toHash(). (Though I may add that e.g. TypeInfo_Pointer.getHash() *could* be @nogc, since all it returns is `cast(size_t)cast(void*)p`.) The alternatives in core.internal.hash are supposed to be CTFE-able, and although bytesHash itself does not seem to use the GC, the wrapper functions for different types may. And that is why @nogc hash table implementations don't fly at the moment.

-- Marco
Re: Struct default constructor - need some kind of solution for C++ interop
Am Thu, 08 Sep 2016 07:52:58 + schrieb Ethan Watson: > On Wednesday, 7 September 2016 at 21:05:32 UTC, Walter Bright > wrote: > > 5. In my not-so-humble opinion, construction should never fail > > and all constructors should be nothrow, but I understand that > > is a minority viewpoint > > 100% agree there. I can't think of any class in our C++ codebase > that fails construction, and it's a pretty common rule in the > games industry to not assert/throw exceptions during construction. > > Of course, with Binderoo being open sourced, I can't guarantee > any of my end users will be just as disciplined. So when you have an object that reads state from a file, you first construct it and then call a member function "loadFromFile()" that may throw? For argument's sake let's take a *.bmp class. That one would not have a constructor with a filename? Or do you have such constructors and I/O exceptions are just logged and swallowed? I'd like to understand how it would behave, since obviously both you and Walter have written large software products and personally I prefer to attempt construction and rollback on errors until I'm back in the state where the user initiated the action. What is the benefit? Well, one benefit is that you are forced to write error concealment code and make the best out of partially broken input. Others? Out-of-memory and invalid arguments would be other examples of exceptions, but I guess the Errors thrown from asserts don't count as exceptions in the regular sense, since by definition they are non-recoverable. -- Marco
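The two construction styles under discussion can be sketched side by side. The `Bitmap` type below is hypothetical (the actual file I/O is elided; `enforce` stands in for whatever validation would throw):

```d
import std.exception : enforce;

class Bitmap
{
    ubyte[] pixels;

    // Style A (construct-and-throw): the constructor does the work and
    // throws on failure, so a Bitmap either exists fully loaded or not at all.
    this(string path)
    {
        enforce(path.length != 0, "no file name given");
        // pixels = read and decode `path` here ...
    }

    // Style B (two-phase): a trivial, never-failing constructor plus a
    // separate load step that may throw.
    this() {}

    void loadFromFile(string path)
    {
        enforce(path.length != 0, "no file name given");
        // ...
    }
}
```

Style A matches the "attempt construction and roll back on errors" approach described above; Style B is what a no-throwing-constructors rule forces, at the cost of a window where the object exists but holds no data.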
Re: Templates do maybe not need to be that slow (no promises)
Am Fri, 09 Sep 2016 10:32:59 + schrieb Stefan Koch: > On Friday, 9 September 2016 at 09:31:37 UTC, Marco Leise wrote: > > > > Don't worry about this special case too much. At least GCC can > > turn padLength from a runtime argument into a compile-time > > argument itself, so the need for templates to do a poor man's > > const-folding is reduced. So in this case the advise is not to > > use a template. > > This is not what this is about. > This is about cases where you cannot avoid templates because you > do type-based operations. > > The code above was just an example to illustrate the problem. Fair enough. I hope there is a less complex solution that all compilers could benefit from. -- Marco
Re: @nogc hash
Am Fri, 09 Sep 2016 10:52:54 + schrieb Guillaume Piolat: > On Friday, 9 September 2016 at 10:16:09 UTC, Marco Leise wrote: > > it is - AFAICT - not possible to insert an > > element into a hash table in @nogc code. > > It is with http://code.dlang.org/packages/emsi_containers It does not /allocate/ with the GC, but the methods are not /annotated/ @nogc, e.g. insert(): https://github.com/economicmodeling/containers/blob/master/src/containers/hashmap.d#L338 It's plain simple not possible out of the box* at the moment. * writing your own hash function that covers every type is NOT "out of the box" :p -- Marco
@nogc hash
What is the way forward with @nogc and hash tables? At the moment it is - AFAICT - not possible to insert an element into a hash table in @nogc code. Aggregate types work since you can always implement a @nogc toHash and look for that, but other types don't. Or am I missing something?

-- Marco
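The workaround for aggregates mentioned above looks roughly like this. `Key` and `bucketOf` are made-up names; the point is that a user-supplied `toHash` carrying the @nogc attribute can be called directly by a container, bypassing the non-@nogc `TypeInfo.getHash` path:

```d
struct Key
{
    int id;
    short kind;

    // A @nogc-annotated hash that a container can detect and call directly.
    size_t toHash() const @safe @nogc pure nothrow
    {
        return (cast(size_t) id << 16) ^ kind;
    }
}

// Stand-in for what a @nogc hash table would do internally.
@safe @nogc pure nothrow
size_t bucketOf(Key k, size_t nBuckets)
{
    return k.toHash() % nBuckets;
}
```

For an `int` or `int*` key there is no `toHash` to find, which is exactly the gap the post describes.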
Re: Templates do maybe not need to be that slow (no promises)
Am Fri, 09 Sep 2016 07:56:04 + schrieb Stefan Koch:

> Hi Guys,
>
> I keep this short.
> There seems to be much more headroom then I had thought.
>
> The Idea is pretty simple.
>
> Consider :
> int fn(int padLength)(int a, int b, int c)
> {
>    /**
>    very long function body 1000+ lines
>    */
>    return result * padLength;
> }
>
> This will produce roughly the same code for every instaniation
> expect for one imul at the end.
>
> […]

Don't worry about this special case too much. At least GCC can turn padLength from a runtime argument into a compile-time argument itself, so the need for templates to do a poor man's const-folding is reduced. So in this case the advice is not to use a template.

You said that there is a lot of code-gen and string comparisons going on. Is code-gen already invoked on-demand? I assume with "dmd -o-" code-gen is completely disabled, which is great for ddoc, .di and dependency graph generation.

-- Marco
Re: CompileTime performance measurement
Am Tue, 06 Sep 2016 05:02:54 + schrieb timepp: > On Sunday, 4 September 2016 at 04:24:34 UTC, rikki cattermole > wrote: > > void writeln(T...)(T args) { > > if (__ctfe){ > > debug { > > __ctfeWriteln(args); > > } > > } else { > > // ... current implementation > > } > > } > > > > Are you sure? > any usage example? > > consider a normal usage: > writeln("done."); > > I just want a runtime output. how can I tell the compiler not to > print "done." at compile time? If you actually call a function during compile-time that uses writeln and want it to be silent you would write: if (!__ctfe) writeln("done."); -- Marco
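The pattern generalizes: `__ctfe` is an ordinary runtime-style condition (not `static if`), and CTFE only evaluates the branch it actually takes, so the run-time-only `writeln` never bothers the interpreter. A minimal sketch:

```d
import std.stdio : writeln;

int compute()
{
    if (__ctfe)
        return 6 * 7;                  // silent path used during CTFE
    writeln("computing at run time");  // only ever printed at run time
    return 6 * 7;
}

enum atCompileTime = compute(); // evaluated by CTFE, prints nothing
```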
Re: Fallback 'catch-all' template functions
Am Sat, 3 Sep 2016 02:56:16 -0700 schrieb Walter Bright: > On 9/3/2016 2:43 AM, Manu via Digitalmars-d wrote: > > This is interesting. Can you explain how that works? > > > Specializations are preferred over non-specializations, and T:T is the > identity > specialization. Pretty cool, I'll try that next time I write templated overload sets! -- Marco
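As I understand Walter's trick, it would look something like this (my own sketch, with made-up constraints): the `T : T` versions are specializations, so they win whenever their constraint passes, leaving the plain template as the catch-all:

```d
import std.traits : isFloatingPoint, isIntegral;

string f(T)(T t)        { return "fallback"; } // catch-all, non-specialized
string f(T : T)(T t) if (isIntegral!T)      { return "integral"; }
string f(T : T)(T t) if (isFloatingPoint!T) { return "floating"; }
```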
Re: Fallback 'catch-all' template functions
Am Sat, 3 Sep 2016 14:24:13 +0200 schrieb Andrei Alexandrescu:

> On 9/3/16 7:29 AM, Manu via Digitalmars-d wrote:
> > On 3 September 2016 at 11:38, Andrei Alexandrescu via Digitalmars-d
> > wrote:
> >> On 9/3/16 2:41 AM, Manu via Digitalmars-d wrote:
> >>> On 3 September 2016 at 00:18, Xinok via Digitalmars-d
> >>> wrote:
> >>>> In the past, I have suggested using the "default" keyword to specify a
> >>>> fallback function of this kind. I think it's a useful pattern for generic
> >>>> algorithms that have optimized variants on specific types for
> >>>> performance.
> >>>>
> >>>> void f(T)(T t) if(isSomething!T) {}
> >>>> void f(T)(T t) if(isSomethingElse!T) {}
> >>>> void f(T)(T t) default {}
> >>>
> >>> It's an interesting idea... flesh out a DIP?
> >>
> >> We're better off without that. -- Andrei
> >
> > Then we need a decent way to do this.
>
> Use static if inside the function. The entire notion of "call this
> function if you can't find something somewhere that works" is
> questionable. -- Andrei

This notion is what drives template specializations in D:

    char foo(T)(T t)          { return '?'; } // Call this if nothing else matches
    char foo(T : int)(T t)    { return 'i'; } // Call this for ints
    char foo(T : string)(T t) { return 's'; } // Call this for strings

Granted it's a bit more fuzzy and talks about "more specialized", but it is ultimately the same as a hard fallback when foo(3.4) is called. UFCS functions are also only invoked if there is no method with that name on the type, or any type reachable through "alias this" or opDot, and there is no opDispatch on said types accepting that name. However questionable the notion is, it is common in D today.

Static if also won't work in cases where the signature needs to be different, like overloads of opCast, where one returns bool and is const and another returns a different view on the same thing by ref and is inout. opCast(T:bool) would be unclean, as it also matches types T with a boolean 'alias this'.
One can also imagine that some overloads want to take their arguments by value while others take them by ref.

-- Marco
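The opCast case from above, sketched on a made-up type: the two overloads need different return types and different qualifiers, so no single function body with `static if` could express both.

```d
struct Handle
{
    int* payload;

    // Truthiness check: const, returns bool.
    bool opCast(T : bool)() const { return payload !is null; }

    // A different view on the same data: inout, returns by ref.
    ref inout(int) opCast(T : int)() inout { return *payload; }
}
```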
Re: colour lib
Am Sat, 3 Sep 2016 16:01:26 +1200 schrieb rikki cattermole:

> On 03/09/2016 12:17 PM, Manu via Digitalmars-d wrote:
> ...snip...
> > I think the presence of all this colour space information as type
> > arguments should nudge users in the right direction. They'll be all
> > "I've never seen this parameter before..." and google it... maybe.
> > I don't think it's the std lib doco's job to give users a lesson in
> > colour theory...? :/

:D Just a short paragraph.

"Note: The common sRGB color space used in computer screens or JPEGs from digicams does not evenly distribute brightness along the pixel values. Multiplying an sRGB pixel by two won't double its intensity! To perform accurate image manipulations, you are advised to always convert to a high precision linear color-space like [insert name] first. More information on the history of sRGB and the formulas used can be found here: https://www.w3.org/Graphics/Color/sRGB"

I believe sRGB is the only color-space that you "feel" you are already familiar with as a computer person, because you see and use it all the time.

If I really wanted to make you pull your hair I'd say: "Man, this NormalizedInt stuff would so benefit from MMX. Imagine we don't use float but ushort as linear RGB value and then use a single PADDUSW to add two colors."

> Something[0] along this line perhaps?
>
> Overview of the choices and scope along with reasoning.
>
> [0]
> https://github.com/rikkimax/alphaPhobos/blob/master/source/std/experimental/graphic/image/specification.dd

I have not found text about color spaces there, but it is an interesting collection of API design rationales. "If it mutates it may throw. If it doesn't mutate it shouldnt." I never thought about it that way.

-- Marco
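For readers who want to check the "multiplying by two won't double intensity" claim themselves, these are the standard sRGB transfer functions (the official piecewise formulas with the linear segment near black; function names are mine):

```d
import std.math : pow;

double srgbToLinear(double c)
{
    return c <= 0.04045 ? c / 12.92
                        : pow((c + 0.055) / 1.055, 2.4);
}

double linearToSrgb(double c)
{
    return c <= 0.0031308 ? c * 12.92
                          : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}
```

An sRGB value of 0.5 corresponds to a linear intensity of only about 0.214, so doubling it to 1.0 more than quadruples the light emitted, which is exactly why arithmetic should happen in the linear space.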