Re: Thin UTF8 string wrapper
On Friday, 6 December 2019 at 16:48:21 UTC, Joseph Rushton Wakeling wrote: Hello folks, I have a use-case that involves wanting to create a thin struct wrapper of underlying string data (the idea is to have a type that guarantees that the string has certain desirable properties). The string is required to be valid UTF-8. The question is what the most useful API is to expose from the wrapper: a sliceable random-access range? A getter plus `alias this` to just treat it like a normal string from the reader's point of view? One factor that I'm not sure how to address w.r.t. a full range API is how to handle iterating over elements: presumably they should be iterated over as `dchar`, but how to implement a `front` given that `std.encoding` gives no way to decode the initial element of the string that doesn't also pop it off the front? I'm also slightly disturbed to see that `std.encoding.codePoints` requires `immutable(char)[]` input: surely it should operate on any range of `char`? I'm inclining towards the "getter + `alias this`" approach, but I thought I'd throw the problem out here to see if anyone has any good experience and/or advice. Thanks in advance for any thoughts! All the best, -- Joe

Good questions. I don't have answers to them all but I hope this information is helpful. I use wrapper structs to represent properties in this way as well. For example, my "mar" library has the SentinelPtr and SentinelArray types, which guarantee that the underlying pointer and/or array is terminated by some value (i.e. like a null-terminated C string). If I'm creating and using these wrapper types inside a self-contained program then I don't really care about API compatibility, so I would use a simple, powerful mechanism like "alias this". For libraries where the API boundary is important, I implement the most limited API I can. The reason for this is that it allows you to see all possible interactions with the type.
This way, when you need to change the API you know all the existing ways it can be interacted with, and you can iterate on the API design appropriately. This is the case for SentinelPtr and SentinelArray. For this case I only implement the operations I know are being used, and I made this easy by creating a simple module I call "wrap.d" (https://github.com/dragon-lang/mar/blob/master/src/mar/wrap.d). If you have a struct that wraps a string and guarantees it's UTF8 encoded, wrap.d lets you declare that it's a wrapper type and allows you to mixin the operations you want to expose like this:

    struct Utf8String
    {
        private string str;
        import mar.wrap;

        // This verifies that the size of the wrapper struct and the underlying
        // field are the same, and creates the wrappedValueRef method that the
        // other wrapper mixins use to access the underlying wrapped value.
        mixin WrapperFor!"str";

        // Now you can mixin different operations, for example:
        mixin WrapOpCast;
        mixin WrapOpIndex;
        mixin WrapOpSlice;
    }

On the topic of immutable(char)[] vs const(char)[]: if a function takes const data, I take it to mean that the function won't change the data. If it takes immutable data, I take it to mean that the function won't change it AND the caller must ensure the data won't change while the function has it. However, in practice, functions that require immutable data still declare their data as "const" instead of "immutable". I think this is because declaring it as immutable would require extra boilerplate all over your code to cast data to immutable all the time. So most functions end up using const even though they require immutable.
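On the `front` question from the original post: unlike `std.encoding`, `std.utf.decode` takes the index by `ref` and never touches the slice itself, so a non-destructive `front` falls out naturally. A minimal sketch (illustrative only, not part of mar's wrap.d; the struct name is made up):

```d
import std.utf : decode;

struct Utf8Range
{
    private string str;

    @property bool empty() const { return str.length == 0; }

    // Decode the first code point without consuming it: decode() only
    // advances the local index, the slice is left alone.
    @property dchar front() const
    {
        size_t index = 0;
        return decode(str, index);
    }

    // popFront re-decodes to learn how many bytes the first code point used.
    void popFront()
    {
        size_t index = 0;
        decode(str, index);
        str = str[index .. $];
    }
}

void main()
{
    auto r = Utf8Range("héllo");
    assert(r.front == 'h');
    r.popFront();
    assert(r.front == 'é');   // two UTF-8 bytes decoded as one dchar
    r.popFront();
    assert(r.front == 'l');
}
```

`decode` throws a `UTFException` on invalid input, which a wrapper that already guarantees valid UTF-8 could skip by using the `Yes.useReplacementDchar` flavor or a trusted fast path.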
Re: Any 3D Game or Engine with examples/demos which just work (compile) out of the box on linux ?
On Friday, 18 October 2019 at 06:11:37 UTC, Ferhat Kurtulmuş wrote: On Friday, 18 October 2019 at 05:52:19 UTC, Prokop Hapala wrote: Already >1 year I consider to move from C++ to Dlang or to Rust in my hobby game development (mostly based on physical simulations https://github.com/ProkopHapala/SimpleSimulationEngine). I probably prefer Dlang because it compiles much faster, and I can copy C/C++ code to it without much changes. [...] I cannot make any comment for others. But Dagon should work. I wrote a little demo game some time ago https://github.com/aferust/dagon-shooter. I didn't try to compile and run it on Linux. I think you need to have a nuklear.so in your path, since Bindbc loaders try to load dynamic libraries by default.

This is what I get when I clone dagon-shooter and build it with "dub":

    WARNING: A deprecated branch based version specification is used for the dependency dagon. Please use numbered versions instead. Also note that you can still use the dub.selections.json file to override a certain dependency to use a branch instead.
    WARNING: A deprecated branch based version specification is used for the dependency bindbc-soloud. Please use numbered versions instead. Also note that you can still use the dub.selections.json file to override a certain dependency to use a branch instead.
    Performing "debug" build using C:\tools\dmd.2.088.1.windows\dmd2\windows\bin\dmd.exe for x86_64.
    bindbc-loader 0.2.1: target for configuration "noBC" is up to date.
    bindbc-soloud ~master: target for configuration "library" is up to date.
    bindbc-opengl 0.8.0: target for configuration "dynamic" is up to date.
    bindbc-sdl 0.8.0: target for configuration "dynamic" is up to date.
    dlib 0.17.0-beta1: target for configuration "library" is up to date.
    dagon ~master: target for configuration "library" is up to date.
    dagon-shooter ~master: building configuration "application"...
    source\enemy.d(10,1): Error: undefined identifier EntityController
    source\enemy.d(16,19): Error: function enemyctrl.EnemyController.update does not override any function
    source\enemy.d(48,1): Error: undefined identifier EntityController
    source\enemy.d(77,19): Error: function enemyctrl.BoomController.update does not override any function
    source\mainscene.d(80,17): Error: undefined identifier LightSource
    source\mainscene.d(82,21): Error: undefined identifier FirstPersonView
    source\mainscene.d(93,16): Error: undefined identifier NuklearGUI
    source\mainscene.d(95,15): Error: undefined identifier FontAsset
    source\mainscene.d(102,5): Error: undefined identifier SceneManager
    source\mainscene.d(128,19): Error: function mainscene.MainScene.onAssetsRequest does not override any function
    source\mainscene.d(198,19): Error: function mainscene.MainScene.onAllocate does not override any function
    source\mainscene.d(469,19): Error: function void mainscene.MainScene.onUpdate(double dt) does not override any function, did you mean to override void dagon.resource.scene.Scene.onUpdate(Time t)?
    source\mainscene.d(541,1): Error: undefined identifier SceneApplication
    C:\tools\dmd.2.088.1.windows\dmd2\windows\bin\dmd.exe failed with exit code 1.
Re: Is betterC affect to compile time?
On Thursday, 25 July 2019 at 12:46:48 UTC, Oleg B wrote: On Thursday, 25 July 2019 at 12:34:15 UTC, rikki cattermole wrote: Those restrictions don't stop at runtime. It's very sad. What is the reason for such restrictions? Is it a fundamental idea or a temporary implementation? Yes, it is very sad. It's an implementation thing. I can guess as to a couple of reasons why it doesn't work, but I think there are a few big ones that contribute to not being able to use certain features at compile-time without having it introduce things at runtime.
Re: Mixin mangled name
On Monday, 1 July 2019 at 19:40:09 UTC, Andrey wrote: Hello, Is it possible to mixin in code a mangled name of some entity so that the compiler doesn't emit an undefined symbol error? For example a mangled function name or template parameter? If you've got an undefined symbol "foo", you could just add this to one of your modules:

    extern (C) void foo() { }
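Two related tools are worth knowing here (a sketch around the stub above, not necessarily what the original poster needs): `.mangleof` gives you a symbol's mangled name as a compile-time string, and `pragma(mangle, ...)` pins an exact symbol name without renaming the D-side function:

```d
// The stub from the post: extern(C) means no D name mangling, so the
// symbol is (modulo a platform-dependent leading underscore) just "foo".
extern (C) void foo() { }

int bar(int x) { return x + 1; }

// .mangleof is a compile-time string, so it can be mixed in or emitted
// wherever a mangled name is needed.
static assert(bar.mangleof[0 .. 2] == "_D");  // D-mangled names start with _D
pragma(msg, "bar mangles to: " ~ bar.mangleof);

// pragma(mangle) goes the other way: force a specific mangled name
// (the symbol name here is purely illustrative).
pragma(mangle, "some_exact_symbol")
void baz() { }

void main()
{
    assert(bar(1) == 2);
}
```

So if the goal is to satisfy a specific undefined mangled name, `pragma(mangle, "...")` on a stub may be cleaner than hand-writing an `extern` declaration that happens to mangle the same way.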
Re: make C is scriptable like D
On Thursday, 20 June 2019 at 06:20:17 UTC, dangbinghoo wrote: hi there, a funny thing:

    $ cat rgcc
    #!/bin/sh
    cf=$@
    mycf=__`echo $cf|xargs basename`
    cat $cf | sed '1d' > ${mycf}
    gcc ${mycf} -o a.out
    rm ${mycf}
    ./a.out

    $ cat test.c
    #!/home/user/rgcc
    #include <stdio.h>
    int main() { printf("hello\n"); }

And then:

    chmod +x test.c
    ./test.c

outputs hello. is rdmd implemented similarly? thanks! binghoo

rdmd adds a few different features as well, but the bigger thing it does is cache the results in a global temporary directory. So if you run rdmd on the same file with the same options twice, the second time it won't compile anything; it will detect that it was already compiled and just run it.
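The caching idea boils down to hashing the inputs into a cache key. A loose sketch of the concept (the function name and key contents are illustrative — rdmd's real key also involves the compiler version, timestamps, and discovered dependencies):

```d
import std.digest.md : md5Of;
import std.digest : toHexString;

// Derive a stable cache key from the source text and the compiler flags:
// identical invocations map to the same key, so the previously built
// binary can simply be re-run instead of recompiled.
string cacheKey(string sourceText, string[] flags)
{
    string blob = sourceText;
    foreach (flag; flags)
        blob ~= "\0" ~ flag;   // NUL separator keeps "a b" and "ab" distinct
    return md5Of(blob).toHexString.idup;
}

void main()
{
    auto a = cacheKey("void main(){}", ["-O"]);
    auto b = cacheKey("void main(){}", ["-O"]);
    auto c = cacheKey("void main(){}", ["-O", "-release"]);
    assert(a == b);   // same source + flags: cache hit, just run the binary
    assert(a != c);   // different flags: separate cache entry
}
```

The cached binary then lives under a temp directory named by the key, which is why the shell-script version above (recompiling on every run) is the part rdmd improves on.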
dmd -nodefaultlibs?
Is there a way to prevent dmd from adding any default libraries to its linker command? Something equivalent to "-nodefaultlibs" from gcc? https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html I'd still like to use the dmd.conf file, so I don't want to use "-conf="
Re: D Logic bug
On Thursday, 11 October 2018 at 23:29:05 UTC, Steven Schveighoffer wrote: On 10/11/18 7:17 PM, Jonathan Marler wrote: I had a look at the table again, looks like the ternary operator is on there, just called the "conditional operator". And to clarify, D's operator precedence is close to C/C++ but doesn't match exactly. This is likely a result of the grammar differences rather than an intentional one. For example, the "Conditional operator" in D actually has a higher priority than an assignment, but in C++ it's the same and is evaluated right-to-left. So this expression would be different in C++ and D: Not in my C/D code. It would have copious parentheses everywhere :) Good :) That case is actually very strange, I don't know if it's something that's really common. Yes, that explains why myself, Jonathan Davis and certainly others didn't know there were actually differences between C++ and D operator precedence :) I wasn't sure myself, but having a quick look at each one's operator precedence table made it easy to find an expression that behaves differently in both.
Re: D Logic bug
On Thursday, 11 October 2018 at 21:57:00 UTC, Jonathan M Davis wrote: On Thursday, October 11, 2018 1:09:14 PM MDT Jonathan Marler via Digitalmars-d wrote: On Thursday, 11 October 2018 at 14:35:34 UTC, James Japherson wrote:
> Took me about an hour to track this one down!
>
> A + (B == 0) ? 0 : C;
>
> D is evaluating it as
>
> (A + (B == 0)) ? 0 : C;
>
> The whole point of the parentheses was to associate.
> I usually explicitly associate precisely because of this!
>
> A + ((B == 0) ? 0 : C);
>
> In the ternary operator it should treat the parentheses directly
> to the left as the argument.
>
> Of course, I doubt this will get fixed but it should be
> noted so others don't step in the same poo.

In C++ the ternary operator is the second-lowest precedence operator, just above the comma. You can see a table of each operator and its precedence here, I refer to it every so often: https://en.cppreference.com/w/cpp/language/operator_precedence Learning that the ternary operator has such a low precedence is one of those things that all programmers eventually run into... welcome to the club :) It looks like D has a similar table here (https://wiki.dlang.org/Operator_precedence). However, it doesn't appear to have the ternary operator in there. On that note, D would take its precedence order from C/C++ unless there's a VERY good reason to change it. The operator precedence matches in D. Because in principle, C code should either be valid D code with the same semantics as it had in C, or it shouldn't compile as D code, changing operator precedence isn't something that D is going to do (though clearly, the ternary operator needs to be added to the table). It would be a disaster for porting code if we did. - Jonathan M Davis

I had a look at the table again, looks like the ternary operator is on there, just called the "conditional operator". And to clarify, D's operator precedence is close to C/C++ but doesn't match exactly.
This is likely a result of the grammar differences rather than an intentional one. For example, the "Conditional operator" in D actually has a higher priority than an assignment, but in C++ it's the same and is evaluated right-to-left. So this expression would be different in C++ and D: a ? b : c = d In D it would be: (a ? b : c) = d And in C++ it would be: a ? b : (c = d) Check it out:

    import core.stdc.stdio;
    void main()
    {
        int a = 2, b = 3;
        printf("expr = %d\n", 1 ? a : b = 4);
    }

prints "expr = 4". It evaluates the conditional (1 ? a : b) into the expression a, which is actually an lvalue, then assigns 4 to it because of the "= 4", and then passes the value 4 to printf. Here's the C++ version:

    #include <stdio.h>
    int main(int argc, char *argv[])
    {
        int a = 2, b = 3;
        printf("expr = %d\n", 1 ? a : b = 4);
    }

This one prints "expr = 2". It simply returns the value of `a` because the "b = 4" at the end is all part of the "else" branch of the ternary operator.
Re: D Logic bug
On Thursday, 11 October 2018 at 14:35:34 UTC, James Japherson wrote: Took me about an hour to track this one down!

    A + (B == 0) ? 0 : C;

D is evaluating it as

    (A + (B == 0)) ? 0 : C;

The whole point of the parentheses was to associate. I usually explicitly associate precisely because of this!

    A + ((B == 0) ? 0 : C);

In the ternary operator it should treat the parentheses directly to the left as the argument. Of course, I doubt this will get fixed but it should be noted so others don't step in the same poo.

In C++ the ternary operator is the second-lowest precedence operator, just above the comma. You can see a table of each operator and its precedence here, I refer to it every so often: https://en.cppreference.com/w/cpp/language/operator_precedence Learning that the ternary operator has such a low precedence is one of those things that all programmers eventually run into... welcome to the club :) It looks like D has a similar table here (https://wiki.dlang.org/Operator_precedence). However, it doesn't appear to have the ternary operator in there. On that note, D would take its precedence order from C/C++ unless there's a VERY good reason to change it.
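The parse described above is easy to confirm with a minimal repro (values chosen arbitrarily):

```d
void main()
{
    int A = 5, B = 0, C = 7;

    // Parsed as (A + (B == 0)) ? 0 : C -- the bool comparison promotes
    // to 1, A + 1 is nonzero (truthy), so the ternary yields 0.
    int surprise = A + (B == 0) ? 0 : C;
    assert(surprise == 0);

    // What was intended: B == 0 is true, so 0 is added to A.
    int intended = A + ((B == 0) ? 0 : C);
    assert(intended == 5);
}
```

The same program compiled as C gives the same answers, since `?:` binds looser than `+` in both languages; the D/C++ divergence discussed later in the thread only shows up when assignment enters the ternary.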
Re: Yet another binding generator (WIP)
On Monday, 1 October 2018 at 13:51:10 UTC, evilrat wrote: Hi, Early access program is now live! Limited offer! Preorder until 12.31.2017 BC and you will receive* unique pet - "Cute Space Hamster"! !! *(Limited quantity in stock) [...] Based on clang? I approve. I'll have to try it out sometime.
Re: phobo's std.file is completely broke!
On Saturday, 22 September 2018 at 21:04:04 UTC, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 20:46:27 UTC, Jonathan Marler wrote: Decided to play around with this for a bit. Made a "proof of concept" library: I suggest using GetFullPathNameW instead of GetCurrentDirectory + manual path appending / normalization. It's also what CoreFX seems to be doing. Yes that allows the library to avoid calling buildNormalizedPath. I've implemented and pushed this change. This change also exposed a weakness in the Appender interface and I've created a bug for it: https://issues.dlang.org/show_bug.cgi?id=19259 The problem is there's no way to extend the length of the data in an appender if you don't use the `put` functions. So when I call GetFullPathNameW function to populate the data (like the .NET CoreFX implementation does) I can't extend the length.
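The limitation can be seen with Appender alone (an illustration of the point above; the actual bug report concerns filling reserved capacity via a C API like GetFullPathNameW):

```d
import std.array : appender;

void main()
{
    auto buffer = appender!(wchar[]);
    buffer.reserve(260);   // capacity exists for a C API to write into...

    // ...but Appender exposes no way to say "N elements were just
    // written into your reserved space, extend the visible length by N".
    // The only way to grow .data is through put():
    buffer.put("workaround"w);
    assert(buffer.data.length == 10);
}
```

So code wrapping a fill-this-buffer C function has to either copy the result through `put` or abandon Appender for a raw array, which is the weakness the bug report describes.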
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:53:02 UTC, krzaq wrote: On Saturday, 22 September 2018 at 20:40:14 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: [...] That works for types but wouldn't work for keywords. Keywords have special meaning in the lexical stage and you can't extend/change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor but then you run into all sorts of other problems (i.e. #define safe @safe). If you come up with other ideas then feel free to share. No one likes the current state but no one has come up with a good solution yet. C++ added contextual keywords, like `override` and `final`. If this can be done in C++, surely D is easier to parse? https://wiki.dlang.org/Language_Designs_Explained#Why_don.27t_we_create_a_special_rule_in_the_syntax_to_handle_non-keyword_function_attributes_without_an_.27.40.27_character.3F
Re: phobo's std.file is completely broke!
On Thursday, 20 September 2018 at 19:49:01 UTC, Nick Sabalausky (Abscissa) wrote: On 09/19/2018 11:45 PM, Vladimir Panteleev wrote: On Thursday, 20 September 2018 at 03:23:36 UTC, Nick Sabalausky (Abscissa) wrote: (Not on a Win box at the moment.) I added the output of my test program to the gist: https://gist.github.com/CyberShadow/049cf06f4ec31b205dde4b0e3c12a986#file-output-txt assert( dir.toAbsolutePath.length > MAX_LENGTH-12 ); Actually it's crazier than that. The concatenation of the current directory plus the relative path must be < MAX_PATH (approx.). Meaning, if you are 50 directories deep, a relative path starting with 50 `..\` still won't allow you to access C:\file.txt. Ouch. Ok, yea, this is pretty solid evidence that ALL usage of non-`\\?\` paths on Windows needs to be killed dead, dead, dead. If it were decided (not that I'm in favor of it) that we should be protecting developers from files named " a ", "a." and "COM1", then that really needs to be done on our end on top of mandatory `\\?\`-based access. Anyone masochistic enough to really WANT to deal with MAX_PATH and such is free to access the Win32 APIs directly.

Decided to play around with this for a bit. Made a "proof of concept" library: https://github.com/marler8997/longfiles It's just a prototype/exploration on the topic. It allows you to include "stdx.longfiles" instead of "std.file", which will enable the conversion in every call, or you can import "stdx.longfiles : toLongPath" and use that on filenames passed to std.file. There's also a test you can run:

    rund test/test_with_longfiles.d (should work)
    rund test/test_without_longfiles.d (should fail)

NOTE: use "rund test/cleantests.d" to remove the files... I wasn't able to remove them via the Windows Explorer program.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: On Saturday, 22 September 2018 at 15:45:09 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential with D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent. Yeah, there's been a lot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh I've forgotten how long I've been using D. Interesting article. "int safe = 0; // This code would break if "safe" was added as a keyword" My question here: why didn't D use a similar solution to C's when dealing with these things? Look at the introduction of the bool datatype in C99. They created the compiler-reserved type "_Bool" and put "typedef _Bool bool" in "stdbool.h". The people wanting to use this new feature can include this header, and others can leave it be. No ugly "@" polluting the language on every line where it's used. Wouldn't a similar solution have been possible in D? That works for types but wouldn't work for keywords.
Keywords have special meaning in the lexical stage and you can't extend/change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor but then you run into all sorts of other problems (i.e. #define safe @safe). If you come up with other ideas then feel free to share. No one likes the current state but no one has come up with a good solution yet.
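To make the asymmetry concrete (a toy illustration): aliasing works because types are symbols in the symbol table, while `@safe` and friends live purely in the grammar:

```d
alias MyBool = bool;     // types are symbols, so they can be renamed...

// ...but attributes are not symbols, so there is no D equivalent of
//     alias safe = @safe;     // does not compile
// the way C could layer `typedef _Bool bool` on top of a reserved name.
@safe MyBool isEven(int x) { return (x & 1) == 0; }

void main()
{
    assert(isEven(4));
    assert(!isEven(3));
}
```

That's why the C99 `_Bool`/`stdbool.h` trick quoted above has no direct analogue for D's function attributes: the opt-in layer C used was a typedef, and there's nothing to typedef here.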
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential with D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent. Yeah, there's been a lot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh I've forgotten how long I've been using D.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential with D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 08:48:37 UTC, Nemanja Borić wrote: On Friday, 21 September 2018 at 21:07:57 UTC, Jonathan M Davis wrote: [...] Sociomantic "maintains" (well, much more in the past than today) D1 compiler and you can find latest releases here (Ubuntu): https://bintray.com/sociomantic-tsunami/dlang/dmd1 (direct link https://bintray.com/sociomantic-tsunami/dlang/dmd1/v1.082.1#files) or you can compile https://github.com/dlang/dmd/tree/dmd-1.x yourself and hope that the compiler bug is fixed - we've certainly fixed a lot of them in the past years (decade?). [...] I'd be interested to hear/read about the features that some developers don't like with D2. Maybe you can point me to places where this has been shared in the past and/or reply with your own perspective? Others feel free to chime in as well. I should make it clear this is just a request to gather data, I'm not looking to start a debate, just wanting to see what can be learned from this. Thanks.
Re: rund users welcome
On Thursday, 20 September 2018 at 23:19:17 UTC, aliak wrote: Somewhat along these lines, I just found and watched a video by a guy who's been working on a programming language called Jai (it has some awesome concepts), and one of the sections he went into about source files building themselves I thought was interesting and reminded me of rund, so thought I'd post here. Might inspire you to add some stuff to rund :) Video: https://www.youtube.com/watch?v=uZgbKrDEzAs Time in video on "getting rid of build tools": https://youtu.be/uZgbKrDEzAs?t=1849 Enjoy! Yeah, I'm very familiar with Jai and Jonathan Blow :) I'm excited for him to release it. I've emailed him about contributing but haven't gotten much response from him yet.
Re: phobo's std.file is completely broke!
On Wednesday, 19 September 2018 at 06:11:22 UTC, Vladimir Panteleev wrote: On Wednesday, 19 September 2018 at 06:05:38 UTC, Vladimir Panteleev wrote: [...] One more thing: There is the argument that the expected behavior of Phobos functions creating filesystem objects with long paths is to succeed and create those files. However, this results in filesystem objects that most software will fail to access (everyone needs to also use the long paths workaround). One point of view is that the expected behavior is that the functions succeed. Another point of view is that Phobos should not allow programs to create files and directories with invalid paths. Consider, e.g. that a user writes a program that creates a large tree of deeply nested filesystem objects. When they are done and wish to delete them, their file manager fails and displays an error. The user's conclusion? D sucks because it corrupts the filesystem and creates objects they can't operate with.

I was wanting to reply with something similar :) My 2 cents... whatever it's worth. Vladimir has expressed most if not all the points I would have brought up. Abscissa did bring up a good idea to help users support long filenames, but I agree with Vladimir that this should be "opt-in". Provide a function in phobos for it; plus, it lets them cache the result AND, infinitely better, the developer knows what's going on. What drives me mad is when you have library writers who try to "protect" you from the underlying system by translating everything you do into what they "think" you're trying to do. This will inevitably result in large, complex adaptation layers, as both the underlying system and the front-facing API change over time, with an unwieldy maintenance burden. An opt-in solution doesn't have this problem because you've kept each solution orthogonal rather than developing a translation layer that needs to be able to determine what the underlying system does or does not support.
This is a fundamental example of encapsulation: the filesystem library should be its own component, with the Windows filesystem workaround being an optional "add-on" that the filesystem library doesn't need to know about. This workaround could look like an extra function in phobos... or you could even write a module that wraps std.file and does the translation on a per-call basis.
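The opt-in helper could be as small as this (a hypothetical sketch operating on an already-absolute, already-normalized Windows path; real code would also need to handle UNC paths, which require the `\\?\UNC\` form):

```d
// Prefix an absolute Windows path with \\?\ to lift the MAX_PATH limit.
// Idempotent, so callers can apply it defensively before any std.file call.
string toLongPath(string absolutePath)
{
    if (absolutePath.length >= 4 && absolutePath[0 .. 4] == `\\?\`)
        return absolutePath;
    return `\\?\` ~ absolutePath;
}

void main()
{
    assert(toLongPath(`C:\some\deep\tree`) == `\\?\C:\some\deep\tree`);
    assert(toLongPath(`\\?\C:\already\prefixed`) == `\\?\C:\already\prefixed`);
}
```

Because the caller applies it explicitly, there is no hidden translation layer: paths that never need the prefix never get one, and the developer can see exactly where the workaround is in effect.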
Re: Why the hell do exceptions give error in the library rather than the user code?
On Friday, 14 September 2018 at 14:34:36 UTC, Josphe Brigmo wrote: std.file.FileException@C:\D\dmd2\windows\bin\..\..\src\phobos\std\file.d(3153): It is very annoying when the only error info I have is pointing to code in a library, which tells me absolutely nothing about where the error occurs in the user code (which is what matters). Surely the call stack can be unrolled to find code that exists in the user code? Or at least display several lines like a stack trace. Not getting a stack trace? What platform are you on and what's the command line you used to compile?
Re: filtered imports
On Thursday, 13 September 2018 at 17:54:03 UTC, Vladimir Panteleev wrote: On Thursday, 13 September 2018 at 16:23:21 UTC, Jonathan Marler wrote: The immediate example is to resolve symbol conflicts. I've run into this a few times:

    import std.stdio;
    import std.file;

    void main(string[] args)
    {
        auto text = readText(args[1]);
        write("The contents of the file is: ", text);
    }

However, it is solved with an alias:

    alias write = std.stdio.write;

(or using selective imports or fully-qualified identifiers for either module, of course). That's pretty slick. I guess I'm all out of use cases then. I did learn a new trick though :)
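For completeness, a static import sidesteps the same conflict without any alias, at the cost of fully qualifying every use (a runnable sketch of the existing language feature, not a new proposal):

```d
import std.stdio;
static import std.file;   // imports the module but brings no names into scope

void main()
{
    import std.path : buildPath;
    auto path = buildPath(std.file.tempDir, "conflict-demo.txt");

    std.file.write(path, "contents");               // file version: qualified
    write("read back: ", std.file.readText(path));  // bare write: std.stdio

    assert(std.file.readText(path) == "contents");
    std.file.remove(path);                          // clean up the temp file
}
```

The trade-off versus the alias trick is explicitness: every `std.file` call advertises where it comes from, which matters in exactly the chdir-logging scenario from the original post.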
Re: filtered imports
On Thursday, 13 September 2018 at 11:58:40 UTC, rikki cattermole wrote: On 13/09/2018 11:54 PM, Jonathan Marler wrote: "Selective imports" limit the symbols imported from a module by providing a list of all the symbols to include:

    import std.stdio : writeln, writefln;

The complement of this would be a "Filtered import", meaning, import all the symbols except the ones in the provided list. I imagine the syntax would look something like:

    import std.stdio ~ writeln, writefln;

To provide a use case for this, say you have a module that uses a fair amount of symbols from `std.file` but you want to make sure that all calls to `chdir` are logged. Using a filtered import would allow you to exclude `chdir` from being available globally, so you could create a wrapper that all code is forced to go through.

    import std.stdio;
    import std.file ~ chdir;

    void chdir(R)(R path)
    {
        writefln("chdir '%s'", path);
        from!"std.file".chdir(path);
    }

It's an interesting variation on D's current repertoire of import semantics. Possibly worth consideration as an addition to the language.

Grammar Changes:
- ImportBindings: (existing rule)
    Import : ImportBindList (existing rule)
    Import ~ ImportExcludeList (new rule)
- ImportExcludeList: (new rule)
    Identifier, ImportExcludeList (new rule)

    import std.stdio;
    import std.file;

    void chdir(R)(R path)
    {
        writeln("changing dir to ", path);
        std.file.chdir(path);
    }

    void main()
    {
        chdir("/tmp");
    }

Ah, I see. Poor example on my part. I didn't realize that local symbols ALWAYS take precedence over imported ones. I thought that the compiler would "fall back" to an imported symbol if the local one didn't work, but it appears that's not the case. Some more thoughts. Selective imports and filtered imports are complements to each other; you can still have all the functionality you want with just one of them, but one will usually fit better in each case.
Namely, if you want a small number of symbols from a module, selective imports are the way to go, but if you want to include all except a small number of symbols from a module, then filtered imports would be nice. So a use case would only work better with filtered imports if it was a scenario where we want to include all except a few symbols from an imported module. Given that, the next question is: in what cases do we want to prevent symbols from being imported from a module? The immediate example is to resolve symbol conflicts.

    // assume both modules provide the symbol baz
    import foo;
    import bar;

    baz();   // symbol conflict
    // change `import foo;` to `import foo ~ baz;`

And to reiterate, we could also resolve this conflict with selective imports (or even static or named imports in this case), so you would only consider this useful in cases where we are using a good amount of symbols from foo, so as to make it cumbersome to have to selectively import all the symbols or qualify them all in source, i.e.

    import foo : a, b, c, d, e, f, g, h, i, j, k, ...;
    // vs
    import foo ~ baz;

Anyway, those are my thoughts on it. Again, I think it's something to consider. I'm not sure if it has enough uses to justify an addition to the language (that bar is pretty high nowadays). So it's up to us D programmers to think about whether these semantics would work well in our own projects.
filtered imports
"Selective imports" limit the symbols imported from a module by providing a list of all the symbols to include: import std.stdio : writeln, writefln; The complement of this would be a "Filtered import", meaning, import all the symbols except the ones in the provided list. I imagine the syntax would look something like: import std.stdio ~ writeln, writefln; To provide a use case for this, say you have a module that uses a fair amount of symbols from `std.file` but you want to make sure that all calls to `chdir` are logged. Using a filtered import would allow you to exclude `chdir` from being available globally so you could create a wrapper that all code is forced to go through. import std.stdio; import std.file ~ chdir; void chdir(R)(R path) { writefln("chdir '%s'", path); from!"std.file".chdir(path); } It's an interesting variation on D's current repertoire of import semantics. Possibly worth consideration as an addition to the language. Grammar Changes: - ImportBindings:(existing rule) Import : ImportBindList(existing rule) Import ~ ImportExcludeList (new rule) ImportExcludeList: (new rule) Identifier, ImportExcludeList (new rule) -
Re: rund users welcome
On Wednesday, 12 September 2018 at 10:06:29 UTC, aliak wrote: On Wednesday, 12 September 2018 at 01:11:59 UTC, Jonathan Marler wrote: On Tuesday, 11 September 2018 at 19:55:33 UTC, Andre Pany wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast making my workflow much "snappier". It also introduces a new feature called "source directives" where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variable etc. Feel free to use it, test it, provide feedback, contribute. https://github.com/marler8997/rund It would be great if you could create a pull request for rdmd to add the missing -i enhancement. Kind regards Andre I did :) https://github.com/dlang/tools/pull/292 Made me sad to read that and related PRs ... sigh :( Yeah I loved working on D. But some of the people made it very difficult. So I've switched focus to other projects that use D rather than contributing to D itself. But anyway! rund seems awesome! Thanks for it :) some questions: Are these all the compiler directives that are supported (was not sure if they were an example or some of them or all of them from the readme):

#!/usr/bin/env rund
//!importPath
//!version
//!library
//!importFilenamePath
//!env =
//!noConfigFile
//!betterC

I love the concept of source files specifying the compiler flags they need to build. Yeah they have proven to be very useful. I have many tools written in D and this feature allows the main source file to be a "self-contained" program. The source itself is declaring the libraries it needs, the environment, etc. And the answer is Yes, all those options are supported along with a couple I recently added `//!debug` and `//!debugSymbols`.
I anticipate more will be added in the future (see https://github.com/marler8997/rund/blob/master/src/rund/directives.d). To show how powerful they are, I include an example in the repository that can actually build DMD on the fly (assuming the c++ libraries are built beforehand). https://github.com/marler8997/rund/blob/master/test/dmdwrapper.d

#!/usr/bin/env rund
//!env CC=c++
//!version MARS
//!importPath ../../dmd/src
//!importFilenamePath ../../dmd/res
//!importFilenamePath ../../dmd/generated/linux/release/64
//!library ../../dmd/generated/linux/release/64/newdelete.o
//!library ../../dmd/generated/linux/release/64/backend.a
//!library ../../dmd/generated/linux/release/64/lexer.a
/* This wrapper can be used to compile/run dmd (with some caveats).
 * You need to have the dmd repository cloned to "../../dmd" (relative to this file).
 * You need to have built the C libraries. You can build these libraries by building dmd.
 * Not sure why, but through trial and error I determined that this is the minimum set
 * of modules that I needed to import in order to successfully include all of the
 * symbols to compile/link dmd. */
import dmd.eh;
import dmd.dmsc;
import dmd.toobj;
import dmd.iasm;

Thanks for the interest. Feel free to post any requested features or issues on github.
Re: rund users welcome
On Tuesday, 11 September 2018 at 19:55:33 UTC, Andre Pany wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast making my workflow much "snappier". It also introduces a new feature called "source directives" where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variable etc. Feel free to use it, test it, provide feedback, contribute. https://github.com/marler8997/rund It would be great if you could create a pull request for rdmd to add the missing -i enhancement. Kind regards Andre I did :) https://github.com/dlang/tools/pull/292 I spent quite a bit of time on it but in the end the bulk of my time was spent on what seemed to be endless debate rather than writing good code. After I decided to try forking rdmd my life got much better :) Now I spend my time writing great code, making good tests and having nice tools. I'm much happier working than I am arguing with people. What took me months to do with rdmd took me less than a day with my rund. So...if you have requests for rund features or issues by all means let me know, but I don't control rdmd and I've learned that time contributing to it is mostly time wasted.
Re: rund users welcome
On Tuesday, 11 September 2018 at 17:36:09 UTC, Kagamin wrote: On Tuesday, 11 September 2018 at 15:20:51 UTC, Jonathan Marler wrote: The Posix/Windows 10 cases seem fine, but Windows <10 is not great. MSDN says symbolic links are supported since Vista. Yeah but I think you need Admin privileges to make them.
Re: rund users welcome
On Tuesday, 11 September 2018 at 08:53:46 UTC, Kagamin wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: https://github.com/marler8997/rund I have an idea how to push shebang to userland and make it crossplatform: if, say, `rund -install prog.d` would copy/link itself into current folder under name "prog" and when run would work with file args[0]~".d", this will work the same on all platforms without shebang. So your idea is that you could run `rund -install prog.d`, which would create some sort of file that allows you to run `./prog` (on POSIX) or `prog` (on WINDOWS). So something like this:

/path/prog.d

Posix:
    /path/prog -> /usr/bin/rund
Windows 10 (supports symbolic links):
    /path/prog.exe -> C:\Programs\rund.exe
Windows <10:
    /path/prog.exe (a copy of rund.exe)

This would allow you to run "/path/prog", which would invoke rund, and like you said we could take "argv[0]" and assume that's the main source file. The Posix/Windows 10 cases seem fine, but Windows <10 is not great. In that case it has to keep an entire copy of rund around (currently 1.8M). I think we can do better. Instead, `rund -install prog.d` could generate a little "wrapper program" that forwards any calls to rund. You could make this wrapper program with a small D program, or with this BATCH script:

--- /path/prog.bat
@rund %~dp0prog.d %*

You get the same result. When you run "\path\prog" it will invoke rund with the given args for prog.d. Thoughts?
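To make the idea concrete, here is a sketch of what `rund -install` could emit on POSIX (the `make_wrapper` helper and file layout are hypothetical, mirroring the `%~dp0prog.d %*` forwarding of the batch version):

```shell
#!/bin/sh
# Hypothetical sketch: generate a wrapper script next to a .d file
# that forwards all its arguments to rund.
make_wrapper() {
    src=$1          # e.g. /path/prog.d
    out=${src%.d}   # wrapper goes to /path/prog
    printf '#!/bin/sh\nexec rund "%s" "$@"\n' "$src" > "$out"
    chmod +x "$out"
}
```

Running `/path/prog args...` then execs `rund /path/prog.d args...`, the same behavior as the batch wrapper.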
Re: rund users welcome
On Tuesday, 11 September 2018 at 01:02:30 UTC, Vladimir Panteleev wrote: On Sunday, 9 September 2018 at 04:32:32 UTC, Jonathan Marler wrote: - -od (e.g. for -od.) Hmmm, yeah it looks like rund is currently overriding this. I've attempted a fix but it's hard to cover all the different combinations of -of/-od/etc. I'll need to fill out the rest of the tests soon. Thanks. Looks like there's a problem on Posix systems: With -od., the binary file doesn't have the executable bit set. This fixes it: std.file.copy(from, to, std.file.PreserveAttributes.yes); I wasn't able to reproduce this issue on my ubuntu box. But, this might not be an issue anymore because I've implemented your next suggestion... Why not just get the compiler to create the file at the correct location directly, and avoid the I/O of copying the file? https://github.com/marler8997/rund/pull/3 (Remove extra rename/copy when user gives -of) Reviews welcome
Re: rund users welcome
On Sunday, 9 September 2018 at 09:55:19 UTC, Vladimir Panteleev wrote: On Sunday, 9 September 2018 at 04:32:32 UTC, Jonathan Marler wrote: - The .d extension is not implied, like for dmd/rdmd I haven't come up with any reasons to support this. Maybe you can enlighten me? "rund prog" is shorter and easier to type than "rund prog.d". Support main source with no extension: https://github.com/marler8997/rund/pull/1
Re: rund users welcome
On Sunday, 9 September 2018 at 03:33:49 UTC, Vladimir Panteleev wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast making my workflow much "snappier". It also introduces a new feature called "source directives" where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variable etc. Feel free to use it, test it, provide feedback, contribute. Thanks! I tried integrating it into my scripts as an rdmd replacement. Cool thanks for giving it a try. Currently, the following are missing: - -od (e.g. for -od.) Hmmm, yeah it looks like rund is currently overriding this. I've attempted a fix but it's hard to cover all the different combinations of -of/-od/etc. I'll need to fill out the rest of the tests soon. - --build-only should imply -od. Maybe...I actually have use cases where I want "--build-only" but want the executable to be built in the normal cache location. Build the program and cache it but don't run it yet. - No --main, though that can probably be substituted with -main Yeah, I don't see any reason to duplicate the flag already supported by dmd. Maybe there's a reason I'm not aware of. - The .d extension is not implied, like for dmd/rdmd I haven't come up with any reasons to support this. Maybe you can enlighten me? Also, --pass is weird. Why not use the standard-ish -- ? It is a bit weird. I've never had a reason to use this option myself. Is this the syntax you are thinking of? rund other.d main.d -- ... The problem with this is it's not "composable". Say you were running a D program that also used "--", then it's ambiguous whether the "--" belongs to rund or to the program being compiled. But maybe I'm missing something? 
If you have an idea that's less weird than "--pass=.d" then I'm all for it :) Was there a problem with the idea of forking rdmd? The above plus things like its -lib support would then not need to be reimplemented. I would actually consider this a "fork" of rdmd. But I rebuilt it from the ground up, re-integrating each feature one by one so I could ensure they were "cohesive". I also haven't integrated all features from rdmd yet. I will probably integrate more of them when I see the need. So far it's just been me using it, which is why I want to get people using it so we can flesh out the rest. The big difference between rdmd and rund is that rund is not compatible with compilers that don't support "-i". Since this is such a fundamental change, I thought a rename made sense, and "rund" fits the modern times where we have more D compilers than just dmd.
rund users welcome
I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast, making my workflow much "snappier". It also introduces a new feature called "source directives" where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variables, etc. Feel free to use it, test it, provide feedback, contribute. https://github.com/marler8997/rund
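As an illustration of source directives, a self-contained script might look like this (the import path and version identifier here are made up for the example):

```d
#!/usr/bin/env rund
//!importPath src
//!version EnableLogging
// The directives above tell rund to pass the equivalent of
// -Isrc and -version=EnableLogging to the compiler, so the
// script carries its own build settings with it.
import std.stdio;

void main()
{
    version (EnableLogging) writeln("logging enabled");
    writeln("hello from a self-contained rund script");
}
```

Running it is just `./script.d` (or `rund script.d`); no separate build configuration is needed.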
Re: -op can be quite strange
On Saturday, 1 September 2018 at 23:29:01 UTC, Nicholas Wilson wrote: On Saturday, 1 September 2018 at 14:48:55 UTC, Jonathan Marler wrote: Note that we would want this to be a new option so as not to break anyone depending on "-op" semantics. Maybe "-om" for "output path based on 'Module' name"? LDC has this already as -oq, FWIW. I knew those LDC folks were smart :) Has there been any attempt to add -oq to dmd?
-op can be quite strange
The -od (output directory) and -op (preserve source paths) options work great when you're compiling multiple modules in a single invocation. For example, say we have the following:

/foolib/src/foo/bar.d
/myapp/src/main.d

Current Directory: /myapp

```
dmd -I=../foolib/src -I=src -od=obj -op -c src/main.d ../foolib/src/foo/bar.d
```

This example shows a weird, unexpected result. Because I've specified "../foolib/src/foo/bar.d" on the command line, the compiler will put the object file here:

obj/../foolib/src/foo/bar.o

which means it will go here:

/myapp/foolib/src/foo/bar.o

This is very strange. Because it has a ".." in the name, it doesn't even get put in the "obj" folder specified by "-od". What makes it worse is that if you had used "-i" instead of giving the module on the command line:

```
dmd -I=../foolib/src -I=src -od=obj -op -c src/main.d -i
```

then it would be in:

obj/foo/bar.o

I thought about this for a bit and realized that maybe there's a better way. Instead of telling the compiler to append the relative path of the source file from the CWD to the output directory, like this:

`<cwd-relative-path>.d` -> `<od>/<cwd-relative-path>.o`
`../foolib/src/foo/bar.d` -> `obj/../foolib/src/foo/bar.o`

we could tell the compiler to use the path of the source file relative to the package root. Let's call it the "package path":

`<package-root>/<package-path>.d` -> `<od>/<package-path>.o`
`../foolib/src/foo/bar.d` -> `obj/foo/bar.o`

This would mean modules go to the same place whether they were explicitly given on the command line or were a "compiled import" via "-i". It also handles conflicts, because each fully-qualified module name (and by extension each "package path" / "module name" combo) must be unique in a compiler invocation. Note that we would want this to be a new option so as not to break anyone depending on "-op" semantics. Maybe "-om" for "output path based on 'Module' name"?
Re: Embrace the from template?
On Saturday, 25 August 2018 at 09:30:27 UTC, Jonathan M Davis wrote: On Saturday, August 25, 2018 2:02:51 AM MDT Jonathan Marler via Digitalmars-d wrote: [...] Honestly, I don't want to be doing _anything_ like from with _any_ syntax. It's not just a question of from itself being too long. It's the fact that you're having to use the import path all over the place. I don't want to be putting anything other than the actual symbol name in the function's signature. IMHO, the ideal is to be able to just put import blah; at the top and then just use whatever was in module blah without having to repeat it everywhere. On the whole, I find this whole trend of constantly having to list exactly which symbols you're importing / exactly where a symbol comes from instead of just being able to just slap an import at the top and use the symbols to be way, way too verbose and a general maintenance problem. Yes, it can make it easier to figure out where a symbol came from when reading the code, and sometimes, it can improve compilation speed, but it means having to add a ton of extra code in comparison to just importing the module once, and you have to maintain all of that, constantly tweaking import statements, because you've changed which symbols you've used. It's like a cancer except that it comes with just enough benefits that some folks keep pushing for it. from is not the entire problem, but IMHO, it's definitely the straw that breaks the camel's back. It's taking all of this specificity way too far. I don't want to have to write or read code that's constantly putting import information everywhere. Sadly, it makes C's #include mess start looking desirable in comparison. - Jonathan M Davis I can certainly understand this sentiment. I personally use both styles depending on the situation. Each has their pros and cons; it's verbosity vs specificity. At least with D I can define the `from` template in my own projects even if the core language doesn't add it.
I'm just of the opinion that it's useful enough to warrant additions to the core language in some form to make it easier to use. Thanks for chiming in.
Re: Embrace the from template?
On Saturday, 25 August 2018 at 04:25:56 UTC, Jonathan M Davis wrote: On Friday, August 24, 2018 7:03:37 PM MDT Jonathan Marler via Digitalmars-d wrote: > What uses does this actually have, I only see one example > from the article and it is an oversimplistic example that > effectively translates to either phobos being used or not > being used. All the extra bloat this template would add to > the already bloated if constraints is not welcome at all. > The potential small benefit this might add isn't worth the > unreadable mess it will turn code into. I can't help but laugh when you say "all the extra bloat this template would add..." :) Sorry, I don't mean to insult but that really gave me a laugh. I hate to be blunt, but its clear from your response that you failed to grok the original post, which makes anything else I say pointless. So I'm going to slowly back away from this one...step...step..step*stp**s*...* It actually does add more template instantiations - and therefore arguably more bloat. It's just that because it more tightly ties importing to the use of the symbol, it reduces how many symbols you import unnecessarily, which can therefore reduce the bloat. So, if the symbol is used everywhere anyway, then from just adds bloat, whereas if it really is used in a more restricted way, then it reduces compilation times. The reason that I personally hate from's guts is because of how verbose it is. I'd _much_ rather see lazy importing be added like Walter likes to bring up from time to time. It should get us the reduction in compile times without all of the verbosity. As such, I would hate to see from in a place like object.d (or honestly, anywhere in druntime or Phobos), because then it might be used in Phobos all over the place, and I simply don't want to have to deal with it. It's bad enough that we're using scoped and local imports all over the place. 
They do help with tying imports to what uses them (and in the case of templated code can actually result in imports only happening when they need to), but it's so painfully verbose. I'd much rather not see the situation get that much worse by from being considered best practice instead of just fixing the compiler so that it's more efficient at importing and thus avoiding all of that extra verbosity in the code. - Jonathan M Davis Would love to see lazy imports. I actually started implementing them earlier this year. Just to make sure we're on the same page, normal imports (like import foo.bar;) cannot be lazy (see my notes at https://github.com/marler8997/dlangfeatures#lazy-imports). There are 3 types of imports that can be lazy:

1. importing specific symbols: `import foo.bar : baz;`
2. static imports: `static import foo.bar;`
3. alias imports: `import bar = foo.bar;`

So assuming we're on the same page, you mentioned that the `from` template is too verbose. I can see this point. To measure this I consider the least verbose syntax for achieving the semantics of the `from` template. The semantics can be stated as "take symbol X from module Y". If we defined ':' as a special "from operator" then the following would be equivalent:

foo.bar:baz
from!"foo.bar".baz

Of course, reserving a special character for such an operator should require that the operation is common enough to warrant the reservation of a character. Less common operators piggyback on keywords or combinations of special characters. For example, you could make the syntax a bit more verbose by re-using the import keyword, i.e.

import(foo.bar).baz

but this example is only 1 character shorter than the `from` template. In the end I don't know if these semantics warrant a special operator. Maybe they warrant new syntax; however, the solution that requires the least amount of justification is adding a template to `object.d`.
The overhead will be virtually zero, and it only requires a few lines of code because it leverages existing D semantics. In the end, these semantics are a great addition to D that make lazy imports much easier to accommodate. I've had good success with `from` and think D would do well to implement these semantics in the core part of the language, whether with the template or with new syntax.
Re: Embrace the from template?
On Saturday, 25 August 2018 at 00:40:54 UTC, tide wrote: On Friday, 24 August 2018 at 06:41:35 UTC, Jonathan Marler wrote: Ever since I read https://dlang.org/blog/2017/02/13/a-new-import-idiom/ I've very much enjoyed using the new `from` template. It unlocks new idioms in D and has been so useful that I thought it might be a good addition to the core language. I've found that having it in a different place in each project and always having to remember to import it makes it much less ubiquitous for me. One idea is we could add this template to `object.d`. This would allow it to be used from any module that uses druntime without having to import it first. The template itself is also very friendly to "bloat" because it only has a single input parameter which is just a string, extremely easy to memoize. Also, unless it is instantiated, adding it to object.d will have virtually no overhead (just a few AST nodes which would be dwarfed by what's already in object.d). It would also be very easy to add, a single PR with 4 lines of code to druntime and we're done. Of course, if we don't want to encourage use of the `from` template then this is not what we'd want. Does anyone have any data/experience with from? All I know is my own usage so feel free to chime in with yours. What uses does this actually have, I only see one example from the article and it is an oversimplistic example that effectively translates to either phobos being used or not being used. All the extra bloat this template would add to the already bloated if constraints is not welcome at all. The potential small benefit this might add isn't worth the unreadable mess it will turn code into. I can't help but laugh when you say "all the extra bloat this template would add..." :) Sorry, I don't mean to insult but that really gave me a laugh. I hate to be blunt, but it's clear from your response that you failed to grok the original post, which makes anything else I say pointless.
So I'm going to slowly back away from this one...step...step..step*stp**s*...*
Re: Embrace the from template?
On Friday, 24 August 2018 at 20:36:06 UTC, Seb wrote: On Friday, 24 August 2018 at 20:04:22 UTC, Jonathan Marler wrote: I'd gladly fix it but alas, my pull requests are ignored :( They aren't! It's just that sometimes the review queue is pretty full. I have told you before that your contributions are very welcome (like they are from everyone else) and if there's anything blocking your productivity you can always ping me on Slack. Don't tempt me to start contributing again :) I had months where I got almost no attention on a dozen or so PRs...I love to contribute but I'd have to be mad to continue throwing dozens of hours of work away. If the problem gets solved I'll willingly start working again, but I don't think anything's changed.
Re: Embrace the from template?
On Friday, 24 August 2018 at 18:34:20 UTC, Daniel N wrote: FYI Andrei has an open pull request: https://github.com/dlang/druntime/pull/1756 I don't use dub myself. Oh well I guess great minds think alike :) Too bad it's been stalled due to a bug. I'd gladly fix it but alas, my pull requests are ignored :(
Re: Embrace the from template?
On Friday, 24 August 2018 at 10:58:29 UTC, aliak wrote: On Friday, 24 August 2018 at 06:41:35 UTC, Jonathan Marler wrote: Ever since I read https://dlang.org/blog/2017/02/13/a-new-import-idiom/ I've very much enjoyed using the new `from` template. It unlocks new idioms in D and has been so useful that I thought it might be a good addition to the core language. I've found that having it in a different place in each project and always having to remember to import it makes it much less ubiquitous for me. One idea is we could add this template to `object.d`. This would allow it to be used from any module that uses druntime without having to import it first. The template itself is also very friendly to "bloat" because it only has a single input parameter which is just a string, extremely easy to memoize. Also, unless it is instantiated, adding it to object.d will have virtually no overhead (just a few AST nodes which would be dwarfed by what's already in object.d). It would also be very easy to add, a single PR with 4 lines of code to druntime and we're done. Of course, if we don't want to encourage use of the `from` template then this is not what we'd want. Does anyone have any data/experience with from? All I know is my own usage so feel free to chime in with yours. One of the first things I do after a dub init is create a file called internal.d with the from template in it. My only gripe about this template is its "autocompletion-able-ness" in IDEs and if that can be handled. I would not want it globally imported though, "from" is quite popular as an identifier and D doesn't let you use keywords as identifiers. Cheers, - Ali Good to know others are using it. Of course making it a core part of the language would mean that IDEs would be free to add support for it, whether it was added to `object.d` or with some other means such as a new syntax, i.e. (import std.stdio).writefln(...) I didn't quite understand your last point.
Adding `from` to `object.d` wouldn't make it a keyword, it would still be an identifier. And you could still use it as an identifier in your own code.
Re: Embrace the from template?
On Friday, 24 August 2018 at 12:06:15 UTC, Anton Fediushin wrote: On Friday, 24 August 2018 at 06:41:35 UTC, Jonathan Marler wrote: [...] There's no reason to mess with `object.d` or add it to phobos. Just make a dub package and use it! I just published it on the dub registry in public domain (I hope Daniel Nielsen is ok with that. After all, it's just 3 lines of code) Package page: https://from.dub.pm/ Have a good day and don't overthink simple things, Anton It's good to see there are people who are still optimistic about dub. I remember that same feeling so many years ago :)
Embrace the from template?
Ever since I read https://dlang.org/blog/2017/02/13/a-new-import-idiom/ I've very much enjoyed using the new `from` template. It unlocks new idioms in D and has been so useful that I thought it might be a good addition to the core language. I've found that having it in a different place in each project and always having to remember to import it makes it much less ubiquitous for me. One idea is we could add this template to `object.d`. This would allow it to be used from any module that uses druntime without having to import it first. The template itself is also very friendly to "bloat" because it only has a single input parameter which is just a string, extremely easy to memoize. Also, unless it is instantiated, adding it to object.d will have virtually no overhead (just a few AST nodes which would be dwarfed by what's already in object.d). It would also be very easy to add, a single PR with 4 lines of code to druntime and we're done. Of course, if we don't want to encourage use of the `from` template then this is not what we'd want. Does anyone have any data/experience with from? All I know is my own usage so feel free to chime in with yours.
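For reference, the template from that blog post is tiny, which is why adding it to `object.d` would be so cheap:

```d
// The "from" idiom: the import happens lazily, at the point of use,
// and only if the template is actually instantiated.
template from(string moduleName)
{
    mixin("import from = " ~ moduleName ~ ";");
}

// Example use: std.stdio is only imported because log is instantiated.
void log(T)(T msg)
{
    from!"std.stdio".writeln(msg);
}
```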
Re: reduxed - Redux for D
On Thursday, 23 August 2018 at 19:48:19 UTC, Robert burner Schadek wrote: It is still rough around the corners and https://issues.dlang.org/show_bug.cgi?id=19084 gives me somewhat of a hard time, but give it a try and scream at me because it is not nogc. I've posted a comment on issue 19084 but I'll post the response here as well. This is not supposed to compile. I've actually run into this before, but using T.stringof to mixin code for a type name is not supported. When I've asked about adding support for this, no one was interested. I tried to find my forum posts on this but the post is so old I couldn't find it in search. What happens is you mixin the string "Foo", but that type doesn't mean anything in the scope of Bar. The actual type name is something like "__unittest__20.Foo", however, even if you got the fully qualified type name it won't work because the type is private and can't be accessed outside of the unittest using the symbol name. You have to access the type by "alias". The `bitfields` function in phobos suffers from this same problem and I created a PR in phobos to add bitfields2 to workaround this issue by using a "mixin template" instead of a normal "mixin": https://github.com/dlang/phobos/pull/5490 [QUOTE FROM THE PR] The main advantage of bitfields2 is that it is able to reference the field types by alias, whereas the current implementation converts each field type alias to a string and then mixes in the type name. I know the example you've provided is contrived so I'm not sure how to help you with your exact situation. Maybe I can help you find a solution with a bit more detail?
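To illustrate the difference being described, here is a minimal sketch with made-up names (not code from reduxed or phobos):

```d
// String mixin: refers to the type by the *name* T.stringof produces,
// which must resolve in the scope where the string is mixed in. For a
// private or unittest-local type used from another module, it won't.
string makeField(T)() { return T.stringof ~ " field;"; }

// Mixin template: receives the type by *alias*, so no name lookup is
// involved and visibility of the name doesn't matter.
mixin template MakeField(T) { T field; }

unittest
{
    static struct Foo { int x; }

    // Works here only because "Foo" happens to be visible in this scope;
    // mixed in from another module (as bitfields does) it would fail.
    struct Bar1 { mixin(makeField!Foo()); }

    // Works regardless of where Foo's name is visible.
    struct Bar2 { mixin MakeField!Foo; }
}
```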
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend to anyone in this community. Worth a close read - no skimming, no tl;rd etc. The question applicable to us - where are the best leverage points in making the D language more successful. Andrei I don't have much influence on the first 4 types of "leverage points" in D, but I have a suggestion for a new "rule of the system" (5th most important type of leverage point). Require reviews from any user before merging their pull requests. There's a number of ways you could implement the requirement, maybe every PR that a user creates needs to have at least 1 review of another PR associated with it. You could require more or less reviews depending on the size of the PR queue. You could also look at developer's "review to pull request" ratio. Just to get an idea, I wrote a script to calculate some of this data (github.com/marler8997/githubstats). 
Here's the data for dmd, sorted by review to pr ratio:

user                        review/pr    reviews  open_prs  merged_prs  closed_prs
ZombineDev                  25           250      0         7           3
stefan-koch-sociomantic     19.5         39       0         2           0
andralex                    17.9583      431      0         17          7
jacob-carlborg              16.18        809      2         41          7
kubasz                      12           12       1         0           0
dkgroot                     9            18       0         2           0
trikko                      8            8        1         0           0
timotheecour                6.5          65       0         3           7
iain-buclaw-sociomantic     6            6        0         1           0
majiang                     6            6        0         1           0
JinShil                     5.858895706  955      6         129         28
TurkeyMan                   5.529411765  94       1         15          1
thewilsonator               5.1          46       2         6           1
Geod24                      4.5          117      6         15          5
marler8997                  4.155172414  241      0         30          28
dmdw64                      4            4        0         1           0
leitimmel                   4            4        0         1           0
schveiguy                   3.8          23       0         5           1
atilaneves                  3.727272727  41       1         7           3
DmitryOlshansky             3.16667      19       1         2           3
tgehr                       3.1          56       1         16          1
wilzbach                    2.946428571  990      25        250         61
FeepingCreature             2.9          29       3         6           1
mathias-lang-sociomantic    2.846153846  111      0         30          9
belm0                       2.7          8        0         2           1
n8sh                        2.5          5        1         1           0
dgileadi                    2.5          10       1         2           1
UplinkCoder                 2.186813187  199      3         52          36
rikkimax                    2            6        1         0           2
EyalIO                      2            2        0         1           0
MoritzMaxeiner              2            2        1         0           0
rtbo                        2            2        0         1           0
belka-ew                    2            2        0         1           0
RazvanN7                    1.89333      284      8         116         26
ntrel                       1.846153846  48       2         21          3
nemanja-boric-sociomantic   1.8          9        0         3           2
MetaLang                    1.8          9        0         3           2
joakim-noah                 1.571428571  11       1         4           2
Darredevil                  1.5          3        0         1           1
skl131313                   1.5          9        1         3           2
JackStouffer                1.5          3        0         2           0
arBmind                     1.5          3        0         1           1
CyberShadow                 1.474576271  87       0         53          6
BBasile                     1.36         34       0         11          14
Burgos                      1.3          4        0         3           0
ibuclaw                     1.32967033   484      15        293         56
klickverbot                 1.327586207  77       0         ...
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 22:20:57 UTC, Walter Bright wrote: On 8/18/2018 9:59 AM, Jonathan Marler wrote: In your mind, what defines the D language's level of success? It no longer needs me or Andrei. Yes, I think this state would be a good indicator of success. This requires attracting developers with strong technical ability and good leadership to manage it. I think it requires cultivating a community that rewards good work and encourages contribution. When I was heavily contributing, it was because of people like Seb and Mike who would review pull requests and tried to keep the flow of work moving. But many times it was quashed by other developers, and eventually it didn't make sense for me to contribute anymore when dozens of hours of good work can't get through. If this doesn't change, D won't be able to keep good developers. I posed this question to Andrei because I really want to know the answer. The success of a language can mean very different things to each person. The most important aspect of D for me is its continuing progress towards stability/robustness. Though I would say that the language could be considered the best in the world with its balance of safety, performance and practicality, it is very far from perfect. In my mind, D becomes more successful as the language itself becomes better. And if D doesn't continue to improve, it will be supplanted by new languages that continue to be created at an astounding rate. Others may consider D's popularity to be the most important indicator of D's success. I think everyone would agree this is important; however, I would much rather use a good language on my own than a mediocre language with everyone else. I will also say that in order to read that article and apply it to "D's success", you most certainly need to know exactly what that means to identify what D's leverage points are. It was an interesting article.
Many of the concepts were familiar and it was interesting to see them all laid out in a simple model and prioritized. Thanks for the link Andrei.
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend it to anyone in this community. Worth a close read - no skimming, no tl;dr etc. The question applicable to us - where are the best leverage points in making the D language more successful? Andrei In your mind, what defines the D language's level of success?
Re: ./install.sh dmd broken?
On Friday, 10 August 2018 at 13:19:21 UTC, Jean-Louis Leroy wrote: jll@euclid:~/dlang$ ./install.sh dmd Downloading and unpacking http://downloads.dlang.org/releases/2.x/2.081.1/dmd.2.081.1.linux.tar.xz # 100.0% Invalid signature http://downloads.dlang.org/releases/2.x/2.081.1/dmd.2.081.1.linux.tar.xz.sig Same problem with 'install update'. But not when installing ldc or gdc. Seb and I found the issue (TLDR: fix here: https://github.com/dlang/installer/pull/338) The problem is downloading "install.sh" directly to "~/dlang/install.sh". This causes the install script to think that it has already downloaded the "d-keyring.gpg" so it never downloads it, causing the "invalid signature" error. The fix is to download "install.sh" if the d-keyring is not downloaded, even if install.sh already is. What made it more confusing is that if it doesn't download d-keyring.gpg, then it will create a default one...making you think it was downloaded but it actually wasn't...odd. In your case you'll want to manually remove "~/dlang/d-keyring.gpg" and use the new "install.sh" script after PR 338 is merged/deployed.
Re: Implement a file system for use in embedded systems
On Sunday, 5 August 2018 at 05:53:20 UTC, Mike Franklin wrote: On Saturday, 4 August 2018 at 18:24:28 UTC, B Krishnan Iyer wrote: I had some questions regarding the project and also needed some pointers to get started with the project. Also, it would be great if more description of the project statement could be provided. The idea is to create something that can replace FatFs (http://www.elm-chan.org/fsw/ff/00index_e.html) for use in embedded systems just like you mentioned (ARM Cortex-M microcontrollers). I don't think you necessarily need to be proficient in embedded systems to write such a project, as the file system could be persisted to anything from an SD card, RAM, or a simple file. But understanding the limitations of ARM Cortex-M embedded systems will give one perspective that will aid in making the design trade-offs. I can think of a few things that would probably help anyone attempting to tackle such a project: 1. Get familiar with FatFs by porting it to an existing HAL and successfully read/write from/to a storage medium like an SD card. 2. Buy a book on the FAT file system. A quick search yielded this (https://www.amazon.com/ExFAT-FAT-File-Systems-Internals/dp/1539928977/ref=sr_1_fkmr2_3?s=books=UTF8=1533447939=1-3-fkmr2=flat+file+system), but I have no idea if it's any good. 3. Study the FatFs source code. 4. Start coding and progressively work through your ideas, incrementally learning from your successes and failures. 5. Begin asking questions. Mike A bit of history...the FAT filesystem was a Microsoft proprietary filesystem until UEFI came along. Microsoft suggested UEFI use FAT as one of its filesystem formats, but UEFI required that Microsoft create/release a specification for it in order for them to accept it. Surprisingly, Microsoft agreed. I believe this document is the result of that: https://staff.washington.edu/dittrich/misc/fatgen103.pdf
Re: string literal string and immutable(char)* overload ambiguity
On Saturday, 4 August 2018 at 12:16:00 UTC, Steven Schveighoffer wrote: On 8/3/18 10:26 AM, Jonathan Marler wrote: On Tuesday, 31 July 2018 at 15:07:04 UTC, Steven Schveighoffer wrote: On 7/31/18 10:13 AM, Nicholas Wilson wrote: [...] Absolutely, I didn't realize this was an ambiguity. It should be the same as foo(long) vs. foo(int) with foo(1). +1 for this Although there is a solution for this today, i.e. foo(cast(string)"baz"); foo("baz".ptr); I see no reason why `string` shouldn't have precedence over `immutable(char)*`, especially since you can always explicitly choose the pointer variant with `.ptr`. Let me rewrite your solution for int vs. long: foo(cast(int)1); foo(1L); You like that too? ;) "baz" is a string, that's its primary type. That it can be used for a const(char)* is nice for legacy C code, but shouldn't get in the way of its natural type. -Steve Yeah I definitely agree. Though there is a solution, it's ugly. Making string take precedence over char pointers seems to have only positives from what I can tell.
Re: skinny delegates
On Friday, 3 August 2018 at 17:34:47 UTC, kinke wrote: On Friday, 3 August 2018 at 16:46:53 UTC, Jonathan Marler wrote: [...] [...] You're right, thanks for elaborating.
Re: skinny delegates
On Friday, 3 August 2018 at 16:19:04 UTC, kinke wrote: On Friday, 3 August 2018 at 14:46:59 UTC, Jonathan Marler wrote: After thinking about it more I suppose it wouldn't be that complicated to implement. For delegate literals, you already need to gather a list of all the data you need to put on the heap, and if it can all fit inside a pointer, then you can just put it there instead. Nope, immutability (and no escaping) are additional requirements, as each delegate copy has its own context then, as opposed to a single shared GC closure. Maybe you could provide an example or 2 to demonstrate why these would be requirements...we may have 2 different ideas on how this would be implemented. In the end, I think that most if not all use cases would be better off using the library solution if they want this optimization. I disagree. I'm not sure which part you disagree with. I was saying that with a library solution, you get the ability to "opt in"/"opt out" of the optimization; do you think it should always be on and the developer shouldn't need to or care to opt out of it? Also, what about the developers that want to guarantee that the optimization is occurring? If they just stick a @nogc around it then how would they determine which requirement they are violating for the optimization to occur?
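kinke's objection can be sketched in code. With today's heap-allocated closures every copy of a delegate shares one context; if the captured state were instead packed into each delegate's context pointer, copies would diverge. A minimal illustration (hypothetical example, not from the thread):

```d
// With a real GC closure, both delegates share the single heap-allocated `n`.
// If `n` were instead packed into each delegate's context pointer, `inc` would
// bump its own private copy and `get` would still see 0 -- which is why the
// captured state must be immutable (and must not escape) for the optimization.
int makeAndUse()
{
    int n = 0;
    auto inc = () => ++n; // captures n (heap-allocated closure today)
    auto get = () => n;   // captures the same n
    inc();
    return get();         // 1 with a shared context; would be 0 with per-copy context
}

void main()
{
    assert(makeAndUse() == 1);
}
```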
Re: skinny delegates
On Thursday, 2 August 2018 at 17:21:47 UTC, Steven Schveighoffer wrote: On 8/2/18 12:21 PM, Jonathan Marler wrote: On Monday, 30 July 2018 at 21:02:56 UTC, Steven Schveighoffer wrote: Would it be a valid optimization to have D remove the requirement for allocation when it can determine that the entire data structure of the item in question is an rvalue, and would fit into the data pointer part of the delegate? Here's what I'm looking at: auto foo(int x) { return { return x + 10; }; } In this case, D allocates a pointer on the heap to hold "x", and then return a delegate which uses the pointer to read x, and then return that plus 10. However, we could store x itself in the storage of the pointer of the delegate. This removes an indirection, and also saves the heap allocation. Think of it like "automatic functors". Does it make sense? Would it be feasible for the language to do this? The type system already casts the delegate pointer to a void *, so it can't make any assumptions, but this is a slight break of the type system. The two requirements I can think of are: 1. The data in question must fit into a word 2. It must be guaranteed that the data is not going to be mutated (either via the function or any other function). Maybe it's best to require the state to be const/immutable. I've had several cases where I was tempted to not use delegates because of the allocation cost, and simply return a specialized struct, but it's so annoying to do this compared to making a delegate. Plus something like this would be seamless with normal delegates as well (in case you do need a real delegate). I think the number of cases where you could optimize this is very small. And the complexity of getting the compiler to analyze cases to determine when this is possible would be very large. It's not that complicated, you just have to analyze how much data is needed from the context inside the delegate. 
First iteration, all of the data has to be immutable, so it should be relatively straightforward. After thinking about it more I suppose it wouldn't be that complicated to implement. For delegate literals, you already need to gather a list of all the data you need to put on the heap, and if it can all fit inside a pointer, then you can just put it there instead. On that note, I think if a developer wants to be sure that this optimization occurs in their code, they should explicitly use a library solution like the one in Ocean or the one I gave. If a developer relies on the optimization, then when it doesn't work you won't get any information as to why it couldn't perform the optimization (i.e. some data was mutable or not an rvalue). Depending on the code, this failure will either be ignored or break some dependency on the optimization like @nogc. With a library solution, it explicitly copies the data into the pointer so you'll get an explicit error message if it doesn't fit or has some other issue. Something else to consider is that this would cause some discrepancy with the @nogc attribute based on the platform's pointer width. By making this an optimization that you don't have to "opt in" to, the developer may be unaware that their code is depending on an optimization that won't work on other platforms. Their code could become platform-dependent without them knowing. However, I suppose the counter-argument is that code that uses delegate literals with @nogc would probably be aware of this, but it's still something to consider. In the end, I think that most if not all use cases would be better off using the library solution if they want this optimization. This allows the developer to "opt in" or "opt out" of the optimization and enables the compiler to provide error messages when they opt in with incompatible usage.
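The platform discrepancy described above comes down to whether the captured state fits in `(void*).sizeof` bytes, which differs between 32-bit and 64-bit targets. A compile-time sketch (illustrative only; `fitsInContextPointer` is an invented name):

```d
// Illustrative sketch: whether a captured value would fit in the delegate's
// context pointer depends on the target's pointer width, so an implicit
// "pack it into the pointer" optimization (and any @nogc code relying on it)
// would silently become platform-dependent.
enum fitsInContextPointer(T) = T.sizeof <= (void*).sizeof;

static assert(fitsInContextPointer!int);  // true on both 32-bit and 64-bit
// fitsInContextPointer!long is true on 64-bit targets but false on 32-bit,
// so a delegate capturing a long would heap-allocate on some platforms only.

void main() {}
```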
Re: string literal string and immutable(char)* overload ambiguity
On Tuesday, 31 July 2018 at 15:07:04 UTC, Steven Schveighoffer wrote: On 7/31/18 10:13 AM, Nicholas Wilson wrote: is there any particular reason why void foo(string a) {} void foo(immutable(char)* b) {} void bar() { foo("baz"); } result in Error: foo called with argument types (string) matches both: foo(string a) and: foo(immutable(char)* b) especially given the pointer overload is almost always void foo(immutable(char)* b) { foo(b[0 .. strlen(b)]); } and if I really want to call the pointer variant I can with foo("baz".ptr); but I can't call the string overload with a literal without creating a temp. I think we should make string literals prefer string arguments. Absolutely, I didn't realize this was an ambiguity. It should be the same as foo(long) vs. foo(int) with foo(1). -Steve +1 for this Although there is a solution for this today, i.e. foo(cast(string)"baz"); foo("baz".ptr); I see no reason why `string` shouldn't have precedence over `immutable(char)*`, especially since you can always explicitly choose the pointer variant with `.ptr`.
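For reference, the two workarounds from the reply above as a complete program; the cast selects the string overload and `.ptr` selects the pointer overload:

```d
import core.stdc.string : strlen;

void foo(string a) {}
void foo(immutable(char)* b) { foo(b[0 .. strlen(b)]); }

void main()
{
    // foo("baz");          // error: matches both overloads
    foo(cast(string)"baz"); // explicitly picks the string overload
    foo("baz".ptr);         // explicitly picks the pointer overload
}
```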
Re: skinny delegates
On Thursday, 2 August 2018 at 16:21:58 UTC, Jonathan Marler wrote: On Monday, 30 July 2018 at 21:02:56 UTC, Steven Schveighoffer wrote: Would it be a valid optimization to have D remove the requirement for allocation when it can determine that the entire data structure of the item in question is an rvalue, and would fit into the data pointer part of the delegate? Here's what I'm looking at: auto foo(int x) { return { return x + 10; }; } In this case, D allocates a pointer on the heap to hold "x", and then return a delegate which uses the pointer to read x, and then return that plus 10. However, we could store x itself in the storage of the pointer of the delegate. This removes an indirection, and also saves the heap allocation. Think of it like "automatic functors". Does it make sense? Would it be feasible for the language to do this? The type system already casts the delegate pointer to a void *, so it can't make any assumptions, but this is a slight break of the type system. The two requirements I can think of are: 1. The data in question must fit into a word 2. It must be guaranteed that the data is not going to be mutated (either via the function or any other function). Maybe it's best to require the state to be const/immutable. I've had several cases where I was tempted to not use delegates because of the allocation cost, and simply return a specialized struct, but it's so annoying to do this compared to making a delegate. Plus something like this would be seamless with normal delegates as well (in case you do need a real delegate). -Steve I think the number of cases where you could optimize this is very small. And the complexity of getting the compiler to analyze cases to determine when this is possible would be very large. In addition, a developer can already do this explicitly if they want, i.e. 
auto foo(int x) { static struct DummyStructToMakeFunctionWithDelegateAbi { int passthru() const { return cast(int)&this; } } DummyStructToMakeFunctionWithDelegateAbi dummyStruct; auto dg = &dummyStruct.passthru; dg.ptr = cast(void*)(x + 10); // treat the void* pointer as an int value return dg; } void main(string[] args) { auto dg = foo(32); import std.stdio; writefln("dg() = %s", dg()); } It's definitely ugly but it works. This will print the number "42" as expected. This would be a case where DIP1011 extern(delegate) would come in handy :) i.e. extern(delegate) int passthru(void* ptr) { return cast(int)ptr; } int delegate() foo2(int x) { return &(cast(void*)(x + 10)).passthru; } Actually, I'll do you one better. Here's a potential library function for it. I'm calling these types of delegates "value pointer delegates". // Assume this is in a library somewhere auto makeValuePtrDelegate(string valueName, string funcBody, T)(T value) { static struct DummyStruct { auto method() const { mixin("auto " ~ valueName ~ " = cast(T)&this;"); mixin (funcBody); } } DummyStruct dummy; auto dg = &dummy.method; dg.ptr = cast(void*)value; return dg; } auto foo(int x) { return makeValuePtrDelegate!("val", q{ return val + 10; })(x); } void main(string[] args) { auto dg = foo(32); import std.stdio; writefln("dg() = %s", dg()); }
Re: skinny delegates
On Monday, 30 July 2018 at 21:02:56 UTC, Steven Schveighoffer wrote: Would it be a valid optimization to have D remove the requirement for allocation when it can determine that the entire data structure of the item in question is an rvalue, and would fit into the data pointer part of the delegate? Here's what I'm looking at: auto foo(int x) { return { return x + 10; }; } In this case, D allocates a pointer on the heap to hold "x", and then return a delegate which uses the pointer to read x, and then return that plus 10. However, we could store x itself in the storage of the pointer of the delegate. This removes an indirection, and also saves the heap allocation. Think of it like "automatic functors". Does it make sense? Would it be feasible for the language to do this? The type system already casts the delegate pointer to a void *, so it can't make any assumptions, but this is a slight break of the type system. The two requirements I can think of are: 1. The data in question must fit into a word 2. It must be guaranteed that the data is not going to be mutated (either via the function or any other function). Maybe it's best to require the state to be const/immutable. I've had several cases where I was tempted to not use delegates because of the allocation cost, and simply return a specialized struct, but it's so annoying to do this compared to making a delegate. Plus something like this would be seamless with normal delegates as well (in case you do need a real delegate). -Steve I think the number of cases where you could optimize this is very small. And the complexity of getting the compiler to analyze cases to determine when this is possible would be very large. In addition, a developer can already do this explicitly if they want, i.e. 
auto foo(int x) { static struct DummyStructToMakeFunctionWithDelegateAbi { int passthru() const { return cast(int)&this; } } DummyStructToMakeFunctionWithDelegateAbi dummyStruct; auto dg = &dummyStruct.passthru; dg.ptr = cast(void*)(x + 10); // treat the void* pointer as an int value return dg; } void main(string[] args) { auto dg = foo(32); import std.stdio; writefln("dg() = %s", dg()); } It's definitely ugly but it works. This will print the number "42" as expected. This would be a case where DIP1011 extern(delegate) would come in handy :) i.e. extern(delegate) int passthru(void* ptr) { return cast(int)ptr; } int delegate() foo2(int x) { return &(cast(void*)(x + 10)).passthru; }
Re: Way to override/overload D’s runtime assertions to use custom handlers?
On Wednesday, 25 July 2018 at 15:24:50 UTC, Alexander Nicholi wrote: Hello, A project I’m helping develop mixes D code along with C and C++, and in the latter two languages we have custom macros that print things the way we need to, along with app-specific cleanup tasks before halting the program. Because it uses multiple languages, two of which have spotty or nonexistent exception support, and because we only depend on the D runtime sans libphobos, we have opted to avoid the use of exceptions in our codebase. Assertions still give us the ability to do contract programming to some extent, while C++ and D provide static assertions at compile-time to supplement. With runtime assertions, C and C++ handle things amicably, but D’s `assert` builtin seems to fall back to C99’s assert.h handlers and there doesn’t seem to be a way around this. Is there a way to change this to use our own handlers with the D runtime? How does this change without the runtime, e.g. via `-betterC` code? If not, is this something that can be implemented in the language as a feature request? Our use case is a bit odd but still a possibility when using D as a systems-level language like this. Thanks, Alex As far as I know, D's "assert" falls back to __assert. I have a pet/educational project (github.com/marler8997/maros) where I don't use the D runtime or the C runtime and my definition looks like this: extern (C) void __assert(bool cond, const(char)[] msg) { // TODO: would be nice to get a stack trace if (!cond) { version (linux) { import stdm.linux.file : stderr, write; import stdm.linux.process : exit; } else static assert(0, __FUNCTION__ ~ " not implemented on this platform"); write(stderr, "assert failed: "); write(stderr, msg); write(stderr, "\n"); exit(1); } } I just put this in my "object.d".
Re: DIP 1011--extern(delegate)--Preliminary Review Round 1
On Saturday, 21 July 2018 at 17:54:17 UTC, soolaïman wrote: I know this one is a year old already but the DIP is still in formal review. [...] This doesn't work because the ABI of a normal function is NOT THE SAME as the ABI of a delegate. Solving this ABI problem is the only reason the DIP exists.
Re: Completely Remove C Runtime with DMD for win32
On Sunday, 15 July 2018 at 20:29:29 UTC, tcb wrote: I've been trying to compile a trivial program (`extern(C) int main() { return 0; }`) without linking parts of the C runtime, with no success. I compile with dmd -debuglib= -defaultlib= -v -L=/INFORMATION -betterC but optlink shows a lot of things from snn.lib being pulled in and the resultant executable is about 12kb. I also replaced object.d with an empty module. If I pass /nodefaultlib to the linker I get warning 23: no stack and __acrtused_con is undefined so the linker fails with no start address. Is it possible to completely remove the C runtime on windows, and if so how? Sorry for the sloppily formatted post. I recently created an issue that included an example that allows you to compile a Hello World program on linux x64 without the C standard library, druntime or phobos. https://issues.dlang.org/show_bug.cgi?id=19078 You can modify it to run on windows as well. I'm not sure if the _start assembly implementation would be the same on windows. Try it out and let me know how it works.
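The bug report linked above is the authoritative example; a runtime-free entry point has roughly the shape below (an assumption-laden sketch for Linux x86_64 using DMD's inline assembler, not a drop-in solution — the entry symbol and exit mechanism differ on Windows):

```d
extern(C) int main();

// Hypothetical minimal _start replacing the C runtime's entry point on
// linux x86_64: call main, then invoke the exit syscall with its return value.
extern(C) void _start()
{
    int status = main();
    asm
    {
        mov EDI, status; // first syscall argument: exit status
        mov EAX, 60;     // SYS_exit on linux x86_64
        syscall;
    }
}
```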
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 09:25:08 UTC, Mike Franklin wrote: Please allow me to bring your attention to an interesting presentation about choosing a modern programming language for writing operating systems: https://www.youtube.com/watch?v=cDFSrVhnZKo It's a good talk and probably worth your time if you're interested in bare-metal systems programming. The presenter mentions D briefly in the beginning when he discussed how he made his choice of programming language. He shows the following (probably inaccurate) matrix.

Lang    | Mem Safety | Min Runtime | Strong Type Syst. | Performance
C       |            | x           |                   | x
C++     |            | x           |                   | x
C#      | x          |             | x                 |
D       | x          |             | x                 | x
Go      | x          |             | x                 | x
Rust    | x          | x           | x                 | x
Java    | x          |             | x                 | x
Haskell | x          |             | x                 |
Cycle   | x          | x           | x                 | x

It appears the deal-breaker for D was the lack of "minimal runtime". Of course D has -betterC and, with 2.079, a way to use some features of D without the runtime, but he also goes on to discuss the importance of memory safety in his application of the language. I hope we'll see something competitive with DIP25, DIP1000, and the `scope` storage class, namely *memory safety without a runtime*. I'm currently waiting for 2.081 to reach LDC and GDC, and then I have a few ideas I'd like to begin working on myself, but I never have a shortage of ideas, just a shortage of time and energy. Enjoy! Mike This just isn't true. I've written a fair amount of a linux distro in D without druntime/phobos or even the standard C library. https://github.com/marler8997/maros
Re: What's happening with the `in` storage class
On Saturday, 9 June 2018 at 07:40:08 UTC, Mike Franklin wrote: On Saturday, 9 June 2018 at 07:26:02 UTC, Walter Bright wrote: Your time is valuable, too, and while I'm not going to tell you what to work on, I'd prefer something more important. If that's how you feel then I clearly don't share your values. To me, cleaning up the unimplemented, half-implemented, and poorly implemented features of D is very important. I would like to be able to use D professionally, and you make it difficult to advocate for D with a straight face when you're willing to tolerate this kind of sloppiness in the language definition and implementation. All I'm asking for is a thoughtful decision, and I don't appreciate the implication that I'm wasting my time. Mike There seem to be a lot of fundamental problems with D that Walter and Andrei say are "unimportant". Some of the things I've seen neglected are `shared`, `in`, broken import invariance, tooling, community, compiler brittleness. The results of the dlang survey seem to have been ignored. Features like "tuples", "named parameters", "interpolated strings" were highest on the list but I don't see any call to action. In fact I see quite a lot of resistance. It seems that Walter and Andrei are forcing D into an "end of life" stage where language improvements and cleanup are consistently rejected, even ones with a high benefit/cost ratio. I hope I'm wrong though. On the "technical scale" D is a top contender, but if it stagnates it will be supplanted by new languages, maybe even ones that already exist.
Re: Tiny D suitable for embedded JIT
On Thursday, 24 May 2018 at 20:22:15 UTC, Dibyendu Majumdar wrote: On Wednesday, 23 May 2018 at 18:49:05 UTC, Dibyendu Majumdar wrote: The ultimate goal is to have JIT library that is small, has fast compilation, and generates reasonable code (i.e. some form of global register allocation). The options I am looking at are a) start from scratch, b) hack LLVM, or c) hack DMD. I have been looking at DMD code (mainly the backend stuff) for this ... I think it will be too difficult for me to try to modify it :-( Regards Dibyendu Sad to hear. Was interested to see if this was feasible. I don't have much experience with the backend but if you're still up for the task, take a look at `dmd/glue.d`. I don't know how much of the glue layer this includes but it would be a good start. DMD does have a common "glue layer" shared by DMD, LDC and GDC, so you'd basically need to find the API to build this glue layer and that's what you would use. https://github.com/dlang/dmd/blob/master/src/dmd/glue.d
Re: Tiny D suitable for embedded JIT
On Wednesday, 23 May 2018 at 18:49:05 UTC, Dibyendu Majumdar wrote: Now that D has a better C option I was wondering if it is possible to create a small subset of D that can be used as embedded JIT library. I would like to trim the language to a small subset of D/C - only primitive types and pointers - and remove everything else. The idea is to have a high level assembly language that is suitable for use as JIT backend by other projects. I wanted to know if this is a feasible project - using DMD as the starting point. Should I even think about trying to do this? The ultimate goal is to have JIT library that is small, has fast compilation, and generates reasonable code (i.e. some form of global register allocation). The options I am looking at are a) start from scratch, b) hack LLVM, or c) hack DMD. Regards Dibyendu I've recently been looking into how QEMU works and it uses something called TCG (Tiny Code Generator). QEMU works by taking code from another platform/cpu and translates it to TCG, which then gets "jitted" to the instructions for the host. From what I understand, TCG is fairly small. I think it aims to be simple rather than highly optimized, unlike LLVM which allows more complexity for the sake of performance. TCG: https://git.qemu.org/?p=qemu.git;a=blob_plain;f=tcg/README;hb=HEAD
Re: Support alias this in module scope?
On Wednesday, 23 May 2018 at 03:44:36 UTC, Manu wrote: If we can use `alias this` to mirror an entire C++ namespace into the location we want (ie, the scope immediately outside the C++ namespace!!), then one sanitary line would make the problem quite more tolerable: extern(C++, FuckOff) { void bah(); void humbug(); } alias this FuckOff; // <-- symbols are now aliased where they should have been all along (count the seconds until the reply that says to use reflection to scan the scope, and use a mixin to... blah blah) Had the same idea about a year ago :) https://forum.dlang.org/post/bmawbtdaqdngoiqfo...@forum.dlang.org
Re: CI buildbots
On Monday, 21 May 2018 at 22:21:34 UTC, Manu wrote: On 21 May 2018 at 09:22, Jonathan Marler via Digitalmars-d <digitalmars-d@puremagic.com> wrote: On Monday, 21 May 2018 at 04:46:15 UTC, Manu wrote: This CI situation with the DMD/druntime repos is not okay. It takes ages... **hours** sometimes, for CI to complete. It's all this 'auto-tester' one, which seems to lock up on the last few tests. This makes DMD a rather unenjoyable project to contribute to. I had a sudden burst of inspiration, but it's very rapidly wearing off. It might take hours for CI to complete, but it can take weeks or months for someone to review your code...so the CI time doesn't really seem to matter for myself. That is unless you're trying to use the CI in your modify/test development cycle. However, that should be solvable by testing locally in most cases. I use CI to test the platforms I don't build locally. That's natural for cross-platform development. Ah I see. Well for me it seemed worth the effort to set up at least a windows and linux machine which covers most testing. The worst was when we were having intermittent seg faults on 32-bit OSX...I eventually borrowed my girlfriend's MacBook to reproduce and debug that one...ugh that was a pain. In any case I'd recommend having one posix and one windows platform. There's always VMs if you need em :) But to address your original concern, I'm not sure that decreasing the testing required to integrate changes is necessarily a net positive. If you can decrease test time while maintaining the same coverage then by all means, let's do that! But I think in general you have to find a balance between the two. Since you are using CI for your modify/build/test development cycle, one idea would be to define some sort of interface that the CIs could use to limit what they are testing for a particular PR. You could have it search through the comments for something like "limit test to dmd runnable" or something like that.
Re: CI buildbots
On Monday, 21 May 2018 at 04:46:15 UTC, Manu wrote: This CI situation with the DMD/druntime repos is not okay. It takes ages... **hours** sometimes, for CI to complete. It's all this 'auto-tester' one, which seems to lock up on the last few tests. This makes DMD a rather unenjoyable project to contribute to. I had a sudden burst of inspiration, but it's very rapidly wearing off. It might take hours for CI to complete, but it can take weeks or months for someone to review your code...so the CI time doesn't really seem to matter for myself. That is unless you're trying to use the CI in your modify/test development cycle. However, that should be solvable by testing locally in most cases.
Re: DIP 1011 library alternative
On Tuesday, 15 May 2018 at 21:25:05 UTC, Andrei Alexandrescu wrote: Hello, I was reviewing again DIP 1011 and investigated a library solution. That led to https://gist.github.com/run-dlang/18845c9df3d73e45c945feaccfebfcdc It builds on the opening examples in: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1011.md I'm displeased at two aspects of the implementation: * Perfect forwarding is tedious to implement: note that makeDelegate hardcodes void as the return type and (int, float) as parameter types. Ideally it should accept any parameters that the alias passes. * Pass-by-alias and overloading interact poorly. Does anyone know how to refer by alias to one specific overload? Thanks, Andrei Now that I've had a chance to look at the example you've shared, the only comment I'd make is that DIP1011 aims to allow applications to create "delegate-compatible" functions that don't require generating a wrapper function to forward a delegate call to the function at runtime. In this example, 2 functions have been defined that take a class and a struct pointer as the first argument; however, these functions are not compatible with the delegate ABI, meaning you will have to generate a runtime "conversion" function to forward a delegate call to the function. extern(delegate) allows the application to generate the function using the delegate ABI in the first place so no "conversion" is necessary. To achieve this with a library solution, you need to modify the function definition itself, not just create a wrapper around it.
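The ABI point can be sketched as follows: a struct method already follows the delegate ABI (the context arrives as `this`), so it can be slotted directly into a delegate's `funcptr`, while an ordinary function taking the context as a normal first parameter cannot (hypothetical names):

```d
struct Counter
{
    int n;
    int next() { return ++n; } // delegate-ABI compatible: context is `this`
}

// Ordinary function ABI: the context is just a regular first parameter, so
// this cannot serve as a delegate's funcptr without a runtime wrapper --
// the wrapper that DIP 1011's extern(delegate) is meant to eliminate.
int nextFree(Counter* c) { return ++c.n; }

void main()
{
    Counter c;
    int delegate() dg;
    dg.ptr = &c;                // context pointer
    dg.funcptr = &Counter.next; // same ABI: no conversion function needed
    assert(dg() == 1);
    assert(nextFree(&c) == 2);  // works, but only as a normal call
}
```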
Re: DIP 1011 library alternative
On Tuesday, 15 May 2018 at 21:25:05 UTC, Andrei Alexandrescu wrote: Hello, I was reviewing again DIP 1011 and investigated a library solution. That led to https://gist.github.com/run-dlang/18845c9df3d73e45c945feaccfebfcdc It builds on the opening examples in: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1011.md I'm displeased at two aspects of the implementation: * Perfect forwarding is tedious to implement: note that makeDelegate hardcodes void as the return type and (int, float) as parameter types. Ideally it should accept any parameters that the alias passes. * Pass-by-alias and overloading interact poorly. Does anyone know how to refer by alias to one specific overload? Thanks, Andrei I thought this was dead...good to see you're looking into it :)
Re: Bugzilla & PR sprint on the first weekend of every month
On Tuesday, 8 May 2018 at 18:48:15 UTC, Seb wrote: What do you guys think about having a dedicated "Bugzilla & PR sprint" on the first weekend of every month? We could organize this a bit by posting the currently "hot" bugs a few days ahead and also make sure that there are plenty of "bootcamp" bugs, s.t. even newcomers can start to get involved. Even if you aren't too interested in this effort, being a bit more active on Slack/IRC or responsive on GitHub on this weekend would help, s.t. newcomers interested in squashing D bugs get over the initial hurdles pretty quickly and we can finally resolve the long-stalled PRs and find a consensus on them. What do you think? Is this something worth trying? Maybe the DLF could also step in and provide small goodies for all bug hunters of the weekend (e.g. a "D bug hunter" shirt if you got more than X PRs merged). Yeah I think it's worth a try. I would probably emphasize PR reviews but I'd never discourage people from fixing bugs as well. I'd participate so long as I'm available.
Re: D Library Breakage
On Friday, 13 April 2018 at 23:36:46 UTC, H. S. Teoh wrote: On Fri, Apr 13, 2018 at 11:00:20PM +0000, Jonathan Marler via Digitalmars-d wrote: [...] @JonathanDavis, the original post goes through an example where you won't get a compile-time or link-time error...it results in a very bad runtime stack stomp. To put things in perspective, this is essentially the same problem in C/C++ as compiling your program with one version of header files, but linking against a different version of the shared library. Well, this isn't restricted to C/C++, but affects basically anything that uses the OS's dynamic linker. It's essentially an ABI change that wasn't properly reflected in the API, thus causing problems at runtime. The whole thing about sonames and shared library versioning is essentially to solve this problem. But even then, it's not a complete solution (e.g., I can still compile against the wrong version of a header file, and get a struct definition of the wrong size vs. the one expected by the linked shared library). Basically, it boils down to, "don't make your build system do this". [...] The point is, this is a solvable problem. All we need to do is save the compiler configuration (i.e. versions/special flags that affect compilation) used when compiling a library and use that information when we are interpreting the module's source as a "pre-compiled import". Interpreting a module with a different version than it was compiled with can produce any error you can possibly come up with, and it could manifest at any time (compile time, link time, or runtime). The problem with this "solution" is that it breaks valid use cases. For example, a shared library can have multiple versions, e.g., one compiled with debugging symbols, another with optimization flags, but as long as the ABI remains unchanged, it *should* be valid to link the program against these different versions of the library.
One example where you really don't want to insist on identical compiler flags is if you have a plugin system where plugins are 3rd party supplied, compiled against a specific ABI. It seems impractically heavy-handed to ask all your 3rd party plugin writers to recompile their plugins just because you changed a compile flag in your application that, ultimately, doesn't even change the ABI anyway. You've missed part of the solution. The solution doesn't require you to compile with the same flags; what it does is take the flags that were used to compile the modules you're linking against and interpret their "import source code" the same way it was interpreted when they were compiled. If the precompiled module was compiled with the debug version, the `version(debug)` blocks will be enabled in the imported module source code whether or not you are compiling your application with debug enabled. This guarantees that the source is an accurate representation of the precompiled library you'll be linking to later. By the way...you're right that C/C++ suffer from the same problems with header files :)
Re: D Library Breakage
On Friday, 13 April 2018 at 22:29:25 UTC, Steven Schveighoffer wrote: On 4/13/18 5:57 PM, Jonathan M Davis wrote: On Friday, April 13, 2018 16:15:21 Steven Schveighoffer via Digitalmars-d I don't know if the compiler can determine if a version statement affects the layout, I suppose it could, but it would have to compile both with and without the version to see. It's probably an intractable problem. Also, does it really matter? If there's a mismatch, then you'll get a linker error, so it's not like you're going to get subtle bugs out of the deal or anything like that. I don't see why detection is an issue here. Well, for layout changes, there is no linker error. It's just one version of the code thinks the layout is one way, and another version thinks it's another way. This is definitely bad, and causes memory corruption errors. But I don't think it's a problem we can "solve" exactly. -Steve @JonathanDavis, the original post goes through an example where you won't get a compile-time or link-time error...it results in a very bad runtime stack stomp. @Steven You're just addressing the example I gave and not thinking of all the other ways version (or other compiler flags) could change things. For example, you could have version code inside a template that changes mangling because it is no longer inferred to be pure/safe/whatever. The point is, this is a solvable problem. All we need to do is save the compiler configuration (i.e. versions/special flags that affect compilation) used when compiling a library and use that information when we are interpreting the module's source as a "pre-compiled import". Interpreting a module with a different version than it was compiled with can produce any error you can possibly come up with, and it could manifest at any time (compile time, link time, or runtime).
The gist is that if we don't solve this, then it's up to applications to use the same versions that were used to compile all of their pre-compiled D libraries...and if they don't, all bets are off. They could run into any error at any time, and the compiler/type system can't help them.
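As a sketch of the mangling hazard mentioned above (the Tracing version identifier is invented for illustration): a version block inside a template can change whether an instantiation is inferred pure/@safe/nothrow, and therefore its mangled name, without any visible signature change:

```d
// Without -version=Tracing, square!int is inferred pure/@safe/nothrow.
// With -version=Tracing, the writeln call removes those inferred
// attributes, so the instantiation mangles differently and a binary
// built one way won't link cleanly against a library built the other.
auto square(T)(T x)
{
    version (Tracing)
    {
        import std.stdio : writeln;
        writeln("square(", x, ")");
    }
    return x * x;
}

void main()
{
    assert(square(4) == 16);
}
```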
Re: D Library Breakage
On Friday, 13 April 2018 at 10:47:18 UTC, Rene Zwanenburg wrote: On Friday, 13 April 2018 at 05:31:25 UTC, Jesse Phillips wrote: Well if DIP1000 isn't on by default I don't think Phobos should be compiled with it. I think that the version issue is not unique to D and would be good to address, but I don't see the compiler reading the object file to determine how it should built the import files. More importantly, it can be perfectly valid to link object files compiled with different options. Things like parts of the program that shouldn't be optimized, or have their logging calls added/removed. One thought I had was that we could define a special symbol that basically encodes the configuration that was used to compile a module. So when you import a precompiled module, you can insert a dependency on that special symbol based on the configuration you interpreted the imported module with. So if a module is compiled and imported with different configurations, you'll get a linker error. If we take the previous example with main and foo:

compile foo with -version=FatFoo
    foo.o contains the special symbol (maybe "__module_config_foo_version_FatFoo")
compile main without -version=FatFoo
    main.o contains a dependency on symbol "__module_config_foo" (note: no "version_FatFoo")
link foo.o main.o
    Error: symbol "__module_config_foo" needed by main.o is not defined

The linker error isn't great, but it prevents potential runtime errors. Also, if you use the compiler instead of the linker you'll get a nicer error message:

dmd foo.o main.o
Error: main.o expected module foo to be compiled without -version=FatFoo but foo.o was compiled with it
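A rough source-level sketch of what that injected symbol could look like (the symbol names follow the post's hypothetical "__module_config_foo" naming; a real implementation would live inside the compiler, not in user code):

```d
// In the object file for module foo, the compiler would define exactly
// one of these, encoding the configuration foo was compiled with:
version (FatFoo)
    extern (C) __gshared int __module_config_foo_version_FatFoo;
else
    extern (C) __gshared int __module_config_foo;

// Any module importing foo would emit an *undefined* reference to
// whichever symbol matches the configuration it interpreted foo with,
// turning a configuration mismatch into a link-time error instead of
// silent memory corruption.
```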
D Library Breakage
Currently phobos is going through a transition where DIP 1000 is being enabled. This presents a unique problem because when DIP 1000 is enabled, it will cause certain functions to be mangled differently. This means that if a module in phobos was compiled with DIP 1000 enabled and you don't enable it when compiling your application, you can end up with cryptic linker errors that are difficult to root cause. This problem has exposed what I think to be a deeper problem with the way D handles precompiled modules. Namely: Precompiled D libraries do not expose the "important compiler configuration" that was used to compile them. "Important compiler configuration" meaning what versions were used, whether unittest was enabled, basically anything that an application using it needs to know to properly interpret the module the same way it was interpreted when it was compiled. For example, say you have a library foo with a single module:

module foo;

struct Foo
{
    int x;
    version (FatFoo)
    {
        private int[100] y;
    }
    void init() @safe nothrow
    {
        x = 0;
        version (FatFoo)
        {
            y[] = 0;
        }
    }
}

Now let's compile it:

dmd -c foo.d

Now let's use it:

import foo;
int main() @safe nothrow
{
    Foo foo;
    foo.init();
    return 0;
}

dmd main.d foo.o (foo.obj for windows)
./main (main.exe for windows)

It runs and we're good to go. Now let's do something sinister...

dmd -version=FatFoo -c foo.d

Now compile and run our program again, but don't include the `-version=FatFoo`:

dmd main.d foo.o (foo.obj for windows)
./main (main.exe for windows)

We've just stomped all over our stack and now it's just a pancake of zeros! Your results will be unpredictable, but on my windows box main throws an exception even though the function is marked @safe and nothrow :) The root of the problem in this situation comes back to the problem that DIP 1000 is currently having. The "important compiler configuration" used to compile our library is unknown.
If we could take our precompiled library foo.o and see what compiler configuration was used to compile it, we wouldn't have this problem, because we would have seen that it was compiled with the "FatFoo" version. Then we would have interpreted the foo module with the "FatFoo" version as well and avoided this terrible "pancake stack" :) So what do people think? Is this something we should address? We could explore ways of including information in our pre-compiled libraries that the compiler could use to know how a library was compiled and therefore how to interpret its modules the same way they were interpreted when the library was compiled. All object formats that I know of support sections that tools can use to inject information like this. We could also just tell people that they must make sure to use the same compiler configuration for their own applications that was used when their libraries were compiled. If they don't ensure this, then all safety guarantees are gone...not ideal, but less work for D, right? :)
Re: D compiles fast, right? Right??
On Wednesday, 4 April 2018 at 20:29:19 UTC, Stefan Koch wrote: On Wednesday, 4 April 2018 at 20:04:04 UTC, Jack Stouffer wrote: On Wednesday, 4 April 2018 at 01:08:48 UTC, Andrei Alexandrescu wrote: Exactly, which is why I'm insisting this - and not compiler benchmarking, let alone idle chattaroo in the forums - is where we need to hit. What we have here, ladies and gentlemen, is a high-impact preapproved item of great general interest. Shall we start the auction? Are you aware of this PR? https://github.com/dlang/dmd/pull/8124 This is but a layer of paint over the real problem. Unneeded Dependencies. Programming should not be a game of jenga. Piling things on top of other things rarely works out. Having unittests included from precompiled libraries is a problem in and of itself. This is causing many templates to be instantiated that will never be used by the application, killing compilation time. There are also other problems...here's a link to my description of "Lazy Imports" that I think would help other issues we currently have. https://github.com/marler8997/dlangfeatures#lazy-imports
Re: D compiles fast, right? Right??
On Tuesday, 3 April 2018 at 23:29:34 UTC, Atila Neves wrote: On Tuesday, 3 April 2018 at 19:07:54 UTC, Jonathan Marler wrote: On Tuesday, 3 April 2018 at 10:24:15 UTC, Atila Neves wrote: On Monday, 2 April 2018 at 18:52:14 UTC, Jonathan Marler wrote: You still missed my point. I got your point. I'm disagreeing. I don't know why you keep saying you are "disagreeing" with me. It looks like we agree. Your example shows that dlang's std.path compiles slower than GO's path library. I agree. All I was saying is that this example doesn't show that GO code compiles faster than D. That was my one and only point.
Re: D compiles fast, right? Right??
On Tuesday, 3 April 2018 at 10:24:15 UTC, Atila Neves wrote: On Monday, 2 April 2018 at 18:52:14 UTC, Jonathan Marler wrote: My point was that GO's path library is very different from dlang's std.path library. It has an order of magnitude less code so the point was that you're comparing a very small library with much less functionality to a very large one. I understood your point. I'm not sure you understood mine, which is: I don't care. I want to get work done, and I don't want to wait for the computer. You still missed my point. Your post was saying that "D does not compile as fast as GO". But the libraries you're comparing are vastly different. If your post had said "dlang's std.path compiles much slower than GO's" then you would be fine. However, your post was misleading in saying that Go compiles faster than D in general, and I was pointing out that the use case you provided doesn't apply in the general case; it only applies to a library with the same name/type of functionality. I didn't say anything about whether it was advantageous; the point was that it's more code, so you should take that into account when you evaluate performance. Your post was misleading because it assumed both libraries were comparable when in reality they appear to be very different. I disagree. They're very similar in the sense that, if I want to build a path and want to rely on the standard library, it takes vastly different amounts of time to compile my code in one situation vs the other. I refer to my previous answer. Your example shows that dlang's std.path compiles slower than GO's, but that doesn't say anything about the compile performance for both languages in the general case. To make such a claim you should compare the exact same "functionality" implemented in both languages. My point was that if you want to compare "compile-time" performance, you should not include the unittests in D's time since Go does not include unittests.
"Go does not include unittests"? Under some interpretations I guess that could be viewed as correct, but in practical terms I can write Go tests without an external library (https://golang.org/pkg/testing/). Whether it's a language keyword or not is irrelevant. What _is_ relevant (to me) is that I can write Go code that manipulates paths and test it with everything building in less time than it takes to render a frame in a videogame, whereas in D... You're totally misunderstanding me. I was just saying that if you want to compare the compile speed of D vs GO (IN THE GENERAL CASE), you should not include the unittests in D's performance because you weren't including them in your GO example. This is a problem that should be fixed but still doesn't change the fact that not taking this into consideration would be an unfair comparison. No, no, no, a thousand times more no. We can't make a marketing point of D compiling so fast it might as well be a scripting language when it's not even true. I get a better edit-compile-test cycle in *C++*, which is embarrassing. Atila You totally misunderstood what I was saying once again. I agree with what you said here, but it has nothing to do with what I was saying. If your point is that it takes too long to access std.path's functionality then I completely agree. What I am arguing against is the claim that your example is evidence that GO compiles faster than D in general. Your example compares 2 different libraries in 2 different languages; it is not about the languages themselves.
Re: D compiles fast, right? Right??
On Monday, 2 April 2018 at 12:33:37 UTC, Atila Neves wrote: On Friday, 30 March 2018 at 16:41:42 UTC, Jonathan Marler wrote: Seems like you're comparing apples to oranges. No, I'm comparing one type of apple to another with regards to weight in my shopping bag before I've even taken a bite. My point was that GO's path library is very different from dlang's std.path library. It has an order of magnitude less code so the point was that you're comparing a very small library with much less functionality to a very large one. It's over an order of magnitude more code More lines of code is a liability, not an advantage. I didn't say anything about whether it was advantageous; the point was that it's more code, so you should take that into account when you evaluate performance. Your post was misleading because it assumed both libraries were comparable when in reality they appear to be very different. and it's only fair to compare the "non-unittest" version of std.path with Go, since Go does not include unittests. Absolutely not. There is *0* compile-time penalty on Go programmers when they test their programs, whereas my compile times go up by a factor of 3 on a one-line program. And that's >3 multiplied by "already slow to begin with". My point was that if you want to compare "compile-time" performance, you should not include the unittests in D's time since Go does not include unittests. In practice, D should not be compiling the standard library unittests by default. This is a problem that should be fixed but still doesn't change the fact that not taking this into consideration would be an unfair comparison.
Re: D compiles fast, right? Right??
On Saturday, 31 March 2018 at 21:37:13 UTC, Jonathan M Davis wrote: On Saturday, March 31, 2018 08:28:31 Jonathan Marler via Digitalmars-d wrote: On Friday, 30 March 2018 at 20:17:39 UTC, Andrei Alexandrescu wrote: > On 3/30/18 12:12 PM, Atila Neves wrote: >> Fast code fast, they said. It'll be fun, they said. Here's >> a D >> >> file: >> import std.path; >> >> Yep, that's all there is to it. Let's compile it on my >> laptop: >> /tmp % time dmd -c foo.d >> dmd -c foo.d 0.12s user 0.02s system 98% cpu 0.139 >> total > > Could be faster. > >> That... doesn't seem too fast to me. But wait, there's more: >> /tmp % time dmd -c -unittest foo.d >> dmd -c -unittest foo.d 0.46s user 0.06s system 99% cpu >> >> 0.525 total > > Not fast. We need to make -unittest only affect the built > module. Even though it breaks certain uses of > __traits(getUnittests). No two ways about it. Who can work > on that? > > Andrei If you approve of the -unittest= approach then timotheecour has already offered to implement this. It's pattern matching would work the same as -i and would also use the "implied standard exclusions" that -i uses, namely, -unittest=-std -unittest=-etc -unittest=-core This would mean that by default, just passing "-unittest" would exclude druntime/phobos just like "-i" by itself also does. And every time you used another library, you'd have the same problem and have to add -unittest=- whatever for each and every one of them, or you would have to use -unittest= with everything from your application or library rather than using -unittest. I really don't see how that scales well, and it's way too manual and too easy to screw up. It might be a decent idea for a workaround, but it's not a great solution. IMHO, this is really something that should be handled by the compiler. It simply shouldn't be compiling in the unittest blocks for modules that you're not compiling directly. 
And if that's done right, then this whole problem goes away without having to make sure that every project you work on is configured correctly to avoid pulling in the unit tests from everything that it depends on. And maybe figuring out what to do about __traits(getUnittests) complicates things, but it seems like the fact that we're even having this problem is due to a flaw in the design of D's unit tests and that that should be fixed, not worked around. - Jonathan M Davis Let's make this conversation a bit more concrete; I'm not sure we are discussing the exact same thing. The proposed solution is to have -unittest mean "compile unittests for all 'compiled modules' according to the pattern rules". The default pattern rule is to include all modules except druntime/phobos. Say you have two "packages" foo and bar that contain modules inside their respective directories. With the proposed -unittest= these are the semantics you would get:

dmd -unittest foo/*.d bar/*.d        # compiles unittests for foo/*.d and bar/*.d
dmd -unittest=foo foo/*.d bar/*.d    # compiles unittests for foo/*.d
dmd -unittest=bar foo/*.d bar/*.d    # compiles unittests for bar/*.d
dmd -unittest=foo.x foo/*.d bar/*.d  # compiles unittests for foo/x.d
dmd -unittest=-bar foo/*.d bar/*.d   # compiles unittests for foo/*.d

Note that the default behavior makes sense, but this mechanism also gives you fine-grained control to limit unittesting to certain packages or modules. This degree of control would be quite helpful to me; each of the previously listed use cases represents a valid scenario that I would like to be able to handle. Do you have another solution that would provide this functionality? I don't see any reason not to support these use cases.
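The examples above can be captured in a toy matcher (this is not dmd's actual implementation; real -i matching prefers the most specific pattern, while here the last matching pattern simply wins):

```d
import std.algorithm.searching : any, startsWith;

// Decide whether a module gets its unittests compiled, given the
// -unittest= patterns. A leading '-' excludes; a package pattern matches
// the package itself and everything beneath it.
bool enableUnittests(string[] patterns, string moduleName)
{
    bool matches(string pat)
    {
        return moduleName == pat || moduleName.startsWith(pat ~ ".");
    }

    // If any positive pattern is given, modules default to "off";
    // a bare -unittest (no patterns) turns everything on.
    immutable anyPositive = patterns.any!(p => !p.startsWith("-"));
    bool result = !anyPositive;
    foreach (p; patterns)
    {
        if (p.startsWith("-"))
        {
            if (matches(p[1 .. $])) result = false;
        }
        else if (matches(p))
            result = true;
    }
    return result;
}

void main()
{
    assert( enableUnittests(["foo"], "foo.x"));
    assert(!enableUnittests(["foo"], "bar.y"));
    assert(!enableUnittests(["-bar"], "bar.y"));
    assert( enableUnittests(["-bar"], "foo.x"));
    assert( enableUnittests(["foo", "-foo.bar"], "foo.z"));
    assert(!enableUnittests(["foo", "-foo.bar"], "foo.bar.z"));
}
```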
Re: D compiles fast, right? Right??
On Friday, 30 March 2018 at 20:17:39 UTC, Andrei Alexandrescu wrote: On 3/30/18 12:12 PM, Atila Neves wrote: Fast code fast, they said. It'll be fun, they said. Here's a D file: import std.path; Yep, that's all there is to it. Let's compile it on my laptop: /tmp % time dmd -c foo.d dmd -c foo.d 0.12s user 0.02s system 98% cpu 0.139 total Could be faster. That... doesn't seem too fast to me. But wait, there's more: /tmp % time dmd -c -unittest foo.d dmd -c -unittest foo.d 0.46s user 0.06s system 99% cpu 0.525 total Not fast. We need to make -unittest only affect the built module. Even though it breaks certain uses of __traits(getUnittests). No two ways about it. Who can work on that? Andrei If you approve of the -unittest= approach then timotheecour has already offered to implement this. Its pattern matching would work the same as -i and would also use the "implied standard exclusions" that -i uses, namely: -unittest=-std -unittest=-etc -unittest=-core This would mean that by default, just passing "-unittest" would exclude druntime/phobos just like "-i" by itself also does.
Re: D compiles fast, right? Right??
On Friday, 30 March 2018 at 16:12:44 UTC, Atila Neves wrote: Fast code fast, they said. It'll be fun, they said. Here's a D file:

import std.path;

Yep, that's all there is to it. Let's compile it on my laptop:

/tmp % time dmd -c foo.d
dmd -c foo.d  0.12s user 0.02s system 98% cpu 0.139 total

That... doesn't seem too fast to me. But wait, there's more:

/tmp % time dmd -c -unittest foo.d
dmd -c -unittest foo.d  0.46s user 0.06s system 99% cpu 0.525 total

Half. A. Second. AKA "an eternity" in dog years, err, CPU time. I know this has been brought up before, and recently even, but, just... just... sigh. So I wondered how fast it'd be in Go, since it's got a reputation for speedy compilation:

package foo

import "path"

func Foo() string {
    return path.Base("foo")
}

/tmp % time go tool compile foo.go
go tool compile foo.go  0.01s user 0.01s system 117% cpu 0.012 total

See, now that's what I'd consider fast. It has actual code in the file because otherwise it complains the file isn't using the imported package, because, Go things. It compiled so fast I had to check I'd generated an object file, and then I learned you can't use objdump on Go .o files, because... more Go things (go tool objdump for the curious). Ok, so how about C++, surely that will make D look good?

#include <filesystem> // yes, also a one-liner

/tmp % time /usr/bin/clang++ -std=c++17 -c foo.cpp
/usr/bin/clang++ -std=c++17 -c foo.cpp  0.45s user 0.03s system 96% cpu 0.494 total
/tmp % time /usr/bin/g++ -std=c++17 -c foo.cpp
/usr/bin/g++ -std=c++17 -c foo.cpp  0.39s user 0.04s system 99% cpu 0.429 total

So yeeah. If one is compiling unit tests, which I happen to pretty much only exclusively do, then trying to do anything with paths in D is
1. Comparable to C++ in build times
2. Actually _slower_ than C++ (who'd've thunk it?) *
3. Gets lapped around Captain America vs The Falcon style about 50 times by Go.
And that's assuming there's a crazy D programmer out there (hint: me) that actually tries to compile minimal units at a time (with actual dependency tracking!) instead of the whole project at once, otherwise it'll take even longer. And this to just import `std.path`, then there's the actual work you were trying to get to. Today actually made me want to write Go. I'm going to take a shower now. Atila * Building a whole project in C++ still takes a lot longer since D scales much better, but that's not my typical workflow, nor should it be anyone else's. Seems like you're comparing apples to oranges. Go's path.go is very small, a 215-line file: https://github.com/golang/go/blob/master/src/path/path.go Documentation: https://golang.org/pkg/path/ Dlang's std.path is much more comprehensive with 4181 lines: https://github.com/dlang/phobos/blob/master/std/path.d Documentation: https://dlang.org/phobos/std_path.html It's over an order of magnitude more code and only takes twice as long to compile without unittests, and it's only fair to compare the "non-unittest" version of std.path with Go, since Go does not include unittests. I'm not sure why you would compile the standard library unittests every time you compile anything. Probably a consequence of not having `-unittest=`. timotheecour suggested we add support for this and I agree for cases like this, where druntime and phobos would be excluded by default (just like we do with -i), meaning that your compilation example would not have compiled phobos unittests.
Re: dmd -unittest= (same syntax as -i)
On Friday, 16 March 2018 at 07:47:31 UTC, Johannes Pfau wrote: Am Thu, 15 Mar 2018 23:21:42 +0000 schrieb Jonathan Marler: On Thursday, 15 March 2018 at 23:11:41 UTC, Johannes Pfau wrote: Am Wed, 14 Mar 2018 14:22:01 -0700 schrieb Timothee Cour: [...] And then we'll have to add yet another "-import" switch for DLL support. Now we have 3 switches doing essentially the same: Telling the compiler which modules are currently compiled and which modules are part of an external library. Instead of just using the next best simple solution, I think we should take a step back, think about this and design a proper, generic solution. [...] I had the same idea but mine was to add this metadata in the library file itself instead of having it as a separate file. This is to some degree nicer, as it allows for self contained distribution. But then you have to support different library formats, it's more difficult to include support in IDEs and it's more difficult to extend the format. It makes the data more difficult to pull from the file but doesn't make it more difficult to extend. Every library format supports generic "comment" type data blobs used for things like this. You could provide a small library to "pull" this data blob from each supported library format. You would already want to provide a library for the format itself, so that same library could include the code to pull it from each library format (i.e. ELF/OMF/COFF). I actually started writing a library in anticipation of this. Currently it can find and print the modules in an OMF/ELF file by finding the TypeInfo symbols. This could be a fallback mechanism to use when a library didn't have any metadata, or even be used to "patch" a library to include metadata: https://github.com/marler8997/dlangmodulereader Of course TypeInfo goes away for code compiled with -betterC so it doesn't always work, hence why you'd want to add the metadata beforehand.
However, this design is "orthogonal" to -i= and -unittest=; in both cases you may want to include/exclude certain modules regardless of whether or not they are in a library. When would this be the case for -i? First, if we were to add the functionality you've talked about (which I hope we do at some point), it would work alongside -i, not as an alternative to it. The new mechanism would allow us to ALWAYS EXCLUDE modules that exist in a pre-compiled library. So we could remove the standard exclusions from -i (-i=-std -i=-core -i=-etc -i=-object) because those modules would already be excluded since they would be in phobos. And if a program wasn't using phobos, then they wouldn't be erroneously excluded. It just works as it should. I agree there is no use case where you would want to compile modules that are in a library passed to the compiler. However, it's easy to come up with use cases where you want to exclude imported modules from the compilation even if they aren't in any library passed to the compiler. In fact, if you're doing any type of incremental compilation, this will most certainly be the more common use case, since libraries are only passed to the compiler/linker during the final link stage. For example, maybe you are compiling a "plugin" that will be linked into another program but uses a common library that you want to exclude from your initial compilation; call it "library_for_plugins":

--- myplugin.d
static import library_for_plugins; // DO NOT compile this module into the plugin library
static import some_other_library;  // DO compile this module into the plugin library

void foo()
{
    // uses symbols from both modules, but that doesn't mean you want
    // to include all of them in the compilation
    library_for_plugins.foo();
    some_other_library.bar();
}

dmd -c -od=obj -i=-library_for_plugins myplugin.d
lib -o myplugin.lib obj\*.o

Of course, this is just one example I came up with on the fly.
You could come up with any number of use cases where you would want this. The point is, this is a compiler; its job is to compile modules, and there are a lot more use cases than just "compile everything except what's in the libraries I've provided".
Re: dmd -unittest= (same syntax as -i)
On Thursday, 15 March 2018 at 23:11:41 UTC, Johannes Pfau wrote: Am Wed, 14 Mar 2018 14:22:01 -0700 schrieb Timothee Cour: [...] And then we'll have to add yet another "-import" switch for DLL support. Now we have 3 switches doing essentially the same: Telling the compiler which modules are currently compiled and which modules are part of an external library. Instead of just using the next best simple solution, I think we should take a step back, think about this and design a proper, generic solution. [...] I had the same idea, but mine was to add this metadata in the library file itself instead of having it as a separate file. However, this design is "orthogonal" to -i= and -unittest=; in both cases you may want to include/exclude certain modules regardless of whether or not they are in a library.
Re: dmd -unittest= (same syntax as -i)
On Thursday, 15 March 2018 at 12:14:12 UTC, Jacob Carlborg wrote: On Thursday, 15 March 2018 at 05:22:45 UTC, Seb wrote: Hmm how would this solve the StdUnittest use case? I.e. that templated phobos unittests and private unittest symbols are compiled into the users unittests? See also: https://github.com/dlang/phobos/pull/6202 https://github.com/dlang/phobos/pull/6159 I would hope this would be solvable without the having the user do something, like `-unittest=`. -- /Jacob Carlborg We could do the same thing for -unittest that we did with -i, which is to implicitly add: -unittest=-std -unittest=-core -unittest=-etc -unittest=-obj
Re: dmd -unittest= (same syntax as -i)
On Wednesday, 14 March 2018 at 21:22:01 UTC, Timothee Cour wrote: would a PR for `dmd -unittest= (same syntax as -i)` be welcome? wouldn't that avoid all the complications with version(StdUnittest)? eg use case:

# compile with unittests just for package foo (excluding subpackage foo.bar)
dmd -unittest=foo -unittest=-foo.bar -i main.d

I'm in favor. If you decide to create a PR, all the match logic is in mars.d at the end of the file. Might want to move it to another module.
Re: Special Code String for mixins
On Wednesday, 15 March 2017 at 13:50:28 UTC, Inquie wrote: I hate building code strings for string mixins as it's very ugly and seems like a complete hack. How bout, instead, we have a special code string similar to a multiline string that allows us to represent valid D code. The compiler can then verify the string after compilation to make sure it is valid D code(since it ultimately is a compile time constant). e.g., string s = "smile"; enum code1 = @# void happyCode = "Makes me @@s@@"; #@ enum code2 = code1 ~ @# int ImThisHappy = @@s.length@@; #@ mixin(code); or mixin(code.stringof); // possible to convert code string to a string and vice versa. or whatever syntax one thinks is better. The point is that the code string is specified different and then is no longer ambiguous as a normal string. Compilers and IDE's can make more informed decisions. There might be a much better way, but something should be done to clean up this area of D. It is a mess to have to use string building to create code. (it's amazingly powerful, but still a mess) I've got a PR for dmd (https://github.com/dlang/dmd/pull/7988) that implements "interpolated strings" which makes generating code with strings MUCH nicer, i.e.

string generateFunction(string attributes, string returnType, string name,
    string args, string body)
{
    import std.conv : text;
    return text(iq{
        // This is an interpolated string!
        $(attributes) $(returnType) $(name)($(args))
        {
            $(body)
        }
    });
}

// Let's use it:
mixin(generateFunction("pragma(inline)", "int", "add", "int a, int b", "return a + b;"));
assert(100 == add(25, 75));
Re: template auto value
On Monday, 5 March 2018 at 13:03:50 UTC, Steven Schveighoffer wrote: On 3/2/18 8:49 PM, Jonathan Marler wrote: On Saturday, 3 March 2018 at 00:20:14 UTC, H. S. Teoh wrote: On Fri, Mar 02, 2018 at 11:51:08PM +, Jonathan Marler via Digitalmars-d wrote: [...] Not true: template counterexample(alias T) {} int x; string s; alias U = counterexample!x; // OK alias V = counterexample!1; // OK alias W = counterexample!"yup"; // OK alias X = counterexample!s; // OK alias Z = counterexample!int; // NG The last one fails because a value is expected, not a type. If you *really* want to accept both values and types, `...` comes to the rescue: template rescue(T...) if (T.length == 1) {} int x; string s; alias U = rescue!x; // OK alias V = rescue!1; // OK alias W = rescue!"yup"; // OK alias X = rescue!s; // OK alias Z = rescue!int; // OK! T Ah thank you...I guess I didn't realize that literals like 1 and "yup" were considered "symbols" when it comes to alias template parameters. Well, they aren't. But template alias is a bit of a mess when it comes to the spec. It will accept anything except keywords AFAIK. Would be nice if it just worked like the variadic version. The variadic version is what is usually needed (you see a lot of if(T.length == 1) in std.traits). But, if you wanted to ensure values (which is more akin to your proposal), you can do: template rescue(alias val) if(!is(val)) // not a type -Steve Thanks for the tip, it looks like the spec does mention "literals" but "alias" parameters are even more versatile than that (https://dlang.org/spec/template.html#TemplateAliasParameter). For example you can pass a function call. I've created an issue to make sure we update the spec to reflect the true capabilities: https://issues.dlang.org/show_bug.cgi?id=18558
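A runnable sketch of the value-only constraint mentioned above (`if (!is(val))`); the template name `onlyValue` is my own:

```d
// Sketch of an alias parameter constrained to values only, per the
// `if (!is(val))` tip above; the name `onlyValue` is my own.
template onlyValue(alias val) if (!is(val))
{
    enum onlyValue = true;
}

int x;
string s;

static assert(onlyValue!x);      // symbols work
static assert(onlyValue!1);      // so do literals
static assert(onlyValue!"yup");
static assert(onlyValue!s);
static assert(!__traits(compiles, onlyValue!int)); // types are rejected

void main() {}
```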
Re: State of D: The survey is killing man, way too much
On Saturday, 3 March 2018 at 17:42:25 UTC, David Gileadi wrote: On 3/3/18 8:08 AM, 0x wrote: The D survey is killing maan! Those are lots of questions in there If I ever get hold of the people behind it... Is it a coincidence that your user handle is "negative one"? ;) He's obviously unsigned and therefore cannot be "negative". In his mind he was just overflowed :)
Re: template auto value
On Saturday, 3 March 2018 at 00:20:14 UTC, H. S. Teoh wrote: On Fri, Mar 02, 2018 at 11:51:08PM +, Jonathan Marler via Digitalmars-d wrote: [...] Not true: template counterexample(alias T) {} int x; string s; alias U = counterexample!x; // OK alias V = counterexample!1; // OK alias W = counterexample!"yup"; // OK alias X = counterexample!s; // OK alias Z = counterexample!int; // NG The last one fails because a value is expected, not a type. If you *really* want to accept both values and types, `...` comes to the rescue: template rescue(T...) if (T.length == 1) {} int x; string s; alias U = rescue!x; // OK alias V = rescue!1; // OK alias W = rescue!"yup"; // OK alias X = rescue!s; // OK alias Z = rescue!int; // OK! T Ah thank you...I guess I didn't realize that literals like 1 and "yup" were considered "symbols" when it comes to alias template parameters.
template auto value
I believe I found a small hole in template parameter semantics. I've summarized it here (https://github.com/marler8997/dlangfeatures#template-auto-value-parameter). Wanted to get feedback before I look into creating a PR for it. -- COPY/PASTED from https://github.com/marler8997/dlangfeatures#template-auto-value-parameter -- If you reference the D grammar for templates (See https://dlang.org/spec/template.html), there are currently 5 categories of template parameters: TemplateParameter: TemplateTypeParameter TemplateValueParameter TemplateAliasParameter TemplateSequenceParameter TemplateThisParameter However there is a hole in this list, namely, generic template value parameters. The current TemplateValueParameter grammar node must explicitly declare a "BasicType": TemplateValueParameter: BasicType Declarator BasicType Declarator TemplateValueParameterSpecialization BasicType Declarator TemplateValueParameterDefault BasicType Declarator TemplateValueParameterSpecialization TemplateValueParameterDefault For example: --- template foo(string value) { } foo!"hello"; --- However, you can't create a template that accepts a value of any type. This would be a good use case for the auto keyword, i.e. --- template foo(auto value) { } foo!0; foo!"hello"; --- This would be a simple change to the grammar, namely, BasicTemplateType: BasicType auto TemplateValueParameter: BasicTemplateType Declarator BasicTemplateType Declarator TemplateValueParameterSpecialization BasicTemplateType Declarator TemplateValueParameterDefault BasicTemplateType Declarator TemplateValueParameterSpecialization TemplateValueParameterDefault
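Until something like `template foo(auto value)` exists, a single-element variadic parameter constrained to non-types gets close to the proposed behavior. A sketch, with names of my own choosing:

```d
// Sketch: emulating the proposed `auto value` parameter with a
// single-element variadic constrained to non-types (names are mine).
template valueOf(V...) if (V.length == 1 && !is(V[0]))
{
    enum valueOf = V[0];
}

static assert(valueOf!0 == 0);               // int value accepted
static assert(valueOf!"hello" == "hello");   // string value accepted
static assert(!__traits(compiles, valueOf!int)); // types rejected

void main() {}
```

The difference from a true `auto value` parameter is that this accepts symbols as well as literals, so the constraint only approximates the proposal.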
Re: -libpath?
On Wednesday, 21 February 2018 at 04:07:38 UTC, Mike Franklin wrote: On Tuesday, 6 February 2018 at 17:49:33 UTC, Jonathan Marler wrote: What do people think of adding an argument to DMD to add library search paths? Currently the only way I know how to do this would be via linker-specific flags, i.e. GCC: -L-L/usr/lib MSVC: -L-libpath:C:\mylibs OPTLINK: -L+C:\mylibs\ NOTE: the optlink version only works if no .def file is specified. If you have a .def file, then you can't add any library search paths :) If we added a new "linker-independent" flag to dmd, then you could add paths using the same interface regardless of which linker you are using. I'd expect the argument to be something like: -libpath= The disadvantage is it would be another command line option added to DMD. If there is general agreement that this is a desirable feature, I'll go ahead and implement it. Given the current state of things, and the issue described above, I think a linker/platform independent flag would be nice. However, I'd much rather have the compiler just be a compiler and not have to worry about all the intricacies of building. IMO, the compiler should get out of the linking business altogether, and just generate object files. A separate build tool could then call the compiler, linker, and whatever else to do builds. But that ship has probably sailed. Mike Interesting idea. Actually I don't think it's too late for this. It is too late for DMD to just be a compiler, but that doesn't mean the compiler can't be stripped out as a separate component that DMD interfaces with. This would just make DMD a build tool/compiler/linker wrapper/etc that interfaces with underlying components that could be invoked independently as well. In any case, DMD has evolved to make development more convenient, adding features in a monolithic fashion that could have otherwise been implemented using independent components as part of a suite of compiler software, not unlike LLVM.
However, this requires a lot more effort, creating interfaces between each component that then need to be well-defined and maintained...sometimes you just want to provide a feature without going through all the grunt work to make it robust. I think this is a natural evolution of software. Most of it starts monolithic and components are pulled out as needed, and this can still be done for the D compiler.
Re: -libpath?
On Tuesday, 6 February 2018 at 17:49:33 UTC, Jonathan Marler wrote: What do people think of adding an argument to DMD to add library search paths? Currently the only way I know how to do this would be via linker-specific flags, i.e. GCC: -L-L/usr/lib MSVC: -L-libpath:C:\mylibs OPTLINK: -L+C:\mylibs\ NOTE: the optlink version only works if no .def file is specified. If you have a .def file, then you can't add any library search paths :) If we added a new "linker-independent" flag to dmd, then you could add paths using the same interface regardless of which linker you are using. I'd expect the argument to be something like: -libpath= The disadvantage is it would be another command line option added to DMD. If there is general agreement that this is a desirable feature, I'll go ahead and implement it. no one responded to this, but I thought I would bump this to the front page to double check if there is any interest in this feature.
Re: How to represent multiple files in a forum post?
On Sunday, 18 February 2018 at 23:46:05 UTC, Sönke Ludwig wrote: On 14.02.2018 at 19:33, Jonathan Marler wrote: @timotheecour and I came up with a solution to a common problem: How to represent multiple files in a forum post? Why not multipart/mixed? Since this is NNTP based, wouldn't that be the natural choice? That is, assuming that forum.dlang.org is the target for this, of course. Actually, using multipart/mixed was my initial thought! But that format is a bit awkward and verbose for humans to type, which makes sense because it was designed to be generated by programs, not as a human-maintained block of text. HAR: == --- a.txt This is a.txt --- b.txt This is b.txt == Multipart: == Content-Type: multipart/alternative; boundary= -- Content-Type: text/plain; charset=us-ascii Filename: a.txt This is a.txt -- Content-Type: text/plain; charset=us-ascii Filename: b.txt This is b.txt == HAR was actually born out of Multipart; it's really just a simplified version of it :)
Re: How to represent multiple files in a forum post?
On Monday, 19 February 2018 at 01:26:43 UTC, Jonathan M Davis wrote: Okay. Maybe I'm dumb, but what is the point of all of this? Why would any kind of standard be necessary at all? Good question. Having a standard allows computers to interface with the archive as well as humans. It's not hard to create "ad hoc" formats on the fly to represent multiple files, which is why having a standard doesn't immediately come to mind. But with a standard, you can create tools to process that format and understand it. As an example, we're exploring 2 useful applications, namely, representing multi-file tests in the dmd test suite and multi-file programs in https://run.dlang.io; it's also already been useful to me in copy/pasting complex test cases to forums without having to manage a bunch of individual files. Of course, these are just a few examples of the benefits you get with a standard. So...is it necessary? No. Is it helpful? I think so. Is it worth it? I think there's a good case for it when you weigh the simplicity of it against the benefit.
Re: How to represent multiple files in a forum post?
On Sunday, 18 February 2018 at 21:40:34 UTC, Martin Nowak wrote: On Sunday, 18 February 2018 at 04:04:48 UTC, Jonathan Marler wrote: If there is an existing standard that's great, I wasn't able to find one. If you find one let me know. Found ptar (https://github.com/jtvaughan/ptar) and shar (https://linux.die.net/man/1/shar), both aren't too good fits, so indeed a custom format might be in order. Interesting. Shar definitely doesn't fit the bill (I used it to archive a simple file and got a HUGE shell file). However, PTAR is interesting. Here's the example they provide (https://github.com/jtvaughan/ptar/blob/master/FORMAT.md#example-archive) == PTAR == Metadata Encoding: utf-8 Archive Creation Date: 2013-09-24T22:41:20Z Path: a.txt Type: Regular File File Size: 32 User Name: foo User ID: 1000 Group Name: bar Group ID: 1001 Permissions: 664 Modification Time: 1380015036 --- These are the contents of a.txt --- Path: b.txt Type: Regular File File Size: 32 User Name: foo User ID: 1000 Group Name: baz Group ID: 522 Permissions: 664 Modification Time: 1380015048 --- These are the contents of b.txt --- In HAR you could represent the same 2 files (omitting all the metadata) with: == HAR == --- a.txt These are the contents of a.txt --- b.txt These are the contents of b.txt The PTAR format could be salvaged if many of the fields were optional; however, it appears that most of them are always required, i.e. NOTE: Unless otherwise specified, all of the aforementioned keys are required. Keys that do not apply to a file entry are silently ignored. If the standard was changed though (along with making the initial file header optional), the example could be slimmed down to this: Path: a.txt --- These are the contents of a.txt --- Path: b.txt --- These are the contents of b.txt --- This format looks fine, but it's a big modification of PTAR as it exists.
Re: How to represent multiple files in a forum post?
On Saturday, 17 February 2018 at 22:11:28 UTC, Martin Nowak wrote: On Wednesday, 14 February 2018 at 18:33:23 UTC, Jonathan Marler wrote: @timotheecour and I came up with a solution to a common problem: How to represent multiple files in a forum post? Oh, I thought it already was a standard, but [.har](https://en.wikipedia.org/wiki/.har) is JSON based and very different, so the name is already taken. Have you done some proper research for existing standards? There is a 50 year old technique to use ASCII file separators (https://stackoverflow.com/a/18782271/2371032). Sure we can find some existing ones. We recently wondered how gcc/llvm test codegen, at least in gcc or gdb I've seen a custom test case format. Those seem to be binary control characters, they also don't allow you to specify the filename...how does that solve the problem of being able to represent multiple files in a forum post? If there is an existing standard that's great, I wasn't able to find one. If you find one let me know.
Re: How to represent multiple files in a forum post?
On Wednesday, 14 February 2018 at 20:16:32 UTC, John Gabriele wrote: Can the har file delimiter be more than three characters? Yes. So long as the delimiter is consistent across the whole file, i.e. file1 file2 (See https://github.com/marler8997/har#custom-delimiters) What do you think of allowing trailing dashes (or whatever the delim chars are) after the file/dir name? It would make it easier to see the delimiters for larger har'd files. --- file1.d --- module file1; --- file2.d --- module file2; (Note that markdown allows extra trailing characters with its ATX-style headers, and Pandoc does likewise with ATX headers as well as its div syntax (delimited by at least three colons), for that very reason --- to make it easier to spot them.) Given the simplicity of the addition and the fact that other standards have found it helps readability...I think you've made a fair case. I'll add a note in the README marking it as a probable addition.
Re: How to represent multiple files in a forum post?
On Wednesday, 14 February 2018 at 18:52:35 UTC, user1234 wrote: On Wednesday, 14 February 2018 at 18:47:31 UTC, Jonathan Marler wrote: On Wednesday, 14 February 2018 at 18:44:06 UTC, user1234 wrote: how does it mix with markdown, html etc ? They'll have to use escapes to be compliant, haven't they ? Works great with markdown. ``` --- file1 Contents of file1 --- file2 Contents of file2 ``` hyphens are used for titles in some flavors. 2 things mitigate that. 1) Markdown does not preprocess text in between triple backticks 2) Even if you didn't put your HAR file in between triple backticks: Har uses "newline, dash, dash, dash, space, name" Markdown uses "newline, dash, dash, dash, dash*, newline" These don't actually conflict, i.e. Markdown title --- --- har file
Re: How to represent multiple files in a forum post?
On Wednesday, 14 February 2018 at 18:44:06 UTC, user1234 wrote: how does it mix with markdown, html etc ? They'll have to use escapes to be compliant, haven't they ? Works great with markdown. ``` --- file1 Contents of file1 --- file2 Contents of file2 ```
Re: How to represent multiple files in a forum post?
On Wednesday, 14 February 2018 at 18:45:59 UTC, Seb wrote: On Wednesday, 14 February 2018 at 18:33:23 UTC, Jonathan Marler wrote: @timotheecour and I came up with a solution to a common problem: How to represent multiple files in a forum post? What's wrong with https://gist.github.com? I'm not sure how that allows you to represent multiple files in a forum post, or how it would help you download the files locally to reproduce/test them.
How to represent multiple files in a forum post?
@timotheecour and I came up with a solution to a common problem: How to represent multiple files in a forum post? So we decided to take a stab at creating a standard! (cue links to https://xkcd.com/927) We're calling it "har" (inspired by the name tar). Here's the REPO: https://github.com/marler8997/har and here's what it looks like: --- file1.d module file1; --- file2.d module file2; // some cool stuff --- main.d import file1, file2; void main() { } --- Makefile main: main.d file1.d file2.d dmd main.d file1.d file2.d The repo contains the standard in README.md and a reference implementation for extracting files from a har file (archiving not implemented yet). One of the great things is when someone creates a post with this format, you can simply copy paste it to "stuff.har" and then extract it with `har stuff.har`. No need to create each individual file and copy/paste the contents to each one. Is this going to change the world? No...but seems like a nice solution to a minor annoyance :)
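To show how little machinery the format needs, here is a rough sketch of an extractor for the simple `--- filename` delimiter form shown above. This is my own toy code, not the reference implementation, and it ignores details in the full spec such as custom delimiters and directories:

```d
// Rough sketch of HAR extraction for the plain "--- filename" delimiter
// form: split into lines, start a new file at each delimiter, and append
// everything else to the current file. My own toy code, not the
// reference implementation (no custom delimiters, directories, etc.).
import std.algorithm : startsWith;
import std.string : lineSplitter, strip;

string[string] extractHar(string text)
{
    string[string] files;
    string current;
    foreach (line; text.lineSplitter)
    {
        if (line.startsWith("--- "))
        {
            current = line[4 .. $].strip;
            files[current] = "";
        }
        else if (current.length)
            files[current] ~= line ~ "\n";
    }
    return files;
}

void main()
{
    auto files = extractHar("--- a.d\nmodule a;\n--- b.d\nmodule b;\n");
    assert(files["a.d"] == "module a;\n");
    assert(files["b.d"] == "module b;\n");
}
```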
Re: Dub, Cargo, Go, Gradle, Maven
On Monday, 12 February 2018 at 10:35:06 UTC, Russel Winder wrote: In all the discussion of Dub to date, it hasn't been pointed out that JVM building merged dependency management and build a long time ago. Historically: Make → Ant → Maven → Gradle and Gradle can handle C++ as well as JVM language builds. So the integration of package management and build as seen in Go, Cargo, and Dub is not a group of outliers. Could it be then that it is the right thing to do? After all, package management is a dependency management activity and build is a dependency management activity, so why separate them? Just have a single ADG to describe the whole thing. SCons, CMake, and Meson (also Reggae?) are traditional build tools, but they assume all package dependency management is handled elsewhere, i.e. they state what is required for a build but assume some other tool provides those packages, usually OS package management, but note with C++ Conan is rapidly becoming a big player. JFrog via Bintray and Artifactory do appear to be the leaders in repository management, and Gradle works very well with it. Rust, Ceylon, and D have to date chosen to eschew systems like Bintray to create their own language specific versions for whatever reason. This leads to language specific dependency management and build. Go is also in this category really except that Git, Mercurial, and Breezy (née Bazaar) repositories are the only package storage used. Then, in a DevOps world, there is deployment, which is usually a dependency management task. Is a totally new tool doing ADG manipulation really needed for this? The lessons from all the tools from SCons to Gradle is that it is all about ADG manipulation and constraint satisfaction. SCons is really quite restricted even though it does it very well (*). Gradle really tries hard not just to solve the problems of Maven (**), but to do end-to-end project management well.
In a sense it is the antithesis of the each-tool-does-one-thing-and-one-thing-only model; it is the "there is one and only one ADG to describe the life of the project" model. Maven and Gradle, and to a lesser extent Cargo and Go, emphasise project management as a holistic thing, rather than making people decide on each item of the tool chain. Gradle provides a good plugin system to allow changes to the default standard project lifecycle. Gradle uses Groovy scripts or Kotlin scripts for project specifications. Most projects are easily described by a purely declarative specification using internal DSLs. However, for those awkward bits of some projects a bit of programming in the specification solves the problem. So this goes against the "a project specification must be purely declarative so use TOML/JSON/SDL" view, but isn't easy DevOps worth it? Atila's Reggae has already shown how easy it is to use D (or Python, Ruby, JavaScript, Lua) to define a build in an internal DSL. Merging ideas from Dub, Gradle, and Reggae into a project management tool for D (with C) projects is relatively straightforward to plan, albeit really quite a complicated project. Creating the core ADG processing is the first requirement. It has to deal with external dependencies, project build dependencies, and deployment dependencies. The initial problem is creating an ADG from what is potentially a CDG and then doing constraint satisfaction to generate actions. SCons and Gradle have a lot to offer on this. Having been obsessed by build and project management since about 1976, I'd be interested in doing some work on this. (*) The O(N) vs. O(n), SCons vs. Tup thing that T raised in another thread is important, but actually it is an implementation thing of how you detect change; it isn't an algorithmic issue at a system design level. But it is important. (**) Which many people ignore because Maven remains the major project management tool in the JVM-verse. Lots of stuff here.
I would love for a build/package management tool to emerge and take over like git did with source control. However, my guess is that this problem is very hard, and that's why there are so many tools that have their own pros and cons. But it sounds like you're trying to sift through them all and find the gems in each tool. I myself have made attempts to tackle the problem. I created a tool called "dbuild" that allowed you to use D code to describe your project and then build it for you (kinda like Reggae I suppose). And my latest project I call "bidmake", which is more general. It implements the model of what I call "build contracts". A project describes a "contract" and various "contractors" can be used to fulfill the contract. You could have a "build contractor" that builds a contract, or a "documentation contractor" that generates documentation, etc. Anyway, I currently use the tool successfully on a few projects but I'm still not sure how much potential it has. Maybe with your experience you
Re: Casts
On Tuesday, 6 February 2018 at 21:35:11 UTC, Steven Schveighoffer wrote: On 2/6/18 4:16 PM, timotheecour wrote: On Wednesday, 22 October 2008 at 18:43:15 UTC, Denis Koroskin wrote: On Wed, 22 Oct 2008 22:39:10 +0400, bearophile wrote: [...] You can use the following short-cut to avoid double casting: if (auto foo = cast(Foo)obj) { ... } else if (auto bar = cast(Bar)obj) { ... } else { ... } doesn't work with extern(C++) classes see https://forum.dlang.org/post/nolylatfyjktnvwey...@forum.dlang.org Timotheecour, I'm not sure how you find these threads. This one is almost 10 years old! Back then, C++ class integration wasn't even a twinkling in Walter's eye ;) -Steve Lol, I did the same thing once...I think I may have clicked a link on an old thread, and when I went back to the "page view" it must have gone back to the "page view" where the original thread was listed. I saw an interesting thread and responded and then someone let me know it was 7 years old...I had no idea!!!