Re: Python : Pythonista / Ruby: Rubyist : / D : ?
On Saturday, 22 April 2017 at 08:30:03 UTC, Russel Winder wrote: Terms such as Pythonista, Rubyist, Rustacean, Gopher, etc. are terms of tribalism and exclusion. They are attempts to ensure people claiming membership of the tribe reject being polyglot by pressuring them to eschew all other languages. A good programmer can work professionally with a number of languages (the psychology-of-programming people have data supporting this) - if the languages have different computational models. Agreed. No need to praise your own group while ridiculing others. These are programming languages, not text editors.
Re: Dlang Features You Would Like To Share
On Wednesday, 12 April 2017 at 21:40:48 UTC, bluecat wrote: What are some features that you have discovered that you would like to share with the community? For me, one thing I found interesting was the ability to define structures dynamically using mixins:

```
import std.stdio;
import std.format : format;

template MakePoint(string name, string x, string y)
{
    const char[] MakePoint = "struct %s {int %s; int %s;}".format(name, x, y);
}

mixin(MakePoint!("Point", "x", "y"));

void main()
{
    auto pt = new Point;
    pt.x = 1;
    pt.y = 2;
    writefln("point at (%s, %s)", pt.x, pt.y);
}
```

I really like the ability to pass delegates as `alias` template arguments. This allows me to pass these delegates to templates, and the "higher-order" template can instantiate them multiple times with multiple types: https://dpaste.dzfl.pl/e2a7a252b5cc
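The dpaste link above is dead, so here is a minimal sketch of the idea (the name `describeAll` and the lambda are mine, not from the paste):

```d
import std.conv : to;

// A "higher-order" template: it takes a callable as an `alias` parameter
// and instantiates it once per argument type.
string describeAll(alias dlg, Args...)(Args args)
{
    string result;
    foreach (arg; args)           // unrolled at compile time
        result ~= dlg(arg) ~ ";"; // dlg is instantiated for each type
    return result;
}

void main()
{
    // The same lambda is applied to an int, a double, and a string.
    auto s = describeAll!(x => x.to!string)(1, 2.5, "three");
    assert(s == "1;2.5;three;");
}
```

The lambda is not converted to a single delegate type; each `foreach` iteration instantiates it anew for that argument's type.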
Re: Happy December 13th!
On Tuesday, 13 December 2016 at 13:57:33 UTC, Walter Bright wrote: On 12/13/2016 2:48 AM, Walter Bright wrote: What a great day to be alive! Just feeling really blessed today, and hope you all are too. This is a fake message. - the real Walter Then why is your Gravatar account called "walterbright2" while the fake message sender's account is called "walterbright"?
Re: Fun: Shooting yourself in the foot in D
On Thursday, 27 October 2016 at 19:49:16 UTC, Ali Çehreli wrote: http://www.toodarkpark.org/computers/humor/shoot-self-in-foot.html Some entries for reference: C - You shoot yourself in the foot. - You shoot yourself in the foot and then nobody else can figure out what you did. C++ - You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there." Python - You shoot yourself in the foot and then brag for hours about how much more elegantly you did it than if you had been using C or (God forbid) Perl. What would the entry for D be? :) Ali You conceive a baby which is born with a bullet genetically shot in the foot.
Re: Linus' idea of "good taste" code
On Tuesday, 25 October 2016 at 22:53:54 UTC, Walter Bright wrote: It's a small bit, but the idea here is to eliminate if conditionals where possible: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a#.nhth1eo4e This is something we could all do better at. Making code a straight path makes it easier to reason about and test. Eliminating loops is something D adds, and goes even further to making code a straight line. One thing I've been trying to do lately when working with DMD is to separate code that gathers information from code that performs an action. (The former can then be made pure.) My code traditionally has it all interleaved together. I'd like to point to Joel Spolsky's excellent article "Five Worlds" - http://www.joelonsoftware.com/articles/FiveWorlds.html TL;DR: Joel Spolsky argues that different types ("worlds") of development require different qualities and different priorities, both from the code and the process. Because of that, advice given by experts of one world does not necessarily apply to other worlds, even if the expert is really smart and experienced and even if the advice was learned with great pain. Linus Torvalds is undoubtedly smart and experienced, but he belongs to the world of low-level kernel and filesystem code. Just because such code would be considered "tasteless" there doesn't mean it's tasteless everywhere.
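The article's central example is removing a node from a singly linked list; a sketch of the "tasteful" version ported to D (using raw pointers, as the article does in C) might look like this:

```d
// Sketch of the "good taste" example from the linked article: walking a
// pointer-to-pointer removes the special case for deleting the head node.
struct Node { int value; Node* next; }

void removeNode(Node** head, Node* target)
{
    // `indirect` always points at the pointer that refers to the current
    // node - initially the head pointer, afterwards some node's `next`.
    Node** indirect = head;
    while (*indirect != target)
        indirect = &(*indirect).next;
    *indirect = target.next; // works for head and non-head alike, no `if`
}

void main()
{
    auto c = new Node(3, null);
    auto b = new Node(2, c);
    auto a = new Node(1, b);
    Node* head = a;
    removeNode(&head, a); // removing the head needs no special case
    assert(head is b);
    removeNode(&head, c);
    assert(head is b && b.next is null);
}
```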
Re: Tuple enhancement
On Sunday, 16 October 2016 at 13:58:51 UTC, Andrei Alexandrescu wrote: I was thinking it would be handy if tuples had a way to access a field by name at runtime. E.g.:

```
Tuple!(int, "a", double, "b") t;
string x = condition ? "a" : "b";
double v = t.get!string(x, 3.14);
```

The get method takes the field name and the type of the presumed field, and it returns the value of the field in the tuple. If there is no field by that name and type, it returns the second argument. Requirements:
* Do not throw - allow the second argument to be a throwing delegate
* Do not add overhead if the method is never used
* Figure out a reasonable (but not all too complicated) way to deal with implicit conversions, e.g. if x == "a" in the code above, should it return 3.14 or convert the int to double?
* Handle qualifiers appropriately
Andrei

When I need to choose at runtime a value out of multiple choices with different types (all available at compile time), and handle them similarly, I like to use something like this:

```
import std.stdio;
import std.typecons;
import std.conv;

auto getAnd(alias dlg, T)(T tup, string key)
{
    final switch (key) foreach (fieldName; T.fieldNames)
    {
        case fieldName:
            return dlg(__traits(getMember, tup, fieldName));
    }
}

void main()
{
    Tuple!(int, "a", double, "b") t;
    t.a = 3;
    t.b = 3.14;

    string toString(string key)
    {
        return t.getAnd!(x => x.to!string)(key);
    }

    assert(toString("a") == "3");
    assert(toString("b") == "3.14");
}
```

The idea is to pass a delegate as an `alias` argument and instantiate it multiple times, once for each field. This means that instead of being forced to convert the fields to a common type, we can write the code once and have it use the correct type for each field.
Re: DIP 1002 (TryElseExpression) added to the queue
On Wednesday, 28 September 2016 at 21:00:00 UTC, Steven Schveighoffer wrote: Declaring variables that you need in the right scopes is pretty straightforward. Having scopes magically continue around other separate scopes (catch scopes) doesn't look correct. I get why it's desired, but it doesn't look clean at all. -Steve Consider this:

```
try
{
    auto foo = Foo();
}
catch (FooCreationException)
{
    // ...
}
else
{
    foo.doSomethingWithFoo();
}
// foo does not exist here
```

versus this:

```
Foo foo;
try
{
    foo = Foo();
}
catch (FooCreationException)
{
    // ...
}
else
{
    foo.doSomethingWithFoo();
}
// foo exists here - it might be initialized, it might not be...
```
Re: DIP 1002 (TryElseExpression) added to the queue
On Wednesday, 28 September 2016 at 11:17:05 UTC, pineapple wrote: On Wednesday, 28 September 2016 at 07:47:32 UTC, Andrei Alexandrescu wrote: * I saw in the forum that the "else" clause is supposed to run in the scope of the "try" statement, but that is not mentioned in the proposal. Even though that is the semantics in Python, that should be explicit in the document. The proposal should be standalone and fully specified without knowing Python or perusing external links. * The fact above (the "else" clause continues the scope of the statement after "try") is surprising, considering that the "catch" and "finally" clauses introduce their own scopes. The irregularity may confuse users. If the "else" clause is defined to introduce its own scope, it seems the advantages of the proposal are diminished. It was an idea that was raised, yes. If catch and finally don't continue the scope of try, then neither should else. That said, it might be preferable if they all did continue try's scope. But this would be a distinct and separate change. I'm the one who suggested that, and I believe there is a very good reason why `else`'s scope behavior should differ from `catch`'s and `try`'s. `catch` and `finally` should not continue `try`'s scope, because they can run even when `try` did not finish successfully(that's their purpose), which means the variables declared in `try` may or may not be initialized in the `catch` and `finally` blocks. For the same reason it's not useful to have them continue the scope - you can't use the variables declared in them if you don't know whether or not they have been initialized. `else` is different - it is guaranteed to only execute if `try` finished successfully, which means all variables in the `try` block have had their initialization statements executed. 
Also, it's actually useful to have it continue the scope, because one may want to declare a variable in `try` (so they can `catch` exceptions in its initialization) but use it in `else` (so the same exceptions in its usage will bubble up). This has little to do with Python's semantics. Python uses function-level scoping, so it doesn't have separate scopes for control statements - either the initialization statement was executed and the variable is initialized, or it wasn't and the variable is not declared. D cannot imitate this behavior...
Re: DIP 1002 (TryElseExpression) added to the queue
On Tuesday, 27 September 2016 at 09:48:42 UTC, Jonathan M Davis wrote: On Tuesday, September 27, 2016 09:30:10 Dicebot via Digitalmars-d wrote: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1002.md PR: https://github.com/dlang/DIPs/pull/43 Abstract: In Python, the try/catch/finally syntax is augmented with an additional clause, termed else. It is a fantastically useful addition to the conventional syntax. It works like this:

```
try:
    do_something()
except Exception as e:
    pass  # Runs when an error inheriting from Exception was raised
else:
    pass  # Runs when no error was raised
finally:
    pass  # Runs unconditionally, evaluates last
```

And why not just put the code that would go in the else at the end of the try block? Just like with this proposed else, the code would only run if the preceding code didn't throw any exceptions. This just seems like an attempt to make D more like python rather than to add anything useful. - Jonathan M Davis

Exceptions thrown in the `else` clause are not caught in the catch/except clauses. This gives you finer-grained control:

```
try
{
    auto f1 = File("f1.txt");
}
catch (ErrnoException)
{
    // f1.txt not found? no biggie...
}
else
{
    // This won't happen if we can't open f1.txt
    // If we can't open f2 we don't want to catch the exception:
    auto f2 = File("f2.txt", "w");
    // Do stuff with f1 and f2
}
// This will still happen even if we can't open f1.txt
```

BTW, if this feature is ever implemented in D, it's important that the else clause continue the try clause's scope.
Re: Named arguments via struct initialization in functions
On Wednesday, 9 March 2016 at 13:39:57 UTC, Martin Tschierschke wrote: On Wednesday, 9 March 2016 at 12:55:16 UTC, Idan Arye wrote: [...] [...] [...] That's true. [...] Yes, OK. What I like about the :symbol notation is that a string which is used only to distinguish between different objects in a Hash / AA has a completely different purpose than a string that is displayed to the user. I think that writeln("Name", a[:name]); is easier to read than writeln("Name", a["name"]); especially if the structures are getting bigger, or you are in a vibe.d jade template string where you would have to use additional quoting to write: a(href="a[\"url\"]") a["link_text"] a(href="a[:url]") a[:link_text] Maybe I should get rid of this by using a struct for my mysql results to display? (=> a.url and a.link_text) Just my 2 cents :-) If nested strings are what's bothering you, you can always use backticks. Or opDispatch (though I don't recommend it, as it tends to mess up compilation errors). But these won't let you have fields with different types, and since Voldemort types are so easy in D you are probably better off with structs.
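A sketch of the struct suggestion, with hypothetical names matching the mysql-row example above - a function can return a "Voldemort type" declared inside it, so no top-level struct declaration is needed:

```d
// Hypothetical helper: wraps a result row in an ad-hoc struct instead of
// an AA, so fields are accessed as a.url / a.linkText and keep their types.
auto makeLink(string url, string linkText)
{
    struct Link { string url; string linkText; }
    return Link(url, linkText);
}

void main()
{
    auto a = makeLink("http://example.com", "Example");
    assert(a.url == "http://example.com");
    assert(a.linkText == "Example");
}
```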
Re: Named arguments via struct initialization in functions
On Wednesday, 9 March 2016 at 10:06:25 UTC, Martin Tschierschke wrote: On Tuesday, 8 March 2016 at 18:46:02 UTC, Chris Wright wrote: On Tue, 08 Mar 2016 13:52:09 +, Martin Tschierschke wrote: What about this idea? A new word "as" or something similar. fun(ypos as y, xpos as x, radius as r); // different order! The syntax isn't an issue. There was one DIP about named parameters, but it was unpopular. It didn't change *anything* about overload resolution; it only had the compiler check that you provided arguments in the correct order, even if they were all of the same type. Even if there were a DIP everyone liked, nobody is signing up to implement it. DDMD is a little scary and not reflective of good design in D (having been translated by machine from C++). I might take a look, but I probably won't have the time to produce anything useful. I have seen the "DIP about named parameters", and I liked the idea of getting easier-to-read code, because it is more verbose. My idea with the "val as x" was to avoid the need to define the functions in a different way. But as Idan Arye pointed out, it seems to be more difficult: "As far as I understand, the main two problems with named arguments are overloading ambiguity and the fact that argument names are not part of the signature." Another point on my wish list would be to allow string symbol notation like in Ruby. Then using hashes (AAs) for parameters gets more convenient: :symbol <= just short for => "symbol" h[:y] = 50; h[:x] = 100; // <=> h["y"] = 50; h["x"] = 100 For calling a function: auto params = [:y : 50, :x : 100] <=> auto params = ["y" : 50, "x" : 100] Especially when the code is nested in a string for mixin purposes (vibe.d templates). But maybe this breaks down because of the ambiguity if no space is used after a ":" in this place: auto hash = [ 1 :variable] meaning: auto hash = [ 1 : variable ] not auto hash = [ 1 "variable" ] which would make no sense either.
String symbols are Ruby's (and many Lisps', and maybe some other, less popular languages') way to do untyped enums and untyped structs. It's a dynamically-typed-languages thing and has no place in statically typed languages like D. D mimics dynamic typing to some extent by creating types on the fly with its powerful template mechanism - but a new type still needs to be created. D is geared toward reflection on types at compile time, not toward type detection at run time... Allowing something like `auto params = [:y : 50, :x : 100]` won't really solve anything. It works nicely in Ruby, because Ruby has dynamic typing, and with some syntactic sugar you get elegant syntax for dynamic structs. But in a statically typed language like D, you run into the problem that `[:y : 50, :x : 100]` is an associative array with a single determined type for its values, so you can't do things like `[:y : 50, :x : "hello"]` - which greatly limits the usability of this syntax.
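To illustrate the "types created on the fly" alternative: std.typecons.tuple can build an ad-hoc type with named, differently-typed fields, which is exactly what the AA literal cannot do:

```d
import std.typecons : tuple;

void main()
{
    // An AA literal like ["y": 50, "x": "hello"] cannot exist - the values
    // must share one type. A tuple with named fields is a type created on
    // the fly by the template machinery and has no such restriction:
    auto params = tuple!("y", "x")(50, "hello");
    assert(params.y == 50);
    assert(params.x == "hello");
}
```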
Re: Named arguments via struct initialization in functions
On Sunday, 6 March 2016 at 17:35:38 UTC, Seb wrote: Hey all, I wanted to relive the discussion on named arguments and ping for its current status. There is a bunch of examples to show how needed a unified solution for this problem is, let me give you one from phobos [2].

```
// I want to allow downsizing
iota(10).sliced!(Yes.replaceArrayWithPointer, Yes.allowDownsize)(4);
```

There is of course the alternative solution that an author overloads his function to the utmost, but this results in complexity and duplicated code (see e.g. redBlackTree in phobos [3]). Currently the best solution AFAICT is to use a struct to pass such flags, like

```
struct Options { int x; int y = 1; int z = 2; }

auto fun(Options options)
{
    return options.x + options.y + options.z;
}

Options options = {x: 4, z: 3};
auto a = fun(options);
```

There are also other workarounds as discussed in [1] (e.g. with CTFE string analysis [4]). In general there are two solutions to this problem: 1) get true named parameter support in D (probably complicated) 2) allow struct inits in functions - e.g. fun({x: 4}) For 2) Jacob Carlborg proposed something similar three years ago. In his case he proposed anonymous structs, which might be more generally applicable; however, just creating the struct seems easier and allows more. It doesn't seem that complicated to me as the compiler already knows the type of the argument. Using structs is not ideal, because one can't require parameters, but this can be solved by passing those parameters as normal ones, like `sliced(4, {allowDownsize: true})`, and it creates some maybe unnecessary overhead. However it is probably the easiest solution right now. What are your thoughts on this issue?
On a side note: many templated functions are also complicated and experiencing this issue, so it might be worth thinking about that too ;-) Cheers, Seb [1] http://forum.dlang.org/post/pxndhoskpjxvnoaca...@forum.dlang.org [2] https://github.com/DlangScience/mir/issues/18 [3] https://github.com/D-Programming-Language/phobos/pull/4041/files [4] https://github.com/timotheecour/dtools/blob/master/dtools/util/functional.d As far as I understand, the main two problems with named arguments are overloading ambiguity and the fact that argument names are not part of the signature. Using a struct to define the named arguments solves the signature problem, but the ambiguity problem remains - it simply gets shifted from "which overload to choose" to "which struct to choose". The consensus here seems to be that ambiguous code should result in compilation errors (and that it should be easy for the compiler to detect that the code is ambiguous!), and that if structs are used to declare named arguments, then solving overloading ambiguities should be done by constructing the struct explicitly. Then everyone disagrees regarding how structs should be created explicitly with named fields. I would like to suggest an idea that will allow solving ambiguities without the need for new syntax (beyond the syntax for declaring and using named arguments), be consistent with existing D features, and as a bonus provide a much nicer syntax: Make it like variadic functions! Declaring the named arguments variadically will be done by adding `...` after a struct argument:

```
struct Options { int x; int y = 1; int z = 2; }
auto fun(Options options ...)
```
We'll need a syntax for specifying the arguments - but that's more a matter of taste than an actual technical problem, and it's going to be bikeshedded over and over, so for the purpose of describing my idea let's pick a Ruby-style `:` (because `=` would break the rule of if-it-compiles-as-C-it-should-work-like-C):

```
fun(x: 4, z: 3);
```

I promised to solve ambiguity, right? Well, just like regular variadic functions can be called with an explicit array, named-arguments variadic functions should also be callable with an explicit struct:

```
Options options = {x: 4, z: 3};
fun(options);
```

Which brings us back to square one, right? Wrong! Because at this point, it should be trivial to give structs a second default constructor:

```
this(typeof(this) that ...)
{
    this = that;
}
```

(the actual implementation may be more efficient and robust (preferably syntactic sugar for postblit), but you get the idea). So now structs can be constructed with named arguments, and ambiguities can be neatly solved with:

```
fun(Options(x: 4, z: 3));
```

P.S.: If a struct-based solution is ever implemented for named arguments, I do hope it'll support template inference, where a simple PODS (Plain Old D Struct... sorry...) is automatically created based on the supplied arguments' names and types (and order!). UDA support for that could be nice too...
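For reference, the struct-as-options fallback from Seb's post already works in today's D with a static initializer (names taken from the example above); only the call-site sugar is missing:

```d
struct Options { int x; int y = 1; int z = 2; }

auto fun(Options options)
{
    return options.x + options.y + options.z;
}

void main()
{
    // No new syntax needed for the explicit-struct form:
    Options options = {x: 4, z: 3};
    assert(fun(options) == 8); // 4 + 1 (default y) + 3
}
```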
Re: Parameterized Keywords
On Tuesday, 8 March 2016 at 17:28:59 UTC, Bob the Viking wrote: VGhpcyBpcyBidWxsc2hpdC4g Don't post things like this - some of us are Vim users here...
Re: Parameterized Keywords
On Monday, 7 March 2016 at 05:56:54 UTC, Patience wrote: Just curious if anyone can see the use for them? e.g., for[32], switch[alpha] //alpha is a user type, if[x](x < 32) etc.. The idea is two-fold: one is to allow polymorphic keyword behavior (its behavior depends on the argument) and the other is to allow the code itself to manipulate the behavior. Of course, the behavior isn't easily defined for the above examples. So far all I can think of is various simple modifications: foreach[?] <- executes only if there are one or more iterations and checks for nullity. This allows one to avoid writing the if check. int[size] <- creates an integer of size bits. etc... I can't see much benefit but maybe someone has some good uses? These could be thought of as "attributes for keywords". The programmer would have to define precisely what if[x] means. Obviously the biggest objection is "It will obfuscate keywords"... Please don't obfuscate the question with such responses. This... looks like you have a solution and now you are looking for a problem. I fail to see the need to parameterize the *keywords*, when what you really want to modify is the *constructs* created from these keywords. That is, you don't want to modify the `if` keyword - you want to modify the whole `if` statement (`if (a) { b(); } else { c(); }`). Modifying the keyword is just a means to that end. This distinction is important because the term "keywords", while distinctive and important at the lexical phase, is too broad at the grammar phase and is no longer meaningful once we reach the semantics phase. Parameterizing `int` is different from parameterizing `foreach`, which is different from parameterizing `pure`. Of course, this statement doesn't hold for homoiconic languages, where keywords are actual values and parameterizing them simply means returning a different value.
Also, I'm assuming you mean to allow defining parameterizations at the library level - otherwise they won't be very useful, since you could simply create new syntactic constructs. So, assuming the language is not homoiconic and that users and library authors should be able to define ("overload") their own keyword parameterizations, the keywords will need to be partitioned into several categories: data types, annotations, statements etc. Each category should have its own parameterization overloading rules - so `int[...]` and `float[...]` will have similar rules, which will be very different from `if[...]`'s rules. Now, let's focus, for a moment, on types - because "parameterizing" types is a solved problem - it's called "templating". You usually want parameterized types to also be types - your `int[12]` should be a type, usable wherever types are usable - which is exactly what templated types do - it's easy to implement `CustomSizedInt!12` which does exactly what your `int[12]` does. In fact, templated types are better, because: 1) When you encounter `CustomSizedInt!12` and want to know what it does, you need to search for `CustomSizedInt`'s declaration - an extremely common problem, automated by many simple-to-use IDE features and command line tools. To divine `int[12]`'s meaning you'd have to look for the implementation of the parameterization of `int` with another `int` (or with a `long`, or with a `uint`, or with a...), and you'd need a more complex query to search for it. 2) `int[12]` is a user-defined type, but it's conceptually coupled to `int`. The mere concept of parameterized keyword types is coupled to primitive types. Templated types do not have this limitation - they can depend on whatever they want to - so you have much more freedom.
Even if parameterized keywords could do everything templated types can, you'd have to abuse them (resulting in many code smells) whenever you want a type that doesn't strictly revolve around a primitive type - something that comes naturally with templated types. 3) Code that invokes user-defined behavior should have an "anchor" - something the definition of that behavior revolves around. When you call a function it's the function. When you call a method it's the object's type. When you use an overloaded operator it's the type of one of the operands. When you use `int[12]`? Both `int` and `12` are built in - there is no anchor. If `int[12]` is to be library-defined, it would have to be more like `int[SizeInBits(12)]` (so `SizeInBits` is the anchor), and suddenly it doesn't look that syntactically appealing compared to `CustomSizedInt!12`... So, that was for keywords that represent types - what about other keywords? I picked types because it's something D (and many other languages) already has, but I claim that the same reasons apply to all other keywords. Let's look at another easy one - annotations. Let's say you want to parameterize `pure` - e.g. `pure[MyPureModifier]` - so it'll do something a bit different. It'll still be an annotation, so it'll have to
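A minimal sketch of the `CustomSizedInt` template mentioned above - hypothetical and storage-only, not a full integer implementation:

```d
// Hypothetical CustomSizedInt from the discussion: a templated type that
// stores its value in a fixed number of bits (sketch, no arithmetic ops).
struct CustomSizedInt(size_t bits) if (bits >= 1 && bits <= 64)
{
    private ulong payload;

    void opAssign(ulong v)
    {
        // keep only the low `bits` bits
        payload = bits == 64 ? v : v & ((1UL << bits) - 1);
    }

    ulong value() const { return payload; }
}

void main()
{
    CustomSizedInt!12 x; // usable wherever a type is usable
    x = 5000;            // 5000 = 0x1388; only 0x388 fits in 12 bits
    assert(x.value == 0x388);
}
```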
Precision of double in toJSON(and in general)
http://dpaste.dzfl.pl/9a0569b20756 toJSON is converting with the default precision, which fits `float`, even though the actual type is `double`. In general, printing a `double` with "%s" fits `float` more than `double`. The problem, as I understand it, is that "%s" is converted to "%g" instead of "%f" for both `float`s and `double`s, and "%g"'s default precision of six significant digits roughly matches `float`'s ~7 digits rather than `double`'s ~16. I assume "%g" is used because "%f" always prints in fixed-point format, while "%g" can choose between the fixed-point and scientific formats. This is all well for regular printing, but when converting to JSON we needlessly lose precision. Wouldn't it be better to convert to JSON using "%f", or to generally increase the precision of `double` when using "%s"?
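A small illustration of the precision gap, using std.format directly (assuming the C-style "%g" semantics described above):

```d
import std.format : format;

void main()
{
    double d = 1.0 / 3.0;
    // "%g" (what "%s" falls back to) keeps only 6 significant digits:
    assert(format("%g", d) == "0.333333");
    // 17 significant digits are enough to round-trip any double:
    assert(format("%.17g", d) == "0.33333333333333331");
}
```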
Re: Neovim autocompletion using deoplete
On Sunday, 21 February 2016 at 22:56:23 UTC, landaire wrote: On Sunday, 21 February 2016 at 19:46:50 UTC, Idan Arye wrote: No offense LoL. People who have time to get offended by hearing their project is inferior to another project should better use that time to improve that project of theirs! I'm not sure where I said or how you got the idea I was offended, but thanks for the encouragement. Sorry, I thought my English was better than that... I meant "no offense taken"...
Re: Neovim autocompletion using deoplete
On Sunday, 21 February 2016 at 18:34:42 UTC, landaire wrote: On Sunday, 21 February 2016 at 13:36:26 UTC, maik klein wrote: Good job, but could you also explain why you think it's better than dutyl? I currently also use nvim with dutyl (dcd, dscanner, fmt) and youcompleteme and I haven't run into any issues. Is it possible to filter autocompletions based on the type? Like modules, struct, alias, enum etc? Sorry, I didn't mean to say it's objectively better than dutyl, but it's been working better for me. dutyl has some solid features that this plugin does not have, such as calltip completion (simple enough to add), finding ddoc, and finding declarations. As Idan pointed out, this fits my own workflow a little better since I don't have to manually start/stop dcd-server, and I've found that it's a little more reliable with triggering completions (although this may have been a misconfiguration on my end). Currently you cannot filter autocompletions based on the type but that might be easy to add depending on what you mean. No offense LoL. People who have time to get offended by hearing their project is inferior to another project should better use that time to improve that project of theirs! At any rate, programming (any kind of engineering, actually) is the art of tradeoffs. I could add these features to Dutyl, but then I'd have to add more requirements, which is something I don't want to do.
Re: Neovim autocompletion using deoplete
On Sunday, 21 February 2016 at 13:36:26 UTC, maik klein wrote: Good job, but could you also explain why you think it's better than dutyl? I currently also use nvim with dutyl (dcd, dscanner, fmt) and youcompleteme and I haven't run into any issues. As Dutyl's author, I can think of a few things: 1) Dutyl requires the user to run DCD in the background. It provides helpers to start and stop the DCD server, but it's still up to the user to run them. I did not want to start DCD automatically when I have no way of closing it when the user exits Vim. deoplete-d runs the DCD server as a Python subprocess, so it can fully control its lifetime. 2) Dutyl runs dcd-client synchronously and parses its output in the main Vim process, which freezes Vim. The freezes are very short - DCD is very fast and there is not much to parse - but they still exist. deoplete-d, on the other hand, uses the deoplete framework to do this asynchronously, so the completions seem instantaneous. This is mainly an illusion - they should take about the same time (OK, maybe a bit faster - after all, Python is faster than VimScript), and they only feel faster because they are done asynchronously and there are no freezes, but I can see how it improves the UX.
Re: Evolutionary Programming!
On Tuesday, 5 January 2016 at 16:10:21 UTC, Jason Jeffory wrote: 1. Grammar independence - People speak different languages and perceive logic through different symbols. It is the nature of life and knowledge. I want to focus on that. If multiple developers need to work on the same project, having different grammars is not a very good idea. Consistency is important - we promote coding standards to cover the parts not enforced by the grammar - so mixing different grammars in the same project is a huge no-no. Promoting multiple grammars would then be a disservice to programmers, since each project will have its chosen grammar, and the number of projects each developer is comfortable working on will be drastically reduced. Of course, as already suggested in this thread, AST editors could do that translation, and having me use a different grammar than you will become as simple as my text editor highlighting keywords in different colors than yours. Then again - if this is the case, using different grammars will also be about as meaningful as having different color schemes... However, using different grammars can serve a different goal - reducing the boundaries between libraries. If everyone uses different grammars of the same language, you won't have to give up on or struggle with a cool library just because it's written in a different language than the one you use. This problem is currently addressed by platforms like Java and .NET. Languages on these platforms are compiled into bytecode, with an expressive enough format that allows any language on the platform to use libraries written in other languages - without the need to translate the library's interface to the client language. The main hurdle with these - which I believe your dream language will have to face as well when it tries to support multiple grammars - is supporting the many language-backed idioms that modern languages use to make our code cleaner and safer.
Let's consider, for example, the Dispose Pattern (https://en.wikipedia.org/wiki/Dispose_pattern). The syntax in Java and in Python looks quite similar:

```
try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
    return reader.readLine();
}
```

```
with open(filename) as reader:
    print(reader.readline())
```

So, since Python has a JVM version - Jython - you would expect to be able to do this:

```
with BufferedReader(FileReader("chapter-tracker.sh")) as reader:
    print(reader.readline())
```

But no - you get an error:

```
AttributeError: 'java.io.BufferedReader' object has no attribute '__exit__'
```

So, what happened? While the idioms look similar in Java and Python, the semantics behind them are quite different. In Java, `try (Foo foo = new Foo())` will simply call `foo.close()` when the block is exited. In Python, `with Foo() as foo:` will call `__enter__()` on `Foo()`'s result, assign its result to `foo`, and when the block exits it calls `__exit__(...)` on `Foo()`'s result from back then (not on `foo`!). To solve this, you'd have to define such idioms as part of the platform, so that all the languages (/grammars) could follow them. But this comes with its own price: - The list of idioms you'd want to make official can become quite large - making the interface between the platform and the languages/grammars more complex, and therefore the implementation of such languages more complex. This is something platform/language designers usually try to avoid. - Language/grammar designers will want to add new idioms to their languages/grammars, but the process of adding a new idiom to the platform will be quite long. This will push language/grammar designers to just add their new idiom into their own creation, without caring about consistency in the platform's ecosystem. So, like everything else in our profession - this is a matter of tradeoffs.
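For comparison, D's own take on the same idiom is RAII plus scope(exit); a self-contained sketch (it writes a throwaway file so the example actually runs):

```d
import std.file : remove, write;
import std.stdio : File;

void main()
{
    write("example.txt", "hello\n"); // create a file just for the demo
    scope(exit) remove("example.txt");

    // File is an RAII struct; scope(exit) covers cleanup for resources
    // that have no destructor of their own.
    auto reader = File("example.txt");
    scope(exit) reader.close();
    assert(reader.readln() == "hello\n");
}
```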
Re: Complexity nomenclature
On Friday, 4 December 2015 at 01:27:42 UTC, Andrei Alexandrescu wrote: I know names are something we're awfully good at discussing :o). Destroy! Andrei I find it ironic that this thread has moved to discuss how to name complexity/running-time...
Re: improving scope(finally/success)
On Thursday, 3 December 2015 at 11:41:29 UTC, Tomer Filiba wrote: it'd be really helpful if scope() statements got hold of the return value or exception, e.g., scope(success, retval) { writeln("the retval is ", retval) } scope(failure, ex) { if(typeid(ex) == typeid(MyException)) { callTheCops(); } } it would make logging very easy, and since these statements are basically code-rewrites i don't suppose it would be hard to implement. from a syntax point of view: scope(success[, VARNAME]) // where VARNAME would be the return value (a const tmp variable?) scope(failure[, VARNAME]) // where VARNAME would hold the exception (a Throwable) i mean, code such as int f() { scope(exit) writeln("bye"); return 5; } is rewritten as something like int f() { try { auto tmp = 5; } finally { writeln("bye"); } return tmp; } so `tmp` is already there for the finally clause (modulo scoping issues) int foo(bool cond) { // scope 1 { // scope 2 scope(success, retval) { writeln("the retval is ", retval) } if (cond) { // scope 3 return 1; } } return 2; } `foo(true)` should obviously print "the retval is 1", but what should `foo(false)` print? Its `scope(success)` block should run when it exits scope 2, but the return value is only determined at the end of scope 1.
Re: Pseudo namespaces
On Thursday, 3 December 2015 at 20:51:02 UTC, Andrei Alexandrescu wrote: I vaguely remembered I saw something like this a while ago: http://dpaste.dzfl.pl/f11894a098c6 The trick could be more fluent, but it might have merit. Has anyone explored it? Is it a viable candidate for becoming a D idiom? I was looking at this in conjunction with choosing a naming convention for container functions. Some functions are "stable" so that would be part of their name, e.g. insertStable or stableInsert. With this, it's possible to write lst.stable.insert. Andrei People are going to hate me, but http://dpaste.dzfl.pl/851d1d1f5e4b
Re: Pseudo namespaces
On Friday, 4 December 2015 at 01:37:35 UTC, Mike wrote: On Friday, 4 December 2015 at 01:04:33 UTC, Idan Arye wrote: People are going to hate me, but http://dpaste.dzfl.pl/851d1d1f5e4b Doesn't seem to scale to member access: http://dpaste.dzfl.pl/37193377524c /d649/f987.d-mixin-3(7): Error: 'this' is only defined in non-static member functions, not fun Is there a way to make it work? Mike Yea, my bad. Initially I used a template mixin, but the syntax was ugly so I changed it to a regular template + alias. When you use a template mixin, member access does work: http://dpaste.dzfl.pl/9ca85cbecea7
Re: Complexity nomenclature
On Friday, 4 December 2015 at 01:27:42 UTC, Andrei Alexandrescu wrote: Consider the collections universe. So we have an imperative primitive like: c.insertAfter(r, x) where c is a collection, r is a range previously extracted from c, and x is a value convertible to collection's element type. The expression imperatively inserts x in the collection right after r. Now this primitive may have three complexities: * linear in the length of r (e.g. c is a singly-linked list) * linear in the number of elements after r in the collection (e.g. c is an array) * constant (c is a doubly-linked list) These complexities must be reflected in the name of the primitives. Or perhaps it's okay to conflate a couple of them. I know names are something we're awfully good at discussing :o). Destroy! Andrei The complexity of each operation is a property of the data structure being used. If each collection type has its own set of method names based on the complexity of its operations, we won't be able to have templated functions that operate on any kind of collection (or at the very least, these functions will be really tedious to code).
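A small Python sketch of the objection (all names here are hypothetical illustrations, not D's actual container API): a generic helper written against one uniform method name works with any collection that provides it, regardless of that collection's complexity. If the name instead encoded the complexity (say `insertConst` vs `insertLinear`), the generic helper could not exist:

```python
import bisect

def add_all(coll, values):
    """Generic: works with ANY collection exposing a uniformly-named `insert`."""
    for v in values:
        coll.insert(v)

class Bag:                      # O(1) insert: append to a list
    def __init__(self):
        self._items = []
    def insert(self, v):
        self._items.append(v)

class SortedBag:                # O(n) insert: keep items ordered
    def __init__(self):
        self._items = []
    def insert(self, v):
        bisect.insort(self._items, v)

b, s = Bag(), SortedBag()
add_all(b, [3, 1, 2])
add_all(s, [3, 1, 2])
print(b._items)  # [3, 1, 2]
print(s._items)  # [1, 2, 3]
```

The same tension exists for D templates: a template constrained on `insertAfter` compiles against any conforming container, but not if each container spells the primitive differently.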
Re: I hate new DUB config format
On Wednesday, 2 December 2015 at 16:15:04 UTC, Nick Sabalausky wrote: On 11/25/2015 06:53 AM, Suliman wrote: I find the SDLang format much cleaner to use than JSON But it's dead format! Nobody do not use it. JSON easy to read, there is a lot of it's checkers and formating tools. Yes, it's not perfect, but now it's _standard_. Personally I'd prefer yaml, because it's much easier to read for humans. But what we will do with SDL? Who know how to parse, validate it with D, and with another language? Even ini is better, because everybody know it. This whole debate is completely moronic. 1. With DUB, which format is "default" means next to nothing. 2. I don't know where in the world you've gotten the idea you can no longer just copy-paste deps. That's patently BS. 3. SDLang is fucking trivial. Any programmer worth at least half their salt (ie anyone here, including you) could've learned it in same time you've already spent bellyaching about it. 4. Fuck "standard/popular/common". Seriously, fuck it. That sort of bullshit nonsense attitude is EXACTLY why half our industry is as completely fucked as it is with complete and total shit like PHP, JS, Node, JVM, Angular, JS Toolkit #five-billion-and-one, gaudy metro colors, meaningless hieroglyphs, walled gardens, web pages with near-zero content just empty space and screen-sized images with a quick slogan or two, text entry boxes that are *literally* slower than a goddamn Apple II, etc, etc etc... Seriously, enough of this goddamn "fashion before engineering" bullshit. That load of crap is why I'm right on the verge of completely jumping ship from what's left of this goddamn industry. If we start pulling that shit as a matter of course here too, I'm fucking gone, good riddance. 
The issue is not with humans reading and writing SDLang files - like you said, the syntax is not hard, and besides - the default should be enough for most of the basic learning projects one can make, so by the time you actually need to edit dub.sdl you should know enough D to not be learning two things at once. No - the problem is with making tools (IDEs/editor plugins/scripts) parse and emit it. Yes, computers can read SDLang files - but you need to find a library (or write one yourself) that fits your language of choice. I, for example, have a Ruby script that creates small DUB projects for me when I want to test stuff. Since usually I want to interact with small parts of the huge application, I need to do some specific configuration on the DUB project file, so I have my Ruby script load it with its JSON module, perform the necessary changes, and save over the original file. Back then, when DUB moved to SDLang, my script obviously crashed because `dub init` now creates SDLang config files by default. This can be changed, but I thought I would go with the flow and modify my script to edit the SDLang file. I'm always eager to test new technologies (not adopt - test. I want to experience them before I decide whether to adopt them or not). So, the first step was to google for a Ruby Gem that deals with SDLang. Should be easy, right? Wrong! Neither "Ruby SDL" nor "Ruby SDLang" yielded any relevant results (at least not in the first pages of Google), and only recently, when this thread started and I decided to attempt it again, did I figure out you need to search for "Ruby Simple Declarative Language". Or to use rubygems.org's search engine. But that doesn't matter - back then I couldn't find it, so I instead just set my script to run `dub init` with `--format=json`, and now I no longer feel motivated to convert my script to use SDLang...
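The load-modify-save step the script performs is trivial precisely because JSON parsers are ubiquitous. Here is a Python analogue of it (the original script was Ruby; the project file content, the dependency name and the version spec below are made up for the demo):

```python
import json
import os
import tempfile

def tweak_dub_project(path, dep_name, version_spec):
    """Load a dub.json, add a dependency, write it back (sketch)."""
    with open(path) as f:
        config = json.load(f)
    config.setdefault("dependencies", {})[dep_name] = version_spec
    with open(path, "w") as f:
        json.dump(config, f, indent=4)

# Demo with a throwaway project file.
path = os.path.join(tempfile.mkdtemp(), "dub.json")
with open(path, "w") as f:
    json.dump({"name": "testproj"}, f)

tweak_dub_project(path, "vibe-d", "~>0.8.0")

with open(path) as f:
    print(json.load(f)["dependencies"])  # {'vibe-d': '~>0.8.0'}
```

Doing the same against dub.sdl requires first locating an SDLang library for your scripting language - which was the whole problem.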
If you try to push a new technology, and when people complain about problems of that technology your reply is that they can simply use the better alternative instead - well, if you think this is good for the ecosystem let me tell you about a project called Tango...
Re: I hate new DUB config format
On Wednesday, 2 December 2015 at 22:57:31 UTC, CraigDillabaugh wrote: On Wednesday, 2 December 2015 at 20:45:33 UTC, Idan Arye wrote: On Wednesday, 2 December 2015 at 16:15:04 UTC, Nick Sabalausky wrote: [...] The issue is not with humans reading and writing SDLang files - like you said, the syntax is not hard, and besides - the default should be enough for most of the basic learning projects one can make, so by the time you actually need to edit dub.sdl you should know enough D to not be learning two things at once. [...] Were you looking for this: https://github.com/ikayzo/SDL.rb Yes, and eventually I found it - when I searched for "Ruby Simple Declarative Language". My point was not that it doesn't exist, but that it was much harder to find than it should have been. BTW - just to see what I'd get, I tried searching for a Python implementation: "Python SDL", as expected, yields only results related to Simple DirectMedia Layer. With "Python Simple Declarative Language", the only related thing I found was ikayzo's GitHub page (which contains SDLang implementations for Java, .NET and Ruby - but not for Python). And here comes the fun part: "Python SDLang" does not find an SDLang implementation for Python - at least not on the first page. But - the first 3 results are about an SDL implementation... ...it's SDLang-D! Yes, you got that right - I searched for something related to Python(!!!) and got a result for D. So yea, maybe SDLang wasn't created specifically for DUB, but it might as well have been. Either that, or D suddenly became more popular than Python. I'll let you judge which of these two alternatives is more probable.
Re: I hate new DUB config format
On Tuesday, 1 December 2015 at 02:46:46 UTC, lobo wrote: On Monday, 30 November 2015 at 21:05:08 UTC, Ola Fosheim Grøstad wrote: On Monday, 30 November 2015 at 20:42:23 UTC, Suliman wrote: Should we try to implement yet another language for writing building config? No, I wasn't really talking about a build system for D, more like a hypothetic generic distributed build system for all languages. But I've read that Google uses a distributed build system for their big C++ applications. So people are working on such solutions already. Maybe we should use any of existence language that may be very good for it, like Red. It have very small foot prints so it can be easy to embeded to build system. I've never heard of Red, do you have a link? Red started out as a Rebol 2 clone and last I checked (18 months ago) it was still Rebol 2 compatible. http://www.red-lang.org/ bye, lobo Red is not Rebol2 compatible - it's outright impossible to have a single script file that'll run without errors on both Rebol2 and Red. The reason is that Rebol2 requires the first thing in the file to be a `REBOL` preamble, while Red requires it to be a `Red` preamble (though it's generous enough to allow a shebang before it). Since you can only have one preamble, and it can't be both `REBOL` and `Red`, I refuse to call them compatible even if every Rebol2 command can be copied to a Red script and run there! At any rate, please don't use any Rebol dialect in DUB (or for anything else, for that matter. Just - don't use it). Many languages have awkward quirks, but Rebol seems to be a collection of awkward quirks with a programming language sometimes accidentally hiding in between, created by someone who thought Perl is too readable and shell scripts have too strict type systems.
Re: Is there anyone willing to do the videos 18sex website?
On Tuesday, 1 December 2015 at 02:14:07 UTC, Walter Bright wrote: On 11/30/2015 2:18 PM, Jonny wrote: On Sunday, 29 November 2015 at 12:23:16 UTC, tired_eyes wrote: On Sunday, 29 November 2015 at 02:19:30 UTC, mcss wrote: I want to find a partner to do the world's largest 18sex video site. Lol, such an ambitious project! Dlang definetely needs a success story of that kind :) Please keep us posted! I'm sure if there was an incentive, like free sex with the female clients, then it wouldn't be that difficult to amass a group of programmers to do the work! This thread has gone far enough. Please stop. Still more productive than the DUB format thread...
Re: This Week in D
On Monday, 30 November 2015 at 23:54:15 UTC, Meta wrote: On Monday, 30 November 2015 at 22:32:50 UTC, Adam D. Ruppe wrote: [...] I think it looked pretty pointless to people on the inside as well Just because the discussion is pointless doesn't mean defeat is acceptable!
Re: JSON5 support for std.json
On Monday, 30 November 2015 at 01:02:38 UTC, Jonathan M Davis wrote: On Monday, 30 November 2015 at 00:30:07 UTC, Chris Wright wrote: I'm considering adding JSON5 support to std.json and want to know how well this would be received. JSON5 is pretty much just modern JavaScript's object literal format, allowing things like comments, trailing commas, and single-quoted strings. I only plan to add support for parsing JSON5, not emitting it. So there should be no compatibility concerns with what std.json emits. Since it's technically a breaking change (people can use std.json currently to validate that a document is valid JSON), I am inclined to make JSON5 be off by default and add an option to parse JSON5 rather than JSON1. Anyone have strong feelings about this? Having a JSON 5 parser makes some sense, but I don't think that it makes any sense to have one which can parse JSON 5 but not emit it. Either you're dealing with JSON 5 or you're not, and the historical approach of the internet to be lax in what you accept and strict what you emit has proven to be a horrible approach IMHO. It's pointless to emit most of the JSON5 relaxations - I see no reason why the emitter should specifically add trailing commas, and I don't see how it can emit comments... The only JSON5 feature that is not syntactic sugar is the special floating point values, and the current std.json halfway supports it if you use `JSONOptions.specialFloatLiterals` - http://dpaste.dzfl.pl/42bbe53e00f9. "NaN" is emitted the same as JSON5, but for infinity we emit "Infinite" while JSON5 specifies "Infinity".
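As an aside, Python's standard `json` module illustrates the same tension between strict JSON and the JSON5 spellings of special floats: by default it emits `NaN`/`Infinity` (the JSON5 spellings, not valid strict JSON), and a strict emitter must opt out explicitly. A quick sketch:

```python
import json

# By default, Python's json module emits the same spellings JSON5 uses
# for special floats, even though they are not valid strict JSON:
print(json.dumps(float("nan")))    # NaN
print(json.dumps(float("inf")))    # Infinity
print(json.dumps(float("-inf")))   # -Infinity

# A strict emitter must explicitly refuse them:
try:
    json.dumps(float("inf"), allow_nan=False)
except ValueError as e:
    print("rejected:", e)
```

So the "Infinity" spelling is the established one; emitting "Infinite" would diverge from both JSON5 and the de-facto behavior of other ecosystems.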
Re: I hate new DUB config format
On Sunday, 29 November 2015 at 18:54:04 UTC, Russel Winder wrote: Does anyone still use Maven – surely the world has moved to Gradle with it's Groovy scripts using the Gradle DSL. If I had a silver coin for every time the world should have moved to a better technology...
Re: Feature Request: Hashed Based Assertion
On Thursday, 26 November 2015 at 11:12:07 UTC, tcak wrote: I brought this topic in "Learn" a while ago, but I want to talk about it again. [...] So it's not just the function's signature you want to hash, but its code as well? What about functions called from the API function? Or functions that set data that'll later be used by the API functions? If anything, I would have hashed the unittests of the API function. If the behavior of the API function changes in a fashion that requires a modification of the unittest, then you might need to alert the business logic programmers. Anything less than that is just useless noise that'll hide the actual changes you want to be warned about among the endless clutter created by trivial changes.
Re: I hate new DUB config format
On Wednesday, 25 November 2015 at 19:05:15 UTC, Walter Bright wrote: The main problem with SDL is its name. It's not an overly popular project - it doesn't even have an article on Wikipedia. That alone is not a problem - if we had something against non-mainstream projects we wouldn't be using D - the problem is that the not-so-popular SDL shares its initials with "Simple DirectMedia Layer" - a super-popular project with bindings for most languages. This makes it very hard to google for Simple Declarative Language - because most of the things you'll find are about Simple DirectMedia Layer. If you google "D programming language SDL" you'll find it. I just did - and all the first-page results were about "Simple DirectMedia Layer". The second page had two results that had something to do with "Simple Declarative Language" - but they weren't landing pages or anything, just source files that happened to deal with DUB (https://travis-ci.org/D-Programming-Language/dub-registry and https://coveralls.io/files/917374709). Google search results are a bit customized, so other people might get better results, but I still believe SDL is extremely unsearchable.
Re: I hate new DUB config format
On Wednesday, 25 November 2015 at 10:17:02 UTC, Suliman wrote: I think that using SDL format was big mistake. Not only I do not want to spend time in learning yet another dead config format that now use only one project -- DUB. In time when DUB used json it was not perfect, but at last it was standard and everybody can read it. Now when I come to code.dlang.org I can't simply do copy-past of dependence. I need go to docs page, and read how to include it. Also I do not see any projects that are migrate to SDL. Everybody continue to use JSON. So please, return JSON back as default, or very soon we will see that nobody do not submit packages to code.dlang.org and nobody do not use DUB for their own projects. Please vote about SDL config format http://www.easypolls.net/poll.html?p=565587f4e4b0b3955a59fb67 If SDL will stay by default I will prefer to move to any other build system or will downgrade to old version of DUB. The main problem with SDL is its name. It's not an overly popular project - it doesn't even have an article on Wikipedia. That alone is not a problem - if we had something against non-mainstream projects we wouldn't be using D - the problem is that the not-so-popular SDL shares its initials with "Simple DirectMedia Layer" - a super-popular project with bindings for most languages. This makes it very hard to google for Simple Declarative Language - because most of the things you'll find are about Simple DirectMedia Layer.
Re: Pattern Based Design
On Tuesday, 17 November 2015 at 19:05:30 UTC, Jonny wrote: Being able to factor a project into well understood patterns that are loosely bound yet cohesive is fundamental for a successful project. Does D have an ability to template patterns(or even better yet, a uml like interface that can emit D code) effectively? i.e., saves much more time than doing it by hand? As I become more knowledgeable about the fundamental programming concepts I realize that modern programming hasn't yet brought design to the forefront of programming, where it naturally should be. UML is a start, obviously and there are many reincarnations and variations on the theme. But I imagine that a fully integrated design interface is the way to go. Something that allows you to work in design mode when you are designing and work in implementation mode when you are implementing... keeping the two distinct is what prevents the chaos that tends to happen as a project grows. Proper design is the key to success, is it not? If so, then wouldn't it be wise for D to be more than just a "compiler"? Code folding is a cheesy attempt to reduce implementation details. Code should be more than just a text file of the implementation, but should also include details of the design of the program (what it should do, the patterns involved, how the patterns fit together, etc). About the closest I have seen to the concept I am interested in is the UML applications like Visual Paradigm which attempt to make design the utmost importance. Because these apps are not integrated with the compiler, the compiler cannot take advantage of design details for optimization. Neither can it properly refactor the implementation details when the design changes. Code generation from UML is bullshit. The point of design is to work at higher levels of abstraction than your code - levels beyond what can be automatically compiled to executable code.
By working at such high levels, you can skip many implementation details that can be filled in later by the human programmers, which allows you to easily apply design changes (before you write the actual code) and which provides you with a better overview of the whole project or of specific modules, functionalities and flows. If you want to generate actual code from the design, you must limit the abstraction level of the design to one that can be automatically compiled to executable code - a limitation that robs you of the benefits mentioned above and essentially makes the format of your design a graphical programming language. Such languages have been created before, and never got traction - and for a good reason! Over the years, programmers have developed a large array of tools for working with textual, line-oriented source code files - SCMs, sophisticated text editors, search tools, text manipulation tools and more. Many language-agnostic tools can work on any source files, provided that they are composed of textual lines of code. Graphical languages don't satisfy that condition - so you can't use these tools with them.
Re: Scientific computing in D
On Tuesday, 10 November 2015 at 01:53:25 UTC, bachmeier wrote: On Tuesday, 10 November 2015 at 00:01:19 UTC, Idan Arye wrote: Weird approach. Usually, when one wants to use an interpreted language as a host to a compiled language, the strategy is to precompile the compiled language's code and load it as extensions in the interpreted code. That's what dmdinline does. From the examples, it seems like it doesn't. It seems like it's compiling D code on the fly, rather than loading pre-compiled libraries as R extensions.
Re: Scientific computing in D
On Monday, 9 November 2015 at 21:05:35 UTC, bachmeier wrote: On Monday, 9 November 2015 at 20:30:49 UTC, Gerald Jansen wrote: On Monday, 9 November 2015 at 19:31:14 UTC, Márcio Martins wrote: I have been running some MCMC simulations in Python ... Is anyone doing similar stuff with D? Unfortunately, I couldn't find any plotting libraries nor MATLAB-like numerical/stats libs in dub. This seems like another area where D could easily pick up momentum with RDMD and perhaps an integration with Jupyter which is becoming very very popular. see http://dlangscience.github.io/ And here is the gitter discussion site: https://gitter.im/DlangScience/public I've got this project https://bitbucket.org/bachmeil/dmdinline2 to embed D inside R on Linux. Unfortunately the documentation isn't good. I'm currently working on going in the other direction, embedding R inside D. There are, of course, many good MCMC options in R that you could call from your D code. Weird approach. Usually, when one wants to use an interpreted language as a host to a compiled language, the strategy is to precompile the compiled language's code and load it as extensions in the interpreted code.
Re: assert(0)
On Saturday, 7 November 2015 at 21:24:02 UTC, Fyodor Ustinov wrote: Colleagues, IMHO: If "assert" catch "fundamental programmers errors" - it should hang programm immediately with and without "-release". If "assert" catch not "too fundamental" errors - assert(0) should emit "Error" in both cases. Third option - assert(0) - it's a "special case" and halt program in both cases. But there should't be so that in one case the "fundamental error" and the other "not fundamental". It's my opinion. WBR, Fyodor. I strongly disagree. Without `-release`, the job of catching "fundamental programmer errors" is not to stop the program and prevent further corruption (which is pointless anyway - how do you know executing the `scope(exit)` and `scope(failure)` blocks will increase the corruption? Maybe they'll reduce it?), because when you are developing you shouldn't be working with the only copies of important data files. Without `-release`, the role of `assert`s (and `Error`s in general) is to help the programmer fix these bugs. When I have a "fundamental programmer error" in my code, I prefer to get a stack trace from the exception mechanism than to find a core dump (that was hopefully generated) and rely on GDB's excellent support for D to analyze it. Besides, there are some very specific cases where it's acceptable to catch `Error`s - one of them is when you have a logging mechanism that can log these errors in a way/place that's easier for you to read - and then, of course, re-throws them. Halting the program on errors prevents this logging mechanism from doing its thing.
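The log-and-rethrow pattern described at the end is language-agnostic. Here is a minimal Python analogue (the `risky` function and logger name are made up for the demo): the handler gets to record the failure in a readable place, yet the error still propagates and terminates the computation:

```python
import logging
import sys

logging.basicConfig(level=logging.ERROR, stream=sys.stderr)
log = logging.getLogger("app")

def risky():
    # Stand-in for code containing a fundamental programmer error.
    raise AssertionError("fundamental programmer error")

def main():
    try:
        risky()
    except AssertionError:
        # Log where it's easy for us to read, then re-raise:
        # the handler observes the error but does not swallow it.
        log.exception("assertion failed; re-raising")
        raise

try:
    main()
except AssertionError:
    print("still propagated")
```

If asserts halted the process outright, the `log.exception` line (and any `scope(exit)`-style cleanup) would never get a chance to run.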
Re: DIP 57: static foreach
On Tuesday, 3 November 2015 at 20:28:43 UTC, Andrei Alexandrescu wrote: On 11/03/2015 03:12 PM, Shammah Chancellor wrote: Ditto. This needs `static continue` and `static break`. Without this functionality, the control flow in `static foreach` becomes very unwieldy. "continue" and "break" (no static) should just work. -- Andrei Depending on what you want `continue` and `break` to do. Consider this: http://dpaste.dzfl.pl/925e4aec6173 Note that the pragma gets compiled for `true` even though we `continue` before it. This is the expected behavior from `continue`, but a `static continue` should have skipped that, and a static break should have skipped the compilation of the rest of the AliasSeq.
Re: Please vote for the DConf logo
On Wednesday, 4 November 2015 at 09:30:30 UTC, Andrei Alexandrescu wrote: Reply to this with 1.1, 1.2, 2, or 3: 1) by ponce: Variant 1: https://github.com/p0nce/dconf.org/blob/master/2016/images/logo-sample.png Variant 2: https://raw.githubusercontent.com/p0nce/dconf.org/4f0f2b5be8ec2b06e3feb01d6472ec13a7be4e7c/2016/images/logo2-sample.png 2) by Jonas Drewsen: https://dl.dropboxusercontent.com/u/188292/g4421.png 3) by anonymous: PNG: http://imgur.com/GX0HUFI SVG: https://gist.github.com/anonymous/4ef7282dfec9ab327084 Thanks, Andrei 1.2
Re: Automatic method overriding in sub-classes
On Tuesday, 27 October 2015 at 23:15:39 UTC, Tofu Ninja wrote: An alternative solution could be that if you provide a header to a class and don't include the auto override body, then the auto override functionality is removed and the method is treated as a regular method from that point on(with the most recent version of the method being the one that is used). This would allow the class to still be inherited later on. I think it's a bad idea. Whether or not header files are used should not change the code's behavior.
Re: OT: Programming Expansibility
On Friday, 23 October 2015 at 05:17:47 UTC, Jeffery wrote: Oh, Well, I don't think it is arbitrary. If you expose a public member, then you are not encapsulating it and exposing it to the public. The compiler does know this. Hence adding any wrapper does not change the encapsulation. My point is that automatically wrapping every method of the public fields makes the wrapping pointless. Your idea seems to me like sugar around the Law of Demeter. But the point of the Law of Demeter is not to reduce the number of dots per expression - it's to force interactions with the inner objects of a composition to be done via methods of the outer objects. By automatically exposing all methods of the inner objects via wrappers you are missing the entire point of LoD - even if user code can do `yours.foo()` instead of `yours.m.foo()`, it still interacts with `yours.m` without any meaningful mediation of `yours`. If x is a public field of a class, then how can wrapping it hurt anything. If you expose the internal public members of x(if any), then this might create less encapsulation than you want. If you want to hide some public members of x then surely it is not hard to inform the compiler to do so? Explicitly marking the public members of x you want to hide should be easy enough: public class X { public void foo(); } public class A { public X x; private x.foo; // Marks x as private so compiler does not implement wrapper for x.foo } (of course, better notations would exist). Essentially the logic is "wrap by default". Since most of the time the programmer is creating wrappers, this seems it would save a lot of work? Unless there is a good reason to do otherwise(a good reason is something like to-not-break-existing-code, not something like to-save-some-typing) the default should be the more restricting option. 
Programmers will add the extra syntax to lift the restriction when they actually need it removed, but they usually won't bother to add the restriction when their code works perfectly fine without it.
Re: Can [] be made to work outside contexts of binary operators?
On Thursday, 22 October 2015 at 15:57:05 UTC, Shriramana Sharma wrote: I tried: import std.stdio; void main() { int [5] vals = [1, 2, 3, 4, 5]; writefln("A = %d, B = %d, C = %d, D = %d, E = %d", vals []); } but got thrown an exception that "%d is not a valid specifier for a range". The Python equivalent to flatten a list works: vals = [1, 2, 3, 4, 5] print("A = {}, B = {}, C = {}, D = {}, E = {}".format(*vals)) Output: A = 1, B = 2, C = 3, D = 4, E = 5 Question: Can D's [] be made to work that way? I recently had to write custom functions since I had an array representing numerical fields and wanted to print them out with individual labels but I wasn't able to use a single writefln with sufficient specifiers for that purpose because of this limitation. D's `writefln` is a template-variadic function. Each time you use it, the compiler looks at the arguments you pass to it, and compiles a new instantiation of it based on the number and types of those arguments. This means that it would have to know at compile time how many values `vals[]` holds - but that number is only known at runtime! Now, in your case, since `vals` is a static array, its length should be knowable at compile time. Maybe if there were a `tupleof` for static arrays? At any rate, you can always use the range formatters %( and %) to print the array. See http://dpaste.dzfl.pl/47e3e5a9e5c4
Re: OT: Programming Expansibility
On Thursday, 22 October 2015 at 17:01:55 UTC, Jeffery wrote: ... (I don't see how wrapping breaks encapsulation, in fact, it adds another layer of encapsulation, which isn't breaking it, is it?) The work argument was my whole point though. If the compiler internally wrapped all unwrapped members (easy to do as it is just a simple forwarding proxy) and D itself can do this with just a few lines of code and opDispatch, there is little work the programmer actually has to do. The issue about private wrapping is moot as I mentioned all members being public in the examples. I should have stated that in general. Obviously wrapping private members wouldn't render the "private" meaningless. I did not argue that it's a lot of work - I argued that getting the compiler to do it for you is a bad idea. The point of encapsulation is that you make a conscious choice of which members to wrap and how to wrap them. The compiler can't make these design choices, so if the wrapping is done automatically by the compiler it'll simply wrap everything in a straightforward manner, and you miss the whole point of encapsulation.
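A Python analogue of the opDispatch-style "few lines of code" forwarding proxy makes the objection concrete (the `AutoWrap`, `Engine` and method names below are hypothetical): because the proxy forwards *everything*, it hides nothing, so the wrapping encapsulates nothing.

```python
# Python analogue of an opDispatch-style automatic wrapper: forwards
# every attribute access to the wrapped object. Because it forwards
# everything, it makes no design choice - and encapsulates nothing.

class AutoWrap:
    def __init__(self, inner):
        self._inner = inner

    def __getattr__(self, name):
        # Invoked only for attributes not found on AutoWrap itself,
        # so every method of the inner object leaks through.
        return getattr(self._inner, name)

class Engine:
    def start(self):
        return "vroom"

    def internal_tuning_detail(self):
        return 42

car = AutoWrap(Engine())
print(car.start())                   # vroom
print(car.internal_tuning_detail())  # 42 - leaked implementation detail
```

Renaming or removing `internal_tuning_detail` on `Engine` silently changes `car`'s public surface, which is exactly the coupling that hand-picked wrappers are meant to prevent.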
Re: OT: Programming Expansibility
On Wednesday, 21 October 2015 at 14:17:15 UTC, Jeffery wrote: *snip* I think you are looking at it wrong. Object composition should either be public or private. If it's public, it should be perfectly fine for user code to be fully aware that a `Yours` has a `Mine` named `m`. Or, if we look at a more concrete example: class Name { public string first; public string last; } class Person { public Name name; } I see no harm done in user code calling `person.name.first`, because a `Name` is just a type, just like `string` or `int` - the only difference is that `Name` is user defined. In these cases, I find it ridiculous for a library to wrap all the functionality of another library instead of just passing it on. A real life example: Java used to have a really crappy datetime module in its standard library (Java 8 got an improved datetime module). Someone made a third-party datetime library called "Joda Time" (http://www.joda.org/joda-time/), which is considerably better, and many libraries require it. If these libraries followed your rule, they would have to wrap the functions of that library, so instead of `someEvent.getTime().getMonth()` they'd have to implement a `getMonth` method in the `Event` class so you could call `someEvent.getMonth()` - and the same with all the methods in the rich interface provided by Joda Time. Does that seem reasonable to you? Now, while it's true that the fact that it comes from a third-party library may make it more prone to bugs and breaking changes (which I assume are what you mean by "flaws"), these flaws don't really grow exponentially along the chain of indirection. The flaw *potential* might grow exponentially, since the number of possible chains can grow exponentially, but the number of chains actually used is far smaller!
By insisting on a single level of indirection, you are actually making things worse: - Since you need to wrap methods for the user code, you needlessly materialize many possible chains into existence, triggering many possible flaws that no one would have to deal with otherwise. - You make it impossible for users to deal with flaws in libraries you depend on. Even if your library should not be affected by that flaw, your users now depend on you to deal with it even if they should have been able to deal with it themselves. So, a public composition is a public dependency and should not be hidden by the Law of Demeter. How about private composition? If a composition is private, you should not be able to access it via `y.m.foo()` - but not because it's too long an indirection chain, but because it's a private member field! The outside world should not care that `Yours` has a `Mine` named `m` - this composition is supposed to be encapsulated. The thing is - just like automatically defining getters and setters for all member fields breaks encapsulation, so does automatically defining proxy wrappers for all the methods of the member field. It might solve other problems(like lifetime and ownership problems), but it will not achieve the basic purposes of encapsulation, like allowing you to change the internal fields without affecting users of the outer object. If you change a method of the internal object, the methods of the outer object will also change. So, this type of wrapping is not good here either. `Yours` shouldn't just have a method for invoking `m`'s `foo()`. If `Yours` has a functionality that requires invoking `m.foo()`, the implementation of that functionality can call `m.foo()` directly. Otherwise, there is no reason for any method of `Yours` to call `m.foo()` - certainly not as automatic, thoughtless means to allow users - that shouldn't even be aware of `m`'s existence - to have access to it.
Re: OT: Morfa - an interesting (toy?) language that claims to be inspired/based on D
On Tuesday, 20 October 2015 at 10:22:56 UTC, Ola Fosheim Grøstad wrote: What is interesting about Morfa is that they have a jitted REPL. That's a significant advantage worth pursuing. Looks like LLVM makes such things possible - there is also one for C++(https://github.com/vgvassilev/cling). Could be nice to have an LDC-based REPL...
Re: Fastest JSON parser in the world is a D project
On Wednesday, 14 October 2015 at 07:35:49 UTC, Marco Leise wrote: auto json = parseTrustedJSON(`{ "coordinates": [ { "x": 1, "y": 2, "z": 3 }, … ] }`); I assume parseTrustedJSON is not validating? Did you use it in the benchmark? And were the competitors non-validating as well?
Re: -> and :: operators
On Friday, 9 October 2015 at 19:48:39 UTC, Dmitry Olshansky wrote: On 09-Oct-2015 21:44, Freddy wrote: On Friday, 9 October 2015 at 04:15:42 UTC, Ali Çehreli wrote: Semi-relatedly, a colleague who has heard many D sales pitches from me over the years is recently "looking at Go" and liking it very much. He came to me today telling me about this awesome Go feature where you just type a dot after a pointer and the language is so great that it works! You don't need to type (*p).member. Isn't Go awesome! I responded "yep, it's a great feature and those gostards will never admit that they took that feature from D." (There is probably earlier precedence but it felt great to say it to my friend. :) ) Ali Stole from D? You mean java right? There are no value type objects in Java so no. More likely C#. Nope - C# uses -> to access members of a struct referenced by a pointer. See https://msdn.microsoft.com/en-us/library/50sbeks5.aspx The difference between reference types and pointers is that with reference types, THERE ARE NO value variables. So it's safe to use . instead of -> for accessing a member through a reference, because there is no such thing as accessing a member of a reference type without dereferencing it. So it's safe to do so on classes in C#, but not on structs. This is the innovation in D(regarding this issue) - that on struct types, the same operator is used for BOTH the value type and the pointer to it.
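To make the point above concrete, here is a small sketch of mine (not from the post) showing the same `.` operator working on a struct value and on a pointer to it:

```d
struct Point { int x; }

void main()
{
    Point p = Point(1);
    Point* pp = &p;

    assert(p.x == 1);   // member access on the value
    assert(pp.x == 1);  // same syntax through the pointer - no (*pp).x or pp->x needed
}
```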
Re: -> and :: operators
On Sunday, 11 October 2015 at 13:05:41 UTC, Warwick wrote: On Sunday, 11 October 2015 at 09:43:04 UTC, Idan Arye wrote: On Friday, 9 October 2015 at 19:48:39 UTC, Dmitry Olshansky wrote: This is the innovation in D(regarding this issue) - that on struct types, the same operator is used for BOTH the value type and the pointer to it. At the risk of sounding like a broken record, the Delphi variant of Object Pascal started doing that some time around Delphi 4 or Delphi 5 IIRC(so mid to late 90s). I.e. you accessed members with the dot operator whether it was an object(like in D, Delphi's objects are heap based with reference semantics), a struct, or a pointer to a struct. You should have elaborated then. The other guys talked about the invention of reference types, so I assumed you did as well.
Re: Would a DSpin or DLab for Fedora make sense?
On Friday, 9 October 2015 at 18:10:59 UTC, tim wrote: Would a DSpin or DLab for Fedora make sense? i.e. a Linux build with most of the D stuff preinstalled. What is Fedora Labs? Fedora Labs is a selection of curated bundles of purpose-driven software and content as curated and maintained by members of the Fedora Community. These may be installed as standalone full versions of Fedora or as add-ons to existing Fedora installations. I think Debian has something similar called Blends. I assume most of the major Linux distributions have something similar. I doubt it'll be a good idea. These bundles seem to target areas of interest, never specific languages. You can see bundles geared towards graphic designers or gamers, but not ones for C++ or Java developers. I think the reason is that the purpose of these bundles is to attract people to install the distribution. "Are you a sound editor? We have something just for you - Fedora Jam!". This doesn't work the other way around - nobody will start editing music just because Fedora offers Fedora Jam... Also, Fedora Labs is quite a commitment - in order to use one, you have to reinstall the OS. This is OK if you are an enthusiast considering a switch to Linux and being offered a distribution flavor modified specifically for your hobby, but programmers usually expect languages to work on whatever OS they choose to use(.NET developers choose to ignore the existence of non-Windows operating systems :-P). Existing D developers won't install a new OS just to use D, because it's a lot of trouble and they can already use D just fine in their current setups.
We can't attract new D developers that way either - convincing someone to try D is hard enough without trying to get them to install a new OS! So, a bundle dedicated to D is not a good idea, but it can be nice if we can get D into the existing bundles. For example, if we can get D into Fedora Scientific, it can get science programmers to try D for their science programs. Of course, for that we need to convince the maintainers that D is good for science...
Re: Moving back to .NET
On Friday, 25 September 2015 at 15:21:34 UTC, Kagamin wrote: On Friday, 25 September 2015 at 14:54:53 UTC, Jonathan M Davis wrote: Do you mean build from the command line? I did that at my previous job where we were using cmake and had made the directory structure very neat, and all of the VS stuff was separate from the actual code, since we didn't build in the source directories, but at my current job, everything was set up with VS by folks who use VS for everything, and the directory structure is a complete mess, making doing stuff from the command line a lot messier than it should be. Doesn't msbuild build it? We have our projects set up with VS too, and it's built by msbuild just fine in a single command. In fact one of our developers builds the solution from command line too and he uses the CLI TFS client. I've actually encountered some heavily configured Visual Studio projects that could be built from Visual Studio but not from MSBuild. Never got to dig deep enough to figure out why - I suspect it has something to do with the solution arrangement in one of them, and with VS plugins in another. Should be avoidable if one of the devs works with MSBuild from the start - but that was clearly not the case here. At any rate, I still managed to create the illusion of building them from the command line by keeping an open instance of Visual Studio in the background and using devenv to make it compile them when I needed to. But this method will probably not work well if you want to automate these builds on a server...
Re: D ranked as #25 by IEEE spectrum
On Thursday, 24 September 2015 at 01:32:54 UTC, Ola Fosheim Grøstad wrote: On Thursday, 24 September 2015 at 00:16:27 UTC, Idan Arye wrote: On Wednesday, 23 September 2015 at 22:20:35 UTC, Meta wrote: On Wednesday, 23 September 2015 at 19:28:00 UTC, Ola Fosheim Grøstad wrote: http://spectrum.ieee.org/static/interactive-the-top-programming-languages-2015 They list D as useful for web development and embedded, but not desktop apps... And they list Rust as useful for desktop apps and web development. Something's fishy here. They list TCL for embedded. This is beyond ridiculous... http://wiki.tcl.tk/1363 http://jim.tcl.tk/index.html/doc/www/www/index.html Mother of... Is it still stringly typed?
Re: Moving back to .NET
On Wednesday, 23 September 2015 at 20:41:38 UTC, rumbu wrote: On Wednesday, 23 September 2015 at 19:52:11 UTC, Paolo Invernizzi wrote: On Wednesday, 23 September 2015 at 18:36:01 UTC, rumbu wrote: Personally, I don't know any Windows developer masochistic enough to use the command line when an IDE is available for the task described above. Nice to meet you, rumbu! Now you know one! ;-P --- Paolo Nice to meet you too, Paolo. Browsing through your posts, I saw that you are using "mainly Mono-D" :) Don't tell me that you are coloring the keywords in your code using a marker. "Not using an IDE" does not mean "programming with cat" - most text editors have syntax highlighting... Anyways, I also used to be one of these Windows developers masochistic enough to use the command line. I used it back when I was programming in C#, which means I had to write .csproj files by hand(deep down they resemble Ant, but Visual Studio seems to be writing all sorts of crap in there) and build the projects from the command line using MSBuild, but it was worth it because it meant I could build seamlessly from Vim, and I could write deployment scripts that run on the server. That being said - when I said "used to be" it's not because I'm no longer a "masochist", but because I'm no longer a Windows developer(so yes, I'm no longer a masochist...) - so you can say I was already in the Linux developer mindset and it's no surprise I preferred the command line. Even back then, I was disturbed by the fact that so many programmers feel uncomfortable with the idea of typing textual commands to make computers do things...
Re: D ranked as #25 by IEEE spectrum
On Wednesday, 23 September 2015 at 22:20:35 UTC, Meta wrote: On Wednesday, 23 September 2015 at 19:28:00 UTC, Ola Fosheim Grøstad wrote: http://spectrum.ieee.org/static/interactive-the-top-programming-languages-2015 They list D as useful for web development and embedded, but not desktop apps... And they list Rust as useful for desktop apps and web development. Something's fishy here. They list TCL for embedded. This is beyond ridiculous...
Re: Implementing typestate
On Wednesday, 16 September 2015 at 06:25:59 UTC, Ola Fosheim Grostad wrote: On Wednesday, 16 September 2015 at 05:51:50 UTC, Tobias Müller wrote: Ola Fosheim Grøstad wrote: On Tuesday, 15 September 2015 at 20:34:43 UTC, Tobias Müller wrote: There's a Blog post somewhere but I can't find it atm. Ok found it: > http://pcwalton.github.io/blog/2012/12/26/typestate-is-dead/ But that is for runtime detection, not compile time? Not as far as I understand it. The marker is a type, not a value. And it's used as a template param. But you need non-copyable move-only types for it to work. Yes... But will it prevent you from doing two open() in a row at compile time? What's wrong with two `open()`s in a row? Each will return a new file handle.
Re: Implementing typestate
On Wednesday, 16 September 2015 at 14:34:05 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 16 September 2015 at 10:31:58 UTC, Idan Arye wrote: What's wrong with two `open()`s in a row? Each will return a new file handle. Yes, but if you do it by mistake then you don't get the compiler to check that you call close() on both. I should have written "what if you forget close()". Will the compiler then complain at compile time? You can't make that happen with just move semantics, you need linear typing so that every resource created is consumed exactly once. Move semantics should be enough. We can declare the destructor private, and then any code outside the module that implicitly calls the d'tor when the variable goes out of scope will raise a compilation error. In order to "get rid" of the variable, you'll have to pass ownership to the `close` function, so your code won't try to implicitly call the d'tor.
Re: Implementing typestate
On Wednesday, 16 September 2015 at 15:57:14 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 16 September 2015 at 15:34:40 UTC, Idan Arye wrote: Move semantics should be enough. We can declare the destructor private, and then any code outside the module that implicitly calls the d'tor when the variable goes out of scope will raise a compilation error. In order to "get rid" of the variable, you'll have to pass ownership to the `close` function, so your code won't try to implicitly call the d'tor. Sounds plausible, but does this work in C++ and D? I assume you mean that you "reinterpret_cast" to a different type in the close() function, which is cheating, but ok :). No need for `reinterpret_cast`. The `close` function is declared in the same module as the `File` struct, so it has access to its private d'tor.
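The pattern being discussed can be sketched roughly as below. This is my own sketch, not code from the thread - the names are made up, and whether the compiler actually rejects implicit destruction of a struct with a private d'tor from another module is exactly the behavior the posters are debating, so treat this as an illustration of the idea rather than confirmed semantics:

```d
module handle;

struct File
{
    private int fd = -1;

    @disable this(this);   // move-only: no implicit copies

    private ~this() {}     // the claim: implicit destruction outside
                           // this module should fail to compile

    static File open(string path)
    {
        return File(0);    // stand-in for a real OS open() call
    }
}

// Takes ownership by value; the d'tor then runs here, inside the
// module where it is visible.
void close(File f)
{
    // release f.fd ...
}
```

If it works as described, forgetting `close(f)` means `f` is destroyed implicitly at end of scope in the caller's module, where the d'tor is inaccessible - turning the leaked handle into a compile error.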
Re: Type helpers instead of UFCS
On Saturday, 12 September 2015 at 20:37:37 UTC, BBasile wrote: That's why I propose the new keywords 'helper' and 'subject' that will allow to extend the properties pre-defined for a type, as long as the helper is imported: --- module myhelper; helper for subject : string Do we really need a 3-keyword chain? What's wrong with a simple `helper : string` or `helper(string)`? { void writeln() { import std.stdio; writeln(subject); } } --- Why `subject` to refer to the string the function gets called on? What's wrong with good old `this`, which is used for this purpose everywhere else?
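For comparison, here is my own sketch (the `writeLoud` name is made up to avoid clashing with `std.stdio.writeln`) of how plain UFCS already covers the proposal's example without any new keywords - any free function whose first parameter is a string can be called with member syntax:

```d
module myhelper;

// An ordinary free function - no 'helper' or 'subject' keywords needed.
void writeLoud(string subject)
{
    import std.stdio : writeln;
    writeln(subject);
}

void main()
{
    "hello".writeLoud();  // UFCS: rewritten as writeLoud("hello")
}
```

The difference the proposal is after is scoping/discoverability, not capability - which is why the extra keywords feel redundant.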
Re: Better lambdas!!!!!!!!!!
On Saturday, 12 September 2015 at 10:44:05 UTC, Pierre Krafft wrote: myfunc({return "x:"~x~"y:"~y;}); is inferred to mean myfunc((x,y){return "x:"~x~"y:"~y;}); while myfunc({return "y:"~y~"x:"~x;}); is inferred to mean myfunc((y,x){return "y:"~y~"x:"~x;}); which is not what I expect since the lambda I want is myfunc((x,y){return "y:"~y~"x:"~x;}); This can lead to subtle bugs which are very hard to see. I don't think this is what the OP was suggesting. As far as I understand, the suggestion was that the lambda's arguments would be inferred from the function argument in the higher-order function's signature - not from the order they are used in the lambda's body as you suggest. So, `myfunc` will be declared as: void myfunc(string delegate(string x, string y)) { // ... And when the compiler sees `myfunc({return "x:"~x~"y:"~y;});`, it'll see that `x` and `y` appear in the function argument's definition and match them accordingly. This means that `myfunc({/*...*/})` will be inferred to `myfunc((x,y) {/*...*/})` no matter what the order of argument usage in the lambda's body is - because in the function's signature the arguments are `x` and `y` in that order.
Re: Better lambdas!!!!!!!!!!
On Thursday, 10 September 2015 at 21:03:12 UTC, Meta wrote: On Thursday, 10 September 2015 at 20:56:58 UTC, Ola Fosheim Grøstad wrote: If there is a conflict you should use a regular lambda on the outer one? You could, but then doesn't that defeat the point a bit? My example was off-the-cuff, but the point is that we already have a fairly concise lambda syntax, and adding a new type will mean that we have 4 different ways of expressing the same lambda function. It's just not really worth it. Clojure solved this by disallowing nesting lambdas-with-numbered-arguments: Clojure 1.7.0 user=> (#(+ %1 %2) 1 2) 3 user=> (#(#(+ %1 %2) %2 %1) 1 2) IllegalStateException Nested #()s are not allowed clojure.lang.LispReader$FnReader.invoke (LispReader.java:703) (followed by a cascade of CompilerException/RuntimeException noise as the REPL chokes on the leftover tokens) Then again, Clojure never was a big advocate of the one-way-of-doing-things approach... At any rate, since string lambdas can usually be used in place of this syntax, and in the cases where string lambdas can't be used(because you need something from the scope) it's not THAT hard to use proper lambdas - I see no reason to support it.
Re: friends with phobos, workaround?
On Wednesday, 9 September 2015 at 20:19:44 UTC, Daniel N wrote: For the record, I think D made the right decision... omitting friends. However there's one case in particular which I find useful, anyone see a good workaround for this? #include <memory> class Friendly { private: int val; Friendly(int&& val) : val(val) {} friend std::unique_ptr<Friendly> std::make_unique<Friendly>(int&& val); }; int main() { auto yay = std::make_unique<Friendly>(1); auto nay = new Friendly(1); } How about using a mixin template(http://dlang.org/template-mixin.html)? module makeunique; mixin template MakeUnique(Args...) { import std.typecons : Unique; static Unique!(typeof(this)) makeUnique(Args args) { return Unique!(typeof(this))(new typeof(this)(args)); } } module friendly; import makeunique; struct Friendly { private: int val; this(int val) { this.val = val; } public: mixin MakeUnique!(int); } module app; import std.stdio; import std.typecons; import friendly; void main() { auto yay = Friendly.makeUnique(1); }
Re: A collection of DIPs
On Monday, 7 September 2015 at 16:10:31 UTC, Israel wrote: On Monday, 7 September 2015 at 14:44:05 UTC, nx wrote: https://github.com/NightmareX1337/DX Destroy! Yea ill admit, i came from C# and i hate underscores. I prefer PascalCase above anything. Three of the keys on my keychain are of the same brand, so they are shaped pretty much the same. The pin arrangement is different, of course, but they all have similar blades of the same height and width, and plastic-cased bows of the same rectangular shape. The orange key opens the front door. The yellow key opens the back door. The blue key opens the old bicycle storage room. I prefer the looks of the blue key. The shape is the same, but the color does make a difference - the orange and yellow ones look kinda toyish - the colors "scream" too hard. The blue one looks professional - as much as a key can look professional. Still, I wouldn't want them all to be blue, because then I wouldn't be able to tell the freaking difference! Maybe you think PascalCase is the best case. Maybe you are right, and there is an objectively "best" case, which is better than all the rest. It doesn't matter - the problem is not which single case you choose to rule them all, but that you insist on choosing one and making it the convention for everything you define. At the very least, the convention should specify two cases - one for types and one for variables(=fields/arguments/scoped variables) - because you want to be able to name variables after their types. This might be considered a bad practice for the built-in types, but with user defined types you can usually get the type specific enough and the scope small enough for this to be the natural choice. The C# standard library is filled with member fields named after their types, and since both type and field use PascalCase, they had to use separate namespaces for types and variables.
Now, for a language that has context-sensitive keywords, having context-sensitive identifier namespaces is not that weird - but I really don't want to see this misfeature in D...
Re: AST like coding syntax. Easy upgrade!
On Sunday, 6 September 2015 at 23:40:58 UTC, anonymous wrote: On Monday 07 September 2015 00:37, cym13 wrote: There already is a kind of "code string": interpret(q{ var a = 2; var b += a; }); It doesn't do any kind of syntax check, but there again how do you want to have syntax check for any language? The D compiler is a D compiler, it can't support js syntax or whatever. There's a very basic syntax check: Token strings (q{...}) go through tokenization. Compilation fails when the contents aren't valid tokens. For example, q{'} fails with "Error: unterminated character constant". That's not considered a syntax check - that's an earlier stage of the compilation process called "lexical analysis"(https://en.wikipedia.org/wiki/Lexical_analysis)
Re: AST like coding syntax. Easy upgrade!
On Sunday, 6 September 2015 at 23:38:51 UTC, cym13 wrote: On Sunday, 6 September 2015 at 23:00:21 UTC, bitwise wrote: On Sunday, 6 September 2015 at 22:37:16 UTC, cym13 wrote: On Sunday, 6 September 2015 at 21:16:18 UTC, Prudence wrote: [...] There already is a kind of "code string": interpret(q{ var a = 2; var b += a; }); It doesn't do any kind of syntax check, but there again how do you want to have syntax check for any language? The D compiler is a D compiler, it can't support js syntax or whatever. Many IDEs support multiple languages and can infer language automatically by syntax. It's probably much more difficult than it seems, but I suppose one of these IDEs could be made to parse and infer D token strings separately. Sure, but the support for that will be an external tool, it doesn't have anything to do in the D compiler. q{} strings are meant to be seen specially by editors, they won't highlight them the same way for example, it is then the editor's job to detect other languages if it wants to. D has done its job in the matter. Editors will have a hard time highlighting q{} strings differently, since they'll need to understand the semantics in order to know how the string will be parsed. Compare it to Ruby's heredoc, where the chosen terminator string can be used as a hint(https://github.com/joker1007/vim-ruby-heredoc-syntax). Sure, it may be just a convention, but an easily kept one that can make programmers' life easier. You can't do that with D's q{} strings, unless you hard-code into the editor's relevant syntax file the templates that use them, just like the regular syntax of the language.
Re: Error reporting is terrible
On Friday, 4 September 2015 at 03:26:50 UTC, David DeWitt wrote: On Thursday, 3 September 2015 at 23:56:53 UTC, Prudence wrote: [...] I think D is about as easy to install as anything. But then again I dont use Windows so I dont have 99.9% of the hassles that come along with that. Since you didnt provide any examples it is kinda hard to fix but we can guess at your problems so my solution would be: Wipe Windows Install Arch Install vim Install DMD. Problem solved :) Considering the OP struggles with installing DMD and VisualD, I don't think recommending Arch Linux is a good idea...
Re: else if for template constraints
On Monday, 17 August 2015 at 21:27:47 UTC, Meta wrote: On Monday, 17 August 2015 at 17:17:15 UTC, Steven Schveighoffer wrote: On 8/17/15 1:00 PM, Idan Arye wrote: It looks a bit ugly, that the `else` is after a function declaration instead of directly after the if's then clause. How about doing it with the full template style? template replaceInPlace(T, Range) if(isDynamicArray!Range && is(Unqual!(ElementEncodingType!Range) == T) && !is(T == const T) && !is(T == immutable T)) { void replaceInPlace(ref T[] array, size_t from, size_t to, Range stuff) { /* version 1 that tries to write into the array directly */ } } else if(is(typeof(replace(array, from, to, stuff)))) { void replaceInPlace(ref T[] array, size_t from, size_t to, Range stuff) { /* version 2, which simply forwards to replace */ } } Yes, I like this much better. -Steve At that point, couldn't you just use static if inside the body of the template instead of using template constraints? No. Consider this: http://dpaste.dzfl.pl/a014aeba6e68. Having two foo templates is illegal(though it'll only show when you try to instantiate foo), because each of them covers all options for T. When T is neither int nor float, the foo *function* in the first template is not defined, but the *foo* template is still there. With the suggested syntax, the first foo template would only be defined for int and float, and the second would only be defined for char and bool - so there is no conflict.
Re: else if for template constraints
On Monday, 17 August 2015 at 13:18:43 UTC, Steven Schveighoffer wrote: void replaceInPlace(T, Range)(ref T[] array, size_t from, size_t to, Range stuff) if(isDynamicArray!Range && is(Unqual!(ElementEncodingType!Range) == T) && !is(T == const T) && !is(T == immutable T)) { /* version 1 that tries to write into the array directly */ } void replaceInPlace(T, Range)(ref T[] array, size_t from, size_t to, Range stuff) else if(is(typeof(replace(array, from, to, stuff)))) { /* version 2, which simply forwards to replace */ } It looks a bit ugly, that the `else` is after a function declaration instead of directly after the if's then clause. How about doing it with the full template style? template replaceInPlace(T, Range) if(isDynamicArray!Range && is(Unqual!(ElementEncodingType!Range) == T) && !is(T == const T) && !is(T == immutable T)) { void replaceInPlace(ref T[] array, size_t from, size_t to, Range stuff) { /* version 1 that tries to write into the array directly */ } } else if(is(typeof(replace(array, from, to, stuff)))) { void replaceInPlace(ref T[] array, size_t from, size_t to, Range stuff) { /* version 2, which simply forwards to replace */ } }
Re: D for project in computational chemistry
On Sunday, 16 August 2015 at 13:11:12 UTC, Yura wrote: Good afternoon, gentlemen, just want to describe my very limited experience. I have re-written about half of my Python code into D. I got it faster by 6 times. This is good news. However, I was amazed by the performance of D vs Python for the following simple nested loops (see below). D was faster by two orders of magnitude! Bearing in mind that Python is really used in computational chemistry/bioinformatics, I am sure D can be a good option in this field. In the modern strategy for computational software, Python is used as a glue language and the number crunching parts are usually written in Fortran or C/C++. Apparently, with D one language can be used to write the entire code. Please, also look at this article: http://www.worldcomp-proceedings.com/proc/p2012/PDP3426.pdf Also, I wonder about the results of this internship: http://forum.dlang.org/post/laha9j$pc$1...@digitalmars.com With kind regards, Yury Python: #!/usr/bin/python import sys, string, os, glob, random from math import * a = 0 l = 1000 for i in range(l): for j in range(l): for m in range(l): a = a + i*i*0.7 + j*j*0.8 + m*m*0.9 print a D: import std.stdio; // command line argument import std.getopt; import std.string; import std.array; import std.conv; import std.math; // main program starts here void main(string[] args) { int l = 1000; double a = 0; for (auto i = 0; i < l; i++) { for (auto j = 0; j < l; j++) { for (auto m = 0; m < l; m++) { a = a + i*i*0.7 + j*j*0.8 + m*m*0.9; } } } writeln(a); } Initially I thought the Python version is so slow because it uses `range` instead of `xrange`, but I tried them both and they both take about the same, so I guess the Python interpreter can optimize these allocations away. BTW - if you want to iterate over a range of numbers in D, you can use a foreach loop: foreach (i; 0 .. l) { foreach (j; 0 .. l) { foreach (m; 0 .. l) { a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9; } } } Or, to make it look more like the Python version, you can iterate over a range-returning function: import std.range : iota; foreach (i; iota(l)) { foreach (j; iota(l)) { foreach (m; iota(l)) { a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9; } } } There are also functions for building ranges from other ranges: import std.algorithm : cartesianProduct; import std.range : iota; foreach (i, j, m; cartesianProduct(iota(l), iota(l), iota(l))) { a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9; } Keep in mind though that using these functions, while making the code more readable(to those with some experience in D, at least), is bad for performance - for my first version I got about 5 seconds when building with DMD in debug mode, while for the last version I get 13 seconds when building with LDC in release mode.
Re: Proposal : mnemonic for start index for slices
On Saturday, 8 August 2015 at 13:08:08 UTC, Temtaime wrote: Hi ! I want to add some sugar to D : sometimes it's necessary to use complex start index. For example: auto sub = arr[idx + 123 * 10..idx + 123 * 10 + 1]; Proposal is to add a mnemonic for start index, for instance : auto sub = arr[idx + 123 * 10..# + 1]; // # == start index # is for example. Maybe it can be @ or some other symbol. Any ideas ? Should i try to create a DIP ? auto sub = arr[idx + 123 * 10 .. $][0 .. 1];
Re: Rant after trying Rust a bit
On Thursday, 6 August 2015 at 06:54:45 UTC, Walter Bright wrote: On 8/3/2015 2:19 AM, Max Samukha wrote: The point is that '+' for string concatenation is no more of an 'idiot thing' than '~'. Sure it is. What if you've got: T add(T)(T a, T b) { return a + b; } and some idiot overloaded + for T to be something other than addition? Having add(a, b) return ab is not that weird. But consider this: http://pastebin.com/R3csc5Pa I can't put it in dpaste because it doesn't allow threading, but here is an example output: 45 45 45 45 45 45 45 45 45 45 MyString(0361572489) MyString(0379158246) MyString(0369158247) MyString(0582361479) MyString(0482579136) MyString(0369147258) MyString(0371482569) MyString(0469137258) MyString(0369147258) MyString(0561472389)
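Walter's point above can be demonstrated in a few lines. This is my own sketch, not the pastebin code (which involves threading): a generic `add` silently does the "wrong" thing when some type overloads `+` as concatenation:

```d
struct MyString
{
    string s;
    // '+' overloaded to mean concatenation, not addition
    MyString opBinary(string op : "+")(MyString rhs)
    {
        return MyString(s ~ rhs.s);
    }
}

// Walter's generic add: written assuming '+' means addition.
T add(T)(T a, T b) { return a + b; }

unittest
{
    assert(add(1, 2) == 3);                              // addition, as expected
    assert(add(MyString("4"), MyString("5")).s == "45"); // concatenation - not 9!
}
```

With `~` reserved for concatenation, the generic code's assumption about `+` stays honest - which is the argument being made.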
Re: Rant after trying Rust a bit
On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad wrote: On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright wrote: On 8/2/2015 8:17 PM, Ola Fosheim Grøstad wrote: It's a weird thing to do for a C-descendant as I would expect ~= to do binary negation. If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b I don't, because != is frequently used and usually in a context where expectations point towards comparison and not assignment. But I would prefer =, ≠, and ≤ for comparison and constants... then have something else for variable assignment. I understand your attempt to auction your old APL keyboard didn't go well?
Re: DMD on WIndows 10
On Friday, 31 July 2015 at 22:02:13 UTC, Paul D Anderson wrote: I'm waiting to upgrade from Windows 7 to Windows 10 to avoid the inevitable just-released bugs, but does anyone have any info about D on Windows 10? Has anyone tried it? p.s. Please don't tell me how much better your favorite operating system is than Windows. Thank you. :) A quick Google search points me to this article: http://www.howtogeek.com/219782/is-windows-10-backwards-compatible-with-your-existing-software/ It says Windows 10 should be able to run any application that runs on Windows 7 - though it does say that Windows 10 removed Desktop Gadgets, so if your program is one of those it probably won't work. Then again - I doubt there are any Windows Desktop Gadgets written in D...
Helpers for writing unittests
The "Rant after trying Rust a bit" thread (http://forum.dlang.org/thread/ckjukjfkgrguhfhkd...@forum.dlang.org) talks mostly about traits, and how people want Rust's traits (or C++'s Concepts) in D. As a general rule, I believe that taking a feature from one language and sticking it in another works just as nicely as taking a leg off one chair and using it to replace a broken leg from a different firm's chair. You'd be lucky if they have similar lengths, and even if you sand the legs to the exact same length, the balance will be off. You might be able to sit on it, but it won't be as comfortable as a chair whose parts all fit together. Instead, it's better to look at the problem the feature solves, and how the language without that feature approaches the problem. In our case, the problem is that compile-time errors in templates only pop up once the template is instantiated. Rust's solution is to use traits, and Walter made it pretty clear that the D way to solve this problem is to instantiate the template in a unittest. As seen in that thread, many people find this solution lacking. Writing 100% coverage tests for instantiating templates is a long and tiring process. In response, the anti-traits camp accuses these people of being lazy. Well, I say - programmers should be lazy! That's why we are programming - because we are lazy and want the computers to do our work for us. So the current D solution is not enough - but that doesn't mean a feature transplant from Rust is a good solution - we need to find a D-style solution that'll fit with the rest of the language. What I'm thinking about is a unittest helper that'll help in checking different instantiations of a template. A quick proof of concept - http://dpaste.dzfl.pl/8907c3a7d54c - shows how the unittest found that foo doesn't work with long and ulong, and printed easy-to-understand errors.
Just like IntegerTypes, we can have many more lists of types for the different categories of types we want to test - string types, range types etc. With this in Phobos, writing unittests with full coverage (compile-time only) will be much easier. Note that we just want to see that the code compiles - we don't want to actually run it, because then we'd have to supply templated test data and test results to compare with, which is a much harder problem. These are compile-time mocks - the problem they solve is limited to compilation, so they will be able to solve it well and elegantly. This does not come instead of tests that actually run - a unittest can test compilation on all the relevant types and actually run and check the results only for a subset of these types. Thoughts?
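To make the idea concrete (the dpaste link above is no longer reachable), here is a minimal sketch of what such a helper could look like; the names `IntegerTypes`, `allCompile` and `doubleIt` are illustrative, not Phobos APIs:

```d
import std.meta : AliasSeq;

// A reusable list of integral types to instantiate templates with.
alias IntegerTypes = AliasSeq!(ubyte, byte, ushort, short, uint, int, ulong, long);

// Hypothetical helper: true if `fun` can be instantiated (not run!)
// with every type in Types. Instantiation alone is enough to surface
// compile-time errors inside the template.
template allCompile(alias fun, Types...)
{
    static if (Types.length == 0)
        enum allCompile = true;
    else
        enum allCompile = is(typeof(fun(Types[0].init)))
            && allCompile!(fun, Types[1 .. $]);
}

// A template function we want compile-time coverage for:
auto doubleIt(T)(T x) { return x * 2; }

unittest
{
    static assert(allCompile!(doubleIt, IntegerTypes));
}

void main() {}
```

Run with `dmd -unittest -run`; the check happens entirely at compile time, so no templated test data is needed.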
Re: Helpers for writing unittests
On Friday, 31 July 2015 at 00:30:23 UTC, Jonathan M Davis wrote: On Friday, 31 July 2015 at 00:07:43 UTC, Idan Arye wrote: Thoughts? Some unit test helpers for this sort of thing might be nice, but I don't think that it really buys us much with this particular case. You could just as easily do unittest { foreach(T; TypeTuple!(ubyte, byte, ushort, short, uint, int, ulong, long)) static assert(is(typeof(foo(T.init)))); } and the code is basically as long as it is with assertCompilesWith - shorter even. The above example is longer due to not using an alias for the integral types like you did, but if that same alias were used, then it becomes unittest { foreach(T; IntegerTypes) static assert(is(typeof(foo(T.init)))); } which isn't all that different than unittest { assertCompilesWith!(IntegerTypes, (x) { foo(x); }); } The resulting compilation errors are extremely different, though. With your method we get: /d391/f994.d(31): Error: static assert (is(typeof(__error))) is false which doesn't tell us what the problem is. With my method we get: /d433/f500.d(25): Error: cannot implicitly convert expression (x) of type ulong to int /d433/f500.d(31): Error: template instance f500.foo!ulong error instantiating And yes, a lot of static stack trace comes after that, but these two lines tell us what the problem is and which template parameters caused it. Of course, if you just ran the function inside the foreach loop you'd get nice error messages as well - but then you'd have to write tests that actually run. Which is easy for numbers, because they are all the same type of data with different sizes, but can get tricky when you have more complex types that differ more in their behavior.
Re: with(auto x = ...)
On Sunday, 26 July 2015 at 07:28:45 UTC, Kapps wrote: On Friday, 24 July 2015 at 15:01:29 UTC, Adam D. Ruppe wrote: [...] The with statement is one where I think it would be interesting to make it an expression. For named parameters (admittedly, I find this one a bit ugly): foo(with(ParameterTypeTuple!foo) { abc = 2, def = 3 }); Or just: auto args = with(ParameterTypeTuple!foo) { abc = 2, def = 3 }; foo(args); For initialization: auto a = with(new FooBar()) { name = "Foo", bar = 3 }; Or: with(new Thread(foo) { isDaemon = true }).start(); Sadly, it'll break all the code that currently uses it, since we'll now need to terminate it with a semicolon.
Re: with(auto x = ...)
On Sunday, 26 July 2015 at 14:49:40 UTC, Timon Gehr wrote: On 07/26/2015 01:04 PM, Idan Arye wrote: On Sunday, 26 July 2015 at 07:28:45 UTC, Kapps wrote: On Friday, 24 July 2015 at 15:01:29 UTC, Adam D. Ruppe wrote: [...] The with statement is one where I think it would be interesting to make it an expression. For named parameters (admittedly, I find this one a bit ugly): foo(with(ParameterTypeTuple!foo) { abc = 2, def = 3 }); Or just: auto args = with(ParameterTypeTuple!foo) { abc = 2, def = 3 }; foo(args); For initialization: auto a = with(new FooBar()) { name = "Foo", bar = 3 }; Or: with(new Thread(foo) { isDaemon = true }).start(); Sadly, it'll break all the code that currently uses it, since we'll now need to terminate it with a semicolon. Well, no. That does not follow. We can have both a with statement and a with expression. Mmm... but how will we tell them apart? The style in Kapps' example could fit into Rust, but looks weird in D. How about something that resembles the difference between expression and block lambdas: with (...) { ... } // statement with with (...) = ... // expression with While it may differ from lambdas, since in lambdas both forms are expressions, it's similar in that the version without the = accepts a block of statements and the version with the = accepts an expression.
Re: Deduce template arguments from return value?
On Sunday, 12 July 2015 at 09:13:03 UTC, Yuxuan Shui wrote: For example: import std.conv; T a(T)(int a) { return to!T(a); } void main(){ string x = a(2); } D is not able to deduce T. Can we make it possible to deduce template arguments from where the return value is assigned to? Rust is able to do this: fn main() { let a: Vec<i32> = Vec::new(); } (In fact, you can even do this in Rust: fn main() { let mut a = Vec::new(); a[0] = 0i32; }) Just like ML, Rust's amazing type inference comes with a price - a super strict type system. D has a less strict type system, which allows - for example - implicit conversions in some cases (consider http://dpaste.dzfl.pl/ed83a75a48ba). For D to support Rust's kind of type inference, its type system would need to be completely replaced with something much more strict. Whether or not you think such type systems are good - this change would result in massive code breakage.
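For comparison, the working D version of the snippet above needs the instantiation spelled out at the call site (a small sketch of the status quo, not a proposal):

```d
import std.conv : to;

T a(T)(int value)
{
    return to!T(value);
}

void main()
{
    // D deduces template arguments only from the function arguments,
    // never from the assignment target, so T must be explicit:
    string x = a!string(2);
    assert(x == "2");
}
```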
Re: Deduce template arguments from return value?
On Sunday, 12 July 2015 at 14:13:22 UTC, Timon Gehr wrote: On 07/12/2015 02:52 PM, Idan Arye wrote: [...] Strictness is not really the main problem here. Even if your language supports implicit conversions/overloading, the language can just give you back an error in case of unresolvable ambiguity, as in your example. The example given in the OP has an obvious correct answer `string`, even though `const(string)` would in principle be possible as well. It is more about the issue that D's type system is Turing complete, hence it is hard to come up with a very principled set of deduction rules. Maybe something like: If the computation of the return type does not involve introspection on any unspecified template argument, template arguments can be deduced from the return type. Implementation is roughly: If an IFTI call has unresolved arguments, but there are restrictions on the return type, instantiate all remaining overloads of the template with wildcard arguments that resist any kind of introspection and analyze everything possible, ignoring template constraints and gagging any compilation errors. As soon as the return types for every overload have been determined in terms of the wildcards, unify them with what you know about the required return type and check the template constraints in an attempt to remove the remaining ambiguity. Error out if anything remains ambiguous. That's a good point, which raises quite a concern - if this type inference is used in a templated function, the function will work with simple template parameters (ones that the deduction system can handle) but not with complex ones (e.g. ones that use an auto return type). This will make development of these templates harder, because you won't be able to test them with simple parameters...
Re: Extend D's switch statement?
On Wednesday, 8 July 2015 at 09:57:16 UTC, ketmar wrote: On Wed, 08 Jul 2015 07:15:25 +0000, Yuxuan Shui wrote: I think it will be useful to extend the switch statement to support any type that implements opEquals and opHash. Are there any technical difficulties in implementing this? no, you can do that* with some template programming. so there is no reason to increase compiler complexity. * with some limitations, but they aren't that big. http://dlang.org/phobos/std_algorithm_comparison.html#.predSwitch
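For reference, the library solution linked above looks roughly like this (a short sketch; `predSwitch` really is in `std.algorithm.comparison`):

```d
import std.algorithm.comparison : predSwitch;

void main()
{
    // predSwitch matches with == by default, so it works for any
    // type with a suitable opEquals - strings, structs, etc.
    string word = "two";
    int n = word.predSwitch(
        "one", 1,
        "two", 2,
        "three", 3,
        -1); // default value when nothing matches
    assert(n == 2);
    assert("banana".predSwitch("one", 1, -1) == -1);
}
```

Without the trailing default value, an unmatched expression is an error, so the default plays the role of `default:` in a built-in switch.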
Re: How to avoid multiple spelling `import`
On Tuesday, 16 June 2015 at 09:33:22 UTC, Dennis Ritchie wrote: Hi, I can write this: import std.range : chain, split; But I cannot write this: import std.range : chain, split, std.algorithm : map, each; We have to write the word `import` several times: import std.range : chain, split; import std.algorithm : map, each; Does D have something to solve this problem? Maybe there is something like: import std.range{chain, split}, std.algorithm{map, each}; import std.range(chain, split), std.algorithm(map, each); import { std.range : chain, split; std.algorithm : map, each; } There is no problem to be solved here. Having to type `import` for each imported module is not a big enough burden to justify this additional syntax.
Re: ReturnType and overloaded functions
On Friday, 12 June 2015 at 23:26:00 UTC, Yuxuan Shui wrote: When there are multiple overloaded functions, whose return type will I get when I use ReturnType? Is there a way I could choose a specific function by its parameter types? The return type of the first declared one: http://dpaste.dzfl.pl/f448ec624592
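To illustrate (a minimal sketch of the behavior described in the linked paste):

```d
import std.traits : ReturnType;

int foo(int x) { return x; }
string foo(string s) { return s; }

// ReturnType!foo only looks at the first declared overload:
static assert(is(ReturnType!foo == int));

// To pick a specific overload, take the type of a call expression
// built with the parameter types you care about:
static assert(is(typeof(foo(1)) == int));
static assert(is(typeof(foo("hi")) == string));

void main() {}
```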
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Thursday, 11 June 2015 at 13:21:27 UTC, Dave wrote: Exceptions are not meant to force handling errors at the source. This attitude is why so many exceptions go unhandled at the upper layers. When you have a top level that calls a function that calls 50 other functions, each of which throws a handful or more different exceptions, it's unreasonable to expect the upper-layer coder to account for all of them. In fact, in practice they usually don't until bugs arise. This is provably bad practice. I'd rather have an exception unhandled at the top level than discarded at a middle level. It's much easier to debug when you get a proper stack trace. Also, the top-level handling can be very generic if its purpose is not to solve the problem but to log it and to allow the user to continue using the other parts of the program as much as possible. If you want to force handling errors at the source they should be part of the return type. Again, what errors are worth throwing as exceptions in your paradigm? Which ones are worth returning? This separation is too arbitrary for my taste. Exceptions are for when something went wrong. Returned errors are for when the function can't do what you asked it to do, but that doesn't mean that something went wrong. For example, if you try to write to a file and fail, that's an exception, because something went wrong (e.g. not enough disk space, or a permissions problem). But if you have a function that parses a string to a number and you call it with a non-numeric string - that doesn't necessarily mean that something went wrong. Maybe I don't expect all strings to be convertible to numbers, and instead of parsing each string twice (once for validation and once for the actual conversion) I prefer to just convert and rely on the conversion function to tell me if it's not a number?
Note that this doesn't mean that every time a function returns an error there isn't a problem - returned errors can indicate problems; the point is that it's not up to the callee to decide, it's up to the caller. The conversion function doesn't know if I'm parsing a YAML file, where a field value that's not a number just means it's a string, or my own custom file format, where something specific not being a number means the file is corrupted. In the latter case, I can convert the returned error to an exception (the returned error's type should have a method that returns the underlying result if it's OK and raises an exception if there was an error), but it's the caller's decision, not the callee's. Exceptions are not hard fails. They can be if they go unaccounted for (depending on the language/environment). Java has the infamous NullPointerException that plagues Java applications. C# has the NullReferenceException. Even if they go unaccounted for, you still get a nice stack trace that helps you debug them. Maybe we have different definitions of hard fail...
You can't prevent everything, but you can prevent a good deal of the obvious stuff. This is just an extension of that mindset. So it is not really that outlandish. It is the other restrictions (without getting into a discussion about each and every restriction in the list) that make the code safer - nothrow doesn't really contribute, IMO. Without the nothrow, you cannot guarantee it won't cause problems with unhandled errors ;) Seems like a nice guarantee to me. I would at least like this option, because library writers often try to write in an idiomatic way (and I tend to use the most reputable libraries I can find), which gives you some guarantees. The guarantee would be better served by default IMHO though. Even with nothrow you can't guarantee a function won't cause problems with unhandled errors - unless you use a very strict definition of handling errors, one that includes discarding them or crashing the program. nothrow can only guarantee the function won't expose any problems you can use the exceptions mechanism to debug or deal with - not very useful, considering how easy it is to convert an error
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Thursday, 11 June 2015 at 00:27:09 UTC, Dave wrote: The promise of exceptions isn't to not have to specifically handle errors in every layer Correct me if I am wrong, but aren't exceptions in D used for general error handling? Doesn't Phobos prefer exceptions over return codes? It seems to be a controversial subject: https://github.com/D-Programming-Language/phobos/pull/1090#issuecomment-12737986 it's to not care about exceptions in every layer. My second job was working in the sustaining department of a large company. One of the most frequent causes of bugs was people not checking errors. The fix was almost always to handle the error at its source and log or signal a message to some IO informing the user. Users tend to prefer messages over complete meltdowns. Too many avoidable outages and crashes came by my desk due to this attitude. Very few domains/cases require hard fails. Exceptions are not meant to force handling errors at the source. If you want to force handling errors at the source they should be part of the return type. I'm not talking about C-style return-error-code-and-write-the-actual-result-into-a-variable-passed-by-reference - functional languages like Haskell, Scala and Rust have shown that monadic sum types of result/error are both safe (they don't allow you to access the returned value unless you do it in a path that makes sure there wasn't an error, or convert the error into an exception) and easy to use (they add very little syntactic overhead for the good path, and the bad-path code is simpler than in exception handling). Exceptions are not hard fails. You don't have to crash the entire program - just fail the action the user was trying to do and display a nice error message with whatever information you can and want to provide to the user. Even if you don't handle them, they provide information useful for debugging.
I don't want to pass the annotation in each and every intermediate layer Seems like an odd comment to have on the forum of a language that frequently has functions in the standard library annotated like this: pure nothrow @nogc @safe real abs(Num)(Num y) Just because I use the language doesn't mean I have to like every single one of its features... I've programmed enough Java to know how harmful nothrow-by-default is. I disagree on this. Let's agree to disagree. I agree to disagree It doesn't really guarantee the functions not annotated as throwing won't crash Combined with other guarantees (such as immutability, thread local storage, safe memory management, side-effect free code, no recursion, etc), you can make a reasonable guarantee about the safety of your code. And a superhero cape, combined with an airplane, airplane fuel and flight school, allows you to fly in the air. It is the other restrictions (without getting into a discussion about each and every restriction in the list) that make the code safer - nothrow doesn't really contribute, IMO. If it's an error that the caller needs to know about - make it part of the return type. If it doesn't need to know about it - throw an exception and let someone up the line handle it. I don't agree with this. Too much to worry about. Impractical to maintain both paradigms. What errors don't you need to know about? Scala and Rust seem to maintain both paradigms just fine. It's actually beneficial to have both - you have to acknowledge return-type-based errors, and you can always bypass them by turning them into exceptions, which are good for logging and debugging. If exception handling is enforced, exceptions can only be bypassed by converting them to errors or crashes, which are much less nice than exceptions when it comes to debugging, logging and cleanup.
If I have to be explicit about not handling an error somewhere, I prefer the "this exact thing here can return an error; I assume it won't, but if it does you'll get an exception so it can be debugged" approach over the "not that I care, but something, somewhere down that path might throw" approach. handle it or ignore it. The process I mentioned would not prevent this in any way. Just inform others of the decision you made that may have adverse effects on the stability of their code. Writing code that acknowledges that this code can fail due to an exception somewhere else does not count as ignoring it.
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Thursday, 11 June 2015 at 21:57:36 UTC, Dave wrote: In regards to being faster, I'm not a big fan of exceptions in the first place. This probably explains my perspective on them, but I am familiar with their typical use case. And it's to communicate errors. I'd much prefer something like what Andrei presented during one of his C++ talks (Expected<T>). If I were to design a language today, I might try to incorporate that somehow with some semantic sugar and a handle-by-default mentality. I've reordered your post a bit so I can refer to this part first. I think you and I refer to different things when we talk about returned errors, and with the definition I think you have in mind I do see how my arguments can look like the mumbling of a madman. So I'm going to clear it up first. In a previous post here (http://forum.dlang.org/post/riiuqazmqfyftppmx...@forum.dlang.org), I said that I'm not talking about C-style return-error-code-and-write-the-actual-result-into-a-variable-passed-by-reference - functional languages like Haskell, Scala and Rust have shown that monadic sum types of result/error are both safe(...) and easy to use(...). What I refer to by monadic sum types of result/error is pretty similar in concept to Expected!T, though what I have in mind is much closer to what functional languages have, which allows both easier handling inside expressions and a guarantee that the underlying result will only be used in a path where it was checked that there is no error, or after the user explicitly said it's OK to convert it to an exception. This is what's done in Rust (except errors are converted to panics, which are harder to log and contain than exceptions), it can be done in D, and of course a new language that chooses this approach can have syntax for it.
Here is an example of how it would be used in D:

Expected!int parseInt(string txt) {
    if (/*parsing successful*/) {
        return expected(parsedValue);
    } else {
        return error();
    }
}

// Propagation - functional style
Expected!string formatNextValueText(string origNumber) {
    return parseInt(origNumber).ifOK!(
        // good path
        parsedValue => "After %s comes %s".format(parsedValue, parsedValue + 1).expected,
        // error path
        () => error());
}

// Propagation - imperative style
Expected!string formatNextValueText(string origNumber) {
    auto parsedValue = parseInt(origNumber);
    if (auto parsedValuePtr = parsedValue.getIfOK()) {
        return "After %s comes %s".format(*parsedValuePtr, *parsedValuePtr + 1).expected;
    } else {
        return error();
    }
}

// Conversion to exception
string formatNextValueText(string origNumber) {
    // This will throw an exception if the parsing failed, and
    // return the parsed value if it succeeded:
    auto parsedValue = parseInt(origNumber).enforceOK();
    return "After %s comes %s".format(parsedValue, parsedValue + 1);
}

Now to answer the rest of your post, which actually came first: He is saying that now anything that throws will not only be slow but also have the same limitations as returned errors. nothrow by default is combining the slowness of exceptions with the limitedness of returned errors. He literally said "combine the slowness of exceptions". So I don't know how to read that other than he said it's slow. But perhaps I am just misunderstanding his wording, so perhaps it's best I just assume I misunderstood him. I also literally said "with the limitedness of returned errors". That part is an important part of the sentence. My point is that the exception mechanism is much less limited than the returned-value mechanism, because it lets you handle the error in a higher level without modifying the middle levels to acknowledge it. The price for this is slowness.
The benefits of nothrow by default come naturally with returned errors - the users can't implicitly ignore them, and since the possible errors are encoded into the return type, any tool that can display the return type can show you these errors. With nothrow by default, you are paying the performance price of an exception mechanism, and then doing extra work to add to it the limitations of returned errors, just so you can get the benefits that come naturally with returned errors. Wouldn't it be better to just use returned errors in the first place? but also have the same limitations as returned errors That is a legitimate concern, but I don't think it is correct. The transitive nature would enforce that you at least handle it at some point along the chain. Nothing would force you to handle it right away. Although I think in most cases it's far better to do it when the error occurs (but this is my style). But when you don't, there would at least be a flag saying "this might fail" that you and others
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Wednesday, 10 June 2015 at 16:02:36 UTC, Ola Fosheim Grøstad wrote: Yeah, I think it would be nice if one could change the culture of programming so that people easily could combine any 2 languages in the same project. But that takes either significant creator-goodwill/cooperation or platforms like .NET/JVM. I could see myself wanting to do some things in Prolog, some things in Lisp and some things in C. Today that takes too much FFI work. Wasn't LLVM supposed to solve that, being a virtual machine for compilation to low level native code?
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Wednesday, 10 June 2015 at 18:13:53 UTC, Dave wrote: On Wednesday, 10 June 2015 at 17:34:55 UTC, jmh530 wrote: 3) Immutability by default. Someone (somewhere) made an interesting point that it can be conceptually convenient to have the most restrictive choice as the default. I'm not sure I agree with that (maybe the most common choice used in good code), but at least immutability by default can be helpful for concurrent programming. I am one of those that think that a language should allow you to do whatever you want, but be restrictive by default. For example, immutability unless you explicitly ask for mutability (like in Rust). D sort of has the ability to do this, but it's sort of backwards due to its defaults. For instance, D is mutable by default (an inherited trait due to the C subset of the language), with the ability to explicitly mark values as immutable. Another backwards annotation is nothrow. I don't really care if something doesn't throw; I care when it throws, because then I have to do something (or my program may crash unexpectedly). Even if the enforcement is kind of there (although unannotated functions can do whatever), it would have been a better guarantee to disallow this by default. I usually agree that the more restrictive option should be the default, but exceptions are... well... the exception. The whole point of the exceptions system is to limit the number of points where you need to worry about something going wrong to the place where it happens and the places where you want to do something special with it. nothrow by default means you can't do that - you have to care about the exception at every single point in between. The result is a better syntax for the exact same thing the exceptions idiom was supposed to prevent...
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Wednesday, 10 June 2015 at 19:56:00 UTC, Dave wrote: I usually agree that the more restrictive option should be the default, but exceptions are... well... the exception. The whole point of the exceptions system is to limit the number of points where you need to worry about something going wrong to the place where it happens and the places where you want to do something special with it. The point of exceptions is to communicate errors and where they originate. As long as this is respected, then I am not following your complaint. you have to care about the exception at every single point in between. That's the point. If you don't annotate the function (with a throw keyword, for instance), then you are forced to catch and handle the exception. If you annotate it (saying this function will throw), no catch is needed, but at least you are communicating to the next layer that this code *does* have errors that should be accounted for. nothrow by default means you can't do that Actually, no guarantees by default means you can't do what I explained above. The promise of exceptions isn't to not have to specifically handle errors in every layer - it's to not care about exceptions in every layer. I don't want to pass the annotation in each and every intermediate layer - I want only the throwing layer and the catching layer to acknowledge the exception's existence. The middle layers should be written like there is no exception, and when there is one, they will simply fail and let RAII automatically do the cleanup. I've programmed enough Java to know how harmful nothrow-by-default is. It doesn't really guarantee the functions not annotated as throwing won't crash - only that if they do crash there is nothing you can do about it.
This mechanism is so easy to bypass in harmful ways (catch and rethrow something that doesn't need annotation (like Error), or terminate the process with a special exit function), and annotating the layers all the way up is so cumbersome (and sometimes impossible - when you override functions, or pass delegates), that this mechanism tends to encourage bypassing it, which harms the debugging. Of course, nothrow as the optional annotation is a different syntax for the same semantics, so it suffers from the same drawbacks, but it has the benefit of seldom being used and therefore seldom getting in your way. Which leads me to the final conclusion - there shouldn't be nothrow, not by default and not as a special annotation! (I'm not talking about D here - this isn't worth the code breakage - but generally about programming languages.) If it's an error that the caller needs to know about - make it part of the return type. If it doesn't need to know about it - throw an exception and let someone up the line handle it. Rust got it right - though they made it a bit cumbersome to catch `panic`s. Why would I need to catch panics? To display them nicely to the user (don't just dump to stderr - pop up a window that apologizes and prompts the user to email the exception data to the developers) or to roll back the changes (yes, there was an irrecoverable error in the program. That doesn't give me the right to corrupt user data when I can avoid it). But the point is - you can either handle it or ignore it. The "sign here and we'll pass it on" bureaucracy is not benefiting anyone.
Re: Self-referential tuples?
On Tuesday, 9 June 2015 at 23:04:41 UTC, Andrei Alexandrescu wrote: On 6/9/15 3:58 PM, Timon Gehr wrote: On 06/09/2015 05:28 PM, Andrei Alexandrescu wrote: Following the use of This in Algebraic (https://github.com/D-Programming-Language/phobos/pull/3394), we can apply the same idea to Tuple, thus allowing one to create self-referential types with ease. Consider: // A singly-linked list is payload + pointer to list alias List(T) = Tuple!(T, This*); // A binary tree is payload + two children alias Tree(T) = Tuple!(T, This*, This*); // or alias Tree(T) = Tuple!(T, "payload", This*, "left", This*, "right"); // A binary tree with payload only in leaves alias Tree2(T) = Algebraic!(T, Tuple!(This*, This*)); Is there interest in this? Other application ideas to motivate the addition? Andrei Well, the issue is with this kind of use case: alias List(T) = Algebraic!(Tuple!(), Tuple!(T, This*)); So a list is either nothing, or a head and a tail. What is the problem here? -- Andrei The `This*` here is not mapped to `Algebraic!(Tuple!(), Tuple!(T, This*))` - it's mapped to the closest containing tuple, `Tuple!(T, This*)`. This means that the tail is not a list - it's a head and a tail. The list is either empty or infinite. At any rate, I think this feature is useful enough even if it doesn't support such use cases. You can always declare a list as a regular struct...
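For the record, the plain-struct version of the list works today without any `This` support (a small sketch):

```d
// A self-referential singly-linked list as an ordinary struct:
struct List(T)
{
    T head;
    List!T* tail; // null marks the end of the list
}

void main()
{
    auto third = List!int(3, null);
    auto second = List!int(2, &third);
    auto first = List!int(1, &second);

    assert(first.head == 1);
    assert(first.tail.head == 2);
    assert(first.tail.tail.head == 3);
    assert(first.tail.tail.tail is null);
}
```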
Re: static foreach considered
On Monday, 8 June 2015 at 21:32:52 UTC, Timon Gehr wrote: I think the body should have access to a scope that is hidden from the outside which contains the loop variable, but declarations should be inserted into the enclosing scope like for static if. This would require some syntax to mark the declarations we want to expose. Maybe `out`? This is far better than the mixin template approach, since it'll alert us early about conflicts:

static foreach (ident; ["a", "b", "a"]) {
    int mixin(ident ~ "1");
    out int mixin(ident ~ "2");
}

`a1` is created twice, but that's OK since it isn't marked with `out`. `a2` is declared twice and raises a compilation error because it's marked with `out`. This will ensure these kinds of errors are detected early and the compilation error points to the exact place of declaration.
Re: static foreach considered
On Monday, 8 June 2015 at 20:02:11 UTC, Andrei Alexandrescu wrote: Walter and I are looking at ways to implement it. Here's a baseline without static foreach - a trace function that prints function calls before they are made: http://dpaste.dzfl.pl/762c83c7fe30 If the function is overloaded, that won't work. In such cases, static foreach might be helpful. Here's code from the cycle I have a dream: http://dpaste.dzfl.pl/82a70c809210 I'm trying to collect together motivating examples and to figure out the semantics of the feature. Andrei

How will scoping work? Similar to mixin templates? It would be nice, together with this feature, to be able to mix in identifiers:

static foreach (ident; ["foo", "bar"]) {
    auto mixin(ident)() {
        // code for foo/bar
    }
}

Otherwise, other than overloads and template instantiations, this won't be much better than generating code strings with CTFE...
Re: static foreach considered
On Monday, 8 June 2015 at 21:14:46 UTC, Jonathan M Davis wrote: I would assume that it would be pretty much the same as doing foreach(T; TypeTuple!(...)) { ... } except that you're not forced to shove everything in a TypeTuple. - Jonathan M Davis

If that were the case, a library solution for converting a compile-time range to a TypeTuple would have sufficed (http://dpaste.dzfl.pl/7eb30f5e1156 - this compiles in 2.67). The problem with a regular `foreach` over a type tuple is that declarations inside the foreach's body are invisible from the outside. If `static foreach` had this limitation, Andrei's example wouldn't work, since `trace` would be local to the body of the `static foreach`. This essentially renders the main use case of this feature (declaring stuff) useless and leaves us with a loop unrolling optimization...
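The scoping limitation described above is easy to demonstrate with today's foreach over a type tuple (a minimal sketch):

```d
import std.typetuple : TypeTuple; // later renamed std.meta.AliasSeq

void main()
{
    foreach (T; TypeTuple!(int, double))
    {
        // Each unrolled iteration gets its own scope, so this
        // declaration is invisible outside the loop body:
        T local = T.init;
    }
    // 'local' does not exist at this scope:
    static assert(!__traits(compiles, local));
}
```

If static foreach inherited exactly this behavior, you could never use it to declare members or overloads visible to the rest of the aggregate, which is the feature's whole point.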
Re: static foreach considered
On Monday, 8 June 2015 at 22:16:50 UTC, Timon Gehr wrote: On 06/09/2015 12:12 AM, Idan Arye wrote: On Monday, 8 June 2015 at 21:32:52 UTC, Timon Gehr wrote: I think the body should have access to a scope that is hidden from the outside which contains the loop variable, but declarations should be inserted into the enclosing scope like for static if. This would require some syntax to mark the declarations we want to expose. Maybe `out`? This is far better than the mixin template approach, since it'll alert us early about conflicts:

static foreach (ident; ["a", "b", "a"]) {
    int mixin(ident ~ "1");
    out int mixin(ident ~ "2");
}

`a1` is created twice, but that's OK since it isn't marked with `out`. `a2` is declared twice and raises a compilation error because it's marked with `out`. This will ensure these kinds of errors are detected early and the compilation error points to the exact place of declaration.

I actually intended all declarations in the body to be inserted into the enclosing scope, at least by default.

What about helper declarations that repeat in each static iteration? It can work like with mixin templates, where declarations hide each other (http://dpaste.dzfl.pl/c173395eb0cd), but that means that if there is a repeat in the declaration you do want to expose, the compiler will simply hide it without issuing an error. You will get an error when you try to access that declaration from somewhere else, but that error message is distant from the root cause both in time - you might only write the code that accesses the declaration created by that particular iteration much later in the development process - and in space - the error will point to the point of usage, not the point of the duplicate declaration. Also, if the point of usage is inside a template and depends on the template instantiation, this kind of error is much harder to debug...
As for exposing the declaration by default - unless there is a backward compatibility issue, it's usually best to make the most restrictive and contained version the default one. If not exposing is the default and someone neglects to mark the exposed declaration, it will fail immediately when they try to access it (and they will. Immediately. Because that's the code they are writing right now) and they can just add the annotation. But if exposing is the default, and someone neglects to mark the internal helpers as non-exposed, well - they'd better hope that there are duplications that'll expose their mistake. This is not always trivial:

struct Foo(Types...) {
    static foreach (Type; Types) {
        static if (isSomeString!Type) {
            // I forgot to mark this as non-exposed
            void stringHelper() {
                // helper for strings
            }
            void doSomething(Type arg) {
                // Something that uses stringHelper
            }
        } else {
            void doSomething(Type arg) {
                // The non-string version
            }
        }
    }
}

unittest {
    alias MyFoo = Foo!(int, float, string);
    // some tests with MyFoo
}

Since I only have one string in MyFoo's types list, `stringHelper` is only declared once. A month from now, when I try to create `Foo!(string, wstring)`, it'll create two `stringHelper`s and result in a compilation error. Having an error show up a month later is not fun. It's much less fun when it pops up for someone else who now needs to figure out what you were trying to do... That's why I think not exposing should be the default. In that case, since `doSomething` is not marked as exposed, this will fail early, because we can safely assume the exposed functionality is being tested - even if I don't write a proper unit test, since `doSomething` is part of `Foo`'s API I will try to use it (if you write an API without at least trying it out, you deserve whatever method of torture the people that use that API can think of), so the bug will pop up early.
Re: static foreach considered
On Monday, 8 June 2015 at 22:15:32 UTC, rsw0x wrote: On Monday, 8 June 2015 at 20:02:11 UTC, Andrei Alexandrescu wrote: I'm trying to collect together motivating examples and to figure out the semantics of the feature. maybe not completely related, but I made a blog post on using CTFE to unroll foreach at compiletime https://rsw0x.github.io/post/switch-unrolling/ I find myself often writing recursive templates for compile-time generation of constructs that could be done cleaner with static foreach.

I also use this method a lot, and sometimes encounter this bug: http://dpaste.dzfl.pl/16af3c5dad73 The break inside the `foreach` is breaking from the `foreach`, not from the `switch`, so it continues to execute the `default` clause. This is not really a bug - `foreach` unrolling is more of a loop unrolling optimization that we hijack, so it makes sense that `break` inside it acts like it's inside a regular `foreach`. With `static foreach`, we might want `break` (and `continue`) to operate on the containing runtime control structure.
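The paste link has since died; a reconstruction of the behavior being described might look like this (the exact code is my guess, following the switch-unrolling idiom from the linked blog post):

```d
import std.typetuple : TypeTuple;
import std.stdio : writeln;

void main()
{
    int x = 1;
    switch (x)
    {
        foreach (i; TypeTuple!(0, 1, 2))
        {
            case i:
                writeln("matched ", i);
                break; // exits the unrolled foreach, NOT the switch...
        }
        default:
            writeln("...so execution falls into default as well");
    }
}
```

Because `break` targets the innermost breakable construct - the foreach - control lands just past the loop, which is exactly where the `default:` label sits, so both messages are printed. A label on the switch (`outer: switch (x) { ... break outer; ... }`) avoids the surprise.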
Re: Why is there no named parameter support?
On Monday, 8 June 2015 at 20:36:10 UTC, Yuxuan Shui wrote: Is there any reasons/difficulties for not implementing named parameters? There is clearly a need: http://forum.dlang.org/thread/wokfqqbexazcguffw...@forum.dlang.org#post-pxndhoskpjxvnoacajaz:40forum.dlang.org http://forum.dlang.org/thread/hdxnptcikgojdkmld...@forum.dlang.org
Re: static foreach considered
On Tuesday, 9 June 2015 at 00:48:34 UTC, rsw0x wrote: On Tuesday, 9 June 2015 at 00:01:07 UTC, Idan Arye wrote: On Monday, 8 June 2015 at 22:15:32 UTC, rsw0x wrote: On Monday, 8 June 2015 at 20:02:11 UTC, Andrei Alexandrescu wrote: I'm trying to collect together motivating examples and to figure out the semantics of the feature. maybe not completely related, but I made a blog post on using CTFE to unroll foreach at compiletime https://rsw0x.github.io/post/switch-unrolling/ I find myself often writing recursive templates for compile-time generation of constructs that could be done cleaner with static foreach. I also use this method a lot, and sometimes encounter this bug: http://dpaste.dzfl.pl/16af3c5dad73 The break inside the `foreach` is breaking from the `foreach`, not from the `switch`, so it continues to execute the `default` clause. This is not really a bug - `foreach` unrolling is more of a loop unrolling optimization that we hijack, so it makes sense that `break` inside it acts like it's inside a regular `foreach`. With `static foreach`, we might want `break` (and `continue`) to operate on the containing runtime control structure. I knew there was something I was forgetting in that short example, thanks for the reminder. Interestingly, the assembly generated with `break` and `break label` with a label on the switch is exactly the same. I don't have time right now to go review the spec, so I have no idea if that's correct.

Why wouldn't it be? If we neglect RAII (for simplicity), `break` jumps to the first instruction after the `foreach`, and `break label` with a label on the `switch` jumps to the first instruction after the `switch`. Since there is nothing in the `switch` after the `foreach`, the first instruction after the `foreach` is also the first instruction after the `switch`, so the command to jump to that instruction is the same, and the assembly is the same.
Re: We need to have a way to say convert this nested function into a struct
On Saturday, 6 June 2015 at 06:16:17 UTC, Andrei Alexandrescu wrote: Nested functions that allocate their environment dynamically can be quite useful. However, oftentimes the need is to convert the code plus the data needed into an anonymous struct that copies the state inside, similar to C++ lambdas that capture by value. I wonder how to integrate that within the language nicely. Andrei My solution: http://dpaste.dzfl.pl/aa013ff51f60 Can't make it @nogc though, because it thinks I'm trying to capture `a` and `b`...
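Since the paste link is dead, here is a hand-rolled sketch of the capture-by-value idea being discussed - the state is copied into a struct with an `opCall`, so no GC allocation is needed for the environment (names are mine):

```d
import std.stdio : writeln;

// A "capture by value" closure written by hand: the captured state
// lives inside the struct itself.
struct Adder
{
    int a, b; // captured by value at construction time

    int opCall(int c) const { return a + b + c; }
}

void main()
{
    int a = 1, b = 2;
    auto add = Adder(a, b); // copies a and b into the struct
    a = 100;                // does not affect the capture
    writeln(add(3));        // prints 6
}
```

This is the same shape as a C++ lambda with `[=]` capture; the language-integration question is how to get the compiler to generate such a struct from a nested function automatically.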
Re: Entry point a la git or go
On Sunday, 31 May 2015 at 23:11:24 UTC, Andrei Alexandrescu wrote: There's this recent trend seen with git and go - all tooling is entered by a single command. Are there advantages to doing the same for our toolchain? (Consider it'll include things such as dub, dfix, and dformat in the future.) Something like dc, a program that parses the command line and dispatches it to the appropriate tool(s). How would this add value to our toolchain? Andrei

This is creating a namespace, which has all sorts of benefits: 1) Easier help - `dc --help` shows a basic description of all the subtools. 2) Better names - since all the tools reside in the `dc` command anyway, they can have names based on what they do and not some acronym or a made-up project name. Instead of `dub` and `rdmd` we can have `dc build` and `dc run-script` - which convey what they do right away. 3) Containment - since all the tools are under `dc`, and assuming `dc` can have some general-purpose flags, it's easier to build wrappers around it. You've mentioned git - there is a Git plugin for Vim called Fugitive that, in addition to implementing commands that wrap specific Git commands with special treatment, also offers the general `:Git` command, which can run any Git command you like and sets the general `git` flags to play better with Vim. It can do so because all Git commands start with `git `. Not sure yet how this property of a single-entry-point toolchain will benefit D though - it depends on what the general flags for `dc` will be...
Re: As discussed in DConf2015: Python-like keyword arguments
On Sunday, 31 May 2015 at 04:08:33 UTC, ketmar wrote: and this: void test(A...) (A a) { import std.stdio; foreach (auto t; a) writeln(t); } void main () { test(x: 33.3, z: 44.4, a: , , d:Yehaw); } I like the idea of template-variadic keyword arguments, but does it have to have the exact same syntax as template-variadic positional arguments? What will happen to functions that expect positional variadic arguments and get invoked with keyword variadic arguments instead?
Re: dmd makes D appear slow
On Friday, 29 May 2015 at 19:16:45 UTC, Steven Schveighoffer wrote: On 5/29/15 12:58 PM, H. S. Teoh via Digitalmars-d wrote: On Fri, May 29, 2015 at 06:50:02PM +, Dicebot via Digitalmars-d wrote: On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote: This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise. I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :) Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs. myOpinion = (fastCompileTimes * 1 fastCode); -Steve For the development cycle too?
Re: DIP78 - macros without syntax extensions
On Wednesday, 27 May 2015 at 08:14:36 UTC, Kagamin wrote: On Tuesday, 26 May 2015 at 23:47:41 UTC, Dennis Ritchie wrote: If this proposal is considered, it is required to propose to look at the implementation of macros in Nemerle. Many believe that it is in Nemerle that macros are implemented most successfully compared to other modern languages. Of course, the most successful macros are implemented in Lisp, but the syntax of the language is poor :) The problem with a declarative macro system is that you would need to learn yet another language. Possibly Turing-complete. And a declarative Turing-complete language is overkill both for usage and implementation. Imperative macros get it done in an intuitive way in the existing language.

But D already has such a declarative language - the one used in template metaprogramming. I think a macro system that plays well with the template metaprogramming sub-language would be really nice. For example, CTFE that works like a macro and returns types/aliases:

Auto deduceType(Auto args) {
    // some complex imperative code to deduce the type from the args
    return DeducedType;
}

struct Foo(T...) {
    deduceType(T) value;
}
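Until such macros exist, part of this can be approximated with an eponymous template whose "imperative" logic runs at compile time via static if - a sketch only; `DeduceType` and its deduction rule are hypothetical, not the proposal's syntax:

```d
// A compile-time "function" that returns a type: static ifs stand in
// for the imperative code the proposal imagines.
template DeduceType(Args...)
{
    static if (Args.length == 1)
        alias DeduceType = Args[0];
    else
        // e.g. the common type of a binary arithmetic expression
        alias DeduceType = typeof(Args[0].init + Args[1].init);
}

struct Foo(T...)
{
    DeduceType!T value;
}

void main()
{
    Foo!(int, double) f;
    static assert(is(typeof(f.value) == double));
}
```

The awkwardness is that the logic must be expressed declaratively (recursion plus static if) rather than as ordinary imperative code, which is exactly the gap the macro proposal wants to close.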