Re: x64 Privileged instruction
On Wednesday, 12 September 2018 at 13:26:03 UTC, Stefan Koch wrote: > On Wednesday, 12 September 2018 at 10:42:08 UTC, Josphe Brigmo wrote: >> x64 gives Privileged instruction, but x86 gives: First-chance exception: std.file.FileException "C:\": The filename, directory name, or volume label syntax is incorrect. at std\file.d(4573), which is much more informative... Seems like a bug to me. > More context needed. What code produces this behavior? Lots of code. I pretty much always get this error. Just throw. It is a first-chance exception, so that should be clear enough. The point is that x64 doesn't seem to handle first-chance exceptions and gives a privileged instruction instead. This happens on Windows 10 with Visual D, and I've had it happen for a long time.
Re: Shared, ref, arrays, and reserve template instantiation
Great -- Thank you both. I previously found Unqual, but it looks like that needs template support, so it wasn't feasible, hence my question. Neia is right that I tried to cast as in the second case ( but without UFCS -- reserve( cast(int[]), N); ). As an aside, what is going on behind the scenes in the compiler when casting away a property? I did not think cast operations copied data, so I was surprised that a cast value is not an lvalue. Regarding Jonathan's comments, we had definitely protected the ~= operations with a Mutex, but realized we were doing lots of array appends in a hot loop, and since we have an idea of cardinality ahead of time, we just wanted to preallocate. Since it is all initialization before concurrent code enters the picture, we'll do what you've suggested and set it up as thread-local and then cast to shared. James
Re: Shared, ref, arrays, and reserve template instantiation
On Wednesday, September 12, 2018 5:41:16 PM MDT James Blachly via Digitalmars-d-learn wrote: > When I add the "shared" attribute to an array, I am no longer > able to call reserve because the template won't instantiate: > > Error: template object.reserve cannot deduce function from > argument types !()(shared(int[]), int), candidates are: > /dlang/dmd/linux/bin64/../../src/druntime/import/object.d(4091): > object.reserve(T)(ref T[] arr, size_t newcapacity) > > 1. Shared modifies the type, so the template does not match. Even > casting does not seem to work however. Is there something about > shared that makes it unable to be taken by reference? > 2. Is there a workaround for me to be able to preallocate the > array?

You can't do much of anything with shared while it's shared, which is pretty much the whole point. The way that shared needs to be used in general is essentially

synchronized(mutexForSharedObj)
{
    auto local = cast(Type)sharedObj;
    // do stuff with local...
    // ensure that no thread-local references to local / sharedObj exist
    // before releasing the mutex
}
// shared object is now essentially unusable again

Doing pretty much _any_ operation on a shared object while it's shared (other than atomic operations from core.atomic) is wrong, because it's not thread-safe. The compiler prevents most operations but not as many as it should (e.g. copying is currently legal). That will likely be fixed in the future, but exactly what's going to happen to shared in all of the fine details hasn't been sorted out yet. The basics work, but not all of the details are as they should be yet. So, if you're doing anything like calling reserve or ~= on a shared array, then you need to protect it with the mutex that you have for it and cast away shared first.
However, if you're just dealing with constructing the array, then what you should do is create it as thread-local and then cast it to shared after you're done setting it up and are ready to share it across threads (after which, all further operations on it should be protected by a mutex or use atomics, otherwise they're not thread-safe). - Jonathan M Davis
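Jonathan's advice — build the array as thread-local, publish it with a cast, and guard later mutation with a mutex — can be sketched as follows. This is only an illustrative sketch: the names (`gMutex`, `sharedData`, `append`) and sizes are made up for the example.

```d
import core.sync.mutex : Mutex;

__gshared Mutex gMutex;   // created once, before any threads are spawned
shared int[] sharedData;

void setup()
{
    gMutex = new Mutex;

    // Build and preallocate as thread-local; reserve works normally here.
    int[] local;
    local.reserve(1024);
    foreach (i; 0 .. 1024)
        local ~= i;

    // Done initializing: publish it as shared. No thread-local references
    // to `local` may be kept past this point.
    sharedData = cast(shared) local;
}

void append(int value)
{
    gMutex.lock();
    scope(exit) gMutex.unlock();

    auto local = cast(int[]) sharedData; // cast shared away under the lock
    local ~= value;                      // append may reallocate
    sharedData = cast(shared) local;     // publish the (possibly moved) array
}
```

The key invariant is the one Jonathan states: every non-atomic access happens on a thread-local view, and that view never escapes the lock.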
Re: Shared, ref, arrays, and reserve template instantiation
On Wednesday, 12 September 2018 at 23:41:16 UTC, James Blachly wrote: When I add the "shared" attribute to an array, I am no longer able to call reserve because the template won't instantiate: Error: template object.reserve cannot deduce function from argument types !()(shared(int[]), int), candidates are: /dlang/dmd/linux/bin64/../../src/druntime/import/object.d(4091): object.reserve(T)(ref T[] arr, size_t newcapacity) 1. Shared modifies the type, so the template does not match. Even casting does not seem to work however. Is there something about shared that makes it unable to be taken by reference? 2. Is there a workaround for me to be able to preallocate the array? Kind regards

I'm guessing you tried something like:

shared char[] x;

// Doesn't work; it casts the result of x.reserve
cast(char[])x.reserve(100);

// Doesn't work; (cast(char[])x) is not an lvalue, so it can't be ref
(cast(char[])x).reserve(100);

Arrays are passed and stored like pointers, and `reserve` modifies the array, which is why the thing needs to be ref. Anyway, it works like this:

// Cast and store in a variable so it can be ref
auto b = cast(char[]) x;

// Okay, reallocates b (changes b.ptr), doesn't change x
b.reserve(100);

// Copy changes back to the shared variable
x = cast(shared) b;
Re: Copy Constructor DIP and implementation
On Wednesday, September 12, 2018 5:55:05 PM MDT Nicholas Wilson via Digitalmars-d-announce wrote: > On Wednesday, 12 September 2018 at 23:36:11 UTC, Jonathan M Davis > > wrote: > > On Wednesday, September 12, 2018 5:17:44 PM MDT Nicholas Wilson > > > > via Digitalmars-d-announce wrote: > >> it seems that even if we were to want to have @implicit as an > >> opposite of C++'s explicit it would _always_ be present on > >> copy-constructors which means that @implicit for copy > >> constructors should itself be implicit. > > > > Oh, yes. The whole reason it's there is the fear that not > > requiring it would break code that currently declares a > > constructor that would be a copy constructor if we didn't > > require @implicit. So, if the DIP is accepted, you _could_ > > declare a constructor that should be a copy constructor but > > isn't, because it wasn't marked with @implicit (just like you > > can right now). If code breakage were not a concern, then > > there's pretty much no way that @implicit would be part of the > > DIP. Personally, I don't think that the risk of breakage is > > high enough for it to be worth requiring an attribute for what > > should be the normal behavior (especially when such a > > constructor almost certainly was intended to act like a copy > > constructor, albeit an explicit one), but Andrei doesn't agree. > > The bog-standard way of dealing with avoidable breakage with DIPs > is a -dip-10xx flag. In this case, if set, would prefer to call > copy constructors over blit + postblit. > > Also adding @implicit is a backwards incompatible change to a > codebase that wants to use it as it will cause it to fail on > older compilers. Even if one does : > > static if (__VERSION__ < 2085) // or whenever it gets implemented > enum implicit; > all over the place, I don't disagree, but it's not my decision. - Jonathan M Davis
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 23:55:05 UTC, Nicholas Wilson wrote: The bog-standard way of dealing with avoidable breakage with DIPs is a -dip-10xx flag. In this case, if set, would prefer to call copy constructors over blit + postblit. Also adding @implicit is a backwards incompatible change to a codebase that wants to use it as it will cause it to fail on older compilers. Even if one does:

static if (__VERSION__ < 2085) // or whenever it gets implemented
    enum implicit;

all over the place,

> It is illegal to declare a copy constructor for a struct that has a postblit defined and vice versa

Hmm, I suppose one could

static if (__VERSION__ < 2085)
    // use a postblit
else
    // use a copy ctor
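The version-gating idea from this exchange can be carried one step further, so a single struct compiles on both old and new compilers. This is only a sketch: it assumes the DIP's proposed @implicit syntax and the hypothetical 2.085 version cutoff mentioned in the thread.

```d
struct S
{
    int[] data;

    static if (__VERSION__ < 2085) // hypothetical cutoff from the thread
    {
        // Older compilers: postblit, called implicitly after the blit.
        this(this) { data = data.dup; }
    }
    else
    {
        // Newer compilers: copy constructor per the DIP.
        @implicit this(ref S other) { data = other.data.dup; }
    }
}
```

Because the DIP makes it illegal to declare both a postblit and a copy constructor on the same struct, only one branch may ever be compiled in, which is exactly what static if guarantees here.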
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 23:36:11 UTC, Jonathan M Davis wrote: On Wednesday, September 12, 2018 5:17:44 PM MDT Nicholas Wilson via Digitalmars-d-announce wrote: it seems that even if we were to want to have @implicit as an opposite of C++'s explicit it would _always_ be present on copy-constructors which means that @implicit for copy constructors should itself be implicit. Oh, yes. The whole reason it's there is the fear that not requiring it would break code that currently declares a constructor that would be a copy constructor if we didn't require @implicit. So, if the DIP is accepted, you _could_ declare a constructor that should be a copy constructor but isn't, because it wasn't marked with @implicit (just like you can right now). If code breakage were not a concern, then there's pretty much no way that @implicit would be part of the DIP. Personally, I don't think that the risk of breakage is high enough for it to be worth requiring an attribute for what should be the normal behavior (especially when such a constructor almost certainly was intended to act like a copy constructor, albeit an explicit one), but Andrei doesn't agree. The bog-standard way of dealing with avoidable breakage with DIPs is a -dip-10xx flag. In this case, if set, would prefer to call copy constructors over blit + postblit. Also adding @implicit is a backwards incompatible change to a codebase that wants to use it as it will cause it to fail on older compilers. Even if one does : static if (__VERSION__ < 2085) // or whenever it gets implemented enum implicit; all over the place, It is illegal to declare a copy constructor for a struct that has a postblit defined and vice versa:
Shared, ref, arrays, and reserve template instantiation
When I add the "shared" attribute to an array, I am no longer able to call reserve because the template won't instantiate: Error: template object.reserve cannot deduce function from argument types !()(shared(int[]), int), candidates are: /dlang/dmd/linux/bin64/../../src/druntime/import/object.d(4091): object.reserve(T)(ref T[] arr, size_t newcapacity) 1. Shared modifies the type, so the template does not match. Even casting does not seem to work however. Is there something about shared that makes it unable to be taken by reference? 2. Is there a workaround for me to be able to preallocate the array? Kind regards
Re: Copy Constructor DIP and implementation
On Wednesday, September 12, 2018 4:11:20 PM MDT Manu via Digitalmars-d- announce wrote: > On Wed, 12 Sep 2018 at 04:40, Dejan Lekic via Digitalmars-d-announce > > wrote: > > On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole > > > > wrote: > > > Here is a question (that I don't think has been asked) why not > > > @copy? > > > > > > @copy this(ref Foo other) { } > > > > > > It can be read as copy constructor, which would be excellent > > > for helping people learn what it is doing (spec lookup). > > > > > > Also can we really not come up with an alternative bit of code > > > than the tupleof to copying wholesale? E.g. super(other); > > > > I could not agree more. @implicit can mean many things, while > > @copy is much more specific... For what is worth I vote for @copy > > ! :) > > @implicit may be attributed to any constructor allowing it to be > invoked implicitly. It's the inverse of C++'s `explicit` keyword. > As such, @implicit is overwhelmingly useful in its own right. > > This will address my single biggest usability complaint of D as > compared to C++. @implicit is super awesome, and we must embrace it. Except that this DIP doesn't do anything of the sort. It specifically only affects copy constructors. Yes, in theory, we could later extend @implicit to do something like what you describe, but there are not currently any plans to do so. So, @implicit makes more sense than @copy in the sense that it's more likely to be forward-compatible (or at least, @implicit could be reused in a sensible manner, whereas @copy couldn't be; so, if we used @copy, we might also have to introduce @implicit later anyway), but either way, saying that @implicit has anything to do with adding implicit construction to D like C++ has is currently false. In fact, the DIP specifically makes it an error to use @implicit on anything other than a copy constructor. - Jonathan M Davis
Re: Copy Constructor DIP and implementation
On Wednesday, September 12, 2018 5:17:44 PM MDT Nicholas Wilson via Digitalmars-d-announce wrote: > it seems that even if we were to want to have @implicit as an > opposite of C++'s explicit it would _always_ be present on > copy-constructors which means that @implicit for copy > constructors should itself be implicit. Oh, yes. The whole reason it's there is the fear that not requiring it would break code that currently declares a constructor that would be a copy constructor if we didn't require @implicit. So, if the DIP is accepted, you _could_ declare a constructor that should be a copy constructor but isn't, because it wasn't marked with @implicit (just like you can right now). If code breakage were not a concern, then there's pretty much no way that @implicit would be part of the DIP. Personally, I don't think that the risk of breakage is high enough for it to be worth requiring an attribute for what should be the normal behavior (especially when such a constructor almost certainly was intended to act like a copy constructor, albeit an explicit one), but Andrei doesn't agree. > If at some point in the future we decide that we do want to add > @implicit construction, then we can make the copy constructor > always @implicit. Until that point I see no need for this, > because it is replacing postblit which is always called > implicitly. Except that the whole reason that @implicit is being added is to avoid the risk of breaking code, and that problem really isn't going to go away. So, it's hard to see how we would ever be able to remove it. Certainly, if we were willing to take the risks associated with it, there wouldn't be any reason to introduce @implicit in the first place (at least not for copy constructors). If it were my decision, I wouldn't introduce @implicit and would risk the code breakage (which I would expect to be pretty much non-existent much as it theoretically could happen), but it's not my decision. - Jonathan M Davis
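For readers following along, this is roughly what a copy constructor looks like under the DIP's proposed syntax — a sketch of the proposal as discussed in this thread, not final, shipped syntax:

```d
struct Array
{
    int[] data;

    // Per the DIP, @implicit marks this constructor as the copy
    // constructor, invoked implicitly on copies such as `auto b = a;`.
    // Without @implicit, the same signature would be an ordinary
    // constructor that must be called explicitly.
    @implicit this(ref Array other)
    {
        data = other.data.dup; // deep copy instead of the default blit
    }
}
```

The debate above is precisely about whether the @implicit marker should be required, given that a constructor with this signature almost certainly was intended as a copy constructor anyway.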
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 22:11:20 UTC, Manu wrote: On Wed, 12 Sep 2018 at 04:40, Dejan Lekic via Digitalmars-d-announce wrote: On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole wrote: > > Here is a question (that I don't think has been asked) why > not > @copy? > > @copy this(ref Foo other) { } > > It can be read as copy constructor, which would be excellent > for helping people learn what it is doing (spec lookup). > > Also can we really not come up with an alternative bit of > code than the tupleof to copying wholesale? E.g. > super(other); I could not agree more. @implicit can mean many things, while @copy is much more specific... For what it's worth I vote for @copy ! :) @implicit may be attributed to any constructor allowing it to be invoked implicitly. It's the inverse of C++'s `explicit` keyword. As such, @implicit is overwhelmingly useful in its own right. This will address my single biggest usability complaint of D as compared to C++. @implicit is super awesome, and we must embrace it. https://stackoverflow.com/a/11480555/1112970 I have no idea why you would want to declare a copy-constructor as explicit. It seems that even if we were to want to have @implicit as an opposite of C++'s explicit, it would _always_ be present on copy-constructors, which means that @implicit for copy constructors should itself be implicit. From what I understand of explicit in C++, if we were to have @implicit construction it would be used for things like

struct Foo
{
    @implicit this(int) {}
}

void useFoo(Foo f) { ... }

void main()
{
    useFoo(0); // Fine, implicitly construct
    auto tmp = Foo(0);
    useFoo(tmp);
}

If at some point in the future we decide that we do want to add @implicit construction, then we can make the copy constructor always @implicit. Until that point I see no need for this, because it is replacing postblit which is always called implicitly. @all please remember to leave feedback on the actual draft review on the DIP PR.
Re: Mobile is the new PC and AArch64 is the new x64
On 12 September 2018 at 10:09, Joakim via Digitalmars-d wrote: > On Tuesday, 11 September 2018 at 16:50:33 UTC, Dejan Lekic wrote: >> >> On Monday, 10 September 2018 at 13:43:46 UTC, Joakim wrote: >>> >>> LDC recently added a linux/AArch64 CI for both its main branches and >>> 64-bit ARM, ie AArch64, builds have been put out for both linux and Android. >>> It does not seem that many are paying attention to this sea change that is >>> going on with computing though, so let me lay out some evidence. ... >> >> >> I mostly agree with you, Joakim. I own a very nice (but now old) ODROID U2 >> (check the ODROID XU4 or C2!) so ARM support is important for me... >> >> Also, check this: >> https://www.hardkernel.com/main/products/prdt_info.php?g_code=G152875062626 >> >> HOWEVER, I think Iain is right - PPC64 and RISC-V are becoming more and >> more popular nowadays and may become more popular than ARM in the future but >> that future is unclear. > > > If and when they do, I'm sure D and other languages will be ported to them, > but right now they're most definitely not. > > I know because I actually looked for a RISC-V VPS on which to port ldc and > found nothing. Conversely, I was able to rent out an ARM Cubieboard2 > remotely four years back when I was first getting ldc going on ARM: > > https://forum.dlang.org/post/steigfwkywotxsypp...@forum.dlang.org > > I contacted one of the few companies putting out RISC-V dev boards, Sifive, > a couple weeks ago with the suggestion of making available a paid RISC-V > VPS, and one of their field engineers got back to me last week with a note > that they're looking into it. > > I think their model of having an open ISA with proprietary extensions will > inevitably win out for hardware, just as a similar model has basically won > already for software, but that doesn't mean that RISC-V will be the one to > do it. Someone else might execute that model better. 
POWER9 has been making some headway, for instance finally they have a sensible real type (IEEE Quadruple). Though the developers working on glibc support seem to be making a shambles of it, where they want to support both new and old long double types at the same time at run-time! It seems that no one thought about Fortran, Ada, or D when it came to long double support in the C runtime library *sigh*. For us, I think we can choose to ignore the old IBM 128-bit float, and so remove any supporting code from our library, focusing instead only on completing IEEE 128-bit float support (LDC, upstream your local patches before I start naming and shaming you). ARM seems to be taking RISC-V seriously at least (this site was taken down after a couple of days if I understand right: http://archive.fo/SkiH0). There is currently a lot of investment going into ARM64 in the server space right now, but the signals I'm getting from people working on those projects are that it just doesn't hold water, with one comparison being that a high-end ARM64 server is no better than a cheap laptop bought 5 years ago. RISC-V got accepted into gcc-7, and its runtime support made it into glibc 2.27; there's certainly a lot of effort being pushed for it. They have excellent simulator support on qemu; porting druntime only took two days. Patches for RISCV64 will come soon, probably with some de-duplication of large blocks. Iain.
Re: Copy Constructor DIP and implementation
On Wed, 12 Sep 2018 at 04:40, Dejan Lekic via Digitalmars-d-announce wrote: > > On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole > wrote: > > > > Here is a question (that I don't think has been asked) why not > > @copy? > > > > @copy this(ref Foo other) { } > > > > It can be read as copy constructor, which would be excellent > > for helping people learn what it is doing (spec lookup). > > > > Also can we really not come up with an alternative bit of code > > than the tupleof to copying wholesale? E.g. super(other); > > I could not agree more. @implicit can mean many things, while > @copy is much more specific... For what is worth I vote for @copy > ! :) @implicit may be attributed to any constructor allowing it to be invoked implicitly. It's the inverse of C++'s `explicit` keyword. As such, @implicit is overwhelmingly useful in its own right. This will address my single biggest usability complaint of D as compared to C++. @implicit is super awesome, and we must embrace it.
Re: Variadic template with template arguments in pairs
On Wednesday, 12 September 2018 at 15:12:16 UTC, Anonymouse wrote:

void doByPair(Args...)(Args args)
if (Args.length)
{
    foreach (pair; args.pairwise)
    {
        static assert(is(typeof(pair[0]) == string));
        static assert(isPointer!(typeof(pair[1])));
        assert(pair[1] !is null);
        string desc = pair[0];
        auto value = *pair[1];
        writefln("%s %s: %s", typeof(value).stringof, desc, value);
    }
}

The easiest way is probably to iterate using indices with an increment of 2, e.g.:

static foreach (i; iota(0, args.length, 2))
{
    static assert(is(typeof(args[i]) == string));
    static assert(isPointer!(typeof(args[i + 1])));
    // etc.
}

Another alternative is to write the function recursively (note the static if: with a plain runtime if, the zero-argument call would still need to compile, which it can't):

void doByPair(T, Rest...)(string desc, T* valuePtr, Rest rest)
{
    writefln("%s %s: %s", T.stringof, desc, *valuePtr);
    static if (rest.length)
        doByPair(rest);
}
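Putting the static foreach suggestion together into a complete program might look like this — a sketch under the thread's assumption that arguments come in (string, pointer) pairs; the double braces give each iteration its own scope so the locals don't collide:

```d
import std.range : iota;
import std.stdio : writefln;
import std.traits : isPointer;

void doByPair(Args...)(Args args)
if (Args.length && Args.length % 2 == 0)
{
    static foreach (i; iota(0, args.length, 2))
    {{
        // Each pair is (string description, pointer to value).
        static assert(is(typeof(args[i]) == string));
        static assert(isPointer!(typeof(args[i + 1])));

        auto desc  = args[i];
        auto value = *args[i + 1];
        writefln("%s %s: %s", typeof(value).stringof, desc, value);
    }}
}

void main()
{
    int x = 42;
    double y = 3.14;
    doByPair("answer", &x, "pi", &y);
}
```

Note that isPointer takes a type, so it must be applied to typeof(args[i + 1]) rather than to the argument itself.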
Re: extern(C++, ns) is wrong
On Wednesday, September 12, 2018 3:06:23 PM MDT Manu via Digitalmars-d wrote: > On Tue, 11 Sep 2018 at 20:59, Danni Coy via Digitalmars-d > > wrote: > > So my understanding is that the main issue with extern(C++,"ns") is > > functions that have different C++ name-spaces overriding each other in > > unexpected ways. How feasible is to simply disallow > > functions/variables/objects/... with the same name but a different "ns" > > being in the same module? > That's natural behaviour. You can't declare the same symbol twice in > the same scope. And that's really what's so nice about the idea behind extern(C++, "NS"). It's incredibly simple, because it follows _all_ of the normal D semantics. It's just that it then affects how the symbols are mangled so that they link up with the C++ symbols that they're bindings for. So, the whole thing is incredibly easy to reason about. The only downside that I'm aware of is that it makes it harder to put multiple namespaces in the same file, which matters if you're trying to put all of the symbols from a particular header file in a corresponding module. In every other respect, it's simpler - and incredibly easy to reason about. - Jonathan M Davis
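For illustration, the string-based form under discussion would look roughly like this — a hypothetical binding, with the namespace and function names made up:

```d
// C++ side (for reference):
//   namespace ns { int add(int a, int b); }

extern(C++, "ns")
{
    int add(int a, int b); // mangles as ns::add(int, int)
}

// Unlike extern(C++, ns), the string form introduces no new scope:
// `add` lives directly in this module and follows normal D lookup
// rules; only its C++ mangling changes.
```

This is what makes the semantics so easy to reason about: declaring another `add` with a different namespace string in the same module is simply a duplicate-symbol error, the same as declaring any symbol twice.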
Re: Mobile is the new PC and AArch64 is the new x64
On 12 September 2018 at 10:46, Dejan Lekic via Digitalmars-d wrote: > On Wednesday, 12 September 2018 at 08:09:46 UTC, Joakim wrote: >> >> I contacted one of the few companies putting out RISC-V dev boards, >> Sifive, a couple weeks ago with the suggestion of making available a paid >> RISC-V VPS, and one of their field engineers got back to me last week with a >> note that they're looking into it. >> >> I think their model of having an open ISA with proprietary extensions will >> inevitably win out for hardware, just as a similar model has basically won >> already for software, but that doesn't mean that RISC-V will be the one to >> do it. Someone else might execute that model better. > > > I could not agree more - look at Parallella! Their model is the same yet it > ultimately failed (unfortunately as I think Exynos is seriously good stuff)! > :( I only ever saw Parallella used in the context of a CPU where you offload computation onto, rather than something your system runs directly on-top of. For this, I assumed their target audience was mainly places that currently use expensive GPUs.
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 19:39:21 UTC, Jonathan M Davis wrote: However, Andrei does not believe that the risk is worth it and insists that we need a way to differentiate between the new copy constructors and any existing constructors that happen to look like them. So, there won't be any kind of inference here. If we were going to do that, we wouldn't be adding the attribute in the first place. Andrei has a bad track record when it comes to understanding what is best for the language. He is too arrogant, or too scared, to listen to anyone: this says it all... https://www.youtube.com/watch?v=KAWA1DuvCnQ As a jab, I would point out that the author of this proposal has also traded "nogc" for "@nogc": throw the fears into the sea, and fix the total nonsense that this language has become. Best wishes to the moderator.
Re: extern(C++, ns) is wrong
On Tue, 11 Sep 2018 at 20:59, Danni Coy via Digitalmars-d wrote: > > > > So my understanding is that the main issue with extern(C++,"ns") is functions > that have different C++ name-spaces overriding each other in unexpected ways. > How feasible is to simply disallow functions/variables/objects/... with the > same name but a different "ns" being in the same module? That's natural behaviour. You can't declare the same symbol twice in the same scope.
Re: D IDE
On Wed, 12 Sep 2018 at 04:45, Atila Neves via Digitalmars-d wrote: > > On Wednesday, 5 September 2018 at 17:34:17 UTC, ShadoLight wrote: > > On Wednesday, 5 September 2018 at 13:11:18 UTC, Jonathan M > > Davis wrote: > > > > It anyway appears that Vim/Emacs are often extended by plugins, > > and this will be the only way to have some project management > > features. > > I'm an Emacs user. I have never needed project management > features. If I want to edit a new file, I do that. > > You might be confusing "project management" with a build system. > I'm not sure, but then I just use a build system such as CMake. > > > I maintain that it is not practical trying to duplicate this in > > your editor of choice except if the amount of time you will > > save (from increased productivity) exceeds the time taken to do > > this. I maintain that for bug fixing/support in a big > > organization this will hardly ever be the case. > > True, but why would anyone want to duplicate it? The only reason > I can think of is if the team is using Visual Studio and the .sln > file is the agreed-upon build system. I know this happens in real > life, but it shouldn't. And even then... open VS, add a file, go > back to editing in Emacs/vim/whathaveyou. Or edit the XML > directly. Or use glob's in the XML.
Re: Copy Constructor DIP and implementation
On Tue, Sep 11, 2018 at 03:08:33PM +, RazvanN via Digitalmars-d-announce wrote: > I have finished writing the last details of the copy constructor > DIP[1] and also I have published the first implementation [2]. [...]

Here are some comments:

- The DIP should address what @implicit means when applied to a function that isn't a ctor. Either it should be made illegal, or the spec should clearly state that it's ignored.
  - I prefer the former option, because that minimizes the risk of conflicts in the future if we were to expand the scope of @implicit to other language constructs.
  - However, the latter option is safer in that if existing user code uses @implicit with a different meaning, the first option would cause code breakage and require the user to replace all uses of @implicit with something else.
- The DIP needs to address what a copy ctor might mean in a situation with unions, and/or whether it's legal to use unions with copy ctors. There are a few important cases to consider (there may be others):
  - Should copy ctors even be allowed in unions?
  - The copy ctor is defined in the union itself.
  - The union contains fields that have copy ctors. If two overlapping fields have copy ctors, which ctor will get called? Should this case be allowed, or made illegal?
  - How would type qualifiers (const, immutable, etc.) interact with unions of the above two cases?
  - If a struct contains a union, and another non-union field that has a copy ctor, how should the compiler define the generated copy ctor of the outer struct?
- If a struct declares only one copy ctor, say mutable -> mutable, then according to the DIP (under the section "copy constructor call vs. standard copying (memcpy)"), declaring an immutable variable of that type will default to standard copying instead.
  - This means if the struct needs explicit handling of copying in a copy ctor, the user must remember to write all overloads of the copy ctor, otherwise there will be cases where standard copying is silently employed, bypassing any user-defined semantics that may be necessary for correct copying.
  - Shouldn't there be a way for the compiler to automatically generate this boilerplate code instead? Should there be a way to optionally generate warnings in such cases, so that the user can be aware in case default copying isn't desired?
- What should happen if the user declares a copy ctor as a template function? Should the compiler automatically use that template to generate const/immutable/etc. copy ctors when needed?

T -- Change is inevitable, except from a vending machine.
Re: Copy Constructor DIP and implementation
On Wednesday, September 12, 2018 1:18:11 PM MDT Gary Willoughby via Digitalmars-d-announce wrote: > On Wednesday, 12 September 2018 at 16:40:45 UTC, Jonathan M Davis > > wrote: > > Ultimately, I expect that if we add any attribute for this, > > people coming to D are going to think that it's downright > > weird, but if we're going to have one, if we go with @implicit, > > we're future-proofing things a bit, and personally, thinking > > about it over time, I've found that it feels less like a hack > > than something like @copy would. If we had @copy, this would > > clearly forever be something that we added just because we had > > postblit constructors first, whereas @implicit at least _might_ > > be used for something more. > > That does actually make a lot of sense. Isn't there any way these > constructors could be inferred without the attribute? As I understand it, the entire point of the attribute is to avoid any breakage related to constructors that have a signature that matches what a copy constructor would be. If we didn't care about that, there would be no reason to have an attribute. Personally, I think that the concerns about such possible breakage are overblown, because it really wouldn't make sense to declare a constructor with the signature of a copy constructor unless you intended to use it as one (though right now, it would have to be called explicitly). And such code shouldn't break if "explicit" copy constructors then become "implicit" copy constructors automatically. However, Andrei does not believe that the risk is worth it and insists that we need a way to differentiate between the new copy constructors and any existing constructors that happen to look like them. So, there won't be any kind of inference here. If we were going to do that, we wouldn't be adding the attribute in the first place. - Jonathan M Davis
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 16:40:45 UTC, Jonathan M Davis wrote: Ultimately, I expect that if we add any attribute for this, people coming to D are going to think that it's downright weird, but if we're going to have one, if we go with @implicit, we're future-proofing things a bit, and personally, thinking about it over time, I've found that it feels less like a hack than something like @copy would. If we had @copy, this would clearly forever be something that we added just because we had postblit constructors first, whereas @implicit at least _might_ be used for something more. That does actually make a lot of sense. Isn't there any way these constructors could be inferred without the attribute?
Re: DlangUI and android
On Monday, 10 September 2018 at 09:19:52 UTC, Josphe Brigmo wrote: Is there an emulator that can run the apks? Android emulator does not work, I suppose, because it isn't java. Complains about a missing classes.dex file. I'd rather have an emulator version if possible for quicker dev. For APKs I usually use Bluestacks [0]. Works great for Unity builds at least. 0: https://www.bluestacks.com/
Re: Copy Constructor DIP and implementation
On Wednesday, September 12, 2018 10:04:57 AM MDT Elie Morisse via Digitalmars-d-announce wrote: > On Wednesday, 12 September 2018 at 11:39:21 UTC, Dejan Lekic > > wrote: > > On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole > > > > wrote: > >> Here is a question (that I don't think has been asked) why not > >> @copy? > >> > >> @copy this(ref Foo other) { } > >> > >> It can be read as copy constructor, which would be excellent > >> for helping people learn what it is doing (spec lookup). > >> > >> Also can we really not come up with an alternative bit of code > >> than the tupleof to copying wholesale? E.g. super(other); > > > > I could not agree more. @implicit can mean many things, while > > @copy is much more specific... For what it's worth, I vote for > > @copy ! :) > > @implicit makes sense if extending explicitly implicit calls to > all other constructors someday gets considered. Some people argued > for it and I agree with them that it'd be nice to have, for ex. > to make a custom string struct type usable without having to > smear the code with constructor calls. That's why some argued in a previous thread on the topic that we should decide what (if anything) we're going to do with adding implicit construction to the language before finalizing this DIP. If we added some sort of implicit constructor to the language, then @implicit would make some sense on copy constructors (it's still a bit weird IMHO, but it does make some sense when explained), and in that case, having used @copy could actually be a problem. If we're looking at this as an attribute that's purely going to be used on copy constructors, then @copy does make more sense, but it's also less flexible. @implicit could potentially be used for more, whereas @copy really couldn't - not when it literally means copy constructor.
Personally, I'd rather that we just risk the code breakage caused by not having an attribute for copy constructors than use either @implicit or @copy, since it really only risks breaking code using constructors that were intended to be copy constructors but had to be called explicitly, and that code would almost certainly be fine if copy constructors then became implicit, but Andrei seems unwilling to do that. But at least once you start arguing about that, the fact that the word "explicitly" fits so naturally as the description of how a copy constructor currently has to be called does make it make more sense that @implicit would be used. Ultimately, I expect that if we add any attribute for this, people coming to D are going to think that it's downright weird, but if we're going to have one, if we go with @implicit, we're future-proofing things a bit, and personally, thinking about it over time, I've found that it feels less like a hack than something like @copy would. If we had @copy, this would clearly forever be something that we added just because we had postblit constructors first, whereas @implicit at least _might_ be used for something more. It would still feel weird and hacky if it never was used for anything more, but at least we'd be future-proofing the language a bit, and @implicit does make _some_ sense after it's explained, even if very few people (if any) will initially think that it makes sense. - Jonathan M Davis
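For readers skimming the thread, here is the shape of what is being debated, as I understand it from the DIP discussion. This is a sketch only: the syntax was not final at the time, so the `@implicit` attribute shown is the DIP's proposal, not something compilers of that era accept.

```d
struct Postblit
{
    int[] data;

    // today's mechanism: the postblit runs after the struct has already
    // been blitted field-by-field, so it only ever sees the copy
    this(this)
    {
        data = data.dup;
    }
}

struct CopyCtor
{
    int[] data;

    // the DIP's mechanism: a constructor taking ref of the same type,
    // marked with an attribute (@implicit in the DIP, @copy in the
    // counter-proposal) so the compiler calls it implicitly on copies
    @implicit this(ref CopyCtor other)
    {
        data = other.data.dup;
    }
}
```

The breakage concern in the post is that a constructor like `this(ref CopyCtor other)` written before the DIP could only be called explicitly; the attribute exists to avoid silently turning such constructors into implicitly invoked ones.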
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 15:38:36 UTC, Joakim wrote: the world is right now? It's not IBM, Apple, Whoops, meant to write Intel here, but wrote Apple again. :D
[Issue 19242] Strange inferencing by combination of template and lambda
https://issues.dlang.org/show_bug.cgi?id=19242 --- Comment #2 from Basile B. --- Problem is also that the specs for this kind of stuff are void. Looks like only the last overload of the overload set is tried. --
[Issue 19242] Strange inferencing by combination of template and lambda
https://issues.dlang.org/show_bug.cgi?id=19242 Basile B. changed: What|Removed |Added CC||b2.t...@gmx.com Hardware|x86 |All OS|Windows |All --- Comment #1 from Basile B. --- I don't know if there's a bug or an enhancement possible in the compiler. Selecting the right overload is trivial: ``` foo((int a) => a + 1); ``` If this solution is satisfactory, you can set the Status to RESOLVED with the reason INVALID. --
[Issue 19242] New: Strange inferencing by combination of template and lambda
https://issues.dlang.org/show_bug.cgi?id=19242 Issue ID: 19242 Summary: Strange inferencing by combination of template and lambda Product: D Version: D2 Hardware: x86 OS: Windows Status: NEW Severity: normal Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: zan77...@nifty.com This code doesn't work: -- void foo(Ret)(Ret delegate(int) dg) { } // Even though alias func is not specified, // this is selected unnaturally. // Also, if this function is commented out, // the above function(foo(Ret)) is inferred. void foo(alias func)(void delegate(string) dg) { } void main() { foo(a => a + 1); } -- dmd -run main.d -- main.d(11): Error: incompatible types for `(a) + (1)`: `string` and `int` --
Re: Copy Constructor DIP and implementation
On Wednesday, 12 September 2018 at 11:39:21 UTC, Dejan Lekic wrote: On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole wrote: Here is a question (that I don't think has been asked) why not @copy? @copy this(ref Foo other) { } It can be read as copy constructor, which would be excellent for helping people learn what it is doing (spec lookup). Also can we really not come up with an alternative bit of code than the tupleof to copying wholesale? E.g. super(other); I could not agree more. @implicit can mean many things, while @copy is much more specific... For what it's worth, I vote for @copy ! :) @implicit makes sense if extending explicitly implicit calls to all other constructors someday gets considered. Some people argued for it and I agree with them that it'd be nice to have, for ex. to make a custom string struct type usable without having to smear the code with constructor calls.
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 06:41:38 UTC, Gambler wrote: On 9/10/2018 9:43 AM, Joakim wrote: Yes, I know, these devices won't replace your quad-core Xeon workstation with 32-64 GBs of RAM anytime soon, but most people don't need anywhere near that level of compute. That's why PC sales keep dropping while mobile sales are now 6-7X that per year: I'm all for supporting modern open CPU architectures. At the same time, I fear that the specific trend you're describing here (people ditching PCs for cellphones/tablets) is effectively a reversal of the PC revolution. For the last 30+ years people benefited from "trickle down computing". They had access to PCs that were equivalent to cutting edge servers of 6-7 years prior. They had the ability to choose their operating system, expand and upgrade their hardware, and install any software they wanted. All of this is breaking down right now. Yes and no, it is true that that is the way tech _used_ to diffuse. However, do you know what the largest tech company in the world is right now? It's not IBM, Apple, HP, or Microsoft, ie none of the server or PC companies. It's Apple, which doesn't sell into the server or traditional enterprise markets almost at all and only has 15-20% unit share in the mobile market. In other words, consumer tech markets are _much_ larger than the server/enterprise markets that used to lead tech R&D, which means consumer tech like mobile is what leads the way now. As for choosing your own OS, that's still possible, but as always, it can be tough to get drivers for your hardware: https://together.jolla.com/question/136143/wiki-available-devices-running-sailfish-os/ And if you simply want to tinker with the Android OS on your device, there are many ways to do that: https://www.xda-developers.com/how-to-install-custom-rom-android/ No need to expand and upgrade your hardware when prices keep dropping in this Darwinian market.
There's now a $500 phone with a faster chip than the one I got just 7 months back for $700: https://m.newegg.com/products/N82E16875220078 As for installing any software you want, Android allows it: it's how I debug the apps I build on my phone or tablet. The iPhone doesn't, but it's a minority of the mobile market. Intel got lazy without competition and high-end CPU architectures stagnated. All the cutting-edge computing is done on NVidia cards today. It requires hundreds of gigabytes of RAM, tens of terabytes of data and usage of specialized computing libraries. I very much doubt this will "trickle down" to mobile in foreseeable future. Heck, most developer laptops today have no CUDA capabilities to speak of. I question the need for such "cutting-edge computing" in the first place, but regardless, it has already moved down to mobile and other edge devices: https://arstechnica.com/gadgets/2017/10/the-pixel-2-contains-a-custom-google-soc-the-pixel-visual-core/ https://www.theverge.com/2018/7/26/17616140/google-edge-tpu-on-device-ai-machine-learning-devkit Moreover, mobile devices are locked down by default and it's no trivial task to break out of those walled gardens. IIRC, Apple has an official policy of not allowing programming tools in their app store. Alan Kay had to personally petition Steve Jobs to allow Scratch to be distributed, so kids could learn programming. I believe the general policy is still in place. They have their own app for that now: https://www.apple.com/swift/playgrounds/ Android is better, but it's still a horror to do real work on, compared to any PC OS. Fine, you rooted it, installed some compilers and so on. How will you share your software with fellow Android users? You seem to have missed all the posts I've made here before about native Android support for ldc: :) _I have never rooted any of my Android devices_. 
Compiling D code on most any Android device is as simple as installing an app from the official Play Store and typing a single command, `apt install ldc`: https://wiki.dlang.org/Build_D_for_Android The instructions there even show you how to package up an Android GUI app, an apk, on Android itself, by using some other packages available in that Android app. In essence, we are seeing the rapid widening of two digital divides. The first one is between users and average developers. The second one is between average developers and researchers at companies like Google. I very much doubt that we will see an equivalent of today's high-end machine learning server on user's desk, let alone in anyone's pocket, within 7 years. I disagree on both counts. First off, people were running supercomputers and UNIX workstations while you were piddling along on your PC decades ago. That changed nothing about what you were able to learn and accomplish on your PC. In fact, you were probably much better off than they were, as the PC skills you picked up were likely in much more demand than their supercomputing
Re: Is it's correct to say that ALL types that can grow are place on heap?
On 13/09/2018 3:22 AM, Timoses wrote: On Wednesday, 12 September 2018 at 14:46:22 UTC, rikki cattermole wrote: On 13/09/2018 2:34 AM, drug wrote: 12.09.2018 15:14, Timoses wrote: On Tuesday, 11 September 2018 at 12:07:14 UTC, drug wrote: If data size is less than or equal to the total size of available registers (that can be used to pass values) then passing by value is more efficient. Passing data with size less than register size by reference isn't efficient because you pass a pointer (that has register size) and access memory using it. Thank you! So if I pass by reference it will ALWAYS use the address in memory to fetch the data, whereas passing it by value enables the (compiler?..) to use the register which has already loaded the data from memory (stack for example)? Honestly, I'm not an expert in this domain, but I think so. Recently used areas of the stack will be available in the cache in most cases. The issue with passing by reference is it increases the indirection (number of pointers) that it must go through to get to the raw bytes. This is why classes are bad but structs are good. Even if the struct is allocated on the heap and you're accessing it via a pointer. This sounds like classes should never be used.. I don't recall right now what issues I'm usually encountering with structs that make me switch to classes (in D). Nah, this is cycle counting aka don't worry about it if you're not doing anything super high performance.
Re: Is it's correct to say that ALL types that can grow are place on heap?
On Wednesday, 12 September 2018 at 14:46:22 UTC, rikki cattermole wrote: On 13/09/2018 2:34 AM, drug wrote: 12.09.2018 15:14, Timoses wrote: On Tuesday, 11 September 2018 at 12:07:14 UTC, drug wrote: If data size is less than or equal to the total size of available registers (that can be used to pass values) then passing by value is more efficient. Passing data with size less than register size by reference isn't efficient because you pass a pointer (that has register size) and access memory using it. Thank you! So if I pass by reference it will ALWAYS use the address in memory to fetch the data, whereas passing it by value enables the (compiler?..) to use the register which has already loaded the data from memory (stack for example)? Honestly, I'm not an expert in this domain, but I think so. Recently used areas of the stack will be available in the cache in most cases. The issue with passing by reference is it increases the indirection (number of pointers) that it must go through to get to the raw bytes. This is why classes are bad but structs are good. Even if the struct is allocated on the heap and you're accessing it via a pointer. This sounds like classes should never be used.. I don't recall right now what issues I'm usually encountering with structs that make me switch to classes (in D). So passing by reference is generally only applicable (logical) to structs and non-reference types + only makes sense when the function being called is supposed to change the referenced value without returning it. Except, as Steven pointed out in his post, when dealing with large lvalue structs. This all seems quite complicated to "get right" when writing code. I'm sure there are compiler optimizations run on this? Or is that not possible due to the nature of the difference between ref and value passing. Anyhow, thanks for the answers! I bet it's possible to write books on this topic.. Or just mention ones that were already written :~D
Re: DMD32 compiling gtkd out of memory on 32bit Windows 7 machine
On Wednesday, 12 September 2018 at 06:06:15 UTC, dangbinghoo wrote: hi, When compiling gtkd using dub, dmd32 reported "Out of memory" and exited. OS: Windows 7 32bit. RAM: 3GB DMD version: v2.082.0 32bit. No VC or Windows SDK installed; when setting up dmd, I selected install vc2010 and use mingw lib. Try `dub --build-mode=singleFile`? I believe this will compile each file separately and then link them together (instead of compiling everything at once, which is what dub does by default, afaik). There's been another topic on memory consumption of compilation [1]. [1]: https://forum.dlang.org/post/ehyfilopozdndjdah...@forum.dlang.org
Re: Pass 'this' as reference
On 9/12/18 8:01 AM, Jan wrote: I'm using D not for that long and lately I have encountered an issue. I have class 'Foo' with a constructor using this signature: `this (ref Bar original)` In the 'Bar' class itself I want to create an instance of 'Foo' using 'this' as parameter. Something in the way of: `Foo foo = new Foo(ref this);` I couldn't find anything interesting on the internet to help me. Could anyone help me? Many thanks in advance! You don't have to specify ref when calling. This should work: auto foo = new Foo(this); Though almost certainly you are misunderstanding classes -- they are references anyway. I don't know why you would want to accept a class via ref unless you were actually going to reassign the reference. I suggest that your constructor should not accept Bar via ref. -Steve
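A minimal sketch of what Steve is describing (class names from the question; the `parent` field is my own, added just for illustration):

```d
class Foo
{
    Bar parent;

    // take Bar without ref: classes are reference types, so `parent` and
    // the caller's reference end up pointing at the same object anyway
    this(Bar original)
    {
        parent = original;
    }
}

class Bar
{
    Foo foo;

    this()
    {
        // no `ref` at the call site either
        foo = new Foo(this);
    }
}
```

`ref Bar` in a parameter list would only be needed if the constructor were meant to reassign the caller's reference itself, which a constructor almost never should.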
Variadic template with template arguments in pairs
I'm trying to create a variadic template function that takes pairs of arguments. Sort of like getopt, I want to pass any number of pairs of a string and some pointer. Or any size chunk larger than one. Something like the following, assuming the existence of a hypothetical template pairwise: --- void doByPair(Args...)(Args args) if (Args.length) { foreach (pair; args.pairwise) { static assert(is(typeof(pair[0]) == string)); static assert(isPointer!(typeof(pair[1]))); assert(pair[1] !is null); string desc = pair[0]; auto value = *pair[1]; writefln("%s %s: %s", typeof(value).stringof, desc, value); } } bool b1 = true; bool b2 = false; string s = "some string"; int i = 42; doByPair("foo", &b1, "bar", &b2, "baz", &s, "qux", &i); --- Should output: bool foo: true bool bar: false string baz: some string int qux: 42 What is the right way to go about doing this?
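Lacking a `pairwise` template, one way to get the effect described is to index the argument tuple two at a time with `static foreach` (a sketch of one possible answer, not necessarily the idiomatic one):

```d
import std.stdio : writefln;
import std.traits : isPointer;

void doByPair(Args...)(Args args)
    if (Args.length && Args.length % 2 == 0)
{
    // walk the compile-time argument list in (string, pointer) pairs
    static foreach (i; 0 .. Args.length / 2)
    {{
        static assert(is(Args[2 * i] == string));
        static assert(isPointer!(Args[2 * i + 1]));
        assert(args[2 * i + 1] !is null);
        auto desc = args[2 * i];
        auto value = *args[2 * i + 1];
        writefln("%s %s: %s", typeof(value).stringof, desc, value);
    }}
}
```

The doubled braces give each iteration its own scope, so the `desc` and `value` locals from one pair don't collide with the next.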
Re: Pass 'this' as reference
On Wednesday, 12 September 2018 at 15:01:36 UTC, Jan wrote: I'm using D not for that long and lately I have encountered an issue. I have class 'Foo' with a constructor using this signature: `this (ref Bar original)` classes and the ref keyword should very rarely be used together in D. classes are already refs without anything, so adding it makes it a double ref, which breaks more than it helps. If you just get rid of the `ref`s in your code it will probably work.
Pass 'this' as reference
I'm using D not for that long and lately I have encountered an issue. I have class 'Foo' with a constructor using this signature: `this (ref Bar original)` In the 'Bar' class itself I want to create an instance of 'Foo' using 'this' as parameter. Something in the way of: `Foo foo = new Foo(ref this);` I couldn't find anything interesting on the internet to help me. Could anyone help me? Many thanks in advance!
Re: Is it's correct to say that ALL types that can grow are place on heap?
On 13/09/2018 2:34 AM, drug wrote: 12.09.2018 15:14, Timoses wrote: On Tuesday, 11 September 2018 at 12:07:14 UTC, drug wrote: If data size is less than or equal to the total size of available registers (that can be used to pass values) then passing by value is more efficient. Passing data with size less than register size by reference isn't efficient because you pass a pointer (that has register size) and access memory using it. Thank you! So if I pass by reference it will ALWAYS use the address in memory to fetch the data, whereas passing it by value enables the (compiler?..) to use the register which has already loaded the data from memory (stack for example)? Honestly, I'm not an expert in this domain, but I think so. Recently used areas of the stack will be available in the cache in most cases. The issue with passing by reference is it increases the indirection (number of pointers) that it must go through to get to the raw bytes. This is why classes are bad but structs are good. Even if the struct is allocated on the heap and you're accessing it via a pointer.
Re: Is it's correct to say that ALL types that can grow are place on heap?
12.09.2018 15:14, Timoses wrote: On Tuesday, 11 September 2018 at 12:07:14 UTC, drug wrote: If data size is less than or equal to the total size of available registers (that can be used to pass values) then passing by value is more efficient. Passing data with size less than register size by reference isn't efficient because you pass a pointer (that has register size) and access memory using it. Thank you! So if I pass by reference it will ALWAYS use the address in memory to fetch the data, whereas passing it by value enables the (compiler?..) to use the register which has already loaded the data from memory (stack for example)? Honestly, I'm not an expert in this domain, but I think so.
Re: x64 Privileged instruction
On Wednesday, 12 September 2018 at 10:42:08 UTC, Josphe Brigmo wrote: x64 gives Privileged instruction but x86 gives First-chance exception: std.file.FileException "C:\": The filename, directory name, or volume label syntax is incorrect. at std\file.d(4573) which is much more informative... seems like a bug to me. More context needed. What code produces this behavior?
Re: rund users welcome
On Wednesday, 12 September 2018 at 10:06:29 UTC, aliak wrote: On Wednesday, 12 September 2018 at 01:11:59 UTC, Jonathan Marler wrote: On Tuesday, 11 September 2018 at 19:55:33 UTC, Andre Pany wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast, making my workflow much "snappier". It also introduces a new feature called "source directives" where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variables, etc. Feel free to use it, test it, provide feedback, contribute. https://github.com/marler8997/rund It would be great if you could create a pull request for rdmd to add the missing -i enhancement. Kind regards Andre I did :) https://github.com/dlang/tools/pull/292 Made me sad to read that and related PRs ... sigh :( Yeah I loved working on D. But some of the people made it very difficult. So I've switched focus to other projects that use D rather than contributing to D itself. But anyway! rund seems awesome! Thanks for it :) some questions: Are these all the compiler directives that are supported (was not sure if they were an example or some of them or all of them from the readme): #!/usr/bin/env rund //!importPath //!version //!library //!importFilenamePath //!env = //!noConfigFile //!betterC I love the concept of source files specifying the compiler flags they need to build. Yeah they have proven to be very useful. I have many tools written in D and this feature allows the main source file to be a "self-contained" program. The source itself is declaring the libraries it needs, the environment, etc. And the answer is Yes, all those options are supported along with a couple I recently added `//!debug` and `//!debugSymbols`.
I anticipate more will be added in the future (see https://github.com/marler8997/rund/blob/master/src/rund/directives.d) To show how powerful they are, I include an example in the repository that can actually build DMD on the fly (assuming the c++ libraries are built beforehand). https://github.com/marler8997/rund/blob/master/test/dmdwrapper.d #!/usr/bin/env rund //!env CC=c++ //!version MARS //!importPath ../../dmd/src //!importFilenamePath ../../dmd/res //!importFilenamePath ../../dmd/generated/linux/release/64 //!library ../../dmd/generated/linux/release/64/newdelete.o //!library ../../dmd/generated/linux/release/64/backend.a //!library ../../dmd/generated/linux/release/64/lexer.a /* This wrapper can be used to compile/run dmd (with some caveats). * You need to have the dmd repository cloned to "../../dmd" (relative to this file). * You need to have built the C libraries. You can build these libraries by building dmd. Not sure why, but through trial and error I determined that this is the minimum set of modules that I needed to import in order to successfully include all of the symbols to compile/link dmd. */ import dmd.eh; import dmd.dmsc; import dmd.toobj; import dmd.iasm; Thanks for the interest. Feel free to post any requested features or issues on github.
Re: More fun with autodecoding
On Wednesday, 12 September 2018 at 12:45:15 UTC, Nicholas Wilson wrote: Overloads: [snip] Good point.
Re: Mobile is the new PC and AArch64 is the new x64
On Monday, 10 September 2018 at 14:00:43 UTC, Iain Buclaw wrote: PPC64 Why is superscalar a thing at all? Didn't Itanium prove to be difficult to optimize for?
Re: Copy Constructor DIP and implementation
On Tuesday, 11 September 2018 at 23:56:56 UTC, Walter Bright wrote: On 9/11/2018 8:08 AM, RazvanN wrote: [1] https://github.com/dlang/DIPs/pull/129 [2] https://github.com/dlang/dmd/pull/8688 Thank you, RazvanN! I very much agree!
Re: More fun with autodecoding
On Tuesday, 11 September 2018 at 14:58:21 UTC, jmh530 wrote: Is there any reason why this is not sufficient? [1] https://run.dlang.io/is/lu6nQ0 Overloads: https://run.dlang.io/is/m5HGOh Having the static asserts in the constraint affects the templates' viability as candidates; having them in the function body/runtime contract does not, so you'll end up with onlineapp.d(17): Error: onlineapp.foo called with argument types (float) matches both: onlineapp.d(1): onlineapp.foo!float.foo(float x) and: onlineapp.d(7): onlineapp.foo!float.foo(float x) despite the fact that only one of them is viable, whereas bar is fine.
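A reconstruction of the difference being described (my own minimal example, not the code behind the run.dlang.io links): a static assert in the body only fires after overload selection, so both candidates still match, whereas a constraint removes the non-matching candidate before selection.

```d
import std.traits : isFloatingPoint, isIntegral;

// static asserts in the body: both templates remain viable candidates for
// foo(1.0f), so the call is ambiguous even though only one body compiles
void foo(T)(T x) { static assert(isFloatingPoint!T); }
void foo(T)(T x) { static assert(isIntegral!T); }

// constraints: the integral overload is ruled out before selection,
// so bar(1.0f) resolves cleanly
void bar(T)(T x) if (isFloatingPoint!T) { }
void bar(T)(T x) if (isIntegral!T) { }
```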
Re: silly is released - new test runner for the D programming language
On Wednesday, 12 September 2018 at 04:02:14 UTC, Soulsbane wrote: On Sunday, 12 August 2018 at 15:07:04 UTC, Anton Fediushin wrote: Hello, I'm glad to announce that silly v0.0.1 is released. Silly is a brand-new test runner with simplicity in mind. It's developed to be as simple as possible and contain no useless features. Another important goal is to provide flexible tool which can be easily integrated into existing environments. In my local version I've modified the test name to be colorized. Could you add this feature? it really helps the readability in my opinion. I doubt that will make it more readable, in fact it'd annoy me. Another thing to consider is the fact that colours in terminal are highly customizable so if something works and most importantly looks good for somebody it might look terrible and be unreadable on different terminal preferences. This is something I experienced with trial which colours test names white and vibe-core's logger which used different shades of grey for different log levels. Both of these as you can imagine are unreadable on black-on-white terminals. Either way, I really love silly! Thanks a lot! You are welcome!
Re: Is it's correct to say that ALL types that can grow are place on heap?
On 9/11/18 3:11 AM, Timoses wrote: Aww, I really would love some insights into function parameter passing. Why is it said that passing by value can be more efficient at times? Since it is also said that passing large structs by value can be expensive, why then would it not be cheaper to ALWAYS pass everything by reference? What mechanism is behind the scene that follows one to reason that sometimes passing by value is less expensive? So consider that accessing a struct from the function is cheaper when it is passed by value -- you have one offset from the stack pointer, and that's it. Vs. going through the stack pointer to get the reference, and then dereferencing that. In addition, passing a large struct by value can be as cheap or even cheaper if you can construct the value right where it is going to be passed. In other words, you don't need to make *any* copies. This can be true for rvalues that are passed by value, but not lvalues. So in addition to register passing, there are other benefits to consider. -Steve
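As a rough illustration of both of Steve's points (a sketch; the actual code generated depends on the ABI and optimization level):

```d
struct Big
{
    int[16] data;
}

// by value: `b` lives at a known offset from the stack pointer, so the
// callee can address b.data[0] directly
int firstByValue(Big b) { return b.data[0]; }

// by ref: the callee receives a pointer and must load through it first,
// one extra level of indirection
int firstByRef(ref Big b) { return b.data[0]; }

void main()
{
    // an rvalue passed by value can be constructed directly in the
    // argument slot, so no copy is made at all
    auto x = firstByValue(Big());
}
```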
Re: More fun with autodecoding
On 9/11/18 7:58 AM, jmh530 wrote: Is there any reason why this is not sufficient? [1] https://run.dlang.io/is/lu6nQ0 That's OK if you are the only one defining S. But what if float is handled elsewhere? -Steve
Re: dlang download stat should be updated
On Tuesday, 11 September 2018 at 07:25:22 UTC, Suliman wrote: On Sunday, 9 September 2018 at 09:05:33 UTC, Suliman wrote: Last update was long time ago http://erdani.com/d/downloads.daily.png UP +1
Re: Is it's correct to say that ALL types that can grow are place on heap?
On Tuesday, 11 September 2018 at 12:07:14 UTC, drug wrote: If data size is less than or equal to the total size of available registers (that can be used to pass values) then passing by value is more efficient. Passing data with size less than register size by reference isn't efficient because you pass a pointer (that has register size) and access memory using it. Thank you! So if I pass by reference it will ALWAYS use the address in memory to fetch the data, whereas passing it by value enables the (compiler?..) to use the register which has already loaded the data from memory (stack for example)?
Re: Canadian companies using D?
On Wednesday, 12 September 2018 at 01:14:58 UTC, Ryan Barker wrote: I wonder if you guys are aware of any Canadian companies using D, I want to do some research about what they're using D for and eventually follow some of them. I was browsing https://dlang.org/orgs-using-d.html but I didn't find any there yet. I'm not aware of any but would be interested to hear otherwise, I live in Toronto and use D for Tilix (https://gnunn1.github.io/tilix-web) just as a data point.
Re: [OT] My State is Illegally Preventing Me From Voting In The Upcoming 2018 US Elections
On Sunday, 9 September 2018 at 14:27:45 UTC, Abdulhaq wrote: If you're serious then why not request an absentee ballot? Just out of curiosity, how does posting this info here help you in any way? I was kind of wondering this too, and in the worst case, a technical error like the one described is not really his state illegally preventing him from voting. It could be a simple mistake. Contacting his state could probably give him whatever information he needs to know, etc.
Re: D IDE
On Wednesday, 5 September 2018 at 17:34:17 UTC, ShadoLight wrote: On Wednesday, 5 September 2018 at 13:11:18 UTC, Jonathan M Davis wrote: It anyway appears that Vim/Emacs are often extended by plugins, and this will be the only way to have some project manage features. I'm an Emacs user. I have never needed project management features. If I want to edit a new file, I do that. You might be confusing "project management" with a build system. I'm not sure, but then I just use a build system such as CMake. I maintain that it is not practical trying to duplicate this in your editor of choice except if the amount of time you will save (from increased productivity) exceed the time taken to do this. I maintain that for bug fixing/support in a big organization this will hardly ever be the case. True, but why would anyone want to duplicate it? The only reason I can think of is if the team is using Visual Studio and the .sln file is the agreed-upon build system. I know this happens in real life, but it shouldn't. And even then... open VS, add a file, go back to editing in Emacs/vim/whathaveyou. Or edit the XML directly. But even if you avoid this step and can build/run/test from the command-line it may not be optimal in certain debugging scenarios. See next point. You don't have to build/run/test from the command-line, you can do it in-editor. Right, but depending on your type of debugging there is some things which just make more sense to do from right inside the debugger. If you hit a data value break-point or such on an attached debugger you can just double-click the line in the stack trace to go to the appropriate line in the IDE editor. No need to switch tasks to Vim/Emacs, do a go-to or whatever to get to the same place. The type of debugging I'm talking about is not your 'single step' variety. No need to switch tasks to Emacs either, just run the debugger in Emacs and you can double-click if you want to. 
Although, if you're an Emacs user you're probably not going to want to use the mouse. I sometimes wonder if the Vim/Emacs 'aficionados' spend so much time mastering their editors (which by all accounts have a steep learning curve), that they forgot that IDE development did not stagnate after they left! It's not a question of forgetting what IDEs can do. It's a question of either not needing those features or having them in the editor. I've used Visual Studio, Eclipse, IDEA, etc. I just don't like them. This is what I need from an IDE: autocompletion, go to definition, on-the-fly syntax checking. I have all of that in Emacs. Again, it depends on what you mean by 'editing'. I think he means... editing. Cutting, pasting, replacing, that kind of thing. If you are referring to coding where you are developing from scratch, then sure - I agree. That's not editing, that's writing. In that case, notepad is enough, or cat. There's a reason why vim's normal mode is about editing, not writing (inserting). But the whole point of my post was to point out that this is not the only use-case for some of us. And in some of these other use-cases IDEs are actually superior to editors. That's your opinion, you're entitled to it and I'm not going to try and change your mind. Mine is that no IDE gets close to the power of a good editor. In your favourite IDE, can you set up any key combination you want to: 1. Jump to the end of the current line 2. Check to see if there's a semicolon there 3. If not, add one 4. Open a new line beneath No? I don't learn how to use Emacs, Emacs learns *me*. And that was just a simple example. For another example IDEs are also in some ways a 'standard' inside big organizations in a way that any editor cannot be - the lowest barrier of entry to get new members up to speed in a team. And for some languages (Java/C#) you give up a lot by not developing inside an IDE.
In fact, for Java and C#, the appeal/power of the languages is in many ways directly related to the IDE! Now throw in mixing C# with C++ (or even D) development... I'm sure you get my drift! Most of what I'd need an IDE for in Java (I'd probably use IDEA if I were to write Java) I don't need for D.
Re: Copy Constructor DIP and implementation
On Tuesday, 11 September 2018 at 15:22:55 UTC, rikki cattermole wrote: Here is a question (that I don't think has been asked): why not @copy? @copy this(ref Foo other) { } It can be read as "copy constructor", which would be excellent for helping people learn what it is doing (spec lookup). Also, can we really not come up with an alternative bit of code to the tupleof for copying wholesale? E.g. super(other); I could not agree more. @implicit can mean many things, while @copy is much more specific... For what it's worth, I vote for @copy! :)
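For readers following along, the syntax being debated looks roughly like this. This is a sketch of the *proposed* feature only: it uses the DIP's `@implicit` spelling alongside the thread's suggested `@copy` alternative, neither of which compiled with dmd at the time of this discussion, and the struct `Foo` with its field is hypothetical:

```d
struct Foo
{
    int[] data;

    // Proposed copy constructor per the DIP draft (spelled @implicit there);
    // the thread argues @copy would read more naturally.
    @implicit this(ref Foo other)
    {
        // Replace the default field-by-field blit with a deep copy.
        this.data = other.data.dup;
    }
}
```

The attribute only marks the constructor so the compiler may invoke it implicitly when a copy is made; the body itself is ordinary constructor code.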
[Issue 9981] Implement lazy ref arguments
https://issues.dlang.org/show_bug.cgi?id=9981 John Colvin changed: What|Removed|Added CC||john.loughran.colvin@gmail.com --- Comment #2 from John Colvin --- Is there a reason for this being disallowed? Seeing as lazy is just pass-by-delegate, why not allow pass-by-delegate-with-ref-return? --
x64 Privileged instruction
x64 gives Privileged instruction but x86 gives First-chance exception: std.file.FileException "C:\": The filename, directory name, or volume label syntax is incorrect. at std\file.d(4573) which is much more informative... seems like a bug to me.
Re: rund users welcome
On Wednesday, 12 September 2018 at 01:11:59 UTC, Jonathan Marler wrote: On Tuesday, 11 September 2018 at 19:55:33 UTC, Andre Pany wrote: On Saturday, 8 September 2018 at 04:24:20 UTC, Jonathan Marler wrote: I've rewritten rdmd into a new tool called "rund" and have been using it for about 4 months. It runs about twice as fast, making my workflow much "snappier". It also introduces a new feature called "source directives", where you can add special comments to the beginning of your D code to set various compiler options like import paths, versions, environment variables, etc. Feel free to use it, test it, provide feedback, contribute. https://github.com/marler8997/rund It would be great if you could create a pull request for rdmd to add the missing -i enhancement. Kind regards Andre I did :) https://github.com/dlang/tools/pull/292 Made me sad to read that and related PRs ... sigh :( But anyway! rund seems awesome! Thanks for it :) Some questions: are these all the compiler directives that are supported (I was not sure from the readme whether this is an example, a subset, or the full list): #!/usr/bin/env rund //!importPath //!version //!library //!importFilenamePath //!env = //!noConfigFile //!betterC I love the concept of source files specifying the compiler flags they need to build. Cheers, - Ali
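To illustrate the feature being asked about, a hypothetical script using a few of the directives quoted above might look like the following. This is a sketch only — the directive names are taken verbatim from the list in the post, but the exact argument syntax (and the full set of supported directives) should be checked against the rund readme:

```d
#!/usr/bin/env rund
//!importPath src
//!version DemoBuild
//!env MY_SETTING=1

import std.stdio;

void main()
{
    version (DemoBuild)
        writeln("built with rund source directives");
}
```

Running the file through rund would then compile and execute it with those options applied, with no separate build configuration needed.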
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 08:09:46 UTC, Joakim wrote: On Tuesday, 11 September 2018 at 08:34:31 UTC, Chris wrote: [...] Yes, something like that should be done, but I won't be doing much with dub till next year. If anyone else is interested in doing it earlier, feel free. [...] On Wednesday, 12 September 2018 at 08:09:46 UTC, Joakim wrote: From one of the articles you linked: "The Apple Swift compiler has had the ability to compile code for the Android platform for a few years now, but it hasn’t made many friends in the developer community owing to its complexity. Our toolchain was designed to solve this problem by taking the complexity and headaches out of the process, so you can focus on building great apps for your users." If Android devs have been reluctant to touch Swift owing to its complexity (not the language, the toolchain), do you think they would touch D?
[Issue 18771] Identical overload sets in different modules have different identities
https://issues.dlang.org/show_bug.cgi?id=18771 --- Comment #2 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/dlang/dmd https://github.com/dlang/dmd/commit/a04db92fab82abec2c645f6832e24fba1c1c Fix Issue 18771 - Identical overload sets in different modules have different identities https://github.com/dlang/dmd/commit/44ff6c62b9c8ec8b909303b3a81541aef8ac282f Merge pull request #8675 from RazvanN7/Issue_18771 Fix Issue 18771 - Identical overload sets in different modules have different identities --
[Issue 18771] Identical overload sets in different modules have different identities
https://issues.dlang.org/show_bug.cgi?id=18771 github-bugzi...@puremagic.com changed: What|Removed |Added Status|NEW |RESOLVED Resolution|--- |FIXED --
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 08:09:46 UTC, Joakim wrote: I don't think there's a "dedicated team" for any platform that D runs on, so we don't have "first class support" for any platform then. But ARM (Android/iOS) has always been treated worse than a stepchild by D devs. No interest whatsoever, leave it to the LDC guys... D is largely a volunteer effort: if that's not enough, maybe D isn't right for you. This isn't Kotlin or Swift, where one of the largest companies in the world puts full-time devs on the language and gives everything away for free because it suits their agenda. In Apple's case, that means Swift doesn't really support Android and definitely doesn't support Android/AArch64, because putting full-time devs on getting Swift working well with Android doesn't suit their agenda of pushing iOS: Swift locks you in too much. Kotlin is becoming more cross-platform now since google is more cross-platform, but then you're depending on google continually funding development on an OSS project, which they've backed out of before: https://arstechnica.com/gadgets/2018/07/googles-iron-grip-on-android-controlling-open-source-by-any-means-necessary/ I don't fault google for making those choices, as nobody has a right to their OSS contributions, but it is something to consider when using any platform, and even more so for an OSS project: who is funding this and why? Will their model be sustainable? There are no easy answers here: if you want a free-priced, OSS toolchain, you're going to be marching to the beat of someone's drum. We all understand that. But often you don't get to choose. If the user wants an app for Android/iOS, what are you gonna tell him or her? "I'm not marching to the beat of Google's drum."? Also, having no support, or no smooth support, for something doesn't make the D community "rebels". 
As for ongoing maintenance, Android/ARM was done years ago and hasn't taken much in the way of maintenance to keep most of the stdlib/dmd tests passing, so I don't think that's much of an issue. Just to make sure it all works. The less work the better. btw, it was a thread _you_ started that finally spurred me to begin this Android port five years back, though I'd enquired about and had been considering it earlier: https://forum.dlang.org/thread/yhulkqvlwnxjklnog...@forum.dlang.org Ha ha! I know and you picked up on it. Thank you very much, it's much appreciated. But look at the date: November 2013 (!) and we're still talking about it while others have overtaken D in this respect. 5 years + the founding of the D Language Foundation. Sometimes it's good to think outside the box a little and see what's going on around you. It's not just fancy ranges and allocators. The software has to actually run somewhere.
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 08:09:46 UTC, Joakim wrote: I contacted one of the few companies putting out RISC-V dev boards, Sifive, a couple weeks ago with the suggestion of making available a paid RISC-V VPS, and one of their field engineers got back to me last week with a note that they're looking into it. I think their model of having an open ISA with proprietary extensions will inevitably win out for hardware, just as a similar model has basically won already for software, but that doesn't mean that RISC-V will be the one to do it. Someone else might execute that model better. I could not agree more - look at Parallella! Their model is the same yet it ultimately failed (unfortunately as I think Exynos is seriously good stuff)! :(
Re: Mobile is the new PC and AArch64 is the new x64
On Tuesday, 11 September 2018 at 08:34:31 UTC, Chris wrote: On Tuesday, 11 September 2018 at 07:23:53 UTC, Joakim wrote: I agree with a lot of what you say here, but I'm not sure what you mean by "first class support for mobile." What exactly do you believe D needs to reach that level? Basically the things you describe. I was thinking of a stable and easy build system, e.g. $ dub init android [iOS] $ dub --arch=arm64 Yes, something like that should be done, but I won't be doing much with dub till next year. If anyone else is interested in doing it earlier, feel free. And of course check which language features work (or don't work!) on ARM and write documentation. Cf. https://kotlinlang.org/docs/reference/native-overview.html I don't see any language features listed for Kotlin there, but ldc does have an AArch64 tracker issue, which lists what else needs to be done: https://github.com/ldc-developers/ldc/issues/2153 It might be a good idea to set up a funding target to get the iOS port back up to speed again. I don't use Apple products so it won't be me picking up that porting work, but maybe Dan could be enticed to finish it as a paid project, since he did most of the voluntary work so far. I'm purely speculating, no idea if money changes the equation for him, just know that he's been too busy to work on it for the last couple years. That'd be part of the first class support. That a dedicated team works on it. Volunteers are not enough. Once it's polished it will still need loads of maintenance. I don't think there's a "dedicated team" for any platform that D runs on, so we don't have "first class support" for any platform then. D is largely a volunteer effort: if that's not enough, maybe D isn't right for you. This isn't Kotlin or Swift, where one of the largest companies in the world puts full-time devs on the language and gives everything away for free because it suits their agenda. 
In Apple's case, that means Swift doesn't really support Android and definitely doesn't support Android/AArch64, because putting full-time devs on getting Swift working well with Android doesn't suit their agenda of pushing iOS: https://github.com/apple/swift/blob/master/docs/Android.md https://blog.readdle.com/why-we-use-swift-for-android-db449feeacaf However, since Swift is largely open source, there is a small company that claims to have added Android/AArch64 support to the Swift compiler: https://www.scade.io Kotlin is becoming more cross-platform now since google is more cross-platform, but then you're depending on google continually funding development on an OSS project, which they've backed out of before: https://arstechnica.com/gadgets/2018/07/googles-iron-grip-on-android-controlling-open-source-by-any-means-necessary/ I don't fault google for making those choices, as nobody has a right to their OSS contributions, but it is something to consider when using any platform, and even more so for an OSS project: who is funding this and why? Will their model be sustainable? There are no easy answers here: if you want a free-priced, OSS toolchain, you're going to be marching to the beat of someone's drum. As for ongoing maintenance, Android/ARM was done years ago and hasn't taken much in the way of maintenance to keep most of the stdlib/dmd tests passing, so I don't think that's much of an issue. btw, it was a thread _you_ started that finally spurred me to begin this Android port five years back, though I'd enquired about and had been considering it earlier: https://forum.dlang.org/thread/yhulkqvlwnxjklnog...@forum.dlang.org On Tuesday, 11 September 2018 at 16:50:33 UTC, Dejan Lekic wrote: On Monday, 10 September 2018 at 13:43:46 UTC, Joakim wrote: LDC recently added a linux/AArch64 CI for both its main branches and 64-bit ARM, ie AArch64, builds have been put out for both linux and Android. 
It does not seem that many are paying attention to this sea change that is going on with computing though, so let me lay out some evidence. ... I mostly agree with you, Joakim. I own a very nice (but now old) ODROID U2 (check the ODROID XU4 or C2!) so ARM support is important for me... Also, check this: https://www.hardkernel.com/main/products/prdt_info.php?g_code=G152875062626 HOWEVER, I think Iain is right - PPC64 and RISC-V are becoming more and more popular nowadays and may become more popular than ARM in the future, but that future is unclear. If and when they do, I'm sure D and other languages will be ported to them, but right now they're most definitely not. I know because I actually looked for a RISC-V VPS on which to port ldc and found nothing. Conversely, I was able to rent an ARM Cubieboard2 remotely four years back when I was first getting ldc going on ARM: https://forum.dlang.org/post/steigfwkywotxsypp...@forum.dlang.org
Re: Copy Constructor DIP and implementation
On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote: Hello everyone, I have finished writing the last details of the copy constructor DIP [1] and have also published the first implementation [2]. As I wrongfully made a PR for the DIP queue in the early stages of the development of the DIP, I want to announce this way that the DIP is ready for draft review now. Those who are familiar with the compiler, please take a look at the implementation and help me improve it! Thanks, RazvanN [1] https://github.com/dlang/DIPs/pull/129 [2] https://github.com/dlang/dmd/pull/8688 I'm not sure about the naming of the `@implicit` attribute. It seems confusing when used. `@copy` feels more natural - or do we even need a new attribute at all?
Re: Mobile is the new PC and AArch64 is the new x64
On Tuesday, 11 September 2018 at 07:52:45 UTC, Joakim wrote: On Tuesday, 11 September 2018 at 07:42:38 UTC, passenger wrote: On Monday, 10 September 2018 at 13:43:46 UTC, Joakim wrote: [...] Is it possible to develop versus a NVidia Jetson, CUDA included? I think so, but I doubt anyone has ever actually tried it: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems-dev-kits-modules/ As for CUDA, Nicholas Wilson said recently that he could do something with it for his DCompute project with ldc, but no idea what the current status is: https://forum.dlang.org/post/slijjptlxdrfgvoya...@forum.dlang.org I'm about to release v0.2, ETA 1 week, with math functions and an API that asserts that you use it correctly (i.e. less reliance on driver error codes, which makes it easier to develop for).
Re: Mobile is the new PC and AArch64 is the new x64
On Wednesday, 12 September 2018 at 06:41:38 UTC, Gambler wrote: [snip] In essence, we are seeing the rapid widening of two digital divides. The first one is between users and average developers. The second one is between average developers and researchers at companies like Google. I very much doubt that we will see an equivalent of today's high-end machine learning server on user's desk, let alone in anyone's pocket, within 7 years. I don't think it's necessarily gonna be like the late 80's PC "revolution" that led to ever more powerful machines being available to the average home user. But most definitely people are switching to mobile, especially because most phones are now powerful enough to do what people used PCs for: internet, email, streaming and even gaming. Then you have speech recognition and text to speech on Android and iOS which makes mobile phones attractive for the visually impaired, and it fits into your pocket. There may be additional benefits in places like Africa where you might not be able to set up PCs and laptops everywhere (which is true even of first world countries). Think of money transfer via apps. I think that's a huge thing in some places in Africa. It's not just about the processing power, it's about convenience. The first question you often hear is "Is there an app for it too?" And even if ARM is replaced someday, the mobile market will remain strong, just with a different architecture - and then D has to cater for that too.
Re: Mobile is the new PC and AArch64 is the new x64
On 9/10/2018 9:43 AM, Joakim wrote: > Yes, I know, these devices won't replace your quad-core Xeon workstation > with 32-64 GBs of RAM anytime soon, but most people don't need anywhere > near that level of compute. That's why PC sales keep dropping while > mobile sales are now 6-7X that per year: I'm all for supporting modern open CPU architectures. At the same time, I fear that the specific trend you're describing here (people ditching PCs for cellphones/tablets) is effectively a reversal of the PC revolution. For the last 30+ years people benefited from "trickle down computing". They had access to PCs that were equivalent to cutting edge servers of 6-7 years prior. They had ability to choose their operating system, expand and upgrade their hardware and install any software they wanted. All of this is breaking down right now. Intel got lazy without competition and high-end CPU architectures stagnated. All the cutting-edge computing is done on NVidia cards today. It requires hundreds of gigabytes of RAM, tens of terabytes of data and usage of specialized computing libraries. I very much doubt this will "trickle down" to mobile in foreseeable future. Heck, most developer laptops today have no CUDA capabilities to speak of. Moreover, mobile devices are locked down by default and it's no trivial task to break out of those walled gardens. IIRC, Apple has an official policy of not allowing programming tools in their app store. Alan Kay had to personally petition Steve Jobs to allow Scratch to be distributed, so kids could learn programming. I believe the general policy is still in place. Android is better, but it's still a horror to do real work on, compared to any PC OS. Fine, you rooted it, installed some compilers and so on. How will you share your software with fellow Android users? In essence, we are seeing the rapid widening of two digital divides. The first one is between users and average developers. 
The second one is between average developers and researchers at companies like Google. I very much doubt that we will see an equivalent of today's high-end machine learning server on user's desk, let alone in anyone's pocket, within 7 years. My only hope is that newer AMD processors and popularity of VR rigs may help narrow these divides.
DMD32 compiling gtkd out of memory on 32bit Windows 7 machine
Hi, when compiling gtkd using dub, dmd32 reported "Out of memory" and exited. OS: Windows 7 32bit. RAM: 3GB. DMD version: v2.082.0 32bit. No VC or Windows SDK is installed; when setting up DMD, I selected the option to install VC2010 and use the MinGW lib.