Re: `shared`...
On Monday, 1 October 2018 at 02:29:40 UTC, Manu wrote: I feel like I don't understand the design... mutable -> shared should work the same as mutable -> const... because surely that's safe? Nope. Consider: struct A { A* a; } void foo(shared A* a) { a.a = new shared(A)(); } Now you have effectively made a.a accessible as mutable even though it is shared.
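To spell the hole out a bit, here is a hedged sketch of the scenario; the commented-out call is what an implicit mutable -> shared conversion would have allowed:

```d
struct A { A* a; }

void foo(shared A* a)
{
    a.a = new shared(A)();
}

void main()
{
    A* local = new A();
    // If mutable -> shared were implicit, this would compile:
    //     foo(local);
    // foo would then store a shared(A) into local.a, while main can
    // still read and write local.a as plain thread-local mutable
    // data -- the same object reachable as both shared and unshared.
}
```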
Re: I used to be able to use a buffer for toUTF operations, what happened?
On Wednesday, 11 April 2018 at 12:41:24 UTC, Vladimir Panteleev wrote: On Wednesday, 11 April 2018 at 12:04:24 UTC, deadalnix wrote: This used to be an option: dchar val = ...; char[4] buf; toUTF8(buf, val); Now I'm getting an error. This std.utf.toUTF8 overload was deprecated in 2.074.0 and finally removed in 2.077.0: https://run.dlang.io/is/O57AGU (click Run) Do you have deprecation messages turned on? Yes, but I skipped a few versions. encode as proposed indeed does the job, so no problem. Thanks everybody.
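For reference, a minimal sketch of the std.utf.encode replacement mentioned above, writing into a stack buffer with no GC allocation:

```d
import std.utf : encode;

void main()
{
    dchar val = '€';
    char[4] buf;
    size_t len = encode(buf, val); // writes the UTF-8 sequence into buf
    assert(len == 3);              // U+20AC is 3 bytes in UTF-8
    assert(buf[0 .. len] == "€");
}
```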
I used to be able to use a buffer for toUTF operations, what happened?
This used to be an option: dchar val = ...; char[4] buf; toUTF8(buf, val); Now I'm getting an error. Looking at the doc, it seems that there are only options that return a string, which I assume is allocated by the GC. Has the function moved somewhere else? If not, what's going on?
Re: Opt-in non-null class references?
On Wednesday, 28 February 2018 at 14:05:19 UTC, Jonathan M Davis wrote: I expect that pretty much anything you propose that requires code flow analysis is DOA. Walter is almost always against features that require it, because it's so hard to get right, and the places that D does use it tend to have problems (e.g. it's actually quite trivial to use a const or immutable member variable before it's initialized). Honestly, this is not that hard. It's only very hard in DMD because it doesn't go through an SSA-like form at any point. It's rather disappointing to see the language spec being decided based on design decisions made in a compiler many years ago.
Re: PackedAliasSeq?
On Thursday, 22 February 2018 at 19:26:54 UTC, Andrei Alexandrescu wrote: After coding https://github.com/dlang/phobos/pull/6192 with AliasSeq, the experience has been quite pleasurable. However, in places the AliasSeq tends to expand too eagerly, leading to a need to "keep it together", e.g. when you need to pass two of those to a template. I worked around the issue by nesting templates like this:

```d
template Merge(T...)
{
    template With(U...)
    {
        static if (T.length == 0)
            alias With = U;
        else static if (U.length == 0)
            alias With = T;
        else static if (T[0] < U[0] || T[0] == U[0] && T[1].stringof <= U[1].stringof)
            alias With = AliasSeq!(T[0], T[1], Merge!(T[2 .. $]).With!U);
        else
            alias With = AliasSeq!(U[0], U[1], Merge!T.With!(U[2 .. $]));
    }
}
```

So instead of the unworkable Merge!(AliasSeq!(...), AliasSeq!(...)), one would write Merge!(AliasSeq!(...)).With!(AliasSeq!(...)). The problem remains for other use cases, so I was thinking of adding this simple artifact to std.meta:

```d
template PackedAliasSeq(T...)
{
    alias expand = AliasSeq!T;
}
```

That way, everything stays together and can be expanded on demand. Andrei

Isn't a packed AliasSeq just a tuple?
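For illustration, a minimal sketch of how such a pack behaves; `Pack` and `Both` are invented names, not existing std.meta facilities:

```d
import std.meta : AliasSeq;

// The pack idiom: wrap a sequence so it does not auto-expand,
// and expose the contents through a member.
template Pack(T...)
{
    alias expand = AliasSeq!T;
}

alias a = Pack!(int, long);
alias b = Pack!float;

// Two packs can now be passed to one template without merging:
template Both(alias A, alias B)
{
    alias Both = AliasSeq!(A.expand, B.expand);
}

static assert(Both!(a, b).length == 3);
static assert(is(Both!(a, b)[0] == int));
```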
Re: Developing blockchain software with D, not C++
On Thursday, 18 January 2018 at 09:02:38 UTC, Walter Bright wrote: I don't remember how long, but it took me a fair while to do the divide: https://github.com/dlang/druntime/blob/master/src/rt/llmath.d It could be upscaled by rote to 128 bits, but even that would take me much longer than an hour. And it would still leave the issue of making ucent work with 32 bit code gen. It could also be translated to D, but I doubt the generated code would be as good. Nevertheless, we do have the technology, we just need someone to put it together. All the code to split 64 bits into 32 bits was generic and could be reused.
Re: Developing blockchain software with D, not C++
On Thursday, 18 January 2018 at 03:19:57 UTC, deadalnix wrote: On Sunday, 14 January 2018 at 23:03:27 UTC, Andrei Alexandrescu wrote: Thanks for these thoughts! * (u)cent support * fixes for the shared qualifier * ownership mechanism These took less than 1h to add support for? That would be awesome... but realistically only the (u)cent sounds like that size of effort. Agreed. That would already be a plus, as it would allow doing all the crypto in D. Reading this again, I think there is a bit of a misunderstanding. Only cent/ucent took me ~1h to implement. The rest is more complex. That being said, having cent/ucent would unlock a great deal of performance for crypto libraries, and that is where the bottleneck is, as far as the CPU is concerned, in this type of application.
Re: Developing blockchain software with D, not C++
On Sunday, 14 January 2018 at 23:03:27 UTC, Andrei Alexandrescu wrote: Thanks for these thoughts! * (u)cent support * fixes for the shared qualifier * ownership mechanism These took less than 1h to add support for? That would be awesome... but realistically only the (u)cent sounds like that size of effort. Agreed. That would already be a plus, as it would allow doing all the crypto in D. I've always wondered why we can't implement struct LargeInt(uint bytes) as a library mechanism for large fixed-size integers, with asm specialization for LargeInt!8. Is adding the type to the compiler necessary, and if so, why? Asm specialization would not be ideal. Compilers have a pass called legalization, where they break down operations on types larger than the largest the platform supports into a series of operations on smaller types. This pass can generate specific patterns that the rest of the compiler is able to understand and optimize for. The result is the use of instructions, such as mulhi, that would be very difficult for the compiler to reconstruct from the use of smaller integer types. Using asm is not ideal, unless the whole routine is written in asm, because the compiler finds itself unable to optimize it, for instance after inlining. So even if it can inline - modern compilers can inline asm under specific circumstances - it finds itself unable to optimize operations at a higher level, such as computing (a + b) + (c + d) instead of ((a + b) + c) + d. Having types larger than 128 bits is not really necessary, as you can leverage 128-bit integers and do the magic yourself. For instance, to add two 256-bit integers represented as ulong[4], you can do:

```d
ucent acc = 0;
ulong[4] result;
foreach (i; 0 .. 4)
{
    acc += a[i];
    acc += b[i];
    result[i] = cast(ulong) acc;
    acc >>= 64;
}
```

This will generate ideal code on a modern optimizing compiler. Doing the same thing using only ulong will not generate good code, as the compiler would have to understand, from whatever techniques you used, that you are indeed implementing addition with carry propagation. The problem gets even hairier for multiplication.
Re: Developing blockchain software with D, not C++
On Saturday, 30 December 2017 at 16:59:41 UTC, aberba wrote: In this video[1] from 2016, a developer talks about C++ memory safety features, meta-programming, maturity and a few others as the main reasons they chose it for developing their blockchain software (the way I got it from a quick view). Besides D's maturity (which I can't confirm or deny), what else does D miss to be considered a better alternative for blockchain in 2018? D is also more productive, and has safety and unittest built in. 1. https://www.youtube.com/watch?v=w4jq4frE5v4 I can talk about this first hand, as I have a project running in D. However, I would sadly not recommend D ATM for such a project, for 2 reasons: 1/ It is practically not possible to write efficient crypto routines without cent/ucent, short of writing them in asm. 2/ The network layer becomes very tedious very quickly because of how broken shared is, and because there is no ownership mechanism. While C++ has none of this either, at least it doesn't get in your way. I would *LOVE* to be able to use more D on a day-to-day basis, but these 2 problems make it very hard. It is especially sad considering 1/ could be solved very easily. It literally took me less than 1h to add support for it in SDC.
Re: [OT] Bitcoin's Split Is Good for Progress
On Monday, 7 August 2017 at 23:45:13 UTC, Joakim wrote: On Wednesday, 2 August 2017 at 16:21:41 UTC, jmh530 wrote: I was surprised to see a familiar name here: https://www.bloomberg.com/view/articles/2017-08-02/bitcoin-s-split-is-good-for-progress Here's an interview with Amaury about the Bitcoin split: https://bitcoinmagazine.com/articles/future-bitcoin-cash-interview-bitcoin-abc-lead-developer-amaury-séchet/ I've been meaning to interview him for the D blog, about D of course, need to get around to that. I guess we can do that now, but as you can imagine, I was pretty busy :)
Re: [OT] Bitcoin's Split Is Good for Progress
On Wednesday, 2 August 2017 at 19:00:05 UTC, Ali Çehreli wrote: On 08/02/2017 09:21 AM, jmh530 wrote: I was surprised to see a familiar name here: https://www.bloomberg.com/view/articles/2017-08-02/bitcoin-s-split-is-good-for-progress "They -- led by former Facebook developer Amaury Sechet -- moved ahead with new software that would increase the maximum block size to 8 MB." Ali We won't stop at 8 :)
Re: [OT] - A hacker stole $31M of Ether — how it happened, and what it means for Ethereum
On Friday, 4 August 2017 at 05:57:00 UTC, Nick B wrote: See - https://medium.freecodecamp.org/a-hacker-stole-31m-of-ether-how-it-happened-and-what-it-means-for-ethereum-9e5dc29e33ce A long read. Someone has stolen $31M of Ether. To give an idea of how bad it is: https://news.ycombinator.com/item?id=14691212 Anyone writing smart contracts on ETH right now is crazy.
Re: [OT] uncovering x86 hardware bugs and unknown instructions by fuzzing.
On Monday, 31 July 2017 at 07:17:33 UTC, Guillaume Chatelet wrote: Some people here might find this interesting: https://github.com/xoreaxeaxeax/sandsifter White paper here: https://github.com/xoreaxeaxeax/sandsifter/blob/master/references/domas_breaking_the_x86_isa_wp.pdf This man is a superhero. See also https://www.youtube.com/watch?v=lR0nh-TdpVg for in-hardware privilege escalation, and https://www.youtube.com/watch?v=HlUe0TUHOIc . We should consider building a shrine for this guy.
Re: Why do "const inout" and "const inout shared" exist?
On Saturday, 1 July 2017 at 21:47:20 UTC, Andrei Alexandrescu wrote: Walter looked at http://erdani.com/conversions.svg and said actually "const inout" and "const inout shared" should not exist as distinct qualifier groups, leading to the simplified qualifier hierarchy in http://erdani.com/conversions-simplified.svg. Are we missing something? Is there a need for combining const and inout? Yes. inout == mutable, const, or immutable. const inout == const or immutable.
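A small sketch of the distinction; the function names are invented for illustration:

```d
// inout can stand for mutable, const, or immutable at the call site:
inout(int)* identity(inout(int)* p) { return p; }

// const(inout) can only ever be const or immutable -- the const
// caps the mutability even when inout resolves to mutable:
const(inout(int))* view(inout(int)* p) { return p; }

void main()
{
    int x = 1;
    immutable int y = 2;

    int* a = identity(&x);            // inout -> mutable
    immutable(int)* b = identity(&y); // inout -> immutable

    const(int)* c = view(&x);         // const inout -> const
    immutable(int)* d = view(&y);     // const inout -> immutable
}
```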
Re: Let's paint those bikesheds^Werror messages!
On Tuesday, 27 June 2017 at 19:43:03 UTC, Vladimir Panteleev wrote: On Tuesday, 27 June 2017 at 19:39:25 UTC, deadalnix wrote: Please, please, please, just do the same as clang. I don't think clang has this feature, so doing the same as clang would be a regression. We're in uncharted waters! Oh, sorry, this is syntax highlighting for the code itself, not the error messages. Well, not sure then, I'm no designer, so I trust you guys to come up with something good.
Re: Let's paint those bikesheds^Werror messages!
On Tuesday, 27 June 2017 at 14:32:28 UTC, Vladimir Panteleev wrote: As has been announced, DMD now has colorized syntax highlighting in error messages: http://forum.dlang.org/post/of9oao$230j$1...@digitalmars.com With 2.075's release near, now would be a good time to decide on a nice color palette that looks fine on most terminals. So, please vote: https://github.com/dlang/dmd/pull/6943 Obligatory: - Yes, not everyone likes colors. You can turn all colors off with a command-line switch. - Yes, everyone agrees that having all colors be configurable would be good. We still need defaults that are going to look OK on most terminals. - Yes, no matter what colors we choose, they're going to look bad on some terminal somewhere. Let's worry about the major platforms' most common terminals for now. Please, please, please, just do the same as clang.
Re: Is there a good lib out there to handle large integers of known size?
On Monday, 12 June 2017 at 15:41:03 UTC, Era Scarecrow wrote: On Monday, 12 June 2017 at 13:41:36 UTC, deadalnix wrote: You misunderstood. We need cent/ucent supported by the compiler to get to larger integral types efficiently. There is no way around it. There are a ton of operations, such as x86's MUL, which produce a large multiplication result into 2 registers. There is no way to leverage these without compiler-provided cent/ucent. Agreed. But until larger types are natively available (either by simulation built into the compiler or by hardware registers) you gotta work with what you got. I work with SDC on that one. That's the only reasonable path forward.
Re: Is there a good lib out there to handle large integers of known size?
On Sunday, 11 June 2017 at 18:01:41 UTC, Era Scarecrow wrote: On Sunday, 11 June 2017 at 08:52:37 UTC, deadalnix wrote: I ended up doing my own. There is just no way to do it well without cent/ucent. Weka is running into the same problem for error correction. And what timing, I just finished getting it working with assembly and compatible versions. Seems 5x slower rather than the dreaded 50, but that's from testing factorial up to 100 (200ms vs 1,200ms). Anyways, here's how it is currently; most features are available, I need more tests and to find occurrences where it doesn't work. https://github.com/rtcvb32/Side-Projects/blob/master/scaledint.d One note: to get the x86 assembly versions, use -version=Intel; I have it disabled so I can concentrate on the compatible version and switch between them on my machine. -debug includes a unittest that runs a factorial test. You misunderstood. We need cent/ucent supported by the compiler to get to larger integral types efficiently. There is no way around it. There are a ton of operations, such as x86's MUL, which produce a large multiplication result into 2 registers. There is no way to leverage these without compiler-provided cent/ucent.
Re: Is there a good lib out there to handle large integers of known size?
On Saturday, 10 June 2017 at 20:19:22 UTC, Era Scarecrow wrote: On Saturday, 10 June 2017 at 19:40:47 UTC, Andrei Alexandrescu wrote: On 6/10/17 3:28 PM, Era Scarecrow wrote: Got a possible one. My implementation is heavy on assembly language to take advantage of x86 features That's cool as long as the assembler is guarded by version(X_86) and has a portable alternative. Where's the code? -- Andrei My computer, and no portable alternative yet. I've got a ways to go before that's an option; I'll probably end up doing 32-bit math with ulongs to make it work reliably and portably. Worse, it will probably be 10-50x slower. I ended up doing my own. There is just no way to do it well without cent/ucent. Weka is running into the same problem for error correction.
Re: Value closures (no GC allocation)
On Sunday, 21 May 2017 at 00:33:30 UTC, Vittorio Romeo wrote: Hello everyone, I recently started learning D (I come from a Modern C++ background) and I was curious about closures that require GC allocation. I wrote this simple example:

```d
auto bar(T)(T x) @nogc { return x(10); }
auto foo(int x) @nogc { return bar((int y) => x + y + 10); }
int main() @nogc { return foo(10); }
```

It doesn't compile with the following error: Error: function example.foo is @nogc yet allocates closures with the GC example.foo.__lambda2 closes over variable x at [...] Live example on godbolt: https://godbolt.org/g/tECDh4 I was wondering whether or not D could provide some syntax that allowed the user to create a "value closure", similar to how C++ lambdas work. How would you feel about something like:

```d
auto bar(T)(T x) @nogc { return x(10); }
auto foo(int x) @nogc { return bar([x](int y) => x + y + 10); }
int main() @nogc { return foo(10); }
```

The syntax [x](int y) => x + y + 10 would mean "create a 'value closure' that captures `x` by value inside it". It would be equivalent to the following program:

```d
struct AnonymousClosure
{
    int captured_x;
    this(int x) @nogc { captured_x = x; }
    auto opCall(int y) @nogc { return captured_x + y + 10; }
}

auto foo(int x) @nogc { return bar(AnonymousClosure(x)); }
```

Which is very similar to how C++ lambdas work. This would allow closures to be used in @nogc contexts with minimal syntactic overhead over classical closures. Live example on godbolt: https://godbolt.org/g/ML2dlP What are your thoughts? Has something similar been proposed before?

https://wiki.dlang.org/DIP30 Also, while no syntax is provided, this is how SDC works internally and this is how it can handle multiple context pointers.
Re: Thoughts on some code breakage with 2.074
On Thursday, 11 May 2017 at 12:26:11 UTC, Steven Schveighoffer wrote: if(arr) -> same as if(arr.ptr) Nope. It is: if(arr) -> same as if(((cast(size_t) arr.ptr) | arr.length) != 0) Should we conclude, from the fact that absolutely nobody gets it right in this very forum, that nobody will get it right outside of it? I'll let you judge.
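A sketch of the corner case this creates, assuming the semantics described in the post: an empty slice can still be truthy as long as its pointer is non-null.

```d
void main()
{
    int[] a = null;
    assert(!(a ? true : false));   // null ptr, zero length: falsy

    int[] b = [1, 2, 3];
    int[] c = b[0 .. 0];           // zero length, but non-null ptr
    assert(c.length == 0);
    assert(c ? true : false);      // still truthy -- the surprise
}
```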
Re: Thoughts on some code breakage with 2.074
On Thursday, 11 May 2017 at 12:21:46 UTC, Steven Schveighoffer wrote: I can't imagine anyone attempted to force this to break without a loud backlash. I think if(ptr) is mostly universally understood to mean the pointer is not null. -Steve It is not a problem for pointers, because for pointers identity and equality are the same thing. They aren't for slices.
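To illustrate the identity/equality split for slices, a minimal sketch:

```d
void main()
{
    int[] a = [1, 2, 3];
    int[] b = a.dup;
    assert(a == b);   // equality: element-wise comparison
    assert(a !is b);  // identity: different ptr/length pairs

    int[] c = a;
    assert(a is c);   // identity: same ptr and length
}
```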
Re: Fantastic exchange from DConf
On Thursday, 11 May 2017 at 21:20:35 UTC, Jack Stouffer wrote: On Tuesday, 9 May 2017 at 14:13:31 UTC, Walter Bright wrote: 2. it may not be available on your platform I just had to use valgrind for the first time in years at work (mostly Python code there) and I realized that there's no version that works on the latest OS X version. So valgrind runs on about 2.5% of computers in existence. Fun! Use ASAN.
Re: Fantastic exchange from DConf
On Wednesday, 10 May 2017 at 17:51:38 UTC, H. S. Teoh wrote: Haha, I guess I'm not as good of a C coder as I'd like to think I am. :-D That comment puts you ahead of the pack already :)
Re: Thoughts on some code breakage with 2.074
On Wednesday, 10 May 2017 at 19:06:40 UTC, Ali Çehreli wrote: Bummer for H. S. Teoh I guess... :/ Although I prefer explicit over implicit in most cases, I've never graduated from if(p) and am still using it happily. :) Ali All bool conversions in D are value based, not identity based. Not only is this error prone, it is also inconsistent.
Re: DIP 1004 Preliminary Review Round 1
On Monday, 8 May 2017 at 08:25:24 UTC, Andrej Mitrovic wrote: Thoughts? It seems like the most sensible path forward. Mike ?
Re: Fantastic exchange from DConf
On Saturday, 6 May 2017 at 17:59:38 UTC, thedeemon wrote: On Saturday, 6 May 2017 at 06:26:29 UTC, Joakim wrote: Walter: I believe memory safety will kill C. And then null safety will kill D. ;) I actually think this is more likely than memory safety killing C. Both are very important, but D is just easier to kill than C, for historical reasons.
Re: DIP 1004 Preliminary Review Round 1
On Tuesday, 2 May 2017 at 11:13:35 UTC, Andrej Mitrovic wrote: On Tuesday, 2 May 2017 at 09:03:27 UTC, deadalnix wrote: 100% in favor of the constructor behavior change in case no constructor is in the derived class. I think we could even split this up into two separate proposals, because this part of the DIP is fairly non-controversial and could be approved much faster (and implementation-wise it should be fairly simple to support). <3
Re: DIP 1004 Preliminary Review Round 1
On Monday, 1 May 2017 at 14:55:28 UTC, Mike Parker wrote: DIP 1004 is titled "Inherited Constructors". https://github.com/dlang/DIPs/blob/master/DIPs/DIP1004.md All review-related feedback on and discussion of the DIP should occur in this thread. Due to DConf taking place during the review period, the period will be extended by a week. The review period will end at 11:59 PM ET on May 22 (3:59 AM GMT May 23), or when I make a post declaring it complete. At the end of Round 1, if further review is deemed necessary, the DIP will be scheduled for another round. Otherwise, it will be queued for the formal review and evaluation by the language authors. Thanks in advance to all who participate. Destroy! 100% in favor of the constructor behavior change in the case where no constructor is in the derived class. Not convinced by the alias this trick. It doesn't pay for itself, IMO. It could be provided by a mixin or something and shouldn't be baked into the language.
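The mixin alternative could look something like this — a hedged sketch, with `InheritConstructors` being an invented name, not an existing library facility:

```d
// A mixin template whose templated constructor forwards any
// argument list to the base class constructor.
mixin template InheritConstructors()
{
    this(Args...)(auto ref Args args)
    {
        super(args);
    }
}

class Base
{
    int x;
    this(int x) { this.x = x; }
}

class Derived : Base
{
    mixin InheritConstructors;
}

void main()
{
    auto d = new Derived(42);
    assert(d.x == 42);
}
```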
Re: DIP 1005 - Preliminary Review Round 1
On Sunday, 23 April 2017 at 19:25:09 UTC, Andrej Mitrovic wrote: With this syntax, the import is executed only if the declared name (process) is actually looked up. I don't believe the workaround with the `from` template fixes this. Not sure what DMD does, but SDC sure would do it only if used.
Re: DIP 1005 - Preliminary Review Round 1
On Saturday, 22 April 2017 at 11:54:08 UTC, Mike Parker wrote: Destroy! I'm not per se against going there, but there are 2 points that need to be considered. The first one is the "self important lookup", which obviates the need for this DIP to some extent. Second, if we are going to proceed anyway, the way this is specified is not ideal. This DIP effectively adds 2 features: 1/ The ability to use import foo as an argument to a with statement. 2/ The introduction of a with declaration in addition to the with statement. These two additions are independent as far as the spec goes, and should be kept that way, so as to avoid an explosion of ad hoc solutions for the use case we want to enable, rather than providing building blocks that combine nicely to build such solutions.
Re: DIP 1005 - Preliminary Review Round 1
On Sunday, 23 April 2017 at 12:34:34 UTC, Andrej Mitrovic wrote: On Sunday, 23 April 2017 at 12:03:47 UTC, Andrei Alexandrescu wrote: Mostly out of a sense of conformity. We asked Michael to give no special treatment of DIPs originating from us, and this one was open, so he put it up for review. It is likely it will end up rejected in favor of https://github.com/dlang/druntime/pull/1756. Wouldn't there be a compile-time performance impact from instantiating so many templates, virtually one for every parameter whose type is defined in another module? It's just one per module. Templates are only instantiated once per new set of arguments. There may be some gain here, but I doubt it is worth adding a new language feature.
Re: Proposal 2: Exceptions and @nogc
On Tuesday, 11 April 2017 at 17:43:20 UTC, Walter Bright wrote: On the other hand, overly principled languages tend to not be as successful, because what people need to do with programs is often dirty. Monads, and "functional reactive programming", are obtuse things that come about when a functional programming language requires 100% purity and immutability. Monads and "functional reactive programming" are not a manifestation of being principled, but of being ideological. JavaScript is principled. Languages that are historically very unprincipled, such as PHP and C++, didn't succeed because they were unprincipled, but because they were quick and dirty hacks solving people's problems where no other solution was available, and they are moving toward being more principled with each version.
Re: Proposal 2: Exceptions and @nogc
On Monday, 10 April 2017 at 21:44:32 UTC, Jonathan Marler wrote: "throw" operator (throw a Throwable object) "new" operator (create a GC object) "throw new" operator (create and throw a reference-counted Throwable object) There is no need for this; the compiler already understands the notion of unique for newed objects and objects returned from pure functions.
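The existing uniqueness inference shows up in implicit conversions the compiler already accepts; a minimal sketch:

```d
// A strongly pure function's result cannot alias anything else,
// so the compiler treats it as unique:
int[] makeArr() pure { return [1, 2, 3]; }

void main()
{
    // A `new` expression with immutable-convertible arguments is
    // unique, so it converts to immutable without a cast:
    immutable Exception e = new Exception("oops");

    immutable int[] arr = makeArr(); // likewise, no cast needed
    assert(arr == [1, 2, 3]);
}
```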
Re: Proposal 2: Exceptions and @nogc
On Sunday, 9 April 2017 at 20:14:24 UTC, Walter Bright wrote: For another, a general mechanism for safe refcounting of classes has eluded us. The only thing that needs to be baked into the language is making sure things do not escape in an uncontrolled manner. Everything else is library. You wouldn't have this problem if you had listened to myself and Marc when defining DIP1000, because that's exactly what you were warned about at the time. Quoting from the lifetime ML from Nov 2014: [...] Every expression now has a lifetime associated with it, and can be marked as "scope". It is only possible to assign b to a if b has a lifetime equal to or greater than a's. An infinite lifetime is a lifetime greater than or equal to any other lifetime. Expressions of infinite lifetime are: - literals - GC heap allocated objects - statics and enums - rvalues of types that do not contain indirections - non-scope rvalues. A dereference shares the lifetime of the dereferenced expression (i.e. infinite lifetime unless the expression is scope). An address-of expression shares the lifetime of the base expression, and in addition gains the scope flag. Comment: Using these rules, we basically define any indirection as being of infinite lifetime by default, and we propagate the lifetime when scope. The addition of the scope flag for address-of is necessary to prevent an address-of followed by a dereference from yielding an infinite lifetime. Variable declarations (including parameters) have the lifetime of the block they are declared in (2 pitfalls here, for which I don't have a good solution, and neither does the original spec: #1 destructors, finally, and scope statements, and #2 closures). Uses of these variables share the lifetime of the variable, unless they qualify for infinite lifetime. Parameters' lifetimes are unordered, meaning smaller than infinite, greater than the function's scope, but not equal to, nor greater/smaller than, each other. [...]
Re: Exceptions in @nogc code
On Sunday, 9 April 2017 at 13:16:45 UTC, irritate wrote: The problems is with the so called "proposals". Second class ideas nowhere near implementation. There is a better discussion in this forum, every other week. Deadalinx should get a better image of the quality of his own work and stop shamelessly touting it. irritate From you, I'm taking it as a compliment. Thanks.
Re: dmd Backend converted to Boost License
On Friday, 7 April 2017 at 15:14:40 UTC, Walter Bright wrote: https://github.com/dlang/dmd/pull/6680 Yes, this is for real! Symantec has given their permission to relicense it. Thank you, Symantec! <3
Re: Exceptions in @nogc code
On Thursday, 6 April 2017 at 16:56:10 UTC, Olivier FAURE wrote: I'm not saying you're wrong, but there's a difference between saying "You should flesh out your idea" and "We're not going to respond formally before you submit a DIP". Yes, that's essentially my problem here.
Re: Exceptions in @nogc code
On Thursday, 6 April 2017 at 22:11:55 UTC, Walter Bright wrote: On 4/6/2017 2:18 PM, H. S. Teoh via Digitalmars-d wrote: You were asking for a link to deadalnix's original discussion, and that's the link I found (somebody else also posted a link to the same discussion). Only deadalnix can confirm that's what he's talking about. Yes, this: https://forum.dlang.org/thread/kpgilxyyrrluxpepe...@forum.dlang.org Also this: https://forum.dlang.org/post/kluaojijixhwigouj...@forum.dlang.org I also produced a fairly detailed spec of how lifetime can be tracked, in the lifetime ML. It addresses scope and does not require owned by itself. Considering the compiler already infers what it calls "unique", it could solve the @nogc Exception problem to some extent without the owned part. Because it is in a ML, I cannot post a link.
Re: shared: Has anyone used it without a lot of pain?
On Wednesday, 5 April 2017 at 14:01:24 UTC, Guillaume Piolat wrote: Do we have a missed opportunity with shared? Yes we do. The #1 problem is that it lacks a bridge to and from the "normal" thread-local world. There is literally no way to use shared in a correct way; you always need to bypass part of the language ATM. The 3 main ways data is shared go as follows: 1/ Producer/consumer. Thread 1 creates some object and sends it to thread 2 for processing. This is common in server applications, for instance, where a thread will accept requests and then dispatch them to worker threads. Right now, there is no way to transfer ownership from one thread to another, so you pretty much have to cast to shared, move the data to the other thread, and then cast back to non-shared. (People who followed closely will notice that this is somewhat entangled with other problems D has, such as @nogc exceptions, see http://forum.dlang.org/post/ikzzvbtvwhqweqlzx...@forum.dlang.org ). 2/ Actually shared objects with temporal ownership via a mutex. Once again, this is about ownership. One can get temporary ownership of some shared object, which one can manipulate as if it were thread-local for the duration the mutex is held. The current way to do this is to take the mutex and cast away shared. This is bad, as it breaks all type system guarantees. To be done safely, this needs a way to specify what is actually owned by the object protected by the mutex and what isn't. A Phobos facility could, provided ownership were known, take an object, take the mutex, and then allow access to the object in a "scope" manner based on the lifetime of the mutex's lock. The accessed object can even be const in the case of an RWLock, and it works beautifully because const is transitive. 3/ Actually shared objects providing methods which use atomics and the like to be thread safe. This use case is actually decently served by shared today, except for the construction of the object in the first place.
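Pattern 2/ above, sketched with invented names (`Data`, `withLock`, `g`, `m`); the cast is exactly the unchecked step the post complains about:

```d
import core.sync.mutex : Mutex;

class Data { int value; }

shared Data g;
__gshared Mutex m;

shared static this()
{
    m = new Mutex();
    g = new shared(Data)();
}

void withLock(scope void delegate(Data) dg)
{
    m.lock();
    scope (exit) m.unlock();
    // Cast away shared while the lock is held. Nothing in the type
    // system checks that this mutex actually guards `g`, nor that the
    // unshared reference does not escape the delegate -- that is the
    // missing ownership story.
    dg(cast(Data) g);
}
```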
Re: Exceptions in @nogc code
On Wednesday, 5 April 2017 at 23:49:00 UTC, Walter Bright wrote: Your original proposal listed 3 different kinds of catch, now it seems different. It's clear to me that this is more of an idea than a proposal - a lot more work needs to go into it. It is no different. These aren't special kinds of catch, any more than the existing try { ... } catch (immutable(Exception) e) { ... } is a special kind of catch. For example, adding an `owned` type constructor is a major, major language change. Issues of implicit conversion, overloading, covariance, partial ordering, type deduction, mangling, construction, postblit, TypeInfo, inout, etc., all have to be addressed and worked out. Then there's legacy compatibility, deprecation issues, interaction with other languages, interaction with multiple storage allocators, etc. Most of that was specified in the past and ignored. The C# paper is 5 years old, and it has not been adopted by C# for unknown reasons. C# is a much more constrained language than D is, making it more practical for C#. It not being adopted by C# suggests problems with it - perhaps it doesn't deliver the promised results enough to justify its cost? In D, the cost/benefit ratio is higher, because it solves problems with manual memory management that are irrelevant in C#, and it completes the existing type system instead of creating something new. However, the general idea is getting traction, as you can see from Herb's presentation at CppCon posted in a previous post. His goal was to introduce GCed arenas in C++; the basic concept is the same. I don't believe that a back-and-forth disjointed email chain here is going to resolve the major issues with it. Until a far more thorough design proposal is made, I'm going to bow out. It was specified to a fairly good extent in the lifetime ML (especially the scope part, which was specified in extreme detail) and got ignored. You were presented with most of what you are asking for now literally years ago, and chose to ignore it.
How about we start from there, instead of asking me to redo all the work from scratch? I have people paying me to do useful work, and people asking me to redo the same work again and again for free, only to ignore it. Who do you think gets most of my time? I like D and all, but there are limits.
Re: Exceptions in @nogc code
On Thursday, 6 April 2017 at 03:52:39 UTC, Andrei Alexandrescu wrote: Thank you. If history is any indication, there is little to show after years of being around the community. The pattern seems to be a frustration that other people don't work on your ideas, which you can't convince yourself to spend time on. No, the pattern is: you guys propose something. It is pointed out that this thing won't work as well as expected, and that you'll run into problems X, Y and Z down the road. You are then presented with alternatives, which you ignore, and you choose to proceed anyway. When, later on, you run into problems X, Y and Z as predicted, you act like this is news to you, and ask people to redo a bunch of work that you can ignore again. You can paint me as the lazy bum here, but there is a reason why the lifetime ML died. You ignored all proposals that weren't your own, and people stopped participating. I'm just the only one persistent enough to keep pointing it out.
Re: Proposal: Exceptions and @nogc
On Wednesday, 5 April 2017 at 09:51:16 UTC, Walter Bright wrote: Much of Phobos has been redone to not assume/require the GC. A glaring exception (!) is when Exceptions are thrown, which is why we're looking for a solution. Make the exception owned, and let the caller decide.
Re: Exceptions in @nogc code
On Wednesday, 5 April 2017 at 12:14:38 UTC, Andrei Alexandrescu wrote: As a matter of procedure no, a forum post will not be followed by a formal response. The DIP process ensures a formal response. [...] I encourage anyone interested in pursuing this idea to work on a DIP. Thanks, Andrei To be blunt, I played the DIP game in the past, never again. This is very time consuming and nobody gives a shit. You two just do whatever the heck you want at the end of the day. I'm just pointing out that I predicted the problem you are running into with your brilliant approach years before you realized it was a problem. You can decide not to listen; not really my problem. I wrote fairly comprehensive specs of the idea in various places, including in the ML you created for this very topic. I just can't keep writing specs again and again for them to be ignored; that's just not a productive use of my time, and at this point I'd even say it's not very respectful to ask people to waste more time. I'm happy to work with you guys to come up with something, but I surely won't spend several days working on a spec for nothing.
Re: Exceptions in @nogc code
On Wednesday, 5 April 2017 at 09:48:47 UTC, Walter Bright wrote: try { ... } catch (owned Exception e) { ... } catch (scope Exception e) { ... } catch (Exception e) { ... } It does not look enticing. You can do that, but that's 100% equivalent to: try { ... } catch (scope Exception e) { ... } Unless you want to do something specific with the owned case? You seem to be under the impression that this does something specific for catch/throw, when it doesn't. Today, you can do try { ... } catch (immutable(Exception) e) { ... } There is nothing different here.
Re: Exceptions in @nogc code
On Tuesday, 4 April 2017 at 09:45:14 UTC, Walter Bright wrote: 1. we already have some of the benefits of the proposal because D has transitive immutability This works hand in hand with D's type qualifier system. 2. I'm looking for a solution where exceptions don't rely on the GC to the point where the GC code doesn't even need to be linked in. This proposal appears to maintain a dependence on the GC. Then just do: auto ePtr = malloc(...); auto e = *(cast(Exception*) ePtr); throw e; Problem solved, you did not use the GC. This proposal has nothing to do with Exceptions. It just happens to solve the Exception problem, just as it does many others. The problem people have with the GC isn't that it needs to be linked in, it is that collection cycles create latency that doesn't work for their use case. If allocations are freed, then there is no GC problem. Nobody is complaining that they cannot use C++ without its runtime. And the 0.1% who actually need that just write custom allocators and jump through a few hoops. This is just good engineering. You are trying to solve the wrong problem. Really, in the process you are also murdering another legit use case that is orders of magnitude more common: library writers. They want to write code that'll work whether the GC is used or not. 3. It requires annotation of catch declarations with one of "", "scope", or "owned". I expect this would be a significant problem This proposal just adds the owned type qualifier. scope already exists. It solves many problems for which new syntax has been introduced, so it's rather rich. "I don't want this proposal that adds one new syntax, I'd rather have 4 smaller proposals that each add a new syntax." 4. Since the catch block is not type checked against the corresponding throw, the object will have to have noted in it who owns it. throw never leaks, so there is no point in checking the throw.
You either throw something that is owned by the GC, in which case you leaked earlier, or you transfer ownership to the runtime and you haven't leaked yet. Because throw never leaks, there is no point in butchering throw to make it compatible with nogc. Back to the catch block: the question is the same as for a function call or anything else, really. It either borrows the exception (scope), takes ownership of it (owned), or just delegates the work to the GC. 5. The normal case is: throw new Exception("message"); ... catch (Exception e) { writeln(e.msg); } which would ideally work without involving the GC at all. This cannot be nogc in the general case because e can be reassigned to anything that is owned by the GC. In that specific case, scope can be inferred. 6. reducing the amount of GC garbage created is good, but does not solve the problem of "I don't want to use D because of the GC". This proposal looks promising for making a better garbage collected language, but people want a language with an optional GC. If you have a guarantee that you won't leak, and so will never run collection cycles, you don't have a GC; you effectively have malloc and free. There is no need to butcher the language because there are morons on the internet. Really, that's hardly newsworthy.
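The malloc route sketched above can be made concrete. Here is a minimal sketch, assuming a recent druntime where emplace lives in core.lifetime (older compilers have it in std.conv); the NoGcException name and make helper are illustrative only:

```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

class NoGcException : Exception
{
    this(string msg) { super(msg); }
}

// Construct the exception in malloc'd memory, bypassing the GC heap.
NoGcException make(string msg)
{
    enum size = __traits(classInstanceSize, NoGcException);
    void[] mem = malloc(size)[0 .. size];
    return emplace!NoGcException(mem, msg);
}

void main()
{
    try
        throw make("no GC involved");
    catch (Exception e)
    {
        assert(e.msg == "no GC involved");
        free(cast(void*) e); // the catcher owns the allocation here
    }
}
```

Note that the catcher must know to free the exception; that missing ownership information is exactly what the owned/scope discussion in this thread is about.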
Re: Proposal: Exceptions and @nogc
On Monday, 3 April 2017 at 22:20:23 UTC, Walter Bright wrote: You're right that this proposal does not address how memory is allocated for anything indirectly referenced by the exception object. That is an independent issue, and is not peculiar to exception objects. There is no issue specific to Exception here.
Re: Exceptions in @nogc code
On Monday, 3 April 2017 at 08:22:41 UTC, Matthias Bentrup wrote: How would you deal with the Exception payload, e.g. the message string ? Yes, current proposals are unable to properly handle multiple levels of indirection.
Re: Exceptions in @nogc code
On Saturday, 1 April 2017 at 22:08:27 UTC, Walter Bright wrote: On 4/1/2017 7:54 AM, deadalnix wrote: It doesn't need any kind of throw new scope Exception, and was proposed, literally, years ago during discussion around DIP25 and alike. A link to that proposal would be appreciated. The forum search isn't returning anything useful so I'm not sure how to get that link. However, it goes roughly as follows. Note that it's a solution to DIP25+DIP1000+RC+nogc exceptions and a slew of other issues, and that comparing it to any of these independently will yield the obvious conclusion that it is more complex. But that wouldn't be a fair comparison, as one should compare it to the sum of all these proposals, not to any of them independently. The compiler already has the notion of "unique" and uses it to some extent, for instance to optimize pure functions and to do some magic with new allocations. The proposal relaxes and extends the concept of unique and makes it explicit, via the type qualifier 'owned'. Going into the details here would take too long, so I'll just reference this paper ( https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/msr-tr-2012-79.pdf ) for detailed work in C# about this. This is easily applicable to D. The GC heap is currently composed of several islands: one island per thread + one shared island + one immutable island. The owned qualifier essentially enables having more islands, and each indirection tagged owned is a jump to another island. What's currently understood as "unique" by the compiler can be considered owned, which includes values returned from strongly pure functions and fresh new allocations. Let's see how that applies to exceptions and the GC: - If a T is thrown, the runtime assumes GC ownership of the exception. - If an owned(T) is thrown, the runtime takes ownership of the exception (and of the graph of objects reachable from the exception). When catching: - If the catch isn't scope or owned, then it is a "consume" operation.
If the runtime had ownership of the exception, it transfers it to the GC. - If the catch is scope, then the runtime keeps ownership of the Exception. Exiting the catch block is going to destroy the Exception and the whole island associated with it. - If the catch is owned, then ownership of the Exception is transferred to the catch block. It is then either transferred back to the runtime in case of rethrow, or consumed/destroyed depending on what the catch block does with it. The only operations that need to be disallowed in nogc code are those that consume an owned such that its ownership is transferred to the GC: in this case, catch blocks which aren't owned or scope. This mechanism solves numerous other issues. Notably and non-exhaustively: - General reduction in the amount of garbage created. - Ability to transfer ownership of data between threads safely (without casts to/from shared). - Safe std.parallelism. - Elaborate construction of shared and immutable objects. - Safe reference counting. - Safe "arena" style reference counting such as: https://www.youtube.com/watch?v=JfmTagWcqoE - Solves problems with collection ownership and the like.
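To make the catch rules above concrete, here is how the three forms would read under the proposal. This is hypothetical syntax (the owned qualifier does not exist in D today), and log/mayThrow are illustrative helpers, not real APIs:

```d
void nogcHandler() @nogc
{
    try
    {
        // The runtime takes ownership of the exception and its island.
        throw new owned(Exception)("boom");
    }
    catch (scope Exception e)
    {
        // Borrow: the runtime keeps ownership and destroys the
        // exception (and its island) when the block exits. @nogc-safe.
        log(e.msg);
    }
}

void consumingHandler()
{
    try
        mayThrow();
    catch (owned Exception e)
    {
        // Ownership is transferred to this block: either rethrow
        // (handing it back to the runtime) or let it be destroyed here.
        throw e;
    }
    catch (Exception e)
    {
        // Consume: ownership is transferred to the GC. This is the
        // one form that would be disallowed in @nogc code.
    }
}
```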
Re: Exceptions in @nogc code
On Sunday, 2 April 2017 at 18:41:45 UTC, Adam D. Ruppe wrote: On Sunday, 2 April 2017 at 18:16:43 UTC, Johannes Pfau wrote: I do not want GC _allocation_ for embedded systems (don't even want to link in the GC or GC stub code) ;-) Then don't use operator `new`... you're probably using some kind of custom druntime anyway. Yes, I think it is fair to assume one will have to jump through some hoops if one doesn't want to use the runtime. That's fine.
Re: Exceptions in @nogc code
On Saturday, 1 April 2017 at 13:34:58 UTC, Andrei Alexandrescu wrote: Walter and I discussed the following promising setup: Use "throw new scope Exception" from @nogc code. That will cause the exception to be allocated in a special stack-like region. If the catching code uses "catch (scope Exception obj)", then a reference to the exception thus created will be passed to catch. At the end of the catch block there's no outstanding reference to "obj" so it will be freed. All @nogc code must use this form of catch. If the catching code uses "catch (Exception obj)", the exception is cloned on the gc heap and then freed. Finally, if an exception is thrown with "throw new Exception" it can be caught with "catch (scope Exception obj)" by copying the exception from the heap into the special region, and then freeing the exception on the heap. Such a scheme preserves backward compatibility and leverages the work done on "scope". Andrei I'll repeat myself, even if I don't believe it'll be listened to at this point. The problem you want to address is not GC allocations, it is GC collection cycles. If everything is freed, then there is no GC problem. Not only this, but this is the only way GC and nogc code will interact with each other. As long as a memory allocation has an owner the compiler can track, it can be freed explicitly; when it cannot, the compiler transfers ownership to the GC, which is illegal in @nogc code. Transferring ownership to the unwind handler when doing: throw new FooException(); is not rocket science and doesn't need any new language addition. Now on to scope. Scope essentially means that you are going to use some object without taking ownership of it. Indeed, in the case of catch(Exception e) the language has to transfer ownership of the Exception to the GC, which is the thing that should be illegal (not the throwing).
catch(scope Exception e) would work both with GC owned and runtime owned exceptions, and, because the runtime knows what's up, it can explicitly free the exception when it exits the catch block (there is already a runtime call for that), in the case where it owns it. It doesn't need any kind of throw new scope Exception, and was proposed, literally, years ago during discussion around DIP25 and alike. I urge you to reconsider the proposals that were made at the time. They solve all the problems you are discovering now, and more. And, while more complex than DIP25 alone, considering DIP25+DIP1000+this thing+the RC object thing, you are already in the zone where the "simple" approach is not so simple anymore. Things are unfolding exactly as predicted at the time. Ad hoc solutions to various problems are proposed one by one and the overall complexity is growing much larger than the initially proposed solutions.
Re: Is it acceptable to not parse unittest blocks when unittests are disabled ?
On Thursday, 30 March 2017 at 20:29:26 UTC, Dukc wrote: On Thursday, 30 March 2017 at 17:22:20 UTC, Stefan Koch wrote: SDC has the goal to be more principled. And not to be Mr. fast and loose, right? If a file parses it'd better be syntactically correct! All of it. Just an idea, but could the solution for SDC be to enable unittests by default, so that disabling them would be explicit? That would sure make using it a lot more principled than dmd, regardless of whether it syntax checks unittests when they are disabled. SDC uses a utility called sdunit to JIT the unittests. Right now, sdunit doesn't handle exceptions so its utility is limited, but it's getting there.
Re: Is it acceptable to not parse unittest blocks when unittests are disabled ?
On Wednesday, 29 March 2017 at 19:32:50 UTC, Vladimir Panteleev wrote: Sorry, is this not already the case? $ dmd test.d $ cat test.d void main() { import std.stdio; writeln("Hello, world!"); } unittest { foo bar {} baz more-syntax!errors)blah } $ dmd test.d $ ./test Hello, world! Alright then, it looks like it is :) I was asking about SDC.
Re: Is it acceptable to not parse unittest blocks when unittests are disabled ?
On Wednesday, 29 March 2017 at 11:22:59 UTC, rikki cattermole wrote: Which is basically what you said. It isn't. version blocks need to be parsed, and must thus be grammatically valid.
Is it acceptable to not parse unittest blocks when unittests are disabled ?
I was wondering: when unittests aren't going to run, it may be desirable to skip parsing them altogether, just lexing and counting braces until the matching closing brace is found. Obviously, this means that no error will be found in unittest blocks. They could then contain pretty much anything that lexes, so it's even more lax than what's allowed inside a static if. Is that an acceptable tradeoff?
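As a sketch, the brace-skipping idea looks like this. This naive version counts braces over the raw text; a real implementation would count over lexed tokens, so that braces inside string literals and comments don't miscount. The skipBlock name is illustrative only:

```d
// Return the index just past the '}' matching the '{' at `start`.
size_t skipBlock(string source, size_t start)
{
    assert(source[start] == '{');
    size_t depth = 0;
    foreach (i; start .. source.length)
    {
        if (source[i] == '{')
            depth++;
        else if (source[i] == '}' && --depth == 0)
            return i + 1;
    }
    assert(0, "unbalanced braces");
}

unittest
{
    // The skipped body need not be valid D, only brace-balanced.
    auto src = "unittest { foo bar { } !syntax errors }";
    assert(skipBlock(src, 9) == src.length); // 9 = index of the first '{'
}
```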
Re: const(Class) is mangled as Class const* const
On Tuesday, 28 March 2017 at 13:18:57 UTC, kinke wrote: You don't seem to get my point, I don't know why it's apparently that hard. It's hard because you assume I did not understand your point and you keep repeating the same thing. I understand your point and showed you why it isn't a mangling problem at all, and gave you directions you may want to dig into to make a proposal that may actually get traction. But you can choose to entrench yourself in a position where nobody understands your genius. You won't get any result, but depending on your personality, it may make you feel good, which is already something.
Re: const(Class) is mangled as Class const* const
On Tuesday, 28 March 2017 at 08:30:43 UTC, kinke wrote: What I don't get is why it's considered important to have a matching C++ mangling for templates across D and C++ - what for? I only care about mangling wrt. If you still think this is a mangling problem, please reread my first response in this thread.
Re: const(Class) is mangled as Class const* const
On Sunday, 26 March 2017 at 22:56:59 UTC, Jerry wrote: On Sunday, 26 March 2017 at 22:29:56 UTC, deadalnix wrote: It is clear that you won't be able to express 100% of C++ in D; that would require importing all the weird parts of C++ into D, and if we are doing so, why use D in the first place? Note that using const Class* in C++ is essentially useless. The class remains mutable and the reference is local to the callee anyway, so it doesn't change anything for the caller. Such a pattern is most likely indicative of a bug on the C++ side, or at least of code that does not do what the author intends. For `const Class*` the Class is not mutable. It is the case of `Class* const` that Class is mutable. You are correct. See my first post for an explanation of this specific case.
Re: const(Class) is mangled as Class const* const
On Sunday, 26 March 2017 at 17:41:57 UTC, Benjamin Thaut wrote: On Sunday, 26 March 2017 at 14:30:00 UTC, deadalnix wrote: It's consistent. D's const is transitive, and D doesn't allow you to specify const on the indirection of a reference type. So there is no problem on the C++ mangling side of things, but, arguably, there is one in D's semantics, and it isn't new. I disagree. When binding C++ code to D I don't care about D's const rules. I care about the C++ const rules. There are thousands of C++ libraries out there that can't be bound to D because they use const Class* instead of const Class* const. So in my eyes there is definitely something wrong with the C++ mangling of D. It is clear that you won't be able to express 100% of C++ in D; that would require importing all the weird parts of C++ into D, and if we are doing so, why use D in the first place? Note that using const Class* in C++ is essentially useless. The class remains mutable and the reference is local to the callee anyway, so it doesn't change anything for the caller. Such a pattern is most likely indicative of a bug on the C++ side, or at least of code that does not do what the author intends.
Re: const(Class) is mangled as Class const* const
On Sunday, 26 March 2017 at 10:43:11 UTC, Benjamin Thaut wrote: As you see from the above example D mangles the getClassConst as a "Class const * const" instead of a "Class const *" ("YAQEBV" vs "YAPEBV"). Is this expected behavior? It's consistent. D's const is transitive, and D doesn't allow you to specify const on the indirection of a reference type. So there is no problem on the C++ mangling side of things, but, arguably, there is one in D's semantics, and it isn't new. Something like differentiating "const(C) i" and "const C i" may be a good idea.
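For what it's worth, the closest D gets today to C++'s "Class const *" (const object, reseatable reference) is std.typecons.Rebindable. A sketch of the distinction being discussed:

```d
import std.typecons : Rebindable;

class C
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    // D's `const C` corresponds to C++'s `Class const * const`:
    // neither the object nor the reference may change.
    const C a = new C(1);
    // a = new C(2); // error: cannot rebind a const reference
    // a.x = 3;      // error: the object is const

    // Rebindable!(const C) approximates C++'s `Class const *`:
    // the object stays const, but the reference can be reseated.
    Rebindable!(const C) r = new C(1);
    r = new C(2); // OK: rebinding the reference
    assert(r.x == 2);
    // r.x = 3;   // still an error: the object is const
}
```

This is a library workaround, not a mangling fix: Rebindable doesn't change how the parameter is mangled for extern(C++) functions, which is the problem Benjamin is raising.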
Re: Multi-commit PRs vs. multiple single-commit PRs
On Friday, 24 March 2017 at 09:27:54 UTC, Vladimir Panteleev wrote: Yep, because of the misuse-worst-case arguments. Simple solutions that guard against such mistakes are welcome. E.g. we could allow squashing if all commits' commit messages except the first one's start with "[SQUASH] " or "fixup! ". Because it is meant to be the default, doing it only when some specific message exists is not going to fly. Using !donotsquash or the like in the commit message is, however, a good way to proceed.
Re: Multi-commit PRs vs. multiple single-commit PRs
On Wednesday, 22 March 2017 at 09:02:24 UTC, Vladimir Panteleev wrote: On Tuesday, 21 March 2017 at 18:07:57 UTC, deadalnix wrote: Large companies such as Google or Facebook A blind appeal to authority is fallacious, but it's still worthwhile to see what others are doing. I think it's important to look at projects that are similar to our own, so I looked at what other programming language implementations do. The good news is, you provided many more authorities to confirm my claim, so is it so blind? - Go is developed using Google's source code infrastructure, and code reviews happen using Gerrit. On Gerrit, every commit is reviewed separately (as I've been advocating). Furthermore, if you push multiple commits to Gerrit, this automatically creates one review page per commit, and marks them as inter-dependent in the commit order. This is an awesome approach, and I wish GitHub made this workflow more practical. Importantly, Gerrit does not squash commits - you are expected to squash fixup commits yourself. So Go uses squash. - Rust uses GitHub, and all merges seem to be done by a bot. We are heading in that direction too. The bot uses regular merges and does not squash commits or rebase them onto master. So that's 1. - Python: I looked at the CPython repository on GitHub. They seem to be using squashing exclusively, and only using branches for version maintenance. However, when I tried to find how they would deal with a contribution that would be desirable to be split into several PRs/commits, I couldn't find one on the first 5 pages of merged PRs. I guess the project is in the stage of mostly minor bugfixes only - we're certainly not there yet. Curiously, submitters are expected to resubmit the same PR themselves against every maintenance branch, e.g.
here is the same PR submitted 4 times, to different branches: - https://github.com/python/cpython/pull/629 - https://github.com/python/cpython/pull/633 - https://github.com/python/cpython/pull/634 - https://github.com/python/cpython/pull/635 So they use squash. - Ruby uses Subversion, a GitHub mirror, and a bot which synchronizes between the two. I don't think there's anything we can learn from here. So they use squash (that's the only thing svn knows how to do). - OCaml uses GitHub PRs and regular git merges. That's 2. - Clang and GHC use Phabricator. I'm not too familiar with it, but I understand it's not too different from Gerrit: it creates one review per commit, and you can push multiple commits at once which will do the right thing. Phabricator can be configured to do many things, pretty much like gerrit, but in the case of clang and LLVM, they use squash. To sum it up, I don't think we're doing anything too weird. Though it would be nice if GitHub's UI were to improve to better handle this workflow, I don't think it makes sense to force submitters to go through the busywork of creating one PR per commit for many cases. 4 out of your 6 examples use squash.
Re: Multi-commit PRs vs. multiple single-commit PRs
On Tuesday, 21 March 2017 at 12:49:22 UTC, Vladimir Panteleev wrote: there is ample proof that it increases the quality of code review, OK, where is the proof? Large companies such as Google or Facebook measure these things. You have presented 0 arguments so far, and dismissed both facts and arguments that were presented to you (one of them as unfair, because fairness and correctness surely are correlated). But cool, you guys are right, don't change anything. This is great. I have other things to do than convince you guys when others are paying me to do so.
Re: The delang is using merge instead of rebase/squash
On Tuesday, 21 March 2017 at 12:45:45 UTC, Vladimir Panteleev wrote: On Tuesday, 21 March 2017 at 11:59:42 UTC, deadalnix wrote: It's not good either. Why would I want to look at a DAG when the series of events is strictly linear to begin with? Not sure what you mean here. The way it's presented is not a DAG. Blue is red, up is down, and the commit graph is not a DAG. "Our source control is completely broken, but that's not a problem because we developed 3rd party tools to work around the brokenness" That's fallacious. If you can't bisect, it's broken. Listen, you know it's broken because you wrote tools to work around the brokenness. If it wasn't broken you wouldn't have written these tools, as there would be no need to do so. So let's not play pretend.
Re: The delang is using merge instead of rebase/squash
On Tuesday, 21 March 2017 at 01:39:39 UTC, Vladimir Panteleev wrote: On Monday, 20 March 2017 at 12:25:22 UTC, deadalnix wrote: Because a picture is clearer than a thousand words: What this tells me is that the default way git-log presents history is not very useful. Consider this presentation of the same information: It's not good either. Why would I want to look at a DAG when the series of events is strictly linear to begin with? In particular, the origin commit of a branch is often not interesting; only the list of commits that are on one branch and aren't on another are. Yes, that's why rebasing makes things clearer. Nobody cares what the master commit was when the work was started. First there is no guarantee that any intermediate state is sound, or even builds at all. That makes it very hard to bisect anything. Bisecting D is not something that can be reasonably done by looking at just one repository's history anyway; this is why we have D-dot-git and Digger. Either way, for pull requests that make non-trivial changes or additions, you will need to descend into the pull request itself. "Our source control is completely broken, but that's not a problem because we developed 3rd party tools to work around the brokenness" While I agree with you that things like bisecting are broken in D, I don't see it as a reason to screw things up even more. I'm not a big fan of "it's already broken, so we can break it even more". This should, and can, be fixed. https://danluu.com/monorepo/ Incidentally, I got a company contacting me last week willing to pay me good money to help them transition toward this kind of workflow. - If a pull request that should not have been squashed has been squashed while merging, the result is: - Commit messages are lost and remain available only on GitHub. - Any logical separation of changes that might have been represented through separate commits is lost and remains available only on GitHub.
- "git blame" becomes less useful because it can only lead to the big blob of the squashed changes. - "git blame" becomes less useful because in some situations it loses its ability to track moved code, which should and often is done in separate commits. - Bisection becomes more difficult because it is no longer easily possible to dive into a PR, as has been occasionally necessary. Then it should have been 2 PRs or more to begin with. Splitting PRs into smaller ones is a good practice in general; there is ample proof that it increases the quality of code review, reduces the conflict surface with other PRs, makes reverting easier and more targeted when something happens, etc... Keeping this PR's commits is just a way to mitigate one of the negative consequences of kitchen sink PRs. It does so by negatively impacting other aspects of source control, and does nothing to mitigate other negative aspects of kitchen sink PRs, such as review fatigue (see a specific example below). In general, I am not opposed to giving reviewers the option to merge pull requests with squashing, assuming we can all agree to not abuse it and only use it for PRs where nothing useful can be gained by preserving the multiple commits as they are; however, their words and actions have shown that this doesn't seem to be an attainable point of agreement. If multiple commits are important for the PR, then the PR should have been several PRs to begin with. Asking people to split is the way to go. Consider this PR: https://github.com/BitcoinUnlimited/BitcoinUnlimited/pull/164 You can see in the comments that I asked the original author to split it up because it was a kitchen sink and very hard to review in its current form. This was ignored. The PR ended up containing a bug that cost about $12,500 to one of the users of the software, plus a fair amount of reputational damage.
The change containing the bug did not need to be bundled with the rest of the PR, and would almost certainly have been noticed if it had been made in a PR of its own. Bundling several changes in the same PR has real world consequences that go beyond screwing up source control.
Re: The delang is using merge instead of rebase/squash
On Monday, 20 March 2017 at 05:10:04 UTC, Martin Nowak wrote: On Wednesday, 15 March 2017 at 13:14:31 UTC, deadalnix wrote: This is making the history very spaghettified. Is it possible to have the bot rebase/squash commits and then push? I don't really agree with the argument. A merge commit is a clear way to integrate changes from a PR/branch. Just rebasing a PR on top of master removes a lot of information from git, only leaving references to github. Can you be more specific, what you mean w/ spaghetti? The fact that review fixes are added to PRs. Also github's commit view misleadingly shows commits from merged PRs/branches, which aren't actually in master. Because a picture is clearer than a thousand words: | | | | | | | | * | | | | | | | 08ae52d8 The Dlang Bot |\ \ \ \ \ \ \ \ Merge pull request #5231 from RazvanN7/Update_generated | |_|_|_|_|/ / / |/| | | | | | | | | | | | | | | | * | | | | | | c6480976 RazvanN7 |/ / / / / / / Updated posix.mak makefile to use ../tools/checkwhitespace.d | | | | | | | * | | | | | | 1181fcf7 The Dlang Bot |\ \ \ \ \ \ \ Merge pull request #5239 from sprinkle131313/ignore-vscode-lib | | | | | | | | | * | | | | | | f1b8d0d4 sprinkle131313 | | | | | | | | Add temp/tmp folder to gitignore. | | | | | | | | | * | | | | | | b67bf9d1 sprinkle131313 | | |_|/ / / / Add vscode folder and lib files to gitignore. | |/| | | | | | | | | | | | * | | | | | | 0b41c996 The Dlang Bot |\ \ \ \ \ \ \ Merge pull request #5242 from wilzbach/fix-lref-links | | | | | | | | | * | | | | | | 090d5164 Sebastian Wilzbach |/ / / / / / / Fix links from $(LREF $(D ...)) -> $(LREF ...)
| | | | | | | * | | | | | | f2a019df The Dlang Bot |\ \ \ \ \ \ \ Merge pull request #5241 from MartinNowak/merge_stable | | | | | | | | | * | | | | | | a6cb85b8 Sebastian Wilzbach | | | | | | | | Add @safe to std.regex unittest | | | | | | | | | * | | | | | | ad70b082 Martin Nowak | |\ \ \ \ \ \ \ Merge remote-tracking branch 'upstream/stable' into merge_stable |/ / / / / / / / | | | | | | | | | * | | | | | | 694dd174 Stefan Koch | |\ \ \ \ \ \ \ Merge pull request #5167 from DmitryOlshansky/fix-freeform-regex | | | | | | | | | | | * | | | | | | 62cf615d Dmitry Olshansky | |/ / / / / / / Fix issue 17212 std.regex doesn't ignore whitespace after character classes | | | | | | | | * | | | | | | | 5b07bd59 Sebastian Wilzbach | |_|_|_|/ / / [BOOKTABLES]: Add BOOKTABLE to stdx.checkedint (#5238) |/| | | | | | | | | | | | | * | | | | | | 75059373 Jack Stouffer |\ \ \ \ \ \ \ Merge pull request #5225 from wilzbach/booktable-std-utf | |_|_|_|_|_|/ |/| | | | | | What the hell is going on in there? In addition, there are a bunch of practical issues with this way of doing things. First, there is no guarantee that any intermediate state is sound, or even builds at all. That makes it very hard to bisect anything. There are also a lot of errors and corrections made during review that are not that interesting to keep in the project history. Knowing that someone did things the A way and then changed them to the B way after review is more noise than useful info in the general case, and in the rare case when someone actually wants to know, the github PR is still out there (on that note, yes, GH PRs kind of suck, but that's another topic).
Re: The delang is using merge instead of rebase/squash
On Wednesday, 15 March 2017 at 13:14:31 UTC, deadalnix wrote: This is making the history very spaghettified. Is it possible to have the bot rebase/squash commits and then push? Arf, I fat-fingered the title, I meant the dlang bot.
The delang is using merge instead of rebase/squash
This is making the history very spaghettified. Is it possible to have the bot rebase/squash commits and then push?
Re: Zcoin implementation bug enabled attacker to create 548,000 Zcoins
On Thursday, 9 March 2017 at 15:42:22 UTC, qznc wrote: I'm curious. Where does it make sense for opEquals to be non-pure? Likewise opCmp, etc. When the object needs some kind of normalization to be comparable and you don't want to do the normalization every single time.
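An illustrative (hypothetical) example of that answer: a fraction type that lazily reduces itself inside opEquals, which is why the operator can't be pure or const:

```d
import std.numeric : gcd;

struct Fraction
{
    int num, den;
    private bool normalized;

    private void normalize()
    {
        immutable g = gcd(num, den);
        num /= g;
        den /= g;
        normalized = true;
    }

    // Deliberately not pure/const: comparing may mutate both operands
    // to cache the normalized form, so construction stays cheap.
    bool opEquals(ref Fraction other)
    {
        if (!normalized) normalize();
        if (!other.normalized) other.normalize();
        return num == other.num && den == other.den;
    }
}

unittest
{
    auto a = Fraction(2, 4);
    auto b = Fraction(1, 2);
    assert(a == b); // both operands normalized on first comparison
}
```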
Re: Clarification on D.
On Wednesday, 8 March 2017 at 20:00:54 UTC, aberba wrote: I don't really have much experience with large code base, so spare me. From a technical and experience point of view (those with experience in large D code-base), how is only D's GC & optional MMM a significant production-use blocker? (To make my problem clear, how is D's current state not going to allow / make it so difficult for developers (who know what they are doing) to write say Photoshop-scale software: excluding those *so* realtime use cases?) Note: I understand that D is never going without critics: perfection is impossible. And, in my line of work, I highly prefer the safety of GC compared to MMM... so I don't see myself worried about GC pauses. I hope my question makes sense. D's GC doesn't have great performance. However, it works great and generally you don't depend as much on the GC as you would in other languages. If you don't have real time constraints, you should be fine.
Re: Spotted on twitter: Rust user enthusiastically blogs about moving to D
On Tuesday, 7 March 2017 at 19:07:29 UTC, Jack Stouffer wrote: I've seen this mentioned serval times now by people coming from Rust. Rust users: Is the PC/politicking really that pervasive in their community? https://www.youtube.com/watch?v=dIageYT0Vgg Lots of good stuff in there, and, if you know how to read between the lines, all you need to know about the PC/politicking as well.
Re: Spotted on twitter: Rust user enthusiastically blogs about moving to D
On Tuesday, 7 March 2017 at 16:18:15 UTC, Wyatt wrote: On Tuesday, 7 March 2017 at 03:04:05 UTC, Joakim wrote: https://z0ltan.wordpress.com/2017/02/21/goodbye-rust-and-hello-d/ I like the bit in the comments where he says this: "It doesn’t have to be idiomatic to work just fine, which is relaxing." People often don't get how nice this is. -Wyatt "Beautiful! The code probably deserves a bit of explanation – in D, functions are (as far as I can tell), first-class objects" Maybe they should be...
Re: Ordering comparisons
On Tuesday, 7 March 2017 at 01:27:56 UTC, Andrei Alexandrescu wrote: The question is what to do to minimize breakage yet "break the bad code". The most backward-compatible solution is to define opCmp automatically to do a field-by-field lexicographical comparison. The most radical solution is disable ordering comparisons unless explicitly implemented by the user. There should be no assumption that structs are comparable, so the latter.
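For illustration, the "explicit only" option would mean ordering has to be opted into by hand, e.g. with a field-by-field lexicographical opCmp like the one the backward-compatible option would generate (a sketch; the type name is made up):

```d
struct SemVer
{
    int major, minor, patch;

    // Explicit lexicographic ordering: compare fields in declaration order,
    // which is what an auto-generated opCmp would have done implicitly.
    int opCmp(const SemVer rhs) const
    {
        if (major != rhs.major) return major < rhs.major ? -1 : 1;
        if (minor != rhs.minor) return minor < rhs.minor ? -1 : 1;
        if (patch != rhs.patch) return patch < rhs.patch ? -1 : 1;
        return 0;
    }
}
```

Under the radical option, a struct without such an opCmp would simply reject `<`, `<=`, `>`, `>=` at compile time instead of silently getting some default ordering.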
Re: Nothing builds on debian anymore.
On Friday, 3 March 2017 at 18:47:53 UTC, H. S. Teoh wrote: Actually, I just tested on a freshly-cloned copy of dmd/druntime/phobos, it seems that building on Debian does work. Digging into the git log, it appears that commit 78cd023 *should* have added -fPIC to the makefiles. So how come it's still not working for you? Are you using an older bootstrap compiler? (If so, the g++wrapper trick I posted should solve that problem -- you can skip the PIC=1 hacks for druntime/phobos as this is apparently already merged into git master.) T I blasted everything away, reinstalled everything, and now it builds. Something must be broken with make clean, then.
Nothing builds on debian anymore.
https://issues.dlang.org/show_bug.cgi?id=17236 Coming to you on ubuntu soon.
Re: Fast hashtable
On Wednesday, 1 March 2017 at 06:44:34 UTC, Cecil Ward wrote:

const uint power2 = 512; // say, some 1 << n anyway
const uint prime = 509;  // some prime just below the power, some prime > power2/2
static assert( power2 - 1 - prime < prime );
x = x & ( power2 - 1 );
x = ( x >= prime ) ? x - prime : x;

which is good news on my x86 with GDC -O3 (only 3 operations, and sub cmovx) - all well provided you make sure that you are getting CMOVx not branches. I could work out the power from the prime using CTFE given a bit of thought. Maybe CTFE could even do the reverse? Have I finally gone mad?

The lower slots will be twice as crowded as the higher ones.
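The crowding claim is easy to check empirically: over one full period of uniform input, the fold `x >= prime ? x - prime : x` sends 509, 510 and 511 onto slots 0, 1 and 2, so those slots receive double the load. A quick sketch:

```d
// Count how many of the inputs 0 .. power2-1 land in each slot under
// the mask-then-subtract scheme quoted above.
uint[] slotLoads(uint power2, uint prime)
{
    auto counts = new uint[prime];
    foreach (uint x; 0 .. power2)
    {
        uint s = x & (power2 - 1);        // cheap modulo by the power of 2
        s = (s >= prime) ? s - prime : s; // fold the tail back onto the low slots
        counts[s]++;
    }
    return counts;
}
```

With power2 = 512 and prime = 509, slots 0-2 each get two inputs per period while every other slot gets one.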
Re: Fast hashtable
On Tuesday, 28 February 2017 at 17:57:14 UTC, Andrei Alexandrescu wrote: This is of possible interest: https://probablydance.com/2017/02/26/i-wrote-the-fastest-hashtable/ -- Andrei

But let’s say you know that your hash function returns numbers that are well distributed and that you’re rarely going to get hash collisions even if you use powers of two.

In which case you don't need powers of 2 either.

ucent h = hash64(key);
ulong slot = (h * slotCount) >> 64;

And you get what you want, with no constraint on the number of slots. Note that on most architectures, this actually lowers to one mulhi operation, which is typically 3 cycles.
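A sketch of the same trick in 32-bit form, since ucent isn't implemented in current compilers and a 64x64 high multiply would have to be emulated; with a 32-bit hash, plain ulong arithmetic suffices:

```d
// slot = (h * slotCount) >> 32 maps a well-distributed 32-bit hash
// onto [0, slotCount) for *any* slotCount: no modulo, no power-of-two
// or prime requirement. The 64-bit version is the same with ucent/mulhi.
uint slotFor(uint h, uint slotCount)
{
    return cast(uint)((cast(ulong) h * slotCount) >> 32);
}
```

The mapping is monotone in h: hash 0 lands in slot 0, hash uint.max lands in slot slotCount - 1, and a uniform hash spreads evenly across all slots.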
Re: Name That Technique!
On Saturday, 4 February 2017 at 23:54:12 UTC, David Gileadi wrote: That's obviously a self important lookup. This. So much this. I'm afraid you are the only one who appreciates my humor :)
Re: Why we need DIP25 and DIP1000
On Monday, 6 February 2017 at 11:02:31 UTC, Walter Bright wrote: https://www.reddit.com/r/programming/comments/5sda9s/what_rust_can_do_that_other_languages_cant_in_six/ https://news.ycombinator.com/item?id=13576976 (On ycombinator, don't click on the link above, click on https://news.ycombinator.com and look for "What Rust Can Do That Other Languages Can't". If you click on the direct link, your votes will not be counted.) I don't think anyone argues that D shouldn't have anything addressing this kind of thing. That's a bad strawman. It's like politicians saying: "You oppose mass surveillance? Surely you want children to die!" DIP25 and DIP1000 are addressing a real problem. They do it in a clumsy way. Specifically, DIP25 does it in a way that actually was tried before with a type qualifier ( inout ) and, long story short: http://forum.dlang.org/post/neqr75$1pbl$1...@digitalmars.com . Now you can choose to wage a war of exhaustion until everybody who remains either agrees or doesn't care to fight anymore. That won't make it a good idea, but it'll look like one.
Re: Name That Technique!
On Friday, 3 February 2017 at 23:33:58 UTC, Walter Bright wrote: I agree, it's pretty dazz! We need to give this technique a memorable name (not an acronym). I thought "Voldemort Types" turned out rather well, whereas CTFE is klunky, UFCS is even worse. The absolute worst is C++ SFINAE. Any ideas? Scherkl-Nielsen Lookup? The perfect bikeshedding moment! Daniel, Dominikus: please consider writing an article about this. That's obviously a self important lookup.
Re: LDC 1.1.0 released
On Wednesday, 1 February 2017 at 03:43:10 UTC, David Nadlinger wrote: Hi all, Version 1.1.0 of LDC, the LLVM-based D compiler, has finally been released: https://github.com/ldc-developers/ldc/releases/tag/v1.1.0 Please head over to the digitalmars.D.ldc forums for more details and discussions: http://forum.dlang.org/post/etynfqwjosdvuuukl...@forum.dlang.org — David What's the state of cent/ucent ?
Re: memcpy() comparison: C, Rust, and D
On Tuesday, 31 January 2017 at 23:42:43 UTC, Walter Bright wrote: On 1/31/2017 11:32 AM, Nordlöw wrote: On Tuesday, 31 January 2017 at 19:26:51 UTC, Walter Bright wrote: This "as if" thing enables the designer of a function API to set the desired relationships even if the implementation is doing some deviated preversion with the data (i.e. a ref counted object). Why is this feature used? Optimizations? Safety? So ref counted containers can be built. As long as they don't use a tree or any other data structure that requires more than one level of indirection internally.
Re: Release D 2.073.0
On Saturday, 28 January 2017 at 21:46:17 UTC, Walter Bright wrote: Same problem, same solution, same fallout. What problem? Ask Andrei, he asked for inout's deprecation. I'm not going to run after you two like you are toddlers. Having to make the same case again and again for literally years is not something I wish to take part in. That case has been made. Get up to date or delegate.
Re: Release D 2.073.0
On Monday, 30 January 2017 at 01:34:52 UTC, ilya-stromberg wrote: Walter created an entire language and a community around it. Can you, please, share with us how your accomplishments give any importance to whatever your disagreement is with him? All that is visible, here is you protest everything, take any opportunity to verbally abuse everyone and make no contribution. Thanks. No, because you are making an argument from authority and asking to be answered with another argument from authority, which brings zero value to anyone.
Re: Release D 2.073.0
On Monday, 30 January 2017 at 01:15:52 UTC, Dicebot wrote: On 01/30/2017 12:38 AM, Walter Bright wrote: ... Please, don't waste your time. You mentioned being curious about what is wrong with that PR - I have explained. Let's just stop here before you write another 20 posts presuming that I only disagree with your development methodology because I don't understand it. I hope it puts some light on why I abandoned the DIP process.
Re: Release D 2.073.0
On Saturday, 28 January 2017 at 03:40:43 UTC, Walter Bright wrote: On 1/27/2017 4:43 PM, deadalnix wrote: I mostly went silent on this because at this point, I have no idea how to reach you and Andrei. This is bad for all the same reasons inout is bad, plus some others of its own, and is going down exactly like inout so far, plus some extra problems of its own. If you've got a case, make it. If you see problems, explain. If you want to help, please do. I did so repeatedly for years and never reached you or Andrei, so I'm not sure how that's going to change anything, but here you go. The root problem you are trying to solve is to be able to specify that what comes out of a function shares a property with what comes in. In the case of inout, this property is the type qualifier; in the case of return/scope, it is the lifetime. Same problem, same solution, same fallout.
Re: Release D 2.073.0
On Friday, 27 January 2017 at 19:12:37 UTC, Walter Bright wrote: Yes, I'm 100% responsible for 'return scope' and pushing it harder than most people probably would like. Maybe I'm alone, but I strongly believe it is critical to D's future. You sound like this guy: http://www.drdobbs.com/cpp/type-qualifiers-and-wild-cards/231902461
Re: Release D 2.073.0
On Friday, 27 January 2017 at 19:09:30 UTC, Walter Bright wrote: On 1/26/2017 5:42 AM, Dicebot wrote: https://issues.dlang.org/show_bug.cgi?id=17123 Can I have my "I told you so" badge please? Yes, you may. But nobody promised there would be no regressions - just that we'll fix them. I'll see about taking care of this one. Thanks for reporting it. Regressions are the symptoms. I mostly went silent on this because at this point, I have no idea how to reach you and Andrei. This is bad for all the same reasons inout is bad, plus some others of its own, and is going down exactly like inout so far, plus some extra problems of its own.
Re: CTFE Status
On Wednesday, 25 January 2017 at 12:36:02 UTC, Stefan Koch wrote: newCTFE is green now on all platforms! <3
Re: Release D 2.073.0
On Sunday, 22 January 2017 at 17:55:03 UTC, Martin Nowak wrote: Glad to announce D 2.073.0. This release comes with a few phobos additions, new -mcpu=avx and -mscrt switch, and several bugfixes. http://dlang.org/download.html http://dlang.org/changelog/2.073.0.html -Martin <3
Re: Interior pointers and fast GC
On Sunday, 22 January 2017 at 05:02:43 UTC, Araq wrote: It's an O(1) that requires a hash table lookup in general because allocations can exceed the chunk size and so you cannot just mask the pointer and look at the chunk header because it might not be a chunk header at all. Know any production GCs that use hash table lookups for pointer assignments? Me neither. Ok ok, maybe Go does, it's the only language with GC that embraces interior pointers as stupid as that is. Huge allocs are always handled specifically by allocators. The usual technique is a radix tree. But it doesn't really matter for the discussion at hand: huge allocs are not numerous. If you have 4G of RAM, by definition, you have fewer than a thousand of them with the above-mentioned scheme. The whole lookup data structure can fit in cache.
Re: Interior pointers and fast GC
On Saturday, 14 January 2017 at 04:37:01 UTC, Chris Wright wrote: Unfortunately, given an interior pointer, you can't identify the base of its heap object in constant time.

1. Split the heap into chunks of size n, a power of 2, say 4M. Align them on 4M.
2. Find the chunk an alloc is part of in O(1) by masking the lower bits (22 bits to mask in our 4M case).
3. Have a table of page descriptors in the chunk header. Look up the page the alloc is in, in O(1).
4a. If the alloc is large (flag in the page descriptor), find the base pointer in O(1).
4b. If the alloc is small, compute the index of the item in the page from the size class in the page descriptor (one addition, one multiply and one shift), in O(1).

Start from a false premise, end up nowhere.
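Step 2 is just pointer masking. A minimal sketch, with the understanding that the ChunkHeader layout and names are invented for illustration:

```d
enum chunkSize = 4 * 1024 * 1024; // 4M chunks, aligned on 4M boundaries

struct ChunkHeader
{
    // In a real allocator, the page descriptor table would live here.
}

// O(1): clearing the low 22 bits of any interior pointer lands on the
// header of the 4M-aligned chunk that contains it.
ChunkHeader* chunkOf(const void* p)
{
    return cast(ChunkHeader*)(cast(size_t) p & ~(cast(size_t) chunkSize - 1));
}
```

Step 4b is equally mechanical: given the alloc's offset inside its page and the page's size class, the item index is `offset / sizeClass`, which a real allocator strength-reduces to a multiply and a shift by a precomputed reciprocal.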
Re: It is still not possible to use D on debian/ubuntu
On Wednesday, 11 January 2017 at 00:33:41 UTC, Martin Nowak wrote: But it is not clear if anyone cares at this stage. are rather frustrating to read. Alright, sentences like this come from extreme frustration at things being almost constantly broken. For instance: https://issues.dlang.org/show_bug.cgi?id=17107 As it turns out, this problem is not quite fixed.
Re: Voting for std.experimental.checkedint
Alright, some feedback. It is rather disappointing that Warn and Abort only write to stderr. Being able to specify the sink would be great; I may want to log the issue or something. There is an option to throw on error. Checked!(Checked!(int, ProperCompare), WithNaN) is rather inelegant. Why not Checked!(int, ProperCompare, WithNaN)? get() should not be inout. It returns a value type; const is fine. Otherwise, the overall design looks pretty solid. Congrats to you guys. Ideally, I'd like to see these things polished, but I'm rather pleased to see where this is going. I'd say yes, modulo the above.
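For context, hook composition in the proposed module is spelled by nesting the Checked type itself; the flat Checked!(int, ProperCompare, WithNaN) spelling is the suggestion above, not the current API. A minimal sketch of the current form (assuming the module under vote, std.experimental.checkedint):

```d
import std.experimental.checkedint : Checked, ProperCompare, WithNaN;

// Today: each additional hook wraps another Checked layer around the payload.
alias Guarded = Checked!(Checked!(int, ProperCompare), WithNaN);

// The flat spelling suggested above would instead read:
//     Checked!(int, ProperCompare, WithNaN)
```

Reaching the raw value then also nests: `g.get` yields the inner Checked!(int, ProperCompare), and `g.get.get` the underlying int.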
Re: Voting for std.experimental.checkedint
Is the doc available somewhere in a readable form ?
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
On Wednesday, 4 January 2017 at 15:56:13 UTC, Timon Gehr wrote: I don't fully agree. Nested imports, the way they have been implemented, pose a new symbol hijacking hazard. I'd argue this was an existing bug in import handling. This is why I like to have very orthogonal definitions. It adds basically no implementation complexity [1]. I consider the benefit real, but minor enough to oppose the DIP based on its wacky syntax. [1] Both static if and static foreach (once it lands) need the same kind of scoping rules. I know about [1]; this is why I did not mention it. I don't really mind implementation complexity; I care about the complexity of the definition, for the following reasons: - Even if the implementation is complex, it can be isolated and/or abstracted away. - Interactions with other parts of the language are more predictable, including with future parts that do not exist yet. - It obviates the explosion of trivia experienced devs need to know to use the language.
Re: [OT] static foreach
On Wednesday, 4 January 2017 at 16:03:29 UTC, Stefan Koch wrote: On Wednesday, 4 January 2017 at 15:56:13 UTC, Timon Gehr wrote: [1] Both static if and static foreach (once it lands) need the same kind of scoping rules. Please do contact me if you are working on static foreach, there are dmd and implementation specific issues to be taken into account. I think the best path forward is to define them properly.
Re: It is still not possible to use D on debian/ubuntu
On Tuesday, 3 January 2017 at 00:16:52 UTC, Martin Nowak wrote: On Monday, 2 January 2017 at 18:18:33 UTC, deadalnix wrote: Plus the fix was actually released yesterday, so it's not like I'm lagging by much. The internal meddling nonsense that's going on is none of any user business. Bug reports are dealt with on Bugzilla, shouldn't be surprising. It's also fairly reasonable to ask you to search the Forum and Bugzilla for your topic and progress on that before posting a rant. I understand it is frustrating to see users complain about something that looks like it was fixed ages ago when you have your nose in it. But please understand that, as far as D goes, if someone like me is not aware of something, it is fair to assume 99% of the populace isn't either. This kind of thing has real effects in the real world. This past November, I saw 2 professional users drop D because of this kind of problem. These users don't complain; they just move on to something else.
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
There are quite a few fallacies in there. On Monday, 2 January 2017 at 21:23:19 UTC, Andrei Alexandrescu wrote: Regarding the ongoing doubts about the advantages of inline imports: they are first and foremost a completion of the nested import feature. As such, most, if not all, arguments against inline imports apply equally to nested imports. Come to think of it, lazy imports vs nested imports: * same improvement in compilation speed? check * no language changes? check * no nasty bugs in the aftermath (such as the infamous https://issues.dlang.org/show_bug.cgi?id=10378)? check * scalable builds? check Yet local imports are overwhelmingly superior to lazy imports because of one thing: they localize dependencies. They introduce modularity and its ancillary perks (fast and scalable builds, easier review and refactoring) not by engineering, but by organically placing dependencies spatially with their dependents. (The scope statement does the same thing with temporal dependencies.) That the DIP does not make it clear that it is a necessary and sufficient extension of local imports is a problem with it. There is a major difference with this DIP. Lazy import is not a language change, but a compiler implementation detail. As such, it doesn't require a DIP or anything specific. Nested imports are a language simplification. Declarations can appear anywhere, import is a declaration, the fact that imports couldn't appear anywhere was an arbitrary limitation, and removing it made the language simpler. As such, the burden of proof was on maintaining the limitation rather than on removing it. This DIP is a language addition. Therefore, contrary to nested or lazy imports, the burden of proof is on it. This DIP should be considered as follows: how much complexity does it add, and how much benefit does it bring, compared to the alternatives? The obvious benefit is localizing dependencies.
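For reference, the nested imports this DIP builds on are already plain D: the import is scoped to, and only pulled in by, the declaration that uses it. A minimal sketch:

```d
string greeting(string name)
{
    // The dependency is declared exactly where it is used: a reader
    // (or a build tool) sees at a glance what this function pulls in,
    // and the symbol is not visible anywhere else in the module.
    import std.format : format;
    return format("hello, %s", name);
}
```

The DIP's inline imports push the same localization one step further, into the declaration's signature itself, which is where the syntax debate comes from.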
I think I'm not too far off track in considering that most of the speedup and build scalability can be achieved with lazy imports, and, while I'm sure there are examples where this is superior, we are talking marginal gains, as lazy and nested imports have squeezed most of the juice already. The cost is the language addition. The first obvious improvement that can be made to this DIP to reduce its cost is to not introduce a new syntax. That way, the addition is limited to allowing the existing syntax in a new place rather than adding a whole new syntax for imports. I like the extra expressivity. I'm not 100% convinced it is worth the extra cost, but the more the cost is reduced, the more rational pursuing this option seems to me. I now am really glad we slipped local imports in before the formalization of DIPs. The feature could have been easily demeaned out of existence. Good that you also notice how broken the DIP process is. One suggestion: let's keep the DIP describing the change to be made. Some examples are fine to illustrate, but it is not the DIP's purpose to be easy to understand or to expand too much into argumentation, or it'll be useless as a spec document. Trying to have the DIP be a spec, a tutorial, an essay on why the feature is needed, and so on just leads to endless rewriting: perpetual motion but no progress.
Re: It is still not possible to use D on debian/ubuntu
On Monday, 2 January 2017 at 13:52:29 UTC, Martin Nowak wrote: On Monday, 2 January 2017 at 13:51:15 UTC, Martin Nowak wrote: On Sunday, 1 January 2017 at 23:55:37 UTC, deadalnix wrote: But it is not clear if anyone cares at this stage... It's fairly embarrassing to read so much uninformed noise. Not to mention that everyone could have fixed this bug. Everyone could have written a patch, see it not being reviewed for a week, ping it on a daily basis, got asked for some changes, do the changes, wait another week, go hunt reviewers on IRC, finally got it merged, ask if that's possible to get it in the next release, get told that the patch was against master rather than stable or whatnot, get asked to go through the whole process all over again because why would anyone use git cherry pick, abandon, wait 6 month to get the fix live. Yes, that's how it goes.