Re: associative arrays with manual memory management
On 22-Aug-2015 10:46, rsw0x wrote: On Saturday, 22 August 2015 at 07:37:38 UTC, Dmitry Olshansky wrote: On 21-Aug-2015 20:20, Ilya Yaroshenko wrote: Hi All! I am going to implement associative arrays with manual memory management based on the amazing std.experimental.allocator by Andrei http://wiki.dlang.org/Review/std.experimental.allocator I will be happy to receive any advice about algorithms, use cases and the API. Best Regards, Ilya FYI https://github.com/D-Programming-Language/druntime/pull/1282 Maybe someone who isn't confused by dmd could answer this for me, but why are the druntime hooks generated by dmd non-templated, relying on dynamic info (RTTI), when all the information is known at compile time? druntime predates D templates. -- Dmitry Olshansky
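As a rough illustration of the direction Ilya describes - containers built on std.experimental.allocator - a bucket node might be allocated and freed along these lines. This is a minimal sketch against the API under review; `Node` and the field layout are illustrative, not taken from the linked pull request:

```d
import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

// Illustrative bucket node for a hash table with manual memory management.
struct Node
{
    int key;
    int value;
    Node* next;
}

void main()
{
    // make constructs the Node in memory obtained from the allocator,
    // bypassing the GC entirely.
    Node* n = Mallocator.instance.make!Node(1, 42, null);
    scope(exit) Mallocator.instance.dispose(n); // manual deallocation

    assert(n.key == 1 && n.value == 42);
}
```

Swapping `Mallocator` for another allocator (freelist, region, GC-backed) is the point of the design: the container code stays the same.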
Re: Object.factory() and exe file size bloat
On Friday, 21 August 2015 at 21:37:34 UTC, Walter Bright wrote: On 8/21/2015 4:44 AM, Marc Schütz schue...@gmx.net wrote: Just change Object.factory to require registration of the class. What mechanism do you propose for that? E.g.:

template factoryConstructors(Args...)
{
    // using void* because I don't know whether it's possible
    // to have a function pointer to a constructor
    void*[string] factoryConstructors;
}

void registerFactoryConstructor(Class, Args...)()
    if (is(Class == class))
{
    factoryConstructors!Args[Class.stringof] = ...;
}

Object factory(Args...)(string className, Args args)
{
    auto constructor = factoryConstructors!Args[className];
    return ...;
}

This even allows calling constructors with arguments. deadalnix's proposal is a nice way to automate this for an entire class hierarchy. Another possible mechanism would be some UDA magic.
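A runnable variant of the registration idea, restricted to zero-argument constructors so the `void*` question doesn't arise. This is a sketch only; `factoryRegistry`, `registerFactoryConstructor` and `factory` here are hypothetical names, not a proposed druntime API:

```d
// Map class names to thunks that construct an instance via `new`.
Object function()[string] factoryRegistry;

void registerFactoryConstructor(Class)() if (is(Class == class))
{
    factoryRegistry[Class.stringof] = function Object() { return new Class(); };
}

Object factory(string className)
{
    auto ctor = className in factoryRegistry;
    return ctor is null ? null : (*ctor)();
}

class MyClass { }

void main()
{
    registerFactoryConstructor!MyClass();
    assert(factory("MyClass") !is null); // registered, so it can be created
    assert(factory("Unknown") is null);  // never registered
}
```

Only registered classes are reachable through `factory`, which is exactly what lets the linker drop everything else.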
Re: Object.factory() and exe file size bloat
On Saturday, 22 August 2015 at 08:16:06 UTC, Marc Schütz wrote: Another possible mechanism would be some UDA magic. E.g.:

class MyClass
{
    @factorizable this() { }
    @factorizable this(string) { }
    this(int) { }
}

mixin registerFactoryConstructors; // for entire module
Re: Wiki article: Starting as a Contributor
I confess to being a bit confused overall - there is a bit of overlap and confusion for someone who wishes to venture into this area. Please bear with me: From wiki.dlang.org - 'Get involved'. So far so good. From here, I can go to 'Building DMD' and 'How to Fork and Build dlang.org', which both seem to build DMD - I'm unsure of the overlap here. There is also an issue with the set of instructions in 'How to Fork and Build dlang.org' (and I don't know what category to file the bug under in Bugzilla!?). Now, where does 'Starting as a Contributor' fit? --ted

$ git clone https://github.com/D-Programming-Language/dlang.org
$ git clone https://github.com/D-Programming-Language/dmd
$ cd dlang.org/
$ make -f posix.mak html
$ make -f posix.mak druntime-release

From https://github.com/D-Programming-Language/dmd
 * branch            HEAD       -> FETCH_HEAD
LATEST=2.068.0 <-- place in the command line to skip network traffic.
[ -d ../druntime-2.068.0 ] || git clone -b v2.068.0 --depth=1 https://github.com/D-Programming-Language/druntime ../druntime-2.068.0/
Cloning into '../druntime-2.068.0'...
remote: Counting objects: 412, done.
remote: Compressing objects: 100% (372/372), done.
remote: Total 412 (delta 42), reused 154 (delta 30), pack-reused 0
Receiving objects: 100% (412/412), 959.29 KiB | 564.00 KiB/s, done.
Resolving deltas: 100% (42/42), done.
Checking connectivity... done.
Note: checking out '0ca25648947bb8f27d08dc618f23ab86fddea212'.
You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b new_branch_name
touch ../druntime-2.068.0/.cloned
make --directory=../dmd-2.068.0/src -f posix.mak -j 4
make[1]: *** ../dmd-2.068.0/src: No such file or directory. Stop.
posix.mak:338: recipe for target '../dmd-2.068.0/src/dmd' failed
make: *** [../dmd-2.068.0/src/dmd] Error 2

Andrei Alexandrescu wrote: I had to set up dmd and friends on a fresh Ubuntu box, so I thought I'd document the step-by-step process: http://wiki.dlang.org/Starting_as_a_Contributor Along the way I also hit a small snag and fixed it at https://github.com/D-Programming-Language/dlang.org/pull/1049 Further improvements are welcome. Thanks, Andrei
Re: post on using go 1.5 and GC latency
On Sat, 2015-08-22 at 07:30 +, rsw0x via Digitalmars-d-learn wrote: […] because Go is not a general purpose language. Not entirely true. Go is a general-purpose language; it is a successor to C as envisioned by Rob Pike, Russ Cox, and others (I am not sure how much input Brian Kernighan has had). However, because of its current traction in web servers and general networking, it is clear that that is where the bulk of the libraries are. Canonical also use it for Qt UI applications. I am not sure of Google's real intent for Go on Android, but there is one. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. They also saw a 100% increase in performance when it was rewritten, and a 20% fall with this latest rewrite. I anticipate great improvement for the 1.6 rewrite. I am surprised they are retaining only a single garbage collector: different usages generally require different garbage collection strategies. Having said that, Java is moving from having four collectors to having one; it is going to be interesting to see if G1 meets the needs of all JVM usages. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it. Until some organization properly funds a suite of garbage collectors for different performance targets, you have what there is. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: On Sat, 2015-08-22 at 07:30 +, rsw0x via Digitalmars-d-learn wrote: [...] Not entirely true. Go is a general purpose language, it is a successor to C as envisioned by Rob Pike, Russ Cox, and others (I am not sure how much input Brian Kernighan has had). However, because of current traction in Web servers and general networking, it is clear that that is where the bulk of the libraries are. Canonical also use it for Qt UI applications. I am not sure of Google real intent for Go on Android, but there is one. [...] They also saw a 100% increase in performance when it was rewritten, and a 20% fall with this latest rewrite. I anticipate great improvement for the 1.6 rewrite. I am surprised they are retaining having only a single garbage collector: different usages generally require different garbage collection strategies. Having said that Java is moving from having four collectors, to having one, it is going to be interesting to see if G1 meets the needs of all JVM usages. [...] Until some organization properly funds a suite of garbage collectors for different performance targets, you have what there is. The performance decrease has been there since 1.4 and there is no way to remove it - write barriers are the cost you pay for concurrent collection. Go was already much slower than other compiled languages; now it probably struggles to keep up with Mono.
Re: Object.factory() and exe file size bloat
On Friday, 21 August 2015 at 20:28:47 UTC, Walter Bright wrote: Btw we use it for a high-level testing framework - it will be rather hard to move that to a compile-time approach It's good to hear of use cases for Object.factory. If you want details: it is a special library for black-box testing applications by spawning them as external processes and interacting with their shell/network API. To minimize boilerplate, test scenarios are derived from a special TestCase class, and the test runner finds all classes that derive from TestCase automatically. Marking them all as export will be inconvenient but is possible - but I'd like to get something useful in return, like a well-defined and working export, for example. until some reflection bugs get fixed. Bugzilla issues? (You knew that was coming!) https://issues.dlang.org/show_bug.cgi?id=11595 is the main offender. Currently the task 'find all symbols with a given trait in the whole program' can't be implemented at CT.
Re: associative arrays with manual memory management
On Saturday, 22 August 2015 at 04:16:30 UTC, Rikki Cattermole wrote: Will it be a language feature fix, or is it an independent container? Independent container. If the latter, I already have a simple dumb one which I can share (not on this machine). I'll be happy to use what you create. Same goes for lists and friends. After AA I will look at RedBlackTree. However, it looks like Andrei has great plans for a new std.experimental.collection. Ilya
Re: Object.factory() and exe file size bloat
On Friday, 21 August 2015 at 05:06:47 UTC, Walter Bright wrote: The solution seems straightforward - only have Object.factory be able to instantiate classes marked as 'export'. This only makes sense anyway. The export requirement seems an arbitrary rule (and export is really broken currently). Let's just use every class that is linked into the binary (i.e. weakly reference them); then it'll naturally work with all linker functionality. This affects not only Object.factory but also ModuleInfo.localClasses. I'd suggest we first add a new internal array of weakly linked classes, turn localClasses into an opApply function or range so it automatically skips null classes (weakly undefined), then change Object.factory to only load weakly linked classes. For an intermediate time we can keep the old array and print a deprecation warning in Object.factory when a class would no longer be available. https://github.com/D-Programming-Language/dmd/pull/4638
Re: Object.factory() and exe file size bloat
On Friday, 21 August 2015 at 13:47:49 UTC, Andrei Alexandrescu wrote: I think these need to be fixed (by replacing indirect-calls-based code with templates) regardless of where we go with TypeInfo. There's a fair amount of druntime code that suffers from being written before templates or in avoidance thereof. -- Andrei For whatever it's worth, below is a list of druntime functions that take TypeInfo as a parameter. My immediate need is to make it possible for an -fno-rtti switch to be added to the compiler with as little compromise as possible. In general, I'm not actually trying to disable D features, even those that I don't actually need. I only need to remove dead code. If -fno-rtti is the best I can hope for, then I'll take it. Perhaps templating some of these functions will make an -fno-rtti switch more viable. I'm judging from the comments in this thread that there may be additional benefits. Would submitting pull requests towards this goal be a distraction from current priorities? Should I wait until after DDMD is out?
Mike

\core\memory.d
 143 extern (C) void gc_addRange( in void* p, size_t sz, const TypeInfo ti = null ) nothrow @nogc;
 364 static void* malloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) pure nothrow
 390 static BlkInfo qalloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) pure nothrow
 417 static void* calloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) pure nothrow
 457 static void* realloc( void* p, size_t sz, uint ba = 0, const TypeInfo ti = null ) pure nothrow
 501 static size_t extend( void* p, size_t mx, size_t sz, const TypeInfo ti = null ) pure nothrow
 754 static void addRange( in void* p, size_t sz, const TypeInfo ti = null ) @nogc nothrow /* FIXME pure */

\object.d
 1761 inout(void)[] _aaValues(inout void* p, in size_t keysize, in size_t valuesize, const TypeInfo tiValArray) pure nothrow;
 1762 inout(void)[] _aaKeys(inout void* p, in size_t keysize, const TypeInfo tiKeyArray) pure nothrow;
 1778 int _aaEqual(in TypeInfo tiRaw, in void* e1, in void* e2);
 1779 hash_t _aaGetHash(in void* aa, in TypeInfo tiRaw) nothrow;
 2794 extern (C) void _d_arrayshrinkfit(const TypeInfo ti, void[] arr) nothrow;
 2795 extern (C) size_t _d_arraysetcapacity(const TypeInfo ti, size_t newcapacity, void *arrptr) pure nothrow;
 3231 private extern (C) void[] _d_newarrayU(const TypeInfo ti, size_t length) pure nothrow;

\core\stdc\stdarg.d
 60 void va_arg()(ref va_list ap, TypeInfo ti, void* parmn)
 131 void va_arg()(ref va_list ap, TypeInfo ti, void* parmn)
 345 void va_arg()(va_list apx, TypeInfo ti, void* parmn)

\gc\proxy.d
 58 void function(void*, size_t, const TypeInfo ti) gc_addRange;
 184 void* gc_malloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) nothrow
 191 BlkInfo gc_qalloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) nothrow
 203 void* gc_calloc( size_t sz, uint ba = 0, const TypeInfo ti = null ) nothrow
 210 void* gc_realloc( void* p, size_t sz, uint ba = 0, const TypeInfo ti = null ) nothrow
 217 size_t gc_extend( void* p, size_t mx, size_t sz, const TypeInfo ti = null ) nothrow
 282 void gc_addRange( void* p, size_t sz, const TypeInfo ti = null ) nothrow

\gcstub\gc.d
 65 extern (C) void function(void*, size_t, const TypeInfo ti) gc_addRange;
 184 extern (C) void* gc_malloc( size_t sz, uint ba = 0, const TypeInfo ti = null )
 197 extern (C) BlkInfo gc_qalloc( size_t sz, uint ba = 0, const TypeInfo ti = null )
 210 extern (C) void* gc_calloc( size_t sz, uint ba = 0, const TypeInfo ti = null )
 223 extern (C) void* gc_realloc( void* p, size_t sz, uint ba = 0, const TypeInfo ti = null )
 236 extern (C) size_t gc_extend( void* p, size_t mx, size_t sz, const TypeInfo ti = null )
 293 extern (C) void gc_addRange( void* p, size_t sz, const TypeInfo ti = null )

\rt\arrayassign.d
 27 extern (C) void[] _d_arrayassign(TypeInfo ti, void[] from, void[] to)
 49 extern (C) void[] _d_arrayassign_l(TypeInfo ti, void[] src, void[] dst, void* ptmp)
 141 extern (C) void[] _d_arrayassign_r(TypeInfo ti, void[] src, void[] dst, void* ptmp)
 167 extern (C) void[] _d_arrayctor(TypeInfo ti, void[] from, void[] to)
 205 extern (C) void* _d_arraysetassign(void* p, void* value, int count, TypeInfo ti)
 236 extern (C) void* _d_arraysetctor(void* p, void* value, int count, TypeInfo ti)

\rt\aaA.d
 438 extern (C) inout(void[]) _aaValues(inout AA aa, in size_t keysz, in size_t valsz, const TypeInfo tiValueArray) pure nothrow
 461 extern (C) inout(void[]) _aaKeys(inout AA aa, in size_t keysz, const TypeInfo tiKeyArray) pure nothrow
 567 extern (C) int _aaEqual(in TypeInfo tiRaw, in AA aa1, in AA aa2)
 597 extern (C) hash_t _aaGetHash(in AA* aa, in TypeInfo tiRaw) nothrow

\rt\adi.d
 24 extern (C) void[] _adSort(void[] a, TypeInfo ti);
 365 extern (C) int _adEq(void[] a1, void[] a2, TypeInfo ti)
 386 extern (C) int _adEq2(void[] a1, void[] a2, TypeInfo ti)
 415 extern (C) int _adCmp(void[] a1, void[] a2,
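Many of the listed functions only read from the TypeInfo they receive, and the caller always knows the type statically. A thin template can supply the TypeInfo at the instantiation site, so uninstantiated variants never pull RTTI into the binary. A hypothetical sketch, not actual druntime code (`gcMallocFor` is an illustrative name):

```d
import core.memory : GC;

// Hypothetical templated front end for GC.malloc: the TypeInfo is
// baked in where the template is instantiated instead of being
// passed dynamically, so an -fno-rtti build could simply refuse to
// instantiate it rather than losing the whole allocation path.
void* gcMallocFor(T)(size_t sz, uint ba = 0)
{
    return GC.malloc(sz, ba, typeid(T));
}

void main()
{
    int* p = cast(int*) gcMallocFor!int(int.sizeof);
    *p = 7;
    assert(*p == 7);
}
```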
Re: dmd codegen improvements
On 22 August 2015 at 09:31, deadalnix via Digitalmars-d digitalmars-d@puremagic.com wrote: On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim Grøstad wrote: On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote: Ah, yes...: http://emptybottle.org/bullshit/index.php It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence. http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go. Also relevant: https://www.jstage.jst.go.jp/article/trol/7/3/7_147/_pdf ;-)
Re: Appender at CTFE?
On Friday, 21 August 2015 at 23:51:16 UTC, cym13 wrote: On Friday, 21 August 2015 at 22:39:29 UTC, Nick Sabalausky wrote: Not at a PC, so can't test right now, but does Appender work at compile time? If not, does ~= still blow up CTFE memory usage like it used to? Any other best practice / trick for building strings in CTFE? I did two experiments: [...] Each makes use of CTFE, but the f (appender) variant blew my RAM (old computer) Barring any error on my part, shouldn't you call '.reserve' to make the appender efficient?
Re: associative arrays with manual memory management
On 21-Aug-2015 20:20, Ilya Yaroshenko wrote: Hi All! I am going to implement associative arrays with manual memory management based on the amazing std.experimental.allocator by Andrei http://wiki.dlang.org/Review/std.experimental.allocator I will be happy to receive any advice about algorithms, use cases and the API. Best Regards, Ilya FYI https://github.com/D-Programming-Language/druntime/pull/1282 -- Dmitry Olshansky
Re: associative arrays with manual memory management
On Saturday, 22 August 2015 at 07:46:22 UTC, rsw0x wrote: On Saturday, 22 August 2015 at 07:37:38 UTC, Dmitry Olshansky wrote: On 21-Aug-2015 20:20, Ilya Yaroshenko wrote: Hi All! I am going to implement associative arrays with manual memory management based on the amazing std.experimental.allocator by Andrei http://wiki.dlang.org/Review/std.experimental.allocator I will be happy to receive any advice about algorithms, use cases and the API. Best Regards, Ilya FYI https://github.com/D-Programming-Language/druntime/pull/1282 Maybe someone who isn't confused by dmd could answer this for me, but why are the druntime hooks generated by dmd non-templated, relying on dynamic info (RTTI), when all the information is known at compile time? I meant that with regard to the AA implementation linked, if it did not seem obvious, by the way. But my question applies to most of the druntime hooks. Sorry for the double post - I wanted to clarify.
Re: associative arrays with manual memory management
On Saturday, 22 August 2015 at 07:37:38 UTC, Dmitry Olshansky wrote: On 21-Aug-2015 20:20, Ilya Yaroshenko wrote: Hi All! I am going to implement associative arrays with manual memory management based on the amazing std.experimental.allocator by Andrei http://wiki.dlang.org/Review/std.experimental.allocator I will be happy to receive any advice about algorithms, use cases and the API. Best Regards, Ilya FYI https://github.com/D-Programming-Language/druntime/pull/1282 Maybe someone who isn't confused by dmd could answer this for me, but why are the druntime hooks generated by dmd non-templated, relying on dynamic info (RTTI), when all the information is known at compile time?
Re: std.data.json formal review
On 21.08.2015 at 18:54, Andrei Alexandrescu wrote: On 8/19/15 4:55 AM, Sönke Ludwig wrote: On 19.08.2015 at 03:58, Andrei Alexandrescu wrote: On 8/18/15 1:24 PM, Jacob Carlborg wrote: On 2015-08-18 17:18, Andrei Alexandrescu wrote: Me neither if internal. I do see a problem if it's public. -- Andrei If it's public and those 20 lines are useful on their own, I don't see a problem with that either. In this case at least they aren't. There is no need to import the JSON exception and the JSON location without importing anything else JSON-related. -- Andrei The only other module where it would fit would be lexer.d, but that means that importing JSONValue also has to import the parser and lexer modules, which is usually only needed in a few places. I'm sure there are a number of better options to package things nicely. -- Andrei I'm all ears ;)
Re: dmd codegen improvements
On Saturday, 22 August 2015 at 07:31:45 UTC, deadalnix wrote: On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim Grøstad wrote: On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote: Ah, yes...: http://emptybottle.org/bullshit/index.php It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence. http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go. Not relevant.
Re: Object.factory() and exe file size bloat
On Friday, 21 August 2015 at 13:47:49 UTC, Andrei Alexandrescu wrote: Thanks for this list. I think these need to be fixed (by replacing indirect-calls-based code with templates) regardless of where we go with TypeInfo. There's a fair amount of druntime code that suffers from being written before templates or in avoidance thereof. -- Andrei Yes, it's a major pain and a source for many incorrect attributes.
Re: Object.factory() and exe file size bloat
On Fri, 21 Aug 2015 11:46:21 +, Kagamin s...@here.lot wrote: On Friday, 21 August 2015 at 11:03:09 UTC, Mike wrote: * postblit - https://github.com/D-Programming-GDC/GDC/pull/100/files?diff=unified#diff-1f51c84492753de4c1863d02e24318bbR918 * destructor - https://github.com/D-Programming-GDC/GDC/pull/100/files?diff=unified#diff-1f51c84492753de4c1863d02e24318bbR1039 Looks like these are generated for a fixed-size array of structs in a struct. * slicing - https://github.com/D-Programming-GDC/GDC/pull/100/files?diff=unified#diff-5960d486a42197785b9eee4ba95c6b95R11857 Can't even understand what this is. An array op? But array ops are handled just above. If you do 'array[] = n' the compiler calls one of _d_arraysetctor, _d_arraysetassign, _d_arrayassign, _d_arrayctor or _d_arraycopy. http://wiki.dlang.org/Runtime_Hooks The calls basically copy n to the array and call the postblit for every value in array[]. They also call 'TypeInfo.destroy' (the destructor) on old values before overwriting. arraycopy doesn't use TypeInfo. The rest could easily be rewritten to be templated* or completely compiler generated. * I'm not sure if we support manually accessing the postblit of a type. OTOH if these functions were templated the compiler would likely emit the correct postblit/destroy calls automatically.
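For illustration, a templated replacement for something like `_d_arraysetassign` could let the compiler handle the postblit and destructor calls itself instead of going through TypeInfo. A sketch with a hypothetical name (`arraySetAssign` is illustrative, not a druntime symbol):

```d
// Hypothetical templated version of the 'array[] = value' hook:
// with T known statically, plain element assignment already runs
// T's postblit on the copy and destroys the overwritten value,
// so no TypeInfo.destroy indirection is needed.
void arraySetAssign(T)(T[] dst, T value)
{
    foreach (ref e; dst)
        e = value;
}

void main()
{
    auto arr = new int[](4);
    arraySetAssign(arr, 9);
    assert(arr == [9, 9, 9, 9]);
}
```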
[Issue 14948] New: [Reg 2.068.0] AA key requirement was broken w/o notice and w/ horrible error message
https://issues.dlang.org/show_bug.cgi?id=14948

Issue ID: 14948
Summary: [Reg 2.068.0] AA key requirement was broken w/o notice and w/ horrible error message
Product: D
Version: D2
Hardware: All
OS: All
Status: NEW
Severity: regression
Priority: P1
Component: dmd
Assignee: nob...@puremagic.com
Reporter: c...@dawg.eu

/usr/include/dmd/druntime/import/object.d(1962): Error: AA key type HTTP supports const equality but doesn't support const hashing
/usr/include/dmd/druntime/import/object.d(1968): Error: AA key type HTTP supports const equality but doesn't support const hashing

First of all, this should have been a warning or deprecation, not directly an error. Then it definitely should have been mentioned in the changelog. And finally, the error message doesn't give a clue which part of a 1000-LOC program (https://github.com/braddr/downloads.dlang.org/tree/master/src) caused the error. --
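For anyone hitting this message: the user-side fix is to give the key type a const `toHash` alongside its const `opEquals`. A minimal sketch (the `HTTP` type in the report belongs to the linked project; `Key` here is illustrative, and the hash is a simple dependency-free one):

```d
struct Key
{
    string name;

    // Both of these must be const for the type to satisfy
    // the 2.068 AA key requirement.
    bool opEquals(const Key rhs) const
    {
        return name == rhs.name;
    }

    size_t toHash() const nothrow @safe
    {
        // simple djb2-style hash so the example stays self-contained
        size_t h = 5381;
        foreach (c; name)
            h = h * 33 + c;
        return h;
    }
}

void main()
{
    int[Key] counts;
    counts[Key("a")] = 1;
    assert(Key("a") in counts);
}
```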
Re: Object.factory() and exe file size bloat
On Fri, 21 Aug 2015 15:16:01 +0200, Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com wrote: Other than that, the semantics of pragma(inline, true) should guarantee that the function is never *written* to the object file. This really should be documented then. If we build a shared library with pragma(inline) functions, not emitting the function prevents taking the address of that function in all client code. As this is a breaking change to 'normal' inline semantics, it needs to be documented. https://github.com/D-Programming-Language/dlang.org/pull/1073
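A small example of the semantics under discussion. The behavior is hedged: this reflects the proposal in the thread, not necessarily what every compiler does today:

```d
// With pragma(inline, true) the body is expanded at each call site;
// per this thread, it may never be written to the object file, which
// is what breaks address-taking from client code of a shared library.
pragma(inline, true)
int twice(int x)
{
    return x + x;
}

void main()
{
    assert(twice(21) == 42); // call is inlined here
    auto fp = &twice;        // within one module this still works...
    assert(fp(3) == 6);      // ...but across a DSO boundary it may fail to link
}
```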
Re: associative arrays with manual memory management
On 8/22/2015 9:44 PM, Ilya Yaroshenko wrote: On Saturday, 22 August 2015 at 04:16:30 UTC, Rikki Cattermole wrote: Will it be a language feature fix, or is it an independent container? Independent container. If the latter, I already have a simple dumb one which I can share (not on this machine). I'll be happy to use what you create. Same goes for lists and friends. After AA I will look at RedBlackTree. However, it looks like Andrei has great plans for a new std.experimental.collection. Yeah he does. I'm quite excited by it. Although I think right now he is focusing more on a linear algebra/matrix family of libraries, based on his response when I said I was trying to get relicensing rights to gl3n.
Re: Object.factory() and exe file size bloat
On 21 August 2015 at 13:35, Steven Schveighoffer via Digitalmars-d digitalmars-d@puremagic.com wrote: On 8/21/15 7:22 AM, Iain Buclaw via Digitalmars-d wrote: Where removing RTTI disables D features in a compromising way, I'd start by questioning the why. E.g.: Why do array literals need RTTI? Looking at the _d_arrayliteralTX implementation, it only does the following with the given TypeInfo:

- Get the array element size (this is known at compile time)
- Get the array element type flags (calculated during the codegen stage, but otherwise known at compile time)
- Test if the TypeInfo is derived from TypeInfo_Shared (can be done at - you guessed it - compile time by peeking through the baseClass linked list for a given TypeInfo type we are passing).

So we have this function that accepts a TypeInfo, but doesn't really *need* to at all:

void* _d_arrayliteralTX(size_t length, size_t sizeelem, uint flags, bool isshared);

Just putting it out there. I strongly suggest we *don't* go this route. This means that any changes to what is required for the runtime to properly construct an array require a compiler change. A MUCH better solution:

T[] _d_arrayliteral(T)(size_t length)

Also, isn't the typeinfo now stored by the GC so it can call the dtor? Perhaps that is done in the filling of the array literal, but I would be surprised, as this is a GC feature. I only looked at 2.066.1; the runtime implementation did not pass the typeinfo to the GC.
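Steven's templated alternative might look like the following. This is a hypothetical sketch: `makeArrayLiteral` is an illustrative name, not the actual lowering the compiler emits:

```d
// Templated hook: element size, type flags, and shared-ness all come
// from T at instantiation time, so no TypeInfo parameter is needed,
// and runtime changes no longer require compiler changes.
T[] makeArrayLiteral(T)(size_t length)
{
    return new T[](length); // GC-allocated, default-initialized storage
}

void main()
{
    auto a = makeArrayLiteral!int(3);
    a[0] = 1; a[1] = 2; a[2] = 3;
    assert(a == [1, 2, 3]);
}
```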
[Issue 9785] dmd -inline should inline lambda delegates
https://issues.dlang.org/show_bug.cgi?id=9785

Walter Bright bugzi...@digitalmars.com changed:

What        |Removed    |Added
----------------------------------------
Keywords    |           |performance
--
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: On Sat, 2015-08-22 at 07:30 +, rsw0x via Digitalmars-d-learn wrote: […] because Go is not a general purpose language. Not entirely true. Go is a general purpose language, it is a successor to C as envisioned by Rob Pike, Russ Cox, and others (I am not sure how much input Brian Kernighan has had). However, because of current traction in Web servers and general networking, it is clear that that is where the bulk of the libraries are. Canonical also use it for Qt UI applications. I am not sure of Google real intent for Go on Android, but there is one. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. They also saw a 100% increase in performance when it was rewritten, and a 20% fall with this latest rewrite. I anticipate great improvement for the 1.6 rewrite. I am surprised they are retaining having only a single garbage collector: different usages generally require different garbage collection strategies. Having said that Java is moving from having four collectors, to having one, it is going to be interesting to see if G1 meets the needs of all JVM usages. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it. Until some organization properly funds a suite of garbage collectors for different performance targets, you have what there is. I didn't mean to start again the whole GC and Go vs D thing - just that one ought to know the lay of the land as it develops. Out of curiosity, how much funding is required to develop the more straightforward kind of GCs? Or to take what's been done and make it possible for others to use? It needn't be a single organisation, I would think, if there are many that would benefit and one doesn't get bogged down in a mentality of people worrying about possibly spurious free-rider problems.
Since the D Foundation seems under way, it seems worth asking the question first and thinking about goals without worrying for now about what seems realistic.
Re: post on using go 1.5 and GC latency
On 8/22/2015 10:47 PM, Laeeth Isharc wrote: On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: On Sat, 2015-08-22 at 07:30 +, rsw0x via Digitalmars-d-learn wrote: […] because Go is not a general purpose language. Not entirely true. Go is a general purpose language, it is a successor to C as envisioned by Rob Pike, Russ Cox, and others (I am not sure how much input Brian Kernighan has had). However, because of current traction in Web servers and general networking, it is clear that that is where the bulk of the libraries are. Canonical also use it for Qt UI applications. I am not sure of Google real intent for Go on Android, but there is one. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. They also saw a 100% increase in performance when it was rewritten, and a 20% fall with this latest rewrite. I anticipate great improvement for the 1.6 rewrite. I am surprised they are retaining having only a single garbage collector: different usages generally require different garbage collection strategies. Having said that Java is moving from having four collectors, to having one, it is going to be interesting to see if G1 meets the needs of all JVM usages. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it. Until some organization properly funds a suite of garbage collectors for different performance targets, you have what there is. I didn't mean to start again the whole GC and Go vs D thing. Just that one ought to know the lay of the land as it develops. Out of curiosity, how much funding is required to develop the more straightforward kind of GCs ? Or to take what's been done and make it possible for others to use? 
It needn't be a single organisation I would think if there are many that would benefit and one doesn't get bogged down in a mentality of people worrying about possibly spurious free rider problems. Since the D Foundation seems under way, it seems worth asking the question first and thinking about goals without worrying for now about what seems realistic. I believe the hardest part is finding somebody who is both able and willing to work on it. For example, I'm willing but I don't know how; and there are people who are willing and able to do it, but who have jobs and cannot dedicate the time because of money. Really it comes down to having a budget, and if somebody says 'hey, I'll do x, y and z features', paying them for their time as they do it - even if they only do one small feature which takes a week.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 07:30:23 UTC, rsw0x wrote: On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote: On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn wrote: Yes, Go has sacrificed some compute performance in favour of latency and convenience. They have also released GC improvement plans for 1.6: https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit It is rather obvious that building a good concurrent GC is a time-consuming effort. But one that Google are entirely happy to fully fund. because Go is not a general purpose language. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it. This puts ddmd into context, bearing in mind an automated translation won't, I guess, be much slower in LDC or GDC, and it's already a small difference. Release notes on Go 1.5, via Stack Overflow: Builds in Go 1.5 will be slower by a factor of about two. The automatic translation of the compiler and linker from C to Go resulted in unidiomatic Go code that performs poorly compared to well-written Go. Analysis tools and refactoring helped to improve the code, but much remains to be done. Further profiling and optimization will continue in Go 1.6 and future releases. For more details, see these slides and associated video.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 10:47:55 UTC, Laeeth Isharc wrote: Out of curiosity, how much funding is required to develop the more straightforward kind of GCs? A classical GC like D has is very straightforward. It has been used since the 60s; I even have a paper from 1974 or so describing the implementation used for Simula, which is a precise stop-the-world GC. Trivial to do. Or to take what's been done and make it possible for others to use? Therein is the trouble: a more advanced GC is intrinsically linked to the language semantics and has to be tuned to the hardware. Expect at least 2 years of work for anything approaching state-of-the-art. In the web server space you wait a lot for I/O, so raw performance is not key for Go's success. Stability, memory usage and low latency are more important.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 07:02:40 UTC, Russel Winder wrote: I think Go 2 is a long way off, and even then generics will not be part of the plan. I agree that Go from Google will stay close to the ideals of the creators. I think it would be difficult to get beyond that for social reasons. But I think the mechanics Go provides are generic enough that someone could build a transpiler providing more high-level convenience. I am thinking along the lines of a convenient language that can compile to both Go and Javascript... I'm tempted to have a go at it. ;) Go UK 2015 was held yesterday. It was less a conference and more a Google rah-rah event. It was, though, very clear that Google are looking for new idioms and practices to come from users other than Google, rather than what has happened to date, which is the Go central team dictating everything to everyone else. Go UK sounds interesting. I wonder if they will have one in Oslo? Probably not :-/.
Re: post on using go 1.5 and GC latency
On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn wrote: Yes, Go has sacrificed some compute performance in favour of latency and convenience. They have also released GC improvement plans for 1.6: https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit It is rather obvious that building a good concurrent GC is a time-consuming effort. But one that Google are entirely happy to fully fund.

--
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Re: New to D - playing with Thread and false Sharing
On Fri, 2015-08-21 at 01:22 +, Nicholas Wilson via Digitalmars-d-learn wrote: […] Keep in mind java may be using green threads as opposed to kernel threads. The equivalent in D is a Fiber. I believe Java itself hasn't used green threads in an awfully long time: threads are mapped to kernel threads. As you say, the lightweight threads (aka green threads, …) have recently (!) been reinvented under the name fibers. One of the Java market leaders for this is Quasar, another is GPars. -- Russel.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote: But one that Google are entirely happy to fully fund. Yes, they have made Go fully supported on Google Cloud now, so I think it is safe to say that Google management is backing Go fully. I'm kinda hoping for Go++...
Re: New to D - playing with Thread and false Sharing
On Thu, 2015-08-20 at 20:01 +, tony288 via Digitalmars-d-learn wrote: […] Now what I would like to know, how would I make this code more efficient? Which is basically the aim I'm trying to achieve. Any pointers would be really helpful. Should I use concurrency/parallelism etc.? I have found very, very few codebases (in many different languages, including Java) that were not massively improved by removal of all explicit thread (and fiber) usage. Threads and fibers are, and should be, infrastructure for application-level APIs. If a language doesn't have the abstractions then it is deficient and should gain them quickly or not be used. I observe that the only programs that need to explicitly use threads and fibers are the ones that must explicitly manage heap and stack. -- Russel.
[Issue 14947] New: std.traits: ParameterIdentifierTuple on an 'interface' not working
https://issues.dlang.org/show_bug.cgi?id=14947

Issue ID: 14947
Summary: std.traits: ParameterIdentifierTuple on an 'interface' not working
Product: D
Version: D2
Hardware: x86_64
OS: Linux
Status: NEW
Severity: normal
Priority: P1
Component: phobos
Assignee: nob...@puremagic.com
Reporter: s_dlang_bugzi...@asylum.id.au

For dmd2.068, linux 64bit:

dmd -oftest test.d
(string, uint, bool)
tuple(a, b, c)
(string, uint, bool)
tuple(, , )

The second tuple printout should be the same as the first one. With test.d:

import std.traits;

interface IFoo
{
    ulong foo_me(string a, uint b, bool c);
}

class myFoo : IFoo
{
    ulong foo_me(string a, uint b, bool c) { return 42; }
}

void main()
{
    auto tmp_foo = new myFoo;
    alias ParameterTypeTuple!(tmp_foo.foo_me) tmpfooTypes;
    pragma(msg, tmpfooTypes);
    alias ParameterIdentifierTuple!(tmp_foo.foo_me) tmpfooNames;
    pragma(msg, tmpfooNames);

    foreach (method; __traits(allMembers, IFoo))
    {
        alias MemberFunctionsTuple!(IFoo, method) funcs;
        alias typeof(funcs[0]) func;
        alias ReturnType!func return_type;
        alias ParameterTypeTuple!func ParameterTypes;
        alias ParameterIdentifierTuple!func ParameterNames;
        pragma(msg, ParameterTypes);
        pragma(msg, ParameterNames);
    }
}

--
Re: post on using go 1.5 and GC latency
On Sat, 2015-08-22 at 06:54 +, via Digitalmars-d-learn wrote: On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote: But one that Google are entirely happy to fully fund. Yes, they have made Go fully supported on Google Cloud now, so I think it is safe to say that Google management is backing Go fully. I'm kinda hoping for Go++... I think Go 2 is a long way off, and even then generics will not be part of the plan. Go UK 2015 was held yesterday. It was less a conference and more a Google rah-rah event. It was, though, very clear that Google are looking for new idioms and practices to come from users other than Google, rather than what has happened to date, which is the Go central team dictating everything to everyone else. -- Russel.
Re: dmd codegen improvements
On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote: Ah, yes...: http://emptybottle.org/bullshit/index.php It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence.
[Issue 8812] functionAttributes doesn't return const/immutable/shared/inout attributes
https://issues.dlang.org/show_bug.cgi?id=8812

ted s_dlang_bugzi...@asylum.id.au changed:
    CC: (added) s_dlang_bugzi...@asylum.id.au

--- Comment #1 from ted s_dlang_bugzi...@asylum.id.au ---
dmd2.068, linux, 64bit. Code works as expected.
--
Re: dmd codegen improvements
On Saturday, 22 August 2015 at 07:10:28 UTC, Ola Fosheim Grøstad wrote: On Saturday, 22 August 2015 at 02:42:48 UTC, H. S. Teoh wrote: Ah, yes...: http://emptybottle.org/bullshit/index.php It would be a lot more helpful if you had provided a link to a paper on scalar branch divergence and memory divergence. http://www.cs.virginia.edu/~skadron/Papers/meng_dws_isca10.pdf Here you go.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote: On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn wrote: Yes, Go has sacrificed some compute performance in favour of latency and convenience. They have also released GC improvement plans for 1.6: https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit It is rather obvious that building a good concurrent GC is a time-consuming effort. But one that Google are entirely happy to fully fund. because Go is not a general purpose language. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it.
Re: std.data.json formal review
On 17.08.2015 at 00:03, Walter Bright wrote: On 8/16/2015 5:34 AM, Sönke Ludwig wrote: On 16.08.2015 at 02:50, Walter Bright wrote: if (isInputRange!R && is(Unqual!(ElementEncodingType!R) == char)) I'm not a fan of more names for trivia, the deluge of names has its own costs. Good, I'll use `if (isInputRange!R && (isSomeChar!(ElementEncodingType!R) || isIntegral!(ElementEncodingType!R)))`. It's just used in a number of places and quite a bit more verbose (twice as long), and I guess a large number of algorithms in Phobos accept char ranges, so that may actually warrant a name in this case. Except that there is no reason to support wchar, dchar, int, ubyte, or anything other than char. The idea is not to support something just because you can, but there should be an identifiable, real use case for it first. Has anyone ever seen Json data as ulongs? I haven't either. But you have seen ubyte[] when reading something from a file or from a network stream. But since Andrei now also wants to remove it, so be it. I'll answer some of the other points anyway: The json parser will work fine without doing any validation at all. I've been implementing string handling code in Phobos with the idea of doing validation only if the algorithm requires it, and only for those parts that require it. Yes, and it won't do that if a char range is passed in. If the integral range path gets removed there are basically two possibilities left: perform the validation up-front (slower), or risk UTF exceptions in unrelated parts of the code base. I don't see why we shouldn't take the opportunity for a full and fast validation here. But I'll relay this to Andrei, it was his idea originally. That argument could be used to justify validation in every single algorithm that deals with strings. Not really for all, but indeed there are more where this could apply in theory. However, JSON is used frequently in situations where parsing speed, or performance in general, is often crucial (e.g. web services), which makes it stand out due to practical concerns. Others, such as an XML parser, would apply too, but probably none of the generic string manipulation functions. Why do both? Always return an input range. If the user wants a string, he can pipe the input range to a string generator, such as .array. Convenience, for one. Back to the previous point, that means that every algorithm in Phobos should have two versions, one that returns a range and the other a string? All these variations will result in a combinatorial explosion. This may be a factor of two, but not a combinatorial explosion. We're already up to validate or not, to string or not, i.e. 4 combinations. Validation is part of the lexer and not the generator. There is no combinatorial relation between the two. Validation is also just a template parameter, so there are no two combinations in terms of implementation either. There is just a static if statement somewhere to decide if validate() should be called or not. The other problem, of course, is that returning a string means the algorithm has to decide how to allocate that string. As much as possible, algorithms should not be making allocation decisions. Granted, the fact that format() and to!() support input ranges (I didn't notice that until now) makes the issue less important. But without those, it would basically mean that almost all places that generate JSON strings would have to import std.array and append .array. Nothing particularly bad if viewed in isolation, but it makes the language appear a lot less clean/more verbose if it occurs often. It's also a stepping stone for language newcomers. This has been argued before, and the problem is it applies to EVERY algorithm in Phobos, and winds up with a doubling of the number of functions to deal with it. I do not view this as clean. D is going to be built around ranges as a fundamental way of coding. Users will need to learn something about them. Appending .array is not a big hill to climb.
It isn't if you get taught about it. But it surely is if you don't know about it yet and try to get something working based only on the JSON API (a language newcomer that wants to work with JSON). It's also still an additional thing to remember, type and read, making it an additional piece of cognitive load, even for developers that are fluent with this. Have many such pieces and they add up to the point where productivity is brought to its knees. I already personally find it quite annoying constantly having to import std.range, std.array and std.algorithm to just use some small piece of functionality in std.algorithm. It's also often not clear in which of the three modules/packages a certain function is. We need to find a better balance here if D is to keep its appeal as a language where you stay in the zone (a.k.a. flow), which always has been a big thing for me.
Re: Object.factory() and exe file size bloat
On 22 August 2015 at 11:33, Johannes Pfau via Digitalmars-d digitalmars-d@puremagic.com wrote: Am Fri, 21 Aug 2015 15:16:01 +0200 schrieb Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com: Other than that, the semantics of pragma(inline, true) should guarantee that the function is never *written* to object file. This really should be documented then. If we build a shared library with pragma(inline) functions not emitting the function prevents taking the address of that function in all client code. As this is a breaking change to 'normal' inline semantics it needs to be documented. https://github.com/D-Programming-Language/dlang.org/pull/1073 I wouldn't go as far as preventing these functions from having their address taken. In that instance, of *course* it needs to be written to object file. But it should be put in COMDAT as each external module that takes its address will have a copy of it. Regards Iain.
Re: dsource.org moved
On Friday, 21 August 2015 at 17:05:42 UTC, tired_eyes wrote: So, four months later, can we have some kind of warning banner on dsource.org? Done.
Re: std.data.json formal review
On 21.08.2015 at 19:30, Andrei Alexandrescu wrote: On 8/18/15 12:54 PM, Sönke Ludwig wrote: On 18.08.2015 at 00:21, Andrei Alexandrescu wrote: * On the face of it, dedicating 6 modules to such a small specification as JSON seems excessive. I'm thinking one module here. (As a simple point: who would ever want to import only foundation, which in turn has one exception type and one location type in it?) I think it shouldn't be up for debate that we must aim for simple and clean APIs. That would mean a single module that is 5k lines long. Spreading out certain things, such as JSONValue, into their own modules also makes sense, to avoid unnecessarily large imports where other parts of the functionality aren't needed. Maybe we could move some private things to std.internal or similar and merge some of the modules? That would help. My point is it's good design to make the response proportional to the problem. 5K lines is not a lot, but reducing those 5K in the first place would be a noble pursuit. And btw saving parsing time is so C++ :o). Most lines are needed for tests and documentation. Surely dropping some functionality would make the module smaller, too. But there is not a lot to take away without making severe compromises in terms of actual functionality or usability. But I also think that grouping symbols by topic is a good thing and makes figuring out the API easier. There is also always package.d if you really want to import everything. Figuring out the API easily is a good goal. The best way to achieve that is making the API no larger than necessary. So, what's your suggestion, remove all read*/skip* functions for example? Make them member functions of JSONParserRange instead of UFCS functions? We could of course also just use the pseudo modules that std.algorithm had for example, where we'd create a table in the documentation for each category of functions. Another thing I'd like to add is an output range that takes parser nodes and writes to a string output range.
This would be the kind of interface that would be most useful for a serialization framework. Couldn't that be achieved trivially by e.g. using map!(t => t.toString) or similar? This is the nice thing about rangifying everything - suddenly you have a host of tools at your disposal. No, the idea is to have an output range like so:

Appender!string dst;
auto r = JSONNodeOutputRange(dst);
r.put(beginArray);
r.put(1);
r.put(2);
r.put(endArray);

This would provide a forward interface for code that has to directly iterate over its input, which is the case for a serializer - it can't provide an input range interface in a sane way. The alternative would be to either let the serializer re-implement all of JSON, or to just provide some primitives (writeJSON() that takes bool, number or string) and to let the serializer implement the rest of JSON (arrays/objects), which includes certain options, such as pretty-printing. - Also, at token level strings should be stored with escapes unresolved. If the user wants a string with the escapes resolved, a lazy range does it. To make things efficient, it currently stores escaped strings if slices of the input are used, but stores unescaped strings if allocations are necessary anyway. That seems a good balance, and probably could be applied to numbers as well. With the difference that numbers stored as numbers never need to allocate, so for non-slicable inputs the compromise is not the same. What about just offering basically three (CT selectable) modes:

- Always parse as double (parse lazily if slicing can be used) (default)
- Parse double or long (again, lazily if slicing can be used)
- Always store the string representation

The question that remains is how to handle this in JSONValue - support just double there? Or something like JSONNumber that abstracts away the differences, but makes writing generic code against JSONValue difficult? Or make it also parameterized in what it can store?
- Validating UTF is tricky; I've seen some discussion in this thread about it. On the face of it JSON only accepts valid UTF characters. As such, a modularity-based argument is to pipe UTF validation before tokenization. (We need a lazy UTF validator and sanitizer stat!) An efficiency-based argument is to do validation during tokenization. I'm inclining in favor of modularization, which allows us to focus on one thing at a time and do it well, instead of duplicating validation everywhere. Note that it's easy to write routines that do JSON tokenization and leave UTF validation for later, so there's a lot of flexibility in composing validation with JSONization. It's unfortunate to see this change of mind in the face of the work that already went into the implementation. I also still think that this is a good optimization opportunity that doesn't really affect the implementation complexity. Validation isn't duplicated, but reused from std.utf. Well if the
[Issue 4541] Intrinsic functions do not have pointers
https://issues.dlang.org/show_bug.cgi?id=4541

hst...@quickfur.ath.cx changed:
    CC: (added) hst...@quickfur.ath.cx

--- Comment #5 from hst...@quickfur.ath.cx ---
Perhaps a possible workaround is to have the compiler emit a wrapper function when the address of an intrinsic is asked for, and return the pointer to the wrapper?
--
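The proposed workaround can also be approximated today at user level: an ordinary function that forwards to the intrinsic has a real address that can be taken. A minimal sketch (`mySqrt` is an invented name, not part of any proposed compiler change):

```d
import std.math : sqrt;

// User-level sketch of the wrapper idea: taking &sqrt directly may fail
// for intrinsics (issue 4541), but a plain forwarding function works.
real mySqrt(real x) { return sqrt(x); }

void main()
{
    real function(real) p = &mySqrt; // the wrapper has a normal address
    assert(p(16.0) == 4.0);
}
```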
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 10:47:55 UTC, Laeeth Isharc wrote: On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: [...] I didn't mean to start again the whole GC and Go vs D thing. Just that one ought to know the lay of the land as it develops. Out of curiosity, how much funding is required to develop the more straightforward kind of GCs? Or to take what's been done and make it possible for others to use? It needn't be a single organisation, I would think, if there are many that would benefit and one doesn't get bogged down in a mentality of people worrying about possibly spurious free-rider problems. Since the D Foundation seems under way, it seems worth asking the question first and thinking about goals without worrying for now about what seems realistic. The problem with D's GC is that there's no scaffolding there for it, so you can't really improve it. At best you could make the collector parallel. If I had the runtime hooks and language guarantees I needed, I'd begin work on a per-thread GC immediately.
Re: associative arrays with manual memory management
On Saturday, 22 August 2015 at 07:37:38 UTC, Dmitry Olshansky wrote: FYI https://github.com/D-Programming-Language/druntime/pull/1282 Thanks!
Cross module conflict bug with private and public members?
import std.stdio;
import std.range;
import std.concurrency;

void main(string[] args)
{
    auto generator = new Generator!(int)({
        foreach (value; 1..10)
        {
            yield(value);
        }
    });

    foreach (value; generator)
    {
        writeln(value);
    }
}

Compiled with `rdmd test.d`:

test.d(41): Error: std.concurrency.Generator(T) at /usr/include/dmd/phobos/std/concurrency.d(1569) conflicts with std.range.Generator(Fun...) at /usr/include/dmd/phobos/std/range/package.d(2806)

`std.concurrency.Generator` is public and `std.range.Generator` is private so surely these shouldn't conflict?
Re: dsource.org moved
On Saturday, 22 August 2015 at 13:03:34 UTC, Vladimir Panteleev wrote: On Friday, 21 August 2015 at 17:05:42 UTC, tired_eyes wrote: So, four months later, can we have some kind of warning banner on dsource.org? Done. Excellent, thank you! It was a source of confusion.
Re: std.data.json formal review
On 21.08.2015 at 18:56, Andrei Alexandrescu wrote: On 8/18/15 1:21 PM, Sönke Ludwig wrote: On 18.08.2015 at 00:37, Andrei Alexandrescu wrote: On 8/17/15 2:56 PM, Sönke Ludwig wrote: - The enum is useful to be able to identify the types outside of the D code itself. For example when serializing the data to disk, or when communicating with C code. OK. - It enables the use of pattern matching (final switch), which is often very convenient, faster, and safer than an if-else cascade. Sounds tenuous. It's more convenient/readable in cases where a complex type is used (typeID == Type.object vs. has!(JSONValue[string])). This is especially true if the type is ever changed (or parametric) and all has!()/get!() code needs to be adjusted accordingly. It's faster, even if there is no indirect call involved in the pointer case, because the compiler can emit efficient jump tables instead of generating a series of conditional jumps (if-else cascade). It's safer because of the possibility to use final switch in addition to a normal switch. I wouldn't call that tenuous. Well I guess I would, but no matter. It's something where reasonable people may disagree. It depends on the perspective/use case, so it's surely not unreasonable to disagree here. But I'm especially not happy with the final switch argument getting dismissed so easily. By the same logic, we could also question the existence of final switch, or even switch, as a feature in the first place. Performance benefits are certainly nice, too, but that's really just an implementation detail. The important trait is that the types get a name and that they form an enumerable set. This is quite similar to comparing a struct with named members to an anonymous Tuple!(T...).
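The exhaustiveness argument for final switch is easy to demonstrate with a small sketch (the Kind enum and describe function here are invented for illustration, not part of the proposed API):

```d
// A type tag for a JSON-like value. Adding a new member to this enum
// turns every final switch over it into a compile error until the new
// case is handled - unlike an if-else cascade, which fails silently.
enum Kind { null_, boolean, number, text, array, object }

string describe(Kind k)
{
    final switch (k) // exhaustive: no default branch allowed or needed
    {
        case Kind.null_:   return "null";
        case Kind.boolean: return "bool";
        case Kind.number:  return "number";
        case Kind.text:    return "string";
        case Kind.array:   return "array";
        case Kind.object:  return "object";
    }
}

unittest
{
    assert(describe(Kind.array) == "array");
}
```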
Re: Cross module conflict bug with private and public members?
On Sat, Aug 22, 2015 at 01:47:30PM +, Gary Willoughby via Digitalmars-d wrote: [...] test.d(41): Error: std.concurrency.Generator(T) at /usr/include/dmd/phobos/std/concurrency.d(1569) conflicts with std.range.Generator(Fun...) at /usr/include/dmd/phobos/std/rang e/package.d(2806) `std.concurrency.Generator` is public and `std.range.Generator` is private so surely these shouldn't conflict? https://issues.dlang.org/show_bug.cgi?id=1238 T -- Nothing in the world is more distasteful to a man than to take the path that leads to himself. -- Herman Hesse
Re: Cross module conflict bug with private and public members?
On Saturday, 22 August 2015 at 13:47:31 UTC, Gary Willoughby wrote: `std.concurrency.Generator` is public and `std.range.Generator` is private so surely these shouldn't conflict? You would think so, but this is the way it has been since the beginning of D so I wouldn't expect the implementation to change any time soon. Just work around it with static+renamed imports or full name disambiguation. https://issues.dlang.org/show_bug.cgi?id=1238
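The renamed-import workaround mentioned above can look like this (FiberGenerator is just a locally chosen alias, not a standard name):

```d
import std.stdio;
import std.range; // no longer clashes: Generator isn't imported unqualified below
import std.concurrency : FiberGenerator = Generator, yield;

void main()
{
    // Same program as in the original post, using the local alias.
    auto generator = new FiberGenerator!int({
        foreach (value; 1 .. 10)
            yield(value);
    });

    foreach (value; generator)
        writeln(value);
}
```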
(De)Serializing interfaces
I think interfaces are very powerful and I heavily use them. The only problem I have with them is that serializing/deserializing them to XML or JSON doesn't seem to work. So far I got to try Orange and painlessjson. Using Orange all I got was a lot of compiler errors. Painlessjson did compile normally but just ignores all interface class members. This is the code I tried:

interface MyInterface
{
    int GetA();
}

class Foo : MyInterface
{
    int a;
    int GetA() { return a; }
}

// maybe add a class Bar later which implements the same interface

class Foobar
{
    MyInterface myBar = new Foo();
}

void main()
{
    // serialize it
}
Re: Object.factory() and exe file size bloat
Am Sat, 22 Aug 2015 14:47:34 +0200 schrieb Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com: On 22 August 2015 at 11:33, Johannes Pfau via Digitalmars-d digitalmars-d@puremagic.com wrote: Am Fri, 21 Aug 2015 15:16:01 +0200 schrieb Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com: Other than that, the semantics of pragma(inline, true) should guarantee that the function is never *written* to object file. This really should be documented then. If we build a shared library with pragma(inline) functions not emitting the function prevents taking the address of that function in all client code. As this is a breaking change to 'normal' inline semantics it needs to be documented. https://github.com/D-Programming-Language/dlang.org/pull/1073 I wouldn't go as far as preventing these functions from having their address taken. In that instance, of *course* it needs to be written to object file. But it should be put in COMDAT as each external module that takes its address will have a copy of it. Regards Iain. That's indeed a better solution.
Re: Wiki article: Starting as a Contributor
On Saturday 22 August 2015 11:05, ted wrote: From here, I can go to 'Building DMD' and 'How to Fork and Build dlang.org', which both seem to build DMD - I'm unsure of the overlap aspects here. The 'dlang.org' project is the website. It deals with building dmd only insofar as you need a compiler to build the website. If your goal is to build dmd, don't bother with anything 'dlang.org'. There is also an issue with the set of instructions in 'How to Fork and Build dlang.org' (and I don't know what category to file the bug under in bugzilla!?). Component: dlang.org

make --directory=../dmd-2.068.0/src -f posix.mak -j 4
make[1]: *** ../dmd-2.068.0/src: No such file or directory. Stop.
posix.mak:338: recipe for target '../dmd-2.068.0/src/dmd' failed
make: *** [../dmd-2.068.0/src/dmd] Error 2

This is issue 14915, a regression in 2.068: https://issues.dlang.org/show_bug.cgi?id=14915 As a workaround, you can manually revert the changes done in PR #1050. Though, repeating myself, if your goal is to build dmd, don't bother with dlang.org.
[Issue 14915] [REG2.068.0] can't build phobos-release
https://issues.dlang.org/show_bug.cgi?id=14915

ag0ae...@gmail.com changed:
    Keywords: (added) pull

--- Comment #1 from ag0ae...@gmail.com ---
https://github.com/D-Programming-Language/dlang.org/pull/1074
--
Re: Object.factory() and exe file size bloat
On 8/22/2015 2:42 AM, Dicebot wrote: On Friday, 21 August 2015 at 20:28:47 UTC, Walter Bright wrote: Btw we use it for a high-level testing framework - it will be rather hard to move that to a compile-time approach It's good to hear of use cases for Object.factory. If you want details, it is a special library for black-box testing applications by spawning them as external processes and interacting with their shell/network API. To minimize boilerplate, test scenarios are derived from a special TestCase class and the test runner finds all classes that derive from TestCase automatically. Marking them all as export will be inconvenient but is possible - but I'd like to get something useful in return, like well-defined and working export for example. I'm not sure how export would help on Linux. until some reflection bugs get fixed. Bugzilla issues? (You knew that was coming!) https://issues.dlang.org/show_bug.cgi?id=11595 is the main offender. Currently the task 'find all symbols with a given trait in the whole program' can't be implemented at CT. Thanks!
[Issue 7625] inlining only works with explicit else branch
https://issues.dlang.org/show_bug.cgi?id=7625

Walter Bright bugzi...@digitalmars.com changed:
    CC: (added) bugzi...@digitalmars.com

--- Comment #4 from Walter Bright bugzi...@digitalmars.com ---
https://github.com/D-Programming-Language/dmd/pull/4919
--
Re: string - null/bool implicit conversion
On 08/22/2015 12:04 AM, Jonathan M Davis wrote: On Friday, 21 August 2015 at 21:13:35 UTC, David Nadlinger wrote: On Friday, 21 August 2015 at 20:01:21 UTC, Vladimir Panteleev wrote: This warning almost doesn't break any code! It indeed doesn't break almost any code. Yours is quite the outlier. In general, it gets a bit interesting when a feature is useful if used correctly and gets used correctly by experts but generally screws up most programmers. For instance, the comma operator would be a case of that. If used correctly, it can be really useful, but it's so easy to misuse that it's generally considered bad practice to use it. And yet, I'm sure that there are folks out there who love it and use it correctly on a regular basis. That's not the norm though. - Jonathan M Davis For the comma operator, I think it's pretty clear that the usage of ',' to separate components of a tuple would be more useful. (With L-T-R evaluation, replacing usages of the comma operator is easy, e.g. 'a,b,c' becomes '(a,b,c)[$-1]', not to speak about the delegate option.)
Re: dmd codegen improvements
On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote: On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote: I suggest that we revamp the compiler download page again. The lead should be a select your compiler which lists the advantages and disadvantages of each of DMD, LDC and GDC. https://github.com/D-Programming-Language/dlang.org/pull/1067 Now live on http://dlang.org/download.html Better artwork welcome :)
[Issue 4440] [patch] Inlining delegate literals
https://issues.dlang.org/show_bug.cgi?id=4440

Walter Bright bugzi...@digitalmars.com changed:
    CC: (added) bugzi...@digitalmars.com

--- Comment #11 from Walter Bright bugzi...@digitalmars.com ---
You're right, Brad. Fixing this involves doing constant propagation in the front end. It's possible to do it, as constant propagation is the easiest of the data flow optimizations to do.
--
[Issue 8846] Specs for Inline Assembler don't include cmpxchg16b
https://issues.dlang.org/show_bug.cgi?id=8846 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added CC||bugzi...@digitalmars.com --- Comment #3 from Walter Bright bugzi...@digitalmars.com --- https://github.com/D-Programming-Language/dlang.org/pull/1075 --
Re: Object.factory() and exe file size bloat
On Saturday, 22 August 2015 at 21:56:25 UTC, Walter Bright wrote: On 8/22/2015 1:22 PM, David Nadlinger wrote: One of the use cases for export on Linux would be to set the ELF visibility based on it. Emitting all the symbols with default visibility, like we currently do, leads to size and load time problems with large libraries. Big C++ projects are plagued regularly by this (cf. -fvisibility=hidden). A bugzilla enhancement request for this would be nice. https://issues.dlang.org/show_bug.cgi?id=9893 – David
Re: string - null/bool implicit conversion
On Saturday, 22 August 2015 at 21:23:19 UTC, Timon Gehr wrote: For the comma operator, I think it's pretty clear that the usage of ',' to separate components of a tuple would be more useful. (With L-T-R evaluation, replacing usages of the comma operator is easy, e.g. 'a,b,c' becomes '(a,b,c)[$-1]', not to speak about the delegate option.) Which can be wrapped into a simple construct à la last(a, b, c) or similar.
Re: Object.factory() and exe file size bloat
On Saturday, 22 August 2015 at 23:33:15 UTC, Manu wrote: On 21 August 2015 at 15:06, Walter Bright via Digitalmars-d digitalmars-d@puremagic.com wrote: [...] I don't follow the reasoning, but yes! Kill it with fire! I'd rather see a compile option or something to disable it completely, like how disabling RTTI is a common C++ option. RTTI is used heavily in the runtime hooks; this needs to be fixed first, as far as I know.
Re: D-Day for DMD is today!
On Sunday, 23 August 2015 at 05:17:33 UTC, Walter Bright wrote: https://github.com/D-Programming-Language/dmd/pull/4923 We have made the switch from C++ DMD to D DMD! [...] Excellent. I guess it's also time to clean the wiki page that explained how to build under win32 with DMC. It's obsolete now.
Re: D-Day for DMD is today!
On 08/23/2015 07:22 AM, Rikki Cattermole wrote: Now lets hope the next stage is smooth in the transition. Here is a small guide on how to update a PR. https://github.com/D-Programming-Language/dmd/pull/4922#issuecomment-133776696
[Issue 13007] Wrong x86 code: long negate
https://issues.dlang.org/show_bug.cgi?id=13007 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added Severity|major |critical --
[Issue 14950] New: Setting enum value to the last member of another enum causes int overflow error
https://issues.dlang.org/show_bug.cgi?id=14950 Issue ID: 14950 Summary: Setting enum value to the last member of another enum causes int overflow error Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: normal Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: initrd...@gmail.com I encountered this when writing interfaces for a C library. Code sample:

enum A { start, end }
enum B { start = A.end, end }

rdmd -main ./test.d
./test.d(9): Error: enum member test.B.end initialization with (B.start + 1) causes overflow for type 'int'
Failed: [dmd, -main, -v, -o-, ./test.d, -I.]
--
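A possible workaround (an assumption on my part, not from the report): B.start would otherwise carry the enum type A, whose largest member is A.end, so the implicitly computed B.end = B.start + 1 overflows A's range. Casting the initializer to int sidesteps that:

```d
enum A { start, end }

// Hypothetical workaround: the cast drops the enum type A from the
// initializer, so B's members are counted as plain ints and
// B.end == 2 no longer overflows A's value range.
enum B { start = cast(int) A.end, end }

static assert(B.start == 1 && B.end == 2);
```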
Re: (De)Serializing interfaces
On Saturday, 22 August 2015 at 19:14:16 UTC, nims wrote: I think interfaces are very powerful and I heavily use them. The only problem I have with them is that serializing/deserializing them to XML or JSON doesn't seem to work. So far I got to try Orange and painlessjson. Using Orange all I got was a lot of compiler errors. Painlessjson did compile normally but just ignores all interface class members. I've never used Orange, but one thing you could try is casting your object from MyInterface to Object, and registering the type Foobar like in http://dsource.org/projects/orange/wiki/Tutorials/SerializeBase, then serializing/deserializing it as Object rather than MyInterface. I'm not sure if this will work, but it's worth a try if it doesn't handle interfaces. Interfaces are a bit odd in some ways, as they are not necessarily classes (and thus not implicitly convertible to Object) in situations like with COM / extern(C++).
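The cast described above is cheap to check at runtime. A sketch of the idea — the serializer calls in the trailing comment are hypothetical, not Orange's actual API:

```d
interface MyInterface { int GetA(); }

class Foo : MyInterface
{
    int a = 3;
    int GetA() { return a; }
}

void main()
{
    MyInterface i = new Foo;

    // For ordinary D classes the dynamic cast to Object succeeds; for
    // COM or extern(C++) interfaces it would yield null, so check first.
    Object o = cast(Object) i;
    assert(o !is null);

    // Hypothetical next step: register the concrete type with the
    // serializer, then hand it `o` rather than the interface reference:
    //   serializer.register!Foo();
    //   serializer.serialize(o);
}
```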
Re: string - null/bool implicit conversion
On 08/23/2015 01:09 AM, deadalnix wrote: On Saturday, 22 August 2015 at 21:23:19 UTC, Timon Gehr wrote: For the comma operator, I think it's pretty clear that the usage of ',' to separate components of a tuple would be more useful. (With L-T-R evaluation, replacing usages of the comma operator is easy, e.g. 'a,b,c' becomes '(a,b,c)[$-1]', not to speak about the delegate option.) Which can be wrapped into a simple construct à la last(a, b, c) or similar. Which can even be force-inlined.
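A minimal sketch of such a `last` helper in D — the name is hypothetical, and `pragma(inline, true)` (available in newer compilers) supplies the force-inlining mentioned above. D guarantees left-to-right evaluation of function arguments, so all side effects happen in order, just as with the comma operator:

```d
import std.stdio;

// Hypothetical comma-operator replacement: evaluates every argument
// left to right (guaranteed by the language) and returns the last one.
pragma(inline, true)
auto last(Args...)(Args args)
{
    return args[$ - 1];
}

void main()
{
    int x;
    auto r = last(x = 1, x + 41); // `x = 1` runs first, then `x + 41`
    writeln(x, " ", r);           // 1 42
}
```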
Re: [OT] Sharp Regrets: Top 10 Worst C# Features
On 8/22/2015 8:32 PM, H. S. Teoh via Digitalmars-d wrote: People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird. -- D. Knuth A good friend of mine in college decided to learn Fortran, having never programmed before. Being a practical sort, he got a copy of the Fortran-10 reference manual, read it, and wrote a program. Being an amazingly smart man, his program worked. But it ran awfully slowly. He was quite mystified, and finally asked for help from someone who knew Fortran. It was quickly discovered that the program wrote a file by opening the file, appending a character, then closing the file, for each byte in the file. (You can imagine how slow that is!) My friend defended himself with the fact that the Fortran reference manual made no mention about how to do file I/O for performance - knowledge of this sort of thing was just assumed. He was quite right.
Re: Object.factory() and exe file size bloat
On 8/22/2015 3:41 PM, Adam D. Ruppe wrote: The common saying "if it isn't in bugzilla it is forgotten" seems quite silly when so much that IS in bugzilla is forgotten all the same. Lots of people, like Daniel and Kenji and Vladimir and Martin, etc., go through Bugzilla looking for things to fix. I don't know anyone combing through the 300,000 messages in this n.g. looking for vaguely described complaints to fix. Furthermore, the changelog for each release shows hundreds of bugzilla issues fixed, and 0 newsgroup complaints fixed.
Re: Object.factory() and exe file size bloat
On 8/22/2015 5:47 AM, Iain Buclaw via Digitalmars-d wrote: But it should be put in COMDAT And it is.
[Issue 11252] in operator for std.range.iota
https://issues.dlang.org/show_bug.cgi?id=11252 Jack Stouffer j...@jackstouffer.com changed: What|Removed |Added CC||j...@jackstouffer.com --- Comment #1 from Jack Stouffer j...@jackstouffer.com --- This enhancement request makes no sense, as the in operator in Python and D do two completely different things. To replicate the Python behavior, you can do the following:

import std.stdio, std.range, std.algorithm.searching;

void main() {
    if (iota(1, 10).countUntil(foo(2)) > -1) {
        "yes".writeln;
    }
}

But I don't see why you would want to, as this is much faster:

void main() {
    immutable int temp = foo(2);
    if (temp >= 1 && temp < 10) {
        "yes".writeln;
    }
}

--
[Issue 10042] std.range.inits and tails
https://issues.dlang.org/show_bug.cgi?id=10042 Jack Stouffer j...@jackstouffer.com changed: What|Removed |Added Status|NEW |RESOLVED CC||j...@jackstouffer.com Resolution|--- |INVALID --
[Issue 4440] [patch] Inlining delegate literals
https://issues.dlang.org/show_bug.cgi?id=4440 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added Hardware|Other |All --
Re: dmd codegen improvements
On Sun, Aug 23, 2015 at 12:03:25AM +, Vladimir Panteleev via Digitalmars-d wrote: On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote: On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote: I suggest that we revamp the compiler download page again. The lead should be a select your compiler which lists the advantages and disadvantages of each of DMD, LDC and GDC. https://github.com/D-Programming-Language/dlang.org/pull/1067 Now live on http://dlang.org/download.html Better artwork welcome :) Um... why is the GNU icon twice the size of the others? T -- The best way to destroy a cause is to defend it poorly.
Re: [OT] Sharp Regrets: Top 10 Worst C# Features
On Sat, Aug 22, 2015 at 08:19:26PM -0700, Walter Bright via Digitalmars-d wrote: On 8/21/2015 3:59 AM, Chris wrote: The whole article, imo, is like saying that when dealing with programming there are problems, difficulties and outright contradictions (like in maths or any other logical system the human mind has come up with), but language designers should make all these evil things go away! Imagine you had to attach warnings to a programming language, like the labels on microwaves Don't put your pets in it [you stupid *]!. We have conversations like that around here often. Some people want to hide what a CPU normally does. It's a fine sentiment, but a systems programming language should expose what a CPU does so it can be exploited for efficient programming. Yes! People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird. -- D. Knuth T -- Klein bottle for rent ... inquire within. -- Stephen Mulraney
Re: dmd codegen improvements
On Sun, Aug 23, 2015 at 12:03:25AM +, Vladimir Panteleev via Digitalmars-d wrote: On Tuesday, 18 August 2015 at 17:10:36 UTC, Vladimir Panteleev wrote: On Tuesday, 18 August 2015 at 12:37:37 UTC, Vladimir Panteleev wrote: I suggest that we revamp the compiler download page again. The lead should be a select your compiler which lists the advantages and disadvantages of each of DMD, LDC and GDC. https://github.com/D-Programming-Language/dlang.org/pull/1067 Now live on http://dlang.org/download.html Better artwork welcome :) What about one of the figures from: http://eusebeia.dyndns.org/~hsteoh/tmp/mascot.png for the DMD icon? (Or any other pose that you might suggest -- I still have the povray files and can do a render in a different post.) T -- It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
Re: std.data.json formal review
On 08/21/2015 12:29 PM, David Nadlinger wrote: On Friday, 21 August 2015 at 15:58:22 UTC, Nick Sabalausky wrote: It also fucks up UFCS, and I'm a huge fan of UFCS. Are you saying that

import json : parseJSON = parse;
foo.parseJSON.bar;

does not work? Ok, fair point, although I was referring more to fully-qualified name lookups, as in the snippet I quoted from Jacob. I.e., this doesn't work:

someJsonCode.std.json.parse();

I do think though, generally speaking, if there is much need to do a renamed import, the symbol in question probably didn't have the best name in the first place. Renamed importing is a great feature to have, but when you see it used it raises the question: *why* is this being renamed? Why not just use its real name? For the most part, I see two main reasons:

1. Just because. "I like this bikeshed color better." But this is merely a code smell, not a legitimate reason to even bother.

2. The symbol has a questionable name in the first place.

If there's reason to even bring up renamed imports as a solution, then it's probably falling into the "questionably named" category. Just because we CAN use D's module system and renamed imports to clear up ambiguities doesn't mean we should swing TOO far to the opposite extreme when avoiding C/C++'s big long ugly names as a substitute for modules. Like Walter, I very much dislike C/C++'s super-long, super-unambiguous names. But IMO, preferring parseStream over parseJSONStream isn't a genuine case of avoiding C/C++-style naming; it's being overrun by fear of C/C++-style naming, and thus taking things too far to the opposite extreme. We can strike a better balance than choosing between "brief but unclear at a glance" and C++-level verbosity. Yes, we CAN do import std.json : parseJSONStream = parseStream;, but if there's even any motivation to do so in the first place, we may as well just use the better name right from the start.
Besides, those who prefer ultra-brevity are free to paint their bikesheds with renamed imports, too ;)
[Issue 13007] Wrong x86 code: long negate
https://issues.dlang.org/show_bug.cgi?id=13007 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added CC||bugzi...@digitalmars.com --- Comment #2 from Walter Bright bugzi...@digitalmars.com --- https://github.com/D-Programming-Language/dmd/pull/4920 --
Re: [OT] Sharp Regrets: Top 10 Worst C# Features
On 8/19/2015 5:00 AM, Timon Gehr wrote: Classes are reference types in C# as well. This is hardly innovation. C# took that feature from Java, and it's likely much, much older than that.
Re: (De)Serializing interfaces
On 8/23/2015 7:14 AM, nims wrote: I think interfaces are very powerful and I heavily use them. The only problem I have with them is that serializing/deserializing them to XML or JSON doesn't seem to work. So far I got to try Orange and painlessjson. Using Orange all I got was a lot of compiler errors. Painlessjson did compile normally but just ignores all interface class members. This is the code I tried (I apologize for not formatting it, I have no idea how to do that):

interface MyInterface {
    int GetA();
}

class Foo : MyInterface {
    int a;
    int GetA() { return a; }
}

// maybe add a class Bar later which implements the same interface

class Foobar {
    MyInterface myBar = new Foo();
}

void main() {
    // serialize it
}

Based upon the name 'GetA' I suspect you are coming from C#. So let me put this into that context. In C#, the ISerializable[0] interface is used to denote a class that can be serialized. Most notably, ISerializable has a method called GetObjectData which populates a data table (SerializationInfo) with enough information to perform deserialization. There is also a special constructor added to any serializable class so it can be manually recreated. Calling such a constructor generically is not possible in D, unfortunately; you will need an empty constructor plus a separate method to emulate this successfully. Most important in D is knowing type sizes, offsets, and of course whether a member is a pointer. If it is a pointer, is it an array? Again, if so, the sizes and whether it points to further pointers, etc. Now if you look at how Java does it[1], it is very similar to how C# does it. Anyway, to summarise why D doesn't yet have something akin to Java or C#: simply put, we generally work with the actual type, not an interface, so libraries like Orange can serialize/deserialize with great certainty that they got everything. However, if you need any help with making such a library, please let me know!
[0] https://msdn.microsoft.com/en-us/library/system.runtime.serialization.iserializable(v=vs.110).aspx [1] http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html
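A rough sketch of how the C#/Java pattern described above could be emulated in D. All names here are made up for illustration, not an existing library API:

```d
import std.conv : to;

// Hypothetical D emulation of C#'s ISerializable: an empty constructor
// plus a populate method stand in for the special deserialization
// constructor, which cannot be invoked generically in D.
interface Serializable
{
    string[string] getObjectData();        // analogue of GetObjectData
    void setObjectData(string[string] d);  // rebuild state after `new`
}

class Foo : Serializable
{
    int a;

    this() { }                  // empty ctor so a factory can create us
    this(int a) { this.a = a; }

    string[string] getObjectData() { return ["a": a.to!string]; }
    void setObjectData(string[string] d) { a = d["a"].to!int; }
}

unittest
{
    auto orig = new Foo(7);
    auto copy = new Foo;                       // empty constructor...
    copy.setObjectData(orig.getObjectData());  // ...then populate
    assert(copy.a == 7);
}
```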
Re: Object.factory() and exe file size bloat
On Saturday, 22 August 2015 at 20:14:59 UTC, Walter Bright wrote: I'm not sure how export would help on Linux. One of the use cases for export on Linux would be to set the ELF visibility based on it. Emitting all the symbols with default visibility, like we currently do, leads to size and load time problems with large libraries. Big C++ projects are plagued regularly by this (cf. -fvisibility=hidden). — David
Re: Wiki article: Starting as a Contributor
cheers for that - that differentiation/distinction wasn't clear (to me). anonymous wrote: On Saturday 22 August 2015 11:05, ted wrote: From here, I can go to 'Building DMD' and 'How to Fork and Build dlang.org', which both seem to build DMD - I'm unsure of the overlap aspects here. The 'dlang.org' project is the website. It deals with building dmd only insofar as you need a compiler to build the website. If your goal is to build dmd, don't bother with anything 'dlang.org'. There is also an issue with the set of instructions in 'How to Fork and Build dlang.org' (and I don't know what category to file the bug under in bugzilla!?). Component: dlang.org

make --directory=../dmd-2.068.0/src -f posix.mak -j 4
make[1]: *** ../dmd-2.068.0/src: No such file or directory. Stop.
posix.mak:338: recipe for target '../dmd-2.068.0/src/dmd' failed
make: *** [../dmd-2.068.0/src/dmd] Error 2

This is issue 14915, a regression in 2.068: https://issues.dlang.org/show_bug.cgi?id=14915 As a workaround, you can manually revert the changes done in PR #1050. Though, repeating myself, if your goal is to build dmd, don't bother with dlang.org.
Re: Object.factory() and exe file size bloat
On 8/22/2015 1:22 PM, David Nadlinger wrote: On Saturday, 22 August 2015 at 20:14:59 UTC, Walter Bright wrote: I'm not sure how export would help on Linux. One of the use cases for export on Linux would be to set the ELF visibility based on it. Emitting all the symbols with default visibility, like we currently do, leads to size and load time problems with large libraries. Big C++ projects are plagued regularly by this (cf. -fvisibility=hidden). A bugzilla enhancement request for this would be nice.
Re: Object.factory() and exe file size bloat
On Saturday, 22 August 2015 at 22:08:50 UTC, David Nadlinger wrote: A bugzilla enhancement request for this would be nice. https://issues.dlang.org/show_bug.cgi?id=9893 The common saying "if it isn't in bugzilla it is forgotten" seems quite silly when so much that IS in bugzilla is forgotten all the same.
Re: Object.factory() and exe file size bloat
On 21 August 2015 at 15:06, Walter Bright via Digitalmars-d digitalmars-d@puremagic.com wrote: This function: http://dlang.org/phobos/object.html#.Object.factory enables a program to instantiate any class defined in the program. To make it work, though, every class in the program has to have a TypeInfo generated for it. This leads to bloat: https://issues.dlang.org/show_bug.cgi?id=14758 and sometimes the bloat can be overwhelming. The solution seems straightforward - only have Object.factory be able to instantiate classes marked as 'export'. This only makes sense anyway. What do you think? I don't follow the reasoning, but yes! Kill it with fire! I'd rather see a compile option or something to disable it completely, like how disabling RTTI is a common C++ option.
[Issue 13147] Wrong codegen for thisptr in naked extern (C++) methods
https://issues.dlang.org/show_bug.cgi?id=13147 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added CC||bugzi...@digitalmars.com --- Comment #3 from Walter Bright bugzi...@digitalmars.com --- (In reply to yebblies from comment #2) I suspect the correct solution is for the invariant calls to not be implicitly inserted for naked functions. And you'd be right. https://github.com/D-Programming-Language/dmd/pull/4921 --
[Issue 14663] shared library test - link_linkdep - segfaults on FreeBSD 10
https://issues.dlang.org/show_bug.cgi?id=14663 --- Comment #9 from Jonathan M Davis issues.dl...@jmdavisprog.com --- All of the druntime tests now pass on my FreeBSD 10 box. Thanks! --
Re: [OT] Sharp Regrets: Top 10 Worst C# Features
On 8/21/2015 3:59 AM, Chris wrote: The whole article, imo, is like saying that when dealing with programming there are problems, difficulties and outright contradictions (like in maths or any other logical system the human mind has come up with), but language designers should make all these evil things go away! Imagine you had to attach warnings to a programming language, like the labels on microwaves Don't put your pets in it [you stupid *]!. We have conversations like that around here often. Some people want to hide what a CPU normally does. It's a fine sentiment, but a systems programming language should expose what a CPU does so it can be exploited for efficient programming.
Re: D-Day for DMD is today!
On Sunday, 23 August 2015 at 05:17:33 UTC, Walter Bright wrote: https://github.com/D-Programming-Language/dmd/pull/4923 We have made the switch from C++ DMD to D DMD! Many, many thanks to Daniel Murphy for slaving away for 2.5 years to make this happen. More thanks to Martin Nowak for helping shepherd it through the final stages, and to several others who have pitched in on this. This is a HUGE milestone for us. Much work remains to be done, such as rebasing existing dmd pull requests. Thanks in advance for the submitters who'll be doing that. I hope you aren't too unhappy about the extra work - it's in a good cause! does it build with ldc or gdc?
[Issue 9785] dmd -inline should inline lambda delegates
https://issues.dlang.org/show_bug.cgi?id=9785 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added See Also||https://issues.dlang.org/sh ||ow_bug.cgi?id=4440 --
[Issue 4440] [patch] Inlining delegate literals
https://issues.dlang.org/show_bug.cgi?id=4440 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added Keywords||performance --
[Issue 4440] [patch] Inlining delegate literals
https://issues.dlang.org/show_bug.cgi?id=4440 Walter Bright bugzi...@digitalmars.com changed: What|Removed |Added See Also||https://issues.dlang.org/sh ||ow_bug.cgi?id=9785 --
D-Day for DMD is today!
https://github.com/D-Programming-Language/dmd/pull/4923 We have made the switch from C++ DMD to D DMD! Many, many thanks to Daniel Murphy for slaving away for 2.5 years to make this happen. More thanks to Martin Nowak for helping shepherd it through the final stages, and to several others who have pitched in on this. This is a HUGE milestone for us. Much work remains to be done, such as rebasing existing dmd pull requests. Thanks in advance for the submitters who'll be doing that. I hope you aren't too unhappy about the extra work - it's in a good cause!
Re: Template Parameters in Struct Member Functions
On Saturday, 22 August 2015 at 16:49:26 UTC, DarthCthulhu wrote: I'm having difficulty understanding how templates operate as function parameters. Say I have this:

struct ArrayTest {
    void arrayTest(T)(T arrayT) {
        writeln(arrayT);
    }
}

unittest {
    ArrayTest test;
    float farray[] = [ 0.5f, 0.5f, 0.0f,
                       0.5f, -0.5f, 0.0f,
                       -0.5f, -0.5f, 0.0f,
                       -0.5f, 0.5f, 0.0f ];
    test.arrayTest(farray);
}

Everything works peachy as expected. But as soon as I add another parameter to the arrayTest function like so (and changing the unit test to match):

void arrayTest(T, int passing)(T arrayT) { ... }

I get 'cannot deduce function from argument types' errors. Specifically stating the type of the function doesn't seem to help:

test.arrayTest(float [])(farray, 1);
test.arrayTest!(float, 1)(farray);
Re: Template Parameters in Struct Member Functions
On Saturday, 22 August 2015 at 17:08:36 UTC, Mike Parker wrote:

void arrayTest(T, int passing)(T arrayT) { ... }

I get 'cannot deduce function from argument types' errors. Specifically stating the type of the function doesn't seem to help:

test.arrayTest(float [])(farray, 1);
test.arrayTest!(float, 1)(farray);

Sorry, that should be:

test.arrayTest!(float[], 1)(farray);

In your template declaration, you have declared two template parameters, T and passing, and one function parameter, arrayT. It is equivalent to:

template arrayTest(T, int passing) {
    void arrayTest(T arrayT) { ... }
}

To call the function, you have to explicitly instantiate the template with the instantiation operator (which is !), giving it the two template arguments in the first pair of parentheses and the one function argument in the second pair. The template parameters are compile-time arguments; the function parameter is runtime. With this form:

void arrayTest(T)(T arrayT) { ... }

there is no need for explicit instantiation. The compiler is able to deduce what T should be since it has all the information it needs, so you can call it like:

test.arrayTest(foo);
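Putting the correction together, a complete version of the example that compiles. `arrayTest2` is an added variant (not from the original post) showing that if the value parameter comes first, the type can still be deduced from the function argument:

```d
import std.stdio;

struct ArrayTest
{
    // T could be deduced, but `passing` must be given explicitly, so
    // the whole template requires explicit instantiation with `!`.
    void arrayTest(T, int passing)(T arrayT)
    {
        writeln("passing=", passing, " data=", arrayT);
    }

    // Variant: with the value parameter first, the trailing type
    // parameter is deduced from the runtime argument.
    void arrayTest2(int passing, T)(T arrayT)
    {
        writeln("passing=", passing, " data=", arrayT);
    }
}

void main()
{
    ArrayTest test;
    float[] farray = [0.5f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f];

    test.arrayTest!(float[], 1)(farray); // both parameters explicit
    test.arrayTest2!2(farray);           // T deduced as float[]
}
```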
[Issue 14949] Non-descriptive Enforcement failed when attempting to write to closed file
https://issues.dlang.org/show_bug.cgi?id=14949 --- Comment #2 from github-bugzi...@puremagic.com --- Commits pushed to master at https://github.com/D-Programming-Language/phobos https://github.com/D-Programming-Language/phobos/commit/78a0fb92e9d69d89eb8c4e87fce0f0cd7d67cb14 fix Issue 14949 - Non-descriptive Enforcement failed when attempting to write to closed file https://github.com/D-Programming-Language/phobos/commit/a331d7f870cb3f2b32c5fff1edd3bc371ce4a057 Merge pull request #3573 from CyberShadow/pull-20150822-162819 fix Issue 14949 - Non-descriptive Enforcement failed when attemptin… --
[Issue 10667] http://dlang.org/cppstrings.html benchmark example doesn't really show off slices
https://issues.dlang.org/show_bug.cgi?id=10667 hst...@quickfur.ath.cx changed: What|Removed |Added Status|NEW |RESOLVED CC||hst...@quickfur.ath.cx Resolution|--- |WORKSFORME --- Comment #1 from hst...@quickfur.ath.cx --- This page apparently is no longer included in the website build. Closing this for now. Please reopen if similar issues exist in currently-public website pages, thanks! --