Re: Things that keep D from evolving?
On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer wrote: On 2/12/16 9:37 AM, Matt Elkins wrote: [...] Pass by reference and pass by value mean different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve Note that use of the 'in' and 'scope' (other than for delegates) parameter storage classes should be avoided. It really should be a warning.
Re: Things that keep D from evolving?
On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer wrote: but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve I'd imagine GCC or LLVM may be able to make use of such (type) information for optimizations — more so LLVM, probably, due to all the functional languages that use it nowadays.
Re: Things that keep D from evolving?
On Friday, 12 February 2016 at 17:29:54 UTC, Matt Elkins wrote: On Friday, 12 February 2016 at 17:20:23 UTC, rsw0x wrote: On Friday, 12 February 2016 at 15:12:19 UTC, Steven Schveighoffer wrote: On 2/12/16 9:37 AM, Matt Elkins wrote: [...] Pass by reference and pass by value mean different treatment inside the function itself, so it can't differ from call to call. It could potentially differ based on the type being passed, but I'm unaware of such an optimization, and it definitely isn't triggered specifically by 'in'. 'in' is literally replaced with 'scope const' when it is a storage class. -Steve note that 'in' and 'scope' (other than for delegates) parameter storage class usage should be avoided. It really should be a warning. Why is that? Unless it has changed, 'scope' is a noop for everything but delegates. Code that works now will break when (if...) it gets implemented.
Re: Things that keep D from evolving?
On Monday, 8 February 2016 at 17:15:11 UTC, Wyatt wrote: On Monday, 8 February 2016 at 16:33:09 UTC, NX wrote: I see... By any chance, can we solve this issue with GC managed pointers? Maybe we could. But it's never going to happen. Even if Walter weren't fundamentally opposed to multiple pointer types in D, it wouldn't happen. You asked about things that prevent improvement, right? Here's the big one, and a major point of friction in the community: Walter and Andrei refuse to break existing code in pursuit of changes that substantially improve the language. (Never mind that code tends to break anyway.) -Wyatt Pretty much this. We can't go a version without code breakage, but we also can't introduce features that would drastically help the language because they would introduce breakage. I.e., all the great ownership/scope/what-have-you proposals stall, and shit like DIP25 gets pushed through instead; then 2 days later it gets proven to be worthless anyway. Whoops.
Re: is increment on shared ulong atomic operation?
On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson wrote: If I define a shared ulong variable, is increment an atomic operation? E.g. shared ulong t; ... t++; It seems as if it ought to be, but it could be split into read, increment, store. I started off defining a shared struct, but that seems silly, as if the operations defined within a shared struct are synced, then the operation on a shared variable should be synced, but "+=" is clearly stated not to be synchronized, so I'm uncertain. https://dlang.org/phobos/core_atomic.html#.atomicOp
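For reference, a minimal sketch of the atomicOp approach (a complete program, so the usage from the docs is unambiguous):

```d
import core.atomic;
import std.stdio;

void main()
{
    shared ulong t;

    // t++ on a shared variable is not guaranteed to be a single atomic
    // read-modify-write; atomicOp makes the increment explicit and atomic.
    atomicOp!"+="(t, 1);

    writeln(atomicLoad(t)); // 1
}
```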
Re: is increment on shared ulong atomic operation?
On Sunday, 7 February 2016 at 20:25:44 UTC, Minas Mina wrote: On Sunday, 7 February 2016 at 19:43:23 UTC, rsw0x wrote: On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote: On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson wrote: [...] https://dlang.org/phobos/core_atomic.html#.atomicOp Just noticed that there's no example. It's used like shared(ulong) a; atomicOp!"+="(a, 1); Wow, that syntax sucks a lot. How so? It's meant to be very explicit.
Re: is increment on shared ulong atomic operation?
On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote: On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson wrote: If I define a shared ulong variable, is increment an atomic operation? E.g. shared ulong t; ... t++; It seems as if it ought to be, but it could be split into read, increment, store. I started off defining a shared struct, but that seems silly, as if the operations defined within a shared struct are synced, then the operation on a shared variable should be synced, but "+=" is clearly stated not to be synchronized, so I'm uncertain. https://dlang.org/phobos/core_atomic.html#.atomicOp Just noticed that there's no example. It's used like shared(ulong) a; atomicOp!"+="(a, 1);
Re: Things that keep D from evolving?
On Saturday, 6 February 2016 at 17:46:48 UTC, rsw0x wrote: On Saturday, 6 February 2016 at 17:46:00 UTC, Ola Fosheim Grøstad wrote: On Saturday, 6 February 2016 at 17:38:30 UTC, rsw0x wrote: Can't be done with the root class because classes never trigger RAII outside of (deprecated) scope allocations. Not sure what you mean. The class instance doesn't have to trigger anything? You "retain(instance)" to increase the refcount and "release(instance)" to decrease refcount or destroy the instance. Might as well manually free and delete instead. Er, malloc and free* : )
Re: Things that keep D from evolving?
On Saturday, 6 February 2016 at 17:46:00 UTC, Ola Fosheim Grøstad wrote: On Saturday, 6 February 2016 at 17:38:30 UTC, rsw0x wrote: Can't be done with the root class because classes never trigger RAII outside of (deprecated) scope allocations. Not sure what you mean. The class instance doesn't have to trigger anything? You "retain(instance)" to increase the refcount and "release(instance)" to decrease refcount or destroy the instance. Might as well manually free and delete instead.
Re: Things that keep D from evolving?
On Saturday, 6 February 2016 at 17:36:28 UTC, Ola Fosheim Grøstad wrote: On Saturday, 6 February 2016 at 17:22:03 UTC, Adam D. Ruppe wrote: On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim Grøstad wrote: Nothing prevents you from creating your own reference counting mechanism. A struct wrapper doesn't give the things you need to reliably handle inheritance. I don't think I suggested using a struct wrapper? :-) That just causes issues with alignment or requires a more complex allocator. You can either build the refcount into the root class or use an extra indirection like C++'s shared_ptr. Can't be done with the root class because classes never trigger RAII outside of (deprecated) scope allocations. Can't be done with indirection because you still hit the same issue. Applies to storage classes as well, btw.
Bug or intended?
I was playing around with alias templates and came across this, I reduced it to:
---
struct A(alias C c)
{
    auto foo() { return c.i; }
}

struct B
{
    C c;
    A!c a;
}

struct C
{
    int i;
}
---
It gives me a "need 'this' for 'i' of type 'int'" error.
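For what it's worth, one possible workaround (a sketch, not a fix for the alias behavior itself) is to carry a runtime pointer to the sibling member instead of a compile-time alias, so no enclosing 'this' is needed:

```d
// Sketch of a workaround: a runtime pointer replaces the alias parameter.
struct A(T)
{
    T* c;
    auto foo() { return c.i; }
}

struct C { int i; }

struct B
{
    C c;
    A!C a;
}

void main()
{
    B b;
    b.c.i = 42;
    b.a.c = &b.c; // wire up the pointer manually
    assert(b.a.foo() == 42);
}
```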
Re: Things that keep D from evolving?
On Saturday, 6 February 2016 at 11:15:06 UTC, Ola Fosheim Grøstad wrote: On Saturday, 6 February 2016 at 11:09:28 UTC, NX wrote: On Saturday, 6 February 2016 at 10:29:32 UTC, Ola Fosheim Grøstad wrote: What makes it impossible to have ref counted classes? Nothing. Then why do we need DIP74 ? I think they aim for compiler optimizations, like ARC on Swift. But ARC requires all ref counting to be done behind the scene, so I think it is a bad idea for D to be honest. And why documentation says RefCounted doesn't work with classes? I don't use Phobos much. I think RefCounted creates a wrapper for an embedded struct or something. Something like struct { int refcount; T payload; } Nothing prevents you from creating your own reference counting mechanism. reference counting is incredibly slow, DIP74 attempts to partially amend that in D as it can't be done any other way besides compiler help. IIRC, it essentially just allows RC inc/dec to be elided where possible
Re: Functions that return type
On Saturday, 16 January 2016 at 21:22:15 UTC, data pulverizer wrote: Is it possible to create a function that returns Type like typeof() does? Something such as: Type returnInt(){ return int; } Functions return values, not types. You would use a template to "return" a type. More to the point what is the Type of a type such as int? Thanks What is the value of a value such as 9? A type is a type; it does not have a type. If this is not clear, I can try to make it clearer.
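A short sketch of how a template "returns" a type via an eponymous alias (the name ReturnInt here just mirrors the question; it's not a standard symbol):

```d
// A template "returns" a type by declaring an eponymous alias member.
template ReturnInt()
{
    alias ReturnInt = int; // ReturnInt!() now refers to the type int
}

void main()
{
    ReturnInt!() x = 9; // x has type int
    static assert(is(ReturnInt!() == int));
    assert(x == 9);
}
```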
Re: function argument accepting function or delegate?
On Sunday, 17 January 2016 at 06:27:41 UTC, Jon D wrote: My underlying question is how to compose functions taking functions as arguments, while allowing the caller the flexibility to pass either a function or delegate. [...] Templates are an easy way.
---
auto call(F, Args...)(F fun, auto ref Args args)
{
    return fun(args);
}
---
Would probably look nicer with some constraints from std.traits.
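A hedged sketch of what such a constraint might look like, using std.traits.isCallable:

```d
import std.traits : isCallable;

// The constraint rejects non-callables at compile time with a clearer error.
auto call(F, Args...)(F fun, auto ref Args args)
    if (isCallable!F)
{
    return fun(args);
}

void main()
{
    int function(int) f = (int x) => x + 1; // plain function pointer
    int delegate(int) d = (int x) => x * 2; // delegate

    assert(call(f, 1) == 2); // works with a function
    assert(call(d, 2) == 4); // and with a delegate
}
```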
Structs intended to run destructor immediately if not assigned to a variable?
Returning a struct with a destructor and not binding it to a variable appears to make the destructor run immediately instead of at the end of the scope. Is this intended? example: http://dpaste.dzfl.pl/dd285200ba2b
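A minimal sketch illustrating the observed behavior (an rvalue not bound to a variable is destroyed at the end of the full expression, while a bound one lives until scope exit):

```d
import std.stdio;

struct S
{
    ~this() { writeln("dtor"); }
}

S make() { return S(); }

void main()
{
    make();                // not bound: destroyed at the end of the
    writeln("after call"); // full expression, so "dtor" prints first

    auto s = make();       // bound to a variable: destroyed at scope exit
    writeln("end of main");
}
```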
CAS and atomicOp!"" memory ordering?
Why is there no way to specify the desired memory order with these? What memory order am I supposed to assume? The documentation is sparse.
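For what it's worth, atomicLoad and atomicStore in core.atomic do take a MemoryOrder template argument (it's cas and atomicOp that are pinned to sequential consistency). A sketch of a release/acquire pairing, run single-threaded here just to keep it self-contained:

```d
import core.atomic;

shared int flag;
shared int data;

void producer()
{
    atomicStore!(MemoryOrder.raw)(data, 42); // relaxed store
    atomicStore!(MemoryOrder.rel)(flag, 1);  // release: publishes data
}

void consumer()
{
    // acquire pairs with the release above
    if (atomicLoad!(MemoryOrder.acq)(flag) == 1)
        assert(atomicLoad!(MemoryOrder.raw)(data) == 42);
}

void main()
{
    producer();
    consumer();
}
```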
Re: issue porting C++/glm/openGL to D/gl3n/openGL
On Tuesday, 12 January 2016 at 01:00:30 UTC, Mike Parker wrote: On Sunday, 10 January 2016 at 05:47:01 UTC, WhatMeWorry wrote: Thanks. Bummer. I really like gl3n, but glm/opengl is used almost exclusively in all the modern opengl code (tutorials) I've seen, so this might be a deal breaker. As the author of Derelict do you have any ideas of how much work is involved with getting glm to work with D? Want to do a DerelictGLM :) AFAIK, glm is a header-only library, so there's nothing to bind to. And if it did have binaries, I don't think the current state of D's C++ support could handle it. Binding to C is easy, binding to C++ is hit or miss. The performance would also be terrible because AFAIK nothing could be inlined (outside of LTO, maybe).
Re: Anyone using glad?
On Sunday, 10 January 2016 at 21:30:32 UTC, Jason Jeffory wrote: Seems like it is a very nice way to get into openGL from D. http://glad.dav1d.de/ I generated the bindings for all the latest versions of the various specifications. Does anyone have any tutorials that use this library effectively? There's this https://github.com/Dav1dde/glamour But not sure what it is (diff between it and glad). Says it's a wrapper to OpenGL... but does it use the glad generated bindings? It looks like I'd prefer this to derelict because it seems like it is an automatically generated binding... which means future extensibility and no "extra" stuff. Would be nice if it works with dub. How could I use it easily with dub as a local library? (create a dependency from a local file location) Thanks. I preferred glad over derelict when I did some opengl work with D because it was easier to include only the functions I wanted. Derelict made much bigger binaries; I'm not sure how much of that was due to the whole kitchen-sink approach or to the derelict utility itself. However, both are great and work fine. Their analogues in C/C++ would be function pointer loaders like glew for derelict, or opengl binding generators like glLoadGen (and glad itself, which is multi-language — I actually preferred it for C++ too) for glad. Bye.
Re: issue porting C++/glm/openGL to D/gl3n/openGL
On Sunday, 10 January 2016 at 02:51:57 UTC, WhatMeWorry wrote: Just translating some simple C++/glm/opengl tutorial code to D/gl3n/opengl and I'm coming across more friction than I expected. I've got a square centered at my window which is rotated by 45 degrees (counter clockwise) and then moved to the lower right quadrant. [...] IIRC, gl3n uses row-major and glm uses column-major ordering. Just pass GL_TRUE to the transpose argument of glUniformMatrix4fv.
Re: How is D doing?
On Tuesday, 22 December 2015 at 21:38:22 UTC, ZombineDev wrote: On Tuesday, 22 December 2015 at 17:49:34 UTC, Jakob Jenkov wrote: On Tuesday, 22 December 2015 at 03:30:32 UTC, ShinraTensei wrote: I recently noticed massive increase in new languages for a person to jump into(Nim, Rust, Go...etc) but my question is weather the D is actually used anywhere or are there chances of it dying anytime soon. Check out Google Trends. Searches for D Tutorial still beats searches for Scala Tutorial by a big margin: https://google.com/trends/explore#q=d%20tutorial%2C%20scala%20tutorial Google Trends shows something interesting: https://google.com/trends/explore#q=%2Fm%2F01kbt7%2C%20%2Fm%2F0dsbpg6%2C%20%2Fm%2F091hdj%2C%20%2Fm%2F03j_q%2C%20C%2B%2B=q=Etc%2FGMT-2 restrict it to 'programming' to get a more accurate assessment of D. https://google.com/trends/explore#cat=0-5-31=%2Fm%2F01kbt7%2C%20%2Fm%2F0dsbpg6%2C%20%2Fm%2F091hdj%2C%20%2Fm%2F03j_q=1%2F2010%2061m=q=Etc%2FGMT-2 removed C++ because it just dwarfs the others. D, as I expected, has a massive following in Japan. I'm still not quite sure why.
Re: How is D doing?
On Thursday, 24 December 2015 at 06:10:55 UTC, H. S. Teoh wrote: On Thu, Dec 24, 2015 at 12:16:16AM +, rsw0x via Digitalmars-d-learn wrote: [...] D, as I expected, has a massive following in Japan. I'm still not quite sure why. Maybe because one of the most prolific contributors to D, esp. to dmd, (and by far) happens to be from Japan? T I'm aware of Kenji; I'm just not sure why, because I never notice that many Japanese posters here. It seems quite popular on Twitter with Japanese users though, which is why I'm familiar with its popularity in Japan.
Re: The best way to store a structure by reference
On Friday, 27 November 2015 at 08:38:29 UTC, drug wrote: I need to store a struct like a reference type. Now I use pointer for this, is it the best D way? This pointer is private and access to it is safe, but it's just unusual for me to see pointers in D code. If you own the resource, consider std.typecons.Unique or std.typecons.RefCounted; otherwise, consider std.typecons.NullableRef.
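A small sketch of the NullableRef option, with a hypothetical struct S standing in for the poster's type:

```d
import std.typecons : NullableRef, nullableRef;

struct S { int x; }

void main()
{
    S s = S(10);

    // Store the struct "by reference" without exposing a raw pointer.
    NullableRef!S r = nullableRef(&s);

    r.x = 20; // forwards to the referenced struct via alias this
    assert(s.x == 20);
    assert(!r.isNull);
}
```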
Re: D equivalent of Python's try..else
On Sunday, 22 November 2015 at 10:01:48 UTC, Kagamin wrote: As an idiomatic option there can be `finally(exit)`, `finally(success)` and `finally(failure)` that would mirror semantics of scope guards. How does this differ from just putting a scope(success) inside the try block? It only triggers if no exception is thrown; otherwise control goes to the catch.
Re: D equivalent of Python's try..else
On Saturday, 21 November 2015 at 05:45:37 UTC, Shriramana Sharma wrote: Hello. In Python one has the syntax try..except..else.. where code in the else clause will only be executed if an exception does not occur. (Ref: http://stackoverflow.com/a/22579805/1503120) In D, is there such an idiomatic/canonical construct? The D try statement only seems to support finally (apart from catch). scope(failure) can be used to run code when an exception is thrown inside the scope, and scope(success) only triggers if the scope exited successfully: http://ddili.org/ders/d.en/scope.html
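A sketch of the Python try..else equivalent using scope(success) inside the try block, as discussed later in the thread:

```d
import std.stdio;

void risky(bool fail)
{
    if (fail) throw new Exception("boom");
}

void main()
{
    try
    {
        // Runs only if the try block exits without an exception:
        // the equivalent of Python's 'else' clause.
        scope(success) writeln("no exception occurred");
        risky(false);
    }
    catch (Exception e)
    {
        writeln("handled: ", e.msg);
    }
    writeln("done");
}
```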
Re: D equivalent of Python's try..else
On Saturday, 21 November 2015 at 05:55:53 UTC, Shriramana Sharma wrote: rsw0x wrote: scope(failure) can be used to run code when an exception is thrown inside the scope, and scope(success) only triggers if the scope exited successfully http://ddili.org/ders/d.en/scope.html Thanks but I know that and it executes only at the point of scope exit. But I want some code to run immediately after the try clause but only if an exception did not occur. The Python else clause is for code which should be run only if an exception never occurred i.e. even if one occurred and it was handled. It will be executed before `finally`. Is there a D equivalent? Put the scope(success) inside the try block?
Re: char[] == null
On Wednesday, 18 November 2015 at 20:57:08 UTC, Spacen Jasset wrote: Should this be allowed? What is its purpose? It could compare two arrays, but surely not that each element of type char is null? char[] buffer; if (buffer == null) {} Slices aren't arrays: http://dlang.org/d-array-article.html
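A short sketch of why the distinction matters for slices: '==' compares contents (so comparing with null asks "is it empty?"), while 'is' compares the pointer/length pair:

```d
void main()
{
    char[] buffer;
    assert(buffer is null); // null pointer, zero length
    assert(buffer == null); // '==' compares contents: empty == empty

    buffer = ['a'];
    buffer = buffer[0 .. 0]; // non-null pointer, zero length
    assert(buffer !is null); // 'is' sees the non-null pointer...
    assert(buffer == null);  // ...but '==' still sees an empty slice
}
```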
Re: Associative arrays
On Monday, 9 November 2015 at 21:33:09 UTC, TheFlyingFiddle wrote: On Monday, 9 November 2015 at 04:52:37 UTC, rsw0x wrote: On Monday, 9 November 2015 at 04:29:30 UTC, Rikki Cattermole wrote: Fwiw, EMSI provides high quality containers backed by std.experimental.allocator. https://github.com/economicmodeling/containers I have a question regarding the implementation of the economicmodeling hashmap. Why must buckets be a power of two? Is it to be able to use the: hash & (buckets.length - 1) for index calculations or is there some other reason? I have no idea, sorry. Schott wrote them AFAIK; he might be able to respond if he sees this.
Re: foreach statement: Are there no Iterators in D?
On Sunday, 8 November 2015 at 11:57:16 UTC, J.Frank wrote: On Sunday, 8 November 2015 at 11:47:41 UTC, Rikki Cattermole wrote: opApply if you want 0 .. N iterations during a foreach statement and to have it reset each time. No, that won't help. I want to be able to iterate over a data set of infinite size. Otherwise you want ranges :) An input range is more or less an iterator as you would think of it. You only need popFront, front and empty. Ah yes, that's what I missed. Looks good. Thank you. :) FWIW, since you mentioned Java: if you're accustomed to Java 8 streams, they're very similar to D's ranges.
Re: Associative arrays
On Monday, 9 November 2015 at 04:29:30 UTC, Rikki Cattermole wrote: On 09/11/15 4:57 PM, TheFlyingFiddle wrote: [...] Nope. [...] As far as I'm aware, you are stuck using e.g. structs to emulate AA behavior. I have a VERY basic implementation here: https://github.com/rikkimax/alphaPhobos/blob/master/source/std/experimental/internal/containers/map.d Feel free to steal. Fwiw, EMSI provides high quality containers backed by std.experimental.allocator. https://github.com/economicmodeling/containers
Re: Maybe a dmd bug, what do you think ?
On Friday, 6 November 2015 at 08:48:38 UTC, user123456789abcABC wrote: Template parameter deduction in partially specialized template fails:
---
enum Bar { b, a, r }

void foo(Bar bar, T)(T t) {}

alias foob(T) = foo!(Bar.b, T);

void main()
{
    foo!(Bar.b)(8);
    foob(8); // autsch
}
---
It looks like a bug, doesn't it? I believe this is https://issues.dlang.org/show_bug.cgi?id=1807
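For what it's worth, a sketch of a workaround that restores deduction by using a forwarding wrapper function instead of the partially specialized alias:

```d
enum Bar { b, a, r }

void foo(Bar bar, T)(T t) {}

// Workaround: a forwarding wrapper restores IFTI, since deduction
// through the partially specialized alias fails (issue 1807).
void foob(T)(T t) { foo!(Bar.b, T)(t); }

void main()
{
    foo!(Bar.b)(8); // fine: partial explicit instantiation
    foob(8);        // fine: T deduced by the wrapper
}
```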
Re: Align a variable on the stack.
On Friday, 6 November 2015 at 17:55:47 UTC, arGus wrote: I did some testing on Linux and Windows. I ran the code with ten times the iterations, and found the results consistent with what has previously been observed in this thread. The code seems to run just fine on Linux, but is slowed down 10x on Windows x86. Windows (32-bit) rdmd bug.d -inline -boundscheck=off -release Default: TickDuration(14398890) Explicit: TickDuration(16) Linux (64-bit) rdmd bug.d -m64 -inline -boundscheck=off Default: TickDuration(59090876) Explicit: TickDuration(49529493) Linux (32-bit) rdmd bug.d -inline -boundscheck=off Default: TickDuration(58882306) Explicit: TickDuration(49231968) File a bug report, this probably needs Walter to look at it.
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: [...] I reduced it further: [...] These run at the exact same speed for me and produce identical assembly output from a quick glance. dmd 2.069, -O -release -inline
Re: Align a variable on the stack.
On Friday, 6 November 2015 at 01:17:20 UTC, TheFlyingFiddle wrote: On Friday, 6 November 2015 at 00:43:49 UTC, rsw0x wrote: On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: [...] I reduced it further: [...] these run at the exact same speed for me and produce identical assembly output from a quick glance dmd 2.069, -O -release -inline Are you running on windows? I tested on windows x64 and there I also get the exact same speed for both functions. linux x86-64
Re: good reasons not to use D?
On Saturday, 31 October 2015 at 23:07:46 UTC, rumbu wrote: On Saturday, 31 October 2015 at 20:55:33 UTC, David Nadlinger wrote: On Saturday, 31 October 2015 at 18:23:43 UTC, rumbu wrote: My opinion is that a decimal data type must be builtin in any modern language, not implemented as a library. "must be builtin in any modern language" – which modern languages actually have decimals as a built-in type, and what is your rationale against having them as a solid library implementation? It seems like it would only be interesting for a very fringe sector of users (finance, and only the part of it that actually deals with accounting). — David GNU C - 3 built-in decimal data types - https://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html This is a vendor-specific extension and likely exposed by GDC already.
Re: good reasons not to use D?
On Saturday, 31 October 2015 at 14:37:23 UTC, rumbu wrote: On Friday, 30 October 2015 at 10:35:03 UTC, Laeeth Isharc wrote: I'm writing a talk for codemesh on the use of D in finance. Any other thoughts? For finance stuff - missing a floating point decimal data type. Things like 1.1 + 2.2 = 3.3003 Isn't D used by a rather large banking company internally? I believe I remember this being mentioned somewhere.
Re: `clear`ing a dynamic array
On Saturday, 24 October 2015 at 13:18:26 UTC, Shriramana Sharma wrote: Hello. I had first expected that dynamic arrays (slices) would provide a `.clear()` method but they don't seem to. Obviously I can always effectively clear an array by assigning an empty array to it, but this has unwanted consequences: `[]` actually seems to allocate a new dynamic array, and any other identifiers initially pointing to the same array will still show the old contents, and thus it would no longer test true for `is` with this array. See the following code: [...] Use std.container.Array.
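A minimal sketch using std.container.Array, which does provide a clear() method:

```d
import std.container.array : Array;

void main()
{
    auto a = Array!int(1, 2, 3);
    assert(a.length == 3);

    a.clear(); // removes all contents
    assert(a.length == 0);

    a.insertBack(4); // the container remains usable afterwards
    assert(a.length == 1 && a[0] == 4);
}
```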
Re: Can't chain reduce(seed, range)
On Monday, 31 August 2015 at 01:32:01 UTC, Yuxuan Shui wrote: Why is reduce defined as 'auto reduce(S, R)(S seed, R r)', instead of reduce(R r, S seed)? I can't chain it. Maybe provide both? You might be interested in this PR https://github.com/D-Programming-Language/phobos/pull/1955 It's a bit old, left a ping to see what's up.
Re: MmFile : Is this std.mmFile BUG?
On Wednesday, 26 August 2015 at 17:30:29 UTC, Alex Parrill wrote: On Wednesday, 26 August 2015 at 15:49:23 UTC, Junichi Nakata wrote: Hi, all. I have a question. When the 'testdic' file doesn't exist, something goes wrong.
---
import std.mmfile;

int main()
{
    auto x = new MmFile("testdic", MmFile.Mode.readWrite, 0, null);
    return 0;
}
---
OSX 10.10.3, DMD64 D Compiler v2.069-devel-d0327d9. After the testdic file (size=0) was made: Segmentation Fault: 11. I don't know whether this code is typical use. Is this a Phobos BUG? or BY DESIGN? Note that mmap-ing a zero-length range is invalid on Linux. Dunno about OSX; it shouldn't segfault though. https://issues.dlang.org/show_bug.cgi?id=14968
Re: Role of D in Python and performance computing [was post on using go 1.5 and GC latency]
On Tuesday, 25 August 2015 at 07:18:24 UTC, Ola Fosheim Grøstad wrote: On Tuesday, 25 August 2015 at 05:09:56 UTC, Laeeth Isharc wrote: On Monday, 24 August 2015 at 21:57:41 UTC, rsw0x wrote: [...] Horses for courses? Eg for Andy Smith's problem of processing trade information of tens of gigs where Python was choking, I guess nobody in their right mind would use Rust. I don't think there is much difference between C, D or Rust in terms of computing. The core semantics are similar. With Rust you have the additional option of linear type checking. But Rust programmers of course want to use idiomatic linear typing as much as possible and that makes designing graph-like structures a challenge. An option implies you can turn it off; has this changed since the last time I used Rust? (Admittedly, a while back.) Memory safety doesn't seem like it's the top priority for scientific computing as much as fast turnarounds and performance... in my opinion, anyways.
Re: RAII and Deterministic Destruction
On Tuesday, 25 August 2015 at 22:35:57 UTC, Jim Hewes wrote: Although C++ can be ugly, one reason I keep going back to it rather than commit more time to reference-based languages like C# is because I like deterministic destruction so much. My question is whether D can REALLY handle this or not. I've not been sure about this for some time so now I'm just going to come out and finally ask. I know about this RAII section in the documentation: http://dlang.org/cpptod.html#raii But I don't believe that handles all cases, such as having classes as member variables of other classes. (Do the members get destructors called too?) Then there is std.typecons.Unique and std.typecons.RefCounted. With these, can I really get deterministic destruction for all cases like I would in C++? If so, it might be a good idea to emphasize this more in the documentation because I'd think people coming from C++ would be looking for this. Jim To add to what the other people said, there exists scoped!T in std.typecons to allocate a class on the stack, and Unique/RefCounted as you mentioned. AFAIK, RefCounted is in the process of being overhauled, but the user should notice no differences.
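A minimal sketch of scoped!T giving deterministic destruction for a class instance:

```d
import std.stdio;
import std.typecons : scoped;

class Resource
{
    this()  { writeln("acquired"); }
    ~this() { writeln("released"); }
}

void main()
{
    {
        auto r = scoped!Resource(); // instance lives on the stack
        writeln("using resource");
    } // destructor runs deterministically here, not at a GC collection
    writeln("after scope");
}
```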
Re: Role of D in Python and performance computing [was post on using go 1.5 and GC latency]
On Monday, 24 August 2015 at 21:20:39 UTC, Russel Winder wrote: For Python and native code, D is a great fit, perhaps more so that Rust, except that Rust is getting more mind share, probably because it is new. I'm of the opinion that Rust's popularity will quickly die when people realize it's a pain to use.
Re: post on using go 1.5 and GC latency
On Sunday, 23 August 2015 at 11:06:20 UTC, Russel Winder wrote: On Sat, 2015-08-22 at 09:27 +, rsw0x via Digitalmars-d-learn wrote: […] The performance decrease has been there since 1.4 and there is no way to remove it - write barriers are the cost you pay for concurrent collection. Go was already much slower than other compiled languages, now it probably struggles to keep up with mono. I know Walter hates it when people mention the word but: benchmarks. As soon as someone say things like it probably struggles to keep up with mono further discussion of the topic is probably not worth entertaining without getting some agreed codes and running them all on the same machine. I agree the standard Go compiler generates not well optimized code, but gccgo generally does, and generally performs at C-level speeds. Of course Java often performs far better than that, and often fails to. You have to be careful with benchmarking and performance things generally. https://groups.google.com/forum/#!msg/golang-dev/pIuOcqAlvKU/C0wooVzXLZwJ 25-50% performance decrease across the board in 1.4 with the addition of write barriers, to an already slow language. random benchmarks of Go performing 3x(+) slower than C/C++/D, some of these predate Go 1.4. https://github.com/kostya/benchmarks https://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=alllang=golang2=gccdata=u64 https://togototo.wordpress.com/2013/07/23/benchmarking-level-generation-go-rust-haskell-and-d/ (gcc-go performed the _worst_) https://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/ (and again) https://github.com/logicchains/LPATHBench/blob/master/writeup.md (once again, Go is nowhere near C/C++/D/Rust. Where is it? Hanging out with C#/Mono.) Go is slow. These aren't cherrypicked, just random samples from a quick Googling. Where is Go performing at C-level speeds? D claims this, and D shows it.
Go falls into the "fast enough" category because it is _not_ a general-purpose programming language. So unless multiple randomly sampled benchmarks are all wrong, I'm going to stick with 'Go is slow.'
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: On Sat, 2015-08-22 at 07:30 +, rsw0x via Digitalmars-d-learn wrote: [...] Not entirely true. Go is a general purpose language, it is a successor to C as envisioned by Rob Pike, Russ Cox, and others (I am not sure how much input Brian Kernighan has had). However, because of current traction in Web servers and general networking, it is clear that that is where the bulk of the libraries are. Canonical also use it for Qt UI applications. I am not sure of Google real intent for Go on Android, but there is one. [...] They also saw a 100% increase in performance when it was rewritten, and a 20% fall with this latest rewrite. I anticipate great improvement for the 1.6 rewrite. I am surprised they are retaining having only a single garbage collector: different usages generally require different garbage collection strategies. Having said that Java is moving from having four collectors, to having one, it is going to be interesting to see if G1 meets the needs of all JVM usages. [...] Until some organization properly funds a suite of garbage collectors for different performance targets, you have what there is. The performance decrease has been there since 1.4 and there is no way to remove it - write barriers are the cost you pay for concurrent collection. Go was already much slower than other compiled languages, now it probably struggles to keep up with mono.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote: On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn wrote: Yes, Go has sacrificed some compute performance in favour of latency and convenience. They have also released GC improvement plans for 1.6: https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit It is rather obvious that a building a good concurrent GC is a time consuming effort. But one that Google are entirely happy to fully fund. because Go is not a general purpose language. A concurrent GC for D would kill D. Go programs saw a 25-50% performance decrease across the board for the lower latencies. D could make some very minor changes and be capable of a per-thread GC with none of these performance drawbacks, but nobody seems very interested in it.
Re: post on using go 1.5 and GC latency
On Saturday, 22 August 2015 at 10:47:55 UTC, Laeeth Isharc wrote: On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote: [...] I didn't mean to start again the whole GC and Go vs D thing. Just that one ought to know the lay of the land as it develops. Out of curiosity, how much funding is required to develop the more straightforward kind of GCs ? Or to take what's been done and make it possible for others to use? It needn't be a single organisation I would think if there are many that would benefit and one doesn't get bogged down in a mentality of people worrying about possibly spurious free rider problems. Since the D Foundation seems under way, it seems worth asking the question first and thinking about goals without worrying for now about what seems realistic. The problem with D's GC is that there's no scaffolding there for it, so you can't really improve it. At best you could make the collector parallel. If I had the runtime hooks and language guarantees I needed I'd begin work on a per-thread GC immediately.
Re: GC and MMM
On Thursday, 20 August 2015 at 17:13:33 UTC, Ilya Yaroshenko wrote: Hi All! Does GC scan manually allocated memory? I want to use huge manually allocated hash tables and I don't want GC to scan them for performance reasons. Best regards, Ilya The GC does not scan memory allocated with malloc from core.stdc.stdlib.
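A sketch of both sides of this: malloc'd memory is invisible to the GC (good for large pointer-free tables), but a malloc'd block that does store pointers to GC memory must be registered with GC.addRange, or the pointed-to objects may be collected:

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    // Never scanned or collected by the GC: ideal for a huge
    // pointer-free hash table of integers.
    auto table = cast(ulong*) malloc(1024 * ulong.sizeof);
    scope(exit) free(table);

    // Caveat: a malloc'd block holding pointers INTO the GC heap
    // must be registered, since the GC can't see it otherwise.
    void** slot = cast(void**) malloc((void*).sizeof);
    GC.addRange(slot, (void*).sizeof);
    scope(exit) { GC.removeRange(slot); free(slot); }

    *slot = cast(void*) new int; // now kept alive via the registered range
}
```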
Re: Attributes not propagating to objects via typeinfo?
On Friday, 14 August 2015 at 15:39:39 UTC, Timon Gehr wrote: I don't understand. It is evidently fixable. E.g. if TypeInfo was just a template without the mostly redundant additional compiler support, this would be a trivial fix. After a bit of digging, it appears this was suggested already, but nobody has cared to fix it. IMO, the compiler handles far too much stuff that should be in the runtime.
Re: Attributes not propagating to objects via typeinfo?
On Thursday, 13 August 2015 at 03:46:19 UTC, rsw0x wrote: Sample code: class C{} struct S{} void main(){ import std.stdio; auto c = new shared C(); auto s = new shared S(); writeln(typeid(c)); //modulename.C writeln(typeid(s)); //shared(modulename.S)* writeln(typeid(c).next); //null writeln(typeid(s).next); //shared(modulename.S) writeln(typeid(typeid(s).next) is typeid(TypeInfo_Shared)); //true writeln(typeid(typeid(c)) is typeid(TypeInfo_Shared)); //false } What's the reason that the shared propagates to the typeinfo for the struct, but not for the class declaration? bump, is this working as intended?
Attributes not propagating to objects via typeinfo?
Sample code:

class C{}
struct S{}
void main(){
    import std.stdio;
    auto c = new shared C();
    auto s = new shared S();
    writeln(typeid(c)); //modulename.C
    writeln(typeid(s)); //shared(modulename.S)*
    writeln(typeid(c).next); //null
    writeln(typeid(s).next); //shared(modulename.S)
    writeln(typeid(typeid(s).next) is typeid(TypeInfo_Shared)); //true
    writeln(typeid(typeid(c)) is typeid(TypeInfo_Shared)); //false
}

What's the reason that the shared propagates to the typeinfo for the struct, but not for the class declaration?
Re: Sending an immutable object to a thread
On Wednesday, 22 July 2015 at 17:17:17 UTC, Frank Pagliughi wrote: On Wednesday, 22 July 2015 at 09:04:49 UTC, Marc Schütz wrote: But as long as the original pointer is still on the stack, that one _will_ keep the object alive. It is only a problem if all pointers to a GC managed object are stored in places the GC isn't informed about. Sorry, I have gotten confused. In Ali's example, the pointer to a class object (via the address-of '&' operator) actually points into the GC heap. It is *not* a pointer to a pointer, right? My reading of the Garbage web doc page is that this pointer to memory in the GC heap is sufficient (by some magic) to keep the memory alive, in and of itself. So the pointer, passed to the other thread, is sufficient to keep the memory alive, even if the original reference disappears. Or, to put it another way, taking threads out of the equation, is this safe?

class MyThing { ... }

MyThing* create_a_thing() {
    MyThing mt = new MyThing();
    do_something_with(mt);
    return &mt;
}

void main() {
    MyThing* pmt = create_a_thing();
    // ...
}

The thing will remain alive for the duration of main()?? Thanks No, this is actually returning the address of a temporary.
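A sketch of the distinction behind that last reply (hypothetical names): a class variable is already a GC-heap reference, so returning it by value is safe and keeps the object alive, whereas taking the address of the local variable escapes a stack temporary.

```d
class MyThing { int x; }

MyThing create_a_thing()
{
    MyThing mt = new MyThing(); // mt is a reference, not the object itself
    return mt;                  // safe: copies the reference, object stays alive
}

// MyThing* create_a_thing_badly()
// {
//     MyThing mt = new MyThing();
//     return &mt;              // unsafe: &mt is the address of the stack slot
// }                            //         holding the reference -- a temporary

void main()
{
    MyThing t = create_a_thing(); // reachable through t, so not collected
    assert(t !is null);
}
```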
Re: Sending an immutable object to a thread
On Wednesday, 22 July 2015 at 09:04:49 UTC, Marc Schütz wrote: On Tuesday, 21 July 2015 at 21:50:35 UTC, rsw0x wrote: On Tuesday, 21 July 2015 at 21:44:07 UTC, rsw0x wrote: [...] addendum: http://dlang.org/garbage.html [...] [...] I believe this implies that it would *not* keep the object alive. Sorry for the confusion/noise. But as long as the original pointer is still on the stack, that one _will_ keep the object alive. It is only a problem if all pointers to a GC managed object are stored in places the GC isn't informed about. correct, I managed to confuse myself :o)
Re: Sending an immutable object to a thread
On Sunday, 19 July 2015 at 17:12:07 UTC, rsw0x wrote: On Sunday, 19 July 2015 at 17:04:07 UTC, Frank Pagliughi wrote: [...] Oh, yes, pointer. Ha! I didn't even think of that. Thanks. I'm not familiar with how garbage collection works in D. If the initial reference goes out of scope, and you just have a pointer - in another thread, no less - then are you still guaranteed that the object will not disappear while the pointer exists? [...] a pointer to a pointer(or in this case, a reference) does not keep it alive. wow, I don't even remember posting this. This is (mostly) wrong, but I'm unsure whether a pointer to another pointer on the stack would correctly keep its object alive (though I believe that would just be a bug). If the pointer was pointing to a pointer on the heap, then AFAICT it would keep it alive.
Re: Sending an immutable object to a thread
On Tuesday, 21 July 2015 at 21:44:07 UTC, rsw0x wrote: On Sunday, 19 July 2015 at 17:12:07 UTC, rsw0x wrote: [...] wow, I don't even remember posting this. This is (mostly) wrong, but I'm unsure whether a pointer to another pointer on the stack would correctly keep its object alive (though I believe that would just be a bug). If the pointer was pointing to a pointer on the heap, then AFAICT it would keep it alive. addendum: http://dlang.org/garbage.html Pointers in D can be broadly divided into two categories: Those that point to garbage collected memory, and those that do not. Examples of the latter are pointers created by calls to C's malloc(), pointers received from C library routines, pointers to static data, pointers to objects on the stack, etc. and those that do not ... pointers to objects on the stack, etc. I believe this implies that it would *not* keep the object alive. Sorry for the confusion/noise.
Re: Sending an immutable object to a thread
On Sunday, 19 July 2015 at 17:04:07 UTC, Frank Pagliughi wrote: [...] Oh, yes, pointer. Ha! I didn't even think of that. Thanks. I'm not familiar with how garbage collection works in D. If the initial reference goes out of scope, and you just have a pointer - in another thread, no less - then are you still guaranteed that the object will not disappear while the pointer exists? [...] a pointer to a pointer(or in this case, a reference) does not keep it alive.
Does shared prevent compiler reordering?
I can't find anything on this in the spec.
Re: How to setup GDC with Visual D?
On Friday, 3 July 2015 at 19:17:28 UTC, Marko Grdinic wrote: [...] Have you tried using LDC? I'm unsure of GDC's support on Windows. LDC is the LLVM-based D compiler, and GDC/LDC generally produce binaries with similar performance. You can find a download link here: https://github.com/ldc-developers/ldc/releases I believe you want the ldc2-0.15.2-beta1-win64-msvc.zip package, but I don't use Windows so I'm unsure.
Re: goroutines vs vibe.d tasks
On Wednesday, 1 July 2015 at 18:09:19 UTC, Mathias Lang wrote: On Tuesday, 30 June 2015 at 15:18:36 UTC, Jack Applegame wrote: [...] In your dub.json, can you use the following:

"subConfigurations": { "vibe-d": "libasync" },
"dependencies": { "vibe-d": "~0.7.24-beta.3" }

Turns out it makes it much faster on my machine (371ms vs 1474ms). I guess it could be a good thing to investigate if we can make it the default in 0.7.25. Submit an issue on vibe.d's GitHub; they'd probably like to know about this.
Re: goroutines vs vibe.d tasks
On Tuesday, 30 June 2015 at 15:18:36 UTC, Jack Applegame wrote: Just creating a bunch (10k) of sleeping (for 100 msecs) goroutines/tasks. Compilers go: go version go1.4.2 linux/amd64 vibe.d: DMD64 D Compiler v2.067.1 linux/amd64, vibe.d 0.7.23 Code go: http://pastebin.com/2zBnGBpt vibe.d: http://pastebin.com/JkpwSe47 go version built with go build test.go vibe.d version built with dub build --build=release test.d Results on my machine: go: 168.736462ms (overhead ~ 68ms) vibe.d: 1944ms (overhead ~ 1844ms) Why is creating vibe.d tasks so slow (more than 10 times)? How do they compare if you replace the sleep with yield?
Re: is it safe to call `GC.removeRange` in dtor?
On Saturday, 27 June 2015 at 21:53:33 UTC, ketmar wrote: is it safe to call `GC.removeRange` in dtor? i believe it should be safe, so one can perform various cleanups, but documentation says nothing about guarantees It's not documented. AFAIK parts of the standard library depend on this behavior, so I'd say OK*, where the asterisk means: submit a specification update.
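For concreteness, a hedged sketch of the pattern in question (hypothetical type): a manually managed block registered in the constructor and unregistered in the destructor. Whether calling GC.removeRange from a finalizer run by the GC itself is guaranteed safe is exactly the undocumented point being asked about.

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

struct Buffer
{
    void* data;
    size_t len;

    this(size_t n)
    {
        len  = n;
        data = malloc(n);
        GC.addRange(data, len);   // let the GC scan this block for pointers
    }

    ~this()
    {
        if (data !is null)
        {
            GC.removeRange(data); // the call whose safety is being asked about
            free(data);
            data = null;
        }
    }
}

void main()
{
    auto b = Buffer(256);         // destructor runs at end of scope
}
```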
Re: Why aren't Ranges Interfaces?
On Friday, 26 June 2015 at 19:26:57 UTC, Jack Stouffer wrote: Thanks for the reply! I understand the reasoning now. On Friday, 26 June 2015 at 18:46:03 UTC, Adam D. Ruppe wrote: 2) interfaces have an associated runtime cost, which ranges wanted to avoid. They come with hidden function pointers and if you actually use it through them, you can get a performance hit. How much of a performance hit are we talking about? Is the difference between using an interface and not using one noticeable? It can be in a tight loop. http://eli.thegreenplace.net/2013/12/05/the-cost-of-dynamic-virtual-calls-vs-static-crtp-dispatch-in-c this is for C++, but it applies directly to D. Interestingly, CRTP is a gigantic C++ hack that D gets for free with alias this.
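The cost difference is easy to picture in D itself. An illustrative sketch (hypothetical types): the same algorithm written against an interface pays an indirect call per element, while the template version is resolved statically at compile time and can be inlined.

```d
interface Counter { int next(); }

class VCounter : Counter
{
    int i;
    int next() { return i++; }    // virtual: dispatched through the vtable
}

struct SCounter
{
    int i;
    int next() { return i++; }    // static: resolved at compile time
}

// Works for both -- when instantiated with SCounter, every call to
// c.next() is a direct (inlinable) call, no interface needed.
int sumN(C)(ref C c, int n)
{
    int total = 0;
    foreach (_; 0 .. n) total += c.next();
    return total;
}

void main()
{
    auto s = SCounter(0);
    assert(sumN(s, 4) == 0 + 1 + 2 + 3);
}
```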
Re: Why aren't Ranges Interfaces?
On Friday, 26 June 2015 at 18:37:51 UTC, Jack Stouffer wrote: I have been learning D over the past three weeks and I came to the chapter in Programming in D on Ranges. And I am a little confused on the choice to make Ranges based on the methods you have in the struct, but not use an interface. With all of the isInputRange!R you have to write everywhere, it just seems like it would have made a lot more sense and made everyone's jobs easier if the different types of Ranges were just interfaces that you could inherit from. The only reason I can think of to not do it this way is the weird distinction between structs and classes in D. They're essentially compile-time interfaces. I would prefer having a real name/binding implementation for this, like contract.
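A small sketch of what "compile-time interface" means here (hypothetical type): a struct qualifies as an input range purely by its shape, checked with a template constraint rather than by inheriting from anything.

```d
import std.range.primitives : isInputRange;

struct Upto
{
    int front_, limit;
    @property int front() { return front_; }
    @property bool empty() { return front_ >= limit; }
    void popFront() { ++front_; }
}

// Satisfied structurally -- no base class or interface involved.
static assert(isInputRange!Upto);

// The constraint plays the role an interface would play at runtime.
int total(R)(R r) if (isInputRange!R)
{
    int s = 0;
    foreach (e; r) s += e;
    return s;
}

void main()
{
    assert(total(Upto(0, 5)) == 0 + 1 + 2 + 3 + 4);
}
```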
Re: Are stack+heap classes possible in D?
On Friday, 19 June 2015 at 19:10:11 UTC, Shachar Shemesh wrote: On 14/06/15 04:31, Adam D. Ruppe wrote: On Sunday, 14 June 2015 at 00:52:20 UTC, FujiBar wrote: I have read that in D structs are always allocated on the stack while classes are always allocated on the heap. That's not true; it is a really common misconception. Putting a struct on the heap is trivial and built into the language: `S* s = new S();` Well, yeah. You would get a reference to a struct. The struct will be on the heap. In that narrow sense, you are right that it is possible. However, this does not behave like a normal struct. In particular, when will the destructor be called? (answer: never, not even before the memory is collected). So, no, I think D experts should avoid telling newbies it is okay to just new struct foo.[1] Shachar 1 - The counter argument is, of course, that struct destructors should not be counted upon to do anything useful anyways, as they are far from guaranteed to run even in situations where one would expect them to. This just relates to another area where D skirts truth in advertising when people say that D supports RAII. The destructor bug has been fixed for a while. As for your second point: the issue is that D doesn't separate destructors from finalizers, and honestly it feels like that was designed by someone with little knowledge of low-level memory management.
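A minimal sketch of the deterministic option this exchange circles around: if a heap-allocated struct's destructor must run at a known point, call `destroy` (or keep the struct scoped) instead of relying on the collector, which promises nothing about when, or in what order, it finalizes.

```d
import std.stdio : writeln;

struct Resource
{
    ~this() { writeln("cleaned up"); }
}

void main()
{
    auto r = new Resource();   // struct on the GC heap
    scope(exit) destroy(*r);   // run the destructor deterministically,
                               // without waiting for a collection
}
```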
Differences between C++11 atomics and core.atomics?
There's very little writing about D's core.atomic (TDPL seems to barely cover it; I assume that's because the book is aging relative to the library). Is it safe to assume it behaves similarly to C++11's atomics?
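A hedged comparison sketch: core.atomic exposes explicit memory orders much like C++11's std::atomic, with sequential consistency as the default for atomicLoad/atomicStore. The mapping shown in the comments is an assumption drawn from the similar naming, not a statement from the spec.

```d
import core.atomic;

shared int flag;
shared int data;

void producer()
{
    atomicStore!(MemoryOrder.raw)(data, 42); // roughly memory_order_relaxed
    atomicStore!(MemoryOrder.rel)(flag, 1);  // roughly memory_order_release
}

void consumer()
{
    // roughly memory_order_acquire: if we see the flag, we see the data
    if (atomicLoad!(MemoryOrder.acq)(flag) == 1)
    {
        auto v = atomicLoad!(MemoryOrder.raw)(data);
        assert(v == 42);
    }
}

void main()
{
    producer();
    consumer();
}
```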
Re: GC Destruction Order
On Tuesday, 19 May 2015 at 21:07:52 UTC, bitwise wrote: On Tue, 19 May 2015 15:36:21 -0400, rsw0x anonym...@anonymous.com wrote: On Tuesday, 19 May 2015 at 18:37:31 UTC, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up its sleeve. It actually does, check out RefCounted!T and Unique!T in std.typecons. They're sort of limited right now but undergoing a major revamp in 2.068. Any idea what the plans are? Does RefCounted become thread-safe? Correct me if I'm wrong though, but even if RefCounted itself was thread-safe, RefCounted objects could still be placed in classes, at which point you might as well use a GC'ed class instead, because you'd be back to square one with your destructor racing around on some random thread. I don't understand what you're asking here. If you hold a RefCounted resource in a GC managed object, yes, it will be tied to the GC object's lifetime. With your avoidance of the GC, I feel like you were lied to by a C++ programmer that reference counting is the way to do all memory management, when in reality reference counting is dog slow and destroys your cache locality (esp. without compiler support). Reference counting is meant to be used where you need absolute control over a resource's lifetime (IMHO), not as a general-purpose memory management tool. Bye.
Re: GC Destruction Order
On Tuesday, 19 May 2015 at 19:45:38 UTC, Namespace wrote: On Tuesday, 19 May 2015 at 19:36:23 UTC, rsw0x wrote: On Tuesday, 19 May 2015 at 18:37:31 UTC, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up it's sleeve. It actually does, check out RefCounted!T and Unique!T in std.typecons. They're sort of limited right now but undergoing a major revamp in 2.068. By the way: when is 2.068 released? After dconf http://forum.dlang.org/thread/5554d763.1080...@dawg.eu#post-5554D763.1080308:40dawg.eu
Re: GC Destruction Order
On Tuesday, 19 May 2015 at 18:37:31 UTC, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up it's sleeve. It actually does, check out RefCounted!T and Unique!T in std.typecons. They're sort of limited right now but undergoing a major revamp in 2.068.
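A minimal sketch of the deterministic alternative named in the reply, using std.typecons.RefCounted (hypothetical payload type): the payload's destructor runs exactly when the last owner goes out of scope, rather than at some GC-chosen time and order.

```d
import std.typecons : RefCounted;
import std.stdio : writeln;

struct Payload
{
    int x;
    ~this() { writeln("payload destroyed"); }
}

void main()
{
    auto a = RefCounted!Payload(1);
    {
        auto b = a;        // second owner; reference count goes to 2
    }                      // b dies here, but the payload survives
    writeln(a.x);          // still valid: a keeps the payload alive
}                          // last owner dies -> destructor runs, exactly once
```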
Re: Efficiently passing structs
On Tuesday, 5 May 2015 at 14:14:51 UTC, bitwise wrote: On Tue, 05 May 2015 00:20:15 -0400, rsw0x anonym...@anonymous.com wrote: it does, auto ref can bind to both lvalues and rvalues. Create the function with an empty template like so, import std.stdio; struct S{ } void Foo()(auto ref S s){ } void main(){ S s; Foo(s); Foo(S()); } There might be other ways that I'm unaware of. Interesting... Has this always worked? There's a couple of forum conversations about trying to get auto ref to work for non-templates. The main problem seems to be that auto ref won't work for virtual functions. I know it's worked for a while; I often use it when I'm too lazy to put attributes in and just have the templates infer them for me ;) Also, I don't see how someone could arrive at the above solution without showing up here and asking first. You're probably right, maybe someone should submit a PR to https://github.com/p0nce/d-idioms/
Re: Efficiently passing structs
On Tuesday, 5 May 2015 at 02:47:03 UTC, bitwise wrote: On Mon, 04 May 2015 00:16:03 -0400, Jonathan M Davis via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: D will move the argument if it can rather than copying it (e.g. if a temporary is being passed in), which reduces the need for worrying about copying like you tend to have to do in C++98, and I think that a lot of D code just doesn't worry about the cost of copying structs. How exactly would you move a struct? Just a memcpy without the postblit? However, if you have a large object that you know is going to be expensive to copy, you're either going to have to use const ref (and thus probably duplicate the function to allow rvalues), or you're going to need to make it a reference type rather than having all of its data live on the stack (either by making it so that the struct contains a pointer to its data or by making it a class). In general, if you're dealing with a type that is going to be expensive to copy, I'd advise making it a reference type over relying on const ref simply because it's less error-prone that way. It's trivial to forget to use ref on a parameter, and generic code won't use it, so it'll generally work better to just make it a reference type. - Jonathan M Davis Something like a Matrix4x4 lives in an awkward place between a class and a struct. Because a graphics engine may have to deal with thousands of them per frame, both copying them at function calls and allocating/collecting thousands of them per frame are unacceptable. I was reading up (DIP36, pull requests, forum) and it seems like auto ref was supposed to do something like this. Is there a reason you didn't mention it? It does; auto ref can bind to both lvalues and rvalues. Create the function with an empty template like so:

import std.stdio;

struct S{ }

void Foo()(auto ref S s){ }

void main(){
    S s;
    Foo(s);
    Foo(S());
}

There might be other ways that I'm unaware of. Why not just add rvref to D?
D is already bloated.
Struct lifetime wrt function return?
I remember reading that guaranteed RVO was part of the D standard, but I am completely unable to find anything on it in the specification. I'm also unable to find anything in it that explicitly states the lifetime of a stack-local struct returned from a function. However, it does state "Destructors are called when an object goes out of scope." So without guaranteed RVO I am quite confused. I apologize because this code will likely be poorly formatted.

import std.stdio;

struct S{
    ~this(){ writeln("Goodbye!"); }
}

S foo(){
    S s;
    return s;
}

void main() {
    S s2 = foo();
}

This says "Goodbye!" exactly once, indicating(?) that S was NRVO'd, which means the scope of s went from foo to main. However, is this a guarantee by the standard? Is an implementation allowed to define foo such that it returns by copy and calls a destructor on s, meaning "Goodbye!" would print out twice?
Re: Converting (casting?) a dynamic array to a fixed array?
On Monday, 4 May 2015 at 02:47:24 UTC, WhatMeWorry wrote: This following code works fine. A triangle is displayed. GLfloat[6] verts = [ 0.0, 1.0, -1.0, -1.0, 1.0, -1.0 ]; glGenBuffers(1, &vbo); glBindBuffer(GL_ARRAY_BUFFER, vbo); // Some of the types are: glBufferData(GL_ARRAY_BUFFER, verts.sizeof, &verts, GL_STATIC_DRAW); Then, all I do is take out the 6 so that the static array becomes a dynamic one. It compiles fine. GLfloat[] verts = [ 0.0, 1.0, -1.0, -1.0, 1.0, -1.0 ]; However, when I run it, the triangle disappears. According to OpenGL, glBufferData shows: void glBufferData( ignore, GLsizeiptr size, const GLvoid * data, ignore); So I thought the best solution would be to simply cast the dynamic array to a pointer? So I tried: glBufferData(GL_ARRAY_BUFFER, verts.sizeof, cast(const GLvoid *) verts, GL_STATIC_DRAW); and glBufferData(GL_ARRAY_BUFFER, verts.sizeof, cast(const GLvoid *) verts, GL_STATIC_DRAW); and glBufferData(GL_ARRAY_BUFFER, verts.sizeof, verts.ptr, GL_STATIC_DRAW); and glBufferData(GL_ARRAY_BUFFER, verts.sizeof, cast(const GLvoid *) verts.ptr, GL_STATIC_DRAW); and nothing but more blank screens. Any ideas? Thanks. `.sizeof` on a slice doesn't do what you think it does; it returns the size of the slice object itself (the pointer/length pair), not of the data it references, I believe.
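A small sketch of the pitfall the reply points at: `.sizeof` on a dynamic array is the size of the slice struct itself (pointer plus length), not of the elements, so the byte count has to be computed from `length`.

```d
void main()
{
    float[6] fixedArr = [0.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f];
    float[] dynamic   = [0.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f];

    // Static array: .sizeof is the whole payload, 24 bytes here.
    assert(fixedArr.sizeof == 6 * float.sizeof);

    // Dynamic array: .sizeof is just the slice header (ptr + length),
    // regardless of how many elements it references.
    assert(dynamic.sizeof == (void*).sizeof + size_t.sizeof);

    // What a call like glBufferData actually needs:
    auto byteCount = dynamic.length * float.sizeof;
    assert(byteCount == 24);
}
```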
Re: Efficiently passing structs
On Monday, 4 May 2015 at 03:57:04 UTC, bitwise wrote: I'll probably go with in ref. I think escape-proof is probably a good default. Not to mention, easier to type ;) FYI, I'm unsure how well the `scope` storage class is currently implemented, because it's in a state of flux at the moment as far as I know. `in ref` still helps document the intent of the parameter, however. It's hard to track this down exactly because scope has so many different meanings in D, making it difficult to search for - at least one of them has been deprecated.
Re: Efficiently passing structs
On Monday, 4 May 2015 at 01:58:12 UTC, bitwise wrote: If I have a large struct that needs to be passed around, like a 4x4 matrix for example, how do I do that efficiently in D? In std.datetime, in is used for most struct parameters, but I'm confused by the docs for function parameter storage classes[1]. In C++, I would pass a large struct as (const&): void foo(const Matrix4x4& m); Is in in D the same as passing by const& in C++? The documentation doesn't say anything about in being a reference, but it doesn't say that out parameters are references either, even though its usage in the example clearly shows that it is. Thanks, Bit [1] http://dlang.org/function.html#parameters Use the ref storage class. You can combine storage classes, e.g. foo(in ref int x). Unless I misunderstood you.
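A sketch of the suggested combination (hypothetical matrix type): combining storage classes so a large struct is passed by reference and treated as read-only inside the function, avoiding the 64-byte copy per call.

```d
struct Matrix4x4
{
    float[16] m;
}

// `in ref`: passed by reference (no copy of the payload) and const
// inside the function -- roughly the C++ `const Matrix4x4&` idiom.
float trace(in ref Matrix4x4 mat)
{
    return mat.m[0] + mat.m[5] + mat.m[10] + mat.m[15];
}

void main()
{
    Matrix4x4 id;
    id.m[] = 0;
    id.m[0] = id.m[5] = id.m[10] = id.m[15] = 1;
    assert(trace(id) == 4);
}
```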