Workaround for dub build-path problem
https://github.com/dlang/dub/issues/658 As noted in the above issue, dub runs in the root project directory for all packages, including dependencies. So if any project aside from the root project includes a relative path in its dub.json, the build will break due to the incorrect working directory. I can't even use $PACKAGE_DIR to fix this because it doesn't work half the time (invalid identifier). How do people deal with this?
Re: Make Dub output *.di files
On Saturday, 9 March 2019 at 19:08:22 UTC, bitwise wrote: On Saturday, 9 March 2019 at 18:39:29 UTC, bitwise wrote: Is it possible to get Dub to output import headers for compiled D files? I found this, which almost works: "dflags": [ "-H", "-Hdimport", "-op" ] The only problem is that Dub runs *above* the source directory, resulting in all my import files being nested in /source/.. So can I get Dub to run the compilation from the /source directory instead? Or is there a way to tell Dub to generate the headers for each file some other way? For now, I guess this will work: "dflags": [ "-H", "-Hdimport", "-op" ], "postBuildCommands": [ "mv import/source/root_pkg import/root_pkg", "rm -rf import/source" ] A platform independent solution would be preferred.
Re: Make Dub output *.di files
On Saturday, 9 March 2019 at 18:39:29 UTC, bitwise wrote: Is it possible to get Dub to output import headers for compiled D files? I found this, which almost works: "dflags": [ "-H", "-Hdimport", "-op" ] The only problem is that Dub runs *above* the source directory, resulting in all my import files being nested in /source/.. So can I get Dub to run the compilation from the /source directory instead? Or is there a way to tell Dub to generate the headers for each file some other way?
Make Dub output *.di files
Is it possible to get Dub to output import headers for compiled D files?
Re: Are Fibers just broken in D?
On Friday, 20 April 2018 at 18:58:36 UTC, Byron Moxie wrote: [...] In WIN32 it looks like it's leaking memory Unless there is something I'm misunderstanding, it seems that Fibers that were not run to completion won't unwind their stack, which would mean that some destructors wouldn't be called, and possibly, some memory wouldn't be freed: https://github.com/dlang/druntime/blob/86cd40a036a67d9b1bff6c14e91cba1e5557b119/src/core/thread.d#L4142 Could this have something to do with the problem?
Re: How do I set a class member value by its name in a string?
On Wednesday, 27 December 2017 at 20:04:29 UTC, Marc wrote: I'd like to set the members of a class by name at runtime, I would do something like this: __traits(getMember, myClass, name) = value; but since name is only known at runtime, I can't use __traits(). What's a workaround for this? I think you could write something using a combination of these two things: https://dlang.org/phobos/std_traits.html#FieldNameTuple https://dlang.org/phobos/std_traits.html#Fields or maybe '.tupleof': https://dlang.org/spec/struct.html#struct_properties
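A minimal sketch of combining those pieces (the names `MyClass` and `setField` are invented for illustration): since the field names are available at compile time, an unrolled foreach can compare each one against the runtime string.

```d
import std.traits : FieldNameTuple;

class MyClass { int x; int y; }

// Assign `value` to the field whose name matches `name` at runtime.
// Returns false if no assignable field of that name exists.
bool setField(T, V)(T obj, string name, V value)
{
    foreach (field; FieldNameTuple!T)   // unrolled at compile time
    {
        if (field == name)
        {
            // only emit the assignment where it actually type-checks
            static if (is(typeof(__traits(getMember, obj, field) = value)))
            {
                __traits(getMember, obj, field) = value;
                return true;
            }
        }
    }
    return false;
}

void main()
{
    auto c = new MyClass;
    assert(setField(c, "y", 42));
    assert(c.y == 42);
    assert(!setField(c, "z", 1));   // no such field
}
```

The runtime cost is a string comparison per field; for large types a generated switch over the names would be the next step.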
git workflow for D
I've finally started learning git, due to our team expanding beyond one person - awesome, right? Anyways, I've got things more or less figured out, which is nice, because being clueless about git is a big blocker for me trying to do any real work on dmd/phobos/druntime. As far as working on a single master branch works, I can commit, rebase, merge, squash, push, reset, etc, like the best of em. What I'm confused about is how all this stuff will interact when working on a forked repo and trying to maintain pull requests while everyone else's commits flood in. How does one keep their fork up to date? For example, if I fork dmd, and wait a month, do I just fetch using dmd's master as a remote, and then rebase? Will that actually work, or is that impossible across separate forks/branches? What if I have committed and pushed to my remote fork and still want to merge in the latest changes from dlang's master branch? And how does a pull request actually work? Is it a request to merge my entire branch, or just some specific files? and do I need a separate branch for each pull request, or is the pull request itself somehow isolated from my changes? Anyways, I'd just be rambling if I kept asking questions. If anyone can offer any kind of advice, or an article that explains these things concisely and effectively, that would be helpful. Thanks
Re: Undo?
On Tuesday, 10 October 2017 at 02:36:56 UTC, Mr. Jonse wrote: I require an undo feature in my code. Rather than go the regular route of using commands, I'm wondering if D can facilitate an undo system quite easily? We can think of an undo system in an app as a sort of recorder. The traditional method is to use commands and inverse-commands. By recording the commands one can "unwind" the program by applying the inverse commands. The downside is that the app must be written with this approach in mind. Storing the complete state of the app is another way which some apps use, but usually it is too much data to store. Since the only thing that one has to store from one state of the app to the next is the change in data, along with creation and deletion of data, I think one could simplify the task? Is it possible to write a generic system that records the entire state changes without much boilerplate? I'm thinking that two types of attributes would work, one for aggregates for creation and deletion of objects and one for properties to handle the data changes. If D can be used to automatically hook properties to have them report the changes to the undo system and one can properly deal with object creation and assignment, it might be a pretty sleek way to support undo. Help appreciated! I wrote an undo system for a level editor once: https://github.com/nicolasjinchereau/pizza-quest/blob/90d1a2ae75c1f80ee13cedcfb634c6de0f9528db/source/editor/History.h That class made it trivial to implement unlimited undo/redo. Each object that's passed to History::AddObjectState() has to have `Undoable` implemented so that its state can be copied and replaced later if the object needs to be restored. In D though, you don't even really have to implement `Undoable`. You can make something like AddObjectState() into a template that uses D's `__traits`, or `tupleof` to record all of an object's fields into some generic undo state.
I wrote that code so long ago that I don't really remember how I dealt with pointer ownership, but if you use GC allocation, or POD types, it should be easy. With an approach like this, you don't need a discrete set of commands, but only objects that can be serialized before an operation, and restored afterward if you don't like the result.
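As a rough sketch of that idea, assuming GC-managed or POD fields (pointer ownership is not addressed here), a snapshot type can copy every field via `.tupleof` and write them back on restore:

```d
import std.traits : Fields;

struct Player { int hp; float x, y; }

// Generic snapshot of an object's fields, recorded with .tupleof.
struct Snapshot(T)
{
    Fields!T saved;   // declares one copy of each of T's fields
    T* target;

    this(T* obj)
    {
        target = obj;
        foreach (i, ref field; (*obj).tupleof)   // unrolled per field
            saved[i] = field;
    }

    void restore()
    {
        foreach (i, ref field; (*target).tupleof)
            field = saved[i];
    }
}

void main()
{
    auto p = Player(100, 1.0f, 2.0f);
    auto snap = Snapshot!Player(&p);
    p.hp = 10;
    snap.restore();
    assert(p.hp == 100);
}
```

An undo stack would then just be an array of such snapshots; deep copies for reference-type fields are the part that still needs per-type care.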
Re: AliasSeq of T.tupleof for class and all base classes
On Saturday, 30 September 2017 at 12:42:17 UTC, Steven Schveighoffer wrote: [...] https://issues.dlang.org/show_bug.cgi?id=17870
Re: @property with 2 arguments
On Sunday, 1 October 2017 at 05:57:53 UTC, Tony wrote: "@property functions can only have zero, one or two parameters" I am looking for an example of an @property function defined with two parameters and the syntax for how it is accessed without (). And also this, which probably shouldn't actually work: import std.meta : AliasSeq; struct S { @property void prop(int a, int b){} } int main(string[] argv) { S s; s.prop = AliasSeq!(1, 2); return 0; }
Re: AliasSeq of T.tupleof for class and all base classes
On Saturday, 30 September 2017 at 12:42:17 UTC, Steven Schveighoffer wrote: I think the problem may be that derived classes' tupleof has some of the same variables as the base class? I agree it should work, but I think if it did work, it may not be what you want. You would see a lot of repeats. .tupleof doesn't return fields from base classes. assert(D2.tupleof.stringof == "tuple(c)");
AliasSeq of T.tupleof for class and all base classes
As far as I can tell, this code should compile: import std.meta : AliasSeq; import std.traits : BaseClassesTuple; class B { int a; } class D1 : B { int b; } class D2 : D1 { int c; } template TupleOf(Classes...) { static if(Classes.length > 1) alias TupleOf = AliasSeq!(Classes[0].tupleof, TupleOf!(Classes[1..$])); else static if(Classes.length == 1) alias TupleOf = AliasSeq!(Classes[0].tupleof); else alias TupleOf = AliasSeq!(); } int main(string[] argv) { alias allClasses = AliasSeq!(D2, BaseClassesTuple!D2); alias allFields = TupleOf!allClasses; return 0; } But I get this: Error: template instance AliasSeq!(b, a) AliasSeq!(b, a) is nested in both D1 and B Error: template instance main.TupleOf!(D1, B, Object) error instantiating instantiated from here: TupleOf!(D2, D1, B, Object) Any ideas? Thanks
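One possible workaround, if field *names* (rather than the field symbols themselves) are enough for the use case: collecting `FieldNameTuple` per class sidesteps the "nested in both" error, since strings carry no parent scope. `AllFieldNames` is just an illustrative name:

```d
import std.meta : AliasSeq;
import std.traits : BaseClassesTuple, FieldNameTuple;

class B  { int a; }
class D1 : B  { int b; }
class D2 : D1 { int c; }

// Field names of T and all its base classes, most-derived first.
template AllFieldNames(T)
{
    template Impl(Classes...)
    {
        static if (Classes.length == 0)
            alias Impl = AliasSeq!();
        else
            alias Impl = AliasSeq!(FieldNameTuple!(Classes[0]),
                                   Impl!(Classes[1 .. $]));
    }
    alias AllFieldNames = Impl!(T, BaseClassesTuple!T);
}

void main()
{
    static assert([AllFieldNames!D2] == ["c", "b", "a"]);
}
```

From a name you can still reach the member with `__traits(getMember, obj, name)`, so for most reflection purposes this covers what the symbol tuple would have.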
Re: detect implicitly convertible typeid's?
On Tuesday, 26 September 2017 at 19:31:56 UTC, Steven Schveighoffer wrote: [...] I just recently fixed Variant so it could accept shared data (so you could pass shared data using std.concurrency), and part of that depends on the fact that I know nothing else can point at the data (so no locking/atomics are necessary, we know the actual data is not shared). I think it's very dangerous to extract a reference to the data, it breaks all kinds of expectations. -Steve Ok - I guess I'll have to make a custom Box of some kind. Or maybe...even think outside the box ;) Thanks
Re: detect implicitly convertible typeid's?
On Tuesday, 26 September 2017 at 17:27:02 UTC, Steven Schveighoffer wrote: -Steve About Variant - I was considering a pull request for retrieving a pointer to the internal data, but figured that it was left out on purpose due to @safety. OTOH, I was looking through dmd commits, and it seems like there has been significant progress on 'scope'. So I was thinking this: struct Variant { scope inout(ubyte)[] data() inout { auto sz = type.tsize; assert(sz <= size); return store[0..sz]; } } Thoughts?
Re: detect implicitly convertible typeid's?
On Monday, 25 September 2017 at 15:12:57 UTC, Steven Schveighoffer wrote: [...] I'm not sure of how much use this is, but I do not know enough to say that it's completely useless :) Certainly, some code somewhere has to be able to understand what the actual type of something is. That code may be more equipped to do the cast than code that doesn't know. Type-boxing can be made much easier. The current implementation of Variant falls short in a few ways. Variant.coerce(T) needs the type at compile time, only works for certain types, and needs a bunch of extra code to get it done. Giving TypeInfo a 'cast' method seems like a more robust solution. I've been working through a problem though, and it seems like the only insurmountable issue is that Variant won't give you a pointer to its contents without knowing the exact type. I have a reflection system where you can get a collection of 'Property' objects that represent all properties of a class/struct. The 'Property' returns a Variant holding the return value of the property function. So even though the 'Property' gives me a Reflection object for the return value, I still can't reflect on the contents of the Variant because I can't get a pointer to it at runtime. Basically, I need to take a given class and recursively scan 2-3 levels deep to find any properties/fields that are floats so they can be animated. The only setback right now is that Variant won't give me a void* to its contents so I can reflect on it, modify it, and set it back in. A cast of builtin types is handled directly by the compiler. There is no function, for instance, to cast an int to a long. A cast of a class will go through dynamic type casting. A cast of custom types will call the opCast member (or in the case of implicit conversions, will use the alias this call). I guess this makes sense. I suppose it would be dumb to resort to checking TypeInfo every time you needed to cast a byte to an int.
This part of my question was not particularly well (at all) thought out =/ The answer to the last is that, yes, at the moment you need a custom runtime. I really don't want to maintain a custom runtime just for this. It would be nice if there was a compiler flag to specify an RTInfo template to use. Or better yet, an attribute. So this: ` module source; template MyRTInfo(T) { ... } ` dmd source.d -rtinfo "source.MyRTInfo" Or even better, this: ` module reflection; @rtinfo template RTInfo(T) { ... } module test; class Test{} // typeid(Test).rtinfo == reflection.RTInfo ` dmd reflection.d test.d So of course, dmd could complain if you specified more than one RTInfo either way. If two static libraries were built with different RTInfos, I don't think it would technically be a problem, since every TypeInfo would get its own rtinfo pointer anyway. Maybe something in the runtime could somehow warn about mismatched RTInfo types like it does about cyclic module dependencies.
Re: Is it possible to avoid call to destructor for structs?
On Monday, 25 September 2017 at 08:39:26 UTC, Adrian Matoga wrote: [...] You shouldn't store the pointer to barBuffer inside Foo. The language allows moving the structure around with a simple memcpy, so _bar is likely to point into garbage soon after it's assigned. Good point - but it's a mistake ;) 'Foo' is a class in the OP's code, so no problem. Why don't you just return *cast(Bar*)barBuffer.ptr in bar()? Lazy construction You could still emplace a Bar inside barBuffer in Foo's constructor, if needed. So you KNOW it's a class then...since structs can't have default ctors.. :P
Re: detect implicitly convertible typeid's?
On Monday, 25 September 2017 at 13:20:03 UTC, Steven Schveighoffer wrote: On 9/23/17 11:52 AM, bitwise wrote: Is it possible to tell if two objects represented by TypeInfos are convertible to each other? Basically, is there a built-in way to do this? int x; long y; assert(typeid(x).isImplicitlyConvertibleTo(typeid(y))); I would say no. There isn't any function/data to detect that. Keep in mind that TypeInfo is generated by the compiler, and only contains what the developers of the runtime have wanted it to contain. It's not a full-fledged reflection system. Something like this alongside TypeInfo.postblit and TypeInfo.destroy would actually be useful: TypeInfo.cast(void* src, void** dst, TypeInfo dstType); I wonder though...does a cast even resolve to a single function call at compile time, or is there scattered context-dependent code throughout the compiler to insert the appropriate logic? Based on current trends though, it seems like TypeInfo.postblit/destroy may be on the chopping block...any idea? However, you COULD build something in RTInfo that could place that inside the TypeInfo. That is what RTInfo was added for. The comments say it's for precise GC: https://github.com/dlang/druntime/blob/cc8edc611fa1d753ebb6a5fabbc3f37d8564bda3/src/object.d#L312-L314 Doesn't that mean my code could some day get clobbered if I put it there and precise GC is implemented? Also, don't I need to compile a custom runtime for that? Thanks
Re: Is it possible to avoid call to destructor for structs?
On Monday, 25 September 2017 at 01:46:15 UTC, Haridas wrote: [...] It all works well so far. But as soon as I create an instance of Bar inside a Dlang class (say Foo) or as part of a Dlang dynamic array, hell follows. At some point, Dlang's GC kicks in and Bar's destructor gets called from within Dlang's GC. Now since Dlang executes GC on a different thread, the destructor gets confused and segfaults. I actually wrote a C# style Dispatcher for exactly this reason. I have D classes that own non thread-safe resources. So in the destructor of the D class, I add a call that queues the destruction to the main thread's dispatcher. In your case, the postblit of Bar is still going to run and add a ref to its count when you place it in Foo, right? That means that if you don't destroy it, it will leak memory or resources. Unfortunately, my dispatcher is not production-ready yet, but you can get around this with a simpler approach. Just keep a shared container of your ref counted object type somewhere. When a destructor of a GC class runs, move the ref counted object into the trash container. Then, next time you want to create an instance of the ref counted object, you can empty the trash container at the same time. You should protect the container with a Mutex of some kind. Also, be sure that the container doesn't allocate using the GC since it will be called from class destructors. IIRC std.container.Array uses malloc, not GC, so you may be able to use that.
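A minimal sketch of that trash-container approach, with a made-up `Handle` type standing in for the ref-counted resource:

```d
import core.sync.mutex : Mutex;
import std.container.array : Array;

// `Handle` stands in for any non-thread-safe, ref-counted resource type.
struct Handle { int id; void dispose() { /* release the real resource */ } }

__gshared Array!Handle trash;   // std.container.Array allocates with malloc
__gshared Mutex trashLock;

shared static this() { trashLock = new Mutex; }

// Safe to call from a class destructor running on the GC's thread:
// it only moves the handle aside, it doesn't touch the resource itself.
void deferDispose(Handle h)
{
    trashLock.lock();
    scope(exit) trashLock.unlock();
    trash.insertBack(h);
}

// Call from the owning thread, e.g. right before creating a new handle.
void emptyTrash()
{
    trashLock.lock();
    scope(exit) trashLock.unlock();
    foreach (ref h; trash[])
        h.dispose();
    trash.clear();
}

void main()
{
    deferDispose(Handle(1));
    emptyTrash();
    assert(trash.empty);
}
```

The key property is that nothing in `deferDispose` allocates from the GC, so it is legal inside a finalizer; the actual `dispose()` only ever runs on the owning thread.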
Re: Is it possible to avoid call to destructor for structs?
On Sunday, 24 September 2017 at 17:11:26 UTC, Haridas wrote: In the following code, Bar is an element of struct Foo. Is there a way to avoid a call to ~Bar when ~Foo is getting executed? Don't construct it to begin with. struct Bar { import std.stdio : writeln; int a = 123; void boink() { writeln(a); } ~this(){ writeln("bar dtor"); } } struct Foo { ubyte[Bar.sizeof] barBuffer; Bar* _bar = null; ref Bar bar() { import std.conv : emplace; if(!_bar) { _bar = cast(Bar*)barBuffer.ptr; emplace(_bar); } return *_bar; } } int main(string[] argv) { Foo foo; foo.bar.boink(); return 0; }
detect implicitly convertible typeid's?
Is it possible to tell if two objects represented by TypeInfos are convertible to each other? Basically, is there a built-in way to do this? int x; long y; assert(typeid(x).isImplicitlyConvertibleTo(typeid(y))); Thanks
Re: extern(C) enum
On Monday, 18 September 2017 at 00:12:49 UTC, Mike Parker wrote: On Sunday, 17 September 2017 at 19:16:06 UTC, bitwise wrote: [...] I've been maintaining bindings to multiple C libraries (including Freetype 2 bindings) for 13 years now. I have never encountered an issue with an enum size mismatch. That's not to say I never will. For which platforms? I would have to actually go through the specs for each compiler of each platform to make sure before I felt comfortable accepting that int-sized enums were a de facto standard. I would be worried about iOS, for example. The following code will run fine on Windows, but crash on iOS due to the misaligned access: char data[16]; int i = 0x12345678; int* p = (int*)&data[1]; *p++ = i; *p++ = i; *p++ = i; I remember this issue presenting itself due to a poorly written serializer I used once (no idea who wrote it ;) and it makes me wonder what other subtle differences there may be. I think there may be a few (clang and gcc?) different choices of compiler for Android NDK as well.
Re: extern(C) enum
On Sunday, 17 September 2017 at 18:44:47 UTC, nkm1 wrote: On Sunday, 17 September 2017 at 17:06:10 UTC, bitwise wrote: [...] Just put the burden on the users then. It's implementation defined, so they are in a position to figure it out... This isn't something that can really be done with bindings, which are important for D to start really picking up speed. If someone goes to code.dlang.org and decides to download some FreeType2 bindings, they should just work. The memory corruption bugs that could occur due to binary incompatibility with some random copy of the original C library would be extremely hard to diagnose. They would also undermine the memory safety that a lot of people depend on when using D. For example, gcc: "Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values. On some targets, -fshort-enums is the default; this is determined by the ABI." https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Structures-unions-enumerations-and-bit-fields-implementation.html#Structures-unions-enumerations-and-bit-fields-implementation msvc++: "A variable declared as enum is an int." https://docs.microsoft.com/en-us/cpp/c-language/enum-type I was starting to think along these lines as well. With respect to the above, I'm wondering if something like this could be done: ` template NativeEnumBase(long minValue, long maxValue) { static if(platform A) { static if(minValue < 0) // need signed? { static if(maxValue > int.max) // need long? alias NativeEnumBase = long; else alias NativeEnumBase = int; } else { static if(maxValue > uint.max) // need long? alias NativeEnumBase = ulong; else alias NativeEnumBase = uint; } } else static if(platform B) { // etc... alias NativeEnumBase = long; } else { static assert(false, "unsupported compiler"); } } enum Some_C_Enum_ : NativeEnumBase!(-1, 2) { SCE_INVALID = -1, SCE_ZERO = 0, SCE_ONE = 1, SCE_TWO = 2, } ` So the question is, is there a way from inside D code to determine what the native enum size would be for a given set of min and max enum values? While C and C++ do not specify enum size, are there platform or compiler level specifications we could rely on? It's probably pretty safe to assume it's an int; people who play tricks with "-fshort-enums" deserve what's coming to them :) Agreed ;)
Re: extern(C) enum
On Saturday, 16 September 2017 at 12:34:58 UTC, nkm1 wrote: On Saturday, 16 September 2017 at 03:06:24 UTC, Timothy Foster wrote: [...] [...] So it appears I'm screwed then. Example: typedef enum FT_Size_Request_Type_ { FT_SIZE_REQUEST_TYPE_NOMINAL, FT_SIZE_REQUEST_TYPE_REAL_DIM, FT_SIZE_REQUEST_TYPE_BBOX, FT_SIZE_REQUEST_TYPE_CELL, FT_SIZE_REQUEST_TYPE_SCALES, FT_SIZE_REQUEST_TYPE_MAX } FT_Size_Request_Type; typedef struct FT_Size_RequestRec_ { FT_Size_Request_Type type; FT_Long width; FT_Long height; FT_UInt horiResolution; FT_UInt vertResolution; } FT_Size_RequestRec; FT_Size_Request_Type_ could be represented by char. Maybe the compiler makes it an int, maybe not. Maybe the compiler makes 'FT_Size_Request_Type_' char sized, but then pads 'FT_Size_RequestRec_' to align 'width' to 4 bytes...or maybe not. Maybe a member of 'FT_Size_Request_Type_' sits right before a char or bool in some struct...so I can't rely on padding. I don't really see a way to deal with this aside from branching the entire library and inserting something like 'FT_SIZE_REQUEST_TYPE__FORCE_INT = 0x7FFFFFFF' into every enum in case the devs used it in a struct.
Re: extern(C) enum
On Friday, 15 September 2017 at 19:35:50 UTC, nkm1 wrote: On Friday, 15 September 2017 at 19:21:02 UTC, Timothy Foster wrote: I believe C enum size is implementation defined. A C compiler can pick the underlying type (1, 2, or 4 bytes, signed or unsigned) that fits the values in the enum. No, at least, not C99. See 6.4.4.3: "An identifier declared as an enumeration constant has type int". You must be thinking about C++. Thanks - this works for me. The bindings are for an open source C library. So I guess I'm safe as long as I can be sure I'm using a C99 compiler and strongly typing as int in D. C++ seems to be a much more complicated situation, but it appears that for 'enum class' or 'enum struct' the underlying type is int, even when it's not specified. § 7.2: [1] "The enum-keys enum class and enum struct are semantically equivalent; an enumeration type declared with one of these is a scoped enumeration, and its enumerators are scoped enumerators." [2] "For a scoped enumeration type, the underlying type is int if it is not explicitly specified." [1][2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf Shame that even relatively new C++ code tends to use unscoped enums.
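Given that both GCC (without -fshort-enums) and MSVC size plain enums as int, the D side of a binding can at least pin the base type explicitly so the struct layout matches that assumption. Note int is already D's default base type, so this is mostly documentation; the FT_Long/FT_UInt fields are simplified to int here:

```d
// Explicit base type: the enum field occupies exactly 4 bytes, matching a
// C compiler that sizes plain enums as int.
enum FT_Size_Request_Type : int
{
    FT_SIZE_REQUEST_TYPE_NOMINAL,
    FT_SIZE_REQUEST_TYPE_REAL_DIM,
    FT_SIZE_REQUEST_TYPE_BBOX,
}

struct FT_Size_RequestRec
{
    FT_Size_Request_Type type;
    int width;   // lands 4-byte aligned right after the enum
}

static assert(FT_Size_Request_Type.sizeof == int.sizeof);
static assert(FT_Size_RequestRec.type.offsetof == 0);
static assert(FT_Size_RequestRec.width.offsetof == 4);

void main() {}
```

The static asserts cost nothing at runtime and will fail loudly if a future compiler or platform breaks the layout assumption.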
Re: Ranges suck!
On Thursday, 14 September 2017 at 23:53:20 UTC, Your name wrote: [...] I understand your frustration. The fact that "inout" is actually a keyword makes it hard not to think that some very strange fetishes were at play during the creation of this language. As a whole though, the language is very usable, and has many great features not present in similar languages. Just this morning, I was able to replace a fairly large and ugly pile of code with this: import std.utf; foreach(c; myString.byUTF!dchar) { // ... }
Re: extern(C) enum
On Friday, 15 September 2017 at 07:24:34 UTC, Jonathan M Davis wrote: On Friday, September 15, 2017 04:15:57 bitwise via Digitalmars-d-learn wrote: I translated the headers for FreeType2 to D, and in many cases, enums are used as struct members. If I declare an extern(C) enum in D, is it guaranteed to have the same underlying type and size as it would for a C compiler on the same platform? extern(C) should have no effect on enums. It's for function linkage, and enums don't even have an address, so they don't actually end up in the program as a symbol. And since C's int and D's int are the same on all platforms that D supports (we'd have c_int otherwise, like we have c_long), any enum with a base type of int (which is the default) will match what's in C. - Jonathan M Davis I'm confused...is it only C++ that has implementation defined enum size? I thought that was C as well.
Re: extern(C) enum
On Friday, 15 September 2017 at 06:57:31 UTC, rikki cattermole wrote: On 15/09/2017 5:15 AM, bitwise wrote: I translated the headers for FreeType2 to D, and in many cases, enums are used as struct members. If I declare an extern(C) enum in D, is it guaranteed to have the same underlying type and size as it would for a C compiler on the same platform? No need for extern(C). Be as specific as you need, but most likely you won't need to (e.g. first is automatically 0). enum Foo : int { Start = 0, StuffHere, End } This is for D/C interop though. enum E { A, B, C } struct S { E e; } So based on the underlying type chosen by each compiler, the size of struct S could change. I can't strongly type the D enums to match, because I don't know what size the C compiler will make 'E', unless D somehow guarantees the same enum sizing as the C compiler.
extern(C) enum
I translated the headers for FreeType2 to D, and in many cases, enums are used as struct members. If I declare an extern(C) enum in D, is it guaranteed to have the same underlying type and size as it would for a C compiler on the same platform?
Re: Is compiling for Android/iOS possible?
On Wednesday, 6 September 2017 at 18:34:28 UTC, Timothy Foster wrote: I'm just wondering if I made an application for Windows/Mac/Linux if I could get it to also work on mobile devices, or would I have to rewrite the application in another language to get it to work? If it's possible, what should I be looking at to get something like a "Hello World" example to show on my phone using D? For iOS, there's this: https://github.com/smolt/ldc-iphone-dev I'm not sure if it's production ready though.
Re: string to character code hex string
On Saturday, 2 September 2017 at 18:28:02 UTC, Moritz Maxeiner wrote: [...] Code will eventually look something like the following. The point is to be able to retrieve the exported function at runtime only by knowing what the template arg would have been. export extern(C) const(Reflection) dummy(string fqn)(){ ... } int main(string[] argv) { enum ARG = "AA"; auto hex = toAsciiHex(ARG); // original writeln(dummy!ARG.mangleof); // reconstructed at runtime auto remangled = dummy!"".mangleof; remangled = remangled.replaceFirst( "_D7mainMod17", "_D7mainMod" ~ (17 + hex.length).to!string); remangled = remangled.replaceFirst( "VAyaa0_", "VAyaa" ~ ARG.length.to!string ~ "_" ~ hex); writeln(remangled); return 0; }
Re: string to character code hex string
On Saturday, 2 September 2017 at 18:28:02 UTC, Moritz Maxeiner wrote: In UTF8: --- utfmangle.d --- void fun_ༀ() {} pragma(msg, fun_ༀ.mangleof); --- --- $ dmd -c utfmangle.d _D6mangle7fun_ༀFZv --- Only universal character names for identifiers are allowed, though, as per [1] [1] https://dlang.org/spec/lex.html#identifiers What I intend to do is this though: void fun(string s)() {} pragma(msg, fun!"ༀ".mangleof); which gives: _D7mainMod21__T3funVAyaa3_e0bc80Z3funFNaNbNiNfZv where "e0bc80" is the 3 bytes of "ༀ". The function will be internal to my library. The only thing provided from outside will be the string template argument, which is meant to represent a fully qualified type name.
Re: string to character code hex string
On Saturday, 2 September 2017 at 17:45:30 UTC, Moritz Maxeiner wrote: If this (unnecessary waste) is of concern to you (and from the fact that you used ret.reserve I assume it is), then the easy fix is to use `sformat` instead of `format`: Yes, thanks. I'm going to go with a variation of your approach: private string toAsciiHex(string str) { import std.ascii : lowerHexDigits; import std.exception: assumeUnique; auto ret = new char[str.length * 2]; int i = 0; foreach(c; str) { ret[i++] = lowerHexDigits[(c >> 4) & 0xF]; ret[i++] = lowerHexDigits[c & 0xF]; } return ret.assumeUnique; } I'm not sure how the compiler would mangle UTF8, but I intend to use this on one specific function (actually the 100's of instantiations of it). It will be predictably named though. Thanks!
Re: string to character code hex string
On Saturday, 2 September 2017 at 17:41:34 UTC, Ali Çehreli wrote: You're right, but I think there is no intention of interpreting the result as UTF-8. "f62026" is just to be used as "f62026", which can be converted byte-by-byte back to "ö…". That's how I understand the requirement anyway. Ali My intention is to compute the mangling of a D template function that takes a string as a template parameter without having the symbol available. I think that means that converting each byte of the string to hex and tacking it on would suffice.
Re: string to character code hex string
On Saturday, 2 September 2017 at 15:53:25 UTC, bitwise wrote: [...] This seems to work well enough. string toAsciiHex(string str) { import std.array : appender; auto ret = appender!string(null); ret.reserve(str.length * 2); foreach(c; str) ret.put(format!"%x"(c)); return ret.data; }
string to character code hex string
I need to convert a string of characters to a string of their hex representations. "AAA" -> "414141" This seems like something that would be in the std lib, but I can't find it. Does it exist? Thanks
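One ready-made option, if a detour through std.digest is acceptable: `toHexString` hex-encodes any ubyte slice, and `std.string.representation` exposes a string's bytes.

```d
import std.digest : toHexString;
import std.string : representation;

void main()
{
    string s = "AAA";
    // representation: string -> immutable(ubyte)[], then hex-encode it
    assert(s.representation.toHexString == "414141");
}
```

`toHexString` produces uppercase digits by default (irrelevant for "414141"); pass `LetterCase.lower` as a template argument if lowercase is needed.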
Re: traits for function having actual source declaration?
On Friday, 1 September 2017 at 17:26:11 UTC, ketmar wrote: [...] they *should* listen. anyone who doesn't is just asking for trouble, and i see no reason to guard 'em further. Yeah...eventually came to the same conclusion ;) Thanks
Re: traits for function having actual source declaration?
On Friday, 1 September 2017 at 14:38:38 UTC, bitwise wrote: When I'm using __traits(allMembers), I get all the invisible functions added by the compiler as well: "__ctor", "__xdtor", "__cpctor", etc. Is there a way to filter them out? dlang's "Lexical" page says: "Identifiers starting with __ (two underscores) are reserved." So I suppose I could just filter out functions with leading underscores, but it still seems unreliable, as people may not listen.
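A sketch of that filter (the names `isUser` and `userMembers` are made up), which at least strips everything the compiler currently generates:

```d
import std.meta : Filter;

class C { int x; this() {} ~this() {} void f() {} }

// Reject reserved identifiers, i.e. anything starting with two underscores.
enum isUser(string m) = m.length < 2 || m[0 .. 2] != "__";

template userMembers(T)
{
    alias userMembers = Filter!(isUser, __traits(allMembers, T));
}

void main()
{
    import std.algorithm.searching : canFind;
    static assert( [userMembers!C].canFind("x"));
    static assert( [userMembers!C].canFind("f"));
    static assert(![userMembers!C].canFind("__ctor"));
}
```

Since the language reserves the `__` prefix, anyone declaring such members is already in undefined territory, so in practice this filter is as reliable as it needs to be.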
traits for function having actual source declaration?
When I'm using __traits(allMembers), I get all the invisible functions added by the compiler as well: "__ctor", "__xdtor", "__cpctor", etc. Is there a way to filter them out?
Re: struct field initialization
On Wednesday, 16 August 2017 at 18:17:36 UTC, kinke wrote: On Wednesday, 16 August 2017 at 18:11:05 UTC, bitwise wrote: If I define a non-default constructor for a struct, are the fields initialized to T.init by the time it's called? The struct instance is initialized with T.init before invoking the constructor. Thanks for the quick response. In regards to my second question, the "value = T(args);" variant seems to work, even with a const T, without calling a postblit - so I guess that's what I'll use.
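For anyone landing on this thread, a quick self-contained check of that first point:

```d
struct S
{
    int a = 41;

    this(int add)
    {
        assert(a == 41);   // field already holds its initializer here
        a += add;
    }
}

void main()
{
    auto s = S(1);
    assert(s.a == 42);
}
```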
Re: struct field initialization
On Wednesday, 16 August 2017 at 18:11:05 UTC, bitwise wrote: [...] I'm asking this because I need to forward args to a container's node's value. Something like this: struct Node(T) { int flags; T value; // maybe const this(Args...)(int flags, auto ref Args args) { this.flags = flags; // this? emplace(&value, args); // or this? value = T(args); // ? } } struct Container(T) { Node!T[] nodes; void add(Args...)(auto ref Args args) { int flags = 1234; auto p = cast(Node!T*)malloc(Node!T.sizeof); nodes ~= emplace(p, flags, args); } }
struct field initialization
If I define a non-default constructor for a struct, are the fields initialized to T.init by the time it's called? or am I responsible for initializing all fields in that constructor? ..or do structs follow the same rules as classes? https://dlang.org/spec/class.html#field-init Thanks
Re: __dtor vs __xdtor
On Friday, 11 August 2017 at 17:20:18 UTC, HyperParrow wrote: [...] I made a mistake but it's not about i, which is a global. I meant "other.__dtor." just before the last assert. This doesn't change the results. hmm...indeed ;) On Friday, 11 August 2017 at 17:24:17 UTC, HyperParrow wrote: [...] is it possible to have only __dtor without also having __xdtor? Like, if I want to call a struct's destructor, do I have to check for both, or can I just always check for, and call __xdtor? Always use __xdtor unless you know there's no other destructor to call. Ok cool, thanks.
Re: __dtor vs __xdtor
On Friday, 11 August 2017 at 17:06:40 UTC, HyperParrow wrote: [...]

int i;

struct Foo {
    template ToMix() { ~this() { i += 2; } }
    ~this() { ++i; }
    mixin ToMix;
}

void main() {
    Foo* foo = new Foo;
    foo.__xdtor;
    assert(i == 3);

    Foo* other = new Foo;
    foo.__dtor;
    assert(i == 4); // and not 6 ;)
}

I think you mean assert(i == 1) for the second one, right?
Re: __dtor vs __xdtor
On Friday, 11 August 2017 at 17:02:20 UTC, HyperParrow wrote: On Friday, 11 August 2017 at 16:53:02 UTC, bitwise wrote: What do they do? What's the difference? Thanks __xdtor() also calls the __dtor()s that are mixed in with template mixins, while __dtor() only calls the __dtor() that matches the normal ~this(){} Ok thanks. I don't understand why you would ever want to call __dtor then... is it possible to have only __dtor without also having __xdtor? Like, if I want to call a struct's destructor, do I have to check for both, or can I just always check for, and call, __xdtor?
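Restating the answer with a sketch (the increment amounts are arbitrary, just chosen to tell the two destructors apart):

```d
int calls;

struct Foo
{
    template ToMix()
    {
        ~this() { calls += 2; }  // destructor added via template mixin
    }
    ~this() { calls += 1; }      // the destructor declared directly
    mixin ToMix;
}

void main()
{
    Foo f;

    f.__dtor();   // only the ~this() written directly in Foo
    assert(calls == 1);

    f.__xdtor();  // the aggregate destructor: mixin dtor + normal dtor
    assert(calls == 3);

    calls = 0;    // note f's destructor also runs normally on scope exit
}
```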
__dtor vs __xdtor
What do they do? What's the difference? Thanks
Re: lambda function with "capture by value"
On Saturday, 5 August 2017 at 18:17:49 UTC, Simon Bürger wrote: If a lambda function uses a local variable, that variable is captured using a hidden this-pointer. But this capturing is always by reference. Example: int i = 1; auto dg = (){ writefln("%s", i); }; i = 2; dg(); // prints '2' Is there a way to make the delegate "capture by value" so that the call prints '1'? Note that in C++, both variants are available using [&]() { printf("%d", i); } and [=]() { printf("%d", i); } respectively. I asked about this a couple of day ago: http://forum.dlang.org/thread/ckkswkkvhfojbcczi...@forum.dlang.org The problem is that the lambda captures the entire enclosing stack frame. This is actually a bug because the lambda should only capture the enclosing *scope*, not the entire stack frame of the function. So even if you were to copy `i` into a temporary in some nested scope where a lambda was declared (this works in C# for example), that temporary would still reside in the same stack frame as the outer `i`, which means there would still be only one copy of it. There is a workaround in Timon's post here: http://forum.dlang.org/post/om2aqp$2e9t$1...@digitalmars.com Basically, that workaround wraps the nested scope in another lambda to force the creation of a separate stack frame.
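Concretely, the workaround looks something like this: an immediately-invoked outer lambda takes the variable as a by-value parameter, so the inner delegate closes over that copy rather than the original stack frame.

```d
import std.stdio;

void main()
{
    int i = 1;

    // Normal capture: the delegate sees the enclosing frame by reference.
    auto byRef = () => i;

    // Workaround: copy i into the parameter of an outer lambda and
    // return an inner delegate that closes over the copy.
    auto byVal = ((int copy) => () => copy)(i);

    i = 2;
    writeln(byRef()); // 2
    writeln(byVal()); // 1
}
```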
Re: returning D string from C++?
On Sunday, 6 August 2017 at 16:46:40 UTC, Mike Parker wrote: On Sunday, 6 August 2017 at 16:23:01 UTC, bitwise wrote: So I guess you're saying I'm covered then? I guess there's no reason I can think of for the GC to stop scanning at the language boundary, let alone any way to actually do that efficiently. It's not something you can rely on. If the pointer is stored in memory allocated from the C heap, then the GC will never see it and can pull the rug out from under you. Best to make sure it's never collected. If you don't want to keep a reference to it on the D side, then call GC.addRoot on the pointer. That way, no matter where you hand it off, the GC will consider it as being live. When you're done with it, call GC.removeRoot. I was referring specifically to storing gc_malloc'ed pointers on the stack, meaning that I'm calling a C++ function on a D call stack, and storing the pointer as a local var in the C++ function before returning it to D. The more I think about it, the more I think it has to be ok to do. Unless D stores [ESP] to some variable at each extern(*) function call, then the GC would have no choice but indifference as to what side of the language boundary it was scanning on. If it did, I imagine it would say so here: https://dlang.org/spec/cpp_interface.html#memory-allocation
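Mike's addRoot suggestion would look roughly like this; cppTakesOwnership is a hypothetical C++ function standing in for whatever stores the pointer outside GC-visible memory:

```d
import core.memory : GC;

// Hypothetical C++ entry point that stashes the pointer in C-heap
// memory the GC never scans.
extern (C++) void cppTakesOwnership(const(char)* p);

void handOff(string s)
{
    // Pin the allocation: treat it as live even with no D reference,
    // and keep the block from ever being moved.
    GC.addRoot(s.ptr);
    GC.setAttr(s.ptr, GC.BlkAttr.NO_MOVE);
    cppTakesOwnership(s.ptr);
}

void releaseFromCpp(const(char)* p)
{
    // Call once the C++ side is done with the pointer.
    GC.clrAttr(p, GC.BlkAttr.NO_MOVE);
    GC.removeRoot(p);
}
```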
Re: returning D string from C++?
On Sunday, 6 August 2017 at 05:31:51 UTC, Marco Leise wrote: Am Sat, 05 Aug 2017 20:17:23 + schrieb bitwise: [...] In due diligence, you are casting an ANSI string into a UTF-8 string which will result in broken Unicode for non-ASCII window titles. In any case it is better to use the wide-character versions of Windows-API functions nowadays. [...] Good point. (pun not originally intended ;) All serious projects I have done for Windows thus far have actually been in C# (default UTF-16), so I guess I've been spoiled. Second I'd like to mention that you should have set ret.length = GetWindowText(_hwnd, (char*)ret.ptr, ret.length); Currently your length is anything from 1 to N bytes longer than the actual string[2], which is not obvious because any debug printing or display of the string stops at the embedded \0 terminator. [...] Totally right! I looked right at this info in the docs..not sure how I still got it wrong ;) Thanks
Re: returning D string from C++?
On Saturday, 5 August 2017 at 21:18:29 UTC, Jeremy DeHaan wrote: On Saturday, 5 August 2017 at 20:17:23 UTC, bitwise wrote: I have a Windows native window class in C++, and I need a function to return the window title. [...] As long as you have a reachable reference to the GC memory SOMEWHERE, the GC won't reclaim it. It doesn't have to be on the stack as long as it is reachable through the stack. I'm basically worried about this happening:

virtual DString getTitle() const {
    DString ret;
    ret.length = GetWindowTextLength(_hwnd) + 1;
    ret.ptr = (const char*)gc_malloc(ret.length, 0xA, NULL);
    // <-- gc collection on another thread
    GetWindowText(_hwnd, (char*)ret.ptr, ret.length); // BOOM
    return ret;
}

So I guess you're saying I'm covered then? I guess there's no reason I can think of for the GC to stop scanning at the language boundary, let alone any way to actually do that efficiently. Thanks
returning D string from C++?
I have a Windows native window class in C++, and I need a function to return the window title. So in D, I have this:

// isn't D's ABI stable enough to just return this from C++
// and call it a string in the extern(C++) interface? anyways..
struct DString {
    size_t length;
    immutable(char)* ptr;

    string toString() { return ptr[0..length]; }
    alias toString this;
}

extern(C++) interface NativeWindow {
    DString getTitle() const;
}

and in C++, this:

class NativeWindow {
public:
    struct DString {
        size_t length;
        const char* ptr;
    };

    virtual DString getTitle() const {
        DString ret;
        ret.length = GetWindowTextLength(_hwnd) + 1;
        ret.ptr = (const char*)gc_malloc(ret.length, 0xA, NULL);
        GetWindowText(_hwnd, (char*)ret.ptr, ret.length);
        return ret;
    }
};

So while it's not generally safe to _store_ pointers to D's GC allocated memory exclusively in C++, I've read that D's GC scans the stack, and getTitle() is being called from D (and so, is on that stack..right?). So is the string I'm returning safe from GC collection? Thanks
Re: Mallocator and 'shared'
On Saturday, 11 February 2017 at 04:32:37 UTC, Michael Coulombe wrote: On Friday, 10 February 2017 at 23:57:18 UTC, bitwise wrote: [...] A shared method means that it can only be called on a shared instance of the struct/class, which will have shared fields. A shared method should be logically thread-safe, but that cannot be guaranteed by the compiler. A non-shared method can touch shared memory, and thus should be thread-safe if it does, but can only be called on a non-shared instance with possibly non-shared fields. shared/non-shared methods don't mix because you generally need to use different, less-efficient instructions and algorithms to be thread-safe and scalable in a shared method. In the case of Mallocator, there are no fields, so as far as I can tell the attribute doesn't do much except for documentation and for storing references to it in other shared structs/objects. Thanks for the explanation, but I'm still confused. It seems like you're saying that 'shared' should mean both 'thread safe' and 'not thread safe' depending on context, which doesn't make sense. Example:

shared A a;

struct A {
    int x, y;
    void foo() shared { a.x = 1; }
}

int main(string[] argv) {
    a.x = 5;
    a.y = 5;
    a.foo();
    return 0;
}

Qualifying 'a' with 'shared' means that it's shared between threads, which means that accessing it is _not_ thread safe. Since the method 'foo' accesses 'a', 'foo' is also _not_ thread safe. Given that both the data and the method are 'shared', a caller should know that race conditions are possible and that they should acquire a lock before accessing either of them...or so it would seem. I imagine that qualifying a method with 'shared' should mean that it can access shared data, and hence, is _not_ thread safe. This would prevent access to 'shared' data from any non-'shared' context, without some kind of bridge/cast that a programmer would use when they knew that they had acquired the lock or ownership of the data. 
Although this is what would make sense to me, it doesn't seem to match the current implementation of 'shared', or what you're saying. It seems that methods qualified with 'shared' may be what you're suggesting matches up with the 'bridge' I'm trying to describe, but again, using the word 'shared' to mean both 'thread safe' and 'not thread safe' doesn't make sense. Firstly, because the same keyword should not mean two strictly opposite things. Also, if a 'shared' method is supposed to be thread safe, then the fact that it has access to shared data is irrelevant to the caller. So 'shared' as a method qualifier doesn't really make sense. What would make more sense is to have a region where 'shared' data could be accessed - maybe something like this:

struct S {
    shared int x;
    Lock lk;

    private void addNum(int n) shared {
        x += n;
    }

    int add(int a, int b) {
        shared {
            lk.lock();
            addNum(a);
            addNum(b);
            lk.unlock();
        }
    }
}

So above,
1) 'x' would be shared, and mutating it would not be thread safe.
2) 'addNum' would have access to 'shared' data, and also be non-thread-safe.
3) 'x' and 'addNum' would be inaccessible from 'add' since they're 'shared'.
4) a 'shared' block inside 'add' would allow access to 'x' or 'addNum', with the responsibility being on the programmer to lock.
5) alternatively, 'shared' data could be accessed from within a 'synchronized' block.

I thought 'shared' was a finished feature, but it's starting to seem like it's a WIP. This kind of feature seems like it has great potential, but is mostly useless in its current state. After more testing with shared, it seems that 'shared' data is mutable from many contexts, from which it would be unsafe to mutate it without locking first, which basically removes any guarantee that would make 'shared' useful. Again, tell me if I'm wrong here, but there seem to be a lot of holes in 'shared'. Thanks
Mallocator and 'shared'
https://github.com/dlang/phobos/blob/cd7846eb96ea7d2fa65ccb04b4ca5d5b0d1d4a63/std/experimental/allocator/mallocator.d#L63-L65 Looking at Mallocator, the use of 'shared' doesn't seem correct to me. The logic stated in the comment above is that 'malloc' is thread safe, and therefore all methods of Mallocator can be qualified with 'shared'. I thought that qualifying a method as 'shared' meant that it _can_ touch shared memory, and is therefore _not_ thread safe. The following program produces this error: "Error: shared method Mallocator.allocate is not callable using a non-shared object"

import std.experimental.allocator.mallocator;

int main(string[] argv) {
    Mallocator m;
    m.allocate(64);
    return 0;
}

And the above error is because it would be un(thread)safe to call those methods from a non-shared context, due to the fact that they may access shared memory. Am I wrong here?
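For what it's worth, the error above goes away if you call through the shared singleton the module provides instead of declaring your own non-shared Mallocator (assuming I'm reading mallocator.d right, instance is declared shared precisely so its shared methods are callable):

```d
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // Mallocator.instance is a shared instance, so calling its
    // shared methods on it type-checks.
    void[] buf = Mallocator.instance.allocate(64);
    scope (exit) Mallocator.instance.deallocate(buf);
    assert(buf.length == 64);
}
```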
Re: struct: default construction or lazy initialization.
On Wednesday, 1 February 2017 at 23:24:27 UTC, kinke wrote: It's not that bad. D just doesn't support a default ctor for structs at all and simply initializes each instance with T.init. Your `s2` initialization is most likely seen as explicit default initialization (again with T.init). Destructing both instances is exactly what should happen. I was going to add a point about this.

1| S s1;
2| S s2 = S();

The effect of lines 1 and 2 is exactly the same - which is that the lhs ends up with S.init. S.this() should either be called at line 2, or the syntax of line 2 should be forbidden on the grounds that default struct ctors cannot be declared.
Re: struct: default construction or lazy initialization.
On Wednesday, 1 February 2017 at 01:52:40 UTC, Adam D. Ruppe wrote: On Wednesday, 1 February 2017 at 00:43:39 UTC, bitwise wrote: Container!int c; // = Container!int() -> can't do this. Can you live with Container!int c = Container!int.create(); because D supports that and can force the issue with `@disable this();` which causes compilation to fail any place where it isn't explicitly initialized. I suppose this works, but to be honest, I wouldn't use it. I really don't feel like I'm asking to "have my cake and eat it too" by expecting a proper solution for this. The current behavior doesn't even really make sense. Example:

struct S {
    // this(){}
    this(Args...)(auto ref Args args) { writeln("ctor"); }
    ~this() { writeln("dtor"); }
}

void foo(Args...)(auto ref Args args) { writeln("foo"); }

int main(string[] argv) {
    S s;
    S s2 = S();
    foo();
    return 0;
}

outputs:

foo
dtor
dtor

I would expect that I could at least have this() invoked for 's2', but I can't even declare it at all. So while 'S()' looks like a constructor call, it doesn't call one. Instead, the current behavior forces explicit initialization of objects, pointless boilerplate, or unorthodox/unreliable workarounds. Even more confusingly, the above example prints "foo" but not "ctor", because calling variadic functions with no arguments is fine - except for constructors. Finally, destructors are currently called on objects which were never constructed. You can't even call what's going on with structs RAII at this point.
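Spelled out, Adam's @disable suggestion looks roughly like this (create is just a conventional name, nothing special to the compiler):

```d
struct Container(T)
{
    T[] payload;

    @disable this();  // using an uninitialized Container is now a compile error

    private this(size_t n) { payload = new T[n]; }

    static Container create(size_t n = 4)
    {
        return Container(n);  // the only way to get an instance
    }
}

void main()
{
    // Container!int c;              // error: default construction disabled
    auto c = Container!int.create(); // explicit initialization is forced
    assert(c.payload.length == 4);
}
```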
Re: struct: default construction or lazy initialization.
C#'s "Dispose" pattern comes to mind here. You don't leak memory, you just leak file handles and graphics resources instead when you forget to explicitly call Dispose().
Re: struct: default construction or lazy initialization.
On Tuesday, 31 January 2017 at 23:52:31 UTC, Ali Çehreli wrote: On 01/31/2017 03:15 PM, bitwise wrote: [...] Thanks for the response, but this doesn't really solve the problem. > If the object is defined at module scope as shared static > immutable It is indeed possible to initialize immutable objects by pure functions as done inside shared static this() below: I didn't mean that I wanted my object shared-static-immutable, but only that a solution would have to account for that possibility. Yes, the situation is different from C++ but it's always possible to call a function (which constructor is one) to make the object. I'm saying that a caller should not have to explicitly initialize an object before use, but that a programmer should not have to add boilerplate to deal with zombie objects all over the place either. A container, for example:

struct Container(T) {
    void pushBack(T); // ok: mutable method, can lazily initialize payload.

    // not ok: container may be immutable
    Range!(const T) opSlice() const;
    Iterator!(const T) find(T) const;
    bool empty() const;
    size_t count() const;
}

Container!int c; // = Container!int() -> can't do this.

if(c.empty)      // can't initialize here either..
    c.pushBack(1);

Such innocent looking code will fail without boilerplate inserted everywhere.
- I can't lazily initialize the container in "empty()".
- I can't pre-emptively initialize it in a default constructor.
This problem causes the propagation of null checks all over the place. Objects returned from the container will have to have a "zombie" state as well, and check validity at each use. I wouldn't classify this as "a difference", but as a hole. Although I don't remember where, I recently saw a discussion about how "mutable" may possibly be implemented. IIRC, there was no solution stated in that discussion. 
The only solution that comes to mind would be to somehow relax the constraints of const, and make it possible to prevent a struct from being declared immutable, so that lazy initialization could be done. Recent discussions seem to indicate structs having default ctors is not an option.
struct: default construction or lazy initialization.
Unless I'm missing something, it seems that neither of these is actually possible. Consider an object which needs internal state to function. The obvious answer is to create it in the constructor:

struct Foo(T) {
    T* payload;

    this() { payload = cast(T*)malloc(T.sizeof); }
    ~this() { free(payload); }

    void foo() {
        // do something with payload that fails if not initialized
    }
}

But this is not possible in D, because structs can't have default constructors. So one may think, I can use lazy initialization instead:

struct Foo(T) {
    T* _payload;

    ~this() { if(_payload) free(_payload); }

    @property T* payload() const {
        if(!_payload)
            (cast(Foo!T*)&this)._payload = cast(T*)malloc(T.sizeof);
        return _payload;
    }

    void foo() {
        T* p = payload();
        // do something with payload that fails if not initialized
    }

    void bar() const {
        T* p = payload();
        // do something with payload that fails if not initialized
    }
}

So in C++, the above would be fine. Since payload can never be perceived by the caller as uninitialized, the fact that it "breaks" const is irrelevant. But you can't do this in D. If the object is defined at module scope as shared static immutable, the compiler may put it in a readonly section of the executable, which would cause an access violation upon trying to initialize it, and there is no way to prevent this from happening. I'm hoping someone will tell me I'm wrong here, because the only alternative to the above approaches is to add boilerplate to _every_ _single_ _function_ that uses the payload in order to deal with separate cases where it's uninitialized. Is there really no solution for this?
Re: returning 'ref inout(T)' - not an lvalue?
On Wednesday, 25 January 2017 at 21:04:50 UTC, Adam D. Ruppe wrote: On Wednesday, 25 January 2017 at 20:42:52 UTC, bitwise wrote: Is it not possible to return a ref from an inout function? It isn't the inout that's getting you, it is the const object in main(). const(List!int) c; Make that mutable and it works. Why? Cuz the `C list` in the iterator keeps that const with it I'm not sure why that kills it though, the error tells me it is an internal cast that is breaking things but I don't see why that logically would. This was intentional, because I thought that due to transitivity, 'alias T' would also be const, and opIndex would return const(T) when created from a const(List!T). I just tried the following though, and it outputs 'int' rather than 'const(T)' for a const(List!T): alias T = typeof(list.data[0]); pragma(msg, T.stringof); This is not what I thought would happen. I actually changed it up like this, and things seem to work: alias T = CopyTypeQualifiers!(C, typeof(list.data[0])); ref inout(T) opIndex(int i) inout{ return list.data[pos + i]; } This might arguably be a bug, but you could work around it by checking for that const This and offering a different method that just returns const instead of ref. Given what the solution was, I think the error message could be improved, but I'm not sure what the right approach would be. T[] data = new T[1]; BTW what do you think this line does? I ask because most people who use it don't get what they expect out of it Should be an array of 'T' with all elements set to T.init right? This was just an example - for some reason, I thought it would make things clearer. Thanks
Re: Safely moving structs in D
On Tuesday, 24 January 2017 at 11:46:47 UTC, Jonathan M Davis wrote: On Monday, January 23, 2017 22:26:58 bitwise via Digitalmars-d-learn wrote: [...] Moving structs is fine. The postblit constructor is for when they're copied. A copy is unnecessary if the original isn't around anymore - e.g. passing an rvalue to a function can move the value; it doesn't need to copy it. Even passing an lvalue doesn't need to result in a copy if the lvalue is not referenced at any point after that function call. However, if you're going to end up with two distinct copies, then they need to actually be copies, and a postblit constructor will be called. [...] Awesome, thanks - this makes sense.
returning 'ref inout(T)' - not an lvalue?
Compiling the code below gives these errors:

main.d(92): Error: cast(inout(int))this.list.data[cast(uint)(this.pos + i)] is not an lvalue
main.d(101): Error: template instance main.Iterator!(const(List!int)) error instantiating
main.d(108): instantiated from here: first!(const(List!int))

struct Iterator(C) {
    C list;
    int pos;

    alias T = typeof(list.data[0]);

    this(C list, int pos) {
        this.list = list;
        this.pos = pos;
    }

    ref inout(T) opIndex(int i) inout {
        return list.data[pos + i];
    }
}

class List(T) {
    T[] data = new T[1];

    auto first(this This)() {
        return Iterator!This(this, 0);
    }
}

int main(string[] argv) {
    const(List!int) c;
    auto it = c.first;
}

Is it not possible to return a ref from an inout function?
Re: Safely moving structs in D
On Monday, 23 January 2017 at 23:04:45 UTC, Ali Çehreli wrote: On 01/23/2017 02:58 PM, bitwise wrote: I'm confused about what the rules would be here. It would make sense to call the postblit if present, but std.Array currently does not: https://github.com/dlang/phobos/blob/04cca5c85ddf2be25381fc63c3e941498b17541b/std/container/array.d#L884 Post-blit is for copying though. Moving should not call post-blit. You may want to look at the implementation of std.algorithm.move to see how it plays with post-blit: https://dlang.org/phobos/std_algorithm_mutation.html#.move Ali That's a good point. It didn't click at first, but checking for postblit is done with 'hasElaborateCopyConstructor(T)'. I had thought that what memmove was doing would be considered "blitting", and hence require a postblit afterwards. I did look at std.move, but was mistaken about which code path was being taken. It seemed like structs that defined only a postblit would have been moved by assignment: https://github.com/dlang/phobos/blob/366f6e4e66abe96bca9fd69d03042e08f787d040/std/algorithm/mutation.d#L1310 But in actuality, the memcpy branch fires because hasElaborateAssign(T) returns true for structs with a postblit - which was unexpected. I don't really understand why, but this makes things clearer. Thanks
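A small experiment along the lines discussed, showing that move skips the postblit and resets the source (the reset-to-.init part is what std.algorithm.mutation.move documents for structs with a postblit or destructor):

```d
import std.algorithm.mutation : move;

int copies;

struct S
{
    int* p;
    this(this) { ++copies; }  // postblit: runs only for true copies
}

void main()
{
    S a;
    a.p = new int;

    S b = a;        // copy: postblit fires
    assert(copies == 1);

    S c = move(a);  // move: raw blit, no postblit
    assert(copies == 1);
    assert(c.p is b.p);   // c took over the pointer
    assert(a.p is null);  // source was reset to S.init
}
```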
Re: Safely moving structs in D
I'm confused about what the rules would be here. It would make sense to call the postblit if present, but std.Array currently does not: https://github.com/dlang/phobos/blob/04cca5c85ddf2be25381fc63c3e941498b17541b/std/container/array.d#L884
Safely moving structs in D
Is it ok to memcpy/memmove a struct in D? Quote from here: https://dlang.org/spec/garbage.html "Do not have pointers in a struct instance that point back to the same instance. The trouble with this is if the instance gets moved in memory, the pointer will point back to where it came from, with likely disastrous results." This seems to suggest it's ok to move structs around in memory without calling their postblit...but if this is the case, why does postblit even exist, if it's not strictly guaranteed to be called after the struct has been blitted?
Re: Threading Questions
On Thursday, 8 October 2015 at 10:11:38 UTC, Kagamin wrote: On Thursday, 8 October 2015 at 02:31:24 UTC, bitwise wrote: If you have System.Collections.Generic.List<T> static class member, there is nothing wrong with using it from multiple threads like this: The equivalent of your D example would be

class Foo {
    static List<int> numbers = new List<int>();

    void bar() {
        new Thread(() => {
            numbers.Add(1);
        }).Start();
    }
}

That still doesn't explain what you mean about it being illegal in other languages or why you brought up C# in the first place. Bit
Re: Threading Questions
On Thursday, 8 October 2015 at 20:42:46 UTC, Kagamin wrote: On Thursday, 8 October 2015 at 13:44:46 UTC, bitwise wrote: That still doesn't explain what you mean about it being illegal in other languages or why you brought up C# in the first place. Illegal means the resulting program behaves incorrectly, potentially leading to silent failures and data corruption. C# is a language that allows such bugs, and D disallows them - treats such code as invalid and rejects. Ah, I see. I thought you meant illegal meant it won't compile. Wouldn't it be more correct to say that it's undefined behaviour? Bit
Re: Threading Questions
On Wednesday, 7 October 2015 at 09:09:36 UTC, Kagamin wrote: On Sunday, 4 October 2015 at 04:24:55 UTC, bitwise wrote: I use C# (garbage collected) for making apps/games, and while, _in_theory_, the GC is supposed to protect you from leaks, memory is not the only thing that can leak. Threads need to be stopped, graphics resources need to be released, etc. XNA doesn't manage graphics resources? On Monday, 5 October 2015 at 17:40:24 UTC, bitwise wrote: I'm not sure what's going to be done with shared, but I do think it's annoying that you can't do this: shared Array!int numbers; someThread... { numbers.clear(); // 'clear' is not shared } So this means that on top of the already ridiculous number of attributes D has, now you have to mark everything as shared too =/ That's illegal in other languages too except that they allow you to do it. If you want concurrent collections, you must code them separately: https://msdn.microsoft.com/en-us/library/system.collections.concurrent%28v=vs.110%29.aspx I'm not sure what you mean by illegal. AFAIK 'shared' is unique to D. As far as simply locking and then accessing a global variable (class static member) in C# goes, there is no problem doing that from multiple threads. If you have a System.Collections.Generic.List<T> static class member, there is nothing wrong with using it from multiple threads like this:

class Foo {
    static List<int> numbers = new List<int>();

    void bar() {
        new Thread(() => {
            lock(numbers) {
                numbers.Add(1);
            }
        }).Start();
    }
}

Bit
Re: Threading Questions
On Monday, 5 October 2015 at 00:23:21 UTC, Jonathan M Davis wrote: On Sunday, October 04, 2015 14:42:48 bitwise via Digitalmars-d-learn wrote: Since D is moving towards a phobos with no GC, what will happen to things that are classes like Condition and Mutex? Phobos and druntime will always use the GC for some things, and some things just plain need classes. Rather, we're trying to make it so that Phobos does not use the GC when it doesn't need to use the GC as well as reduce how much the GC is required for stuff like string processing where lazy ranges can be used instead in many cases. I was under the impression that the idea was to _completely_ eliminate the GC. It says in Andrei's 2015H1 vision statement: "We aim to make the standard library usable in its entirety without a garbage collector." I understand the allocation/freeing of memory is expensive, but I thought the actual sweep of the GC was a problem too, and that disabling the GC to avoid the sweep was the plan for some people. I don't know how long D's GC takes to sweep, but even a 5ms pause would be unacceptable for a performance intensive game. I guess if you use @nogc properly though, you could still safely turn off the GC, right? As for Condition and Mutex specifically, I don't know why they were ever classes except perhaps to take advantage of the monitor in Object. Maybe they'll get changed to structs, maybe they won't, but most D code is thread-local, and most of the code that isn't is going to use message passing, which means that explicit mutexes and conditions are unnecessary. So, most code won't be impacted regardless of what we do with Condition and Mutex. You may be right. I wrote a simple download manager in D using message passing. It was a little awkward at first, but in general, the spawn/send/receive API seems very intuitive. It feels awkward because the data you're working with is out of reach, but I guess it's safer that way. 
Regardless, I doubt that anything will be done with Condition or Mutex until shared is revisited, which is supposed to happen sometime soon but hasn't happened yet. What happens with shared could completely change how Condition and Mutex are handled (e.g. they don't support shared directly even though they should probably have most of their members marked with shared, because Sean Kelly didn't want to be doing anything with shared that he'd have to change later). - Jonathan M Davis I'm not sure what's going to be done with shared, but I do think it's annoying that you can't do this:

shared Array!int numbers;

someThread... {
    numbers.clear(); // 'clear' is not shared
}

So this means that on top of the already ridiculous number of attributes D has, now you have to mark everything as shared too =/ Bit
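In practice, the escape hatch people use today is to take a lock and cast away shared inside the critical section. A sketch (the cast is an unchecked promise that the lock really does provide exclusive access; the compiler cannot verify it):

```d
import std.container.array : Array;
import core.sync.mutex : Mutex;

shared Array!int numbers;
__gshared Mutex numbersLock;  // assume it's constructed at startup

void clearNumbers()
{
    synchronized (numbersLock)
    {
        // Strip 'shared' for the duration of the lock so the
        // non-shared member functions become callable.
        auto unshared = cast(Array!int*) &numbers;
        unshared.clear();
    }
}
```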
Re: Threading Questions
On Monday, 5 October 2015 at 20:18:18 UTC, Laeeth Isharc wrote: On Monday, 5 October 2015 at 17:40:24 UTC, bitwise wrote: You may be right. I wrote a simple download manager in D using message passing. It was a little awkward at first, but in general, the spawn/send/receive API seems very intuitive. It feels awkward because the data you're working with is out of reach, but I guess it's safer that way. Any possibility of a blog post on your experience of doing so ? ;) [I should start writing some directly, but for time being, until I have my blog up and running again, I write from time to time on Quora]. A few minutes of writing now and then can have a remarkably big impact as well as clarifying your own thoughts, and the time invested is amply repaid, even viewed from a narrowly self-interested perspective. Unfortunately, my time is limited right now. I do have another project, which I've decided will either be finished or discarded by the dawn of 2016. So in the near future, I should have more time for other things. I had same experience with learning message passing. Feels like learning to eat with chopsticks in the beginning, but soon enough it feels much more civilised when it's the right tool for the job. I like the way my Worker class works because when I don't need the thread anymore, I can simply discard the object that represents the thread. As long as the Worker object is higher up on the stack than anything it's working on, all is well, and the concept of spawn/join is not visible while programming. This works out ok, because while the jobs I'm doing are slow enough to make a UI thread lag, they aren't long-running enough to where waiting for the Worker's thread to join in the destructor becomes a problem. There may be a small lag as the Worker's destructor waits for the last job to finish and the thread to join, but it's only happens once in the lifetime of the worker, so it's not a big deal. 
If care is not taken, the above could be subject to these problems:
1) shared memory corruption
2) worker accessing dead memory if it's placed on the stack below what it's working on
3) queueing a long-running task could freeze the program on ~Worker()
If you're moving or copying data into a thread, and then returning the result (which can be ignored), I think most of the above can be solved. It's still a bit foreign to me though, and C++ has no such construct yet afaik. I read a bit about std::future and so on, but I'm not sure if they're standard yet. The biggest blocker, though, is that the project I'm using that Worker class in is a Unity3D plugin. They only very recently updated their iOS libs to allow libc++ > 98 Bit
Re: Threading Questions
On Wednesday, 30 September 2015 at 10:32:01 UTC, Jonathan M Davis wrote: On Tuesday, September 29, 2015 22:38:42 Johannes Pfau via Digitalmars-d-learn wrote: [...] What I took from the answers to that SO question was that in general, it really doesn't matter whether a condition variable has spurious wakeups. You're going to have to check that the associated bool is true when you wake up anyway. Maybe without spurious wakeups, it wouldn't be required if only one thread was waiting for the signal, but you'd almost certainly still need an associated bool in case it becomes true prior to waiting. In addition, if you want to avoid locking up your program, it's frequently the case that you want a timed wait so that you can check whether the program is trying to exit (or at least that the thread in question is being terminated), and you'd need a separate bool in that case as well so that you can check whether the condition has actually been signaled. So, ultimately, while spurious wakeups do seem wrong from a correctness perspective, when you look at what a condition variable needs to do, it usually doesn't matter that spurious wakeups exist, and a correctly used condition variable will just handle spurious wakeups as a side effect of how it's used. - Jonathan M Davis Yea, I guess you're right. The class in the example I posted was a crude reproduction of something I'm using right now in another project: http://codepad.org/M4fVyiXf I don't think it would make a difference whether it woke up randomly or not. I've been using this code regularly with no problems. Bit
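The pattern being described, a condition paired with a flag that is re-checked in a loop, looks like this with core.sync (a minimal sketch):

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;
import core.thread : Thread;
import core.time : msecs;

__gshared Mutex mtx;
__gshared Condition cond;
__gshared bool ready;

void waiter()
{
    synchronized (mtx)
    {
        // Looping on the flag makes both spurious wakeups and
        // "signaled before we started waiting" harmless.
        while (!ready)
            cond.wait();
    }
}

void main()
{
    mtx  = new Mutex;
    cond = new Condition(mtx);

    auto t = new Thread(&waiter);
    t.start();

    Thread.sleep(10.msecs);
    synchronized (mtx)
    {
        ready = true;
        cond.notify();
    }
    t.join();
}
```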
Re: Threading Questions
On Tuesday, 29 September 2015 at 23:20:31 UTC, Steven Schveighoffer wrote: yeah, that could probably be done. One thing to note is that these classes are from ages ago (probably close to 10 years). New API suggestions may be allowed. -Steve I'm still thinking about my last rant, here... So by new API, do you mean just adding a couple of new functions, or writing a new Condition class (as is the plan for streams)? Since D is moving towards a Phobos with no GC, what will happen to things that are classes, like Condition and Mutex? If DIP74 were implemented, Condition and Mutex could be made ref counted, but DIP74 seems like something that will be very complicated, and may not happen for a long time. So the only other alternative is to make it a struct, but for a Mutex, that would prevent you from doing this: Mutex m = new Mutex(); synchronized(m) { } I also don't mind the way that the current streams are made up of a class hierarchy. Although inheritance is overused sometimes, I don't think it's bad. But, if I'm correct about the current trend in D, it seems any new stream stuff will end up getting flattened into some template/struct solution. Any comments on this? Thanks, Bit
Re: Threading Questions
On Tuesday, 29 September 2015 at 19:10:58 UTC, Steven Schveighoffer wrote: An object that implements the Monitor interface may not actually be a mutex. For example, a pthread_cond_t requires a pthread_mutex_t to operate properly. Right! I feel like I should have caught the fact that ConditionVariable still has to use pthread_cond_t under the hood, and adopts all of its behaviour and requirements as a result. 4. Technically, you shouldn't access member variables that are GC allocated from a dtor. I know it's a struct, but structs can be GC allocated as well. Right, forgot about that. GCs are really beginning to get on my nerves.. IMO, RAII for GC is a horrible tradeoff. I'm still not sure I would like Rust, but their memory model is making it a very enticing proposition. I'm almost at the point where I just don't care how much convenience or familiarity D can offer in other areas.. It's starting to seem like none of it is worth it with a GC-based memory model standing in the way. Maybe this is an exaggeration... D has a lot of great features... but it's the net benefit that will ultimately determine whether or not people use D. I use C# (garbage collected) for making apps/games, and while, _in_theory_, the GC is supposed to protect you from leaks, memory is not the only thing that can leak. Threads need to be stopped, graphics resources need to be released, etc.. So when I can't rely on RAII to free these things, I need to free them explicitly, which basically puts me right back where I started. Anyways, I realize this will probably be buried 3 pages deep in D-Learn by Monday, but at least I feel better :) Bit
Re: Threading Questions
On Monday, 28 September 2015 at 11:47:38 UTC, Russel Winder wrote: I hadn't answered as I do not have answers to the questions you ask. My reason: people should not be doing their code using these low-level shared memory techniques. Data-parallel things should be using the std.parallelism module. Dataflow-style things should be using spawn and channels – akin to the way you do things in Go. So to give you an answer I would go back a stage, forget threads, mutexes, synchronized, etc. and ask: what do you want your workers to do? If they are to do something and return a result, then spawn and channel is exactly the right abstraction to use. Think "farmer–worker": the farmer spawns the workers and then collects their results. No shared memory anywhere – at least not mutable. https://www.youtube.com/watch?v=S7pGs7JU7eM Bit
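Russel's farmer–worker suggestion can be sketched with std.concurrency's spawn and message passing (an illustrative sketch, not code from the thread):

```d
import std.concurrency : spawn, send, receiveOnly, thisTid, Tid;

// Worker: do the job and send the result back to the farmer. No mutable
// shared memory is involved; values travel through the message queue.
void worker(Tid farmer, int job)
{
    farmer.send(job * job);
}

void main()
{
    // Farmer: spawn the workers...
    foreach (job; 1 .. 5)
        spawn(&worker, thisTid, job);

    // ...then collect their results (arrival order is not guaranteed).
    int total;
    foreach (_; 1 .. 5)
        total += receiveOnly!int();
    assert(total == 1 + 4 + 9 + 16);
}
```

The farmer only sees results as messages, which is the "no mutable shared memory" property being recommended.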
Threading Questions
Hey, I've got a few questions if anybody's got a minute. I'm trying to wrap my head around the threading situation in D. So far, things seem to be working as expected, but I want to verify my solutions.

1) Are the following two snippets exactly equivalent (not just in observable behaviour)?

a)
Mutex mut;
mut.lock();
scope(exit) mut.unlock();

b)
Mutex mut;
synchronized(mut) { }

Will 'synchronized' call 'lock' on the Mutex, or do something else (possibly related to the interface Object.Monitor)?

2) Phobos has 'Condition' which takes a Mutex in the constructor. The documentation doesn't exactly specify this, but should I assume it works the same as std::condition_variable in C++? For example, is this correct?

Mutex mut;
Condition cond = new Condition(mut);

// mut must be locked before calling Condition.wait
synchronized(mut) // depends on answer to (1)
{
    // wait() unlocks the mutex and enters wait state
    // wait() must re-acquire the mutex before returning when cond is signalled
    cond.wait();
}

3) Why do I have to pass a "Mutex" to "Condition"? Why can't I just pass an "Object"?

4) Will D's Condition ever experience spurious wakeups?

5) Why doesn't D's Condition.wait take a predicate? I assume this is because the answer to (4) is no.

6) Does 'shared' actually have any effect on non-global variables besides the syntactic regulations? I know that all global variables are TLS unless explicitly marked as 'shared', but someone once told me something about 'shared' affecting member variables in that accessing them from a separate thread would return T.init instead of the actual value... or something like that. This seems to be wrong (thankfully). For example, I have created this simple Worker class which seems to work fine without a 'shared' keyword in sight (thankfully). I'm wondering though, if there would be any unexpected consequences of doing things this way. http://dpaste.com/2ZG2QZV Thanks! Bit
Re: Threading Questions
Pretty please? :)
Re: Calling DLL coded in D from Java
On Tue, 16 Jun 2015 18:47:03 -0400, DlangLearner bystan...@gmail.com wrote: I'd like to know if it is possible to call a DLL coded in D from Java? What you're looking for is JNI (Java Native Interface). If you export your D functions correctly, as you have done (extern(C) export), then you can call them the same way you would a C DLL. This tutorial seems like it may have what you need: http://www.codeproject.com/Articles/2876/JNI-Basics Bit
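For illustration, the D side of such a JNI call might look like the sketch below. The Java_com_example_Native_addInts name and the void* parameters are hypothetical stand-ins; real code would use proper jni.h bindings for JNIEnv* and jclass:

```d
// JNI resolves native methods by mangled name: Java_<package>_<class>_<method>.
// extern(C) prevents D name mangling; export makes the symbol visible from the DLL.
extern (C) export int Java_com_example_Native_addInts(void* env, void* clazz,
                                                      int a, int b)
{
    return a + b;
}
```

On the Java side this would pair with a `native int addInts(int a, int b);` declaration in a class loaded via System.loadLibrary.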
Re: Conditional Compilation Multiple Versions
On Sat, 13 Jun 2015 08:21:50 -0400, ketmar ket...@ketmar.no-ip.org wrote: On Fri, 12 Jun 2015 20:41:59 -0400, bitwise wrote: Is there a way to compile for multiple conditions? Tried all these:

version(One | Two){ }
version(One || Two){ }
version(One Two){ }
version(One) | version(Two){ }
version(One) || version(Two){ }
version(One) version(Two){ }

Bit nope. Walter is against that, so we'll not have it, despite the triviality of the patch. Any idea what the rationale was for not allowing it? Bit
Re: Conditional Compilation Multiple Versions
On Sat, 13 Jun 2015 12:20:40 -0400, ketmar ket...@ketmar.no-ip.org wrote: On Sat, 13 Jun 2015 13:49:49 +, anonymous wrote: Taking it one step further:

template Version(string name)
{
    mixin("version(" ~ name ~ ") enum Version = true; else enum Version = false;");
}

static if(Version!"One" || Version!"Two") { ... }

very elegant. Elegant indeed, but I think my pull request would be frowned upon if I tried to use this in druntime. Bit
Conditional Compilation Multiple Versions
Is there a way to compile for multiple conditions? Tried all these:

version(One | Two){ }
version(One || Two){ }
version(One Two){ }
version(One) | version(Two){ }
version(One) || version(Two){ }
version(One) version(Two){ }

Bit
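One common workaround (a standard D idiom, not from this thread) is to merge several version identifiers into a new one and branch on that:

```d
// Define a combined identifier when either input version is set...
version (One) version = OneOrTwo;
version (Two) version = OneOrTwo;

// ...then branch on the combined identifier.
version (OneOrTwo)
{
    // code compiled when either One or Two is set
}
```

This gives OR semantics without any logical operators; nesting version blocks gives AND semantics.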
Re: Conditional Compilation Multiple Versions
On Fri, 12 Jun 2015 20:55:51 -0400, Márcio Martins marcio...@gmail.com wrote: I know... I too hate that one can't use simple logic ops... Indeed... Thanks. Bit
Re: GC Destruction Order
On Wednesday, 20 May 2015 at 08:01:46 UTC, Kagamin wrote: On Tuesday, 19 May 2015 at 22:15:18 UTC, bitwise wrote: Thanks for confirming, but given your apparent tendency toward pinhole view points, it's unsurprising that you don't understand what I'm asking. And what you're asking. Just for the record: C++ memory management techniques are not designed to work in GC environment. Yes, but D claims to support manual memory management. It seems to get second class treatment though. I'm pretty sure I can PInvoke malloc in C# too ;) On Wednesday, 20 May 2015 at 03:44:58 UTC, bitwise wrote: Basically, I can't design a struct and be sure the destructor will be called on the same thread as where it went out of scope. If your resource finalization code has some specific threading requirements, you implement those yourself in a way your code requires it. Or instead freeing resources normally in due time. AFAIK D does not provide any built in functionality like Objective-C's 'runOnMainThread', which makes this a painful option.
Re: GC Destruction Order
On Tue, 19 May 2015 19:03:02 -0400, bitwise bitwise@gmail.com wrote: Maybe I worded that incorrectly, but my point is that when you're running with the GC disabled, you should only use methods marked with @nogc if you want to make sure your code doesn't leak, right? That's a lot of attributes O_O Bit Which is why I am asking if there are any plans to implement something like @nogc for entire modules or classes. Bit
Re: GC Destruction Order
On Tue, 19 May 2015 19:16:14 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 23:10:21 UTC, bitwise wrote: which is why I am asking if there are any plans to implement something like @nogc for entire modules or classes. At the top:

@nogc:
// stuff here

Gotta do it inside the class too i think. Thanks! this seems to work too:

@nogc { /* stuff */ }

I think this is still a problem though:

struct StackThing
{
    ~this() { writeln("where am I?"); }
}

class HeapThing
{
    StackThing thing;
}

HeapThing thing = new HeapThing();

Basically, I can't design a struct and be sure the destructor will be called on the same thread as where it went out of scope. I hope I'm wrong and DIP74 comes soon =/ Bit
Re: GC Destruction Order
On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up its sleeve. I have heard about attempts to add precise GC to D though... would precise GC address this problem in some way? Bit
Re: GC Destruction Order
On Tue, 19 May 2015 17:52:36 -0400, rsw0x anonym...@anonymous.com wrote: On Tuesday, 19 May 2015 at 21:07:52 UTC, bitwise wrote: Any idea what the plans are? Does RefCounted become thread safe? Correct me if I'm wrong though, but even if RefCounted itself was thread-safe, RefCounted objects could still be placed in classes, at which point you might as well use a GC'ed class instead, because you'd be back to square one with your destructor racing around on some random thread. I don't understand what you're asking here. If you hold a RefCounted resource in a GC managed object, yes, it will be tied to the GC object's lifetime. With your avoidance of the GC, I feel like you were lied to by a C++ programmer that reference counting is the way to do all memory management, when in reality reference counting is dog slow and destroys your cache locality (esp. without compiler support). Reference counting is meant to be used where you need absolute control over a resource's lifetime (IMHO), not as a general purpose memory management tool. Thanks for confirming, but given your apparent tendency toward pinhole viewpoints, it's unsurprising that you don't understand what I'm asking. Bit
Re: GC Destruction Order
On Tue, 19 May 2015 15:36:21 -0400, rsw0x anonym...@anonymous.com wrote: On Tuesday, 19 May 2015 at 18:37:31 UTC, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up its sleeve. It actually does, check out RefCounted!T and Unique!T in std.typecons. They're sort of limited right now but undergoing a major revamp in 2.068. Any idea what the plans are? Does RefCounted become thread safe? Correct me if I'm wrong though, but even if RefCounted itself was thread-safe, RefCounted objects could still be placed in classes, at which point you might as well use a GC'ed class instead, because you'd be back to square one with your destructor racing around on some random thread. I'm finding it hard to be optimistic about the memory model of D. The idea of marking absolutely everything in your program with @nogc just to make it safe is ludicrous. Something like this would be a little more reasonable, but I see no discussions about it: @nogc module my_module; or @nogc class Something {} DIP74 seems like it would improve the situation a lot, but wouldn't work as expected as long as any other class that may contain it could be GC'ed. This also seems like a monumental undertaking that won't actually be implemented for years, if at all. I'm hoping someone will correct me here, because other than the memory model, D seems like a very well designed language. Bit
Re: GC Destruction Order
On Tue, 19 May 2015 18:47:26 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: On 5/19/15 5:07 PM, bitwise wrote: On Tue, 19 May 2015 15:36:21 -0400, rsw0x anonym...@anonymous.com wrote: On Tuesday, 19 May 2015 at 18:37:31 UTC, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up its sleeve. It actually does, check out RefCounted!T and Unique!T in std.typecons. They're sort of limited right now but undergoing a major revamp in 2.068. Any idea what the plans are? Does RefCounted become thread safe? Correct me if I'm wrong though, but even if RefCounted itself was thread-safe, RefCounted objects could still be placed in classes, at which point you might as well use a GC'ed class instead, because you'd be back to square one with your destructor racing around on some random thread. With the current GC, yes. RefCounted needs to be thread safe in order to use it. But if we change the GC, we could ensure destructors are only called in the thread they were created in (simply defer destructors until the next GC call in that thread). This seems like it could result in some destructors being delayed indefinitely. I'm finding it hard to be optimistic about the memory model of D. The idea of marking absolutely everything in your program with @nogc just to make it safe is ludicrous. That makes no sense, the GC is not unsafe. -Steve Maybe I worded that incorrectly, but my point is that when you're running with the GC disabled, you should only use methods marked with @nogc if you want to make sure your code doesn't leak, right? That's a lot of attributes O_O Bit
GC Destruction Order
In C#, it's possible that class members can actually be destroyed before the containing object. Example:

class Stuff
{
    Class1 thing1;
    Class2 thing2;

    ~Stuff()
    {
        thing1.DoSomeFinalization(); // [1]
    }
}

I forget what the exact behavior was, but basically, [1] is unsafe because it may have already been destructed/freed by the time ~Stuff() is called. Is this also true for D? Bit
Re: GC Destruction Order
On Tue, 19 May 2015 14:55:55 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: On 5/19/15 2:37 PM, bitwise wrote: On Tue, 19 May 2015 14:19:30 -0400, Adam D. Ruppe destructiona...@gmail.com wrote: On Tuesday, 19 May 2015 at 18:15:06 UTC, bitwise wrote: Is this also true for D? Yes. The GC considers all the unreferenced memory dead at the same time and may clean up the class and its members in any order. Ugh... I was really hoping D had something better up it's sleeve. It's actually quite impossible for the GC to know what pointers are owning pointers and what ones are not. And you could never have ownership cycles. You could use some version of malloc/free to do it. But you have to take care of GC references inside that malloc'd block. I have heard about attempts to add precise GC to D though... would precise GC address this problem in some way? No. Precise scanning just (potentially) cuts down on scanning time, and avoids false pointers. -Steve Ok, thanks for the quick answers =D Bit
Re: how does 'shared' affect member variables?
On Sat, 09 May 2015 21:32:42 -0400, Mike n...@none.com wrote: it looks like what you are trying to implement is what `synchronized` already provides: http://ddili.org/ders/d.en/concurrency_shared.html#ix_concurrency_shared.synchronized Mike Yes, but synchronized uses a mutex. Spin locks can perform better in situations where there won't be much contention for the lock. Bit
Re: how does 'shared' affect member variables?
On Sat, 09 May 2015 15:38:05 -0400, Mike n...@none.com wrote: On Saturday, 9 May 2015 at 18:41:59 UTC, bitwise wrote: Also, I wasn't able to find any thorough documentation on shared, so if someone has a link, that would be helpful. Here are a few interesting links: Iain Buclaw (lead developer for GDC) with his interpretation: http://forum.dlang.org/post/mailman.739.1431034764.4581.digitalmar...@puremagic.com Andrei Alexandrescu highlighting a critical flaw with `shared` http://forum.dlang.org/post/lruc3n$at1$1...@digitalmars.com The truth about shared http://p0nce.github.io/d-idioms/#The-truth-about-shared Interesting deprecation warning in latest compiler (See compiler output): http://goo.gl/EGvK72 But I don't know what the semantics are *supposed* to be, and I get the impression noone else knows either. I'll be watching this thread myself to see if someone can provide some insight. Mike So it seems that although it's not properly implemented, it's still not completely benign, right? I am trying to create a shared queue/array of delegates that run on the main thread. I don't know if D's 'shared' semantics will affect it or not, and whether or not casting to/from shared will cause problems with 'cas' or 'atomicStore' Would the following code work as expected? 
A simplified example:

struct SpinLock
{
    private int _lock = 0;

    void lock()
    {
        while(!cas(cast(shared(int)*)&_lock, 0, 1)) {}
    }

    void unlock()
    {
        atomicStore(*cast(shared(int)*)&_lock, 0);
    }
}

struct LockGuard(T)
{
    private T* _lock;

    this(ref T lock)
    {
        _lock = &lock;
        _lock.lock();
    }

    ~this()
    {
        _lock.unlock();
    }
}

class App
{
public:
    @property public static App instance() { return _instance; }

    this()
    {
        assert(!_instance);
        _instance = this;
    }

    ~this()
    {
        _instance = null;
    }

    void run(void delegate() dg)
    {
        auto lk = LockGuard!SpinLock(_lock);
        _actions.insertBack(dg);
    }

    void update()
    {
        auto lk = LockGuard!SpinLock(_lock);
        foreach(act; _actions)
            act();
        _actions.clear();
    }

package:
    __gshared App _instance = null;
    SpinLock _lock;
    Array!(void delegate()) _actions;
}

Thread1:
App.instance.run({ doSomething1(); });

Thread2:
App.instance.run({ doSomething2(); });

Main Thread:
App app = new MyAppType();
while(true) { app.update(); }

Thanks, Bit
Re: how does 'shared' affect member variables?
On Sat, 09 May 2015 15:59:57 -0400, tcak t...@gmail.com wrote: If a variable/class/struct etc is not shared, for variables and struct, you find their initial value. For class, you get null. For first timers (I started using shared keyword more than 2 years ago), do not forget that: a shared method is all about saying that this method is defined for shared object. So, do not get confused. It happened to me a lot. Bad part of shared is that, you will be repeating it again, and again, and again, and again, everywhere. So, try to be patient if you are going to be using it for a long time. Stupidly, shared variables' value cannot be increased/decreased directly. Compiler says it is deprecated, and tells me to use core.atomic.atomicop. You will see this as well. Hey compiler! I know 100% that no other thing will be touching this variable. using the SpinLock and LockGuard from my code above, I created a working test case. It works as expected, with no shared keyword on anything. The variable App._instance is __gshared, but that's about all. 
///
import spinlock;
import std.stdio;
import std.concurrency;
import std.container;
import core.thread;

class App
{
    SpinLock _lock;
    Array!(void delegate()) _actions;
    __gshared App _instance = null;

    this()
    {
        assert(!_instance);
        _instance = this;
    }

    ~this()
    {
        _instance = null;
    }

    @property public static App instance() { return _instance; }

    void run(void delegate() dg)
    {
        auto lk = LockGuard!SpinLock(_lock);
        _actions.insertBack(dg);
        writeln("queued delegate");
    }

    void update()
    {
        writeln("updating");
        auto lk = LockGuard!SpinLock(_lock);
        foreach(act; _actions)
            act();
        _actions.clear();
    }
}

void WorkerThread()
{
    writeln("started worker, going to sleep");
    Thread.sleep(500.msecs);
    App.instance.run({
        writeln("running delegate queued from thread");
    });
}

void main()
{
    App app = new App();
    Thread workerThread = new Thread(&WorkerThread).start();

    int ms = 0;
    while(ms < 1000)
    {
        app.update();
        Thread.sleep(100.msecs);
        ms += 100;
    }

    workerThread.join();
}

// OUTPUT:
updating
started worker, going to sleep
updating
updating
updating
updating
queued delegate
updating
running delegate queued from thread
updating
updating
updating
updating

// No null references or 'init' values.

A little confused now, but at least it works. Finally, I tested to see if the lock was actually working:

void WorkerThread()
{
    writeln("worker acquiring lock");
    App.instance._lock.lock();
    writeln("worker acquired lock");
    App.instance._lock.unlock();
}

void main()
{
    App app = new App();

    writeln("main locked");
    App.instance._lock.lock();

    Thread workerThread = new Thread(&WorkerThread).start();

    writeln("main sleeping");
    Thread.sleep(2.seconds);
    writeln("main woke");

    App.instance._lock.unlock();
    writeln("main unlocked");

    workerThread.join();
}

As expected, output was:

main locked
main sleeping
worker acquiring lock
main woke
main unlocked
worker acquired lock

And no 'shared' in sight. So now I'm confused as to when it has the effect that you were describing with null/init. Bit
how does 'shared' affect member variables?
What does 'shared' do to member variables? It makes sense to me to put it on a global variable, but what sense does it make putting it on a member of a class? What happens if you try to access a member of a class/struct instance from another thread that is not marked 'shared'? Also, I wasn't able to find any thorough documentation on shared, so if someone has a link, that would be helpful. Thanks Bit
Re: Efficiently passing structs
On Tue, 05 May 2015 18:58:34 -0400, Namespace rswhi...@gmail.com wrote: On Tuesday, 5 May 2015 at 21:58:57 UTC, bitwise wrote: On Tue, 05 May 2015 17:33:09 -0400, Namespace rswhi...@gmail.com wrote: I've discussed that so many times... just search for auto / scope ref... ;) It will never happen. See: http://forum.dlang.org/thread/ntsyfhesnywfxvzbe...@forum.dlang.org?page=1 http://forum.dlang.org/thread/ylebrhjnrrcajnvtt...@forum.dlang.org?page=1 http://forum.dlang.org/thread/mailman.2989.1356370854.5162.digitalmar...@puremagic.com http://forum.dlang.org/thread/tkzyjhshbqjqxwzpp...@forum.dlang.org#post-mailman.2965.1356319786.5162.digitalmars-d-learn:40puremagic.com http://forum.dlang.org/thread/hga1jl$18hp$1...@digitalmars.com I did read some of these. Has anyone brought up simply allowing in ref or const scope ref to accept rvalues? If DIPs 69 and 25 were implemented, I don't see why this would be a problem. I agree that const ref should not, but I don't see a problem with const scope ref. I haven't seen a conversation that was strongly in favor of DIP 36. Why hasn't it been rejected? Bit We proposed that in DIP 36: http://forum.dlang.org/thread/ylebrhjnrrcajnvtt...@forum.dlang.org?page=1 Some more interesting discussion parts: http://forum.dlang.org/thread/4f84d6dd.5090...@digitalmars.com http://forum.dlang.org/thread/km3k8v$80p$1...@digitalmars.com?page=1 http://forum.dlang.org/thread/cafdvkcvf6g8mc01tds6ydxqczbfp1q-a-oefvk6bgetwciu...@mail.gmail.com Awesome, thanks for the links. I haven't read all of these yet. Many people of the community really wants a solution +1 I stuck with auto ref + templates if I need lvalues + rvalues (which is often the case in game dev). Yeah... structs/template-auto-ref is fine for matrices, vectors, quaternions, colors, etc, but I'm not gonna be able to get away with that for any kind of shared assets like textures, materials, etc, etc.. so I hope this eventually gets fixed. 
but since Andrei and Walter believe that it brings no real benefit, nothing has changed. I suppose this is like the C++ argument always use std::vector instead of std::list because CACHE!, but there's a time and place for everything.. Bit
Re: Efficiently passing structs
On Tue, 05 May 2015 00:20:15 -0400, rsw0x anonym...@anonymous.com wrote: it does, auto ref can bind to both lvalues and rvalues. Create the function with an empty template like so,

import std.stdio;

struct S { }

void Foo()(auto ref S s) { }

void main()
{
    S s;
    Foo(s);
    Foo(S());
}

There might be other ways that I'm unaware of. Interesting.. Has this always worked? There's a couple of forum conversations about trying to get auto ref to work for non-templates. The main problem seems to be that auto ref won't work for virtual functions. Also, I don't see how someone could arrive at the above solution without showing up here and asking first. Why not just add rvref to D? D is already bloated. Some of the discussions about auto ref seem to have arrived at the idea that adding a keyword is the only way to fix this without changing existing behavior, or adding new behavior that would share syntax with old behavior and be confusing. Anyways, for my purposes, templates will do fine, so thanks! Bit
Re: Efficiently passing structs
On Tue, 05 May 2015 10:44:13 -0400, rsw0x anonym...@anonymous.com wrote: On Tuesday, 5 May 2015 at 14:14:51 UTC, bitwise wrote: Interesting.. Has this always worked? Theres a couple of forum conversations about trying to get auto ref to work for non-templates. The main problem seems to be that auto ref wont work for virtual functions. I know its worked for a while, I often use it when I'm too lazy to put attributes in and just have the templates infer them for me ;) Nice ;) Also, I don't see how someone could arrive at the above solution without showing up here and asking first. You're probably right, maybe someone should submit a PR to https://github.com/p0nce/d-idioms/ I was actually thinking of trying to add it to the table here: http://dlang.org/function.html#parameters It's on the template page, but as it is truly the only way to ensure structs are passed efficiently, it may be a good idea to add a link, or some text on this page as well.
Re: Efficiently passing structs
On Tue, 05 May 2015 18:27:54 -0400, Gomen go...@asai.jp wrote: I am sorry for this post, I am just testing something. The retired D forum seems to have been re-purposed for testing ;) Bit
Re: Efficiently passing structs
On Tue, 05 May 2015 17:33:09 -0400, Namespace rswhi...@gmail.com wrote: I've discussed that so many times... just search for auto / scope ref... ;) It will never happen. See: http://forum.dlang.org/thread/ntsyfhesnywfxvzbe...@forum.dlang.org?page=1 http://forum.dlang.org/thread/ylebrhjnrrcajnvtt...@forum.dlang.org?page=1 http://forum.dlang.org/thread/mailman.2989.1356370854.5162.digitalmar...@puremagic.com http://forum.dlang.org/thread/tkzyjhshbqjqxwzpp...@forum.dlang.org#post-mailman.2965.1356319786.5162.digitalmars-d-learn:40puremagic.com http://forum.dlang.org/thread/hga1jl$18hp$1...@digitalmars.com I did read some of these. Has anyone brought up simply allowing in ref or const scope ref to accept rvalues? If DIPs 69 and 25 were implemented, I don't see why this would be a problem. I agree that const ref should not, but I don't see a problem with const scope ref. I haven't seen a conversation that was strongly in favor of DIP 36. Why hasn't it been rejected? Bit
Re: Efficiently passing structs
On Tue, 05 May 2015 14:49:07 -0400, Ali Çehreli acehr...@yahoo.com wrote: http://ddili.org/ders/d.en/lvalue_rvalue.html#ix_lvalue_rvalue.auto%20ref,%20parameter I've actually stumbled upon this site a few times, and it has been very helpful, so thanks =D Unfortunately though, I had no idea that auto ref was what I was looking for in the first place =/
Re: Efficiently passing structs
On Tue, 05 May 2015 11:54:53 -0400, Jonathan M Davis jmdavisp...@gmx.com wrote: On Tuesday, 5 May 2015 at 02:47:03 UTC, bitwise wrote: On Mon, 04 May 2015 00:16:03 -0400, Jonathan M Davis via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: D will move the argument if it can rather than copying it (e.g. if a temporary is being passed in), which reduces the need for worrying about copying like you tend to have to do in C++98, and I think that a lot of D code just doesn't worry about the cost of copying structs. How exactly would you move a struct? Just a memcpy without the postblit? Because D has postblit constructors rather than copy constructors, copying is done by blitting the entire struct and then calling the postblit constructor afterwards, so unless the postblit constructor is @disabled, a struct is moveable simply by blitting it and not calling the postblit constructor afterwards. And the compiler can choose to do a move whenever a copy is unnecessary (e.g. return value optimization or when a temporary is passed to a Gotcha. Correct me if I'm wrong though, but in C++ (and I don't see why D would be different), RVO removes the need for the struct/class to be blitted completely. The struct/class will simply be constructed directly at the return address to begin with. I don't see how the above could be achieved with parameter passing (by value), which is why I suggest rvref. Something like a Matrix4x4 lives in an awkward place between a class and a struct. Because of the fact that a graphics engine may have to deal with thousands of them per frame, both copying them at function calls and allocating/collecting thousands of them per frame are unacceptable. I was reading up (DIP36, pull requests, forum) and it seems like auto ref was supposed to do something like this. Is there a reason you didn't mention it?
You could use auto ref, but then you'd have to templatize the function, since it only works with templated functions, and if you have multiple auto ref parameters, then you'll get a combinatorial explosion of template instantiations as you call the function with different combinations of lvalues and rvalues. It's basically like declaring each of the combinations of the function with ref and non-ref parameters, but you don't have to declare them all, and it doesn't work with virtual functions. I didn't mention auto ref mostly just to be simple. But that combinatorial explosion (be they declared implicitly via auto ref or manually) is a good reason IMHO to just not worry about this problem in most cases. It's just too tedious to duplicate all functions like that, and using templates isn't always acceptable. In theory, auto ref could work for non-templated functions by making it so that underneath the hood it's passed as ref, except that any time you passed it an rvalue, it implicitly defined an lvalue for you to pass to the function, but that doesn't match what happens with auto ref with templated functions, and changing the behavior for templated functions would be unacceptable, because it would reduce our ability to forward parameters without changing their type, so we'd end up with auto ref doing different things on templated and non-templated functions, which is potentially confusing. And that solution has simply never been agreed upon. I have no idea if it ever will be. I'm not really worried about the symbol explosion, because most of the functions I would be using would be binary at most. I'm more worried about losing the ability to use virtual/non-templated functions with lvalue+rvalue refs. I really do believe this should be fixed. D SHOULD have a way to pass a large struct parameter as a ref, whether it's an lvalue or an rvalue, without copying.
And when I say copying, this includes blitting without calling the postblit, which would still be quite pricey for something like a Matrix4x4 (64 bytes for floats) if it had to happen thousands of times per frame in real time. And trading these thousands of copies for allocations/collections is just as bad. Why not just add rvref to D? Because we have too many attributes already. It's actually kind of astonishing that we're getting return ref, because Andrei was adamant that we not add any more parameter attributes, since we simply have too many already. I think that the only reasons return ref is making it in are that it solves a real need and that it's so simple, whereas Andrei is not at all convinced that having anything like C++'s const& in D is needed. And while it might be nice, for the most part, we _are_ able to mostly write code without worrying about it. I do understand this, but what's more astonishing is that things have gotten this far and there isn't a way to pass a struct without making unnecessary copies ;) In a practical sense, I suppose templates/auto ref will
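For reference, the manual alternative the thread keeps circling around is duplicating each function with ref and by-value overloads, so lvalues bind by ref and rvalues still compile. A minimal sketch, with hypothetical names (`inverted`, `invertImpl`):

```d
struct Matrix4x4 { float[16] m = 0; }

// Shared implementation; takes its argument by ref so nothing is copied.
private Matrix4x4 invertImpl(ref const Matrix4x4 a)
{
    Matrix4x4 r = a;  // one deliberate copy for the result
    // ... actual inversion elided
    return r;
}

// lvalues bind here without any copy being made
Matrix4x4 inverted(ref const Matrix4x4 a) { return invertImpl(a); }

// rvalues land here; the parameter itself is an lvalue, so it
// forwards to invertImpl by ref
Matrix4x4 inverted(const Matrix4x4 a) { return invertImpl(a); }

void main()
{
    Matrix4x4 x;
    auto r1 = inverted(x);            // picks the ref overload
    auto r2 = inverted(Matrix4x4());  // temporary: picks the by-value overload
    assert(r1.m == x.m);
}
```

This works for a single parameter, but as the previous post notes, with N such parameters you need 2^N overloads, which is exactly the duplication rvref is meant to avoid.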
Re: Efficiently passing structs
On Mon, 04 May 2015 00:16:03 -0400, Jonathan M Davis via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: D will move the argument if it can rather than copying it (e.g. if a temporary is being passed in), which reduces the need to worry about copying like you tend to have to in C++98, and I think that a lot of D code just doesn't worry about the cost of copying structs. How exactly would you move a struct? Just a memcpy without the postblit? However, if you have a large object that you know is going to be expensive to copy, you're either going to have to use const ref (and thus probably duplicate the function to allow rvalues), or you're going to need to make it a reference type rather than having all of its data live on the stack (either by making the struct contain a pointer to its data or by making it a class). In general, if you're dealing with a type that is going to be expensive to copy, I'd advise making it a reference type over relying on const ref, simply because it's less error-prone that way. It's trivial to forget to use ref on a parameter, and generic code won't use it, so it'll generally work better to just make it a reference type. - Jonathan M Davis Something like a Matrix4x4 lives in an awkward place between a class and a struct. Because a graphics engine may have to deal with thousands of them per frame, copying them at function calls and allocating/collecting thousands of them per frame are both unacceptable. I was reading up (DIP36, pull requests, the forum) and it seems like auto ref was supposed to do something like this. Is there a reason you didn't mention it? Why not just add rvref to D? rvref would be the same as ref, but would accept an lvalue or an rvalue without copying. You could make it const, scope, or whatever you want. It would be unsafe if used incorrectly, but that's what regular ref is for.
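The "make it a reference type" advice quoted above can be sketched as follows (the `BigBuffer` type is a hypothetical illustration): the struct stores a slice into heap memory, so copying the struct copies only the slice header, never the element data.

```d
// Reference-type approach: the struct holds a slice (pointer + length,
// 16 bytes on 64-bit), so pass-by-value stays cheap regardless of payload size.
struct BigBuffer
{
    private float[] payload;                     // heap-allocated data
    this(size_t n) { payload = new float[](n); }
    size_t length() const { return payload.length; }
}

void process(BigBuffer b)
{
    // b shares its payload with the caller; no element data was copied
    assert(b.length == 1024);
}

void main()
{
    auto buf = BigBuffer(1024);
    process(buf);  // copies only the 16-byte slice header
}
```

The trade-off, as the thread notes, is that for something like a Matrix4x4 the indirection and allocation cost of a reference type can be as bad as the copies it avoids, which is why the type "lives in an awkward place between a class and a struct".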
I suppose additional security could be added though, like making rvref escape-proof by default. This would introduce no breaking changes and facilitate efficient passing of structs. Bit