Re: DMD release compiler flags when building with GDC
Am Sat, 09 Nov 2019 20:43:20 + schrieb Per Nordlöw: > I've noticed that the make flag ENABLE_LTO=1 fails as > > Error: unrecognized switch '-flto=full' > > when building dmd with GDC 9. > > Does gdc-9 support lto? If so what flags should I use? > > If not what are the preferred DFLAGS when building dmd with gdc? I think -flto is the proper flag for GCC/GDC. I don't know if LTO is working though. A long time ago there were some bugs, but maybe that's been fixed. You probably just have to try and see ;-) -- Johannes
Re: Building GDC with auto-generated header files
Am Tue, 30 Jul 2019 15:19:44 +1200 schrieb rikki cattermole:

> On 30/07/2019 4:11 AM, Eduard Staniloiu wrote:
>> Cheers, everybody
>>
>> I'm working on this as part of my GSoC project [0].
>>
>> I'm working on building gdc with the auto-generated `frontend.h` [1],
>> but I'm having some issues
>>
>> There are functions in dmd that don't have an `extern (C)` or
>> `extern (C++)` but they are used by gdc (are exposed in `.h` files)
>>
>> An example of such a function is `checkNonAssignmentArrayOp`[2] from
>> `dmd/arrayop.d` which can be found in `gcc/d/dmd/expression.h` [3]
>
> It may have previously been extern(C) or its a gdc specific patch.
> Either way PR please.

Actually the code at https://github.com/gcc-mirror/gcc/blob/master/gcc/d/dmd is still the C++ frontend. The DMD frontend in upstream master (https://github.com/dlang/dmd/blob/master/) and in GCC master are very different versions, so mismatches are expected.

The latest DDMD GDC is here: https://github.com/gcc-mirror/gcc/commits/ibuclaw/gdc

However, it's still not a good idea to mix and match files from DMD upstream master and that GDC branch, as they will not be 100% in sync. It's best to simply use only files from the gcc/d repo, as that's what's used when compiling GDC. You could also have a look at the gcc/d/dmd/MERGE file, which will tell you what upstream DMD commit has been used in the respective GDC tree.

-- Johannes
Re: Where is GDC being developed?
On Thursday, 21 March 2019 at 08:19:56 UTC, Per Nordlöw wrote:

> At https://github.com/D-Programming-GDC/GDC/commits/master there's the
> heading "This repository has been archived by the owner. It is now
> read-only." Where will the development of GDC continue?

We use https://github.com/D-Programming-GDC/gcc for CI, but commits will go to the GCC SVN first, so GCC SVN or snapshot tarballs are the recommended way to get the latest GDC.

There is one exception: When GCC development is in feature freeze, we might provide newer DMD frontends in a gdc-next branch at https://github.com/D-Programming-GDC/gcc . However, so far we have not set up this branch; this will probably happen in the next two weeks. Maybe I'll also provide DDMD-FE backports for GCC9 in that repo, but I'm not sure yet. The latest DDMD-FE is somewhere in the archived repos, but it hasn't been updated for some time.
Re: New to GDC on ARM 32-bit Ubuntu
Am Tue, 17 Jul 2018 04:51:04 + schrieb Cecil Ward: > I am getting an error when I try and compile anything with the GDC > compiler which is coming up associated with source code within a D > include file which is not one of mine > > I am using a Raspberry Pi with Ubuntu 16.04 and have just done an > "apt-get install gdc". Using ldc works fine. > > The error is : > root@raspberrypi:~# gdc mac_hex.d -O3 -frelease > /usr/include/d/core/stdc/config.d:58:3: error: static if conditional > cannot be at global scope > static if( (void*).sizeof > int.sizeof ) > ^ These files in /usr/include/d probably belong to the ldc package and therefore are not compatible with gdc. gdc automatically picks up files in /usr/include/d so this folder should not contain compiler-specific files. I think this has been fixed in more recent ubuntu releases. For now you could uninstall ldc to see if this is really the problem. -- Johannes
Re: D on AArch64 CPU
Am Sun, 14 May 2017 15:05:08 + schrieb Richard Delorme: > I recently bought the infamous Raspberry pi 3, which has got a > cortex-a53 4 cores 1.2 Ghz CPU (Broadcom). After installing on it > a 64 bit OS (a non official fedora 25), I was wondering if it was > possible to install a D compiler on it. > > I finally try GDC, on 6.3 gcc, and with support of version 2.68 > of the D language. After struggling a little on a few > phobos/druntime files, I got a compiler here too: > $ gdc --version > gdc (GCC) 6.3.0 > Copyright © 2016 Free Software Foundation, Inc. > Iain recently updated GDC & phobos up to 2.074 and we have a pull request for 2.075. So don't worry about fixing old GDC phobos/druntime versions, recent gdc git branches should already have AArch64 phobos changes. We have a test runner for AArch and GDC master here: https://buildbot.dgnu.org/#/builders/2/builds/29 There are still some failing test suite tests though and AFAICS we currently don't build phobos on that CI at all. (We can run ARM/AArch tests without special hardware, thanks to QEMU's user mode emulation) -- Johannes
Re: C style 'static' functions
Am Wed, 19 Jul 2017 19:18:03 + schrieb Petar Kirov [ZombineDev]:

> On Wednesday, 19 July 2017 at 18:49:32 UTC, Johannes Pfau wrote:
> >
> > Can you explain why _object-level visibility_ would matter in
> > this case?
>
> (I'm sure you have more experience with shared libraries than me,
> so correct me if I'm wrong)
>
> We can't do attribute inference for exported functions because
> changing the function body may easily change the function
> signature (-> name mangling) and break clients of the (shared)
> library. Therefore, it follows that attribute inference can only
> be done for non-exported functions.

OK, I didn't think of the stable ABI argument; that indeed does make sense. It leads to the strange consequence, though, that private functions called from templates need to be exported and therefore can't use inference.

OT: if a private function is exported and called from a public template, things are difficult either way. Such a function needs to be considered 'logically' public: as the template code instantiated in another library will not get updated when you update the library containing the private function, you also have to ensure that the program logic is still valid when mixing a new implementation of the private function with an old implementation of the template function.

-- Johannes
Re: C style 'static' functions
Am Wed, 19 Jul 2017 17:37:48 + schrieb Kagamin:

> On Wednesday, 19 July 2017 at 15:28:50 UTC, Steven Schveighoffer wrote:
> > I'm not so sure of that. Private functions still generate
> > symbols. I think in C, there is no symbol (at least in the
> > object file) for static functions or variables.
>
> They generate hidden symbols. That's just how it implements
> private functions in C: you can't do anything else without
> mangling.

This is not entirely correct. The symbols are local symbols in ELF terminology, so local to an object file. Hidden symbols are local to an executable or shared library.

> You probably can't compile two C units into one object
> file if they have static functions with the same name - this
> would require mangling to make two symbols different.

1) C does have mangling for static variables:

void foo() { static int x; } ==> .local x.1796

2) Object file? No, but you can't compile two translation units into one object file anyway, or declare two functions with the same name in one translation unit. For executables and libraries, ELF takes care of this. One major use case of static functions is not polluting the global namespace.

---
static int foo(int a, int b)
{
    return a + b + 42;
}

int bar(int a, int b)
{
    return foo(a, b);
}
---

nm =>
0017 T bar
     t foo

---
static int foo(int a, int b)
{
    return -42;
}

int bar(int a, int b);

int main()
{
    return bar(1, 2);
}
---

nm =>
     U bar
     t foo
     U _GLOBAL_OFFSET_TABLE_
0011 T main

nm a.out | grep foo =>
063a t foo
0670 t foo

Additionally, when compiling with optimizations both foos are gone: all calls are inlined, the functions are never referenced and are therefore removed. This can reduce executable size a lot if you have many local helper functions, so D may benefit from this optimization as well.

-- Johannes
Re: C style 'static' functions
Am Wed, 19 Jul 2017 17:25:18 + schrieb Petar Kirov [ZombineDev]: > > > > Note: not 100% sure of all this, but this is always the way > > I've looked at it. > > You're probably right about the current implementation, but I was > talking about the intended semantics. I believe that with DIP45, > only functions and global variables annotated with the export > storage class would necessary have externally visible symbols. > Yes, this DIP is the solution to have true C-like static functions. Non-exported private will then be equivalent to C static. > Also, consider this enhancement request (which I think Walter and > Andrei approve of) - > https://issues.dlang.org/show_bug.cgi?id=13567 - which would be > doable only if private functions don't have externally visible > symbols. Can you explain why _object-level visibility_ would matter in this case? -- Johannes
Re: C style 'static' functions
On Wednesday, 19 July 2017 at 15:28:50 UTC, Steven Schveighoffer wrote:

> On 7/19/17 8:16 AM, Petar Kirov [ZombineDev] wrote:
> > On Wednesday, 19 July 2017 at 12:11:38 UTC, John Burton wrote:
> > > On Wednesday, 19 July 2017 at 12:05:09 UTC, Kagamin wrote:
> > > > Try a newer compiler, this was fixed recently.
> > >
> > > Hmm it turns out this machine has 2.0.65 on which is fairly
> > > ancient. I'd not realized this machine had not been updated.
> > > Sorry for wasting everyones' time if that's so, and thanks for
> > > the help.
> >
> > Just for the record, private is the analog of C's static. All
> > private free and member functions are callable only from the module
> > they are defined in. This is in contrast with C++, Java, C# where
> > private members are visible only in the class they are defined in.
>
> I'm not so sure of that. Private functions still generate symbols. I
> think in C, there is no symbol (at least in the object file) for
> static functions or variables. You could still call a private
> function in a D module via the mangled name I believe.
>
> -Steve
>
> Note: not 100% sure of all this, but this is always the way I've
> looked at it.

That's correct. We unfortunately can't do certain optimizations because of this (executable size related: removing unused or inlined-only functions, ...).

The reason we can't make private functions object-local are templates. A public template can access private functions, but the template instance may be emitted to another object. And as templates can't be checked speculatively, we don't even know if there's a template accessing a private function. DLLs on Windows face a similar problem. Once we get the export templates proposed in earlier DLL discussions, we can make non-exported, private functions object-local.
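For comparison, the situation described above can be reproduced with a few lines of D (module and function names are made up for illustration, not taken from the thread):

```d
module app;

// In D, `private` restricts *access* to this module, but the function
// still gets an externally visible (mangled) symbol in the object file,
// unlike a C `static` function whose symbol stays local. This is why a
// template instantiated into another object file can still link to it.
private int foo(int a, int b)
{
    return a + b + 42;
}

int bar(int a, int b)
{
    return foo(a, b);
}

void main()
{
    // Running `nm` on the compiled object would list foo's mangled
    // name (something like _D3app3fooFiiZi) as a global symbol;
    // a C `static foo` would show up as a local 't' symbol instead.
    assert(bar(1, 2) == 45);
}
```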
Re: "Rolling Hash computation" or "Content Defined Chunking"
Am Mon, 01 May 2017 21:01:43 + schrieb notna:

> Hi Dlander's.
>
> Found some interesting reads ([1] [2] [3]) about the $SUBJECT and
> wonder if there is anything available in the Dland?!
>
> If yes, pls. share.
> If not, how could it be done (D'ish)
>
> [1] -
> https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/
> -
> https://github.com/moinakg/pcompress/blob/master/rabin/rabin_dedup.c
>
> [2] -
> https://restic.github.io/blog/2015-09-12/restic-foundation1-cdc
>
> [3] - http://www.infoarena.ro/blog/rolling-hash
>
> Thanks & regards

Interesting concept. I'm not aware of any D implementation, but it shouldn't be difficult to implement this in D: https://en.wikipedia.org/wiki/Rolling_hash#Cyclic_polynomial

There's a BSD-licensed Haskell implementation, so a BSD-licensed port would be very easy to implement:
https://hackage.haskell.org/package/optimal-blocks-0.1.0
https://hackage.haskell.org/package/optimal-blocks-0.1.0/docs/src/Algorithm-OptimalBlocks-BuzzHash.html

To make an implementation D'ish it could integrate with either std.digest or process input ranges. If you want to use it exclusively for chunking, your code can be more efficient (process an InputRange until a boundary condition is met). When using input ranges, prefer some kind of buffered approach, Range!(ubyte[]) instead of Range!ubyte, for better performance. If you really want the rolling hash value for each byte in a sequence, this will be less efficient as you'll have to enter data byte-by-byte. In this case it's extremely important for performance that your function can be inlined, so use templates:

ubyte[] data;
foreach(b; data)
{
    // This needs to be inlined for performance reasons
    rollinghash.put(b);
}

-- Johannes
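To sketch what a D'ish implementation could look like, here is a minimal cyclic-polynomial (BuzzHash-style) rolling hash. The substitution-table seed, window size, and chunk-boundary mask are arbitrary placeholder choices, not taken from any of the linked implementations:

```d
import std.stdio;

enum windowSize = 48; // assumed window length, in bytes

ulong rotl(ulong x, uint n) pure nothrow @nogc @safe
{
    return (x << n) | (x >> (64 - n));
}

// Hypothetical substitution table mapping each byte to a pseudo-random
// ulong, generated at compile time with a simple xorshift mixer.
ulong[256] makeTable() pure nothrow @safe
{
    ulong[256] t;
    ulong state = 0x9E3779B97F4A7C15; // arbitrary seed
    foreach (ref e; t)
    {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        e = state;
    }
    return t;
}

immutable ulong[256] table = makeTable();

struct BuzzHash
{
    ulong hash;

    // Enter one of the first windowSize bytes.
    void put(ubyte b) pure nothrow @nogc @safe
    {
        hash = rotl(hash, 1) ^ table[b];
    }

    // Slide the window: add `incoming`, remove `outgoing`.
    void roll(ubyte incoming, ubyte outgoing) pure nothrow @nogc @safe
    {
        hash = rotl(hash, 1) ^ rotl(table[outgoing], windowSize % 64)
             ^ table[incoming];
    }
}

void main()
{
    // Declare a chunk boundary whenever the low 12 bits of the hash
    // are zero (a made-up threshold for demonstration).
    auto data = new ubyte[](1 << 16);
    foreach (i, ref b; data)
        b = cast(ubyte) (i * 31 + 7);

    BuzzHash h;
    size_t last = 0;
    foreach (i, b; data)
    {
        if (i < windowSize)
            h.put(b);
        else
            h.roll(b, data[i - windowSize]);

        if (i >= windowSize && (h.hash & 0xFFF) == 0)
        {
            writeln("chunk: ", last, " .. ", i + 1);
            last = i + 1;
        }
    }
}
```

With put/roll inlined by the compiler, scanning costs one table lookup, a rotate, and two XORs per byte, which is why the inlining advice above matters.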
Re: Compilation problems with GDC/GCC
Am Sat, 15 Apr 2017 14:01:51 + schrieb DRex: > On Saturday, 15 April 2017 at 13:08:29 UTC, DRex wrote: > > On Saturday, 15 April 2017 at 13:02:43 UTC, DRex wrote: > >> On Saturday, 15 April 2017 at 12:45:47 UTC, DRex wrote: > > > > Update to the Update, > > > > I fixed the lib failing to open by copying it to the location > > of my sources, and setting the ld to use libraries in that > > folder, however I am still getting the aforementioned undefined > > references :/ .. > > Okay, so I decided to link using GDC in verbose mode to figure > out what gdc is passing to gcc/the linker, and I copied the > output and ld linked the files, but I still have a problem. The > program is linked and created but cant run, and ld produces the > following error: > > ld: error in /usr/lib/gcc/x86_64-linux-gnu/5/collect2(.eh_frame); > no .eh_frame_hdr table will be created. > > I haven't the foggiest what this means, and no Idea how to fix > it. Does anyone know how to fix this issue? > > Are there any additional warnings? Maybe try running in verbose mode to get some more information? -- Johannes
Re: Compilation problems with GDC/GCC
Am Fri, 14 Apr 2017 13:03:22 + schrieb DRex: > On Friday, 14 April 2017 at 12:01:39 UTC, DRex wrote: > > > > the -r option redirects the linked object files into another > > object file, so the point being I can pass a D object and a C > > object to the linker and produce another object file. > > > > As for linking D files, do you mean passing the druntime > > libraries to ld? I used gdc -v and it gave me a whole bunch of > > info, it showed the an entry 'LIBRARY_PATH' which contains the > > path to libgphobos and libgdruntime as well as a whole bunch of > > other libs, i'm assuming that is what you are telling me to > > pass to the linker? > > I have tried passing libgphobos2.a and libgdruntime.a (and at one > point every library in the folder I found those two libs in) to > ld to link with my D source, but it still throws a billion > 'undefined reference' errors. > > I really need help here, I have tried so many different things > and am losing my mind trying to get this to work. > > the problem I have with passing the -r option to ld through gdc > is that -Wl is looking for libgcc_s.a which doesnt even exist on > the computer, which is annoying GDC should generally only need to link to -lgdruntime (and -lgphobos if you need it). However, if you really link using ld you'll have to provide the C startup files, -lc and similar stuff for C as well, which gets quite complicated. You'll have to post the exact commands you used and some of the missing symbol names so we can give better answers. -- Johannes
Re: Deduplicating template reflection code
Am Fri, 14 Apr 2017 13:41:45 + schrieb Moritz Maxeiner:

> On Friday, 14 April 2017 at 11:29:03 UTC, Johannes Pfau wrote:
> >
> > Is there some way to wrap the 'type selection'? In pseudo-code
> > something like this:
> >
> > enum FilteredOverloads(API) = ...
> >
> > foreach(Overload, FilteredOverloads!API)
> > {
> >
> > }
>
> Sure, but that's a bit more complex:
>
> ---
> [...] // IgnoreUDA declaration
> [...] // isSpecialFunction declaration
>
> ///
> template FilteredOverloads(API)
> {
>     import std.traits : hasUDA, isSomeFunction, MemberFunctionsTuple;
>     import std.meta : staticMap;
>     import std.typetuple : TypeTuple;
>
>     enum derivedMembers = __traits(derivedMembers, API);
>
>     template MemberOverloads(string member)
>     {
>         static if (__traits(compiles, __traits(getMember, API, member)))
>         {
>             static if (isSomeFunction!(__traits(getMember, API, member))
>                 && !hasUDA!(__traits(getMember, API, member), IgnoreUDA)
>                 && !isSpecialFunction!member)
>             {
>                 alias MemberOverloads = MemberFunctionsTuple!(API, member);
>             } else {
>                 alias MemberOverloads = TypeTuple!();
>             }
>         } else {
>             alias MemberOverloads = TypeTuple!();
>         }
>     }
>
>     alias FilteredOverloads = staticMap!(MemberOverloads, derivedMembers);
> }
>
> //pragma(msg, FilteredOverloads!API);
> foreach(Overload; FilteredOverloads!API) {
>     // function dependent code here
> }
> ---
>
> Nested templates and std.meta are your best friends if this is
> the solution you prefer :)

Great, thanks that's exactly the solution I wanted. Figuring this out by myself is a bit above my template skill level ;-)

-- Johannes
Re: Deduplicating template reflection code
Am Fri, 14 Apr 2017 08:55:48 + schrieb Moritz Maxeiner:

> mixin Foo!(API, (MethodType) {
>     // function dependent code here
> });
> foo();
> ---
>
> Option 2: Code generation using CTFE
>
> ---
> string genFoo(alias API, string justDoIt)
> {
>     import std.array : appender;
>     auto code = appender!string;
>     code.put(`[...]1`);
>     code.put(`foreach (MethodType; overloads) {`);
>     code.put(justDoIt);
>     code.put(`}`);
>     code.put(`[...]2`);
>     return code.data;
> }
>
> mixin(genFoo!(API, q{
>     // function dependent code here
> })());
> ---
>
> Personally, I'd consider the second approach to be idiomatic, but
> YMMV.

I'd prefer the first approach, simply to avoid string mixins. I think these can often get ugly ;-)

Is there some way to wrap the 'type selection'? In pseudo-code something like this:

enum FilteredOverloads(API) = ...

foreach(Overload, FilteredOverloads!API)
{

}

-- Johannes
Deduplicating template reflection code
I've got this code duplicated in quite some functions:

-
foreach (member; __traits(derivedMembers, API))
{
    // Guards against private members
    static if (__traits(compiles, __traits(getMember, API, member)))
    {
        static if (isSomeFunction!(__traits(getMember, API, member))
            && !hasUDA!(__traits(getMember, API, member), IgnoreUDA)
            && !isSpecialFunction!member)
        {
            alias overloads = MemberFunctionsTuple!(API, member);
            foreach (MethodType; overloads)
            {
                // function dependent code here
            }
        }
    }
}
-

What's the idiomatic way to refactor / reuse this code fragment?

-- Johannes
Re: GDC options
Am Sun, 12 Mar 2017 12:09:01 + schrieb Russel Winder via Digitalmars-d-learn:

> Hi,
>
> ldc2 has the -unittest --main options to compile a file that has
> unittests and no main so as to create a test executable. What causes
> the same behaviour with gdc?

https://github.com/D-Programming-GDC/GDMD/tree/dport

gdmd -unittest --main

The unittest flag for GDC is -funittest, but there's no flag to generate a main function. gdmd generates a temporary file with a main function to implement this.

-- Johannes
Re: Mallocator and 'shared'
Am Tue, 14 Feb 2017 14:38:32 + schrieb Kagamin: > On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote: > > I remember some discussions about this some years ago and IIRC > > the final decision was that the compiler will not magically > > insert any barriers for shared variables. > > It was so some years ago, not sure if it's still so. I suspect > automatic barriers come from TDPL book and have roughly the same > rationale as autodecoding. They fix something, guess if this > something is what you need. At least this thread is from 2012, so more recent than TDPL: http://forum.dlang.org/post/k7pn19$bre$1...@digitalmars.com I'm not sure though if there were any further discussions/decisions after that discussion. -- Johannes
Re: Mallocator and 'shared'
Am Tue, 14 Feb 2017 13:01:44 + schrieb Moritz Maxeiner:

> It's not supposed to. Also, your example does not implement the
> same semantics as what I posted and yes, in your example, there's
> no need for memory barriers. In the example I posted,
> synchronization is not necessary, memory barriers are (and since
> synchronization is likely to have a significantly higher runtime
> cost than memory barriers, why would you want to, even if it were
> possible).

I'll probably have to look up about memory barriers again, I never really understood when they are necessary ;-)

> > I remember some discussions about this some years ago and IIRC
> > the final decision was that the compiler will not magically
> > insert any barriers for shared variables. Instead we have
> > well-defined intrinsics in std.atomic dealing with this. Of
> > course most of this stuff isn't implemented (no shared support
> > in core.sync).
> >
> > -- Johannes
>
> Good to know, thanks, I seem to have missed that final decision.
> If that was indeed the case, then that should be reflected in the
> documentation of `shared` (including the FAQ).

https://github.com/dlang/dlang.org/pull/1570

I think it's probably somewhere in this thread:
http://forum.dlang.org/post/k7pn19$bre$1...@digitalmars.com

> 1. Slapping shared on a type is never going to make algorithms on that
> type work in a concurrent context, regardless of what is done with
> memory barriers. Memory barriers ensure sequential consistency, they
> do nothing for race conditions that are sequentially consistent.
> Remember, single core CPUs are all sequentially consistent, and still
> have major concurrency problems. This also means that having templates
> accept shared(T) as arguments and have them magically generate correct
> concurrent code is a pipe dream.
>
> 2. The idea of shared adding memory barriers for access is not going
> to ever work. Adding barriers has to be done by someone who knows what
> they're doing for that particular use case, and the compiler inserting
> them is not going to substitute.
>
> However, and this is a big however, having shared as compiler-enforced
> self-documentation is immensely useful. It flags where and when data
> is being shared.

http://forum.dlang.org/post/mailman.1904.1352922666.5162.digitalmar...@puremagic.com

> Most of the reason for this was that I didn't like the old
> implications of shared, which was that shared methods would at some
> time in the future end up with memory barriers all over the place.
> That's been dropped, [...]

-- Johannes
Re: Mallocator and 'shared'
Am Mon, 13 Feb 2017 17:44:10 + schrieb Moritz Maxeiner: > > Thread unsafe methods shouldn't be marked shared, it doesn't > > make sense. If you don't want to provide thread-safe interface, > > don't mark methods as shared, so they will not be callable on a > > shared instance and thus the user will be unable to use the > > shared object instance and hence will know the object is thread > > unsafe and needs manual synchronization. > > To be clear: While I might, in general, agree that using shared > methods only for thread safe methods seems to be a sensible > restriction, neither language nor compiler require it to be so; > and absence of evidence of a useful application is not evidence > of absence. The compiler of course can't require shared methods to be thread-safe as it simply can't prove thread-safety in all cases. This is like shared/trusted: You are supposed to make sure that a function behaves as expected. The compiler will catch some easy to detect mistakes (like calling a non-shared method from a shared method <=> system method from safe method) but you could always use casts, pointers, ... to fool the compiler. You could use the same argument to mark any method as @trusted. Yes it's possible, but it's a very bad idea. Though I do agree that there might be edge cases: In a single core, single threaded environment, should an interrupt function be marked as shared? Probably not, as no synchronization is required when calling the function. But if the interrupt accesses a variable and a normal function accesses the variable as well, the access needs to be 'volatile' (not cached into a register by the compiler; not closely related to this discussion) and atomic, as the interrupt might occur in between multiple partial writes. So the variable should be shared, although there's no multithreading (in the usual sense). > you'd still need those memory barriers. Also note that the > synchronization in the above is not needed in terms of semantics. 
However, if you move your synchronized blocks to cover the complete sub-code blocks, barriers are not necessary. Traditional mutex locking is basically a superset and is usually implemented using barriers AFAIK. I guess your point is we need to define whether shared methods guarantee some sort of sequential consistency?

struct Foo
{
    shared void doA() { lock { _tmp = "a"; } }
    shared void doB() { lock { _tmp = "b"; } }
    shared getA()     { lock { return _tmp; } }
    shared getB()     { lock { return _tmp; } }
}

thread1:
foo.doB();

thread2:
foo.doA();
auto result = foo.getA(); // could return "b"

I'm not sure how a compiler could prevent such 'logic' bugs. However, I think it should be considered a best practice to always make a shared function a self-contained entity, so that calling any other function in any order does not negatively affect the results. Though that might not always be possible.

> My opinion on the matter of `shared` emitting memory barriers is
> that either the spec and documentation[1] should be updated to
> reflect that sequential consistency is a non-goal of `shared`
> (and if that is decided this should be accompanied by an example
> of how to add memory barriers yourself), or it should be
> implemented. Though leaving it in the current "not implemented,
> no comment / plan on whether/when it will be implemented" state
> seems to have little practical consequence - since no one seems
> to actually work on this level in D - and I can thus understand
> why dealing with that is just not a priority.

I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables. Instead we have well-defined intrinsics in std.atomic dealing with this. Of course most of this stuff isn't implemented (no shared support in core.sync).

-- Johannes
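To make that pseudo-code concrete, here is a runnable rendering with an explicit mutex standing in for the hypothetical `lock` blocks (`shared` qualifiers are omitted for brevity; this is an illustration of the race, not a recommended design):

```d
import core.sync.mutex : Mutex;
import core.thread : Thread;

// Each method is individually thread-safe, yet the *sequence* of calls
// still races: the final read may observe either writer's value.
class Foo
{
    private Mutex mtx;
    private string tmp;

    this() { mtx = new Mutex; }

    void doA() { mtx.lock(); scope (exit) mtx.unlock(); tmp = "a"; }
    void doB() { mtx.lock(); scope (exit) mtx.unlock(); tmp = "b"; }
    string get() { mtx.lock(); scope (exit) mtx.unlock(); return tmp; }
}

void main()
{
    auto foo = new Foo;
    auto t1 = new Thread({ foo.doB(); });
    auto t2 = new Thread({ foo.doA(); });
    t1.start(); t2.start();
    t1.join(); t2.join();

    // "a" or "b", depending on scheduling - the 'logic bug' that
    // per-method locking cannot prevent.
    auto result = foo.get();
    assert(result == "a" || result == "b");
}
```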
Re: How to debug (potential) GC bugs?
Am Sun, 25 Sep 2016 16:23:11 + schrieb Matthias Klumpp:

> Hello!
> I am working together with others on the D-based
> appstream-generator[1] project, which is generating software
> metadata for "software centers" and other package-manager
> functionality on Linux distributions, and is used by default on
> Debian, Ubuntu and Arch Linux.
>
> For Ubuntu, some modifications on the code were needed, and
> apparently for them the code is currently crashing in the GC
> collection thread: http://paste.debian.net/840490/
>
> The project is running a lot of stuff in parallel and is using
> the GC (if the extraction is a few seconds slower due to the GC
> being active, it doesn't matter much).
>
> [...]
>
> 2) How can one debug issues like the one mentioned above
> properly? Since it seems to happen in the GC and doesn't give me
> information on where to start searching for the issue, I am a bit
> lost.

Can you get the GDC & LDC phobos versions? We added shared library support in 2.068, which replaced much of the GDC-specific backported GC/TLS code with the standard upstream implementation. So using a recent 2.068 GDC could help. Judging from the stack trace you're probably using a 2.067 phobos:
https://github.com/D-Programming-GDC/GDC/blob/722cf5670d927ef6182bf1b72765a64ca0fde693/libphobos/libdruntime/rt/lifetime.d#L1423

Here's some advice for debugging such a problem: The memory layout is usually deterministic when restarting the app in gdb with the run command. So you can do this:

gdb app
# run
# SIGSEGV in
# bt

Then get the value of p when the app crashed, in the posted stack trace 0x7fdfae368000:

# break rt_finalize2 if p == 0x7fdfae368000
# run

This should now break whenever the object is collected, so you can check if it is collected twice. You can also use next to step until you get the classinfo in c, and then print the classinfo contents:

# print c

You can also use write breakpoints to find data corruption: find the value of pc:

# break lifetime.d:1418 if p == 0x7fdfae368000
# run
# print ppv
# watch -l pc
# or: watch * (value of ppv)

Then disable the old breakpoint & run from start:

# disable 1
# run

This should now break when data is written to the location. (The commands might not be 100% correct ;-)
Re: Debug prints in @nogc
Am Tue, 30 Aug 2016 16:37:53 + schrieb Cauterite:

> On Tuesday, 30 August 2016 at 14:38:47 UTC, Nordlöw wrote:
> > Just being able to print a string is not good enough. I want
> > the variadic part writeln so I can debug-print values in my
> > buggy code. Do you have a similar solution?
>
> Take a look at the example here:
> http://dlang.org/phobos/std_traits#SetFunctionAttributes
> You could make a `assumeNogc` template similar to the example
> `assumePure`.
>
> Oh yeah, here's one I prepared earlier:
> https://dpaste.dzfl.pl/254a5c2697a7

Nice! Here's a slightly modified version: https://dpaste.dzfl.pl/8c5ec90c5b39

This version does not need an additional delegate. It can be used like this:

assumeNogc!writefln("foo %s", 42);
assumeNogc!writeln("foo", 42);
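For readers for whom the dpaste links have rotted: a helper in this spirit can be reconstructed roughly as follows, based on the `assumePure` example from std.traits. This is a from-memory sketch, not the actual paste content:

```d
import std.traits : SetFunctionAttributes, functionAttributes,
    functionLinkage, FunctionAttribute, isFunctionPointer, isDelegate;

// Casts a function pointer to an otherwise identical type that is
// additionally marked @nogc, so it becomes callable from @nogc code.
// This only silences the type system - the callee may still allocate!
auto assumeNogc(alias fn, Args...)(Args args)
{
    static auto castToNogc(T)(T f)
        if (isFunctionPointer!T || isDelegate!T)
    {
        enum attrs = functionAttributes!T | FunctionAttribute.nogc;
        return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) f;
    }
    return castToNogc(&fn!Args)(args);
}

void main() @nogc
{
    import std.stdio : writeln;
    // Debug-print from @nogc code; writeln may of course still
    // allocate under the hood.
    assumeNogc!writeln("foo ", 42);
}
```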
Re: Debug prints in @nogc
Am Tue, 30 Aug 2016 10:26:28 + schrieb Nordlöw:

> I'm struggling with debug printing in my @nogc-containers.
>
> The alternatives:
>
> assert(false, "Fixed message with no parameters");
>
> is not enough for my needs
>
> debug writeln("Fixed");
>
> doesn't bypass @nogc checking. Why?
>
> And temporary commenting out @nogc is cumbersome.
>
> I'm aware of C-style printf but that is not as flexible as
> writeln.
>
> Any advice?

-
import std.stdio;

debug
{
    enum writelnPtr = &writeln!string;
    enum void function(string) @nogc writelnNoGC =
        cast(void function(string) @nogc) writelnPtr;
}

void main() @nogc
{
    debug writelnNoGC("foo");
}
-

As long as it's only for debugging, the extra indirection shouldn't matter for performance. Even for release builds the optimizer can probably remove the indirection. An alternative solution is using mangleof + pragma(mangle) to refer to the external function. In both cases this approach can be tedious for templated methods: you don't want to write that boilerplate for every possible type combination. However, it should be possible to refactor the code above into a template and automate the boilerplate generation in some way.
Re: union initalization
Am Fri, 22 Jul 2016 01:48:52 + schrieb Rufus Smith:

> I would like to combine two types
>
> template Foo(A, B = 4)
> {
>     union
>     {
>         byte b = B;
>         int a = A << 8;
>     }
> }
>
> I get an error about overlapping default initialization. They
> don't actually overlap in this case because of the shift.

To be pedantic about this: The initializers actually do overlap. As A << 8 shifts in zeroes, these bits are initialized as well. I think there's no idiomatic way to initialize only part of an integer.
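If the goal is merely the combined bit pattern, one workaround is to fold both values into the single allowed initializer and let the other member alias it. This is a sketch of the idea, not a fix for the original template, and it assumes a little-endian target where `b` overlaps the low byte of `a`:

```d
// Only one union member carries an initializer; the other aliases it.
template Foo(int A, byte B = 4)
{
    union Foo
    {
        int a = (A << 8) | (B & 0xFF); // both "initializers" combined
        byte b;                        // low byte of `a` on little-endian
    }
}

void main()
{
    Foo!3 f;
    assert(f.a == (3 << 8 | 4)); // 772
    assert(f.b == 4);            // reads the low byte of `a`
}
```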
Re: Probably trivial Linux link problem that I've spent hours on.
Am Tue, 5 Jul 2016 00:37:54 -0700 schrieb Ali Çehreli: > On 07/04/2016 08:32 PM, WhatMeWorry wrote: > > > /usr/bin/ld: cannot find -lsqlite3 > > collect2: error: ld returned 1 exit status > > I had the same issue when building Button with dub on Ubuntu 16.04. > My hack was to create the following symlink to the already existing > libsqlite3.so.0: > >sudo ln -s /usr/lib/x86_64-linux-gnu/libsqlite3.so.0 > /usr/lib/x86_64-linux-gnu/libsqlite3.so > > Ali > Some time ago Debian (and therefore Ubuntu as well) moved the unversioned .so symlinks into the -dev packages. This means you'll always need the -dev packages now when linking with a C library, even if you only use the dynamic library.
Re: vibe.d - asynchronously wait() for process to exit
Am Tue, 21 Jun 2016 03:01:39 + schrieb Vladimir Panteleev:

> As I recently learned, there's also signalfd. With that, had
> Vibe.d had a primitive to wrap a file descriptor into a stream it
> can manage, it would be as simple as reading from it. But it
> doesn't seem to have one so I guess you need to use
> createFileDescriptorEvent and the raw C read() function.

Such a wrapper would be useful for some more things (inotify/fanotify). Anyway, I wrote such a wrapper for a serial port module: https://github.com/jpf91/vibe-serial/blob/master/src/vibe/serial.d#L145

Only reading is fully implemented / tested, but maybe this is still useful. This vibe.d issue could cause problems though: https://github.com/rejectedsoftware/vibe.d/issues/695
Re: Does D optimize sqrt(2.0)?
On Thursday, 11 February 2016 at 07:41:55 UTC, Enjoys Math wrote: If I just type out sqrt(2.0) in D, is that automatically made into a constant for me? Thanks. For GDC the answer is yes: http://explore.dgnu.org/#%7B%22version%22%3A3%2C%22filterAsm%22%3A%7B%22labels%22%3Atrue%2C%22directives%22%3Atrue%2C%22commentOnly%22%3Atrue%7D%2C%22compilers%22%3A%5B%7B%22sourcez%22%3A%22IYVwLg9gBAZhEAoCUAoA3iqWoEsC2ADhAE5hQDOYAJgHR7BgAWA3JtsQKZgjEB2FAR1IIATDQAMSVgF8gAA%3D%22%2C%22compiler%22%3A%22gdc-64%22%2C%22options%22%3A%22%22%7D%5D%7D LDC will probably optimize this as well. Not sure about DMD.
Re: Dub packages: Best practices for windows support
Am Sat, 30 Jan 2016 01:17:13 + schrieb Mike Parker: > On Friday, 29 January 2016 at 19:46:40 UTC, Johannes Pfau wrote: > > > Now on windows, things are more complicated. First of all, I > > can't seem > > to simply use "libs": ["foo"] as the linker won't find the C > > import .lib file. Then apparently there's no way to add a > > library search > > path with the MSVC linker? So I have to use the full path: > > "libs": [$PACKAGE_DIR\\path\\foo]. Without $PACKAGE_DIR paths > > are > > incorrect for applications using the library because of a dub > > problem. > > And then I'll have to use different import libraries and paths > > for -m32, > > -m64 and -m32mscoff. > > Now you know my motivation for creating Derelict. > > > > All this leads to the following questions: > > * Should cairoD copy the DLLs for all applications using > > cairoD? This > > way simply adding a dependency will work. However, if users > > want to > > use a self compiled cairo DLL with fewer dependencies there's > > no easy > > way to disable the file copying? > > * Should cairoD link in the .lib DLL import file? This might be > > useful > > even when not copying the DLLs. But if users want to link a > > custom > > import library that would be difficult. OTOH not copying DLLs > > and/or > > not linking the import library will make dub.json much more > > complicated for simple applications, especially if these > > applications > > want to support -m32, -m32mscoff and -m64. > > IMO, no to both of these (for now). Including all of these > dependencies is going to mean that all of your users, no matter > the platform, will pull the down with every new version of gtkd. > I recommend you provide all of the precompiled DLLs and import > libraries as a separate download and let the user do the > configuration needed to get it to link. Most Windows developers > are used to it. You can provide instructions for those who aren't. Thanks for the detailed answer. 
Thinking about this some more, copying the DLLs automatically is really a bad idea. The cairo dll depends on Freetype, so I'd have to ship a Freetype dll as well. But cairoD depends on DerelictFT and if DerelictFT then decided to also install DLLs automatically anything could happen. So even in this simple case installing DLLs for the user is not a good idea.
Re: UTF-16 endianess
Am Fri, 29 Jan 2016 18:58:17 -0500 schrieb Steven Schveighoffer: > On 1/29/16 6:03 PM, Marek Janukowicz wrote: > > On Fri, 29 Jan 2016 17:43:26 -0500, Steven Schveighoffer wrote: > >>> Is there anything I should know about UTF endianess? > >> > >> It's not any different from other endianness. > >> > >> In other words, a UTF16 code unit is expected to be in the > >> endianness of the platform you are running on. > >> > >> If you are on x86 or x86_64 (very likely), then it should be > >> little endian. > >> > >> If your source of data is big-endian (or opposite from your native > >> endianness), > > > > To be precise - my case is IMAP UTF7 folder name encoding and I > > finally found out it's indeed big endian, which explains my problem > > (as I'm indeed on x86_64). > >> it will have to be converted before treating as a wchar[]. > > > > Is there any clever way to do the conversion? Or do I need to swap > > the bytes manually? > > No clever way, just the straightforward way ;) > > Swapping endianness of 32-bits can be done with core.bitop.bswap. > Doing it with 16 bits I believe you have to do bit shifting. > Something like: > > foreach(ref elem; wcharArr) elem = ((elem << 8) & 0xff00) | ((elem >> > 8) & 0x00ff); > > Or you can do it with the bytes directly before casting There's also a phobos solution: bigEndianToNative in std.bitmanip.
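A minimal sketch of the std.bitmanip approach for the IMAP case (big-endian UTF-16 code units coming off the wire):

```d
import std.bitmanip : bigEndianToNative;

void main()
{
    // One UTF-16 code unit as it arrives over the wire (big-endian)
    ubyte[2] raw = [0x00, 0x41]; // 'A'

    // bigEndianToNative swaps only when the host is little-endian,
    // so this gives the same result on every platform
    wchar c = bigEndianToNative!wchar(raw);
    assert(c == 'A');
}
```

`bigEndianToNative!T` takes a `ubyte[T.sizeof]` static array, which also catches size mismatches at compile time.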
Dub packages: Best practices for windows support
I want to add proper windows support to the cairoD dub package. cairoD is a wrapper for the [cairo](http://cairographics.org/) C library. As it can be difficult to obtain cairo DLLs on windows I want to ship these DLLs with cairoD. It is also possible to enable or disable additional cairo features. I want to use the DLLs from the [MSYS2 project](http://msys2.github.io/) which enable all relevant features. Because of that, the MSYS2 cairo DLL depends on _13_ other DLLs. On linux it would be common practice to simply have a "libs": ["foo"] entry in the dub configuration for cairoD. This way applications using the library won't have to manually link the C library. And everything (dub run, dub test) will just work. Now on windows, things are more complicated. First of all, I can't seem to simply use "libs": ["foo"] as the linker won't find the C import .lib file. Then apparently there's no way to add a library search path with the MSVC linker? So I have to use the full path: "libs": [$PACKAGE_DIR\\path\\foo]. Without $PACKAGE_DIR paths are incorrect for applications using the library because of a dub problem. And then I'll have to use different import libraries and paths for -m32, -m64 and -m32mscoff. Additionally, applications can't run without the DLLs in the same directory as the target application executable. So dub run and dub test won't work. As a solution it is possible to do this in the cairoD dub configuration: "copyFiles": ["lib\\dmc32\\*.dll"] The path again depends on -m32 vs -m64 and -m32mscoff All this leads to the following questions: * Should cairoD copy the DLLs for all applications using cairoD? This way simply adding a dependency will work. However, if users want to use a self compiled cairo DLL with fewer dependencies there's no easy way to disable the file copying? * Should cairoD link in the .lib DLL import file? This might be useful even when not copying the DLLs. But if users want to link a custom import library that would be difficult. 
OTOH not copying DLLs and/or not linking the import library will make dub.json much more complicated for simple applications, especially if these applications want to support -m32, -m32mscoff and -m64. * What's the best way to support all of -m32, -m32mscoff and -m64? I've got working import libraries and DLLs for all configurations, but how to support this in dub.json? I think the best way would be to have -m32mscoff as a special architecture or flag for dub, but I can't seem to find any documentation about that. -m64 can be detected by x86_64 in platforms, but how to detect -m32 vs -m32mscoff? Alternatively I could simply let users choose the configurations manually. But adding dflags: ["-m32mscoff"] does not build the Derelict dependencies with the -m32mscoff flag so linking will fail... DFLAGS="-m32mscoff" doesn't work with dub test as the dub test command ignores the DFLAGS variable. I'd have to check whether it works for applications, but then there's still no way to use the correct cairo import library in cairoD's dub.json.
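For the per-platform library paths, dub's platform-suffixed settings can at least separate Posix from the Windows architectures. A sketch of such a dub.json fragment (the paths and library names are illustrative, not cairoD's actual layout; this doesn't solve the -m32 vs -m32mscoff distinction discussed above):

```json
{
    "libs-posix": ["cairo"],
    "libs-windows": ["$PACKAGE_DIR/lib/cairo"],
    "copyFiles-windows-x86": ["lib/dmc32/*.dll"],
    "copyFiles-windows-x86_64": ["lib/msvc64/*.dll"]
}
```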
Re: Dub packages: Best practices for windows support
Am Fri, 29 Jan 2016 20:46:40 +0100 schrieb Johannes Pfau: > DFLAGS="-m32mscoff" doesn't work with dub test as the dub test > command ignores the DFLAGS variable. I'd have to check whether it > works for applications, but then there's still no way to use the > correct cairo import library in cairoD's dub.json Should have mentioned DFLAGS is an environment variable in that example, e.g. using dub like this: DFLAGS="-m32mscoff" dub run DFLAGS="-m32mscoff" dub test
Re: Convert some ints into a byte array without allocations?
Am Sat, 16 Jan 2016 18:05:46 + schrieb Samson Smith: > On Saturday, 16 January 2016 at 16:28:21 UTC, Jonathan M Davis > wrote: > > > > But it will be less error-prone to use those functions, and if > > you _do_ actually need to swap endianness, then they're exactly > > what you should be using. We've had cases that have come up > > where using those functions prevented bugs precisely because > > the person writing the code got the sizes wrong (and the > > compiler complained, since nativeToBigEndian and friends deal > > with the sizes in a typesafe manner). > > > > - Jonathan M Davis > > If I'm hoping to have my hash come out the same on both bigendian > and littleendian machines but not send the results between > machines, should I take these precautions? I want one machine to > send the other a seed (in an endian safe way) and have both > machines generate the same hashes. > > Here's the relevant code: > > uint coordHash(int x, int y, uint seed){ > seed = FNV1a((cast(ubyte*)&x)[0 .. x.sizeof], seed); > return FNV1a((cast(ubyte*)&y)[0 .. y.sizeof], seed); > } > // Byte order matters for the below function > uint FNV1a(ubyte[] bytes, uint code){ > for(int iii = 0; iii < bytes.length; ++iii){ > code ^= bytes[iii]; > code *= FNV_PRIME_32; > } > return code; > } > > Am I going to get the same outcome on all machines or would a > byte array be divided up in reverse order to what I'd expect on > some machines? If it is... I don't mind writing separate versions > depending on endianness with > version(BigEndian)/version(LittleEndian) to get around a runtime > check... I'm just unsure of how endianness factors into the order > of an array... If you use the simple pointer cast you will end up with different byte orders on little vs big endian machines. Endianness does not affect array order in general: ubyte[] myArray = [1, 2, 3, 4]; myArray[0] == 1, myArray[1] == 2, ... This is the same on big vs little endian machines.
Endianness does affect the representation of (multi-byte) numbers: int a = 42; ubyte[4] b = *cast(ubyte[4]*)&a; This will generate [42, 0, 0, 0] on little endian, [0, 0, 0, 42] on big endian. So if you want the same byte output for all architectures, just choose either big or little endian (which one doesn't matter). Then convert the values on the other architecture (e.g. if you choose little endian, do nothing on little endian, swap bytes on big endian). TL;DR: Just use nativeToBigEndian or nativeToLittleEndian from std.bitmanip, these functions do the right thing. These functions do not use runtime checks, they use version(Big/LittleEndian) internally. nativeToBigEndian does not do anything on big endian machines, nativeToLittleEndian doesn't do anything on little endian machines.
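A sketch of the questioner's coordHash made portable with nativeToLittleEndian (choosing little endian as the canonical order; FNV1a is reproduced from the question for self-containment):

```d
import std.bitmanip : nativeToLittleEndian;

enum uint FNV_PRIME_32 = 16_777_619;

uint FNV1a(const(ubyte)[] bytes, uint code)
{
    foreach (b; bytes)
    {
        code ^= b;
        code *= FNV_PRIME_32;
    }
    return code;
}

// Hash the little-endian representation so every host produces the
// same value; no version(BigEndian)/version(LittleEndian) needed.
uint coordHash(int x, int y, uint seed)
{
    ubyte[4] bx = nativeToLittleEndian(x); // no-op on LE hosts
    ubyte[4] by = nativeToLittleEndian(y); // byte swap on BE hosts
    seed = FNV1a(bx[], seed);
    return FNV1a(by[], seed);
}
```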
Re: Convert some ints into a byte array without allocations?
Am Sat, 16 Jan 2016 15:46:00 + schrieb Samson Smith: > On Saturday, 16 January 2016 at 14:42:27 UTC, Yazan D wrote: > > On Sat, 16 Jan 2016 14:34:54 +, Samson Smith wrote: > > > >> [...] > > > > You can do this: > > ubyte[] b = (cast(ubyte*)&a)[0 .. int.sizeof]; > > > > It is casting the pointer to `a` to a ubyte (or byte) pointer > > and then taking a slice the size of int. > > This seems to work. Thankyou! You need to be careful with that code though. As you're taking the address of the a variable, b.ptr will point to a. If a is on the stack you must make sure you do not escape the b reference. Another option is using static arrays: ubyte[a.sizeof] b = *(cast(ubyte[a.sizeof]*)&a); Static arrays are value types. Whenever you pass b to a function it's copied and you don't have to worry about the lifetime of a. This pointer cast (int => ubyte[4]) is safe, but the inverse operation, casting from ubyte[4] to int, is not safe. For the inverse operation you'd have to use unions as shown in Yazan's response.
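A minimal sketch of that inverse direction (ubyte[4] => int) via a union; the `IntBytes` name is illustrative:

```d
// Sketch: reassembling an int from bytes through a union, since
// pointer-casting a ubyte[] back to int is not safe.
union IntBytes
{
    ubyte[4] bytes;
    int value;
}

void main()
{
    IntBytes u;
    u.bytes = [42, 0, 0, 0];
    // The result is endian-dependent by design: 42 on a
    // little-endian machine, 42 << 24 on a big-endian one.
    int a = u.value;
}
```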
Re: Socket - handling large numbers of incoming connections
Am Mon, 21 Dec 2015 23:29:14 + schrieb Adam D. Ruppe: > On Monday, 21 December 2015 at 23:17:45 UTC, Daniel Kozák wrote: > > If you want to reinvent the wheel you can use > > [...] it isn't like the bundled functions with the OS are > hard to use [...] > epoll and similar interfaces are not difficult to use. But you need to be careful to handle all error conditions caused by low-level POSIX I/O calls (read/write) correctly. (Partial reads/writes, how do you handle EINTR? How do you handle error codes returned by the close function*? ...) * http://lwn.net/Articles/576478/
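A sketch of what handling two of those conditions (EINTR and partial reads) looks like in D; `readAll` is an illustrative helper, not a library function:

```d
import core.stdc.errno : errno, EINTR;
import core.sys.posix.unistd : read;

// Sketch: retry read() on EINTR and loop on partial reads.
// Returns bytes read, or -1 on a genuine I/O error.
ptrdiff_t readAll(int fd, ubyte[] buf)
{
    size_t done = 0;
    while (done < buf.length)
    {
        auto n = read(fd, buf.ptr + done, buf.length - done);
        if (n < 0)
        {
            if (errno == EINTR)
                continue; // interrupted by a signal: just retry
            return -1;    // real error: let the caller inspect errno
        }
        if (n == 0)
            break;        // EOF before the buffer was filled
        done += n;
    }
    return cast(ptrdiff_t) done;
}
```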
Re: Which GDC to download?
Am Thu, 01 Oct 2015 12:04:38 + schrieb NX: > Windows X86 64bit (x86_64-w64-mingw32) > > Standard builds > Target               DMDFE    Runtime  GCC    GDC revision  Build Date > arm-linux-gnueabi    2.066.1  yes      5.2.0  dadb5a3784    2015-08-30 > arm-linux-gnueabihf  2.066.1  yes      5.2.0  dadb5a3784    2015-08-30 > x86_64-w64-mingw32   2.066.1  yes      5.2.0  dadb5a3784    2015-08-30 > > I'm totally confused about what these mean: > > 1) Why there is a download targeting arm-linux-gnueabi(hf) and > what exactly it means? Is this a cross-compiler which will > produce obj files containing ARM instructions or what? If so, > will linking just work? and how? Linking only works for libraries which are included with the cross compiler. That usually means only the C/C++/D standard libraries will be available. You can link to other libraries with a cross-compiler, but you need to provide these libraries in some way: http://wiki.dlang.org/GDC/Cross_Compiler http://wiki.dlang.org/GDC/Cross_Compiler/Existing_Sysroot http://wiki.dlang.org/GDC/Cross_Compiler/Existing_Sysroot#Using_a_compiler_from_gdcproject.org.2Fdownloads For more information: http://build-gdc.readthedocs.org/en/latest/Cross-Compiler%20Basics/ > > 2) Is what I understand from "cross-compiler" correct? (a > compiler that can target different architectures than the host > architecture it's compiled for) > > 3) Which one to choose if I just want to write & compile windows > programs? > Adding to Adam's answer I guess we (the GDC team) have to somehow present 'native compilers' more prominently. > 4) x86_64-w64-mingw32 is commented as "Unsupported alpha build. > SEH"? is that means windows-targeting version of the compiler is > highly unstable/not ready yet? What's "SEH"? Unfortunately Windows GDC builds are very unstable right now. I'd recommend using DMD or LDC for Windows.
Re: Threading Questions
Am Tue, 29 Sep 2015 15:10:58 -0400 schrieb Steven Schveighoffer: > > > 3) Why do I have to pass a "Mutex" to "Condition"? Why can't I just > > pass an "Object"? > > An object that implements the Monitor interface may not actually be a > mutex. For example, a pthread_cond_t requires a pthread_mutex_t to > operate properly. If you passed it anything that can act like a lock, > it won't work. So the Condition needs to know that it has an actual > Mutex, not just any lock-like object. > > I think I advocated in the past to Sean that Condition should provide > a default ctor that just constructs a mutex, but it doesn't look like > that was done. > But you'll need access to the Mutex in user code as well. And often you use multiple Conditions with one Mutex so a Condition doesn't really own the Mutex. > > > > 4) Will D's Condition ever experience spurious wakeups? > > What do you mean by "spurious"? If you notify a condition, anything > that is waiting on it can be woken up. Since the condition itself is > user defined, there is no way for the actual Condition to verify you > will only be woken up when it is satisfied. > > In terms of whether a condition could be woken when notify *isn't* > called, I suppose it's possible (perhaps interrupted by a signal?). > But I don't know why it would matter -- per above you should already > be checking the condition while within the lock. Spurious wakeup is a common term when talking about posix conditions and it does indeed mean a wait() call can return without ever calling notify(): https://en.wikipedia.org/wiki/Spurious_wakeup http://stackoverflow.com/questions/8594591/why-does-pthread-cond-wait-have-spurious-wakeups And yes, this does happen for core.sync.condition as well. 
As a result you'll always have to check in a loop:

synchronized(mutex)
{
    while(!some_flag_or_expression)
        cond.wait();
}

// in another thread:
synchronized(mutex)
{
    some_flag_or_expression = true;
    cond.notify();
}

> I think there are cases with multiple threads where you can > potentially wake up the thread waiting on a condition AFTER the > condition was already reset by another. > > 5) Why doesn't D's Condition.wait take a predicate? I assume this is > because the answer to (4) is no. > > The actual "condition" that you are waiting on is up to you to > check/define. > He probably means that you could pass an expression to wait and wait would do the looping / check internally. That's probably a nicer API but not implemented. > > 6) Does 'shared' actually have any effect on non-global variables > > beside the syntactic regulations? > > I believe shared doesn't alter code generation at all. It only > prevents certain things and affects the type. > It shouldn't. I think in GDC it does generate different code, but that's an implementation detail that needs to be fixed.
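The wait-in-a-loop pattern above as a minimal self-contained sketch (names are illustrative; note the Condition constructor takes the actual Mutex, per the earlier discussion):

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;

__gshared Mutex mutex;
__gshared Condition cond;
__gshared bool ready;

void setup()
{
    mutex = new Mutex();
    cond = new Condition(mutex); // needs a real Mutex, not just any lock
}

void waiter()
{
    synchronized (mutex)
    {
        while (!ready)   // re-check the predicate: guards against
            cond.wait(); // spurious wakeups and stolen notifications
    }
}

void notifier()
{
    synchronized (mutex)
    {
        ready = true;
        cond.notify();
    }
}
```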
Re: Debugging D shared libraries
Am Tue, 22 Sep 2015 14:40:43 + schrieb John Colvin <john.loughran.col...@gmail.com>: > On Tuesday, 22 September 2015 at 14:37:11 UTC, Russel Winder > wrote: > > On Sun, 2015-09-20 at 17:47 +0200, Johannes Pfau via > > Digitalmars-d -learn wrote: > >> [...] > > […] > >> [...] > > > > Debian Jessie is far too out of date to be useful. I'm on > > Debian Sid > > (still quite old), and Fedora Rawhide (not quite so old). > > > > Sadly GDC on Debian Sid tells me 5.2.1 20150911 which may well > > tell Iain which DMD is being used, but I haven't a clue. :-) > > > > […] > > > > The real problem using GDC is: > > > > gdc -I. -O3 -fPIC -c -o processAll_library_d.o > > processAll_library_d.d > > /usr/include/d/core/stdc/config.d:28:3: error: static if > > conditional > > cannot be at global scope > >static if( (void*).sizeof > int.sizeof ) > >^ > > > > I haven't had chance to sit down and see if this is reasonable > > or not. > > seeing as it's an error in core.stdc.config, I'd say it's > definitely not reasonable It's indeed strange. @Russel Winder if you can reproduce this with the latest GDC* please file a bug report. Even the error message doesn't make sense: static if works fine at global scope, AFAIK. * you could use the latest binaries from http://gdcproject.org/downloads
Re: Debugging D shared libraries
Am Sat, 19 Sep 2015 17:41:41 +0100 schrieb Russel Winder via Digitalmars-d-learn: > On Sat, 2015-09-19 at 16:33 +, John Colvin via Digitalmars-d-learn > wrote: > > On Saturday, 19 September 2015 at 16:15:45 UTC, Russel Winder > > wrote: > > > Sadly the: > > > > > > pragma(LDC_global_crt_ctor, 0) > > > void initRuntime() { > > > import core.runtime: Runtime; > > > Runtime.initialize(); > > >} > > > > > > will not compile under DMD :-( > > > > version(LDC){ /* ... */ } > > > > not that it helps make things work correctly, but at least > > they'll compile :) > > Indeed, it works well. Well for LDC. DMD and GDC are still broken. My > GDC problems are deeper than this code: Debian packages seem to have > weird problems and Fedora do not package GDC. > Have you tried using a newer GDC version? The Debian jessie version probably uses the 2.064.2 frontend. I wanted to add @attribute(cctor/cdtor) support for some time now, I even wrote the code some time ago but didn't push it to the main repo for some reason. I'll put it on the TODO list but I can't work on this for the next 2-3 weeks.
Re: Debugging D shared libraries
Am Sun, 20 Sep 2015 17:47:00 +0200 schrieb Johannes Pfau: > Am Sat, 19 Sep 2015 17:41:41 +0100 > schrieb Russel Winder via Digitalmars-d-learn > : > > > On Sat, 2015-09-19 at 16:33 +, John Colvin via > > Digitalmars-d-learn wrote: > > > On Saturday, 19 September 2015 at 16:15:45 UTC, Russel Winder > > > wrote: > > > > Sadly the: > > > > > > > > pragma(LDC_global_crt_ctor, 0) > > > > void initRuntime() { > > > > import core.runtime: Runtime; > > > > Runtime.initialize(); > > > >} > > > > > > > > will not compile under DMD :-( > > > > > > version(LDC){ /* ... */ } > > > > > > not that it helps make things work correctly, but at least > > > they'll compile :) > > > > Indeed, it works well. Well for LDC. DMD and GDC are still broken. > > My GDC problems are deeper that this code: Debian packages seem to > > have weird problems and Fedora do not package GDC. > > > > Have you tried using a newer GDC version? The debian jessie version > probably uses the 2.064.2 frontend. > > I wanted to add @attribute(cctor/cdtor) support for some time now, I > even wrote the code some time but didn't push it to the main repo for > some reason. I'll put it on the TODO list but I can't work on this for > the next 2-3 weeks. Just realized this thread is titled "Debugging D shared libraries" ;-) GDC does not yet support shared libraries.
Re: No -v or -deps for gdc?
Am Tue, 15 Sep 2015 12:19:34 + schrieb Atila Neves: > gdmd supports those options but gdc doesn't. Is that likely to > always be the case? > > Atila gdmd is just a wrapper around gdc. If something is supported by gdmd it must also be supported by gdc (the exact switch names might differ). See: https://github.com/D-Programming-GDC/GDMD/blob/master/dmd-script Seems like -v maps to -fd-verbose and -deps to -fdeps.
Re: Huge output size for simple programs
Am Fri, 11 Sep 2015 05:36:27 -0700 schrieb Jonathan M Davis via Digitalmars-d-learn: > Now, as to why the gdc binary is so large, I don't know. My guess is > that it has something to do with the debug symbols. You could try > building with -g or -gc to see how that affects the dmd-generated > binary. The libphobos shipped with the GDC binaries or with a self-compiled GDC does contain debug information (default GCC policy IIRC?). That combined with static linking leads to huge executables. A simple 'strip' command reduces the executable size.
Re: Why hide a trusted function as safe?
Am Sun, 26 Jul 2015 13:11:51 + schrieb Dicebot pub...@dicebot.lv: I remember doing something like that in druntime because of objects - you can't override a @safe method prototype with a @trusted one. That's probably related to the fact that @safe and @trusted functions have different mangled names. This also means that going from @trusted to @safe later on causes ABI breakage.
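The mangling difference can be seen directly (per the D ABI, @trusted mangles with the function attribute `Ne` and @safe with `Nf`; a minimal sketch):

```d
// @safe and @trusted functions mangle differently, so swapping one
// for the other changes the symbol name and breaks the ABI.
@safe    void f() {} // mangled name contains FNfZv
@trusted void g() {} // mangled name contains FNeZv

void main()
{
    // The compiler exposes the mangled names at compile time:
    pragma(msg, f.mangleof);
    pragma(msg, g.mangleof);
}
```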
Re: Can't use toHexString
Am Sun, 19 Jul 2015 12:08:16 + schrieb Adam D. Ruppe destructiona...@gmail.com: This line is illegal: return toHexString!(Order.decreasing)(crc.finish()); The std.digest.toHexString and variants return a *static array* which is stack data and has a scope lifetime. It is a horrendous compiler bug that allows it to implicitly convert to string - it should not compile because it silently gives bad results, escaping a reference to temporary memory while claiming it is immutable. Confusingly though, this line is fine: return crcHexString(crc.finish()); It is an EXTREMELY subtle difference in the function signature (and totally bug prone, yet the documentation doesn't call it out...) Good catch. The version returning a string is meant to be used with the interface digest API which produces ubyte[]. As we don't know the length at compile time in that case we can't return a static array. But in hindsight it might have been better to use another function for this and not an overload, especially considering the static array => string conversion bug. Documentation pull request: https://github.com/D-Programming-Language/phobos/pull/3500 The good news is that the conversion bug has finally been fixed: https://issues.dlang.org/show_bug.cgi?id=9279
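A sketch of the safe way to get a string out of the static-array overload (`crcHex` is an illustrative name; on older Phobos versions the `toHexString`/`Order` imports live in std.digest.digest instead of std.digest):

```d
import std.digest : Order, toHexString;
import std.digest.crc : CRC32;

string crcHex(ref CRC32 crc)
{
    // finish() yields ubyte[4]; toHexString turns it into a char[8]
    // stack temporary. .idup copies it into GC memory, so no
    // reference to the temporary escapes.
    return toHexString!(Order.decreasing)(crc.finish()).idup;
}
```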
Re: How to setup GDC with Visual D?
Am Sat, 04 Jul 2015 11:15:57 + schrieb Guy Gervais ggerv...@videotron.ca: On Saturday, 4 July 2015 at 08:34:00 UTC, Johannes Pfau wrote: It's kinda fascinating that GDC/MinGW seems to work for some real world applications. I haven't really tried a real world application as of yet; mostly small puzzle-type problems to get a feel for D. I did run into a problem with this code: int answer = to!(int[])(split("7946590 6020978")).sum; It compiles fine under DMD but gives the error Error: no property 'sum' for type 'int[]' with GDC. GDC uses a slightly older phobos version. It seems quite a few imports changed in the latest phobos version. You need to import std.algorithm and your code should work with gdc: http://goo.gl/l4zKki
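A self-contained sketch of the fixed snippet with the missing import added:

```d
import std.algorithm : sum; // the import that was missing under GDC
import std.array : split;
import std.conv : to;

void main()
{
    // split() with no arguments splits on whitespace
    int answer = to!(int[])("7946590 6020978".split).sum;
    assert(answer == 13_967_568);
}
```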
Re: How to setup GDC with Visual D?
Am Sat, 04 Jul 2015 06:30:31 + schrieb Marko Grdinic mra...@gmail.com: On Friday, 3 July 2015 at 23:45:15 UTC, Guy Gervais wrote: On Friday, 3 July 2015 at 19:17:28 UTC, Marko Grdinic wrote: Any advice regarding how I can get this to work? Thanks. I got GDC to work with VS2013 + VisualD by going into Tools-Options (The VS menu, not the one under Visual D) and adding the paths under Projects and Solutions - Visual D Settings - GDC Directories. I put the path to the bin folder in MinGW64 and the bin folder in GDC. I get a 10%-15% speed improvement, which is nice, but my binaries are 10 times larger. I have no idea where Visual D is supposed to be looking at, but I managed to get it to work by adding the Gdc/bin directory into path. With that it finds it anywhere. It's kinda fascinating that GDC/MinGW seems to work for some real world applications. Please note that it's in early alpha state, mostly unsupported and not really well-tested though. (I hope this will change later this year) The 10x larger binaries are indeed caused by debug info. If you don't need the debug info you can use the included strip.exe to remove it: strip.exe yourapp.exe
Re: Startup files for STM32F4xx
Am Sun, 26 Apr 2015 00:14:42 + schrieb Mike n...@none.com: Usage: auto b = PORTB.load(); PORTB.toggle!"PIN0"; PORTB.PIN0 = Level.low; writeln(PORTB.PIN0); PORTB.TEST = 0b000; That's some nice code! and really leveraging D to great effect. I know that Volatile!(T) took some engineering to get right, so it would be nice to have that as an official type IMO. It should certainly be in druntime in the long term. But first it needs quite some testing. Maybe I'll propose it for std.experimental or I'll create a dub-package first. The remaining problem is performance. (With optimization the generated code is as good as equivalent C code. However, we need to avoid size overhead: e.g. struct initializers and the opX functions shouldn't generate functions in the executable, though that can be fixed with the linker) I'm not sure I follow how the linker can solve this. Could you elaborate? That was a sloppily written statement, sorry. Performance as in speed / number of instructions / cycles is not an issue with optimization. Default initialization is a problem as even all-zero initializers go into bss right now. So we need n bytes per struct type in the bss section. (For the register wrapper every register is a type). If they went into rodata instead and if the linker merges all-zero symbols then the overhead is limited to the biggest used struct size. I'm not sure if linkers do this optimization. For small functions (the generated properties, operator overloads) the problem is that these are always force-inlined for performance but we still output a complete function (in order to give the function a valid address and similar things). The linker can remove these with -ffunction-sections and --gc-sections. It might still be nice to have 'static (force)inline' / 'extern (force)inline' semantics[1][2][3]. [1] http://stackoverflow.com/a/216546/471401 [2] http://www.greenend.org.uk/rjk/tech/inline.html [3] https://gcc.gnu.org/onlinedocs/gcc/Inline.html
Re: Startup files for STM32F4xx
Am Sat, 25 Apr 2015 11:38:45 + schrieb Martin Nowak c...@dawg.eu: On Saturday, 25 April 2015 at 05:07:04 UTC, Jens Bauer wrote: I hope to find a good way to use import for microcontroller libraries, so it'll be easy for everyone. I'm thinking about something like ... import mcu.stm32f439.all I think that belongs in the makefile/dub.json as -version=STM32F439. Then you could simply import mcu.gpio or mcu.spi. We need a better / clever generic approach to solve 'board configuration' issues. Here version blocks work but you might also want to specify other configuration options (clock speed if static, ...) and we don't have -D to define constants. I think we could use 'reverse' imports but I'm not sure if it's a horrible hack or a great idea:

When compiling library code:

// board.di (only 'runtime' config variables)
module board.config;
__gshared const int configA;
__gshared const int configB;

// library code: lib.d
import board.config;

void doFoo() // Function = runtime value
{
    auto a = configA;
}

void templateFoo()() // Template = CTFE value
{
    auto b = ctfeConfigA; // This value is not in the .di => error if
                          // the template is instantiated in the library
}

gdc lib.d -Ipath/for/board.di -o lib.o

User code:

// board.d ('runtime' + 'ctfe' config variables)
module board.config;
__gshared const int configA = 42;
__gshared const int configB = 42;
enum ctfeConfigA = 42;

gdc lib.o board.d
Re: Startup files for STM32F4xx
Am Sat, 25 Apr 2015 18:31:45 + schrieb Jens Bauer doc...@who.no: On Saturday, 25 April 2015 at 17:58:59 UTC, Timo Sintonen wrote: On Saturday, 25 April 2015 at 17:04:18 UTC, Jens Bauer wrote: I think volatileLoad and volatileStore are intended for this (please correct me if my understanding is wrong). Yes. Actually I am not sure whether they already exist in gdc or not. Try to write for example regs.cmdr |= 0x20 with these functions and guess how many users will move to another language. Ah, I get the point now. :) I don't want to start another volatile discussion, but to me it seems an attribute would not be a bad idea. -And for completeness... read-only, write-only, read/write and perhaps even 'prohibited access'. I recall that something was marked prohibited in some way in a library once; I forgot how they did it, though. volatileLoad is not in gdc yet. I've written the code some months ago but I need to update it and then it needs to be reviewed. Always using volatileLoad/Store is annoying. The solution is to write a wrapper:

Volatile!T: http://dpaste.dzfl.pl/dd7fa4c3d42b

Volatile!size_t value;
value += 1;
assert(value == 1);

Register wrapper: http://dpaste.dzfl.pl/3e6314714541

Register definition:

enum Level : ubyte { low = 0, high = 1 }
enum fields = [
    Field("PIN0", 0, 0, true,  "Level", Access.readWrite),
    Field("PIN1", 1, 1, true,  "Level", Access.readWrite),
    Field("TEST", 2, 4, false, "ubyte", Access.readWrite)];
mixin(generateRegisterType!ubyte("PORT", fields));
pragma(address, 0x25) extern __gshared PORTRegister PORTB;

Usage:

auto b = PORTB.load();
PORTB.toggle!"PIN0";
PORTB.PIN0 = Level.low;
writeln(PORTB.PIN0);
PORTB.TEST = 0b000;

The remaining problem is performance. (With optimization the generated code is as good as equivalent C code. However, we need to avoid size overhead: e.g. struct initializers and the opX functions shouldn't generate functions in the executable, though that can be fixed with the linker)
Re: md5 return toHexString
Am Fri, 24 Apr 2015 18:02:57 + schrieb AndyC a...@squeakycode.net: On Friday, 24 April 2015 at 17:56:59 UTC, tcak wrote: On Friday, 24 April 2015 at 17:50:03 UTC, AndyC wrote: Hi All, I cannot seem to understand whats wrong with this:

// main.d
import std.stdio;
import std.digest.md;
import std.file;

string md5sum(const string fname)
{
    MD5 hash;
    File f = File(fname, "rb");
    foreach (ubyte[] buf; f.byChunk(4096))
    {
        hash.put(buf);
    }
    string s = toHexString!(LetterCase.lower)(hash.finish());
    writeln(s); // This is correct
    return s;
}

void main()
{
    string crc = md5sum("main.d");
    writeln(crc); // This is garbage
}

The writeln in md5sum prints correctly, but the return s seems to mess it up? What's going on? Thanks for you time, -Andy Just do that return s.dup(). Then it works for me. That's probably due to the fact that s is in stack. But I am not sure how toHexString works. Ah, yep, that works. I'd originally written it as: return toHexString!(LetterCase.lower)(hash.finish()); Which doesn't work, so used the temp string to test it. Now I'm using: return toHexString!(LetterCase.lower)(hash.finish()).dup(); Kinda weird. But works. Thank you! -Andy https://issues.dlang.org/show_bug.cgi?id=9279 toHexString doesn't return a string, it returns char[n], a fixed-size array of length n. It shouldn't implicitly convert to a string.
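The fixed function in one piece (a sketch; .idup copies the char[32] stack temporary into an immutable heap string, which is slightly more idiomatic than .dup when returning a string):

```d
import std.digest.md : LetterCase, MD5, toHexString;
import std.stdio : File;

string md5sum(string fname)
{
    MD5 hash;
    auto f = File(fname, "rb");
    foreach (ubyte[] buf; f.byChunk(4096))
        hash.put(buf);
    // toHexString returns char[32] on the stack; .idup copies it
    // before it goes out of scope
    return toHexString!(LetterCase.lower)(hash.finish()).idup;
}
```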
Re: IMAP library
Am Sun, 12 Apr 2015 17:27:31 + schrieb Jens Bauer doc...@who.no:
> On Saturday, 11 April 2015 at 22:45:39 UTC, Laeeth Isharc wrote:
>> Yes - nice to know it can do that also. For me I need to have a way
>> of managing large amounts of email (I have about 2mm messages),
>> including for natural language processing etc. Dovecot/sieve + pipe
>> facility is ok, but not perfect for everything. I guess it should
>> work fine for regular ARM etc - perhaps not an Arduino!
> I won't say it's impossible, but it would be cumbersome processing
> email on an AVR.

There are HTTP servers for AVR (8-bit) devices, so it should be
possible. Doesn't mean it's a good idea though ;-)
Re: Creating a microcontroller startup file
Am Tue, 07 Apr 2015 20:38:52 + schrieb Jens Bauer doc...@who.no:
> On Tuesday, 7 April 2015 at 20:33:26 UTC, Jens Bauer wrote:
>> Question number 1: How can a C subroutine be made optional, so it's
>> called only if it is linked?
> Question 1 might be answered by the following thread:
> http://forum.dlang.org/thread/mg1bad$30uk$1...@digitalmars.com
> -So no need to answer question 1. ;)

I actually saw these errors when I first tested your examples, but I
thought that was a mistake in the example code. I didn't even know
that extern weak symbols get default values in C ;-)
Re: Placing variable/array in a particular section
Am Sat, 04 Apr 2015 10:38:44 + schrieb Jens Bauer doc...@who.no:
> On Saturday, 4 April 2015 at 02:57:22 UTC, Rikki Cattermole wrote:
>> On 4/04/2015 3:08 a.m., Jens Bauer wrote:
>>>     src/start.d:7:10: error: module attribute is in file
>>>     'gcc/attribute.d' which cannot be read
>>>      import gcc.attribute;
>>>               ^
>>> Uhm, it seems that druntime is required for that; unfortunately
>>> it's not ported to the Cortex-M platform. I found the attribute.d
>>> file and tried a quick copy-and-paste, but no luck there.
>>> Correction: minlibd *is* a druntime port; just a minimal one.
>>> However, I'm not sure this feature is supported yet. Timo Sintonen
>>> did a lot of great work; perhaps getting this detail supported
>>> might not require much sweat. ;)
>> Yeah, contact the GDC guys, they should be able to help you. Either
>> the GDC newsgroup or as an issue on the compiler's project.
> A-ha! I just discovered this page...
> http://www.digitalmars.com/NewsGroup.html
>> At least at this point, keep an open mind. Think of what you are
>> doing as testing the current state :)
> I will, though my impression until now is that the compiler is quite
> mature. (I accept that there will always be some minor issues when
> moving to a new system, but that does not drag down the quality of
> the compiler.)

It's possible to use gcc.attribute with custom mini-runtimes. You need
the gcc/attribute.d file, but you can simply copy/paste it from
druntime[1]; there are no dependencies.

I'll push support for the section attribute in 1~2 hours. (waiting for
the testsuite ;-)

[1] https://github.com/D-Programming-GDC/GDC/blob/master/libphobos/libdruntime/gcc/attribute.d
Re: GDC fails to link with GSL and fortran code
Am Tue, 17 Mar 2015 12:13:44 + schrieb Andrew Brown aabrow...@hotmail.com:
> Thank you very much for your replies, I now have 2 solutions to my
> problem! Both compiling on a virtual machine running debian wheezy
> and using gcc to do the linking produced executables that would run
> on the cluster. Compiling with the verbose flags for linker and
> compiler produced the following output:
>
> failed gdc attempt: http://dpaste.com/0Z5V4PV
> successful dmd attempt: http://dpaste.com/0S5WKJ5
> successful use of gcc to link: http://dpaste.com/0YYR39V
>
> It seems a bit of a mess, with various libraries in various places.
> I'll see if I can get to the bottom of it, I think it'll be a
> learning experience. Thanks again for the swift and useful help and
> guidance. Andrew

GCC's verbose output can indeed be quite confusing, but if you know
what to look for it's possible to find some useful information :-)

In your case the linker messages hinted at a problem with libc, and as
there were only a few errors it's likely a version compatibility
problem. If you search for libc.so in these logs you'll find this:

Failed GDC:

    attempt to open /usr/lib/../lib64/libc.so succeeded
    opened script file /usr/lib/../lib64/libc.so
    opened script file /usr/lib/../lib64/libc.so
    attempt to open /lib64/libc.so.6 succeeded
    /lib64/libc.so.6

GCC:

    attempt to open /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so succeeded
    opened script file /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so
    opened script file /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so
    attempt to open /software/lib64/libc.so.6 succeeded
    /software/lib64/libc.so.6

The binary gdc searches for libraries in the 'usual' places, including
/usr/lib64. Your gcc doesn't search in /usr/lib64 but in /software.
You seem to have an incompatible libc in /usr/lib64 which gets picked
up by gdc.

This is one reason why binary compiler releases are difficult to
maintain, and why we usually recommend compiling gdc from source. DMD
avoids this mess by simply calling the local gcc instead of ld to
link. GCC unfortunately doesn't support this and forces us to always
call the linker directly.
Re: GDC fails to link with GSL and fortran code
Am Mon, 16 Mar 2015 16:44:45 + schrieb Andrew Brown aabrow...@hotmail.com:
> Hi, I'm trying to compile code which calls C and Fortran routines
> from D on the Linux cluster at work. I've managed to get it to work
> with all 3 compilers on my laptop, but LDC and GDC fail on the
> cluster (though DMD works perfectly). I'm using the precompiled
> compiler binaries on these systems; the cluster doesn't have the
> prerequisites for building them myself and I don't have admin
> rights. For GDC the commands I run are:
>
>     gcc -c C_code.c Fortran_code.f
>     gdc D_code.d C_code.o Fortran_code.f -lblas -lgsl -lgslcblas -lm -lgfortran -o out

You could try to do the linking with the local compiler:

    gdc -c D_code.d
    gcc D_code.o C_code.o Fortran_code.o -lgphobos2 -lpthread -lblas -lgsl -lgslcblas -lm -L path/to/x86_64-gdcproject-linux-gnu/lib/

> The error messages are:
>
>     /software/lib64/libgsl.so: undefined reference to `memcpy@GLIBC_2.14'
>     /software/lib64/libgfortran.so.3: undefined reference to `clock_gettime@GLIBC_2.17'
>     /software/lib64/libgfortran.so.3: undefined reference to `secure_getenv@GLIBC_2.17'
>     collect2: error: ld returned 1 exit status

Seems like the binary GDC toolchain somehow picks up a wrong libc. The
toolchains are built with GLIBC 2.14, but IIRC we don't ship the libc
in the binary packages (for native compilers) and it should pick up
the local libc. Please run gdc with the '-v' and '-Wl,--verbose'
options and post a link to the full output.

> I can remove the gsl messages by statically linking to libgsl.a, but
> this doesn't solve the gfortran issues. If anyone knows a way round
> these issues, I'd be very grateful. I'd also eventually like to find
> a way to easily share Linux binaries with people, so they can use
> this code without these kinds of headaches. If anyone has any advice
> for making this portable, that would also help me out a lot.

Usually the best option is to compile on old Linux systems: binaries
often run on newer systems, but not on older ones. You could set up
Debian wheezy or an older version in a VM, or you could use docker.io
;-) I personally think the docker approach is kind of overkill, but
avoiding compatibility issues is one of docker's main selling points.