Re: Help playing sounds using arsd.simpleaudio
On Wednesday, 30 October 2019 at 19:11:00 UTC, Adam D. Ruppe wrote:
> On Saturday, 26 October 2019 at 19:48:33 UTC, Murilo wrote:
> > When I play a sound the program never ends; the terminal continues to run the program and I have to end it manually. Any ideas what could be causing this? I am using it just as you had instructed.
> That happens if you don't call .stop() and .join(), in order, at the right time. Did you use the scope(exit) like I said?

Thanks for the help. I did use the scope. Here is my code:

---
AudioPcmOutThread audio = new AudioPcmOutThread();
audio.start();
scope (exit)
{
    audio.stop();
    audio.join();
}
audio.playOgg("bell.ogg");
---

The problem persists.
Re: Accuracy of floating point calculations
On 2019-10-31 16:07:07 +0000, H. S. Teoh said:
> Maybe you might be interested in this: https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

Thanks, I already know the second paper mentioned there.

> Maybe switch to PPC? :-D

Well, our customers don't use PPC laptops ;-) otherwise that would be cool.

-- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: Accuracy of floating point calculations
On Thu, Oct 31, 2019 at 09:52:08AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2019-10-30 15:12:29 +0000, H. S. Teoh said: [...]
> > Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
> > doubles), or do you mean actual IEEE 128-bit reals?
>
> Simulated, because HW support is lacking on x86. And PPC is not that
> mainstream. I expect Apple to move to ARM, but I never heard about
> 128-bit support for ARM.

Maybe you might be interested in this: https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

It's mostly talking about simulating 64-bit floats where the hardware only supports 32-bit floats, but the same principles apply for simulating 128-bit floats with 64-bit hardware.

[...]
> > In the meantime, I've been looking into arbitrary-precision float
> > libraries like libgmp instead. It's software-simulated, and
> > therefore slower, but for certain applications where I want very
> > high precision, it's currently the only option.
>
> Yes, but it's way too slow for our product.

Fair point. In my case I'm mainly working with batch-oriented processing, so a slight slowdown is an acceptable tradeoff for higher accuracy.

> Maybe one day we need to deliver an FPGA-based co-processor PCI card
> that can run 128-bit calculations... but that will be a pretty
> hard way to go.
[...]

Maybe switch to PPC? :-D

T -- If you want to solve a problem, you need to address its root cause, not just its symptoms. Otherwise it's like treating cancer with Tylenol...
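[Editor's note: the technique behind the link above is usually called double-double (compensated) arithmetic. A minimal sketch of its error-free "two-sum" building block in D; this is my own illustration, not code from the thread or from any particular library:]

```d
import std.stdio : writefln;

/// Error-free addition (Knuth's TwoSum): `s` is the rounded sum and
/// `e` the exact rounding error, so that s + e == a + b exactly.
void twoSum(double a, double b, out double s, out double e)
{
    s = a + b;
    immutable bv = s - a;            // the part of b actually absorbed into s
    e = (a - (s - bv)) + (b - bv);   // what rounding threw away
}

void main()
{
    double s, e;
    twoSum(1.0, 1e-17, s, e);
    // s alone rounds the 1e-17 away; the pair (s, e) preserves it,
    // which is the basis for an effective ~107-bit mantissa.
    writefln("s = %.17g, e = %.17g", s, e);
}
```

Chaining such (s, e) pairs through every operation is what the linked answers build on; multiplication needs the Dekker/Veltkamp splitting counterpart.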
Re: Is there any writeln like functions without GC?
On Thursday, 31 October 2019 at 15:11:42 UTC, Ferhat Kurtulmuş wrote:
> It would be nice if one reimplemented Phobos' writeln, bypassing the GC and using a custom @nogc exception as described here*. Of course I can imagine that it would be a breaking change in the language and require a lot of work to be compatible with other std modules/language features.
> *: https://www.auburnsounds.com/blog/2016-11-10_Running-D-without-its-runtime.html

I can't imagine any possibility we'll ever see a breaking change to writeln. It would have to be an addition to Phobos.
Re: Is there any writeln like functions without GC?
On Thursday, 31 October 2019 at 13:46:07 UTC, Adam D. Ruppe wrote:
> On Thursday, 31 October 2019 at 03:56:56 UTC, lili wrote:
> > Hi: why does writeln need the GC?
> It almost never does, it just keeps the option open in case
> * it needs to throw an exception (like if stdout is closed)
> * you pass it a custom type with a toString that uses the GC
> @nogc is just super strict and doesn't even allow for rare cases.

It would be nice if one reimplemented Phobos' writeln, bypassing the GC and using a custom @nogc exception as described here*. Of course I can imagine that it would be a breaking change in the language and require a lot of work to be compatible with other std modules/language features.

*: https://www.auburnsounds.com/blog/2016-11-10_Running-D-without-its-runtime.html
Re: Gtkd and libgtksourceview
On Thursday, 31 October 2019 at 11:47:41 UTC, Mike Wey wrote:
> girtod can be found here: https://github.com/gtkd-developers/gir-to-d
>
> It's a tool that generates a D binding from GObject Introspection files. If you use the GTK installer from the gtkd website, or msys2, the needed introspection (.gir) files should be installed on your system. Running the command above in the root of the GtkD project will regenerate the source based on the gir files found on your system. The last two options passed to girtod are for backwards compatibility with older versions of GtkD, and aren't needed for new bindings.
>
> And technically you could also run: "girtod -i GSource-3.0.gir" and it will create an "out" directory with a GtkSourceview binding that should be fairly complete.

Okay, thanks, Mike. I'll try to set aside some time next week to dig into this.
Re: Is there any writeln like functions without GC?
On Thursday, 31 October 2019 at 03:56:56 UTC, lili wrote:
> Hi: why does writeln need the GC?

It almost never does, it just keeps the option open in case

* it needs to throw an exception (like if stdout is closed)
* you pass it a custom type with a toString that uses the GC

@nogc is just super strict and doesn't even allow for rare cases.
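[Editor's note: a small sketch of the two cases above, my own illustration rather than code from the thread:]

```d
import std.conv : to;
import std.stdio : writeln;

struct P
{
    int x;

    // String concatenation and to!string both allocate with the GC,
    // so writeln(P(1)) can trigger a collection via this toString.
    string toString() const { return "P(" ~ x.to!string ~ ")"; }
}

void main()
{
    writeln(P(1));

    // In a @nogc function, both of these are rejected at compile time:
    //   writeln("hello"); // may throw, and a thrown Exception is GC-allocated
    //   writeln(P(1));    // the toString above uses the GC
}
```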
Documentation: is it intentional that template constraints are displayed after the signature?
e.g. here: https://dlang.org/library/object/destroy.html

I was confused at first by the trailing

if (!is(T == struct) && !is(T == interface) && !is(T == class) && !__traits(isStaticArray, T));

after I somehow managed to read that whole page without noticing any of the other constraints.
Re: Is there any writeln like functions without GC?
On Thursday, 31 October 2019 at 03:56:56 UTC, lili wrote:
> Hi: why does writeln need the GC?

I cannot answer why it needs the GC, but something like this can be implemented:

---
import core.stdc.stdio;

struct Point
{
    int x;
    int y;
}

class Person
{
    string name;
    uint age;
}

template GenStructMemberPrint(string structInstanceName, string memberName)
{
    const char[] GenStructMemberPrint =
        "printf(\"%d\", " ~ structInstanceName ~ "." ~ memberName ~ ");";
    // static ifs could be used to pick the proper format string
    // depending on the type of the struct member
}

// the entire thing could be implemented with sprintf to obtain a nicely
// formatted line
void writeln2(A...)(A arguments) @nogc nothrow
{
    foreach (a; arguments)
    {
        static if (is(typeof(a) == class) || is(typeof(a) == interface))
        {
            printf("%s \n", typeof(a).stringof.ptr);
        }
        else static if (is(typeof(a) == string))
        {
            printf("%s \n", a.ptr);
        }
        else static if (is(typeof(a) == struct))
        {
            foreach (member; __traits(allMembers, typeof(a)))
            {
                mixin(GenStructMemberPrint!(a.stringof, member));
                // this needs some improvements to imitate writeln
            }
        }
        else
        {
            // the condition must be 0 here; a bare string is always
            // truthy, so the assert would otherwise never fire
            static assert(0, "non-supported type!");
        }
    }
}

void main()
{
    auto pt = Point(10, 20);
    auto per = new Person(); // a @nogc custom allocator could be used here
    writeln2("", per, pt);
}
---
Re: std.container.array: Error: unable to determine fields of Test because of forward references
On Thursday, 31 October 2019 at 12:37:55 UTC, user1234 wrote:
> struct S { S*[] children; }
> because otherwise when you declare the array the compiler has not finished the semantic analysis of S.

---
struct Test { Test[] t; }
---

works today. Putting pointers into the container (and thus having another indirection) is not an option, though.
Re: std.container.array: Error: unable to determine fields of Test because of forward references
On Thursday, 31 October 2019 at 12:37:55 UTC, user1234 wrote:
> On Thursday, 31 October 2019 at 12:29:28 UTC, Tobias Pankrath wrote: [...]
> Try
>
> struct S { S*[] children; }
>
> because otherwise when you declare the array the compiler has not finished the semantic analysis of S.

So the size of S is not yet known, while the size of S* is known, since it's a pointer.
Re: std.container.array: Error: unable to determine fields of Test because of forward references
On Thursday, 31 October 2019 at 12:29:28 UTC, Tobias Pankrath wrote:
> My Problem: (https://run.dlang.io/is/CfLscj)
>
> import std.container.array;
> import std.traits;
>
> struct Test { Test[] t; }
> struct Test2 { Array!Test2 t; }
>
> int main()
> {
>     return FieldTypeTuple!Test.length + FieldTypeTuple!Test2.length;
> }
>
> I've found https://issues.dlang.org/show_bug.cgi?id=19407 but I am not using separate compilation, just `dmd test.d`. I want to have a tree structure like
>
> struct S { S[] children; }
>
> but I do not want to rely on the GC and thus wanted to use a custom array type. What's the best way to do this?

Try

---
struct S { S*[] children; }
---

because otherwise, when you declare the array, the compiler has not finished the semantic analysis of S.
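[Editor's note: a concrete sketch of the suggested workaround, my own illustration. Note that `new S` here still allocates with the GC, which is exactly the indirection/GC tradeoff being debated in this thread; the point is only that `S*[]` compiles where a self-referencing custom container does not:]

```d
struct S
{
    int value;
    S*[] children;   // S* has a known size, so this passes semantic analysis
}

void main()
{
    auto root = new S(1);
    root.children ~= new S(2);
    root.children ~= new S(3);
    assert(root.children[1].value == 3);
}
```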
std.container.array: Error: unable to determine fields of Test because of forward references
My Problem: (https://run.dlang.io/is/CfLscj)

---
import std.container.array;
import std.traits;

struct Test
{
    Test[] t;
}

struct Test2
{
    Array!Test2 t;
}

int main()
{
    return FieldTypeTuple!Test.length + FieldTypeTuple!Test2.length;
}
---

I've found https://issues.dlang.org/show_bug.cgi?id=19407 but I am not using separate compilation, just `dmd test.d`. I want to have a tree structure like

---
struct S
{
    S[] children;
}
---

but I do not want to rely on the GC and thus wanted to use a custom array type. What's the best way to do this?
Re: Gtkd and libgtksourceview
On 31-10-2019 12:16, Ron Tarrant wrote:
> On Wednesday, 30 October 2019 at 22:26:41 UTC, Mike Wey wrote:
> > ---
> > girtod -i src --use-runtime-linker --use-bind-dir
> > ---
> Hmmm... I'll need more information, I'm afraid. I Googled, but I'm not finding any instructions for building these DLLs.

girtod can be found here: https://github.com/gtkd-developers/gir-to-d

It's a tool that generates a D binding from GObject Introspection files. If you use the GTK installer from the gtkd website, or msys2, the needed introspection (.gir) files should be installed on your system. Running the command above in the root of the GtkD project will regenerate the source based on the gir files found on your system. The last two options passed to girtod are for backwards compatibility with older versions of GtkD, and aren't needed for new bindings.

And technically you could also run: "girtod -i GSource-3.0.gir" and it will create an "out" directory with a GtkSourceview binding that should be fairly complete.

-- Mike Wey
Re: Gtkd and libgtksourceview
On Wednesday, 30 October 2019 at 22:26:41 UTC, Mike Wey wrote:
> ---
> girtod -i src --use-runtime-linker --use-bind-dir
> ---

Hmmm... I'll need more information, I'm afraid. I Googled, but I'm not finding any instructions for building these DLLs.
Re: Eliding of slice range checking
On Wednesday, 23 October 2019 at 11:20:59 UTC, Per Nordlöw wrote:
> Does DMD/LDC avoid range checking in slice expressions such as the one in my array overload of `startsWith`, defined as
>
> bool startsWith(T)(scope const(T)[] haystack, scope const(T)[] needle)
> {
>     if (haystack.length >= needle.length)
>     {
>         return haystack[0 .. needle.length] == needle; // is slice range checking avoided here?
>     }
>     return false;
> }

LDC is good at optimizing simple patterns; the only pitfall I know of is https://forum.dlang.org/post/eoftnwkannqmubhjo...@forum.dlang.org
Re: Accuracy of floating point calculations
On 2019-10-30 15:12:29 +0000, H. S. Teoh said:
> It wasn't a wrong *decision* per se, but a wrong *prediction* of where the industry would be headed.

Fair point...

> Walter was expecting that people would move towards higher precision, but what with SSE2 and other such trends, and the general neglect of x87 in hardware developments, it appears that people have been moving towards 64-bit doubles rather than 80-bit extended.

Yes, which puzzles me as well... but all the AI stuff seems to dominate the game, and following the hype is still a frequently used management strategy.

> Though TBH, my opinion is that it's not so much neglecting higher precision, but a general sentiment of the recent years towards standardization, i.e., to be IEEE-compliant (64-bit floating point) rather than work with a non-standard format (80-bit x87 reals).

I see it more as "let's sell what people want". The CPU vendors don't seem able to market higher precision. Better to implement a highly specific and ever-expanding instruction set...

> Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit doubles), or do you mean actual IEEE 128-bit reals?

Simulated, because HW support is lacking on x86. And PPC is not that mainstream. I expect Apple to move to ARM, but I never heard about 128-bit support for ARM.

> I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format) to show up in x86, but I'm not holding my breath.

Me too.

> In the meantime, I've been looking into arbitrary-precision float libraries like libgmp instead. It's software-simulated, and therefore slower, but for certain applications where I want very high precision, it's currently the only option.

Yes, but it's way too slow for our product. Maybe one day we need to deliver an FPGA-based co-processor PCI card that can run 128-bit calculations... but that will be a pretty hard way to go.

-- Robert M. Münch http://www.saphirion.com smarter | better | faster
Why does this D parser use a parsing expression grammar (PEG) and not BNF?
Hi: I want to implement Lua in D, and I found the PEG parser Pegged: https://github.com/PhilippeSigaud/Pegged Why does it not use a BNF parser? Is PEG better than BNF?
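[Editor's note: for context, a PEG's choice operator `/` is ordered and commits to the first alternative that matches, so a PEG is never ambiguous, whereas BNF's `|` is unordered and a BNF grammar may need disambiguation and a separate tokenizer. A rough sketch of a Pegged grammar, assuming the Pegged dub package; the grammar and rule names are illustrative, not from the original post:]

```d
import pegged.grammar;

// Pegged mixes the generated parser in at compile time.
// `/` is ordered choice, `<` is the space-consuming sequence arrow,
// `<~` fuses the matched characters into a single string.
mixin(grammar(`
Arith:
    Expr   < Term (('+' / '-') Term)*
    Term   < Factor (('*' / '/') Factor)*
    Factor < Number / '(' Expr ')'
    Number <~ [0-9]+
`));

void main()
{
    auto tree = Arith("1 + 2 * 3");
    assert(tree.successful);
}
```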