Re: requests module generates " Can't parse uri '' "
On Saturday, 9 November 2019 at 00:53:13 UTC, cartland wrote: On Friday, 8 November 2019 at 17:01:07 UTC, ikod wrote: *snip* Even this does it. import requests; void main() { } https://github.com/ikod/dlang-requests/issues/109
Re: requests module generates " Can't parse uri '' "
On Friday, 8 November 2019 at 17:01:07 UTC, ikod wrote: On Friday, 8 November 2019 at 03:29:41 UTC, cartland wrote: First time D use :) After dub init, I used the example from https://code.dlang.org/packages/requests and ran "dub add requests" [...] If you're still stuck with this problem, you can post a new issue on the GitHub project page and I'll try to solve it. https://github.com/ikod/dlang-requests Same on MacOS, using dmd, ldc, or gdc. I'll raise an issue on https://github.com/ikod/dlang-requests. Even this does it. import requests; void main() { }
I need "duic.cpp" already compiled to an .exe to convert .ui files to .d for Dlang
Hi, I'm at the beginning of my Dlang studies. I want to create a graphical interface for my program. I use Qt Designer to draw the GUI and save it to a .ui file. Could someone send me the file "duic.exe", which converts .ui files into .d for Dlang? I need the "duic.cpp" already compiled to an .exe. I already use Qte5 for the binding. Thank you very much. Could someone who works with Qt5 and C++ compile it and send the .exe to me? Just upload it somewhere and send me a link. Thank you. duic.cpp link: https://bitbucket.org/qtd/repo/src/default/tools/duic/
Re: How decode encoded Base64 to string text?
On Friday, 8 November 2019 at 12:36:37 UTC, Aldo wrote: On Friday, 8 November 2019 at 11:46:44 UTC, Marcone wrote: I can encode "Helo World!" to Base64 and get "TWFyY29uZQ==", but when I try to decode "TWFyY29uZQ==" I cannot recover "Helo World!"; I get [77, 97, 114, 99, 111, 110, 101] instead. How can I recover "Helo World!" when decoding? Thank you. ... writeln(to!string(decoded)); ... Casting to string seems to work: writeln(cast(string) decoded); Thank you very much! It is working fine!
Re: Default initialization of static array faster than void initialization
On Friday, 8 November 2019 at 16:49:37 UTC, wolframw wrote: I compiled with dmd -O -inline -release -noboundscheck -mcpu=avx2 and ran the tests with the m array being default-initialized in one run and void-initialized in another run. The results: Default-initialized: 245 ms, 495 μs, and 2 hnsecs Void-initialized: 324 ms, 697 μs, and 2 hnsecs If you care about performance, you're much better off with LDC or GDC. DMD v2.089 takes 11.7 ms on my Win64 machine; LDC v1.18 (`ldc2 -O -run gist.d`) takes 0.27 ms - that's 43x faster.
Re: Default initialization of static array faster than void initialization
One correction: I think you mean "using" a default-initialized array is faster (not the initialization itself). Another observation: dmd -O makes both cases slower! Hm? Ali
Re: requests module generates " Can't parse uri '' "
On Friday, 8 November 2019 at 03:29:41 UTC, cartland wrote: First time D use :) After dub init, I used the example from https://code.dlang.org/packages/requests and ran "dub add requests" [...] If you're still stuck with this problem, you can post a new issue on the GitHub project page and I'll try to solve it. https://github.com/ikod/dlang-requests
Default initialization of static array faster than void initialization
Hi, Chapter 12.15.2 of the spec explains that void initialization of a static array can be faster than default initialization. This seems logical because the array entries don't need to be set to NaN. However, when I ran some tests for my matrix implementation, the default-initialized array seemed quite a bit faster. The code (and the disassembly) is at https://gist.github.com/wolframw/73f94f73a822c7593e0a7af411fa97ac I compiled with dmd -O -inline -release -noboundscheck -mcpu=avx2 and ran the tests with the m array being default-initialized in one run and void-initialized in another run. The results: Default-initialized: 245 ms, 495 μs, and 2 hnsecs Void-initialized: 324 ms, 697 μs, and 2 hnsecs What the heck? I've also inspected the disassembly and found an interesting difference in the benchmark loop (annotated with "start of loop" and "end of loop" in both disassemblies). It seems to me that the compiler partially unrolled the loop in both cases, but in the default-initialized case it discards every second result of the multiplication and stores only every other result in the sink matrix. In the void-initialized version, every result seems to be stored in the sink matrix. I don't see how such a difference could be caused by the different initialization strategies. Is there something I'm not considering? Also, if the compiler is smart enough to figure out that it can discard some of the results, why doesn't it do away with the entire loop and run the multiplication only once? Since both input matrices are immutable and opBinary is pure, the result is guaranteed to always be the same, isn't it?
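For readers new to the feature under discussion, here is a minimal sketch of the two initialization styles from the spec (the array type and size are illustrative assumptions, not taken from the poster's gist):

```d
import std.stdio : writeln;

void main()
{
    // Default initialization: every float entry is set to float.nan
    // before first use, which costs a fill of the whole array.
    float[16] a;
    assert(a[0] != a[0]); // NaN is the only value unequal to itself

    // Void initialization: the fill is skipped and the entries hold
    // garbage, so every slot must be written before it is read.
    float[16] b = void;
    foreach (i, ref e; b)
        e = i;

    writeln(b[15]); // prints: 15
}
```

The thread's question is why skipping that NaN fill measured slower rather than faster in the poster's benchmark.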
Re: Using an async buffer
On Friday, 8 November 2019 at 14:32:25 UTC, bioinfornatics wrote: On Friday, 8 November 2019 at 08:58:36 UTC, bioinfornatics wrote: [...] I had not yet found why the line counter was wrong. I can tell that when the last read does not fill the buffer, the result is wrong: if there is less left to read than the requested buffer size, the returned slice is shorter, as the documentation says: https://dlang.org/phobos/std_stdio.html#.File.rawRead [...] Updating the code with: (ref ubyte[] buffer){ buffer = file.rawRead(buffer); } fixes the problem.
Re: is the runtime implemented in betterC?
On Friday, 8 November 2019 at 15:25:40 UTC, dangbinghoo wrote: hmm, if the runtime is implemented in regular D, how can regular D code that depends on the regular D runtime be compiled when no D runtime exists yet? Think of a file like this: // test.d void foo() { } void bar() { foo(); } The code in that file depends on code in that file... but of course it works since it is all there. The D runtime is similar. All it really is is a bunch of functions that might be called. If you define those functions in the same build that uses them, it works as a unit. But like I said in my other message, the hard part is sometimes the CPU arch, but most of the coupling is to an existing operating system...
Re: is the runtime implemented in betterC?
On Friday, 8 November 2019 at 13:52:18 UTC, kinke wrote: On Friday, 8 November 2019 at 10:40:15 UTC, dangbinghoo wrote: hi, I'm not sure what you are trying to achieve; you can easily cross-compile druntime & Phobos with LDC, see https://wiki.dlang.org/Building_LDC_runtime_libraries. hmm, if the runtime is implemented in regular D, how can regular D code that depends on the regular D runtime be compiled when no D runtime exists yet? Thinking that we just have xtensa-llvm and are building ldc for an xtensa CPU, the runtime will simply not compile. Is it a chicken-and-egg problem? -- thanks!
Re: Using an async buffer
On Friday, 8 November 2019 at 08:58:36 UTC, bioinfornatics wrote: the error message was understandable to me, ... the error message was not understandable to me ... I have not yet found why the line counter is wrong. I can tell that when the last read does not fill the buffer, the result is wrong: if there is less left to read than the requested buffer size, the returned slice is shorter, as the documentation says: https://dlang.org/phobos/std_stdio.html#.File.rawRead As an example, I changed the code to shuffle the line content in order to locate the bug. Real last lines are: 1 190121746114132251381321230342516302196252336238211523943272873744285119323293314107 31631221132661353262123081115418570291330356278322215013742329426 71421331023593822146521912312869120169289362332157427352432313112226373403123825 681210152462691294263101232 90182312212511430133514352114282271133753782360462351124233948222161956731321 11822481725231121323330910521376234322119392811262411335432102108273 463633112212153255811679207 while the code gives: 1 190121746114132251381321230342516302196252336238211523943272873744285119323293314107 31631221132661353262123081115418570291330356278322215013742329426 71421331023593822146521912312869120169289362332157427352432313112226373403123825 681210152462691294263101232 90182312212511430133514352114282271133753782360462351124233948222161956731321 11822481725231121323330910521376234322119392811262411335432102108273 4636331122121532558116792071512 1553203330299234738282167126033321272154232912111312416461868182323242 111932223509162312223104310231321116573254736811479599513441112318312221230321154 11363210249282132717260536372386522748224512596323581311 121932131303243861212327470532121636029110222323531121763 2499315312114672213683218207122451351984311032612832096363463812 I think we are seeing the reused buffer content, so a big part of the last read comes from the previous chunk
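The overcount described above can be reproduced in a tiny standalone sketch (the file name, contents, and buffer size here are illustrative, not taken from the poster's program): counting newlines over the whole reused buffer double-counts the stale tail left over from the previous chunk on the final short read, while counting over the slice that rawRead returns is correct.

```d
import std.algorithm.searching : count;
import std.file : remove, write;
import std.stdio : File;

void main()
{
    // 5 lines, 10 bytes; with an 8-byte buffer the second read
    // fills only the first 2 bytes and leaves 6 stale bytes behind.
    write("demo.txt", "a\nb\nc\nd\ne\n");
    scope (exit) remove("demo.txt");

    auto file = File("demo.txt", "rb");
    auto buffer = new ubyte[8];
    size_t wrong, right;
    while (!file.eof)
    {
        auto slice = file.rawRead(buffer);
        if (slice.length == 0) break;
        wrong += count(buffer, '\n'); // bug: counts stale bytes too
        right += count(slice, '\n');  // correct: only bytes actually read
    }
    assert(right == 5);
    assert(wrong > right); // stale tail of the last chunk counted again
}
```

This is exactly why reassigning the buffer to the slice returned by rawRead, as in the fix above, makes the counter agree with wc -l.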
Re: is the runtime implemented in betterC?
On Friday, 8 November 2019 at 10:40:15 UTC, dangbinghoo wrote: hi, is the runtime D code implemented purely in betterC? I was thinking about what happens when we build an ARM dlang compiler: when the compiler is ready first, there's no ARM version of the runtime lib and Phobos, so it's likely we'd be using bare-metal D to build the runtime. druntime is not compiled as `-betterC`, because the entire point of `-betterC` is to prevent a dependency on druntime, at least at link time [so C forward declarations and some templates can be used by code compiled as `-betterC`]. I'm not sure what you are trying to achieve; you can easily cross-compile druntime & Phobos with LDC, see https://wiki.dlang.org/Building_LDC_runtime_libraries.
Re: is the runtime implemented in betterC?
On Friday, 8 November 2019 at 10:40:15 UTC, dangbinghoo wrote: is the runtime D code implemented purely in betterC? It is actually implemented in regular D! I was thinking about what happens when we build an ARM dlang compiler: when the compiler is ready first, there's no ARM version of the runtime lib and Phobos, so it's likely we'd be using bare-metal D to build the runtime. There are ARM/Linux druntime builds... but the real question is the operating system, because druntime heavily relies on an underlying OS.
Re: How decode encoded Base64 to string text?
On Friday, 8 November 2019 at 11:46:44 UTC, Marcone wrote: I can encode "Helo World!" to Base64 and get "TWFyY29uZQ==", but when I try to decode "TWFyY29uZQ==" I cannot recover "Helo World!"; I get [77, 97, 114, 99, 111, 110, 101] instead. How can I recover "Helo World!" when decoding? Thank you. import std; void main(){ string text = "Helo World!"; auto encoded = Base64.encode(text.representation); auto decoded = Base64URL.decode("TWFyY29uZQ=="); writeln(encoded); // prints: "TWFyY29uZQ==" writeln(to!string(decoded)); // prints: [77, 97, 114, 99, 111, 110, 101] but I want to print: "Helo World!" } What Aldo said - Base64 operates on ubyte[], which a string is not (it's immutable(char)[]). There's also assumeUTF (https://dlang.org/library/std/string/assume_utf.html), which may document your code a bit better than a simple cast, but it does the same thing inside. The reason Base64 operates on ubyte[] is to be able to encode arbitrary data, while the reason to!string doesn't convert your ubyte[] to a readable string is that not all ubyte[] are valid strings, and displaying arbitrary data as if it were a string is sure to cause problems. -- Simen
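Here is a minimal sketch of the full round trip using the assumeUTF approach mentioned above (note it decodes the slice that encode actually produced, not an unrelated literal, which was the other mix-up in the original question):

```d
import std.base64 : Base64;
import std.stdio : writeln;
import std.string : assumeUTF, representation;

void main()
{
    string text = "Helo World!";

    // Base64 works on raw bytes; representation reinterprets the
    // string as immutable(ubyte)[] without copying.
    auto encoded = Base64.encode(text.representation);

    // decode returns ubyte[]; assumeUTF reinterprets those bytes as
    // char[] (same data, friendlier type) - only safe for valid UTF-8.
    auto decoded = Base64.decode(encoded).assumeUTF;

    assert(decoded == "Helo World!");
    writeln(decoded); // prints: Helo World!
}
```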
Re: How decode encoded Base64 to string text?
On Friday, 8 November 2019 at 11:46:44 UTC, Marcone wrote: I can encode "Helo World!" to Base64 and get "TWFyY29uZQ==", but when I try to decode "TWFyY29uZQ==" I cannot recover "Helo World!"; I get [77, 97, 114, 99, 111, 110, 101] instead. How can I recover "Helo World!" when decoding? Thank you. ... writeln(to!string(decoded)); ... Casting to string seems to work: writeln(cast(string) decoded);
How decode encoded Base64 to string text?
I can encode "Helo World!" to Base64 and get "TWFyY29uZQ==", but when I try to decode "TWFyY29uZQ==" I cannot recover "Helo World!"; I get [77, 97, 114, 99, 111, 110, 101] instead. How can I recover "Helo World!" when decoding? Thank you. import std; void main(){ string text = "Helo World!"; auto encoded = Base64.encode(text.representation); auto decoded = Base64URL.decode("TWFyY29uZQ=="); writeln(encoded); // prints: "TWFyY29uZQ==" writeln(to!string(decoded)); // prints: [77, 97, 114, 99, 111, 110, 101] but I want to print: "Helo World!" }
Blog Post #86: Nodes-n-noodles, Part V - The Drawing Routines
Today's post doesn't just continue our look at Nodes-n-noodles (here's the link: https://gtkdcoding.com/2019/11/08/0086-nodes-v-node-drawing-routines.html); it's also a milestone for the GtkDcoding Blog... I've been accepted into the GitHub Sponsors program, which means readers can now show their appreciation by supporting my efforts. Just click on the heart icon at the bottom of any post.
is the runtime implemented in betterC?
hi, is the runtime D code implemented purely in betterC? I was thinking about what happens when we build an ARM dlang compiler: when the compiler is ready first, there's no ARM version of the runtime lib and Phobos, so it's likely we'd be using bare-metal D to build the runtime. Is this true? thanks! --- binghoo
Re: Using an async buffer
On Friday, 8 November 2019 at 01:12:37 UTC, Ali Çehreli wrote: On 11/07/2019 07:07 AM, bioinfornatics wrote: > Dear, > > I try to use the async buffer describe into std.parallelism > documentation but my test code core dump! I admit I don't fully understand the lifetime issues, but removing the "code smell" of the module-level File object solved the issue for me, which requires four changes: // (1) Comment out the module-level variable // File file; // ... @system void next(File file, ref ubyte[] buf) { // (2.a) Use the parameter 'file' // (2.b) Adjust the length of the buffer // (rawRead may read less than the requested size) buf = file.rawRead(buf); } // ... // (3) Define a local variable auto file = File(filePath, "rb"); // ... // (4) Use "callables" that use the local 'file': auto asyncReader = taskPool.asyncBuf((ref ubyte[] buf) => next(file, buf), () => file.eof, bufferSize); Ali Thanks a lot Ali, the error message was understandable to me, and the way you fixed it differs from the documentation. Does that mean there is a bug? Moreover, this snippet is used to evaluate how to parse efficiently versus the wc -l command. fixed snippet: https://paste.fedoraproject.org/paste/B~PjobIlVXIaJPfMjCcWSA And I have 2 problems: 1/ the executable is at least twice as slow as wc -l. I tried increasing a) the buffer size, using a multiple of the page size (parameter -n), and b) the number of threads (parameter -t). 2/ the result is not exactly the same as wc -l gives, which implies a bug while counting \n inside the buffer. $ time ./build/wc_File_by_chunks_async_buffer -n 4 -t 8 -i test_100M 1638656 test_100M real 0m0.106s user 0m0.101s sys 0m0.051s $ time wc -l test_100M 1638400 test_100M real 0m0.067s user 0m0.030s sys 0m0.037s Thanks again Ali; without you it would have been impossible for me
Re: Using an async buffer
the error message was understandable to me, ... the error message was not understandable to me ...