Re: Get variables with call stack
On Saturday, 22 September 2018 at 05:49:05 UTC, Vladimir Panteleev wrote: In short: there is no easy way, in the general sense. If you can find something that achieves what you need in C++, there's a good chance that it would work to some extent (or could be adapted with reasonable effort) for D, too. D debug information has much in common with C++'s; exception handling, however, varies from platform to platform.
Re: Get variables with call stack
On Saturday, 22 September 2018 at 05:43:53 UTC, Vladimir Panteleev wrote: The only way to do that would be using a debugger. The specifics of the solution would thus depend on the platform. On POSIX, it would probably mean getting gdb to print a detailed backtrace for your project. On Windows, you might be able to achieve this by spawning a thread which then uses dbgeng.dll to get a detailed stack trace. One thing to note: only variables in stack frames entered since the topmost exception block will be visible (so you'll also need to disable the D runtime's standard exception handler). The reason for this is that exceptions do not capture the entire stack; they only extract a stack trace during instantiation. To get the entire stack, you'd need to set a breakpoint on _d_throw or the like, but at that point you don't know whether you're within an exception frame ready to catch the thrown exception. In short: there is no easy way, in the general sense.
Re: Get variables with call stack
On Friday, 21 September 2018 at 19:08:36 UTC, ANtlord wrote: Hello! I need to make some sort of error reporting system for an application. I want to catch a base Exception class instance and report the call stack, and along with the call stack I want to report all variables with their values. There are a couple of services that make reports using the call stack and provide variables' values: Sentry.io, New Relic, etc. I see how to get the call stack; the book Adam Ruppe wrote helps me. How do I get all variables from every layer of the call stack? The only way to do that would be using a debugger. The specifics of the solution would thus depend on the platform. On POSIX, it would probably mean getting gdb to print a detailed backtrace for your project. On Windows, you might be able to achieve this by spawning a thread which then uses dbgeng.dll to get a detailed stack trace.
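The POSIX suggestion above can be sketched roughly as follows — a hedged example, not a tested recipe: it assumes gdb is on PATH, that the binary was built with debug info (-g), and that the spawned gdb is permitted to attach back to this process (on Linux, kernel.yama.ptrace_scope may need relaxing). The helper name `dumpStackWithLocals` is illustrative.

```d
// Sketch: spawn gdb and have it attach back to this process to print
// a full backtrace, including each frame's local variables.
import std.conv : to;
import std.process : execute, thisProcessID;
import std.stdio : writeln;

void dumpStackWithLocals()
{
    // "bt full" prints every frame together with its locals;
    // debug info (-g) is required for variable names and values.
    auto gdb = execute([
        "gdb", "--batch", "-p", thisProcessID.to!string,
        "-ex", "thread apply all bt full",
    ]);
    writeln(gdb.output);
}
```

Calling this from a catch block would only show the frames still on the stack at that point, which is exactly the limitation Vladimir describes.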
Re: Updating D beyond Unicode 2.0
On Friday, 21 September 2018 at 20:25:54 UTC, Walter Bright wrote: When I originally started with D, I thought non-ASCII identifiers with Unicode were a good idea. I've since slowly become less and less enthusiastic about it. First off, D source text simply must (and does) fully support Unicode in comments, characters, and string literals. That's not an issue. But identifiers? I have seen hardly any use of non-ASCII identifiers in C, C++, or D. In fact, I've seen zero use of it outside of test cases. I don't see much point in expanding the support of it. If people use such identifiers, the result would most likely be annoyance rather than illumination when people who don't know that language have to work on the code. Extending it further will also cause problems for all the tools that work with D object code, like debuggers, disassemblers, linkers, filesystems, etc. To wit, a Windows linker error with a Unicode symbol: https://github.com/ldc-developers/ldc/pull/2850#issuecomment-422968161 Absent a much more compelling rationale for it, I'd say no. I'm torn. I completely agree with Adam and others that people should be able to use any language they want. But the Unicode spec is such a tire fire that I'm leery of extending support for it. Someone linked this Swift chapter on Unicode handling in an earlier forum thread; read the section on emoji in particular: https://oleb.net/blog/2017/11/swift-4-strings/ I was laughing out loud when reading about composing "family" emojis with zero-width joiners. If you told me that was a tech parody, I'd have believed it. I believe Swift just punts its Unicode support to ICU, like most any other project these days. That's a horrible sign: you've created a spec so grotesquely complicated that most everybody relies on a single project to avoid dealing with it.
Re: Rather D1 then D2
On Friday, 21 September 2018 at 21:17:52 UTC, new wrote: Thank you for your answer. too bad - have to think about it. You might be interested in the Volt language, which follows in D1's footsteps: https://github.com/VoltLang/Volta I believe it was created by some D users with the same opinion on D1/D2. Syntax is also very much like D1.
Re: Tuple DIP
On Wednesday, 19 September 2018 at 21:48:40 UTC, Timon Gehr wrote: On 19.09.2018 23:14, 12345swordy wrote: On Tuesday, 3 July 2018 at 16:11:05 UTC, 12345swordy wrote: On Thursday, 28 June 2018 at 13:24:11 UTC, Timon Gehr wrote: [...] Is there any way we can help on this? *Bump* I want this. So do I, but I need to get a quiet weekend or so to finish this. I am very tempted to start my own DIP on this and finish it. Here's the current state of my implementation in DMD: https://github.com/dlang/dmd/compare/master...tgehr:tuple-syntax It has no tests yet, but basically, with those changes, you can write tuple literals `(1, 2.0, "3")`, you can unpack tuples using `auto (a, b) = t;` or `(int a, string b) = t;`, and tuples can be expanded using alias this on function calls, so you can now write things like `zip([1,2,3],[4,5,6]).map!((a,b)=>a+b)`. The implementation is still missing built-in syntax for tuple types, tuple assignments, and tuple unpacking within function argument lists and foreach loops. I was referring to the DIP. I am not familiar enough with the DMD compiler itself to create an implementation. Regardless, I think you should finish your DIP and submit it, as the process is going to take a very long time.
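For comparison, the closest today's D gets to the `zip(...).map!((a,b)=>a+b)` example without Timon's branch is library tuples with indexed field access — a sketch of current-D behavior, not the proposed syntax:

```d
import std.algorithm : map;
import std.range : zip;
import std.stdio : writeln;

void main()
{
    // zip yields Tuple!(int, int) elements; today the fields must be
    // accessed by index. The branch's alias-this expansion would allow
    // map!((a, b) => a + b) directly.
    auto sums = zip([1, 2, 3], [4, 5, 6]).map!(t => t[0] + t[1]);
    writeln(sums); // [5, 7, 9]
}
```

The DIP's unpacking syntax (`auto (a, b) = t;`) would remove exactly this kind of `t[0]`/`t[1]` boilerplate.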
[Issue 19258] Cannot @disable ~this()
https://issues.dlang.org/show_bug.cgi?id=19258 --- Comment #2 from Илья Ярошенко --- By the way, extern(C++) can be skipped; the issue is valid for extern(D) as well --
[Issue 19258] Cannot @disable ~this()
https://issues.dlang.org/show_bug.cgi?id=19258 Илья Ярошенко changed: Keywords: added C++, rejects-valid --- Comment #1 from Илья Ярошенко --- required for integration with C++ --
[Issue 19258] New: Cannot @disable ~this()
https://issues.dlang.org/show_bug.cgi?id=19258 Issue ID: 19258 Summary: Cannot @disable ~this() Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: normal Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: ilyayaroshe...@gmail.com

Fails:

struct S {
    this(int i) {}
    @disable ~this() {}
}

extern(C++) S foo() {
    return S(3);
}

It should compile because the destructor is not called in `foo`. --
Re: "Error: function expected before (), not module *module* of type void
On Saturday, 22 September 2018 at 01:58:57 UTC, Adam D. Ruppe wrote: You probably shouldn't name a module the same as a member anyway, and it should also have two names, like "module myproject.isprime;". But the fix here is to just use the full name:

import isPrime;
void main() {
    isPrime.isPrime(x); // module_name.member_name
}

or change the import:

import isPrime : isPrime; // specify you want the same-named member

When compiling, be sure to pass both modules to the compiler, or use dmd -i if on a new enough version:

dmd -i main.d
or
dmd main.d isPrime.d

The linker error (undefined reference to `_D7isPrime3isPFiZb') likely means you forgot to compile in the isPrime module, so use the above dmd lines.

Thanks for your help, Adam! Right after posting my question, I started reading this site: https://www.tutorialspoint.com/d_programming/d_programming_modules.htm Based on that and your recommendation, here is what I ended up doing: I changed the filename of isPrime.d to isprime.d and put it in the subdirectory func/:

func/isprime.d:
module func.isprime;
bool isPrime(int n) {
    // check to see if n is prime
}

I then changed main.d to:

import func.isprime;
void main() {
    isPrime(x);
}

Finally, per your suggestion, I compiled it using:

dmd -i main.d

Thanks again!
Re: Simple parallel foreach and summation/reduction
On Saturday, 22 September 2018 at 02:13:58 UTC, Chris Katko wrote: On Friday, 21 September 2018 at 12:15:59 UTC, Ali Çehreli wrote: On 09/21/2018 12:25 AM, Chris Katko wrote: [...] You can use a free-standing function as a workaround, which is included in the following chapter that explains most of std.parallelism: http://ddili.org/ders/d.en/parallelism.html That chapter is missing e.g. the newly-added fold(): https://dlang.org/phobos/std_parallelism.html#.TaskPool.fold Ali

Okay... so I've got it running. The problem is, it uses tons of RAM; in fact, proportional to the working set.

T test(T)(T x, T y) {
    return x + y;
}

double monte(T)(T x) {
    double v = uniform(-1F, 1F);
    double u = uniform(-1F, 1F);
    if (sqrt(v*v + u*u) < 1.0) {
        return 1;
    } else {
        return 0;
    }
}

auto taskpool = new TaskPool();
sum = taskpool.reduce!(test)(taskpool.amap!monte(iota(num)));
taskpool.finish(true);

100 becomes ~8MB, 1000 becomes 80MB, 1, I can't even run because it says "Exception: Memory Allocation failed". Also, when I don't call .finish(true) at the end, it just sits there forever (after running) like one of the threads won't terminate, requiring a Ctrl-C. But the docs and examples don't seem to indicate I should need that...
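The memory growth described above is consistent with `amap` being eager: it allocates an array holding every mapped result before `reduce` runs. A hedged sketch of a lazier alternative (same Monte Carlo idea; `monte` follows the post, the rest is illustrative) feeds `reduce` a lazy `std.algorithm.map` range instead, and uses a string lambda to avoid the "local lambda" template error quoted earlier in the thread:

```d
import std.algorithm : map;
import std.math : sqrt;
import std.parallelism : TaskPool;
import std.random : uniform;
import std.range : iota;
import std.stdio : writeln;

// 1 if the random point falls inside the unit circle, 0 otherwise.
double monte(long x)
{
    double v = uniform(-1F, 1F);
    double u = uniform(-1F, 1F);
    return sqrt(v * v + u * u) < 1.0 ? 1.0 : 0.0;
}

void main()
{
    enum num = 1_000_000;
    auto pool = new TaskPool();
    scope (exit) pool.finish(true); // let the worker threads terminate

    // map is lazy: no intermediate array of `num` doubles is built,
    // unlike pool.amap. reduce still splits the work across threads.
    auto sum = pool.reduce!"a + b"(iota(num).map!monte);
    writeln("pi is approximately ", 4.0 * sum / num);
}
```

On the hang without finish(true): a pool created with `new TaskPool()` has non-daemon worker threads, so the program cannot exit until you call finish/stop (or, if I recall the API correctly, set `pool.isDaemon = true`).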
Re: Simple parallel foreach and summation/reduction
On Friday, 21 September 2018 at 12:15:59 UTC, Ali Çehreli wrote: On 09/21/2018 12:25 AM, Chris Katko wrote: On Thursday, 20 September 2018 at 05:51:17 UTC, Neia Neutuladh wrote: On Thursday, 20 September 2018 at 05:34:42 UTC, Chris Katko wrote: All I want to do is loop from 0 to [constant] with a for or foreach, and have it split up across however many cores I have. You're looking at std.parallelism.TaskPool, especially the amap and reduce functions. Should do pretty much exactly what you're asking.

auto taskpool = new TaskPool();
taskpool.reduce!((a, b) => a + b)(iota(1_000_000_000_000L));

I get "Error: template instance `reduce!((a, b) => a + b)` cannot use local __lambda1 as parameter to non-global template reduce(functions...)" when trying to compile that using the online D editor with DMD and LDC. Any ideas? You can use a free-standing function as a workaround, which is included in the following chapter that explains most of std.parallelism: http://ddili.org/ders/d.en/parallelism.html That chapter is missing e.g. the newly-added fold(): https://dlang.org/phobos/std_parallelism.html#.TaskPool.fold Ali

Okay... so I've got it running. The problem is, it uses tons of RAM; in fact, proportional to the working set.

T test(T)(T x, T y) {
    return x + y;
}

double monte(T)(T x) {
    double v = uniform(-1F, 1F);
    double u = uniform(-1F, 1F);
    if (sqrt(v*v + u*u) < 1.0) {
        return 1;
    } else {
        return 0;
    }
}

auto taskpool = new TaskPool();
sum = taskpool.reduce!(test)(taskpool.amap!monte(iota(num)));
taskpool.finish(true);

100 becomes ~8MB, 1000 becomes 80MB, 1, I can't even run because it says "Exception: Memory Allocation failed"
Re: "Error: function expected before (), not module *module* of type void
On Saturday, 22 September 2018 at 01:51:33 UTC, Samir wrote:

main.d:
import isPrime;
void main() {
    isPrime(x);
}

Both files are in the same directory.

You probably shouldn't name a module the same as a member anyway, and it should also have two names, like "module myproject.isprime;". But the fix here is to just use the full name:

import isPrime;
void main() {
    isPrime.isPrime(x); // module_name.member_name
}

or change the import:

import isPrime : isPrime; // specify you want the same-named member

When compiling, be sure to pass both modules to the compiler, or use dmd -i if on a new enough version:

dmd -i main.d
or
dmd main.d isPrime.d

main.d:(.text._Dmain[_Dmain]+0x83): undefined reference to `_D7isPrime3isPFiZb'

This likely means you forgot to compile in the isPrime module, so use the above dmd lines.
Re: "Error: function expected before (), not module *module* of type void
On Monday, 24 March 2008 at 17:41:11 UTC, Steven Schveighoffer wrote: I know you fixed the problem, but just an FYI, the reason is because when you import rollDice, you bring both rollDice the module and rollDice the function into the global namespace (which confuses the compiler 'cause it doesn't know what symbol you want to use). This is normally avoided in libraries by having a package tree. So for example, if you created everything in the subdirectory foo, and had your modules be:

module foo.diceroller;
import foo.rollDice;

Then the import would import the module foo.rollDice, and the function rollDice, and the compiler would no longer be confused about what you are trying to call. IMO, this makes it difficult to write multi-file applications that live in one directory. It would be nice if this was changed... -Steve

I know this thread is quite old but I still seem to be getting a similar error and don't understand how to resolve it. I currently have a program isPrime.d that I would like to reuse in other programs:

isPrime.d:
bool isPrime(int n) {
    // logic to check if n is prime
}

main.d:
import isPrime;
void main() {
    isPrime(x);
}

Both files are in the same directory. When compiling main.d, I get:

Error: function expected before (), not module isPrime of type void

I've tried changing the name of the function isPrime in isPrime.d to something else (as well as changing the name in the main program) but then I get an error similar to:

In function `_Dmain':
main.d:(.text._Dmain[_Dmain]+0x83): undefined reference to `_D7isPrime3isPFiZb'
collect2: error: ld returned 1 exit status
Error: linker exited with status 1

Thanks in advance.
Re: Updating D beyond Unicode 2.0
On 22/09/2018 11:17 AM, Seb wrote: In all seriousness I hate it when someone thinks it's funny to use the lambda symbol as an identifier and I have to copy that symbol whenever I want to use it because there's no convenient way to type it. (This is already supported in D.) This can be strongly mitigated by using a compose key. But they are not terribly common, unfortunately.
Re: Updating D beyond Unicode 2.0
On Friday, 21 September 2018 at 20:25:54 UTC, Walter Bright wrote: But identifiers? I haven't seen hardly any use of non-ascii identifiers in C, C++, or D. In fact, I've seen zero use of it outside of test cases. I don't see much point in expanding the support of it. If people use such identifiers, the result would most likely be annoyance rather than illumination when people who don't know that language have to work on the code. ...you *do* know that not every codebase has people working on it who only know English, right? If I took a software development job in China, I'd need to learn Chinese. I'd expect the codebase to be in Chinese. Because a Chinese company generally operates in Chinese, and they're likely to have a lot of employees who only speak Chinese. And no, you can't just transcribe Chinese into ASCII. Same for Spanish, Norwegian, German, Polish, Russian -- heck, it's almost easier to list out the languages you *don't* need non-ASCII characters for. Anyway, here's some more D code using non-ASCII identifiers, in case you need examples: https://git.ikeran.org/dhasenan/muzikilo
Re: Updating D beyond Unicode 2.0
On Friday, 21 September 2018 at 23:17:42 UTC, Seb wrote: A: Wait. Using emojis as identifiers is not a good idea? B: Yes. A: But the cool kids are doing it: The C11 spec says that emoji should be allowed in identifiers (ISO publication N1570, page 504/522), so it's not just the cool kids. I'm not in favor of emoji in identifiers. In all seriousness I hate it when someone thinks it's funny to use the lambda symbol as an identifier and I have to copy that symbol whenever I want to use it because there's no convenient way to type it. It's supported because λ is a letter in a language spoken by thirteen million people. I mean, would you want to have to name a variable "lumиnosиty" because someone got annoyed at people using "i" as a variable name?
Re: Updating D beyond Unicode 2.0
On Friday, 21 September 2018 at 20:25:54 UTC, Walter Bright wrote: But identifiers? I haven't seen hardly any use of non-ascii identifiers in C, C++, or D. In fact, I've seen zero use of it outside of test cases. Do you look at Japanese D code much? Or Turkish? Or Chinese? I know there are decently sized D communities in those languages, and I am pretty sure I have seen identifiers in their languages before, but I can't find it right now. It's just that there's a pretty clear potential for observation bias here. Even our search engine queries are going to be biased toward English-language results, so there can be a whole D world practically invisible to you and me. We should reach out and get solid stats before making a final decision. most likely be annoyance rather than illumination when people who don't know that language have to work on the code. Well, for example, a Chinese company may very well find forced English identifiers to be an annoyance.
Re: Updating D beyond Unicode 2.0
On Friday, 21 September 2018 at 23:00:45 UTC, Erik van Velzen wrote: Agreed with Walter. I'm all on board with i18n but I see no need for non-ASCII identifiers. Even identifiers with a non-Latin origin are usually written in the Latin script. As for real-world usage, I've seen Cyrillic identifiers a few times in PHP. A: Wait. Using emojis as identifiers is not a good idea? B: Yes. A: But the cool kids are doing it: https://codepen.io/andresgalante/pen/jbGqXj In all seriousness I hate it when someone thinks it's funny to use the lambda symbol as an identifier and I have to copy that symbol whenever I want to use it because there's no convenient way to type it. (This is already supported in D.)
Re: Updating D beyond Unicode 2.0
Agreed with Walter. I'm all on board with i18n but I see no need for non-ASCII identifiers. Even identifiers with a non-Latin origin are usually written in the Latin script. As for real-world usage, I've seen Cyrillic identifiers a few times in PHP.
Re: This is why I don't use D.
On Friday, 21 September 2018 at 20:49:54 UTC, 0xEAB wrote: On Thursday, 20 September 2018 at 17:06:43 UTC, Neia Neutuladh wrote: The tester is now submodule-aware and I removed builds for packages with a `.gitmodules` file. I'm not sure whether this is actually a good idea. There are some projects that support both, DUB and submodules+makefile. Those would (unnecessarily) get excluded. I meant that I removed the past builds for things with git submodules so that they could be rebuilt. Stderr is captured now, but I believe I messed up the UI for it. C'est la vie.
Re: Rather D1 then D2
On Friday, 21 September 2018 at 21:07:57 UTC, Jonathan M Davis wrote: On Friday, September 21, 2018 2:33:01 PM MDT new via Digitalmars-d wrote: [...] Official support of D1 was dropped nearly 6 years ago: [...] Thank you for your answer. too bad - have to think about it.
Re: Rather D1 then D2
On Friday, September 21, 2018 2:33:01 PM MDT new via Digitalmars-d wrote: > hi, > is it possible to get a bug fixed x64 compiling D1? > I don't want to start some rant, but i don't like D2. D1 is > compact and not so overloaded with funny attributes. Official support of D1 was dropped nearly 6 years ago: https://forum.dlang.org/post/afjlgjcftngzannrh...@dfeed.kimsufi.thecybershadow.net So, there will be no more official fixes or releases for D1. If you can find someone willing to fix a D1 bug for you (or fix it yourself) in your own fork, then it can certainly be fixed, but that's pretty much the only way it's going to be fixed. The sad truth is that if you really do want to continue to use D1, you're going to have to maintain it yourself or find a group of people willing to do so; otherwise eventually, the language and its libraries are going to become unusable due to a lack of maintenance. So, I fully expect that at some point here, you're going to have to switch to a different language. Whether that's D2 is up to you, but D1 is not maintained and is not going to be. - Jonathan M Davis
Re: This is why I don't use D.
On Thursday, 20 September 2018 at 17:06:43 UTC, Neia Neutuladh wrote: The tester is now submodule-aware and I removed builds for packages with a `.gitmodules` file. I'm not sure whether this is actually a good idea. There are some projects that support both, DUB and submodules+makefile. Those would (unnecessarily) get excluded.
Re: Rather D1 then D2
On Friday, 21 September 2018 at 20:44:12 UTC, 0xEAB wrote: On Friday, 21 September 2018 at 20:33:01 UTC, new wrote: D1 is compact and not so overloaded with funny attributes. just don't use all those funky attributes and you're fine :) bs - be serious. I don't want to use D2, but D1.
Re: Rather D1 then D2
On Friday, 21 September 2018 at 20:33:01 UTC, new wrote: D1 is compact and not so overloaded with funny attributes. just don't use all those funky attributes and you're fine :)
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 18:20:21 UTC, Adam D. Ruppe wrote: You don't need an API call to do that. You just provide the icon in a resource to the linker or a separate resource thing. Some C++ environments do it via pragmas, or you can do it traditionally in a makefile/build command line pretty easily; no need to run fancy code. Jonathan created a built-in function to display an icon that works cross-platform, without the trouble of configuring each different platform or IDE. SG.
Rather D1 then D2
hi, is it possible to get a bug-fixed x64-compiling D1? I don't want to start some rant, but I don't like D2. D1 is compact and not so overloaded with funny attributes. thanks
Re: Jai compiles 80,000 lines of code in under a second
On 9/21/2018 7:46 AM, Steven Schveighoffer wrote: I can see the marketing now, "D finds infinite loops in compile-time code way faster than Jai!". We need you over in marketing!
Re: Updating D beyond Unicode 2.0
When I originally started with D, I thought non-ASCII identifiers with Unicode were a good idea. I've since slowly become less and less enthusiastic about it. First off, D source text simply must (and does) fully support Unicode in comments, characters, and string literals. That's not an issue. But identifiers? I have seen hardly any use of non-ASCII identifiers in C, C++, or D. In fact, I've seen zero use of it outside of test cases. I don't see much point in expanding the support of it. If people use such identifiers, the result would most likely be annoyance rather than illumination when people who don't know that language have to work on the code. Extending it further will also cause problems for all the tools that work with D object code, like debuggers, disassemblers, linkers, filesystems, etc. Absent a much more compelling rationale for it, I'd say no.
Re: Jai compiles 80,000 lines of code in under a second
On 9/21/2018 9:29 AM, welkam wrote: The Jai compiler performs parsing and lexing in a different thread, so it's kind of multithreaded. It's possible to do the same with the D front end. We can start there, but there are plenty of low-hanging fruit in the compiler; you just need to run a profiler to find them. D was designed to support multithreaded compilation, but that was never implemented. An earlier DMD would do file I/O and compiling in separate threads. It was sadly removed.
Get variables with call stack
Hello! I need to make some sort of error reporting system for an application. I want to catch a base Exception class instance and report the call stack, and along with the call stack I want to report all variables with their values. There are a couple of services that make reports using the call stack and provide variables' values: Sentry.io, New Relic, etc. I see how to get the call stack; the book Adam Ruppe wrote helps me. How do I get all variables from every layer of the call stack?
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 13:37:58 UTC, aliak wrote: Si si, but I believe loadExecutableIcon actually calls Windows APIs to set an icon on an executable, and they'd probably be @system, which means I don't think that could be done in D. You don't need an API call to do that. You just provide the icon in a resource to the linker or a separate resource thing. Some C++ environments do it via pragmas, or you can do it traditionally in a makefile/build command line pretty easily; no need to run fancy code.
[Issue 18851] std.net.curl.post cannot be used with !ubyte
https://issues.dlang.org/show_bug.cgi?id=18851 wolframw changed: Status: NEW → RESOLVED; Resolution: --- → FIXED --- Comment #1 from wolframw --- fixed: https://github.com/dlang/phobos/pull/6710 --
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, the first one, to upper case in a Unicode-correct manner? At which level should I be working? Grapheme? Or maybe code point is sufficient? There are a few Phobos functions like asCapitalized(), none of which do what I want. Use `asCapitalized` to capitalize the first letter, or use something like this:

import std.conv;
import std.range;
import std.stdio;
import std.uni;

void main(string[] args) {
    string input = "noe\u0308l";
    int index = 2;
    auto graphemes = input.byGrapheme.array;
    string upperCased = [graphemes[index]].byCodePoint.text.toUpper;
    graphemes[index] = upperCased.decodeGrapheme;
    string output = graphemes.byCodePoint.text;
    writeln(output);
}
Updating D beyond Unicode 2.0
D's currently accepted identifier characters are based on Unicode 2.0:

* ASCII range values are handled specially.
* Letters and combining marks from Unicode 2.0 are accepted.
* Numbers outside the ASCII range are accepted.
* Eight random punctuation marks are accepted.

This follows the C99 standard. Many languages use the Unicode standard explicitly: C#, Go, Java, Python, ECMAScript, just to name a few. A small number of languages reject non-ASCII characters: Dart, Perl. Some languages are weirdly generous: Swift and C11 allow everything outside the Basic Multilingual Plane.

I'd like to update that so that D accepts something as a valid identifier character if it's a letter, combining mark, or modifier symbol that's present in Unicode 11, or a non-ASCII number. This allows the 146 most popular writing systems and a lot more characters from those writing systems. This *would* reject those eight random punctuation marks, so I'll keep them in as legacy characters. It would mean we don't have to reference the C99 standard when enumerating the allowed characters; we just have to refer to the Unicode standard, which we already need to talk about in the lexical part of the spec.

It might also make the lexer a tiny bit faster; it reduces the number of valid-ident-char segments to search from 245 to 134. On the other hand, it will change the ident char ranges from wchar to dchar, which means the table takes up marginally more memory. And, of course, it lets you write programs entirely in Linear B, and that's a marketing ploy not to be missed.

I've got this coded up and can submit a PR, but I thought I'd get feedback here first. Does anyone see any horrible potential problems here? Or is there an interestingly better option? Does this need a DIP?
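For illustration, the proposed character rule can be approximated with std.uni's category sets — a sketch only: `isProposedIdentChar` is a hypothetical name, and a real lexer would bake the ranges into static tables rather than building runtime sets.

```d
import std.uni : unicode;

// Hypothetical predicate mirroring the proposal: ASCII handled as today,
// otherwise letters (L), combining marks (M), modifier symbols (Sk),
// or numbers (N) from the current Unicode tables.
bool isProposedIdentChar(dchar c)
{
    if (c < 0x80)
        return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
            || (c >= '0' && c <= '9') || c == '_';
    // Building the set union on every call is slow; fine for a sketch.
    auto allowed = unicode.L | unicode.M | unicode.Sk | unicode.N;
    return allowed[c];
}
```

Note this reflects the rule as stated in the post, not the actual segment tables in the patch; the eight legacy punctuation marks would need to be special-cased on top of it.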
Re: Jai compiles 80,000 lines of code in under a second
On Thursday, 20 September 2018 at 23:13:38 UTC, aliak wrote: And is there any way to get even near the performance of Jai when it comes to compilation? I watched the same video today. What a coincidence. In the Jai example, the 80,000 lines of "code" include comments and empty lines. Since we know that the Jai example was written in parallel with the language, we can safely assume that most of that code is simple, so it's not surprising that Jai compiled it that fast. Write C-style code and DMD will perform similarly. Can we improve D compiler speed? Of course, but the core developers are more focused on stability and much-needed functionality than on speed. That's good, because I'd rather have C++ interop than 10% faster compilation. The Jai compiler performs parsing and lexing in a different thread, so it's kind of multithreaded. It's possible to do the same with the D front end. We can start there, but there are plenty of low-hanging fruit in the compiler; you just need to run a profiler to find them
Re: Jai compiles 80,000 lines of code in under a second
On Fri, Sep 21, 2018 at 07:58:56AM +, mate via Digitalmars-d wrote: [...] > I realize that with build instructions written in unrestricted > language it is easier to create a dependency on something else than > the compiler, such as the OS. Maybe they plan to solve this problem > with appropriate facilities and discipline. [...] Relying on discipline, or rather, assuming discipline on the part of your coworker, never works, as shown by the past 20 years of failures in software. All it takes is for *one* person in a team of arbitrary size to do something stupid, and the entire tower of cards comes crashing down. You need actual, hard restrictions guaranteed by the compiler, not mere "programming by convention". T -- It's bad luck to be superstitious. -- YHL
Re: Jai compiles 80,000 lines of code in under a second
On Fri, Sep 21, 2018 at 10:53:39AM +, Vladimir Panteleev via Digitalmars-d wrote: > On Friday, 21 September 2018 at 07:58:16 UTC, mate wrote: > > Different sensibilities on where to put restrictions clearly lead to > > different designs. I am not sure myself what is best. > > The more people you have on your team, the more you appreciate the > restrictions. If you are working on a personal project alone, you are > in control and have full knowledge of the entire codebase, so > restrictions are a hindrance. When you are collaborating with someone > you know only by name from across the globe, being able to reason what > their code might or may not do is considerably helpful. +100. Many things I could get away with in my own personal projects, I wouldn't do in a team project (which is basically *any* non-trivial project these days). Unrestricted freedom to do whatever you want greatly reduces the ability to reason about the code, which is why these days structured programming constructs like if/else, while-loops, functions, etc., are preferred over unrestricted goto's, even though they are technically "more restrictive". The challenge is in finding the balance between restriction and not hampering the programmer's ability to express what he wants without jumping through hoops (Java's verbosity comes to mind... although, to be fair, given your typical "enterprise" development environment, this is not necessarily a bad thing, since it forces even bad code to conform to a certain predictable structure, which makes it easier to rewrite said bad code :-P when one of your coworkers turns out to be a cowboy programmer). Not an easy balance to strike, which is why designing a successful programming language is so hard. T -- Unix is my IDE. -- Justin Whear
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 13:28:47 UTC, aliak wrote: Sure, all true, but from what I've seen of Jai, it's not a simple language, and it does a decent amount of compile time stuff, but who knows, maybe the code is simple indeed. I remember a demo where he ran a game at compile time and was also fast AFAIR. I think that his goal is to keep it fast regardless of which features are used though. I hope. We don't have access to the source code being tested. We don't have access to the compiler. Until the language is actually made public, we can't make any substantive conclusions about its speed.
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 13:32:54 UTC, NX wrote: On Friday, 21 September 2018 at 12:34:12 UTC, Laurent Tréguier wrote: I would probably go for std.utf.decode [1] to get the character and its length in code units, capitalize it, and concatenate the result with the rest of the string. [1] https://dlang.org/phobos/std_utf.html#.decode So by this I assume it is sufficient to work with dchars rather than graphemes?

--
import std.stdio;
import std.conv;
import std.string;
import std.uni;

size_t index = 1;
auto theString = "he\u0308llo, world";
auto theStringPart = theString[index .. $];
auto firstLetter = theStringPart.decodeGrapheme;
auto result = theString[0 .. index]
    ~ capitalize(firstLetter[].text)
    ~ theString[index + graphemeStride(theString, index) .. $];
writeln(result);
--

This will capitalize graphemes as a whole, and might be better than what I previously wrote.
Re: Jai compiles 80,000 lines of code in under a second
On 9/21/18 10:19 AM, Nicholas Wilson wrote: On Friday, 21 September 2018 at 09:21:34 UTC, Petar Kirov [ZombineDev] wrote: I have been watching Jonathan Blow's Jai for a while myself. There are many interesting ideas there, and many of them are what made me like D so much in the first place. It's very important to note that the speed claims he has been making are all a matter of developer discipline. You can have an infinite loop executed at compile-time in both D and Jai. You're going to OOM pretty fast in D if you try :) I can see the marketing now, "D finds infinite loops in compile-time code way faster than Jai!". -Steve
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 09:21:34 UTC, Petar Kirov [ZombineDev] wrote: I have been watching Jonathan Blow's Jai for a while myself. There are many interesting ideas there, and many of them are what made me like D so much in the first place. It's very important to note that the speed claims he has been making are all a matter of developer discipline. You can have an infinite loop executed at compile-time in both D and Jai. You're going to OOM pretty fast in D if you try :)
Re: std.process.execute without capturing stderr?
On Friday, 21 September 2018 at 06:08:39 UTC, berni wrote: Sorry, I made a mistake while testing, and after I found out that it was not available in the documentation at dpldocs.info, I concluded that it must be a really new feature. But now it seems to me that dpldocs is a little bit outdated, isn't it? Oh yeah, I haven't updated Phobos on it for a while; most of my attention this year has been on the dub thingy. Just did though.
Webassembly TodoMVC
Hey guys, Following the D->emscripten->wasm toolchain from CyberShadow and Ace17, I created a proof-of-concept framework for creating single-page webassembly applications using D's compile time features. This is a proof of concept to find out what is possible. At https://skoppe.github.io/d-wasm-todomvc-poc/ you can find a working demo, and the repo can be found at https://github.com/skoppe/d-wasm-todomvc-poc Here is an example from the readme showing how to use it.

---
struct Button {
    mixin Node!"button";
    @prop innerText = "Click me!";
}

struct App {
    mixin Node!"div";
    @child Button button;
}

mixin Spa!App;
---
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 13:32:54 UTC, NX wrote: On Friday, 21 September 2018 at 12:34:12 UTC, Laurent Tréguier wrote: I would probably go for std.utf.decode [1] to get the character and its length in code units, capitalize it, and concatenate the result with the rest of the string. [1] https://dlang.org/phobos/std_utf.html#.decode So by this I assume it is sufficient to work with dchars rather than graphemes? From what I've tested, it seems sufficient. I might be wrong though; I'm no unicode expert. It might still be a good idea to have a look at grapheme related functions.
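The dchar-vs-grapheme distinction can be sketched with Phobos directly. A minimal illustration (my own sketch, assuming a string whose second letter is 'e' followed by U+0308, i.e. two code points forming one user-perceived character):

```d
import std.range : walkLength;
import std.uni : byGrapheme;
import std.utf : byDchar;

void main()
{
    auto s = "he\u0308llo";
    // As code points, 'e' and the combining diaeresis count separately.
    assert(s.byDchar.walkLength == 6);
    // As graphemes, they form a single element.
    assert(s.byGrapheme.walkLength == 5);
}
```

So working with dchars is fine for precomposed characters like 'ë' (a single code point), but splits combining sequences apart.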
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 09:21:34 UTC, Petar Kirov [ZombineDev] wrote: On Thursday, 20 September 2018 at 23:13:38 UTC, aliak wrote: Alo! I have been watching Jonathan Blow's Jai for a while myself. There are many interesting ideas there, and many of them are what made me like D so much in the first place. It's very important to note that the speed claims he has been making are all a matter of developer discipline. You can have an infinite loop executed at compile-time in both D and Jai. There's nothing magical Jai can do about that - the infinite loop is not going to finish faster ;) You can optimize the speed of compile-time computation just like you can optimize for run-time speed. Haha well, yes of course, can't argue with that :p I guess it makes more sense to compare the "intuitive" coding path of a given language. E.g. if I iterate a million objects in a for loop, because I want to process them, there is no other non-compile-time way to do that. If language X takes an hour and language Y takes a millisecond, I'm pretty sure language X can't say it compiles fast, as that just seems like a pretty common scenario and is not using the language in any way it was not meant to be used. What you're observing with D is that right now many libraries including Phobos have tried to see how much they can push the language (to make for more expressive code or faster run-time) and not as much time has been spent on optimizing compile-time. If you take a code-base written in a Java-like subset of the language, I can guarantee you that DMD is going to be very competitive with other languages like C++, Go, Java or C#. And that's considering that there are many places that could be optimized internally in DMD. But overall most of the time spent compiling D programs is: a) crazy template / CTFE meta-programming and b) inefficient build process (no parallel compilation for non-separate compilation, no wide-spread use of incremental compilation, etc.). 
AFAIR, there were several projects for a caching D compiler and that can go a long way to improve things. Ah I see. OK, so there are quite a few big wins, it seems (parallelization, e.g.). On the other hand, there are things that are much better done at compile-time rather than run-time, like traditional meta-programming. My biggest gripe with D is that currently you only have tools for declaration-level meta-programming (version, static if, static foreach, mixin template), but nothing else than plain strings for statement-level meta-programming. CTFE is great, but why re-implement the compiler in CTFE code, while the actual compiler is sitting right there compiling your whole program ;) Yeah I've always wondered this. But I just boiled it down to me not understanding how compilers work :)

P.S.
Jai: loadExecutableIcon(myIcon, exeLocation)
D: static immutable ubyte[] icon = import("image.png").decodePng;

Si si, but I believe the loadExecutableIcon actually calls Windows APIs to set an icon on an executable, and they'd probably be @system, which means I don't think that could be done in D. (In D you have read-only access to the file-system at compile-time using the -J flag.) [0]: https://github.com/atilaneves/reggae
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 12:34:12 UTC, Laurent Tréguier wrote: I would probably go for std.utf.decode [1] to get the character and its length in code units, capitalize it, and concatenate the result with the rest of the string. [1] https://dlang.org/phobos/std_utf.html#.decode So by this I assume it is sufficient to work with dchars rather than graphemes?
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 05:39:35 UTC, Vladimir Panteleev wrote: On Friday, 21 September 2018 at 05:11:32 UTC, mate wrote: Note that the build can be done at compile time because the metaprogramming capabilities of the language are not limited in terms of system calls. Good luck bisecting that code base when any version of it did anything even mildly specific to the author's PC. Where your build system lives makes zero difference to bisecting. You can have author-PC specific behavior in the build recipe whether that's in a source file or a "build script". I guess it would be more compartmentalized though. But being able to say "the code here needs this feature" (which is not something you can do when the code doesn't know how to compile itself) seems pretty useful.
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 01:04:51 UTC, Joakim wrote: On Friday, 21 September 2018 at 00:47:27 UTC, Adam D. Ruppe wrote: Of course, D can also take ages to compile one line of code. It all depends on what that line is doing... CTFE and templates are slow. C or Java style code compiling in D is very fast. I was going to say this too, i.e. how much of that Jai code is run at compile-time, how much is uninstantiated templates that are just skipped over like D does, and how much is templates instantiated many times? Lines of code is not a good enough measure with those programming constructs. I was just building the stdlib tests with LDC yesterday and they took so much memory on a new Linux/x64 VPS with 2GB of RAM that I had spun up that I couldn't even ssh in anymore. I eventually had to restart the VPS and add a swapfile, which I usually have but simply hadn't bothered with yet for this new Ubuntu 18.04 VPS. The stdlib tests instantiate a ton of templates. Sure, all true, but from what I've seen of Jai, it's not a simple language, and it does a decent amount of compile time stuff, but who knows, maybe the code is simple indeed. I remember a demo where he ran a game at compile time and it was also fast, AFAIR. I think that his goal is to keep it fast regardless of which features are used though. I hope. Regardless, you can't really claim X compiles fast if that's only true on a subset of the language features. Cause otherwise the statement "X compiles fast" is, well, just not true ;)
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 07:37:14 UTC, Walter Bright wrote: On 9/21/2018 12:19 AM, mate wrote: It depends on the developer not doing anything stupid Aye, there's the rub! The evolution of programming language discussions from "sufficiently smart compiler" to "sufficiently smart programmer using a sufficiently smart compiler".
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, first one to upper case in a unicode correct manner? In which code level I should be working on? Grapheme? Or maybe code point is sufficient? There are few phobos functions like asCapitalized() none of which are what I want.

--
import std.conv : to;
import std.stdio : writeln;
import std.string : capitalize;
import std.utf : decode;

size_t index = 1;
size_t oldIndex = index;
auto theString = "hëllo, world";
auto firstLetter = theString.decode(index);
auto result = theString[0 .. oldIndex]
    ~ capitalize(firstLetter.to!string)
    ~ theString[index .. $];
writeln(result);
--

(This could be a lot prettier, but this seems to basically work)
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, first one to upper case in a unicode correct manner? In which code level I should be working on? Grapheme? Or maybe code point is sufficient? There are few phobos functions like asCapitalized() none of which are what I want. I would probably go for std.utf.decode [1] to get the character and its length in code units, capitalize it, and concatenate the result with the rest of the string. [1] https://dlang.org/phobos/std_utf.html#.decode
Converting a character to upper case in string
How can I properly convert a character, say, first one to upper case in a unicode correct manner? In which code level I should be working on? Grapheme? Or maybe code point is sufficient? There are few phobos functions like asCapitalized() none of which are what I want.
Re: Simple parallel foreach and summation/reduction
On 09/21/2018 12:25 AM, Chris Katko wrote: On Thursday, 20 September 2018 at 05:51:17 UTC, Neia Neutuladh wrote: On Thursday, 20 September 2018 at 05:34:42 UTC, Chris Katko wrote: All I want to do is loop from 0 to [constant] with a for or foreach, and have it split up across however many cores I have. You're looking at std.parallelism.TaskPool, especially the amap and reduce functions. Should do pretty much exactly what you're asking. auto taskpool = new TaskPool(); taskpool.reduce!((a, b) => a + b)(iota(1_000_000_000_000L)); I get "Error: template instance `reduce!((a, b) => a + b)` cannot use local __lambda1 as parameter to non-global template reduce(functions...)" when trying to compile that using the online D editor with DMD and LDC. Any ideas? You can use a free-standing function as a workaround, which is included in the following chapter that explains most of std.parallelism: http://ddili.org/ders/d.en/parallelism.html That chapter is missing e.g. the newly-added fold(): https://dlang.org/phobos/std_parallelism.html#.TaskPool.fold Ali
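The free-standing-function workaround mentioned above can be sketched like this (my own minimal sketch; the only point is that the reduction function lives at module scope instead of being a local lambda, which sidesteps the "cannot use local __lambda1" error):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

// Module-level function: usable by TaskPool.reduce, unlike a local lambda.
long add(long a, long b) { return a + b; }

void main()
{
    // Sums 0 .. 999_999 across the default task pool's worker threads.
    taskPool.reduce!add(iota(1_000_000L)).writeln;
}
```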
Re: Truly @nogc Exceptions?
On Friday, 21 September 2018 at 11:48:50 UTC, Nemanja Boric wrote: On Friday, 21 September 2018 at 10:06:06 UTC, Nemanja Boric wrote: On Friday, 21 September 2018 at 09:10:06 UTC, Jonathan M Davis wrote: [...] The @__future is fully (to a reasonable degree) implemented - and the `Throwable.message` was marked with this attribute to prevent breaking the user code when introducing this field, and it probably can just be removed from there at this point (since many releases have been released). [...] Interesting quote from Martin on that PR: With regards to Throwable.message we agreed on making it one of the first users of an upcoming reference counted string. Details on the design of the reference counted string first to be discussed on dlang-study. If in theory D had all the memory-safety features to make this work (think what languages like Rust have with linear/affine types [0] [1]) along with the necessary library implementations of smart pointers, RC-aware slicing, etc..., I would still prefer the sink-based approach in this situation, as it's much more elegant in my opinion. [0]: http://wiki.c2.com/?LinearTypes [1]: https://www.tweag.io/posts/2017-03-13-linear-types.html
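For reference, the sink-based approach being argued for could look roughly like this. This is a purely hypothetical sketch, not the actual druntime API; the class, method name, and signature are all illustrative:

```d
// Hypothetical: the exception writes its message into a caller-supplied
// sink instead of returning a GC-allocated string, so the throw path
// needs no allocation.
class ParseException : Exception
{
    private int line;

    @nogc @safe pure nothrow this(string msg, int line)
    {
        super(msg);
        this.line = line;
    }

    // The caller owns the output; we just emit fragments into it.
    void message(scope void delegate(in char[]) @nogc sink) const
    {
        sink(msg);
        sink(" (line number would be formatted into a stack buffer here)");
    }
}
```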
Re: Truly @nogc Exceptions?
On Friday, 21 September 2018 at 10:06:06 UTC, Nemanja Boric wrote: On Friday, 21 September 2018 at 09:10:06 UTC, Jonathan M Davis wrote: [...] The @__future is fully (to a reasonable degree) implemented - and the `Throwable.message` was marked with this attribute to prevent breaking the user code when introducing this field, and it probably can just be removed from there at this point (since many releases have been released). [...] Interesting quote from Martin on that PR: With regards to Throwable.message we agreed on making it one of the first users of an upcoming reference counted string. Details on the design of the reference counted string first to be discussed on dlang-study.
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 07:58:16 UTC, mate wrote: Different sensibilities on where to put restrictions clearly lead to different designs. I am not sure myself what is best. The more people you have on your team, the more you appreciate the restrictions. If you are working on a personal project alone, you are in control and have full knowledge of the entire codebase, so restrictions are a hindrance. When you are collaborating with someone you know only by name from across the globe, being able to reason what their code might or may not do is considerably helpful.
Re: Walter's Guide to Translating Code From One Language to Another
On Thu, 20 Sep 2018 23:00:33 -0700, Walter Bright wrote: > The procedure is: > ... > 4. translate the code with as few edits as practical. Do not > reformat the code. Do not refactor it. Do not fix anything, no matter > how tempting. Reproduce the behavior of the original as much as > possible. I'd add 4b: file an issue for anything that needs to be fixed; otherwise it's quickly forgotten.
Re: Truly @nogc Exceptions?
On Friday, 21 September 2018 at 09:10:06 UTC, Jonathan M Davis wrote: [...] I think that the message member was added by Dicebot as an attempt to fix this issue, because Sociomantic needed it, but I don't know exactly what's going on with that or @__future. - Jonathan M Davis The @__future is fully (to a reasonable degree) implemented - and the `Throwable.message` was marked with this attribute to prevent breaking the user code when introducing this field, and it probably can just be removed from there at this point (since many releases have been released). This is the PR where Throwable.message was first introduced: https://github.com/dlang/druntime/pull/1445 We had the same discussion there, and you can see Sociomantic's use case in Mihail's comments there. The entire PR has a very good discussion, and we (not Sociomantic) still need the sink-based method, IIUC.
[Issue 19257] std.array.join does not handle const fields that cannot be converted to mutable
https://issues.dlang.org/show_bug.cgi?id=19257 --- Comment #1 from FeepingCreature --- See https://github.com/dlang/phobos/pull/6711 --
[Issue 19257] New: std.array.join does not handle const fields that cannot be converted to mutable
https://issues.dlang.org/show_bug.cgi?id=19257

Issue ID: 19257
Summary: std.array.join does not handle const fields that cannot be converted to mutable
Product: D
Version: D2
Hardware: x86_64
OS: Linux
Status: NEW
Severity: enhancement
Priority: P1
Component: phobos
Assignee: nob...@puremagic.com
Reporter: default_357-l...@yahoo.de

std.array.join tries to remove constness from its arrays' fields on the premise that it's constructing a new array anyways. However, consider const(Object)[][].join: const(Object) cannot be implicitly converted to Object, so the join fails. In that case, join should just return a const(Object)[] array. --
Re: Jai compiles 80,000 lines of code in under a second
On Thursday, 20 September 2018 at 23:13:38 UTC, aliak wrote: Alo! I have been watching Jonathan Blow's Jai for a while myself. There are many interesting ideas there, and many of them are what made me like D so much in the first place. It's very important to note that the speed claims he has been making are all a matter of developer discipline. You can have an infinite loop executed at compile-time in both D and Jai. There's nothing magical Jai can do about that - the infinite loop is not going to finish faster ;) You can optimize the speed of compile-time computation just like you can optimize for run-time speed. What you're observing with D is that right now many libraries including Phobos have tried to see how much they can push the language (to make for more expressive code or faster run-time) and not as much time has been spent on optimizing compile-time. If you take a code-base written in a Java-like subset of the language, I can guarantee you that DMD is going to be very competitive with other languages like C++, Go, Java or C#. And that's considering that there are many places that could be optimized internally in DMD. But overall most of the time spent compiling D programs is: a) crazy template / CTFE meta-programming and b) inefficient build process (no parallel compilation for non-separate compilation, no wide-spread use of incremental compilation, etc.). AFAIR, there were several projects for a caching D compiler and that can go a long way to improve things. With a build system like reggae[0] written in the same language as the one being compiled, the line between compile-time vs run-time becomes quite blurred. If the build system part of your project compiles fast enough, ultimately it doesn't matter if it runs at compile-time vs run-time. The only important part is whether the build system is pleasant to work with - e.g. having a concise declarative syntax that covers 80% of the cases while also exposing a procedural interface for the difficult parts that don't fit in the nice model. And all nice declarative abstractions have procedural implementations that one needs to write first. On the other hand, there are things that are much better done at compile-time rather than run-time, like traditional meta-programming. My biggest gripe with D is that currently you only have tools for declaration-level meta-programming (version, static if, static foreach, mixin template), but nothing else than plain strings for statement-level meta-programming. CTFE is great, but why re-implement the compiler in CTFE code, while the actual compiler is sitting right there compiling your whole program ;)

P.S.
Jai: loadExecutableIcon(myIcon, exeLocation)
D: static immutable ubyte[] icon = import("image.png").decodePng;

(In D you have read-only access to the file-system at compile-time using the -J flag.) [0]: https://github.com/atilaneves/reggae
Re: Truly @nogc Exceptions?
On Wednesday, September 19, 2018 3:16:00 PM MDT Steven Schveighoffer via Digitalmars-d wrote: > Given dip1008, we now can throw exceptions inside @nogc code! This is > really cool, and helps make code that uses exceptions or errors @nogc. > Except... I pointed out this problem when the DIP was originally proposed and have always thought that it was kind of pointless because of this issue. The core problem is that Exception predates @nogc. It predates ranges. Its design makes sense if exceptions are rare, and you're willing to use the GC. IMHO, it even makes sense in most cases with most code that is trying to avoid allocations, because the allocations are only going to occur when an error condition occurs. So, for most code bases, they won't affect performance. The problem is that even though that usually makes perfect sense, if you've written your code so that the non-error path doesn't allocate, you'd really like to be able to mark it with @nogc, and you can't because of the exceptions. The DIP helps but not enough to really matter. And of course, in those code bases where you actually can't afford to have the error path allocate, it's even more of a problem. So, what we have is an Exception class that works perfectly well in a world without @nogc but which doesn't work with @nogc worth anything. And if we want to fix that, I think that we need to figure out how to fix Exception in a way that we can sanely transition to whatever the new way we handle exception messages would be. I think that the message member was added by Dicebot as an attempt to fix this issue, because Sociomantic needed it, but I don't know exactly what's going on with that or @__future. - Jonathan M Davis
[Issue 19255] ldmd2.exe not found - must be in PATH?
https://issues.dlang.org/show_bug.cgi?id=19255

Rainer Schuetze changed:

           What    |Removed |Added
           CC      |        |r.sagita...@gmx.de

--- Comment #1 from Rainer Schuetze --- VS2017 stores settings in a private registry that msbuild cannot access. msbuilding with Visual D 0.47 also checks HKLM\Software\LDC\InstallationFolder, but the latest visuald from Appveyor https://ci.appveyor.com/project/rainers/visuald also places the settings into a HKCU-key. Please note that the new compiler detection fails in the appveyor build because of a bug in phobos in stock dmd. --
[Issue 12885] const union wrongly converts implicitly to mutable
https://issues.dlang.org/show_bug.cgi?id=12885

FeepingCreature changed:

           What    |Removed     |Added
           CC      |            |default_357-l...@yahoo.de
           Severity|enhancement |normal

--- Comment #2 from FeepingCreature --- I just ran into this. This bug breaks std.json quite badly: see https://issues.dlang.org/show_bug.cgi?id=19256 , in which const(JSONValue) implicitly converts to JSONValue, allowing us to mutate JSON objects via a const parameter. This is definitely unacceptable. --
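The hole the report describes can be reduced to a few lines. This is my own sketch of the reported behavior (per the report, the implicit conversion is accepted today even though it arguably should be rejected):

```d
// A const union wrongly converting implicitly to mutable (issue 12885).
union U { int* p; }

void main()
{
    int x = 1;
    const U cu = U(&x);
    U mu = cu;   // implicit const -> mutable conversion (the reported bug)
    *mu.p = 2;   // mutates data that was only reachable through const
}
```

This is exactly the mechanism that lets const(JSONValue) leak mutability, since JSONValue stores its payload in a union.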
[Issue 19256] std.json: JSONValue allows violating constness
https://issues.dlang.org/show_bug.cgi?id=19256 --- Comment #1 from FeepingCreature --- See also bug 12885. https://issues.dlang.org/show_bug.cgi?id=12885 This bug is still relevant, as it happens in @safe code, not just @system - std.json violates @trusted by allowing bad things to happen. --
[Issue 11044] Escaping references to lazy argument are allowed and compile to wrong code
https://issues.dlang.org/show_bug.cgi?id=11044

Vladimir Panteleev changed:

           What    |Removed                     |Added
           Keywords|                            |wrong-code
           Summary |Escaping references to lazy |Escaping references to lazy
                   |argument are allowed        |argument are allowed and
                   |                            |compile to wrong code
           Severity|normal                      |critical

--- Comment #3 from Vladimir Panteleev --- Another test case:

// test.d
@safe:

auto toDg(E)(lazy E value)
{
    return { return value; };
}

C t;

class C
{
    auto getDg()
    {
        // return () => prop;
        return toDg(prop);
    }

    final @property string prop()
    {
        assert(this is t);
        return null;
    }
}

void smashStack()
{
    int[1024] dummy = 0xcafebabe;
}

void main()
{
    t = new C();
    auto result = t.getDg();
    smashStack();
    result();
}

The compiler needs to either reject the code (accepts-invalid), or generate a closure (wrong-code). Upgrading severity as this can manifest as a latent bug (depending on whether the stack was overwritten or not) in @safe code. --
[Issue 19256] std.json: JSONValue allows violating constness
https://issues.dlang.org/show_bug.cgi?id=19256

FeepingCreature changed:

           What    |Removed     |Added
           Severity|enhancement |normal

--
[Issue 19256] New: std.json: JSONValue allows violating constness
https://issues.dlang.org/show_bug.cgi?id=19256

Issue ID: 19256
Summary: std.json: JSONValue allows violating constness
Product: D
Version: D2
Hardware: x86_64
OS: Linux
Status: NEW
Severity: enhancement
Priority: P1
Component: phobos
Assignee: nob...@puremagic.com
Reporter: default_357-l...@yahoo.de

Consider the following code:

import std.json;

@safe unittest
{
    const JSONValue innerObj = JSONValue(["foo": JSONValue(1)]);
    assert(innerObj["foo"] == JSONValue(1));
    // Why can I do this??
    JSONValue value = innerObj;
    value["foo"] = JSONValue(2);
    assert(innerObj["foo"] == JSONValue(1));
}

innerObj is changed, even though we access it through a const variable. This should not be allowed. --
Re: Simple parallel foreach and summation/reduction
On Friday, 21 September 2018 at 07:25:17 UTC, Chris Katko wrote: I get "Error: template instance `reduce!((a, b) => a + b)` cannot use local __lambda1 as parameter to non-global template reduce(functions...)" when trying to compile that using the online D editor with DMD and LDC. Any ideas? That's a long standing issue: https://issues.dlang.org/show_bug.cgi?id=5710 Using a string for the expression does work though:

```
import std.stdio, std.parallelism, std.range;

void main()
{
    taskPool.reduce!"a + b"(iota(1_000L)).writeln;
}
```
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 07:19:41 UTC, mate wrote: Reproducible builds are out too, as the produced object file is no longer purely a function of the source code and compiler version. It depends on the developer not doing anything stupid in the build instructions, be it compiler-executed or not. Doesn’t it? I realize that with build instructions written in an unrestricted language it is easier to create a dependency on something else than the compiler, such as the OS. Maybe they plan to solve this problem with appropriate facilities and discipline. With standard build systems, the produced object file can depend on some specific state of the OS too (I think there were Windows updates influencing how Visual Studio was producing object files).
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 07:37:14 UTC, Walter Bright wrote: On 9/21/2018 12:19 AM, mate wrote: It depends on the developer not doing anything stupid Aye, there's the rub! ;-) Different sensibilities on where to put restrictions clearly lead to different designs. I am not sure myself what is best. I agree that one would need to realize that compiling a program could potentially be harmful, and that could be a significant change in one’s habits.
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 06:02:26 UTC, mate wrote: I am actually not sure if there really are no limitations to Jai’s CTFE, in its current state. What I like with unrestricted CTFE is that it makes something that was completely safe a security problem.
Re: Jai compiles 80,000 lines of code in under a second
On Thursday, 20 September 2018 at 23:13:38 UTC, aliak wrote: you can create your build recipe inside the program But this is not a particularly good idea and is even against the times. Everyone is moving from powerful languages like makefiles to _less_ powerful languages (like dub.json) to describe how programs are built. Why is that so? The reasons we have dpldocs.info, dub test, dub build etc. are entirely because we use a _restricted_ DSL to build D programs. If we were all doing makefiles, it's easy to see there would be no common structure hence no automated doc generation, testing etc. Like a C project! There is a shift from imperative to declarative for build recipes in all other modern languages and Jai has made (yet another) wrong choice.
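For contrast, here is what a recipe in dub's restricted declarative format looks like. This is an illustrative sketch only; the package name and dependency version are made up:

```json
{
    "name": "myapp",
    "description": "An example application.",
    "dependencies": {
        "vibe-d": "~>0.8.4"
    }
}
```

Because every project describes itself in this same constrained shape, tooling can uniformly run `dub build` and `dub test` or generate documentation without having to understand an arbitrary build program.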
Re: Mobile is the new PC and AArch64 is the new x64
On Friday, 21 September 2018 at 00:55:25 UTC, RhyS wrote: The PC market will change but dying is a big word. So I'm stuck between "smartphones overtaking PC's" which I've been told has already happened, and "PC's dying" which apparently has too strong of a meaning... PC sales have dropped over the years for multiple reasons: * Adoption of smartphones and tablets * PC hardware getting so powerful, that people have little reason to upgrade * Consoles taking over PC for couch gaming But ... PCs are an integral part of our daily business life. This is a market where PC decline is hard, simply because of the flexibility that PCs offer. You can do a lot with a smartphone and tablet, but a lot of those tasks are way harder or more time-consuming than doing them on a PC. Things can evolve. Maybe in the future we'll simply talk to our phones all the time with good enough voice command software instead of typing on a keyboard? Who knows. I can install termux on my phone but no way I will program for hours on a 6" screen. Let alone all the IDE and debugging tools that are not available (let's not start Vim discussions, thank you very much). Don't worry, I'm not the one who's going to lecture you on vim being the best editor. I only ever use vim when git launches it to type a commit message (that is... if I haven't set `EDITOR=nano`). However, even if you won't program on a smartphone, maybe future generations of developers could at some point. If some people started learning on a smartphone they could get used to it and just continue on the platform they're comfortable with. (that's really speculation, I have no idea if it could actually happen) You can attach a keyboard to your phone, a bigger screen to your phone, and you have half a PC. But you are still missing the software... That can change, software can be ported to Android after all (like LDC). 
We will probably move to a hybrid solution like this in the future, where people can use their smartphones as PCs (with attachments for productivity), but it's a LONG road to get even close to the same level that a basic PC offers in terms of power and flexibility. A huge chunk of development nowadays is web development, which doesn't require all that much power AFAIK. You're right on flexibility though. A smartphone is nothing else than a smaller tablet, which is nothing else than a less flexible laptop, which is nothing else than a more compact and less flexible PC. Just a basic concept like multi-window handling is an alien idea on smartphones, and badly done. Even Windows 3.1 was more capable in this regard. Currently smartphones are not designed for the creativity and flexibility you need. Can they become this? Sure ... but not with the current mobile operating systems. Android is a resource hog (JVM, thank you very much) that uses more memory than my Windows 10 installation while offering less flexibility! Microsoft tried and fell flat on their face. Out of curiosity, how did you come to such a situation regarding Android vs Windows 10? Win 10 on my machine takes at least 2 GB of RAM; Android certainly doesn't on my phone... It's possible we may see devices that are plenty powerful to do day-to-day tasks and see PCs become specialized tools requiring (highly paid) experts. But smartphones will always be limited in cooling and power usage compared to a full-blown PC. The only way to mitigate this is by having servers offload intensive tasks. Just like desktop computers will always be limited in cooling and power usage compared to any supercomputer from NASA. I do not see PCs dying out, just changing in nature. A smartphone is a PC, just one that is less flexible and is power limited because of its size. And that law will always be true. 
If you can put X power in a small device, you can put X * 10 in a bigger device, and X * 100 in an even bigger device. That law will always be true, yes. But if we can cram more and more power into less and less space, at some point we could have enough power in a very small device. And do not be so sure that ARM is the future... I have several NUCs around here, and those things are darn powerful (think 8-year-old PC) these days, with very low power usage (6 W). And RISC-V is coming up... I never said anything about ARM being the future. The PC world as we know it never stops changing. But predictions that X will die are wrong. Things simply evolve. A smartphone/tablet is a PC, so anybody making claims about how PCs are dying is simply stating that PCs are evolving into different forms. You're right; that was a wrong wording again. I should have talked about "classic desktop computers" instead of just "PC". (But "PC" is shorter to write and I'm lazy.) And by the way, smartphone sales are also starting to plateau, because people are slower to replace their phones these days. If it was not for the
Re: Jai compiles 80,000 lines of code in under a second
On 9/21/2018 12:19 AM, mate wrote: It depends on the developer not doing anything stupid Aye, there's the rub!
Re: Simple parallel foreach and summation/reduction
On Thursday, 20 September 2018 at 05:51:17 UTC, Neia Neutuladh wrote: On Thursday, 20 September 2018 at 05:34:42 UTC, Chris Katko wrote: All I want to do is loop from 0 to [constant] with a for or foreach, and have it split up across however many cores I have. You're looking at std.parallelism.TaskPool, especially the amap and reduce functions. Should do pretty much exactly what you're asking. auto taskpool = new TaskPool(); taskpool.reduce!((a, b) => a + b)(iota(1_000_000_000_000L)); I get "Error: template instance `reduce!((a, b) => a + b)` cannot use local __lambda1 as parameter to non-global template reduce(functions...)" when trying to compile that using the online D editor with DMD and LDC. Any ideas?
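For what it's worth, the error happens because `TaskPool.reduce` is a member template that cannot instantiate a lambda declared in a local scope. A workaround documented for std.parallelism is to pass the operation as a string lambda (or a module-scope function) instead. A minimal sketch, with the range shrunk so it runs quickly:

```d
import std.parallelism : taskPool;
import std.range : iota;

void main()
{
    // Pass the reduction as a string lambda ("a + b"), which
    // std.parallelism mixes in at module scope, instead of a local
    // delegate that the non-global reduce template can't accept.
    immutable sum = taskPool.reduce!"a + b"(iota(1_000L));
    assert(sum == 499_500); // 0 + 1 + ... + 999
}
```

Declaring a plain function at module scope and passing that by alias works for the same reason: the symbol is visible globally, so the template can instantiate it.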
Re: Copy Constructor DIP and implementation
On Wednesday, 19 September 2018 at 00:05:15 UTC, Jonathan M Davis wrote: On Tuesday, September 18, 2018 10:58:39 AM MDT aliak via Digitalmars-d-announce wrote: This will break compilation of current code that has an explicit copy constructor, and the fix is simply to add the attribute @implicit. In that case, why not just use a transitional compiler switch? Why force everyone to mark their copy constructors with @implicit forever? The whole point of adding the attribute was to avoid breaking existing code. - Jonathan M Davis What about a command-line switch that just detects copy constructors? LLVM 8 supposedly has a new feature which breaks compatibility for casts from float -> int: Floating-point casts (converting a floating-point number to an integer by discarding the data after the decimal) have been optimized, but in a way that might cause problems for developers who rely on undefined behavior around this feature. Clang has a new command-line switch to detect this issue. Could we do copy constructors in a similar way?
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 06:34:47 UTC, Vladimir Panteleev wrote: The problem with putting it in the compiler is that it invalidates many contracts (and, thus, use cases) about what invoking the compiler can do. This means you can't bisect or reduce (as with Dustmite) the source code reliably. I don't see what difference it makes. Normally when you bisect, you build the program to test using the build system. Isn't that equivalent to what the Jai compiler would do? What cases do you have in mind? Reproducible builds are out too, as the produced object file is no longer purely a function of the source code and compiler version. It depends on the developer not doing anything stupid in the build instructions, be they compiler-executed or not. Doesn't it?
Re: phobo's std.file is completely broke!
On Thursday, 20 September 2018 at 19:49:01 UTC, Nick Sabalausky (Abscissa) wrote: On 09/19/2018 11:45 PM, Vladimir Panteleev wrote: On Thursday, 20 September 2018 at 03:23:36 UTC, Nick Sabalausky (Abscissa) wrote: (Not on a Win box at the moment.) I added the output of my test program to the gist: https://gist.github.com/CyberShadow/049cf06f4ec31b205dde4b0e3c12a986#file-output-txt assert( dir.toAbsolutePath.length > MAX_LENGTH-12 ); Actually it's crazier than that. The concatenation of the current directory plus the relative path must be < MAX_PATH (approx.). Meaning, if you are 50 directories deep, a relative path starting with 50 `..\` still won't allow you to access C:\file.txt. Ouch. Ok, yea, this is pretty solid evidence that ALL usage of non-`\\?\` paths on Windows needs to be killed dead, dead, dead. If it were decided (not that I'm in favor of it) that we should be protecting developers from files named " a ", "a." and "COM1", then that really needs to be done on our end on top of mandatory `\\?\`-based access. Anyone masochistic enough to really WANT to deal with MAX_PATH and such is free to access the Win32 APIs directly. +1 On Windows, every logical path provided to the std file functions should be properly converted to a physical path starting with that prefix. Obviously this won't solve ALL Windows-specific problems, but that will AT LEAST remove a whole class of them.
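To make the proposal concrete: the conversion itself is mechanical. Here is a hedged sketch of a hypothetical helper (not actual Phobos code; the name `toExtendedPath` and the exact rules are assumptions) that prefixes an already-absolute, backslash-normalized Windows path with `\\?\` so Win32 APIs skip MAX_PATH limits and reserved-name parsing:

```d
// Hypothetical sketch, not Phobos code. Assumes `path` is already
// absolute and normalized with backslashes; `\\?\` paths are passed
// to the filesystem verbatim, so "." / ".." are NOT resolved for you.
string toExtendedPath(string path)
{
    if (path.length >= 4 && path[0 .. 4] == `\\?\`)
        return path; // already in extended-length form
    if (path.length >= 2 && path[0 .. 2] == `\\`)
        return `\\?\UNC\` ~ path[2 .. $]; // UNC shares use \\?\UNC\server\share
    return `\\?\` ~ path; // drive-letter path, e.g. C:\...
}
```

The real work in Phobos would be making every std.file entry point absolutize and normalize the logical path first (relative paths and forward slashes are not understood by the `\\?\` namespace), then hand the extended form to the `W` APIs.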
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 06:30:40 UTC, Walter Bright wrote: On 9/20/2018 10:11 PM, mate wrote: Note that the build can be done at compile time because the metaprogramming capabilities of the language are not limited in terms of system calls. Back in the naive olden days, Microsoft released ActiveX, where a web page could load executable objects (!) from the internet and run them in the browser. It quickly became apparent that this was a disaster, as lots of people on the internet aren't to be trusted. CTFE on D doesn't allow making any system calls. This is on purpose. The usual argument against this is that source code distributions already usually include some sort of build or installation script (be it in the form of "configure", or a makefile, or a Visual Studio project), which can already execute arbitrary commands. The problem with putting it in the compiler is that it invalidates many contracts (and, thus, use cases) about what invoking the compiler can do. This means you can't bisect or reduce (as with Dustmite) the source code reliably. Reproducible builds are out too, as the produced object file is no longer purely a function of the source code and compiler version.
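The CTFE restriction mentioned above is easy to see in practice. A small sketch: a side-effect-free function evaluates fine at compile time, while anything that would make a system call is rejected by the compiler:

```d
// CTFE evaluates pure computations at compile time, but rejects
// anything that would make a system call (file or network I/O, etc.).
int answer() { return 6 * 7; }

enum compileTime = answer(); // OK: no side effects, runs in CTFE
static assert(compileTime == 42);

// This would fail to compile if uncommented: std.file.readText
// performs a system call, which CTFE forbids by design.
// enum contents = std.file.readText("build.conf");
```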
Re: Jai compiles 80,000 lines of code in under a second
On 9/20/2018 10:11 PM, mate wrote: Note that the build can be done at compile time because the metaprogramming capabilities of the language are not limited in terms of system calls. Back in the naive olden days, Microsoft released ActiveX, where a web page could load executable objects (!) from the internet and run them in the browser. It quickly became apparent that this was a disaster, as lots of people on the internet aren't to be trusted. CTFE on D doesn't allow making any system calls. This is on purpose.
Re: Walter's Guide to Translating Code From One Language to Another
On Friday, 21 September 2018 at 06:00:33 UTC, Walter Bright wrote: I've learned this the hard way, and I've had to learn it several times because I am a slow learner. I've posted this before, and repeat it because it bears repeating. I find this is a great procedure for any sort of large refactoring -- minimal changes at each step, and ensure tests are passing after every change. Thanks for sharing!
Re: std.process.execute without capturing stderr?
On Thursday, 20 September 2018 at 14:10:44 UTC, Steven Schveighoffer wrote: Hm... 2.079.0 had it: Sorry, I made a mistake while testing, and since I found that it was not available in the documentation at dpldocs.info, I concluded that it must be a really new feature. But now it seems to me that dpldocs is a little outdated, isn't it? Meanwhile I've got the latest version of dmd and it's working now.
Walter's Guide to Translating Code From One Language to Another
I've learned this the hard way, and I've had to learn it several times because I am a slow learner. I've posted this before, and repeat it because it bears repeating. The procedure is:
1. pass the test suite
2. prep the file for conversion, i.e. try to minimize the use of idioms that won't easily translate
3. pass the test suite
4. translate the code with as few edits as practical. Do not reformat the code. Do not refactor it. Do not fix anything, no matter how tempting. Reproduce the behavior of the original as much as possible.
5. pass the test suite
6. now reformat, refactor, fix (as separate PRs, of course, passing the test suite between each)
7. pass the test suite
Note that without a test suite, you're doomed :-)
Re: Jai compiles 80,000 lines of code in under a second
On Friday, 21 September 2018 at 05:39:35 UTC, Vladimir Panteleev wrote: On Friday, 21 September 2018 at 05:11:32 UTC, mate wrote: Note that the build can be done at compile time because the metaprogramming capabilities of the language are not limited in terms of system calls. Good luck bisecting that code base when any version of it did anything even mildly specific to the author's PC. Indeed. I am actually not sure there really are no limitations to Jai's CTFE, in its current state. There are probably facilities in the stdlib to avoid the need for doing system-specific things; also, the build instructions would hopefully be contained in some function/file, either by convention or as required by the compiler, limiting the scope of build debugging. Moreover, I got the feeling that the language is geared towards "good programmers" and is less concerned with mistakes happening because the author did something stupid.