[Issue 19260] New: extern(C++) `T* const` mangling
https://issues.dlang.org/show_bug.cgi?id=19260
Issue ID: 19260
Summary: extern(C++) `T* const` mangling
Product: D
Version: D2
Hardware: All
OS: All
Status: NEW
Severity: enhancement
Priority: P1
Component: dmd
Assignee: nob...@puremagic.com
Reporter: turkey...@gmail.com

This function:

    void deallocate(T* const ptr, size_t count);

is critical for linking STL, but I can't mangle the function. D should obviously declare the function as:

    void deallocate(T* ptr, size_t count);

which is semantically identical in terms of usage, but the mangling of the const pointer is different and therefore can't link. What can we do? I can't use pragma(mangle), because `T` is a template arg, and could be anything! This has stopped me in my tracks :/ --
[Issue 19260] extern(C++) `T* const` mangling
https://issues.dlang.org/show_bug.cgi?id=19260 Manu changed: What|Removed |Added Keywords||C++, industry --
Re: Copy Constructor DIP and implementation
On Saturday, September 22, 2018 8:40:15 PM MDT Nicholas Wilson via Digitalmars-d-announce wrote: > On Sunday, 23 September 2018 at 01:08:50 UTC, Jonathan M Davis > > wrote: > > On Saturday, September 22, 2018 6:13:25 PM MDT Adam D. Ruppe > > > > via Digitalmars-d-announce wrote: > >> [...] > > > > Yeah, the problem has to do with how much you have to mark up > > your code. Whether you have @foo @bar @baz or foo bar baz is > > pretty irrelevant. And keywords eat up identifiers, so they're > > actually worse. > > > > In addition, most of the complaints about @implicit have to do > > with the fact that it doesn't even add anything. It's annoying > > that we have @nogc, @safe, pure, etc. but at least each of > > those adds something. @implicit is just there because of the > > fear of breaking a theoretical piece of code that's going to be > > extremely rare if it exists at all and in most cases would > > continue to work just fine even if it did exist. > > > > - Jonathan M Davis > > It appears that @implicit has been removed from the > implementation [1], but not yet from the DIP. > > https://github.com/dlang/dmd/commit/cdd8100 Well, that's a good sign. - Jonathan M Davis
Re: Copy Constructor DIP and implementation
On Sunday, 23 September 2018 at 01:08:50 UTC, Jonathan M Davis wrote: On Saturday, September 22, 2018 6:13:25 PM MDT Adam D. Ruppe via Digitalmars-d-announce wrote: [...] Yeah, the problem has to do with how much you have to mark up your code. Whether you have @foo @bar @baz or foo bar baz is pretty irrelevant. And keywords eat up identifiers, so they're actually worse. In addition, most of the complaints about @implicit have to do with the fact that it doesn't even add anything. It's annoying that we have @nogc, @safe, pure, etc. but at least each of those adds something. @implicit is just there because of the fear of breaking a theoretical piece of code that's going to be extremely rare if it exists at all and in most cases would continue to work just fine even if it did exist. - Jonathan M Davis It appears that @implicit has been removed from the implementation [1], but not yet from the DIP. https://github.com/dlang/dmd/commit/cdd8100
Re: phobo's std.file is completely broke!
On 09/22/2018 04:46 PM, Jonathan Marler wrote: Decided to play around with this for a bit. Made a "proof of concept" library: https://github.com/marler8997/longfiles It's just a prototype/exploration on the topic. It allows you to include "stdx.longfiles" instead of "std.file" which will enable the conversion in every call, or you can import "stdx.longfiles : toLongPath" and use that on filenames passed to std.file. Cool! Will have to take a closer look and try it out. Regarding this: "TODO: what should be done about the MS-DOS FAT filesystem?"... First of all, FAT16 can still be fully-used with the current interfaces anyway - it's just that if you attempt anything FAT16 doesn't support, the error you get will come from the OS rather than a D lib. But *unlike* the non-`\\?\` path issues, there really isn't anything here that needs to be worked around, or that even *can* be sensibly worked around. Besides, FAT16 is a rarely-used, long-since-outdated legacy format. Its successor, FAT32 has been around for more than 20 years, and I'm not aware of anything more recent than the 3.5" floppy that uses it by default. I'd say it safely falls into the category of "Too much of an esoteric special-case to be worth requiring that special support be added in the main 'path of least resistance' interface (as long as there's nothing preventing the user from handling it on their own if they really need to.)"
Re: Rather D1 then D2
On Saturday, September 22, 2018 7:34:55 PM MDT rikki cattermole via Digitalmars-d wrote: > On 23/09/2018 2:31 AM, Jonathan Marler wrote: > > On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: > >> Then D isn't the right choice for you. > > > > I think it makes for a better community if we can be more welcoming, > > helpful a gracious instead of responding to criticism this way. This is > > someone who saw enough potential with D to end up on the forums but had > > some gripes with it, after all who doesn't? I'm glad he took the > > initiative to provide us with good feedback, and he's not the first to > > take issue with the inconsistent '@' attribute syntax. I'm sure > > everyone can agree this inconsistency is less than ideal but that > > doesn't mean D isn't right for them and we should respond this feedback > > like this with thanks rather than dismissal. > > It's much better for the language and for the person looking into a > technology to be open to saying that it isn't the right tool for the job > after some discussion which has taken place. > > I would much rather stop people looking instead of trying to impose > changes on us that are not likely to happen in any acceptable time span. > Let alone at all. At least then, they can look back and see that we > didn't want to waste their or our time trying to compromise on something > that wasn't going to happen anyway. > > After all, don't we want to make people happy with their decision? We want to understand where D can and should be improved, but we also need to acknowledge where it shouldn't be changed. Not everyone is going to be happy with it, and we shouldn't try to make it so that everyone is going to be happy with it. For instance, from what I know of Go, I would be _extremely_ unhappy with it, and yet there are folks who absolutely love it. Would I try to convince such folks to come to D? No. 
If that's the kind of taste that they have in languages, I'd rather that they'd stay away from D and not risk affecting it in a negative way for me. On the other hand, if D fits reasonably well for someone but doesn't quite fit well enough due to some problem with the language, then maybe there's something that we can do to address that, making D work for them, and making it a better language for the rest of us. Ultimately, it's a question of balance. We want to improve D and make it work for more people, but we also don't want to make it worse for ourselves in the process of trying to make it work better for someone else. Listening to complaints about the language from both those trying it out and those who use it regularly can be important. Ultimately though, I think that it makes a lot more sense to focus on trying to fix the problems that existing users have rather than trying to fix the problems that potential users have. As Walter likes to talk about, when someone tells you why they're not using something, and you solve that problem for them, they always have another reason. And really, while we want D to have a large user base, as users of the language, first and foremost, we want it to work well for ourselves. If we can make the language work really well for what we need, then it will work well for other people as well, even if it won't work well for everyone. With regards to D1 users who are unhappy with D2, I think that it makes some sense to point out that a subset of D2 can be used in a way that's a lot like D1, but ultimately, if someone doesn't like the direction that D2 took, they're probably better off finding a language that better fits whatever it is that they're looking for in a language. Trying to convince someone to use a language that they don't like is likely to just make them unhappy. - Jonathan M Davis
Re: Rather D1 then D2
On 23/09/2018 2:31 AM, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. It's much better for the language and for the person looking into a technology to be open to saying that it isn't the right tool for the job after some discussion has taken place. I would much rather stop people looking instead of trying to impose changes on us that are not likely to happen in any acceptable time span, let alone at all. At least then, they can look back and see that we didn't want to waste their or our time trying to compromise on something that wasn't going to happen anyway. After all, don't we want to make people happy with their decision?
Re: Updating D beyond Unicode 2.0
On Sunday, 23 September 2018 at 00:18:06 UTC, Adam D. Ruppe wrote: I have seen Japanese D code before on twitter, but cannot find it now (surely because the search engines also share this bias). You can find a lot more Japanese D code on this blogging platform: https://qiita.com/tags/dlang Here's the most recent post to save you a click: https://qiita.com/ShigekiKarita/items/9b3aa8f716848278ef62
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 12:37:09 UTC, Steven Schveighoffer wrote: But aren't some (many?) Chinese/Japanese characters representing whole words? -Steve Kind of hair-splitting, but it's more accurate to say that some Chinese/Japanese words can be written with one character. Like how English speakers wouldn't normally say that "A" and "I" are characters representing whole words.
Re: Copy Constructor DIP and implementation
On Saturday, September 22, 2018 6:13:25 PM MDT Adam D. Ruppe via Digitalmars-d-announce wrote: > On Saturday, 22 September 2018 at 17:43:57 UTC, 12345swordy wrote: > > If that where the case, then why not make it an actual keyword? > > A frequent complaint regarding D is that there are too many > > attributes, this will undoubtedly adding more to it. > > When I (and surely others like me) complain that there are too > many attributes, the complaint has nothing to do with the @ > character. I consider "nothrow" and "pure" to be part of the > problem and they lack @. Yeah, the problem has to do with how much you have to mark up your code. Whether you have @foo @bar @baz or foo bar baz is pretty irrelevant. And keywords eat up identifiers, so they're actually worse. In addition, most of the complaints about @implicit have to do with the fact that it doesn't even add anything. It's annoying that we have @nogc, @safe, pure, etc. but at least each of those adds something. @implicit is just there because of the fear of breaking a theoretical piece of code that's going to be extremely rare if it exists at all and in most cases would continue to work just fine even if it did exist. - Jonathan M Davis
Re: Updating D beyond Unicode 2.0
On Saturday, September 22, 2018 10:07:38 AM MDT Neia Neutuladh via Digitalmars-d wrote: > On Saturday, 22 September 2018 at 08:52:32 UTC, Jonathan M Davis > > wrote: > > Unicode identifiers may make sense in a code base that is going > > to be used solely by a group of developers who speak a > > particular language that uses a number a of non-ASCII > > characters (especially languages like Chinese or Japanese), but > > it has no business in any code that's intended for > > international use. It just causes problems. > > You have a problem when you need to share a codebase between two > organizations using different languages. "Just use ASCII" is not > the solution. "Use a language that most developers in both > organizations can use" is. That's *usually* going to be English, > but not always. For instance, a Belorussian company doing > outsourcing work for a Russian company might reasonably write > code in Russian. > > If you're writing for a global audience, as most open source code > is, you're usually going to use the most widely spoken language. My point is that if your code base is definitely only going to be used within a group of people who are using a keyboard that supports a Unicode character that you want to use, then it's not necessarily a problem to use it, but if you're writing code that may be seen or used by a general audience (especially if it's going to be open source), then it needs to be in ASCII, or it's a serious problem. Even if it's a character like lambda that most everyone is going to understand, many, many programmers are not going to be able type it on their keyboards, and that's going to cause nothing but problems. For better or worse, English is the international language of science and engineering, and that includes programming. So, any programs that are intended to be seen and used by the world at large need to be in ASCII. And the biggest practical issue with that is whether a character is even on a typical keyboard. 
Using a Unicode character in a program makes it so that many programmers cannot type it. And even given the large breadth of Unicode characters, you could have a keyboard that supports a number of Unicode characters and still not have the Unicode character in question. So, open source programs need to be in ASCII. Now, I don't know that it's a problem to support a wide range of Unicode characters in identifiers when you consider the issues of folks whose native language is not English (especially when it's a language like Chinese or Japanese), but open source programs should only be using ASCII identifiers. And unfortunately, sometimes, the fact that a language supports Unicode identifiers has led English speakers to do stupid things like use the lambda character in identifiers. So, I can understand Walter's reticence to go further with supporting Unicode identifiers, but on the other hand, when you consider how many people there are on the planet who use a language that doesn't even use the Latin alphabet, it's arguably a good idea to fully support Unicode identifiers. - Jonathan M Davis
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 19:59:42 UTC, Erik van Velzen wrote: Nobody in this thread so far has said they are programming in non-ASCII. This is the obvious observation bias I alluded to before: of course people who don't read and write English aren't in this thread, since they cannot read or write the English used in this thread! Ditto for bugzilla. Absence of evidence CAN be evidence of absence... but not when the absence is so easily explained by our shared bias. Neia Neutuladh posted one link. I have seen Japanese D code before on twitter, but cannot find it now (surely because the search engines also share this bias). Perhaps those are the only two examples in existence, but I stand by my belief that we must reach out to these other communities somehow and do a proper, proactive study before dismissing the possibility.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:53:02 UTC, krzaq wrote: C++ added contextual keywords, like `override` and `final`. If this can be done in C++, surely D is easier to parse? If D did more stuff like that, it would start to be harder to parse.
Re: Copy Constructor DIP and implementation
On Saturday, 22 September 2018 at 17:43:57 UTC, 12345swordy wrote: If that were the case, then why not make it an actual keyword? A frequent complaint regarding D is that there are too many attributes; this will undoubtedly add more to them. When I (and surely others like me) complain that there are too many attributes, the complaint has nothing to do with the @ character. I consider "nothrow" and "pure" to be part of the problem and they lack @.
Re: "Error: function expected before (), not module *module* of type void
On 09/22/2018 04:51 AM, Samir wrote: Thanks for your help, Adam! Right after posting my question, I started reading this site: https://www.tutorialspoint.com/d_programming/d_programming_modules.htm Better read the original: http://ddili.org/ders/d.en/modules.html
Re: phobo's std.file is completely broke!
On Saturday, 22 September 2018 at 21:04:04 UTC, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 20:46:27 UTC, Jonathan Marler wrote: Decided to play around with this for a bit. Made a "proof of concept" library: I suggest using GetFullPathNameW instead of GetCurrentDirectory + manual path appending / normalization. It's also what CoreFX seems to be doing. Yes that allows the library to avoid calling buildNormalizedPath. I've implemented and pushed this change. This change also exposed a weakness in the Appender interface and I've created a bug for it: https://issues.dlang.org/show_bug.cgi?id=19259 The problem is there's no way to extend the length of the data in an appender if you don't use the `put` functions. So when I call GetFullPathNameW function to populate the data (like the .NET CoreFX implementation does) I can't extend the length.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:53:02 UTC, krzaq wrote: On Saturday, 22 September 2018 at 20:40:14 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: [...] That works for types but wouldn't work for keywords. Keywords have special meaning in the lexical stage and you can't extend/change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor but then you run into all sorts of other problems (i.e. #define safe @safe). If you come up with other ideas then feel free to share. No one likes the current state but no one has come up with a good solution yet. C++ added contextual keywords, like `override` and `final`. If this can be done in C++, surely D is easier to parse? https://wiki.dlang.org/Language_Designs_Explained#Why_don.27t_we_create_a_special_rule_in_the_syntax_to_handle_non-keyword_function_attributes_without_an_.27.40.27_character.3F
[Issue 19259] New: std.array.Appender needs a way to set the length
https://issues.dlang.org/show_bug.cgi?id=19259
Issue ID: 19259
Summary: std.array.Appender needs a way to set the length
Product: D
Version: D2
Hardware: All
OS: All
Status: NEW
Severity: enhancement
Priority: P2
Component: phobos
Assignee: nob...@puremagic.com
Reporter: johnnymar...@gmail.com

std.array.Appender needs a way to extend/add to the length of `data`. See the following use case:

```d
uint tryAppendFullPathNameImpl(const(wchar)* nullTerminatedPath, Appender!(wchar[]) builder)
{
    import core.sys.windows.winbase : GetFullPathNameW;

    auto prefixLength = builder.data.length;
    for (;;)
    {
        const result = GetFullPathNameW(nullTerminatedPath,
            builder.capacity - prefixLength,
            builder.data.ptr + prefixLength, null);
        if (result <= (builder.capacity - prefixLength))
        {
            // NO WAY TO DO THIS:
            //builder.overrideDataLength(prefixLength + result);
            return result;
        }
        builder.reserve(prefixLength + result);
    }
}
```

What's happening here is we are passing the Appender array to a C function that populates the array with our resulting "full path". Note that this implementation is taken from .NET CoreFX: https://github.com/dotnet/corefx/blob/1bff7880bfa949e8c5e46039808ec412640bbb5e/src/Common/src/CoreLib/System/IO/PathHelper.Windows.cs#L72 The problem is that once it's populated, we have no way of extending the length of the array after it was populated by the C function. --
Re: Converting a character to upper case in string
On Saturday, 22 September 2018 at 06:01:20 UTC, Vladimir Panteleev wrote: On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, first one to upper case in a unicode correct manner? That would depend on how you'd define correctness. If your application needs to support "all" languages, then (depending how you interpret it) the task may not be meaningful, as some languages don't have the notion of "upper-case" or even "character" (as an individual glyph). Some languages do have those notions, but they serve a specific purpose that doesn't align with the one in English (e.g. Lojban). In which code level I should be working on? Grapheme? Or maybe code point is sufficient? Using graphemes is necessary if you need to support e.g. combining marks (e.g. ̏◌ + S = ̏S). Uppercase and Lowercase gets even more funky with Turkish.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 21:42:11 UTC, bauss wrote: Maybe it should be visible to more users? At present I do not believe this would bring an observable benefit.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 19:41:56 UTC, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 19:09:24 UTC, bauss wrote: And on top of that maybe a flag system. This exists, but is only visible to certain users. Maybe it should be visible to more users?
Re: phobo's std.file is completely broke!
On Saturday, 22 September 2018 at 20:46:27 UTC, Jonathan Marler wrote: Decided to play around with this for a bit. Made a "proof of concept" library: I suggest using GetFullPathNameW instead of GetCurrentDirectory + manual path appending / normalization. It's also what CoreFX seems to be doing.
Re: Converting a character to upper case in string
On Saturday, 22 September 2018 at 06:01:20 UTC, Vladimir Panteleev wrote: On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, first one to upper case in a unicode correct manner? That would depend on how you'd define correctness. If your application needs to support "all" languages, then (depending how you interpret it) the task may not be meaningful, as some languages don't have the notion of "upper-case" or even "character" (as an individual glyph). Some languages do have those notions, but they serve a specific purpose that doesn't align with the one in English (e.g. Lojban).

There are other traps in the question of uppercase/lowercase which make it indeed very difficult to handle correctly if we don't define what "correctly" means. Examples:

- It may be necessary to know the locale, i.e. the language of the string to uppercase. In Turkish, the uppercase of i is not I but İ, and the lowercase of I is ı (that was a reason for the calamitously low performance of toUpper/toLower in Java, for example).
- Some uppercases depend on what they are used for. German ß should be uppercased as SS (note also, btw, that 1 code point becomes 2 in uppercase) in normal text, but for calligraphic work, road signs and other usages it can be capital ẞ.
- Greek Σ has two lowercase forms, σ and ς, depending on the position in the word.
- While it becomes less and less relevant, Serbo-Croatian may use digraphs when transcribing from Cyrillic (Serbian) to Latin (Croatian) script; these digraphs have two uppercase forms (all-capital and title-case):
  - dž -> DŽ or Dž
  - lj -> LJ or Lj
  - nj -> NJ or Nj
  Normalization would normally take care of that case.
- Some languages may modify or remove diacritical signs when uppercasing. It is quite usual in French not to put accents on capitals.

It is also clear that the operation of uppercasing is not symmetric with lowercasing.

In which code level I should be working on? Grapheme? Or maybe code point is sufficient? Using graphemes is necessary if you need to support e.g. combining marks (e.g. ̏◌ + S = ̏S).
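Several of these pitfalls can be observed directly in any runtime that implements the Unicode default case mappings; here is a short Python illustration (Python's str casing is locale-independent, which is exactly why the Turkish case comes out wrong):

```python
# German ß uppercases to two code points: one character becomes "SS".
assert "straße".upper() == "STRASSE"
assert len("ß".upper()) == 2

# Greek capital sigma lowercases to final form ς (U+03C2) at the end of
# a word and to σ (U+03C3) elsewhere.
assert "ΟΔΟΣ".lower() == "οδο\u03c2"

# Locale-blind casing: the lowercase of I is always "i" here, although
# Turkish orthography expects dotless ı (U+0131).
assert "I".lower() == "i"

print("all case-mapping checks passed")
```

A correct Turkish result needs locale-aware tailorings on top of the default mappings, which is precisely the extra complexity the post describes.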
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:40:14 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: On Saturday, 22 September 2018 at 15:45:09 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent. Yeah, there's been a lot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh, I've forgotten how long I've been using D. Interesting article. "int safe = 0; // This code would break if "safe" was added as a keyword" My question here: why didn't D use a similar solution as C when dealing with these things? Look at the introduction of the bool datatype in C99. They created the compiler-reserved type "_Bool" and put "typedef _Bool bool" in "stdbool.h". The people wanting to use this new feature can include this header, and others can leave it be. No ugly "@" polluting the language on every line where it's used. Wouldn't a similar solution have been possible in D? That works for types but wouldn't work for keywords. 
Keywords have special meaning in the lexical stage, and you can't extend/change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor, but then you run into all sorts of other problems (i.e. #define safe @safe). If you come up with other ideas then feel free to share. No one likes the current state, but no one has come up with a good solution yet. Yes, of course you are right. A typedef for this problem wouldn't be good enough. But there are plenty of other solutions to encapsulate the ugliness in one area instead of spreading it to every code line. What about a compiler switch like the one in gcc ("-std=c11"), until the change is made the default? There is an excellent speech by Scott Meyers about this. I loved his books about C++, and his recommendations for D are just as good. 41:00 into the video he mentions the legacy crud of C++ and the accidental complexity it contains. He advises D to avoid needing "explainers" like him in the future, and to use the small legacy codebase of D to remove/avoid such accidental complexity. https://www.youtube.com/watch?v=KAWA1DuvCnQ I don't think enough people listened to him.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:53:02 UTC, krzaq wrote: C++ added contextual keywords, like `override` and `final`. If this can be done in C++, surely D is easier to parse? Currently this compiles:

    alias safe = int;

    @safe foo() { return 1; }
    safe bar() { return 2; }

Making "safe" a keyword would cause the second definition to be ambiguous. (Not that there's much incentive to keep this syntax valid...)
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 20:40:14 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: [...] That works for types but wouldn't work for keywords. Keywords have special meaning in the lexical stage and you can't extend/change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor but then you run into all sorts of other problems (i.e. #define safe @safe). If you come up with other ideas then feel free to share. No one likes the current state but no one has come up with a good solution yet. C++ added contextual keywords, like `override` and `final`. If this can be done in C++, surely D is easier to parse?
Re: phobo's std.file is completely broke!
On Thursday, 20 September 2018 at 19:49:01 UTC, Nick Sabalausky (Abscissa) wrote: On 09/19/2018 11:45 PM, Vladimir Panteleev wrote: On Thursday, 20 September 2018 at 03:23:36 UTC, Nick Sabalausky (Abscissa) wrote: (Not on a Win box at the moment.) I added the output of my test program to the gist: https://gist.github.com/CyberShadow/049cf06f4ec31b205dde4b0e3c12a986#file-output-txt assert( dir.toAbsolutePath.length > MAX_LENGTH-12 ); Actually it's crazier than that. The concatenation of the current directory plus the relative path must be < MAX_PATH (approx.). Meaning, if you are 50 directories deep, a relative path starting with 50 `..\` still won't allow you to access C:\file.txt. Ouch. Ok, yea, this is pretty solid evidence that ALL usage of non-`\\?\` paths on Windows needs to be killed dead, dead, dead. If it were decided (not that I'm in favor of it) that we should be protecting developers from files named " a ", "a." and "COM1", then that really needs to be done on our end on top of mandatory `\\?\`-based access. Anyone masochistic enough to really WANT to deal with MAX_PATH and such is free to access the Win32 APIs directly. Decided to play around with this for a bit. Made a "proof of concept" library: https://github.com/marler8997/longfiles It's just a prototype/exploration on the topic. It allows you to include "stdx.longfiles" instead of "std.file" which will enable the conversion in every call, or you can import "stdx.longfiles : toLongPath" and use that on filenames passed to std.file. There's also a test you can run rund test/test_with_longfiles.d (should work) rund test/test_without_longfiles.d (should fail) NOTE: use "rund test/cleantests.d" to remove the files...I wasn't able to via the windows explorer program.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: On Saturday, 22 September 2018 at 15:45:09 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful a gracious instead of responding to criticism this way. This is someone who saw enough potential with D to end up on the forums but had some gripes with it, after all who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal but that doesn't mean D isn't right for them and we should respond this feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there a clear decision to make things consistent. Yeah there's been alot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh I've forgotten how long I've been using D. Interesting article. "int safe = 0; // This code would break if "safe" was added as a keyword" My question here: why didn't D use a similar solution as C when dealing with these things? Look at the introduction of the bool datatype in C99. They created the compiler reserved type "_Bool" and put "typedef _Bool bool" in "stdbool.h". The people wanting to use this new feature can include this header, and other can leave it be. No ugly "@" polluting the language on every line where it's used. Wouldn't a similar solution have been possible in D? That works for types but wouldn't work for keywords. 
Keywords have special meaning in the lexical stage, and you can't extend or change the grammar of the language via an alias or typedef. You could do something like this with a preprocessor, but then you run into all sorts of other problems (e.g. `#define safe @safe`). If you come up with other ideas, feel free to share. No one likes the current state, but no one has come up with a good solution yet.
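To make the distinction concrete, here is a minimal D sketch: an alias can rename a type (which is essentially all the C99 `stdbool.h` trick does), but attributes such as `@safe` are part of the grammar, so there is nothing to alias:

```d
alias MyInt = int; // fine: a type can be given a new name

// Attributes are grammar, not symbols: there is no way to write
// something like `alias safe = @safe;` — each attribute must be
// spelled out as-is at every use site.
MyInt twice(MyInt x) @safe pure nothrow
{
    return 2 * x;
}

void main()
{
    assert(twice(21) == 42);
}
```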
Re: transposed with enforceNotJagged not throwing?
On Saturday, 22 September 2018 at 12:52:45 UTC, Steven Schveighoffer wrote: It was suggested when transposed was fixed to include opIndex, but never implemented. Maybe I'm too naive, but isn't it easy to implement it the same way it is done with transverse? That is, putting the "static if" part from the constructor there into the constructor of Transposed?
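For reference, `transposed` itself behaves as expected on a rectangular input; the missing piece discussed here is only the jaggedness check:

```d
import std.algorithm.comparison : equal;
import std.range : transposed;

void main()
{
    auto m = [[1, 2, 3],
              [4, 5, 6]];
    // Transposing swaps rows and columns: element [i][j] becomes [j][i].
    assert(equal!equal(m.transposed, [[1, 4], [2, 5], [3, 6]]));
}
```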
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 19:41:16 UTC, JN wrote: Some code will break, sure, but it's a mechanical change that should be possible to apply by some tool. Who will run this tool? Who's gonna merge the PRs created with this tool? Compatibility fixes would have been easy in the past in many cases - nevertheless, it needs someone to apply them. Which often did not happen in the past, unfortunately. Also, with proper dub package validation, it should be easy to assess how many packages actually break if any. Now that there's the tester by Neia, one will see whether this works in practice. Time will tell.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 19:09:24 UTC, bauss wrote: ... It's no rocket science, so it really doesn't do much in preventing, I think. Really it can be automated like: 1. Copy the code 2. Go to run.dlang.io 3. Paste the code 4. Compile it 5. Wait for the output 6. Copy the output 7. Paste the output into the input field 8. Submit ... Or, since the snippets are simple and some resemble C, the bot could run it itself, even in JavaScript, by parsing the body of the function and taking care of the return type, which in most cases is int or float:

---
var v = "int v(){ return 26 % 3 ? 13 / 3 : 42 % 5;}"; // Original snippet
var types = ["int", "float"]; // Type of return
var s = v.split(" ");
var t = s[0].toLowerCase();
s.splice(0, 1);
if (types.indexOf(t) > -1) {
    t = t.charAt(0).toUpperCase() + t.slice(1);
}
document.writeln(eval("parse" + t + "((function " + s.join(' ') + ")())"));
---

This is an example and will print 4, which is the result expected by the CAPTCHA. https://js.do/code/241565 S.G
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 19:59:42 UTC, Erik van Velzen wrote: Nobody in this thread so far has said they are programming in non-ASCII. I did. https://git.ikeran.org/dhasenan/muzikilo
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 16:56:10 UTC, Neia Neutuladh wrote: Walter was doing that thing that people in the US who only speak English tend to do: forgetting that other people speak other languages, and that people who speak English can learn other languages to work with people who don't speak English. He was saying it's inevitably a mistake to use non-ASCII characters in identifiers and that nobody does use them in practice. There's a more charitable view, and that's that even furriners usually use English identifiers. Nobody in this thread so far has said they are programming in non-ASCII. If there was a contingent of Japanese or Chinese users doing that, then surely they would speak up here or in Bugzilla to advocate for this feature?
Re: Webassembly TodoMVC
On Saturday, 22 September 2018 at 14:54:29 UTC, aberba wrote: Can the SPA code be released as a separate module for WebAssembly web app development? Currently the whole thing is not so developer-friendly; it was just the easiest way for me to get it up and running. Right now I am trying to ditch emscripten in favor of ldc's webassembly target. This will make it possible to publish it as a dub package (ldc only), as well as reduce some of the bloat. The downside is that ditching emscripten means I have to implement things like malloc and free myself. There is some obvious overlap between this and recent efforts by others (I remember D memcpy, and people trying to run it without libc, etc.), so I expect a situation in the future where all these efforts might be combined. Regardless, I don't need much from the C library, just enough to make (de)allocations and parts of the D standard library work. TL;DR I intend to publish it on dub, but it does take some more time. What do you think of the struct approach compared to a traditional jsx/virtual-dom?
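Implementing just enough of malloc for a libc-free WebAssembly target can start very small. The following is a hypothetical sketch (not the project's actual code) of a bump allocator, the simplest possible scheme: it hands out slices of a static buffer, and free is simply a no-op:

```d
// Hypothetical bump allocator sketch for a libc-free target.
__gshared ubyte[64 * 1024] heap; // static "heap" region
__gshared size_t next;           // offset of the next free byte

void* alloc(size_t n)
{
    if (next + n > heap.length)
        return null;             // out of memory
    auto p = cast(void*) &heap[next];
    next += n;                   // bump past the allocation; never reclaimed
    return p;
}

void main()
{
    auto a = alloc(16);
    auto b = alloc(16);
    assert(a !is null && b !is null && a !is b);
}
```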
Re: Then new forum moderation
On Saturday, 22 September 2018 at 19:09:24 UTC, bauss wrote: And on top of that maybe a flag system. This exists, but is only visible to certain users.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 19:04:41 UTC, Henrik wrote: "int safe = 0; // This code would break if "safe" was added as a keyword" I'm not buying this one. The compiler could issue a warning, something like "Warning: identifier 'safe' will become a keyword in a future version, please rename the identifier", then after a few months remove it. Some code will break, sure, but it's a mechanical change that should be possible to apply with some tool. Also, with proper dub package validation, it should be easy to assess how many packages actually break, if any. It's one thing to break vibe-d or diamondmvc; it's another thing to break helloworld-d that wasn't updated in seven years.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 19:09:24 UTC, bauss wrote: But what is there to stop a spammer from doing the same? Spammers are not going to exert that much effort in order to be able to spam 1 website, so that the moderators then change their algorithm and block them again. This is the key. Spammers win by targeting classes of websites running the same engine, or using generic bots that detect arbitrary forms, or employing humans to do it for them. To defeat them, you must pose a challenge specific to your website, so that people who are not interested in your website's topic will have difficulty solving, but not the other way around. I would suggest some real captcha software that are used by the majority of sites. No, those work MUCH worse than the above.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 18:56:28 UTC, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 17:19:41 UTC, SashaGreat wrote: I did it in my head. But how would a newbie be supposed to do that? For that challenge, the only non-obvious thing you need to know is the syntax for the modulus and ternary operators, which are present in many programming languages. You can do the calculation with a regular desktop calculator. If that is too much, you can run the code on run.dlang.io. In this case, you only need to know how to call a function and print its result. If that is also too much, you can ask for help in the #d IRC channel. The software suggests the above two options. In my opinion, it is a reasonable compromise, but I'm open to suggestions. (Note that at least once, a spammer managed to get through the CAPTCHA precisely by simply asking on #d, with a good samaritan providing the answer without inquiring further.) And by the way, after you do it once, why the need to do it every time? It is needed to prevent flooding. However, successfully solving the CAPTCHA a number of times across a period of time while logged in will whitelist your account. But what is there to stop a spammer from doing the same? I mean, it's fairly easy to grab the captcha code, run it through a D compiler and then post the result automatically. It's no rocket science, so it really doesn't do much in preventing, I think. Really it can be automated like: 1. Copy the code 2. Go to run.dlang.io 3. Paste the code 4. Compile it 5. Wait for the output 6. Copy the output 7. Paste the output into the input field 8. Submit It would take anyone familiar with basic HTTP macros less than 10 minutes to automate that process, even less using a programming language if it's mass automation. The forums for D might just not be popular enough for any "bots" to bother, I guess? I would suggest some real captcha software that is used by the majority of sites. And on top of that maybe a flag system.
People being able to flag posts and if a specific post is flagged by enough people then it'll be "hidden" until moderation takes action by either making it "visible" again due to invalid flagging or deleting it because it was a valid flag. This can help not only against spammers, bots etc. but also when there are trolls making troll posts etc.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 15:45:09 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent. Yeah, there's been a lot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh, I've forgotten how long I've been using D. Interesting article. "int safe = 0; // This code would break if "safe" was added as a keyword" My question here: why didn't D use a similar solution as C when dealing with these things? Look at the introduction of the bool datatype in C99. They created the compiler-reserved type "_Bool" and put "#define bool _Bool" in "stdbool.h". People wanting to use this new feature can include this header, and others can leave it be. No ugly "@" polluting the language on every line where it's used. Wouldn't a similar solution have been possible in D?
Re: Then new forum moderation
On Saturday, 22 September 2018 at 17:19:41 UTC, SashaGreat wrote: I did it in my head. But how would a newbie be supposed to do that? For that challenge, the only non-obvious thing you need to know is the syntax for the modulus and ternary operators, which are present in many programming languages. You can do the calculation with a regular desktop calculator. If that is too much, you can run the code on run.dlang.io. In this case, you only need to know how to call a function and print its result. If that is also too much, you can ask for help in the #d IRC channel. The software suggests the above two options. In my opinion, it is a reasonable compromise, but I'm open to suggestions. (Note that at least once, a spammer managed to get through the CAPTCHA precisely by simply asking on #d, with a good samaritan providing the answer without inquiring further.) And by the way, after you do it once, why the need to do it every time? It is needed to prevent flooding. However, successfully solving the CAPTCHA a number of times across a period of time while logged in will whitelist your account.
Re: Copy Constructor DIP and implementation
On Monday, 17 September 2018 at 23:07:22 UTC, Manu wrote: On Mon, 17 Sep 2018 at 13:55, 12345swordy via Digitalmars-d-announce wrote: On Tuesday, 11 September 2018 at 15:08:33 UTC, RazvanN wrote: > Hello everyone, > > I have finished writing the last details of the copy > constructor DIP[1] and also I have published the first > implementation [2]. As I wrongfully made a PR for the DIP > queue in the early stages of the development of the DIP, I > want to announce this way that the DIP is ready for the > draft review now. Those who are familiar with the compiler, > please take a look at the implementation and help me improve > it! > > Thanks, > RazvanN > > [1] https://github.com/dlang/DIPs/pull/129 > [2] https://github.com/dlang/dmd/pull/8688 The only thing I object to is adding yet another attribute to an already big bag of attributes. What's wrong with adding keywords? -Alexander I initially felt strongly against @implicit; it shouldn't be necessary, and we could migrate without it. But... assuming that @implicit should make an appearance anyway (it should! being able to mark implicit constructors will fill a massive usability hole in D!), then it doesn't hurt to use it eagerly here and avoid a breaking change at this time, since it will be the correct expression for the future regardless. If that were the case, then why not make it an actual keyword? A frequent complaint regarding D is that there are too many attributes, and this will undoubtedly add more to them. -Alexander
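For context, the copy constructor described by the DIP looks roughly like this. This is a hedged sketch of the form that eventually shipped (without the @implicit annotation); the struct and field names are illustrative:

```d
struct S
{
    int[] data;

    // Copy constructor: runs on `S b = a;` instead of the default field blit.
    this(ref return scope S rhs)
    {
        data = rhs.data.dup; // deep-copy so copies don't share storage
    }
}

void main()
{
    S a;
    a.data = [1, 2, 3];
    S b = a;                // invokes the copy constructor
    a.data[0] = 99;
    assert(b.data[0] == 1); // b has its own copy of the array
}
```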
Re: Then new forum moderation
On Saturday, 22 September 2018 at 17:13:18 UTC, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 16:48:35 UTC, SashaGreat wrote: PS: By the way the CAPTCHA is awful, look what they throw at us: If you have a better idea of a CAPTCHA that would be easy for D programmers but hard for spammers, please submit a pull request: https://github.com/CyberShadow/dcaptcha First, I didn't want to sound harsh, and by the way I sent the message without completing it. I did it in my head. But how would a newbie be supposed to do that? You may say to open the compiler and try it, or go with the online version, but isn't that too much? And by the way, after you do it once, why the need to do it every time? S.G.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 16:22:31 UTC, Russel Winder wrote: This is just so reminiscent of the Python 2 / Python 3 fiasco. Python 3 was clearly an improvement over Python 2, but the way in which the changes came to the community caused a violent split. Even after many years, there are those for whom Python 3 is anathema and not to be used. [...] Have you followed the discussion of Jonathan on @implicit? It seems that D2 is going in the opposite direction: more cruft is being added, for the sake of (dubious) compatibility.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 14:58:58 UTC, aberba wrote: I'm just seeing a ..."Your message has been saved, and will be posted after being **approved** by a moderator". This doesn't make sense. Your post was flagged by the spam filter. It was a false positive, which sometimes occurs with very short posts, as was yours. It was approved a few minutes after it was submitted. 1. What criteria decide if my comment deserves approval or not? Your post is not spam, an attack on other forum members, or egregiously inflammatory / off-topic. 2. Is there a full-time moderator available to ensure there's no bureaucracy/delay? There are several persons who receive moderation notices and can act on them. I generally receive email throughout the day, so you could at least count me as a full-time moderator. 3. Is it not a much better approach to delete when it's reported as inappropriate by the public? No. We did this until this year. This resulted in: - Lots of spam (despite the spam filter and CAPTCHA). Web forums attract a LOT of spambots (and humans paid to post spam). - Spam in mailing list users' inboxes, even after a moderator deleted it off the forum, since you can't unsend an email. The new method catches a lot of spam that would otherwise get through. You don't see it, but the moderators do.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 16:48:35 UTC, SashaGreat wrote: PS: By the way the CAPTCHA is awful, look what they throw to us: If you have a better idea of a CAPTCHA that would be easy for D programmers but hard for spammers, please submit a pull request: https://github.com/CyberShadow/dcaptcha
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 12:35:27 UTC, Steven Schveighoffer wrote: But aren't we arguing about the wrong thing here? D already accepts non-ASCII identifiers. Walter was doing that thing that people in the US who only speak English tend to do: forgetting that other people speak other languages, and that people who speak English can learn other languages to work with people who don't speak English. He was saying it's inevitably a mistake to use non-ASCII characters in identifiers and that nobody does use them in practice. Walter talking like that sounds like he'd like to remove support for non-ASCII identifiers from the language. I've gotten by without maintaining a set of personal patches on top of DMD so far, and I'd like it if I didn't have to start. What languages need an upgrade to unicode symbol names? In other words, what symbols aren't possible with the current support? Chinese and Japanese have gained about eleven thousand symbols since Unicode 2. Unicode 2 covers 25 writing systems, while Unicode 11 covers 146. Just updating to Unicode 3 would give us Cherokee, Ge'ez (multiple languages), Khmer (Cambodian), Mongolian, Burmese, Sinhala (Sri Lanka), Thaana (Maldivian), Canadian aboriginal syllabics, and Yi (Nuosu).
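As a concrete illustration of the current state: identifiers from scripts already covered by Unicode 2 (Greek, Cyrillic, CJK, ...) compile today; it is only the scripts added in later Unicode versions that the lexer rejects. A minimal example (the names are arbitrary, chosen just to exercise the lexer):

```d
// Greek letters are in Unicode 2, so DMD accepts them as identifiers today.
int διπλό(int α)
{
    return α * 2;
}

void main()
{
    assert(διπλό(21) == 42);
}
```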
Re: Then new forum moderation
On Saturday, 22 September 2018 at 14:58:58 UTC, aberba wrote: I'm just seeing a ..."Your message has been saved, and will be posted after being **approved** by a moderator". This doesn't make sense. ... This happens only for new topic? S.G.
Re: Then new forum moderation
On Saturday, 22 September 2018 at 16:45:15 UTC, SashaGreat wrote: On Saturday, 22 September 2018 at 14:58:58 UTC, aberba wrote: I'm just seeing a ..."Your message has been saved, and will be posted after being **approved** by a moderator". This doesn't make sense. ... This happens only for new topic? S.G. I'll not create a topic to check this behavior, but this message doesn't show up when replying inside a topic. PS: By the way the CAPTCHA is awful, look what they throw to us: int v() { return 26 % 3 ? 13 / 3 : 42 % 5; } I mean S.G.
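For what it's worth, that challenge is solvable in two steps once the operators are familiar: `26 % 3` is `2`, which is non-zero and therefore truthy, so the ternary picks `13 / 3`, which is `4` under integer division:

```d
int v() { return 26 % 3 ? 13 / 3 : 42 % 5; }

void main()
{
    assert(26 % 3 == 2); // condition is non-zero, so the true branch is taken
    assert(v() == 4);    // 13 / 3 == 4 with integer division
}
```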
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 12:24:49 UTC, Shachar Shemesh wrote: If memory serves me right, hieroglyphs actually represent consonants (vowels are implicit), and as such, are most definitely "characters". Egyptian hieroglyphics uses logographs (symbols representing whole words, which might be multiple syllables), letters, and determinants (which don't represent any word but disambiguate the surrounding words). Looking things up serves me better than memory, usually. The only language I can think of, off the top of my head, where words have distinct signs is sign language. Logographic writing systems. There is one logographic writing system still in common use, and it's the standard writing system for Chinese and Japanese. That's about 1.4 billion people. It was used in Korea until hangul became popularized. Unicode also aims to support writing systems that aren't used anymore. That means Mayan, cuneiform (several variants), Egyptian hieroglyphics and demotic script, several extinct variants on the Chinese writing system, and Luwian. Sign languages generally don't have writing systems. They're also not generally related to any ambient spoken languages (for instance, American Sign Language is derived from French Sign Language), so if you speak sign language and can write, you're bilingual. Anyway, without writing systems, sign languages are irrelevant to Unicode.
Re: Rather D1 then D2
This is just so reminiscent of the Python 2 / Python 3 fiasco. Python 3 was clearly an improvement over Python 2, but the way in which the changes came to the community caused a violent split. Even after many years, there are those for whom Python 3 is anathema and not to be used. The Python community has now moved on, and the Python 3 haters are just left to their own devices. If they want to come to the community they have to be accepting that Python 3 is the mainline and not try to undermine that. The Python community is the most diverse and welcoming community of all the programming communities I have ever been involved with. The 2/3 war is over, 3 is the one true way. Until 4 is released. On Sat, 2018-09-22 at 14:31 +, Jonathan Marler via Digitalmars-d wrote: > On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole > wrote: > > Then D isn't the right choice for you. > > I think it makes for a better community if we can be more > welcoming, helpful and gracious instead of responding to criticism > this way. This is someone who saw enough potential in D to end > up on the forums but had some gripes with it, after all who > doesn't? I'm glad he took the initiative to provide us with good > feedback, and he's not the first to take issue with the > inconsistent '@' attribute syntax. I'm sure everyone can agree > this inconsistency is less than ideal but that doesn't mean D > isn't right for them and we should respond to feedback like > this with thanks rather than dismissal. Someone did say: use D2 but without the cruft, and it looks and feels like D1. That seems like a constructive suggestion. Perhaps D2 can be improved by getting rid of the cruft and saying backward compatibility is seriously over-rated. -- Russel. === Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 08:52:32 UTC, Jonathan M Davis wrote: Unicode identifiers may make sense in a code base that is going to be used solely by a group of developers who speak a particular language that uses a number of non-ASCII characters (especially languages like Chinese or Japanese), but it has no business in any code that's intended for international use. It just causes problems. You have a problem when you need to share a codebase between two organizations using different languages. "Just use ASCII" is not the solution. "Use a language that most developers in both organizations can use" is. That's *usually* going to be English, but not always. For instance, a Belorussian company doing outsourcing work for a Russian company might reasonably write code in Russian. If you're writing for a global audience, as most open source code is, you're usually going to use the most widely spoken language.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 15:25:32 UTC, aberba wrote: On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent. Yeah, there's been a lot of discussion around it over the years, which is why I put this together about 4 years ago: https://wiki.dlang.org/Language_Designs_Explained#Function_attributes Gosh, I've forgotten how long I've been using D.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 14:31:20 UTC, Jonathan Marler wrote: On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal. That inconsistency is an issue for me. I wish there were a clear decision to make things consistent.
Then new forum moderation
I'm just seeing a ..."Your message has been saved, and will be posted after being **approved** by a moderator". This doesn't make sense. 1. What criteria decide if my comment deserves approval or not? 2. Is there a full-time moderator available to ensure there's no bureaucracy/delay? 3. Is it not a much better approach to delete when it's reported as inappropriate by the public? (White-listing vs blacklisting... which one permits a democratic and free-speech environment?)
Re: Webassembly TodoMVC
On Friday, 21 September 2018 at 14:01:30 UTC, Sebastiaan Koppe wrote: Hey guys, Following the D->emscripten->wasm toolchain from CyberShadow and Ace17 I created a proof of concept framework for creating single page webassembly applications using D's compile time features. This is a proof of concept to find out what is possible. At https://skoppe.github.io/d-wasm-todomvc-poc/ you can find a working demo and the repo can be found at https://github.com/skoppe/d-wasm-todomvc-poc Here is an example from the readme showing how to use it.

---
struct Button {
    mixin Node!"button";
    @prop innerText = "Click me!";
}

struct App {
    mixin Node!"div";
    @child Button button;
}

mixin Spa!App;
---

Can the SPA code be released as a separate module for WebAssembly web app development?
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 13:25:27 UTC, rikki cattermole wrote: Then D isn't the right choice for you. I think it makes for a better community if we can be more welcoming, helpful and gracious instead of responding to criticism this way. This is someone who saw enough potential in D to end up on the forums but had some gripes with it; after all, who doesn't? I'm glad he took the initiative to provide us with good feedback, and he's not the first to take issue with the inconsistent '@' attribute syntax. I'm sure everyone can agree this inconsistency is less than ideal, but that doesn't mean D isn't right for them, and we should respond to feedback like this with thanks rather than dismissal.
Why is CI not running dmd's unittests?
Hi, I ran the dmd unittests on my Windows machine today and one of the tests in filename.d asserted. The cause for this has already been noticed a few days ago by someone else [1] but not by CI. Is it well-known that the dmd unittests (at least for the Windows build) are not automatically run? If so, what is the reasoning behind it? [1] https://github.com/dlang/dmd/commit/7baa0e82802839940fb0620bad02e97f741d2c27#r30571264
Re: Rather D1 then D2
On 23/09/2018 1:22 AM, new wrote: On Saturday, 22 September 2018 at 10:53:25 UTC, bauss wrote: On Saturday, 22 September 2018 at 09:42:48 UTC, Jonathan Marler wrote: I'd be interested to hear/read about the features that some developers don't like with D2. I'm going to guess it has to do with all the attributes for functions, which you often have to remember: is it @attribute or is it just attribute, like is it @nogc or is it nogc, etc. It's one of the things that probably throws off a lot of new users of D, because they feel like they __have__ to know those, although they're often optional and you can live without them completely. They make the language seem bloated. The language is bloated. Trying to read the source of D2 gives you the feeling of getting eye cancer. So we decided if D at all, then it should be D1. Then D isn't the right choice for you.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 10:53:25 UTC, bauss wrote: On Saturday, 22 September 2018 at 09:42:48 UTC, Jonathan Marler wrote: I'd be interested to hear/read about the features that some developers don't like with D2. I'm going to guess it has to do with all the attributes for functions, which you often have to remember: is it @attribute or is it just attribute, like is it @nogc or is it nogc, etc. It's one of the things that probably throws off a lot of new users of D, because they feel like they __have__ to know those, although they're often optional and you can live without them completely. They make the language seem bloated. The language is bloated. Trying to read the source of D2 gives you the feeling of getting eye cancer. So we decided if D at all, then it should be D1.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 13:22:03 UTC, new wrote: On Saturday, 22 September 2018 at 10:53:25 UTC, bauss wrote: On Saturday, 22 September 2018 at 09:42:48 UTC, Jonathan Marler wrote: [...] I'm going to guess it has to do with all the attributes for functions, which you often have to remember: is it @attribute or is it just attribute, like is it @nogc or is it nogc, etc. It's one of the things that probably throws off a lot of new users of D, because they feel like they __have__ to know those, although they're often optional and you can live without them completely. They make the language seem bloated. The language is bloated. Trying to read the source of D2 gives you the feeling of getting eye cancer. So we decided if D at all, then it should be D1. Sorry, I meant D2-Phobos.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 08:48:37 UTC, Nemanja Borić wrote: On Friday, 21 September 2018 at 21:07:57 UTC, Jonathan M Davis wrote: [...] Sociomantic "maintains" (well, much more in the past than today) D1 compiler and you can find latest releases here (Ubuntu): https://bintray.com/sociomantic-tsunami/dlang/dmd1 (direct link https://bintray.com/sociomantic-tsunami/dlang/dmd1/v1.082.1#files) or you can compile https://github.com/dlang/dmd/tree/dmd-1.x yourself and hope that the compiler bug is fixed - we've certainly fixed a lot of them in the past years (decade?). [...] Thank you for valuable info. i'll check that out.
Re: Updating D beyond Unicode 2.0
On Saturday, September 22, 2018 6:37:09 AM MDT Steven Schveighoffer via Digitalmars-d wrote: > On 9/22/18 4:52 AM, Jonathan M Davis wrote: > >> I was laughing out loud when reading about composing "family" > >> emojis with zero-width joiners. If you told me that was a tech > >> parody, I'd have believed it. > > > > Honestly, I was horrified to find out that emojis were even in Unicode. > > It makes no sense whatsoever. Emojis are supposed to be sequences of > > characters that can be interpreted as images. Treating them like > > Unicode symbols is like treating entire words like Unicode symbols. > > It's just plain stupid and a clear sign that Unicode has gone > > completely off the rails (if it was ever on them). Unfortunately, it's > > the best tool that we have for the job. > But aren't some (many?) Chinese/Japanese characters representing whole > words? It's true that they're not characters in the sense that Roman characters are characters, but they're still part of the alphabets for those languages. Emojis are specifically formed from sequences of characters - e.g. :) is two characters which are already expressible on their own. They're meant to represent a smiley face, but it's a sequence of characters already. There's no need whatsoever to represent anything extra in Unicode. It's already enough of a disaster that there are multiple ways to represent the same character in Unicode without nonsense like emojis. It's stuff like this that really makes me wish that we could come up with a new standard that would replace Unicode, but that's likely a pipe dream at this point. - Jonathan M Davis
Re: transposed with enforceNotJagged not throwing?
On 9/22/18 4:10 AM, Vladimir Panteleev wrote: On Saturday, 22 September 2018 at 06:16:41 UTC, berni wrote: Is it a bug or is it me who's doing something wrong? Looking at the implementation, it looks like enforceNotJagged was just never implemented for transposed (only transversed). It was suggested when transposed was fixed to include opIndex, but never implemented. https://github.com/dlang/phobos/pull/5805#discussion_r148251621 -Steve
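Until transposed honors enforceNotJagged, one workaround is to validate the row lengths yourself before transposing. A sketch of that idea (my own workaround, not Phobos behavior):

```d
// Workaround sketch: enforce non-jaggedness manually, since transposed
// currently ignores TransverseOptions.enforceNotJagged.
import std.algorithm : all;
import std.exception : enforce;
import std.range : transposed;
import std.stdio : writeln;

void main()
{
    auto a = [[1, 2], [4, 5, 3]];
    enforce(a.all!(row => row.length == a[0].length),
            "transposed: ranges are jagged");
    a.transposed.writeln; // not reached for this jagged input
}
```

With the jagged input above this throws before transposing; replace it with `[[1, 2], [4, 5]]` and it prints the transpose as expected.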
Re: Updating D beyond Unicode 2.0
On 9/22/18 4:52 AM, Jonathan M Davis wrote: I was laughing out loud when reading about composing "family" emojis with zero-width joiners. If you told me that was a tech parody, I'd have believed it. Honestly, I was horrified to find out that emojis were even in Unicode. It makes no sense whatsoever. Emojis are supposed to be sequences of characters that can be interpreted as images. Treating them like Unicode symbols is like treating entire words like Unicode symbols. It's just plain stupid and a clear sign that Unicode has gone completely off the rails (if it was ever on them). Unfortunately, it's the best tool that we have for the job. But aren't some (many?) Chinese/Japanese characters representing whole words? -Steve
Re: Updating D beyond Unicode 2.0
On 9/21/18 9:08 PM, Neia Neutuladh wrote: On Friday, 21 September 2018 at 20:25:54 UTC, Walter Bright wrote: But identifiers? I haven't seen hardly any use of non-ascii identifiers in C, C++, or D. In fact, I've seen zero use of it outside of test cases. I don't see much point in expanding the support of it. If people use such identifiers, the result would most likely be annoyance rather than illumination when people who don't know that language have to work on the code. you *do* know that not every codebase has people working on it who only know English, right? If I took a software development job in China, I'd need to learn Chinese. I'd expect the codebase to be in Chinese. Because a Chinese company generally operates in Chinese, and they're likely to have a lot of employees who only speak Chinese. And no, you can't just transcribe Chinese into ASCII. Same for Spanish, Norwegian, German, Polish, Russian -- heck, it's almost easier to list out the languages you *don't* need non-ASCII characters for. Anyway, here's some more D code using non-ASCII identifiers, in case you need examples: https://git.ikeran.org/dhasenan/muzikilo But aren't we arguing about the wrong thing here? D already accepts non-ASCII identifiers. What languages need an upgrade to unicode symbol names? In other words, what symbols aren't possible with the current support? Or maybe I'm misunderstanding something. -Steve
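For reference, a tiny made-up example of what the current identifier rules already accept - the debate is about which Unicode version (and which extra scripts) those rules should track, not about whether non-ASCII works at all:

```d
// D already permits many non-ASCII letters in identifiers.
import std.stdio;

void main()
{
    int größe = 42;       // German umlaut
    string café = "café"; // accented Latin
    writeln(größe, " ", café);
}
```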
Re: Updating D beyond Unicode 2.0
On 22/09/18 15:13, Thomas Mader wrote: Would you suggest to remove such writing systems out of Unicode? What should a museum do which is in need of a software to somehow manage Egyptian hieroglyphs? If memory serves me right, hieroglyphs actually represent consonants (vowels are implicit), and as such, are most definitely "characters". The only language I can think of, off the top of my head, where words have distinct signs is sign language. It is a good question whether Unicode should include such a language (difficulty of representing motion in a font aside). Shachar
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 11:28:48 UTC, Jonathan M Davis wrote: Unicode is supposed to be a universal way of representing every character in every language. Emojis are not characters. They are sequences of characters that people use to represent images. I do not understand how an argument can even be made that they belong in Unicode. As I said, it's exactly the same as arguing that words should be represented in Unicode. Unfortunately, however, at least some of them are in there. :| At least since the incorporation of emojis it's not supposed to be a universal way of representing characters anymore. :-) Maybe there was a time when that was true, I don't know, but I think they see Unicode as a way to express all language symbols. And emoji is nothing other than a language where each symbol stands for an emotion/word/sentence. If Unicode only allows languages with characters which are used to form words, it's excluding languages which use other ways of expressing something. Would you suggest to remove such writing systems from Unicode? What should a museum do which is in need of software to somehow manage Egyptian hieroglyphs? Unicode was made to support all sorts of writing systems, and using multiple characters per word is just one system to form a writing system.
Re: Updating D beyond Unicode 2.0
On 22/09/18 14:28, Jonathan M Davis wrote: As I said, it's exactly the same as arguing that words should be represented in Unicode. Unfortunately, however, at least some of them are in there. :| - Jonathan M Davis To be fair to them, that word is part of the "Arabic Presentation Forms" section. The "Presentation Forms" sections are meant as backwards compatibility toward code points that existed before, and are not meant to be generated by Unicode-aware applications. Shachar
Re: Updating D beyond Unicode 2.0
On Saturday, September 22, 2018 4:51:47 AM MDT Thomas Mader via Digitalmars-d wrote: > On Saturday, 22 September 2018 at 10:24:48 UTC, Shachar Shemesh > > wrote: > > Thank Allah that someone said it before I had to. I could not > > agree more. Encoding whole words as single Unicode code points > > makes no sense. > > The goal of Unicode is to support diversity, if you argue against > that you don't need Unicode at all. > What you are saying is basically that you would remove Chinese > too. > > Emojis are not my world either but it is an expression system / > language. Unicode is supposed to be a universal way of representing every character in every language. Emojis are not characters. They are sequences of characters that people use to represent images. I do not understand how an argument can even be made that they belong in Unicode. As I said, it's exactly the same as arguing that words should be represented in Unicode. Unfortunately, however, at least some of them are in there. :| - Jonathan M Davis
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 10:24:48 UTC, Shachar Shemesh wrote: Thank Allah that someone said it before I had to. I could not agree more. Encoding whole words as single Unicode code points makes no sense. The goal of Unicode is to support diversity, if you argue against that you don't need Unicode at all. What you are saying is basically that you would remove Chinese too. Emojis are not my world either but it is an expression system / language.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 09:42:48 UTC, Jonathan Marler wrote: I'd be interested to hear/read about the features that some developers don't like with D2. I'm going to guess it has to do with all the attributes for functions, where you often have to remember whether it's @attribute or just attribute - is it @nogc or nogc, etc. It's one of the things that probably throws off a lot of new users of D, because they feel like they __have__ to know those, although they're often optional and you can live without them completely. They make the language seem bloated.
Re: Webassembly TodoMVC
On Friday, 21 September 2018 at 14:01:30 UTC, Sebastiaan Koppe wrote: Hey guys, Following the D->emscripten->wasm toolchain from CyberShadow and Ace17 I created a proof of concept framework for creating single page webassembly applications using D's compile time features. This is a proof of concept to find out what is possible. At https://skoppe.github.io/d-wasm-todomvc-poc/ you can find a working demo and the repo can be found at https://github.com/skoppe/d-wasm-todomvc-poc Here is an example from the readme showing how to use it. --- struct Button { mixin Node!"button"; @prop innerText = "Click me!"; } struct App { mixin Node!"div"; @child Button button; } mixin Spa!App; --- Very cool! Thanks!
Re: Copy Constructor DIP and implementation
On Monday, 17 September 2018 at 19:10:27 UTC, Jonathan M Davis wrote: On Monday, September 17, 2018 8:27:16 AM MDT Meta via Digitalmars-d-announce wrote: [...] Honestly, I don't think that using a pragma instead of an attribute fixes much, and it goes against the idea of what pragmas are supposed to be for in that pragmas are supposed to be compiler-specific, not really part of the language. [...] I totally agree with this.
Re: Updating D beyond Unicode 2.0
On 22/09/18 11:52, Jonathan M Davis wrote: Honestly, I was horrified to find out that emojis were even in Unicode. It makes no sense whatsoever. Emojis are supposed to be sequences of characters that can be interpreted as images. Treating them like Unicode symbols is like treating entire words like Unicode symbols. It's just plain stupid and a clear sign that Unicode has gone completely off the rails (if it was ever on them). Unfortunately, it's the best tool that we have for the job. - Jonathan M Davis Thank Allah that someone said it before I had to. I could not agree more. Encoding whole words as single Unicode code points makes no sense. U+FDF2 Shachar
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 01:08:26 UTC, Neia Neutuladh wrote: ...you *do* know that not every codebase has people working on it who only know English, right? This topic boils down to diversity vs. productivity, and supporting diversity in this case is questionable. I work in a German-speaking company and we have no developers who are not speaking German for now. In fact all are native speakers. Still we write our code, comments and commit messages in English. Even at university you learn that you should use English to code. The reasoning is simple. You never know who will work on your code in the future. If a company writes code in Chinese, they will have a hard time expanding the development of their codebase, even though Chinese is spoken by that many people. So even though you could use all sorts of characters, in a productive environment you'd better choose not to do so. You might end up shooting yourself in the foot in the long run. Diversity is important in other areas but I don't see much advantage here. At least for now, because the spoken languages of today don't differ tremendously in what they are capable of expressing. This is also true for today's programming languages. Most of them are just different syntax for the very same ideas and concepts. That's not very helpful to bring people together and advance. My understanding is that even life, with its great diversity, just has one language (DNA) to define it.
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 08:48:37 UTC, Nemanja Borić wrote: On Friday, 21 September 2018 at 21:07:57 UTC, Jonathan M Davis wrote: [...] Sociomantic "maintains" (well, much more in the past than today) D1 compiler and you can find latest releases here (Ubuntu): https://bintray.com/sociomantic-tsunami/dlang/dmd1 (direct link https://bintray.com/sociomantic-tsunami/dlang/dmd1/v1.082.1#files) or you can compile https://github.com/dlang/dmd/tree/dmd-1.x yourself and hope that the compiler bug is fixed - we've certainly fixed a lot of them in the past years (decade?). [...] I'd be interested to hear/read about the features that some developers don't like with D2. Maybe you can point me to places where this has been shared in the past and/or reply with your own perspective? Others feel free to chime in as well. I should make it clear this is just a request to gather data, I'm not looking to start a debate, just wanting to see what can be learned from this. Thanks.
Re: Updating D beyond Unicode 2.0
On Friday, September 21, 2018 10:54:59 PM MDT Joakim via Digitalmars-d wrote: > I'm torn. I completely agree with Adam and others that people > should be able to use any language they want. But the Unicode > spec is such a tire fire that I'm leery of extending support for > it. Unicode identifiers may make sense in a code base that is going to be used solely by a group of developers who speak a particular language that uses a number of non-ASCII characters (especially languages like Chinese or Japanese), but it has no business in any code that's intended for international use. It just causes problems. At best, a particular, regional keyboard may be able to handle a particular symbol, but most other keyboards won't be able to. So, using that symbol causes problems for all of the developers from other parts of the world even if those developers also have Unicode symbols in their native languages. > Someone linked this Swift chapter on Unicode handling in an > earlier forum thread, read the section on emoji in particular: > > https://oleb.net/blog/2017/11/swift-4-strings/ > > I was laughing out loud when reading about composing "family" > emojis with zero-width joiners. If you told me that was a tech > parody, I'd have believed it. Honestly, I was horrified to find out that emojis were even in Unicode. It makes no sense whatsoever. Emojis are supposed to be sequences of characters that can be interpreted as images. Treating them like Unicode symbols is like treating entire words like Unicode symbols. It's just plain stupid and a clear sign that Unicode has gone completely off the rails (if it was ever on them). Unfortunately, it's the best tool that we have for the job. - Jonathan M Davis
Re: Rather D1 then D2
On Friday, 21 September 2018 at 21:07:57 UTC, Jonathan M Davis wrote: The sad truth is that if you really do want to continue to use D1, you're going to have to maintain it yourself or find a group of people willing to do so; Sociomantic "maintains" (well, much more in the past than today) the D1 compiler and you can find the latest releases here (Ubuntu): https://bintray.com/sociomantic-tsunami/dlang/dmd1 (direct link https://bintray.com/sociomantic-tsunami/dlang/dmd1/v1.082.1#files) or you can compile https://github.com/dlang/dmd/tree/dmd-1.x yourself and hope that the compiler bug is fixed - we've certainly fixed a lot of them in the past years (decade?). But - Sociomantic doesn't officially maintain the D1 language (we're in the process of moving our entire codebase to D2 - which is a long process, but we're getting there - check out some of the DConf videos where this was discussed) - it is just about fixing bugs where the ROI is good enough to justify fixing (which is in many cases just backporting D2 compiler fixes, which is also not trivial), so don't expect any commitment, rather - expect this commitment to stop. That being said - I want to point out something that was already mentioned here - it is possible to use D2 subsets that are "D1 minded" (for example, Sociomantic is not the best codebase to look for modern D code), and all our D1 projects now compile as D2 code after simple machine translation (take a look into https://github.com/sociomantic-tsunami/ocean/ and run `make d2conv` and check the output - `make DVER=2 ` will compile the generated D2 code and run unittests - don't forget to init git submodules and to install dependencies - https://bintray.com/sociomantic-tsunami/dlang/d1to2fix - or just run in the sociomantictsunami/dlangdevel docker image).
So it is possible to use the D2 language and compiler and avoid all the features that you don't like, at least to a reasonable degree, and as a bonus you still get to cherry-pick the D2 features you do like (and there are some even for a D1-minded person).
Re: transposed with enforceNotJagged not throwing?
On Saturday, 22 September 2018 at 06:16:41 UTC, berni wrote: Is it a bug or is it me who's doing something wrong? Looking at the implementation, it looks like enforceNotJagged was just never implemented for transposed (only transversed).
Re: Rather D1 then D2
On Saturday, 22 September 2018 at 04:45:47 UTC, Vladimir Panteleev wrote: On Friday, 21 September 2018 at 21:17:52 UTC, new wrote: Thank you for your answer. too bad - have to think about it. You might be interested in the Volt language, which follows in D1's footsteps: https://github.com/VoltLang/Volta I believe it was created by some D users with the same opinion on D1/D2. Syntax is also very much like D1. Thank you for the pointer, I'll definitely look at this more closely.
Re: Walter's Guide to Translating Code From One Language to Another
On Friday, 21 September 2018 at 06:24:14 UTC, Peter Alexander wrote: On Friday, 21 September 2018 at 06:00:33 UTC, Walter Bright wrote: I've learned this the hard way, and I've had to learn it several times because I am a slow learner. I've posted this before, and repeat it because it bears repeating. I find this is a great procedure for any sort of large refactoring -- minimal changes at each step and ensure tests are passing after every change. Also, use a git commit for each logical change. If you discover a change that should have been put in a previous commit, use rebase --interactive to put it in the right commit (assuming, of course, that the branch you're working on is purely local). Only when all the changes have been made can you decide to squash or not, reorder them, or push them partially. TL;DR: git can help organize the refactoring, not serve solely as a recording device.
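As a concrete sketch of that workflow (a hypothetical throwaway repo, not anyone's actual project), `git commit --fixup` plus `git rebase --autosquash` lets you move a late fix into the right commit without hand-editing the todo list; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo non-interactively:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Two logical commits on a local-only branch.
echo a > file.txt
git add file.txt && git commit -qm "step 1: add file"
echo b >> file.txt
git add file.txt && git commit -qm "step 2: extend file"

# A change that really belongs in "step 1": record it as a fixup...
echo fix > other.txt
git add other.txt && git commit -q --fixup=HEAD~1

# ...then fold it into the right commit non-interactively.
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash --root
git log --oneline | wc -l   # two logical commits remain
```

After the rebase, `other.txt` is part of the "step 1" commit, so the history reads as two clean logical changes rather than two changes plus a stray fix.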
transposed with enforceNotJagged not throwing?
I expect this small program to throw an Exception: import std.stdio; import std.range; void main() { auto a = [[1,2], [4,5,3]]; a.transposed!(TransverseOptions.enforceNotJagged).writeln; } But it just outputs: [[1, 4], [2, 5], [3]] Is it a bug or is it me who's doing something wrong?
Re: Converting a character to upper case in string
On Friday, 21 September 2018 at 12:15:52 UTC, NX wrote: How can I properly convert a character, say, the first one, to upper case in a Unicode-correct manner? That would depend on how you'd define correctness. If your application needs to support "all" languages, then (depending how you interpret it) the task may not be meaningful, as some languages don't have the notion of "upper-case" or even "character" (as an individual glyph). Some languages do have those notions, but they serve a specific purpose that doesn't align with the one in English (e.g. Lojban). At which code level should I be working? Grapheme? Or maybe code point is sufficient? Using graphemes is necessary if you need to support e.g. combining marks (e.g. ̏◌ + S = ̏S).
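A code-point-level sketch of this (a hypothetical helper of my own, sufficient for many European languages but not for graphemes built from combining marks - for those, look at std.uni.byGrapheme):

```d
// Capitalize the first code point of a UTF-8 string.
// Note: this works on code points, not graphemes, so a base letter plus
// combining mark is not treated as a single unit.
import std.conv : to;
import std.uni : toUpper;
import std.utf : decodeFront;

string capitalizeFirst(string s)
{
    if (s.length == 0)
        return s;
    auto rest = s;
    dchar first = rest.decodeFront(); // pops the first code point off rest
    return toUpper(first).to!string ~ rest;
}

void main()
{
    assert(capitalizeFirst("épée") == "Épée");
    assert(capitalizeFirst("hello") == "Hello");
}
```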
Re: SerialPort
On Thursday, 20 September 2018 at 10:51:52 UTC, braboar wrote: Can anybody give me a guide of using serial port? Here's a program I wrote (after lots of trial-and-error) to control my monitor through a USB serial-port adapter: https://github.com/CyberShadow/misc/blob/master/pq321q.d Hope this helps.
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 04:54:59 UTC, Joakim wrote: To wit, Windows linker error with Unicode symbol: https://github.com/ldc-developers/ldc/pull/2850#issuecomment-422968161 That's a good argument for sticking to ASCII for name mangling. I'm torn. I completely agree with Adam and others that people should be able to use any language they want. But the Unicode spec is such a tire fire that I'm leery of extending support for it. The compiler doesn't have to do much with Unicode processing, fortunately.