Re: My statements related to terminating my SAoC relationship
On Monday, 15 October 2018 at 21:26:52 UTC, solidstate1991 wrote: I have done two mistakes: I underestimated the scope of the project and overestimated my capabilities. This caused a chain reaction, which in turn made the first milestone unreachable. You've done the right thing by facing the situation and addressing it. Forget your coding commitments for now and get yourself sorted out. Hungary has certainly taken a horrible turn for the worse; I hope it does not affect you too badly.
Re: Forums intermittently going down?
On Tuesday, 25 September 2018 at 21:20:29 UTC, Vladimir Panteleev wrote: Sometimes the database (SQLite) SQLite was initially designed around a single local process with one connection. You should get much better results with Postgres, though of course it has some maintenance overhead (mainly installation).
Re: Updating D beyond Unicode 2.0
On Saturday, 22 September 2018 at 08:52:32 UTC, Jonathan M Davis wrote: Honestly, I was horrified to find out that emojis were even in Unicode. It makes no sense whatsoever. Emojis are supposed to be sequences of characters that can be interpreted as images. Treating them like Unicode symbols is like treating entire words like Unicode symbols. It's just plain stupid and a clear sign that Unicode has gone completely off the rails (if it was ever on them). Unfortunately, it's the best tool that we have for the job. According to the Unicode website, http://unicode.org/standard/WhatIsUnicode.html, """ Support of Unicode forms the foundation for the representation of languages and symbols in all major operating systems, search engines, browsers, laptops, and smart phones—plus the Internet and World Wide Web (URLs, HTML, XML, CSS, JSON, etc.)""" Note that Unicode supports symbols, not just characters. The smiley face symbol predates its ':-)' usage in ASCII text, https://www.smithsonianmag.com/arts-culture/who-really-invented-the-smiley-face-2058483/. It's fundamentally a symbol, not a sequence of characters, so it is not unreasonable for it to be encoded with a Unicode number. I do agree, of course, that it would seem bizarre to use an emoji as a D identifier. The early history of computer science is completely dominated by cultures that use Latin-script characters, and hence, quite reasonably, text encoding and its automated visual representation by computing devices is dominated by the requirements of Latin-script languages. However, the world keeps turning and, despite DT's best efforts, China et al. look set to become dominant. Even if not China, the chances are that eventually a non-Latin-script language will become very important. Parochial views like "all open source code should be in ASCII" will then look silly. Until that time, though, D developers have to spend their time where it can be most useful. Hence the decision of whether or not to apply Neia's patch / ideas mainly depends on how much the downstream effort will be (debuggers etc., as Walter pointed out) and how much the gain is. As Unicode 2.0 is already supported, I would guess that the vast majority of people with access to a computer can already write identifiers in D that are rich enough for them. As Adam said though, it would be a good idea to at least ask!
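For context, a minimal sketch of what the existing identifier support already allows (not from the thread; the exact accepted character set follows the compiler's identifier tables, but I believe this compiles with current dmd):

    import std.stdio : writeln;

    void main()
    {
        int число = 42;   // Cyrillic identifier, accepted by dmd today
        writeln(число);
    }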
Re: D is dead
On Thursday, 23 August 2018 at 10:41:03 UTC, Jonathan M Davis wrote: D does have a problem in general of having a lot of great features that work really well in isolation but don't necessarily work well in concert (and it doesn't help that some features have really never been properly finished). And frequently, the answer that folks go with is to simply not use sections of the language (e.g. it's _very_ common for folks to just give up on a lot of attributes like pure, nothrow, or @safe). A number of the issues do get worked out over time, but not all of them do, and sometimes the solutions cause a lot of problems. For instance, DIP 1000 may end up being great for @safe and will help solve certain issues, but it results in yet another attribute that has to be pasted all over your code and which most folks simply won't use. So, it's helping to fix a real problem, but is it making things better overall? I don't know. And while I definitely think that D is easier to understand than C++ (in spite of the increase in D's complexity over time), it's also very much true that D continues to get more and more complicated as we add more stuff. Generally, each solution is solving a real problem, and at least some of the time, the solution actually interacts quite well with the rest of the language, but it all adds up. And honestly, I don't think that there's a real solution to that. Languages pretty much always get more complicated over time, and unless we're willing to get rid of more stuff, it's guaranteed to just become more complicated over time rather than less. D definitely improves over time, but certain classes of issues just never seem to be fixed for some reason (e.g. the issue with RAII and destructors really should have been fixed ages ago), and some of the major design decisions don't get fully sorted out for years, because they're not a high enough priority (e.g. shared). I don't really agree that D is in much danger of dying at this point, but I completely agree that we as a group are not doing a good enough job getting some of the key things done (much of which comes down to an issue of manpower, though some of it is also likely due to organizational issues). - Jonathan M Davis This is a great summary of the situation, thanks for such a good and honest appraisal. From a technical POV I'd say it could replace the whole thread. But there is a social/psychological aspect to the whole thing. Shachar's comment is obviously the cry of pain of someone whose back has just been broken by a last straw. He is being told, 'the straw you are complaining about is nothing'. There is a class of developers who expect things to Just Work(TM), especially if they are told that it Just Works. Each time that they discover some combination of features that doesn't work they have to refactor their code and remember not to try that again. Ultimately the developer painfully learns the things that they should not attempt to use, or they give up before the process is complete and leave. I expect the pain caused by this is much more acute in a commercial environment where the pressure is on. Long-term D developers have learnt not to bother with certain features or combinations of features and forget all the pain they went through to get that knowledge. They are the ones saying 'come on in, the water's lovely'. For anyone considering using D for a commercial project the situation you describe is cause for concern. The issues can be fixed but it will take some brave and ruthless decisions, I suspect.
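To make the attribute-proliferation point concrete, a minimal sketch (purely illustrative, not from the thread) of the kind of fully-attributed declaration being described:

    @safe pure nothrow @nogc
    int sum(scope const(int)[] xs)  // `scope` is the DIP 1000 attribute
    {
        int total = 0;
        foreach (x; xs)
            total += x;
        return total;
    }

Every attribute there solves a real problem, and together they are exactly the boilerplate the quote says most folks give up on.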
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 23 August 2018 at 09:51:43 UTC, rikki cattermole wrote: Good luck getting W&A to agree to it, especially when there is yet another "critical D opportunity" on the table ;) No. They have power for as long as we the community say that they do. We are at the point where they need a check and balance to keep everybody going smoothly. And I do hope that they listen to us before somebody decides its forkin' time. No fork of D can be successful; it won't have the manpower, skills or willpower to draw on. Even with W and A the project is already short of those. 'Threatening' W and A with a fork is an empty threat that just p***es them off. Bad move on your part.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Tuesday, 31 July 2018 at 22:55:08 UTC, Laeeth Isharc wrote: Dpp doesn't work with STL yet. I asked Atila how long to #include vector and he thought maybe two months of full-time work. That's not out of the question in time, but we have too much else to do right now. I'm not sure if recent mangling improvements help and how much that changes things. But dpp keeps improving, as does extern(C++), and probably one way and another it will work for quite a lot. Calypso makes C++ classes work as both value and reference types. I don't know the limit of what's possible without such changes - seems like C++ mangling is improving by leaps and bounds but I don't know when it will be dependable for templates. Yes OK, thanks. It's not that relevant what Andrei or Walter might think because it's a community-led project and we will make progress if somebody decides to spend their time working on it, or a company lends a resource for the same purpose. I'm sure they are all in favour of greater C++ interoperability, but I don't think the binding constraint is will from the top, but rather people willing and able to do the work. I think the DIP system has greatly improved the situation, but for anyone thinking of embarking on a lot of work for something like e.g. the GC, you do need to feel that there will be a good chance of it being adopted - otherwise all that work could go to waste. And if one wants to see it go faster then one can logically find a way to help with the work or contribute financially. I don't think anything else will make a difference. Agreed entirely. Same thing with Calypso. It's not ready yet to be integrated in a production compiler so it's an academic question as to the leadership's view about it. Where I'm coming from is that writing and maintaining something as large and complex as Calypso requires a whole heap of motivation and of encouragement from the sidelines - and especially from Walter and/or Andrei. If someone starts to feel that the backing is not there then it's very, very hard to maintain motivation, particularly on infrastructure-related code that, if not integrated by Walter, will always be hard for people to use and therefore not be widely adopted. To be fair to Walter though, this is a really intractable problem for him. He could adopt something like Calypso, and then find the original maintainer loses interest. That would leave Walter either needing to maintain someone else's complex code, or trying to extricate himself from code having already integrated it. Also, there is no guarantee, in this particular case, that as C++ evolves it will still be possible to use Calypso's strategy. Of course there are other very good reasons for why adopting it is problematic. Still, it leaves the developer struggling, I expect, to maintain motivation. Considering the above, knowing the general direction that Walter/Andrei want to take D would be a great help in deciding which larger projects are worth undertaking. It seems to me, anyway (big caveat).
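To make the current state concrete, a minimal sketch of the dpp route (assumes a working d++ binary from the dpp project, and a plain C header, since STL is exactly the part that doesn't work yet):

    // file: hello.dpp, built with: d++ hello.dpp
    #include <stdio.h>

    void main()
    {
        printf("C declarations, D code\n");
    }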
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 14:45:19 UTC, Paolo Invernizzi wrote: I forgot the link... here it is: https://www.quantamagazine.org/to-make-sense-of-the-present-brains-may-predict-the-future-20180710 An interesting article. I found that Dennett's Consciousness Explained, which is presumably debunked old hat by now, is full of interesting experiments and speculation about how we model things in our mind and how our perceptions feed into that. It's a long time since I read it but if I remember correctly he shows how we seem to have a kind of mental theatre which has an expectation of what will come next from the senses, leading to interesting mistakes in perception. It's a useful model of how the mind works. That website often carries good articles about new maths as well. Me and my colleague are pretty different, in the approach to that kind of stuff... Maybe I'll post on the Forum a 'Request for D Advocacy', a-la PostgreSQL, so the community can try to address some of his concerns about modern D, and lower his discomfort! :-P If you can explain to me what is the _direction_ of D in terms of interfacing with large C++ libraries it would be very much appreciated! I'd love to be using D for some of my projects but I have a perception that using e.g. VTK is still a difficult thing to do from D. Is that still true? What is the long-term plan for D? Is it extern(C++), or a binding technology? Is there any interest in Calypso from the upper echelons? I want to know where D is trying to go, not just where it is now. I want to know if anyone has got their heart in it. My CV says my main languages are Java, Python and D. That last one is mainly wishful thinking at the moment. I wish it wasn't! Make me believe, Paolo!
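For reference, a minimal sketch of the extern(C++) route being asked about (the function here is made up for illustration; the point is that D can link directly against C++-mangled free functions):

    // assumes the C++ side provides: double norm(double x, double y);
    extern(C++) double norm(double x, double y);

    void main()
    {
        auto len = norm(3.0, 4.0);  // direct call via C++ name mangling
    }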
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 21:27:12 UTC, bpr wrote: I hear you. You're looking (roughly) for a better Java/Go/Scala, and I'm looking for a better C/C++/Rust, at least for what I work on now. I don't think D can be both right now, and that the language which can satisfy both of us doesn't exist yet, though D is close. Yes, this. In the light of D's experience, is it even possible to have a language that satisfies both?
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: On Saturday, 28 July 2018 at 15:36:43 UTC, Abdulhaq wrote: I think that I no longer fall into the category of developer that D is after. D is targeting pedal-to-the-metal requirements, and I don't need that. TBH I think 99% of developers don't need it. I'm 99% sure you just made that number up ;-) Sure, I plucked it out of thin air. But I do think of the software development world as an inverted pyramid in terms of performance demands and headcount. At the bottom of my inverted pyramid I have Linux and Windows. This code needs to be as performant as possible and as bug-free as possible. C/C++/D shine at this stuff. However, I number those particular developers in the thousands. Then we have driver writers. Performance is important here, but as a user I feel that I wish they would concentrate on the 'bug-free' part a bit more. Especially those cowboys who develop printer and bluetooth drivers. Of course, according to them it's the hardware that stinks. These guys and gals number in the tens of thousands. Yes, I made that up. Then, a layer up, we have libc developers and co. Then platform developers. Unity and Lumberyard for games. Apache. I think the great bulk of developers, though, sit at the application development layer. They are pumping out great swathes of Java etc. Users of Spring and dozens of other frameworks. C++ is usually the wrong choice for this type of work, but can be adopted in a mistaken bid for performance. And how many are churning out all that JavaScript and PHP code? Hence I think that the number of developers who really need top performance is much smaller than the number who don't. For you, perhaps. I currently work mostly at a pretty low level and I'm pretty sure it's not just self-delusion that causes us to use C++ at that low level. Perhaps you've noticed the rise of Rust lately? Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. I mostly prefer higher-level languages with GCs, but nothing in life is free, and GC has significant costs. If I had to write CFD code, and I'd love to have a crack, then I'd really be wanting to use D for its expressiveness and performance. But because of the domain that I do work in, I feel that I am no longer in D's target demographic. I remember the subject of write barriers coming up in order (I think?) to improve the GC. Around that time Walter said he would not change D in any way that would reduce performance by even 1%. Hence I feel that D is ruling itself out of the application developer market. That's totally cool with me, but it took me a long time to realise that this was the case and that therefore it was less promising to me than it had seemed before.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 12:43:55 UTC, Laeeth Isharc wrote: It's tough when dealing with genuine Knightian uncertainty, or even more radical versions. When one doesn't even know the structure of the problem then maximising expected utility doesn't work. One can look at capacities - Choquet and the like - but then it's harder to say something useful about what you should do. Sounds interesting, I'll look into it. But it's a loop and one never takes a final decision to master D. Also habits, routines and structures _do_ shape perception. In truth I avoid discussions that are really just arguing about definitions of words, but you made a couple of sweeping bumper-stickery comments That's entertaining. I've not been accused of that before! Bear in mind also I tend to write on my phone. I think I was just in need of a decent conversation. I didn't mean it in an accusatory manner :-). TBH I read those comments as coming from a D advocate who was in a motivational mood. They triggered a debate in me that has been wanting to come out, but I rarely contribute to forums these days. Yes, I read Kahneman et al.'s papers for the first time in '92 in the university library. I speed-read his book, and I thought it was a bad book. I work with a specialist in making decisions under uncertainty - she was the only person able to articulate to George Soros how he made money, because he certainly couldn't, and she is mentioned in the preface to the revised version of Alchemy. She has the same view as me - behavioural finance is largely a dead end. One learns much more by going straight to the neuroeconomics and incorporating also the work of Dr Iain McGilchrist. Kahneman makes a mistake in his choice of dimension. There's analytic and intuitive/gestalt, and in my experience people making high-stakes decisions are much less purely analytical than a believer in the popular Kahneman might suggest. What I said about prediction being overrated isn't controversial amongst a good number of the best traders and business people in finance. You might read Nassim Taleb also. You're way ahead of me here, obviously. I didn't read any Taleb until he made an appearance at the local bookshop. It was Black Swan and it didn't say anything that hadn't independently occurred to me already. However, for some reason it seemed to be a revelation to a lot of people. Well, it's a pity the D Android ecosystem isn't yet mature. Still, I remain in awe of the stubborn accomplishment of the man (with help) who got LDC to run on Android. It's not that bad calling D from Java. Some day I will see if I can help automate that - Kai started working on it already, I think. D as a programming language has numerous benefits over Java, but trying to analyse why I would nevertheless choose Kotlin/Java for Android development:
* The Android work I do largely does not need low-level performance. The important thinking that is done is the user interface, how communication with the servers should look for good performance, caching etc. Designing good algorithms.
* Having done the above, I want a low-friction way of getting that into code. That requires a decently expressive language with a quality build system that can churn out an APK without me having to think too hard about it. Kotlin/JDK8 are good enough and Android Studio helps a lot.
* Given the above, choosing D to implement some of the code would just be a cognitive and time overhead. It's no reflection on D in any way; it's just that all the tooling is for Java and the platform API/ABI is totally designed to host Java.
* "The man who (with help) got LDC to run on Android": the team, with the best will in the world, is too small to answer all the questions that the world of pain known as Android can throw up. Why doesn't this build for me? Gradle is killing me... Dub doesn't seem to be working right after the upgrade to X.Y... it works on my LG but not my Samsung... I've upgraded this but now that doesn't work anymore...
* Will there be a functioning team in 5 years' time? Will they support older versions of Android? Can I develop on Windows? Or Linux? Why not? Etc., etc.
Since you already know D you need to answer a different question: what's the chance the compiler will die on the relevant horizon, and how bad will it be for me if that happens? Personally I'm not worried. If D should disappear in a few years, it wouldn't be the end of the world to port things. I just don't think that's very likely. I answered the Android question already; as for engineering/scientific work (I design/develop engineering frameworks/tools for wing designers), Python has bindings to numpy, Qt, CAD kernels, data visualisation tools. Python is fast enough to string those things together and run the overarching algorithms, GUIs, launch trade studies, scipy optimisations. It has even more expressiv
[OT] Re: C's Biggest Mistake on Hacker News
On Friday, 27 July 2018 at 23:42:47 UTC, Laeeth Isharc wrote: For me, I think that managing money is about choosing to expose your capital intelligently to the market, balancing the risk of loss against the prospective gain and considering this in a portfolio sense. Prediction doesn't really come into that. I think this apparent difference of opinion is down to different definitions of the word prediction. When I say prediction I mean the assessment of what the possible futures for a scenario are and how likely each one is. It can be conscious or unconscious. I think my understanding of the word is not an uncommon one. By my definition, when you balance the risk of loss (i.e. predict how likely you are to lose money) against the prospective gain (i.e. multiply the probability of each possible outcome by its reward and sum the total to get a prospective value), then you are, for me, by definition making predictions. It's not the prediction that matters but what you do. It's habits, routines, perception, adaptation and actions that matter. I agree they are integral to our behaviour; habits and routines, though, do not involve the element of prediction. Perceptions come before, and actions take place after, the decision process (conscious or not), and so don't factor into this discussion for me. In truth I avoid discussions that are really just arguing about definitions of words, but you made a couple of sweeping bumper-stickery comments that trying to predict things was usually a waste of time and that as an alternative we should 'be the change...'. I wholeheartedly agree we should 'be the change...' but it's not an alternative to making predictions; it goes hand in hand with it. I'm sure you've read Kahneman's Thinking, Fast and Slow. You made a generalisation that applies to the 'fast' part. I'm saying your universal rule is wrong because of the 'slow' part. I learnt D many years ago, just after Andrei's book came out. I love it but it's on the shelf for me at the moment. I rarely get time for side projects these days, but when I do I want them to run on Android with easy access to all the APIs and without too much ado in the build setup. They must continue to work and be supported with future versions of Android. At work, on Windows, JDK8/JavaFX/Eclipse/maven and python/numpy/Qt/OpenCascade/VTK hit the spot. Each project I start I give some very hard thought about which development environment I'm going to use, and D is often one of those options. The likely future of D on the different platforms is an important part of that assessment, hence 'predicting' the future of D, hard and very unreliable though that is, is an important element in some of my less trivial decisions.
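To spell out the arithmetic behind 'prospective value' (illustrative numbers only, not from the thread):

    // expected value: weight each outcome's payoff by its probability
    double expectedValue(const double[] probs, const double[] payoffs)
    {
        double ev = 0;
        foreach (i, p; probs)
            ev += p * payoffs[i];
        return ev;
    }
    // e.g. expectedValue([0.6, 0.4], [100.0, -80.0]) == 28.0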
Re: C's Biggest Mistake on Hacker News
On Wednesday, 25 July 2018 at 23:27:45 UTC, Laeeth Isharc wrote: But making predictions is a tricky thing and mostly of not much value. I'm really surprised to hear you say this - so much money in financial services is poured into making predictions, lots of them, and as fast as possible. Isn't that one of the promises of D in that market? Whatever the reality of that, in the life of all humans the ability to make good predictions is fundamental to survival - if I cross the road now, will I be run over? If I build a chair to make money, will anyone buy it? Likewise, if I am investing time in developing my skills to further my career, will learning D be a net benefit? This important question depends heavily on predicting the future of D (among many other things). If I use D for my startup, will it be the secret sauce that will propel us to the top, or would I be better off with JDK8 or modern C++? I think it's more interesting to be the change you wish to see in the world. This has a lovely ring to it, but it doesn't mean we shouldn't assess/predict whether what we do will provide a net benefit.
Re: /^(?:([^:\/?#]+):)?(?:\/\/((?:(([^:@]*)(?::([^:@]*))?)?@)?([^:\/?#]*)(?::(\d*))?))?((((?:[^?#\/]*\/)*)([^?#]*))(?:\?([^#]*))?(?:#(.*))?)/, [your code here]
On Friday, 6 April 2018 at 13:10:07 UTC, jason wrote: what is this? It's a Perl program that converts D code into APL.
Re: Am I reading this wrong, or is std.getopt *really* this stupid?
On Sunday, 25 March 2018 at 14:46:23 UTC, Abdulhaq wrote: On Saturday, 24 March 2018 at 21:24:28 UTC, Andrei Alexandrescu wrote: That'd be great. I'm thinking something like an option std.getopt.config.commandLineOrder. Must be first option specified right after arguments. Sounds good? I thought this was a clever joke, but everyone is taking it seriously?! "When running mygreatprog.exe, always run with --command-line-order CommandLine as the first command line option, otherwise mygreatprog.exe may misinterpret the command line" Oops, sorry to reply to myself, I realise my mistake now :-)
Re: Am I reading this wrong, or is std.getopt *really* this stupid?
On Saturday, 24 March 2018 at 21:24:28 UTC, Andrei Alexandrescu wrote: That'd be great. I'm thinking something like an option std.getopt.config.commandLineOrder. Must be first option specified right after arguments. Sounds good? I thought this was a clever joke, but everyone is taking it seriously?! "When running mygreatprog.exe, always run with --command-line-order CommandLine as the first command line option, otherwise mygreatprog.exe may misinterpret the command line"
Re: D course material
On Tuesday, 13 March 2018 at 21:12:16 UTC, David Gileadi wrote: On 3/13/18 2:08 PM, aberba wrote: On Tuesday, 13 March 2018 at 17:20:57 UTC, Meta wrote: On Tuesday, 13 March 2018 at 12:39:24 UTC, Dmitry Olshansky wrote: [...] Honestly I'd recommend TDPL. It's got a lot of good real-world examples, including some OOP ones, but more importantly examples that demonstrate concurrent programming, generic programming, procedural, and I think a few functional examples as well. Basically, it covers a very broad area in one book while also teaching you D. Boring stuff IMO. Interesting that you found it boring--I found it to be the opposite. It is one of the few programming books that I can read for enjoyment. The book is excellent but I did find the examples boring.
Re: State of D: The survey is killing man, way too much
On Saturday, 3 March 2018 at 19:46:38 UTC, Jonathan Marler wrote: On Saturday, 3 March 2018 at 17:42:25 UTC, David Gileadi wrote: On 3/3/18 8:08 AM, 0x wrote: The D survey is killing maan! Those are lots of questions in there If I ever get hold of the people behind it... Is it a coincidence that your user handle is "negative one"? ;) He's obviously unsigned and therefore cannot be "negative". In his mind he was just overflowed :) Ha, good one. Somehow I find that this description applies to 21st-century men in general.
Re: Being Positive
On Tuesday, 13 February 2018 at 11:36:35 UTC, psychoticRabbit wrote: On Tuesday, 13 February 2018 at 08:08:28 UTC, bauss wrote: On Tuesday, 13 February 2018 at 01:32:29 UTC, psychoticRabbit wrote: Personally, I found that youtube video (Life is better with Rust's community automation - YouTube) rather disturbing. Psychotic rabbit disturbed by programming-related video. In other news...
Re: Dub, Cargo, Go, Gradle, Maven
On Tuesday, 13 February 2018 at 10:06:43 UTC, welkam wrote: ADG? Google doesn't find anything relevant Acyclic directed graph - more usually written the other way round as directed acyclic graph (DAG), which is why Google struggles with it.
Re: Which language futures make D overcompicated?
On Friday, 9 February 2018 at 07:54:49 UTC, Suliman wrote: I like D, but sometimes it's look like for me too complicated. Go have a lot of fans even it not simple, but primitive. But some D futures make it very hard to learning. Small list by me: 1. mixins 2. inout 3. too many attributes like: @safe @system @nogc etc Which language futures by your opinion make D harder? This is a great question. The hard part of good language design is making things simple.
Re: Looking for a job in USA
On Saturday, 18 November 2017 at 08:59:53 UTC, Satoshi wrote: On Saturday, 18 November 2017 at 01:31:09 UTC, Indigo wrote: On Wednesday, 15 November 2017 at 17:32:50 UTC, Satoshi wrote: Hi, as the title says, I'm looking for a job opportunity in the USA (H1B visa sponsorship required). I'm an experienced Software Engineer with a demonstrated history of working in the security and investigations industry. Skilled in C, C++, D, C#, SQL, Object-Oriented Programming, Software Development and Electrical Engineering. Strong engineering professional with willingness to further education. I currently work as a full-stack ASP.NET developer for SolarWinds in Brno (Czechia). There are a couple of open source projects I have done in the past: https://github.com/Rikarin If you are interested or you know someone who could hire me, please let me know! Thanks! What is your reasoning for coming to the US? You might want to rethink this as America is collapsing. America will be vastly different in 10 years and not a great place to be. The amount of corruption in the government and the amount of vitriol that people have for each other are astonishing... and it is only getting worse. Actually, Slovakia (SK) and Czechia (CZ) are the two most corrupt countries in the EU. We are paying huge taxes and getting nothing back. If you are moving to settle down then it would be a bad decision IMO. If it is just a temporary thing for a few years it might be ok depending on where you end up. I wanna try to live in the US for a few years and then decide if I should leave or settle down. Do you mind me asking why you are leaving Czechia? I hear there are a lot of pretty females there ;) Is it simply business or is it the country itself? To be honest, I couldn't imagine it being as bad as the US but I do not know much about it. To be honest, I'm curious as to what it is like over there because I plan on moving out of the US at some point and I'm looking for countries that are a bit more stable and not on the decline. Actually, CZ is rising up and getting better, but in the business area and salaries it's still worse than in the US. Some places in the EU are not safe yet. A lot of immigrants are going there from war zones. They are like groups of anarchists, destroying everything, stealing, raping and not respecting the laws. Salary... In the US you get $100,000/year as a senior developer or something like that, right? Here it's only like $30,000/year. But the price of stuff like cars, groceries and everything you can buy on Amazon, eBay, etc. is the same. The concept of money in the US seems to be different than in SK. Here it's more about survival than enjoying life. People in the US seem to be a little more open to strangers than here. Those are the reasons why I want to leave. BTW: What's wrong with the US? Don't worry, you'll fit right in...
Re: Rename 'D' to 'D++'
On Friday, 10 March 2017 at 11:25:11 UTC, Traktor TOni wrote: I think the name is just misleading, the D developers should at least be honest with themselves. Well, the tractor derives from the shire horse and Toni comes from Antonius, so you should be honest too and rename yourself to Shirehorse++ Antonius--.
Re: Taking D to GDC Europe - let's make this tight
On Tuesday, 12 July 2016 at 11:27:18 UTC, Ethan Watson wrote: http://schedule.gdceurope.com/session/d-using-an-emerging-language-in-quantum-break My proposal for a talk has been accepted, and I'll be in Cologne next month presenting to industry peers. Congratulations, and it sounds good. However, I would say that it's something of a truism in sales not to criticize the competition - it doesn't have the effect that you think it will. The expressiveness of D code next to the (long-winded etc.) equivalents in the other languages will be clear, so it's more effective to praise those languages for their historical strengths and let D talk for itself as a good improvement, worthy of investigation. IMHO.
Re: D mentioned and criticized
On Thursday, 19 May 2016 at 13:53:46 UTC, Guillaume Piolat wrote: On Wednesday, 18 May 2016 at 21:45:16 UTC, Abdulhaq wrote: On Tuesday, 17 May 2016 at 12:02:02 UTC, Guillaume Piolat wrote: On Tuesday, 17 May 2016 at 12:00:53 UTC, Guillaume Piolat wrote: Nim is much more interesting as a D alternative, in the sense that it is a. I give up, kept pressing ENTER while typing a message. Please finish, I have to know what follows "a" :-) Nim is much more interesting as a D alternative, in the sense that it is a more radical departure from C++. Be it in syntax, meta-programming, and experimental features. It certainly said no to variable-sized integers, while Loci stays with them. Thanks :-) - interesting.
Re: D mentioned and criticized
On Tuesday, 17 May 2016 at 12:02:02 UTC, Guillaume Piolat wrote: On Tuesday, 17 May 2016 at 12:00:53 UTC, Guillaume Piolat wrote: Nim is much more interesting as a D alternative, in the sense that it is a. I give up, kept pressing ENTER while typing a message. Please finish, I have to know what follows "a" :-)
Re: Researcher question – what's the point of semicolons and curly braces?
On Saturday, 14 May 2016 at 03:19:51 UTC, Joe Duarte wrote: I've been going through a lot of Unicode, icon fonts, and the Noun Project, looking for clean and concise representations for program logic. One of the ideas I've been working with is to leverage Unicode arrows. In most cases it's trivial aesthetic clean-up, like → instead of ->, and a lot of it could be simple autoreplace/autocomplete in tools. For if logic, you can see an example of bent arrows, and how I'd express the alternatives for your example here: http://i1376.photobucket.com/albums/ah13/DuartePhotos/if%20block%20with%20Unicode%20arrows_zpsnuigkkxz.png There's a keyboard for those types of programs ;-) http://www.dyalog.com/uploads/images/Business/products/us_rc.jpg (APL keyboard)
Re: Follow-up post explaining research rationale
On Monday, 9 May 2016 at 19:09:35 UTC, Joe Duarte wrote: Hi all, As I mentioned on the other thread where I asked about D syntax, I'm a social scientist about to launch some studies of the effects of PL syntax on learnability, motivation to pursue programming, and differential gender effects on these factors. This is a long post – some of you wanted to know more about my research goals and rationale, and I also said I would post separately on the gender issue, so here we go... One (over-)simplified aspect to this, I think, is that men are more prone to the sort of monomania required to become expert programmers. Our life goals are also different; consider Andrei and Walter being prepared to spend huge effort (despite the real risk of failure) on making their baby, D, successful. Women have a different emotional / life-goals setup and are not inclined to such endeavours (a massive generalisation, I know, but we are necessarily talking in generalisations). Of course all men and women sit on spectra of male/female behaviours, and we observers of those spectra each have our own unique life experiences of such, leading to different opinions. This is just my take on it.
Re: Some questions on latest work
On Wednesday, 4 May 2016 at 02:42:40 UTC, Bill Hicks wrote: On Tuesday, 3 May 2016 at 19:05:03 UTC, ShamShime Azelkraft wrote: On Tuesday, 26 April 2016 at 21:49:33 UTC, Bill Hicks wrote: I suggest you smoke some DMT (and have a breakthrough), or have a few Ayahuasca sessions. If that doesn't set you off on a path to the greatest positive impact, then nothing ever will. Everything you've desired to achieve with D is a construct of your ego, and nothing more. ... DMT: The Spirit Molecule (2010) HD https://www.youtube.com/watch?v=LtT6Xkk-kzk Sorry Bill, but reading a few Wikipedia articles about the Beats and Carlos Castaneda doesn't a spiritual guru make... even if you did get as far as a couple of YouTube videos as well.
Re: Some questions on latest work
On Wednesday, 27 April 2016 at 17:57:55 UTC, Bill Hicks wrote: If I get up on a stage with a grin splitting my face and talk about how great D is, I'm considered a hero. But if I criticize D for its flaws, then I'm a troll or someone who is just ranting. Anybody has the right to criticize D, just as people have the right to praise it. If D is part of your identity to the point where you can't stand hearing people criticize it and then get offended, then you have issues. Grow up. If you want to be taken seriously then you'll need to furnish us with a real name and stop hiding behind a pseudonym. You've also obviously got plenty of 'issues' that greatly subtract from any useful comment you might otherwise have made, so before investing too much time in unloading here you might care to reflect on the fact that no one can possibly take you seriously.
Re: Females in the community.
I have to say I agree that, for better or for worse, this thread alone demonstrates an occasional aggressiveness that puts me off, never mind women who are, generally speaking, less likely to weather the tone of voice often used here. Karabuta seems to be a non-native English speaker and got laid into for using the wrong word for women. He took the lashing in good spirits but it doesn't bode well for the thinner-skinned who might otherwise have a valuable contribution to make. On Thursday, 24 March 2016 at 08:39:01 UTC, Ola Fosheim Grøstad wrote: On Thursday, 24 March 2016 at 04:05:53 UTC, Adam D. Ruppe wrote: On Wednesday, 23 March 2016 at 10:46:22 UTC, QAston wrote: I could point to the building you're sitting in. Most likely made almost exclusively by males. LOL. I happened to spend most of the day today with a group of women... building something. (I was there too, of course, but I'm practically one of the sisters myself and they all did more work than me anyway. The other five are all non-controversially women.) I read this message out loud to them. We all got a good laugh. Yes, it was funny to me as my mother worked as an industrial designer in the 1960s and designed a top-of-the-line radio (within a group of men) called Tandberg Huldra 9. She spent a lot of time on the backlight, and came up with acrylic backlight as a novel solution (at that point in time). She wanted the front to be all black, but the head of the company didn't want that, so it was all aluminium coloured like the top image: http://nrhf.no/Tandberg/TR%20Radio/Tandberg%20Huldra/T'Huldra-9.html After she quit Tandberg released the version with only the bottom half in black... Which looks a bit silly. But guess what, some decades later audiophile equipment was black aluminium and acrylic backlights were standard... I am pretty sure that there are many "invisible" women involved with the products we use, but maybe men are spending more effort at getting their name published. Incidentally, she had to correct a newspaper earlier this year that wrongly attributed her design to a male designer (he was hired after she quit)... Later when she was teaching furniture design/interior architects, most students were female, so they tried to get some men in as well in order to get a more mixed group. Most educators know that having some diversity in a group is good for the social dynamics. The interaction in mixed groups is usually more interesting than in all-male or all-female groups. Y'all should stick to arguing about the color of the bikeshed. Maybe or maybe not, but meta discussions are important for changing norms within a forum. If a given tone means that some women hesitate to join in, it probably also means that a group of men also hesitate to join in. Adjusting the tone might mean that more people will participate, which would be better for all.
Re: Clojure vs. D in creating immutable lists that are almost the same.
On Saturday, 27 February 2016 at 22:31:28 UTC, Brother Bill wrote: That is, how to create one-off changes to an immutable data structure, while keeping the original immutable, as well as the one-off change, and maintain good performance. Clojure uses bit-partitioned hash tries. I recommend this video (Clojure Concurrency) https://www.youtube.com/watch?v=dGVqrGmwOAw slides here: https://github.com/dimhold/clojure-concurrency-rich-hickey/blob/master/ClojureConcurrencyTalk.pdf?raw=true (slide 21 about bit-partitioned hash tries)
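Not the trie itself, but a minimal sketch of the underlying idea in D terms - an 'update' builds a new value and leaves the original untouched; Clojure's bit-partitioned tries reduce the copying from O(n) to roughly O(log32 n) by sharing subtrees:

    immutable(int)[] withChanged(immutable(int)[] orig, size_t i, int v)
    {
        // whole-array copy; a trie copies only the path down to slot i
        return orig[0 .. i] ~ v ~ orig[i + 1 .. $];
    }

    void main()
    {
        immutable a = [1, 2, 3];
        auto b = withChanged(a, 1, 99);
        assert(a == [1, 2, 3] && b == [1, 99, 3]); // original intact
    }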
Re: OT: 'conduct unbecoming of a hacker'
On Friday, 12 February 2016 at 03:19:52 UTC, Nick Sabalausky wrote: On 02/11/2016 04:54 PM, w0rp wrote: His article is way too long. It seems like an article about whining about how people whine too much. It's metawhine! :) These meta whines get on my nerves, everything was much better in Usenet days.
Re: C++17
On Wednesday, 27 January 2016 at 18:09:50 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 27 January 2016 at 15:14:07 UTC, bachmeier wrote: And ironically, in this very thread, a C++ programmer has called D a toy language. D is incomplete, unfinished, unspecified and unstable. C++14 is an ISO standard and has several independent implementations that are polished and stable compiler releases. That makes D a toy language and C++ an industry standard. Not very difficult to grok, I would think. It may be immature, but come on, it's not a toy.
Re: Proposal: Database Engine for D
On Thursday, 31 December 2015 at 17:14:55 UTC, Piotrek wrote: The goal of this post is to measure the craziness of an idea to embed a database engine into the D language ;) I think about a database engine which would meet my three main requirements: - integrated with D (ranges) - ACID - fast Since the days when I was working on finance data SW I have become allergic to SQL. I thought that NoSQL databases would fill the bill. Unfortunately they didn't. And I want to have the ability to write code like this without too much effort:

    struct Person
    {
        string name;
        string surname;
        ubyte age;
        Address address;
    }

    DataBase db = new DataBase("file.db");
    auto coll = db.collection!Person("NSA.Registry");
    auto visitationList = coll.filter!(p => p.name == "James");
    writeln(visitationList);

And other things like updating and deleting from the db. I think you get my point. So I started a PoC project based on the SQLite design: https://github.com/PiotrekDlang/AirLock/blob/master/docs/database/design.md#architecture The PoC code: https://github.com/PiotrekDlang/AirLock/tree/master/src/database Can you please share your thoughts and experience on the topic? Has anyone tried similar things? Piotrek My two pence: if you want it to be fast then it must have a good implementation of indices. Your filter functions should not actually start collecting real records; instead they should simply change the way that the cursor traverses the underlying data store. You will need good query 'compilation', like the big boys have, which works out which tables and indices to use and in which order, based on stats of the data / indices. If you want ACID then SQL seems like a good approach to me; certainly I wouldn't want anything ORM-like for updating / inserting data. There are a number of good libraries out there already, SQLite obviously springs to mind. It would be a fun project but perhaps a lot more work than you realised if you really want isolation levels, speed etc.
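To illustrate the point about filters (hypothetical names, just the shape of the API, not AirLock's actual code):

    struct Query(T)
    {
        bool delegate(T) pred;

        Query!T filter(bool delegate(T) p)
        {
            if (pred is null)
                return Query!T(p);
            auto prev = pred;
            // compose lazily; a real engine would analyse p here and
            // pick an index so the cursor can skip most of the store
            bool delegate(T) combined = r => prev(r) && p(r);
            return Query!T(combined);
        }
    }
    // nothing touches the data store until the cursor is iterated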
Re: D could catch this wave: web assembly
On Wednesday, 23 December 2015 at 10:06:20 UTC, Suliman wrote: For example I do not know JS. And only C++. How would look like my web-app with WASM? First have a look at this, qt-emscripten: http://vps2.etotheipiplusone.com:30176/redmine/projects/emscripten-qt/wiki/Demos WASM will allow programming languages and libraries to be compiled down to WASM code and then run in the browser, rather as is happening with qt-emscripten (C++ is converted to JavaScript). As regards how it is rendered, DOM, OpenGL etc., I guess that will be an implementation choice.
Re: RFC in Comparison between Rust, D and Go
On Monday, 9 November 2015 at 21:01:29 UTC, Andrei Alexandrescu wrote: On 11/09/2015 09:13 AM, Nordlöw wrote: Yet another shallow language comparison that needs to be corrected: https://www.quora.com/Which-language-has-the-brightest-future-in-replacement-of-C-between-D-Go-and-Rust-And-Why/answer/Matej-%C4%BDach?srid=itC4&share=1 My response: https://goo.gl/VTEYFk -- Andrei This is a very strong and honest summary of the situation IMHO, and the straight talking and pinpoint accuracy of the problems gives me extra hope for the future of D at the same time.
C++ compiler vs D compiler
Perhaps the answer to this is obvious, but what's harder to write from scratch - a C++ compiler or a D compiler? :-) We know Walter wrote a C++ compiler single-handedly; does anyone else recall the C++ Grandmaster qualification, the free course where participants got to write a complete C++ compiler from scratch? I think it's dead now but I can't find any real info about it despite a serious google. What are the chances of anyone single-handedly writing a D compiler from scratch in the future? I know deadalnix is writing SDC in D - sounds interesting. Is the D language well enough documented / specified for a complete D implementation to be even possible (as things stand now)?
Re: Anyone working on updated Qt bindings?
On Saturday, 3 October 2015 at 01:58:01 UTC, Jeremy DeHaan wrote: I know a lot of people wish they had new bindings for Qt, so I was going to give it a go soon. Is anyone currently working on such a thing? I'd rather help someone than compete with them. I got quite far for Qt4 with https://github.com/alynch4047/smidgen and https://github.com/alynch4047/sqt . It's based on the sip bindings used for PyQt and for various reasons doesn't use extern(C++) etc. (though that's easily changed); I stuck to extern(C). It handles virtual functions, namespaces etc. I based it on PyQt4/sip because it means that a lot of work has already been done, and it's a Qt wrapping technology that I've had a lot of experience with and know works very well. The code is well tested and I think isn't too far from covering most of Qt4. Only tested on 64-bit Linux. I stopped development due to lack of time and a concern that Qt4 was on its way out and Android / iOS / Java / Web were the realistic future (somewhat unfortunately).
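For anyone curious about the approach, a minimal sketch of the extern(C) shim style described above (all names here invented for illustration): each C++ method gets a flat C entry point that D can call without touching C++ name mangling.

    // generated C shim on the C++ side (hypothetical):
    //   void* QWidget_new(void* parent);
    //   void  QWidget_show(void* self);

    extern(C) void* QWidget_new(void* parent);
    extern(C) void QWidget_show(void* self);

    struct Widget
    {
        void* handle;
        static Widget create() { return Widget(QWidget_new(null)); }
        void show() { QWidget_show(handle); }
    }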
Re: Looking for GC intensive D programs
On Sunday, 28 June 2015 at 01:41:53 UTC, rsw0x wrote: Does anyone know of any GC-intensive D programs that can preferably be run with little to no setup? I remember Maxime said that the Higgs compiler was GC-intensive https://github.com/higgsjs/Higgs
Re: D could catch this wave: web assembly
On Tuesday, 23 June 2015 at 11:09:31 UTC, Joakim wrote: As for a GC, why would webasm need to provide one? I'd think the languages would just be able to compile their own GC to webasm, which seems low-level enough. From the docs: Even before GC support is added to WebAssembly, it is possible to compile a language's VM to WebAssembly (assuming it's written in portable C/C++) and this has already been demonstrated (1, 2, 3). However, "compile the VM" strategies increase the size of distributed code, lose browser devtools integration, can have cross-language cycle-collection problems and miss optimizations that require integration with the browser.
Re: We simply must implement this for D to stay competitive
On Saturday, 20 June 2015 at 22:38:30 UTC, Walter Bright wrote: https://github.com/rollbear/basicpp I had a Video Genie in my youth too, I loved it :-)
Re: D could catch this wave: web assembly
On Thursday, 18 June 2015 at 10:36:16 UTC, Joakim wrote: Why can't they just admit that the core architecture of the web is horrific, ie an antiquated document format based on some shitty 50-year old IBM markup language (https://en.wikipedia.org/?title=Standard_Generalized_Markup_Language), a programming runtime that was cranked out in 10 days in the middle of the browser wars and certainly shows it (https://en.wikipedia.org/wiki/Brendan_Eich#Netscape_and_JavaScript), and a stylesheet language hacked on top to eliminate some redundancy, _by adding yet another language_?! Of course this is exactly true and it drives me mad too, but you can't just jettison it in favour of a better architecture. Given that it must be supported else it will break the interweb, what else is there to do but build the new stuff on the side? With a canvas, OpenGL backing and a half-decent 'assembly language' to compile down to, it could be made into (ultimately) a satisfactory development platform. You would only need to use DOM and CSS for the top canvas/OpenGL node and from there down it's all however you want to roll it. As for performance, granted it seems bizarre to require all these layers below, but I remember watching a very interesting video about how running on the OS is subject to large overheads in the kernel, while running in the browser can bypass that and hence is not such a performance drop as you might expect - unfortunately I can't dig up the link.
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 16:16:09 UTC, berlin wrote: well, read something to your world situation. take it from an old kufr that dos not want to live under islamic law: http://www.jihadwatch.org/ http://www.thereligionofpeace.com/ http://www.barenakedislam.com/ http://schnellmann.org/Understanding_Muhammad_Contents.html you might also want to take a closer look at "taqiyya" - that is why nobody can trust a muslim. It's your anger and hate you need to take a hard look at; they're taking you to a dark place. Whatever it is that's really eating you, I doubt it's the Muslims that caused it. BTW if you want to learn about Islam, learn it from a Muslim and not from hate-propagation sites.
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 13:55:14 UTC, Etienne wrote: On Wednesday, 17 June 2015 at 13:48:50 UTC, Abdulhaq wrote: On Wednesday, 17 June 2015 at 13:26:57 UTC, Etienne wrote: The likely explanation for people being out of touch for a few days is not murderous Muslim immigrants. When you start to think that it is, you need to look at the state of your mind, not the immigrants. Was he even serious? Sounded ironic to me, people who think like that around here in Canada are laughed at because we all immigrated 200-300 years ago :) It was a throwaway comment, no doubt, but the situation is too dangerous to let silly comments slide by without a mention - these things snowball. I'm sure there are plenty of problems with immigrants but blaming them for everything leads to an exaggerated xenophobia and the real problems are pushed to one side. Maybe one day Muslims will have their Martin Luther King, although I doubt the situation is so dire. I don't think you understand the worldwide situation, but this forum is not the place to discuss it. I will say that I doubt that MLK could have saved Grozny, Baghdad or Aleppo, for instance.
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 13:26:57 UTC, Etienne wrote: The likely explanation for people being out of touch for a few days is not murderous Muslim immigrants. When you start to think that it is, you need to look at the state of your mind, not the immigrants. Was he even serious? Sounded ironic to me, people who think like that around here in Canada are laughed at because we all immigrated 200-300 years ago :) It was a throwaway comment, no doubt, but the situation is too dangerous to let silly comments slide by without a mention - these things snowball. I'm sure there are plenty of problems with immigrants but blaming them for everything leads to an exaggerated xenophobia and the real problems are pushed to one side.
Re: Martin Nowak is officially MIA
On Wednesday, 17 June 2015 at 12:14:05 UTC, berlin wrote: On Wednesday, 17 June 2015 at 11:51:30 UTC, Abdulhaq wrote: maybe one of the brain surgeons from africa (comming by boat) or some muslim (turkish, arabic or otherwise) got him at after 16:00? I know, don't feed the trolls, but: Your primary problem is not the immigrants. it the muslim immigrants with knives. not a troll, but a berliner experiences of multi-kulti knives. The likely explanation for people being out of touch for a few days is not murderous Muslim immigrants. When you start to think that it is, you need to look at the state of your mind, not the immigrants.
Re: Martin Nowak is officially MIA
maybe one of the brain surgeons from africa (comming by boat) or some muslim (turkish, arabic or otherwise) got him at after 16:00? I know, don't feed the trolls, but: Your primary problem is not the immigrants.
Re: PHP verses C#.NET verses D.
First off, I would stress that architecture and process are more important than which of those 3 languages you choose, i.e. good testing (I prefer test-driven), continuous integration, and a solid architecture that you are confident will provide the reliability, correctness and uptime that you require. Having said that, I would personally be conservative and choose to standardise on C# for its maturity, expressiveness and great tooling. It also has a good ecosystem (libraries etc.) which will prove very useful for business-related tasks. D has better expressiveness and would probably run faster, but given all the other factors I would be concerned right now about its slight lack of maturity and under-developed ecosystem.
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
D is really unique in the sense that it's open enough for people not to feel that they have to roll their own. D also has enough features to satisfy many different users, although - and this is often forgotten - you don't _have_ to use them all. People like Go and Rust because those languages tell them exactly what to do. D doesn't; they have to think for themselves, and a lot of people hate that, which is sad, because having loads of things to choose from makes you think more about your code and software design in general and it makes you a better programmer/coder/architect. Thinking like that is fine when you work on your own, but when you're in a large team and working on a large code base the prospect of trying to grok a dozen different coding approaches using different feature sets of some uber-language is entirely unappealing and best avoided.
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
On Thursday, 11 June 2015 at 12:11:49 UTC, Ola Fosheim Grøstad wrote: As a norwegian I can't make up my mind as to whether I should write "color" or "colour". I suspect it will be taken as some kind of political statement. Hey, I am neutral! I use "color" in source code and "colour" in writing. :) As an Englishman I used to rail against the USA-ification of the language but now I've learnt to bite the bullet and actually follow the same rule as yourself. Saves a lot of indigestion :-)
Re: Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?
I really wish people would stop complaining about other languages having the same features as D without giving credit. It is impossible to figure out exactly where ideas for features come from, but most features predate even C++ if being first is the main point. Hear, hear - is it so unlikely that one footstep should fall in the footprint of another? The hard part about designing an imperative language is not the individual features, the palette is given. The hard part is turning it into a beautiful whole that is greater than the sum of the parts. And that is important, but difficult (or impossible) to achieve. This is it. Great languages (IMO) have condensed their features down to the smallest set of orthogonal features that they could manage. This makes the language easier to reason about, to share code, to maintain code, to learn, to read code - even writing it is often easier! Right now I feel that D is growing in 'features' and corner cases beyond the point where I want to explore its depths. It's gone from a swim in the bay to crossing the Channel. I always think about Herb Sutter's Guru of the Week column and how it made me think "ugh - too many oddities to learn". I could be wrong and I hope I am. It's quite a nice twist that the thread discussing which language is better branched into which version of English is the right one - as if such a thing is meaningful. Arguing about definitions and terminology is surely such a useless diversion.
Re: How does D improve on C++17?
On Monday, 27 April 2015 at 21:56:56 UTC, Meta wrote: On Monday, 27 April 2015 at 20:56:06 UTC, Andrei Alexandrescu wrote: On 4/27/15 2:13 AM, Idan Arye wrote: On Monday, 27 April 2015 at 01:28:01 UTC, Walter Bright wrote: Now on the front page of Hacker News! https://news.ycombinator.com/ https://news.ycombinator.com/item?id=9443462 Because tomorrow it won't be on the front page Soon as you post a direct link -> all votes go to spam. Basically any direct link to a HN article compromises the article. -- Andrei Too late, it's already dropped to 142. Between this ridiculous ranking system, shadow banning, the faux intellectualism, and the cult of Paul Graham, HN probably has to be one of the worst tech communities on the internet. I thought I was the only one who hated the 'HN comments are written in stone, ready for Britannica' ethos, so you made me laugh, but I have to say it's a good place to find articles.
Re: Interrogative: What's a good blog title?
On Monday, 27 April 2015 at 22:54:07 UTC, Andrei Alexandrescu wrote: I don't have a blog, and was thinking of starting one. E.g. the article on tracing allocations needs a home! I was wondering if you have any good ideas of what's a good blog name. I'd avoid branding my blog with my longish name, so I was thinking of something simple and easy to use in conversation (e.g. my current draft title at http://blog.erdani.com, the metareferential "You Are Reading This Blog's Title" is perhaps intriguing but difficult to talk about). Any thoughts? My only candidate right now is "Greasemonkey Philosopher". I'm shooting for a title that reflects the contrast between my low-level and high-level aspirations. Sadly enough, "greasemonkey" is a popular browser extension package, so it comes up in searches etc. Generally I'm looking for a phrase that's catchy but doesn't remind one of something else. Something contradictory, funny, etc. Please let me know of any thoughts you might have! Thanks, Andrei If it's about D and language development then how about "Deconstructed".
Re: Today's programming challenge - How's your Range-Fu ?
On Sunday, 19 April 2015 at 02:20:01 UTC, Shachar Shemesh wrote: On 18/04/15 21:40, Walter Bright wrote: I'm not arguing against the existence of the Unicode standard, I'm saying I can't figure any justification for standardizing different encodings of the same thing. A lot of areas in Unicode are due to pre-Unicode legacy. I'm guessing here, but looking at the code points: é (U00e9 - Latin small letter E with acute) comes from Latin-1, which is designed to follow ISO-8859-1, while U0301 (Combining acute accent) comes from "Combining diacritical marks". The way I understand things, Unicode would really prefer to use U0065+U0301 rather than U00e9. Because of legacy systems, and because they would rather have the ISO-8859 code pages be 1:1 mappings rather than 1:n mappings, they introduced code points they really would rather do without. This also explains the "presentation forms" code pages (e.g. http://www.unicode.org/charts/PDF/UFB00.pdf). These were intended to be glyphs, rather than code points. Due to legacy reasons, it was not possible to simply discard them. They received code points, with a warning not to use these code points directly. Also, notice that some letters can only be achieved using multiple code points. Hebrew diacritics, for example, do not, typically, have a composite form. My name fully spelled (which you rarely would do), שַׁחַר, cannot be represented with fewer than 6 code points, despite having only three letters. The last paragraph isn't strictly true. You can use UFB2C + U05B7 for the first letter instead of U05E9 + U05C2 + U05B7. You would be using the presentation form which, as pointed out above, is only there for legacy. Shachar, or shall I say שחר. Yes, Arabic is similar too.
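To make the composed/decomposed distinction concrete, here is a minimal D sketch; it assumes Phobos's std.uni.normalize and the NFC form, which I believe behave roughly as shown:

import std.stdio;
import std.uni : normalize, NFC;

void main()
{
    string precomposed = "\u00E9";  // U+00E9, Latin small letter e with acute
    string decomposed  = "e\u0301"; // U+0065 followed by U+0301, combining acute

    // Rendered identically, yet the code point sequences differ, so a
    // naive comparison fails and the UTF-8 byte lengths disagree.
    writeln(precomposed == decomposed);                  // false
    writeln(precomposed.length, " ", decomposed.length); // 2 3

    // Normalising both to the same form makes them compare equal.
    writeln(normalize!NFC(precomposed) == normalize!NFC(decomposed)); // true
}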
Re: Why I'm Excited about D
On Monday, 13 April 2015 at 16:43:00 UTC, deadalnix wrote: Thinking about it, this is probably the right thing to do, but the range interface makes it non obvious and confusing. Some time ago there was a long thread about formalising the interface for ranges, i.e. a clear and precise definition of what each method should do. Was a consensus reached and documented?
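For what it's worth, the contract as I understand it is structural rather than nominal: any type exposing empty, front and popFront is an input range. A minimal sketch of my own, not an official definition:

import std.stdio;

// A hand-rolled input range: no interface to implement, just the three
// members that foreach and the std.range machinery look for structurally.
struct UpTo
{
    int current, limit;

    @property bool empty() const { return current >= limit; }
    @property int front() const { return current; }
    void popFront() { ++current; }
}

void main()
{
    foreach (n; UpTo(0, 3))
        writeln(n); // prints 0, 1, 2
}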
Re: I came up with a new logo for the D language
On Monday, 13 April 2015 at 15:12:18 UTC, Gary Willoughby wrote: On Monday, 13 April 2015 at 12:49:19 UTC, Abdulhaq wrote: I'd suggest a fresh look be introduced when the ref counting and GC work has been done, Believe it or not I'm not opposed to this. and personally I'd suggest just a simple clean 2D metro-ish "D" as the "logo", No. This is fashion. Hire a professional to do it and make long-term usability a requirement. The 2D flat look will feel very old when the big boys (Microsoft, Apple) move on. Yep, it's certainly a fashion, but is there really any escape from that? I agree that professionals will do a much better job than an amateur will. At the end of the day it's Walter's and Andrei's decision, and it's not a decision to take lightly. Sometimes a new logo (and associated re-brand) can give you an energetic new direction; on the other hand, it can cause more issues with recognisability and market confusion. Yes.
Re: I came up with a new logo for the D language
On Monday, 13 April 2015 at 10:31:06 UTC, ixid wrote: On Monday, 13 April 2015 at 07:12:29 UTC, deadalnix wrote: It does not matter if one knows these are planets or not (these aren't planets technically, but Phobos and Deimos, Mars's moons). What does matter is that the logo is recognized and associated with D. Any logo change goes against that goal, so that probably won't happen. Do you think anyone outside a tiny number of forum users would recognize the logo at this stage? C and D share a great feature - their whole ecosystem and ethos are expressed in a single letter. On reddit or Hacker News one need merely write the letter D and everyone knows what you are talking about. Only a small fraction of those people associate D with its current logo - a white D on a red "shiny" background, the white D having some sort of blob attached to it. To me the logo looks far too busy. I find it clunky, forced (because it's trying to squash in two moons and a planet) and unattractive. On the subject of the D media "brand", I don't believe there really is one. For instance, take a look at the D on the cover of Andrei's book: http://erdani.com/index.php/books/tdpl/ - no blobs in sight. I'd suggest a fresh look be introduced when the ref counting and GC work has been done, and personally I'd suggest just a simple clean 2D metro-ish "D" as the "logo", but I do realise that I am just one tiny voice of many. I think the homage to Digital Mars is just confusing in the website logo but should be retained in related product names such as Phobos. To compound the heresy and further stir the Wrath of Gary :-) I'd even change the colour to blue. I hasten to add that all the above thinking is pure meaningless bikeshedding, and getting a better GC is where it's all at for me :-)
Re: I came up with a new logo for the D language
On Monday, 13 April 2015 at 08:26:32 UTC, Gary Willoughby wrote: On Monday, 13 April 2015 at 08:14:05 UTC, Abdulhaq wrote: On Sunday, 12 April 2015 at 22:02:01 UTC, Barry Smith wrote: It's simple, but clean. Somewhat similar to the old one. Hope you like it. http://s2.postimg.org/m6qcfemhl/dlang.png Email me at barry.of.sm...@gmail.com if you want the SVG version. This idea improves on the current one in that it is much clearer that the two 'blobs' are objects distinct from the D itself. I like it. It's terrible and looks like a student project. Please stop trying to destroy the D brand. You're making silly accusations. My skin is fairly thick, so no offence taken, but when it comes to 'destroying D' the sometimes toxic atmosphere here is far more effective.
Re: I came up with a new logo for the D language
On Sunday, 12 April 2015 at 22:02:01 UTC, Barry Smith wrote: It's simple, but clean. Somewhat similar to the old one. Hope you like it. http://s2.postimg.org/m6qcfemhl/dlang.png Email me at barry.of.sm...@gmail.com if you want the SVG version. This idea improves on the current one in that it is much clearer that the two 'blobs' are objects distinct from the D itself. I like it.
Re: DIP76: Autodecode Should Not Throw
On Tuesday, 7 April 2015 at 03:17:26 UTC, Walter Bright wrote: http://wiki.dlang.org/DIP76 The DIP lists the benefits but does not mention any cons. A con that I can see is that it violates the 'fail fast' principle. By silently replacing data, the developer will be presented with a probably-hard-to-debug problem later in the application lifecycle (probably in an unrelated area), wasting developer time.
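To illustrate the trade-off, here is a sketch contrasting today's fail-fast decoding with the DIP's proposed replacement behaviour; it assumes std.utf's decode and UseReplacementDchar, which I believe already provide both modes explicitly:

import std.stdio;
import std.utf : decode, UseReplacementDchar, UTFException;

void main()
{
    // 0xFF can never appear in well-formed UTF-8.
    auto bad = cast(string) ['h', 'i', cast(char) 0xFF, '!'];

    size_t idx = 2;
    try
    {
        decode(bad, idx); // fail fast: throws at the bad byte
    }
    catch (UTFException e)
    {
        writeln("caught invalid UTF at index 2");
    }

    idx = 2;
    auto c = decode!(UseReplacementDchar.yes)(bad, idx);
    writeln(c == '\uFFFD'); // true: the bad data was silently patched up
}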
Re: [Semi OT] The programming language wars
On Monday, 30 March 2015 at 18:49:01 UTC, Joakim wrote: On Sunday, 29 March 2015 at 21:17:26 UTC, Abdulhaq wrote: On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote: would suffice. When you said "I think rodent-based UIs will go the way of the dinosaur," you seemed to be talking about more than just programmers. I'm still waiting for The Last One (from Feb 1981) to reach fruition: http://www.tebbo.com/presshere/html/wf8104.htm http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html Once finished, there will be no more need to write any programs. Heh, that article is pretty funny. :) In the comments for the second link, the lead programmer supposedly said, "For me TLO remains the 1st ever programming wizard. Wrongly advertised and promoted, but inherently a 'good idea'." Considering how widespread wizards are in Windows these days, the idea has certainly done well. I do think that that concept of non-technical users providing constraints and answering questions is the future of building software, it just can't be built by one isolated guy. The configuration and glue code can be auto-generated, but there will likely always need to be core libraries written in a programming language by programmers. But the same automation that has put most travel agents out of work will one day be applied to most programmers too. It was such an exciting time back then, but most of us who had a clue knew that it certainly couldn't be done (at that time, anyway). Around about the same time there was another article in PCW (a great magazine, by the way) about a data compression tool that you could rerun over and over again to make files smaller and smaller ;-). I wish we could read the back issues like we can with Byte (on archive.org). Even the adverts are great to read for us old hands. As to whether we'll ever do it, I agree with previous comments that it's related to understanding language - context is everything, and that takes an understanding of life and its paraphernalia.
Re: [Semi OT] The programming language wars
On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote: would suffice. When you said "I think rodent-based UIs will go the way of the dinosaur," you seemed to be talking about more than just programmers. I'm still waiting for The Last One (from Feb 1981) to reach fruition: http://www.tebbo.com/presshere/html/wf8104.htm http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html Once finished, there will be no more need to write any programs.
Re: Calypso and the future of D
On Friday, 23 January 2015 at 11:26:07 UTC, Walter Bright wrote: On 1/23/2015 3:22 AM, Abdulhaq wrote: On Friday, 23 January 2015 at 10:53:47 UTC, Walter Bright wrote: Yes, it's tied to clang++. It may not even work on all the platforms we support. But that's no matter for now. When you say "for now", does that imply that at some time in the future it may matter, in which case isn't it better to get these issues thrashed out now? Is this a potential dead end? We don't need a perfect solution immediately. We do need a solution that's better than nothing; we can build on that as required. I actually agree (though I am nowhere near as well informed as most others on this forum) that Calypso is the way to go; however, ISTM there are large implications that go with that decision. The infrastructure that Calypso depends on is not something, IMO, that you "can build on as required" to address missing platforms, for instance. Gazing into my crystal ball, I'd say that 5 years down the road there will be many D libraries with significant dependencies on C++ (Calypso bound), to the extent that the great unwashed will view D as a language that is only practicable on platforms supported by clang++. For me that is fine, but I suspect not fine for others. For instance, I would view Qt and VTK as key bindings. Others will want numeric libraries etc. I should also point out that D doesn't have 'nothing' in terms of alternatives; there exist other more traditional binding-based technologies that could flourish with 'official' support. Since I have a hand in one of those, I should point out that I think Calypso, if it does what I think it does, will work better in terms of integration, speed etc., and from my personal perspective is therefore better. My goal is to have Qt, VTK, linear algebra, matrices etc. available for D (Linux and Windows), and I'm not fussed exactly how it is done.
Re: Calypso and the future of D
On Friday, 23 January 2015 at 10:53:47 UTC, Walter Bright wrote: On 1/23/2015 2:42 AM, Abdulhaq wrote: Calypso sounds fantastic but seems very tied to one compiler - we all need to know if yourself and Walter are content with that, for instance. Yes, it's tied to clang++. It may not even work on all the platforms we support. But that's no matter for now. When you say "for now", does that imply that at some time in the future it may matter, in which case isn't it better to get these issues thrashed out now? Is this a potential dead end?
Re: Calypso and the future of D
On Friday, 23 January 2015 at 00:24:45 UTC, Andrei Alexandrescu wrote: I think it's important that we enable Calypso (https://github.com/Syniurge/Calypso) and related tooling that interfaces D with other languages, notably C++. A key topic in 2015 for D is playing well with C++. A good C++ interface will give access to a host of mature C++ libraries, starting of course with the C++ standard library. More importantly, it will provide a smooth migration path for organizations that want to do development in D whilst taking advantage of their legacy code. I'm very glad to see this comment from you, so that we can get a more fully fleshed out vision for where D is heading with C++ interop. I, for instance, have developed Smidgen (https://github.com/alynch4047/smidgen), which was to wrap Qt. However, I halted development (having made very good progress, I believe) due to feeling that it could easily be superseded by a more direct interface to C++, such as Calypso seems to be. These efforts do amount to a lot of work and need commitment (to their architecture) at the top level too. For instance, I needed weak references, which I have had to work around without any confidence that the workaround would continue to work with future changes to the GC. I forget the details, but I also had concerns relating to GCed objects being relocated. Calypso sounds fantastic but seems very tied to one compiler - we all need to know if you and Walter are content with that, for instance. Getting a plan out there so that everyone can pull in one direction would be great.
Re: D and Nim
On Monday, 5 January 2015 at 11:01:51 UTC, bearophile wrote: I don't remember having such a bug in my life. Perhaps you are very good, but a language like D must be designed for more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond Hettinger. Bye, bearophile
Re: GBAiD (GameBoy Advance in D)
On Thursday, 25 December 2014 at 09:22:48 UTC, JN wrote: On Thursday, 25 December 2014 at 01:59:48 UTC, Meta wrote: On Thursday, 25 December 2014 at 00:05:06 UTC, MattCoder wrote: Hi, Not mine, just sharing: reddit: http://www.reddit.com/r/programming/comments/2qaxvs/gbaid_a_gameboy_advance_emulator_in_d/ github: https://github.com/DDoS/GBAiD Matheus. That's really neat. The author's coding style suggests that he's new to D and coming from Java, as I was in a similar place when I started with D and his code looks very similar to some that I wrote back then. I write code in the same way, even though I first learned and used C++ before switching to Java. Is it really that bad? I know OOP isn't trendy nowadays and functional programming is cool, but it's a very simple way to write software. I don't think the fact you're not using "advanced" features makes the code less cool. OOP encourages the use of inheritance (when composition is usually preferable) and the overuse of state, increasing the likelihood of bugs. If you avoid those traps then it's a good paradigm.
Re: Why is `scope` planned for deprecation?
Just like the OOP introductory books that still insist on talking about Cars and Vehicles, Managers and Employees, Animals and Bees, always using inheritance as code reuse. Barely talking about is-a and has-a, and all the issues about fragile base classes. -- Paulo Hear, hear. One of the problems with many introductions to OOP-paradigmed languages such as C++ is that, by having to spend a lot of time explaining how to implement inheritance, they leave the novice reader thinking that OOP is the 'right' approach to solving many problems when in fact other techniques ('prefer composition over inheritance' springs to mind) are far more appropriate. This is one of the primary problems I find in the code of even more experienced programmers.
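A tiny sketch of the point, with hypothetical types: reuse by delegation (has-a) rather than by inheriting from a fragile base class (is-a):

// A Car has-an Engine; it reuses Engine's behaviour by forwarding to it,
// rather than inheriting from Engine and pretending a Car is-an Engine.
class Engine
{
    void start() { /* spin up */ }
}

class Car
{
    private Engine engine; // composition: the Engine is an internal detail

    this() { engine = new Engine(); }

    void drive()
    {
        engine.start(); // delegation, not a fragile base class call
    }
}

void main()
{
    auto car = new Car();
    car.drive();
}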
Re: Fwd: Interfacing with C++
On Wednesday, 5 November 2014 at 12:39:16 UTC, albatroz wrote: On Monday, 3 November 2014 at 20:32:34 UTC, Abdulhaq wrote: You might find my project Smidgen (https://github.com/alynch4047/smidgen) useful. It's not finished, but you might consider extending it rather than starting elsewhere. woe... The first time I see this project mentioned. You may consider publishing this in the announcement forum, it will give it more visibility, it looks very promising. And now to play with it... Thank you Please ask me any questions directly at alynch4...@gmail.com, I'll do my best to help you. You might be interested in my progress in wrapping Qt, which can be seen at https://github.com/alynch4047/sqt . The example Qt wrapped program simply wraps a QLineEdit and overrides various virtual methods, but most of the *.sip format is implemented for D and I could probably wrap a fair amount of Qt now, though again it's on hold ATM. Please do let me know how you get on, even if it doesn't work for you!
Re: Fwd: Interfacing with C++
On Monday, 3 November 2014 at 20:32:34 UTC, Abdulhaq wrote: On Monday, 3 November 2014 at 18:24:12 UTC, Shriramana Sharma via Digitalmars-d wrote: It was recommended that I discuss this on this list rather than d.learn... (I didn't think I'd graduate out of d.learn *that* quickly...) -- Forwarded message -- Hello. I really really need to be able to interface well with a C++ library which contains lots of classes if I am going to further invest time into D. You might find my project Smidgen (https://github.com/alynch4047/smidgen) useful. It's not finished, but you might consider extending it rather than starting elsewhere. The build system works on Linux 64 using cmake, so you should be able to get started fairly easily, and there is a sample mini set of C++ classes used for testing which you can look at. It was created to wrap Qt, using the same *.sip file definitions as are used by PyQt. The FEATURE, TODO and DONE lists are as follows: BTW, one of the reasons that I have not yet completed this project is that I don't have a handle on where the current C++ interfacing work is going, and I don't want to work on something only to have it made redundant. Also, it has to use a hack to work around the lack of weak references, and if the GC is changed to relocate objects then it will break this aspect of the code.
Re: Fwd: Interfacing with C++
On Monday, 3 November 2014 at 18:24:12 UTC, Shriramana Sharma via Digitalmars-d wrote: It was recommended that I discuss this on this list rather than d.learn... (I didn't think I'd graduate out of d.learn *that* quickly...) -- Forwarded message -- Hello. I really really need to be able to interface well with a C++ library which contains lots of classes if I am going to further invest time into D.

You might find my project Smidgen (https://github.com/alynch4047/smidgen) useful. It's not finished, but you might consider extending it rather than starting elsewhere. The build system works on Linux 64 using cmake, so you should be able to get started fairly easily, and there is a sample mini set of C++ classes used for testing which you can look at. It was created to wrap Qt, using the same *.sip file definitions as are used by PyQt. The FEATURES, PROVISOS, GOTCHAS, DONE and TODO lists are as follows:

FEATURES
* All D
* Understandable, maintainable code
* Wraps protected and virtual methods, allows virtual methods to be overridden in D
* Mixin classes in target C++ library supported
* Allows custom type conversions between C++ and D types
* C++ enums mapped to D enums and are type checked in D
* Wraps nested C++ classes
* Tested
* Based on the sip format. This is well proven and allows simplified maintenance of wrappers for multiple versions of the target library. (All larger target libraries will need some ongoing maintenance of the wrapper regardless of the wrapping technology.)

PROVISOS
* Currently works with wrapped method arguments/return types of X, X* and X&, but X*& etc. are not implemented.
* Wrapped types which are returned by value must have a copy constructor (could be changed later).
* DO NOT capture references to (stack-based) arguments when overriding wrapped virtual methods. They are destroyed by the wrapper when the virtual method ends.
* (Qt only) When emitting signals, you must use the emit! notation, not just call the signal.

GOTCHAS
* An invalid getClassNameC can cause segfaults when compiling the target application, because it is used in template instantiation.
* If getClassName does not work for a subclass (QMoveEvent) when we are expecting T = the base class (QEvent), and it was created as the base class (QEvent) (because getClassName did not work), then it casts to the subclass (QMoveEvent) on a later lookup (because it is defined on the other method as the return type) and crashes.

DONE
* Static methods
* Default values
* Enums
* Multiple packages / modules
* Nested classes e.g. QMetaObject::Connection
* Transfer, TransferThis and TransferBack for arguments
* Destructors - and deregister instance from createdInD
* Multiple inheritance => multiple pointers
* Conversion functions in package_wrapper.cpp should have the ability to add #include directives at the top of package_wrapper.cpp
* Virtual functions
* Protected functions
* Sip If clauses for features, platforms

TODO
* Non-primitive typedefs
* Primitive types C -> D conversion - *.conf file
  %CToDType long = long
  %CToDType unsigned char = ubyte
* Sip If clauses for timelines
  %Timeline {Qt_5_0_0 Qt_5_0_1 Qt_5_0_2}
* KeepReference for arguments + tests for Transfer etc. - Easy: in a class with KeepRef, e.g. View.setModel(model /KeepReference/), it has an extra attribute
  void* setModel_SMIKeepRef
  then in setModel()
  {
      View_setModel_SMIX23(model.wrappedObject);
      setModel_SMIKeepRef = model;
  }
  This will make D keep a reference to the model as long as the View instance is alive. Each view will have its own reference, so the total can go above 1 for a given model.
* getCastPointerForInterface can easily be improved by not switching on a name; instead each class has a separate variable for each base class pointer, populated in the constructor - each class has one extra pointer per interface implemented:
  override void*[] getExtraPointers()
  {
      void*[] extraPointers = super.getExtraPointers();
      extraPointers ~= wrappedObject_Calculator;
      return extraPointers;
  }
  in the constructor:
  this() { wrappedObject_Calculator = castRectAsCalculator(wrappedObject); }
  in the destructor:
  ~this()
  {
      deregisterWrappedObject(wrappedObject);
      deregisterWrappedObject(wrappedObject_Calculator);
  }
* instance_wrapper et al. need to also register base class pointers
* getWrappedObject / getClassName - how to handle this in a cross-module fashion
* Add support for wrapping members
* Add QTest support
* int arguments that take a default enum value & enum defaults
* char** -> use this _idea_:
  this(string[] args)
  {
      // if (m_instance != null)
      //     throw new RuntimeException("QCoreApplication can only be initialized once");
      argc = cast(int) args.length;
      argv = toStringzArray(args);
      this(&argc, argv);
      // m_instance.aboutToQuit.connect(m_instance, "disposeOfMyself()");
  }
* Threading? - wrappedObjects[] should be shared as with the CPP instance tracker. Use signal.d's WeakRef and InvisibleAddress
* qRegisterMetaType??
* Add %UsesConverter to package.sip
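As an aside, the char** item above assumes a toStringzArray helper that the snippet never defines; a sketch of what I have in mind (the name is taken from the snippet, the body is my own assumption, not code from Smidgen):

import core.stdc.stdlib : malloc;
import std.string : toStringz;

// Converts a D string[] into a NULL-terminated C-style char** suitable
// for APIs such as QApplication(int& argc, char** argv). The strings are
// GC-owned, so keep the D array reachable for as long as C holds argv.
char** toStringzArray(string[] args)
{
    auto argv = cast(char**) malloc((args.length + 1) * (char*).sizeof);
    foreach (i, arg; args)
        argv[i] = cast(char*) toStringz(arg); // each entry NUL-terminated
    argv[args.length] = null; // conventional terminator
    return argv;
}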
Re: Possible quick win in GC?
On Monday, 29 September 2014 at 12:02:40 UTC, thedeemon wrote: On Monday, 29 September 2014 at 11:36:51 UTC, Abdulhaq wrote: So, (does anyone know) has this technique been discarded for D, or is it 'just' a matter of the resources to do it? From the literature on this topic I remember that attempts at automatic region inference mostly failed: it led to some small regions here and there that didn't affect anything much and then one huge region where most of the data landed, requiring a full-blown GC inside that big region. Making it work in D, where everything can be mutated by anything and any type system wall is a cast() away from breaking, seems virtually impossible. Mmm, yes, I can imagine that being the case... oh well!
Re: Possible quick win in GC?
On Monday, 29 September 2014 at 09:45:38 UTC, Mike wrote: On Monday, 29 September 2014 at 07:03:29 UTC, Abdulhaq wrote: On Sunday, 28 September 2014 at 20:20:29 UTC, David Nadlinger wrote: On Sunday, 28 September 2014 at 16:29:45 UTC, Abdulhaq wrote: I got the idea after thinking that it should be fairly simple for the compiler to detect straightforward cases of when a variable can be declared as going on the stack - i.e. no references to it are retained after its enclosing function returns. LDC does the "fairly simple" part of this already in a custom LLVM optimizer pass. The issue is that escape analysis is fairly hard in general, and currently even more limited because we only do it on the LLVM IR level (i.e. don't leverage any additional attributes like scope, pure, … that might be present in the D source code). David That's interesting; yes, I guessed that the escape analysis would present the harder part, but I'm hoping that the algorithm can be built up incrementally, identifying the easy wins first and then over time extending it to cover harder cases. One way that I see it working is to conduct a form of lowering where the new operator has some information added to it to indicate the 'band' that the GC should place the non-root objects into (root objects go on the stack). Using the syntax of C++'s placement new (but with totally different semantics), code could be lowered to e.g.

External externalObj = new(0) External(); // 0 means use the default heap
Foo foo = new(0x1234) Foo(); // 0x1234 is the heap/band id for this set of objects
...
Bar bar = new(0x1234) Bar();

When the GC allocates memory it does so in the indicated band/heap, and then when foo (the root object of the object graph) goes out of scope the relevant band/heap is destroyed en bloc. The benefit of the idea is that when scanning for objects that can be deleted, the GC does not need to consider those objects in the non-default bands/heaps. For some classes of programs such as compilers (it was Higgs that gave me the stimulus), and with good static analysis (aye, there's the rub, cap'n), this could represent a very substantial time saving on each GC sweep. Sounds a little like http://wiki.dlang.org/DIP46 Mike Ah, thanks for the link, yes there are definite similarities; Walter identifies the set of objects to go in his region by putting a boundary on the pure function. My set is determined by static analysis and does not require the function to be pure. However, thedeemon has pointed out that this technique is in fact well known - I'm left to wonder if I should have a go at an implementation, but my wife would not be too chuffed about that.
Re: Possible quick win in GC?
On Monday, 29 September 2014 at 11:02:12 UTC, thedeemon wrote: On Sunday, 28 September 2014 at 16:29:45 UTC, Abdulhaq wrote... Congratulations on inventing http://en.wikipedia.org/wiki/Region-based_memory_management and "region inference" in particular. Ah yes - it's identical, isn't it - I don't know whether to be happy or sad about that :). They explain it much better too... So, (does anyone know) has this technique been discarded for D, or is it 'just' a matter of the resources to do it?
Re: Possible quick win in GC?
On Sunday, 28 September 2014 at 20:20:29 UTC, David Nadlinger wrote: On Sunday, 28 September 2014 at 16:29:45 UTC, Abdulhaq wrote: I got the idea after thinking that it should be fairly simple for the compiler to detect straightforward cases of when a variable can be declared as going on the stack - i.e. no references to it are retained after its enclosing function returns. LDC does the "fairly simple" part of this already in a custom LLVM optimizer pass. The issue is that escape analysis is fairly hard in general, and currently even more limited because we only do it on the LLVM IR level (i.e. don't leverage any additional attributes like scope, pure, … that might be present in the D source code). David That's interesting; yes, I guessed that the escape analysis would present the harder part, but I'm hoping that the algorithm can be built up incrementally, identifying the easy wins first and then over time extending it to cover harder cases. One way that I see it working is to conduct a form of lowering where the new operator has some information added to it to indicate the 'band' that the GC should place the non-root objects into (root objects go on the stack). Using the syntax of C++'s placement new (but with totally different semantics), code could be lowered to e.g.

External externalObj = new(0) External(); // 0 means use the default heap
Foo foo = new(0x1234) Foo(); // 0x1234 is the heap/band id for this set of objects
...
Bar bar = new(0x1234) Bar();

When the GC allocates memory it does so in the indicated band/heap, and then when foo (the root object of the object graph) goes out of scope the relevant band/heap is destroyed en bloc. The benefit of the idea is that when scanning for objects that can be deleted, the GC does not need to consider those objects in the non-default bands/heaps. For some classes of programs such as compilers (it was Higgs that gave me the stimulus), and with good static analysis (aye, there's the rub, cap'n), this could represent a very substantial time saving on each GC sweep.
Re: Possible quick win in GC?
You mean this? https://en.wikipedia.org/wiki/Escape_analysis Of course my proposal uses the technique of escape analysis as part of its methodology, but the essence of the idea is to greatly cut down on the work that the GC has to do on each sweep when dealing with objects that have been found to belong to a particular set. The objects in each set form an object graph that has no incoming references from objects external to the set, and they can therefore be allocated in their own heap that is destroyed when the root object goes out of scope. The saving takes place because the GC does not need to scan the default heap for pointers found in the new heaps (bands). For certain types of programs such as compilers / lexers / parsers, where many temporary objects are allocated and shortly afterwards deallocated, this can result in a substantial saving in execution time. In terms of memory usage we would see multiple potentially large but short-lived spikes.
Re: Possible quick win in GC?
Here's a code snippet which hopefully makes things a bit clearer:

/**
 * In this example the variable foo can be statically analysed as safe to go on the stack.
 * The new instance of Bar allocated in funcLevelB is only referred to by foo. foo can
 * be considered a root 'scoped' variable and the GC can delete both foo and the new Bar()
 * when foo goes out of scope. There is no need (except when under memory pressure) for
 * the GC to scan the band created for foo and its related child allocations.
 */
import std.stdio;

class Bar
{
public:
    int x;

    this(int x)
    {
        this.x = x;
    }
}

class Foo
{
public:
    Bar bar;
}

void funcLevelA()
{
    Foo foo = new Foo(); // static analysis could detect this as able to go on the stack
    funcLevelB(foo);
    writeln(foo.bar.x);
}

void funcLevelB(Foo foo)
{
    foo.bar = new Bar(12); // this allocated memory is only referred to by foo, which
                           // static analysis has established can go on the stack
}

void main()
{
    funcLevelA();
}
Possible quick win in GC?
Perhaps I've had too much caffeine today, but I've had an idea which might give a fairly quick win on the GC speed situation. It's a simple idea at heart, so it's very possible/likely that this is a well-known idea that has already been discarded, but anyway, here goes. I got the idea after thinking that it should be fairly simple for the compiler to detect straightforward cases of when a variable can be declared as going on the stack - i.e. no references to it are retained after its enclosing function returns. At the moment, AIUI, it is necessary for a class instance to be declared by the programmer as 'scoped' for this to take place. Further, I was considering the type of ownership and boundary considerations that could be used to improve memory management - e.g. using the notion of an owner instance which, upon destruction, destroys all owned objects. After some consideration it seems to me that by using only static analysis, a tree of references could be constructed from a root 'scoped' object to all referred-to objects that are allocated after the allocation of the root object. When the root object goes out of scope it is destroyed, and all the descendent objects of the root object (as identified by the static analysis) could also be destroyed in one simple shot. The static analysis of course constructs the tree by analysing the capturing of references from one object to another. It could be the case that even a simple static analysis at first (e.g. one that discards the technique in difficult situations) could cover a lot of use cases (statistically). Of course, if one of the descendent objects is referred to by an object which is not in the object tree, then this technique cannot be used. However, I envisage that there are many situations where, upon the destruction of a root object, all related post-allocated objects can also be destroyed. In terms of implementation I see this being done by what I am calling 'bands' within the GC. With the allocation of any identified root object, a new band (heap) is created in the GC. Child objects of the root object (i.e. objects only referred to by the root object and other child objects in its tree) are placed in the same band. When the root object goes out of scope the entire band is freed. This is by definition safe, because the static analysis has ensured that there are no references between child objects in the tree and out-of-tree (out-of-band) objects. This property also means that normal GC runs do not need to add the scoped root object as a GC root object - this memory will normally only be freed when the scoped root object at the top of the tree goes out of scope. If memory becomes constrained then the bands can be added as root objects to the GC and memory incrementally freed just as with regularly allocated objects. Sorry if this idea is daft and I've wasted your time!
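To make the band mechanics concrete, here is a hand-rolled sketch of the allocation pattern; the Band type and its design are purely hypothetical illustration, not a proposed GC implementation:

import core.stdc.stdlib : malloc, free;

// A 'band': a bump-pointer region whose contents are freed en bloc when
// the root scoped object goes out of scope - no per-object scanning needed.
struct Band
{
    ubyte* memory;
    size_t used, capacity;

    @disable this(this); // one owner: the root object's scope

    this(size_t capacity)
    {
        this.capacity = capacity;
        memory = cast(ubyte*) malloc(capacity);
    }

    void* allocate(size_t size)
    {
        assert(used + size <= capacity, "band exhausted");
        void* p = memory + used;
        used += size;
        return p;
    }

    // Root out of scope: the whole tree of child allocations dies in one shot.
    ~this()
    {
        free(memory);
    }
}

void main()
{
    auto band = Band(1024);
    int* child = cast(int*) band.allocate(int.sizeof);
    *child = 12;
    // band, and every child allocation inside it, is freed at end of scope
}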
Re: GCs in the news
On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote: On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote: If D came without GC, it would have replaced C++ a long time ago! Agree +1000. If GC is so good, why not make it an option, have a base lib w/o GC. If I want GC, I got me JRE. It seems that some in D want to write a better JRE, and that just won't happen ever. Cheers, Vic I can't think of anyone posting here, to be honest, who wants to write a better JRE. The JRE provides a virtual machine, and Java compiles to bytecode that is run on the JVM. On the contrary, and in accordance with the core principle that D is a systems programming language, D compiles to native and (hopefully) highly optimised machine code. There does exist something of a 'culture clash' where, by the very nature of GCs, there can be not-insignificant pauses in the running of the program that would be inimical to real-time software such as high-res complex games, operating systems, drivers etc. The response to this in the forums is either to improve the GC so that it doesn't ever pause for more than a certain amount of time (e.g. concurrent GCs, removing the global lock so other threads can continue to run), or to offer alternative memory management approaches such as ARC, which can also pause, but at other points as the program runs. Personally I'm a bit disappointed that the good work that has been done on GCs so far doesn't seem to be being picked up and run with, and nor do I see any reasons given as to why that is the case. Andrei was threatening to start another GC at one point, but unfortunately I haven't seen any more of that, and we all know how short of time everyone seems to be these days. Also on a personal note, I see some slightly snarky comments about D targeting C# and Java. Well, from my perspective I'm extremely happy with the fact that D is a better C# and a better Java. I just wish it had Qt (I must finish my bindings for Qt) and/or ran on Android! The GC issues are irrelevant for me.
Re: Need Feedback for a new book - "D Cookbook"
Hi Paushali, I'm interested in reviewing the book; you can contact me at alynch4...@gmail.com. I'd put the review on Amazon.
Re: A Perspective on D from game industry
On Sunday, 15 June 2014 at 20:10:34 UTC, Walter Bright wrote: On 6/15/2014 9:20 AM, Xinok wrote: Given that he lives in Italy, it's safe to assume that English is not his first language. But rather than consider what he has to say or dispute his arguments, you completely dismissed his point of view because his level of writing doesn't meet your standards. Xinok does have a point that we all should be aware of. I've found a very strong correlation between poor writing skills and disorganized thinking. (Your point about non-native English speakers is well taken; one must not confuse unfamiliarity with English with disorganized thinking.) I'm hardly the only one. If one wants their views to be taken seriously, pay attention to spelling, grammar, paragraphs, organized writing, etc. There's an awful lot of stuff to read on the internet, and poor writing often elicits a "meh, I'll skip this one and move on" reaction. True, but if I'm going to judge a comment by the way it's written, I'll take a second-language piece over a foul and insulting rant any day of the week.
Re: A Perspective on D from game industry
On Sunday, 15 June 2014 at 13:19:12 UTC, Russel Winder via Digitalmars-d wrote: On Sun, 2014-06-15 at 12:30 +, Abdulhaq via Digitalmars-d wrote: […] learning the Android API - after all, JDK8 + tooling is bearable now. On the other hand Android API is Apache Harmony which is Java 6. Yes I keep forgetting that - wishful thinking maybe. Of note: Groovy finally works on Android, so you can use what Java 8 brings, on Java 6 and Java 7 using Groovy. And note Groovy may be a dynamic language, but it is also a static language. I'll look into it. Perhaps this question is just too broad, but if you wanted to develop an application on the Android platform right now, what approach would you take? Java, Groovy, web-based?
Re: A Perspective on D from game industry
On Sunday, 15 June 2014 at 11:28:12 UTC, Peter Alexander wrote: http://c0de517e.blogspot.ca/2014/06/where-is-my-c-replacement.html?m=1 The arguments against D are pretty weak if I'm honest, but I think it's important we understand what people think of D. I can confirm this sentiment is fairly common in the industry. Watch out for the little jab at Andrei :-P Reading his summary of the alternatives I felt D came out clearly on top, it's just that he didn't have the motivation to switch. Towards the end he mentions the web, for me (as an application developer rather than systems level guy) Android/iOS is the fly in the ointment - I'm torn as to whether to invest my energies in following D through its explorations or knuckling down and learning the Android API - after all, JDK8 + tooling is bearable now.
Re: SurveyMonkey for D users OS - Results
On Saturday, 31 May 2014 at 13:52:46 UTC, Rikki Cattermole wrote: On 1/06/2014 1:45 a.m., Abdulhaq wrote: On Saturday, 31 May 2014 at 13:37:26 UTC, Abdulhaq wrote: There's been 100 votes and the results are: Linux 64 bits: 53 Linux 32 bits: 4 Windows 64 bits: 27 Windows 32 bits: 3 Mac: 7 Other: 6: "ArchLinux" "Android" "Centos 6" "MAC OSX, LINUX 64, Windows 64, FreeBSD 64" "bsd64" One 'other' vote was spoiled. It turns out that the free SurveyMonkey account only allows 100 votes max, but the profile has been much the same since 50 votes, so I think the ratios are clear. If anyone has an OS other than the ones mentioned above then perhaps they could mention it in this thread. See the graph at https://www.surveymonkey.com/results/SM-5GGGJV5/ I'm personally not surprised by these results. But they will be skewed because of time zones and the limited number of participants. Which is a shame. Not to mention all those who use D but don't read the NG. Shame it didn't make 24 hrs, as all time zones would then have been covered; still, I think it's probably a pretty fair picture of the whole thing. I'm wondering what the Linux 32-bit usage is - embedded, I guess. 64 bits seems to dominate in general. A couple of Linux users seem not to know whether they are 32 or 64 bit?
Re: SurveyMonkey for D users OS - Results
On Saturday, 31 May 2014 at 13:37:26 UTC, Abdulhaq wrote: There's been 100 votes and the results are: Linux 64 bits: 53 Linux 32 bits: 4 Windows 64 bits: 27 Windows 32 bits: 3 Mac: 7 Other: 6: "ArchLinux" "Android" "Centos 6" "MAC OSX, LINUX 64, Windows 64, FreeBSD 64" "bsd64" One 'other' vote was spoiled. It turns out that the free SurveyMonkey account only allows 100 votes max, but the profile has been much the same since 50 votes, so I think the ratios are clear. If anyone has an OS other than the ones mentioned above then perhaps they could mention it in this thread. See the graph at https://www.surveymonkey.com/results/SM-5GGGJV5/
SurveyMonkey for D users OS - Results
There's been 100 votes and the results are:

Linux 64 bits: 53
Linux 32 bits: 4
Windows 64 bits: 27
Windows 32 bits: 3
Mac: 7
Other: 6 ("ArchLinux", "Android", "Centos 6", "MAC OSX, LINUX 64, Windows 64, FreeBSD 64", "bsd64")

One 'other' vote was spoiled. It turns out that the free SurveyMonkey account only allows 100 votes max, but the profile has been much the same since 50 votes, so I think the ratios are clear. If anyone has an OS other than the ones mentioned above then perhaps they could mention it in this thread.
Re: SurveyMonkey for D users OS
On Friday, 30 May 2014 at 18:56:49 UTC, Kagamin wrote: On Friday, 30 May 2014 at 17:39:00 UTC, Abdulhaq wrote: I won't skew the results by spilling the beans just yet. Do you think users will migrate to another OS just for this survey if they see the results? No, I don't think that. What can happen is that people see that 'their team is falling behind' and therefore vote when they otherwise would not be inclined. It can lead to more evenly balanced (and incorrect) results than a blind survey would see (it seems to me).
Re: SurveyMonkey for D users OS
On Friday, 30 May 2014 at 18:20:21 UTC, Dejan Lekic wrote: Abdulhaq wrote: I've created a SurveyMonkey survey to gather stats on D users OS usage. It only takes a few seconds to answer: https://www.surveymonkey.com/s/3BJRWP8 We can leave it running for a week or so, I'll keep you updated on results. Abdulhaq No offense, but for one question survey you could easily ask here instead of SurveyMonkey. No offense taken. IMO threads tend to go off topic too easily to make them a good tool for surveys. This way makes it very easy to keep focus and gather results. It only took me a few minutes to set it up.
Re: SurveyMonkey for D users OS
On Friday, 30 May 2014 at 17:24:51 UTC, Abdulhaq wrote: I've created a SurveyMonkey survey to gather stats on D users OS usage. It only takes a few seconds to answer: https://www.surveymonkey.com/s/3BJRWP8 We can leave it running for a week or so, I'll keep you updated on results. Abdulhaq 12 responses in 12 minutes; keep voting, folks, it's interesting :-) I won't skew the results by spilling the beans just yet.
SurveyMonkey for D users OS
I've created a SurveyMonkey survey to gather stats on D users OS usage. It only takes a few seconds to answer: https://www.surveymonkey.com/s/3BJRWP8 We can leave it running for a week or so, I'll keep you updated on results. Abdulhaq