Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On Sunday, 4 May 2014 at 19:19:57 UTC, Nick Sabalausky wrote: -Jsource/example/media: Use stringImportPaths to specify import paths in a compiler independent way Error: multiple definition of tcl_38_307: _Tcl_Main and Tcl_Main: _Tcl_Main These errors should now be fixed in Tkd v1.0.1-beta.
Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On Monday, 5 May 2014 at 08:58:34 UTC, Gary Willoughby wrote: On Sunday, 4 May 2014 at 19:19:57 UTC, Nick Sabalausky wrote: -Jsource/example/media: Use stringImportPaths to specify import paths in a compiler independent way Error: multiple definition of tcl_38_307: _Tcl_Main and Tcl_Main: _Tcl_Main These errors should now be fixed in Tkd v1.0.1-beta. I don't have the time to use this right now, but I will need a working _portable_ GUI framework in a near-future project, so I just want to say that this is awesome. BTW, this is probably not a good idea in the near future, but I think it would be good to have a builtin Tkinter equivalent for D. Even if it's not completely Phobos-like (Tkinter is not entirely pythonic either), having a basic GUI framework ready to go without any extra setup is a huge advantage.
Stand Back! D-Shirt
Hi, I don't think I told anyone, but I recall that it got a few smiles off people, and others expressed wishes to get something similar. If you recall the Stand Back! shirt I was wearing last year at DConf 2013, it was designed by myself, and made by CafePress. I need to find a way to make it more visible (took me about 30 minutes to find it in my own dashboard!) But here is the linky for it. http://www.cafepress.co.uk/cp/customize/product2.aspx?number=801347013 Whilst I'm confident that reprints should come out just fine, I sometimes find that CafePress can be hit or miss in terms of getting the final product printed correctly as you see it in preview. I'm also thinking about designing another D-Shirt for this year. :o) Regards Iain.
Re: My D book is now officially coming soon
On Monday, 3 March 2014 at 16:37:49 UTC, Adam D. Ruppe wrote: As some of you might know, I've been working on a D book over the last few months. It is now available as coming soon on the publisher's website: http://www.packtpub.com/discover-advantages-of-programming-in-d-cookbook/book Congrats. Is there an early access option, like Manning Early Access Program, instead of just pre-order? Thanks Dan
Re: My D book is now officially coming soon
We're publishing in about two weeks now so it won't be long until the real thing is out anyway!
Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On 5/5/2014 4:58 AM, Gary Willoughby wrote: On Sunday, 4 May 2014 at 19:19:57 UTC, Nick Sabalausky wrote: -Jsource/example/media: Use stringImportPaths to specify import paths in a compiler independent way Error: multiple definition of tcl_38_307: _Tcl_Main and Tcl_Main: _Tcl_Main These errors should now be fixed in Tkd v1.0.1-beta. Excellent. I just grabbed the latest, copied the dlls and setup scripts, and it works now. I did file a couple of issues though: https://github.com/nomad-software/tcltk/issues/4 https://github.com/nomad-software/tkd/issues/11 Also, regarding DUB directory copying for the tcl init scripts, the docs mention a postBuildCommands which I would think could be used for that purpose (or for pretty much anything else).
Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk
On Monday, 5 May 2014 at 16:17:34 UTC, Nick Sabalausky wrote: Excellent. I just grabbed the latest, copied the dlls and setup scripts, and it works now. I did file a couple of issues though: https://github.com/nomad-software/tcltk/issues/4 https://github.com/nomad-software/tkd/issues/11 Ok, I'll take a look at these. Also, regarding DUB directory copying for the tcl init scripts, the docs mention a postBuildCommands which I would think could be used for that purpose (or for pretty much anything else). Yeah that's the plan but I need a nice method of referring to the output directory when used as a dependency. I've been through this with Sönke and he reckons the next version of dub will be able to do this. See: https://github.com/rejectedsoftware/dub/issues/299
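For reference, once dub can expose a dependency's directories, the hook might look something like this in a dub.json (a sketch only: the TCLTK_PACKAGE_DIR and OUTPUT_DIR variables are hypothetical placeholders, not actual dub syntax):

```json
{
    "name": "myapp",
    "dependencies": {
        "tkd": "~>1.0.1-beta"
    },
    "postBuildCommands": [
        "cp -r $TCLTK_PACKAGE_DIR/library $OUTPUT_DIR/"
    ]
}
```

The idea is simply that the dependency's init scripts get copied next to the built executable after every build, rather than by hand.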
Re: Stand Back! D-Shirt
On Monday, 5 May 2014 at 11:49:23 UTC, Iain Buclaw wrote: Hi, I don't think I told anyone, but I recall that it got a few smiles off people, and others expressed wishes to get something similar. If you recall the Stand Back! shirt I was wearing last year at DConf 2013, it was designed by myself, and made by CafePress. I need to find a way to make it more visible (took me about 30 minutes to find it in my own dashboard!) But here is the linky for it. http://www.cafepress.co.uk/cp/customize/product2.aspx?number=801347013 Whilst I'm confident that reprints should come out just fine, I sometimes find that CafePress can be hit or miss in terms of getting the final product printed correctly as you see it in preview. I'm also thinking about designing another D-Shirt for this year. :o) Regards Iain. Here's a link for those in the US: http://www.cafepress.com/cp/customize/product2.aspx?number=801347013 I think I might buy one of these!
Re: More radical ideas about gc and reference counting
On Sunday, 4 May 2014 at 22:56:41 UTC, H. S. Teoh via Digitalmars-d wrote: On Sat, May 03, 2014 at 10:48:47PM -0500, Caligo via Digitalmars-d wrote: [...] Last but not least, currently there are two main ways for new features to make it into D/Phobos: you either have to belong to the inner circle, or have to represent some corporation that's doing something with D. I'm sorry, but this is patently false. I am neither in the inner circle, nor do I represent any corporation, yet I've had many changes pulled into Phobos (including brand new code). I can't say I'm perfectly happy with the D development process either, but this kind of accusation is bordering on slander, and isn't helping anything. T There is a lot of truth in what Caligo has said, but I would word that part of it differently. A couple years ago I submitted std.rational, but it didn't go anywhere. About a year later I discovered that someone else had done a similar thing, but it never made it into Phobos either. Of course, it's not because we didn't belong to some inner circle, but I think it has to do with the fact that D has a very poor development process. The point being, something as simple as a Rational library shouldn't take years for it to become part of Phobos, especially when people are taking the time to do the work. --Arlen
Re: Scenario: OpenSSL in D language, pros/cons
On Sun, 04 May 2014 21:18:22 + Daniele M. via Digitalmars-d digitalmars-d@puremagic.com wrote: On Sunday, 4 May 2014 at 10:23:38 UTC, Jonathan M Davis via Digitalmars-d wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. I don't know what all OpenSSL is/was doing, and I haven't looked into it in great detail. I'm familiar with what caused heartbleed, and I'm somewhat familiar with OpenSSL's API from having dealt with it at work, but most of what I know about OpenSSL, I know from co-workers who have had to deal with it and other stuff that I've read about it, and in general, from what I understand, it's just plain badly designed and badly written, and it's a miracle that it works as well as it does. Most of the problems seem to stem from how the project is managed (including having horrible coding style and generally not liking to merge patches), but it's also certain that a number of the choices that they've made make it easier for security problems to creep in (e.g. using their own malloc in an attempt to gain some speed on some OSes). From what I know of SSL itself (and I've read some of the spec, but not all of it), very little of it (and probably none of it save for the actual operations on the sockets) actually requires anything that's @system. The problem is when you go to great lengths to optimize the code, which the OpenSSL guys seem to have done. When you do that, you do things like turn off array bounds checking and generally try and avoid many of the safety features that a language like D provides, since many of them do incur at least some overhead. Actually implementing SSL itself wouldn't take all that long from what I understand. 
The main problem is in maintenance - probably in particular with regards to the fact that you'd have to keep adding support for more encryption methods as they come out (which technically aren't part of SSL itself), but I'm not familiar enough with the details to know all of the nooks and crannies that would cause maintenance nightmares. The base spec is less than 70 pages long though. The fellow who answered the question here seems to think that implementing SSL itself is actually fairly easy and that he's done it several times for companies already: http://security.stackexchange.com/questions/55465 I fully expect that if someone were to implement it in D, it would be safer out of the box than a C implementation would be. But if you had to start playing tricks to get it faster, that would increase the security risk, and in order for folks to trust it, you'd have to get the code audited, which is a whole other order of pain (and one that potentially costs money, depending on who does the auditing). If I had more time, I'd actually be tempted to write an SSL implementation in D, but even if I were to do an excellent job of it, it would still need to be vetted by security experts to make sure that it didn't have horrible security bugs in it (much as it would be likely that there would be fewer thanks to the fact that it would be written in D), and I suspect that it's the kind of thing that many people aren't likely to trust because of how critical it is. Nobody would expect/trust a single person to do this job :P Working in an open source project would be best. If someone around here implemented SSL in D, I fully expect that it would be open source, and I fully expect that one person could do it. It's just a question of how long it would take them - though obviously sharing the work among multiple people could make it faster. Where it's definitely required that more people get involved is when you want the code audited to ensure that it's actually safe and secure. 
And that's where implementing an SSL library is fundamentally different from implementing most libraries - it's so integral to security that it doesn't really cut it to just throw an implementation together and toss it out there for folks to use it if they're interested. Unfortunately, even if something better _were_ written in D, it's probably only the D folks who would benefit, since it's not terribly likely at this point that very many folks are going to wrap a D library in order to use it in another language. Here I don't completely agree: if we can have a binary-compatible implementation done in D, then we would be able to modify software to eventually use it as a dependency. I don't see the necessary D dependencies as prohibitive here. If D were to be part of a typical Linux distro, a D compiler would have to be part of a typical Linux distro. We're getting there with gdc as it's getting merged into gcc, but I don't think that it's ended up on many distros.
Re: Progress on Adam Wilson's Graphics API?
On 04/05/14 20:26, Jonas Drewsen wrote: Just had a quick look at the source code. If this is to be something like the official gfx library wouldn't it make sense to follow the phobos coding style? For example struct Size instead of struct SIZE To me, most code there looks like bindings. -- /Jacob Carlborg
Re: Scenario: OpenSSL in D language, pros/cons
On 04/05/14 23:20, Daniele M. wrote: You are right, devs would eventually abuse everything possible, although it would make it for sure more visible: you cannot advertise an un-@safe library as @safe, although I agree that a lot depends on devs/users culture. In D, you can at least statically determine which part of the code is @safe and un-@safe. -- /Jacob Carlborg
Re: Scenario: OpenSSL in D language, pros/cons
On Sun, 04 May 2014 13:29:33 + Meta via Digitalmars-d digitalmars-d@puremagic.com wrote: The only language I would really trust is one in which it is impossible to write unsafe code, because you can then know that the developers can't use such unsafe hacks, even if they wanted to. Realistically, I think that you ultimately have to rely on the developers doing a good job. Good tools help a great deal (including a programming language that's safe by default while still generally being efficient), but if you try and restrict the programmer such that they can only do things that are guaranteed to be safe, I think that you're bound to make it impossible to do a number of things, which tends to not only be very frustrating to the programmers, but it can also make it impossible to get the performance that you need in some circumstances. So, while you might be able to better trust a library written in a language that's designed to make certain types of problems impossible, I don't think that it's realistic for that language to get used much in anything performance critical like an SSL implementation. Ultimately, I think that the trick is to make things as safe as they can be without actually making it so that the programmer can't do what they need to be able to do. And while I don't think that D hit the perfect balance on that one (e.g. we should have made @safe the default if we wanted that), I think that we've done a good job of it overall - certainly far better than C or C++. - Jonathan M Davis
Re: More radical ideas about gc and reference counting
On 05/05/14 00:55, H. S. Teoh via Digitalmars-d wrote: I'm sorry, but this is patently false. I am neither in the inner circle, nor do I represent any corporation, yet I've had many changes pulled into Phobos (including brand new code). I think he's referring to language changes. Things that will require a DIP, i.e. not just adding a new __trait. -- /Jacob Carlborg
Re: FYI - mo' work on std.allocator
On Monday, 28 April 2014 at 16:03:33 UTC, Andrei Alexandrescu wrote: Fair enough, I'll remove that part of the spec. Thanks! -- Andrei According to the docs, the multiple of sizeof(void*) restriction only applies to posix_memalign (and not to _aligned_malloc and aligned_alloc). On Monday, 5 May 2014 at 04:04:56 UTC, Andrei Alexandrescu wrote: On 5/4/14, 8:06 PM, Marco Leise wrote: Virtual memory allocators seem obvious, but there are some details to consider. 1) You should not hard code the allocation granularity in the long term. It is fairly easy to get it on Windows and Posix systems: On Windows: SYSTEM_INFO si; GetSystemInfo(&si); return si.dwAllocationGranularity; On Posix: return sysconf(_SC_PAGESIZE); I've decided that runtime-chosen page sizes are too much of a complication for the benefits. Do the complications arise in the MmapAllocator or the higher level allocators? I'd like to see a Windows allocator based on VirtualAlloc, and afterwards a SystemAllocator defined to be MmapAllocator on unix-based and VirtualAlloc based on Windows. This SystemAllocator would ideally have an allocationGranularity property.
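The allocationGranularity property being asked for can be sketched in a few lines on top of the standard druntime bindings. This is illustrative only - the property name is Marco's suggestion, not anything in std.allocator:

```d
// Sketch of a runtime allocation-granularity query, as discussed above.
// Uses only the stock core.sys bindings; the name allocationGranularity
// is taken from the proposal, not from any existing std.allocator API.
version (Windows)
{
    import core.sys.windows.windows : GetSystemInfo, SYSTEM_INFO;

    @property size_t allocationGranularity()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);   // granularity VirtualAlloc rounds up to
        return si.dwAllocationGranularity;
    }
}
else version (Posix)
{
    import core.sys.posix.unistd : sysconf, _SC_PAGESIZE;

    @property size_t allocationGranularity()
    {
        return cast(size_t) sysconf(_SC_PAGESIZE);   // mmap page size
    }
}

unittest
{
    // Granularity is always a positive power of two on both families.
    auto g = allocationGranularity;
    assert(g > 0 && (g & (g - 1)) == 0);
}
```

A SystemAllocator could cache this once at startup if calling into the OS per allocation is a concern.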
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 06:35:07 UTC, Jonathan M Davis via Digitalmars-d wrote: On Sun, 04 May 2014 13:29:33 + Meta via Digitalmars-d digitalmars-d@puremagic.com wrote: The only language I would really trust is one in which it is impossible to write unsafe code, because you can then know that the developers can't use such unsafe hacks, even if they wanted to. Realistically, I think that you ultimately have to rely on the developers doing a good job. Good tools help a great deal (including a programming language that's safe by default while still generally being efficient), but if you try and restrict the programmer such that they can only do things that are guaranteed to be safe, I think that you're bound to make it impossible to do a number of things, which tends to not only be very frustrating to the programmers, but it can also make it impossible to get the performance that you need in some circumstances. So, while you might be able to better trust a library written in a language that's designed to make certain types of problems impossible, I don't think that it's realistic for that language to get used much in anything performance critical like an SSL implementation. Ultimately, I think that the trick is to make things as safe as they can be without actually making it so that the programmer can't do what they need to be able to do. And while I don't think that D hit the perfect balance on that one (e.g. we should have made @safe the default if we wanted that), I think that we've done a good job of it overall - certainly far better than C or C++. - Jonathan M Davis Sometimes I wonder how much money C design decisions have cost the industry in terms of anti-virus, static and dynamic analyzer tools, operating system security enforcement, security research and so on. All avoidable with bounds checking by default and no implicit conversions between arrays and pointers. -- Paulo
Thread name conflict
Importing both core.thread and std.regex results in a conflict as both define a Thread type. Perhaps the regex module's author assumed there'd be no clash since it's a template - Thread(DataIndex). Should I file a bug suggesting a name change? Or maybe D ought to allow both parameterised and normal types to have the same name - C# for example allows it.
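The clash and the usual workaround can be shown in a few lines. The renamed-import trick below is standard D; the module layout is a made-up sketch, and the conflict itself applies to the std.regex of the era being discussed:

```d
import core.thread;   // defines class Thread
import std.regex;     // also exposes a Thread(DataIndex) template, hence the clash

// Using the bare name Thread here would be ambiguous, so pick one
// explicitly with a renamed import:
import core.thread : OsThread = Thread;

// Run a delegate on its own OS thread and wait for it to finish.
void runWorker(void delegate() work)
{
    auto t = new OsThread(work);
    t.start();
    t.join();
}
```

A selective import (`import core.thread : Thread;`) in the scope that needs it works just as well; both sidestep the module-level conflict without touching either library.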
Re: Scenario: OpenSSL in D language, pros/cons
On Mon, 05 May 2014 07:39:13 + Paulo Pinto via Digitalmars-d digitalmars-d@puremagic.com wrote: Sometimes I wonder how much money C design decisions have cost the industry in terms of anti-virus, static and dynamic analyzer tools, operating system security enforcement, security research and so on. All avoidable with bounds checking by default and no implicit conversions between arrays and pointers. Well, a number of years ago, the folks who started the codebase of the larger products at the company I work at insisted on using COM everywhere, because we _might_ have to interact with 3rd parties, and they _might_ not want to use C++. So, foolishly, they mandated that _nowhere_ in the codebase should any C++ objects be passed around except by pointer. They then had manual reference counting on top of that to deal with memory management. That decision has cost us man _years_ in time working on reference counting-related bugs. Simply using smart pointers instead would probably have saved the company millions. COM may have its place, but forcing a whole C++ codebase to function that way was just stupid, especially when pretty much none of it ever had to interact directly with 3rd party code (and even if it had, it should have been done through strictly defined wrapper libraries; it doesn't make sense that 3rd parties would hook into the middle of your codebase). Seemingly simple decisions can have _huge_ consequences - especially when that decision affects millions of lines of code, and that's definitely the case with some of the decisions made for C. Some of them may have been unavoidable given the hardware situation and programming climate at the time that C was created, but we've been paying for them ever since. And unfortunately, the way things are going at this point, nothing will ever really overthrow C. We'll have to deal with it on some level for a long, long time to come. - Jonathan M Davis
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 08:04:24 UTC, Jonathan M Davis via Digitalmars-d wrote: On Mon, 05 May 2014 07:39:13 + Paulo Pinto via Digitalmars-d digitalmars-d@puremagic.com wrote: Sometimes I wonder how much money C design decisions have cost the industry in terms of anti-virus, static and dynamic analyzer tools, operating system security enforcement, security research and so on. All avoidable with bounds checking by default and no implicit conversions between arrays and pointers. Well, a number of years ago, the folks who started the codebase of the larger products at the company I work at insisted on using COM everywhere, because we _might_ have to interact with 3rd parties, and they _might_ not want to use C++. So, foolishly, they mandated that _nowhere_ in the codebase should any C++ objects be passed around except by pointer. They then had manual reference counting on top of that to deal with memory management. That decision has cost us man _years_ in time working on reference counting-related bugs. Simply using smart pointers instead would probably have saved the company millions. COM may have its place, but forcing a whole C++ codebase to function that way was just stupid, especially when pretty much none of it ever had to interact directly with 3rd party code (and even if it had, it should have been done through strictly defined wrapper libraries; it doesn't make sense that 3rd parties would hook into the middle of your codebase). So the decision was made and CComPtr and _com_ptr_t weren't used?! I feel your pain. Well, now the codebase is WinRT ready. :) Seemingly simple decisions can have _huge_ consequences - especially when that decision affects millions of lines of code, and that's definitely the case with some of the decisions made for C. Some of them may have been unavoidable given the hardware situation and programming climate at the time that C was created, but we've been paying for them ever since. 
And unfortunately, the way things are going at this point, nothing will ever really overthrow C. We'll have to deal with it on some level for a long, long time to come. - Jonathan M Davis I doubt those decisions really made sense, given the other system programming languages at the time. Most of them did bounds checking, had no implicit conversions and operating systems were being written with them. Algol 60 reference compiler did not allow disabling bounds checking at all, for example. Quote from Tony Hoare's ACM award article [1]: A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law. [1] http://www.labouseur.com/projects/codeReckon/papers/The-Emperors-Old-Clothes.pdf
Re: Parallel execution of unittests
On 1 May 2014 18:40, Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com wrote: On 5/1/14, 10:32 AM, Brad Anderson wrote: It hasn't been C:\TEMP for almost 13 years About the time when I switched :o). -- Andrei Amen to that! (Me too)
Re: Scenario: OpenSSL in D language, pros/cons
On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries.
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 00:40:41 UTC, Walter Bright wrote: D has so many language features, we need a higher bar for adding new ones, especially ones that can be done straightforwardly with existing features. Sure, but you'll have to agree that there comes a point where library solutions end up being so syntactically convoluted that it becomes difficult to visually parse. Bad-practice, nonsensical example: version(ParallelUnittests) const const(TypeTuple!(string, "name", UnittestImpl!SomeT, "test", bool, "result")) testTempfileHammering(string name, alias fun, SomeT, Args...)(Args args) pure @safe @(TestSuite.standard) if ((name.length > 0) && __traits(parallelizable, fun) && !is(Args) && (Args.length > 2) && allSatisfy!(isSomeString, args)) { /* ... */ }
Re: Parallel execution of unittests
Walter Bright: D has so many language features, we need a higher bar for adding new ones, especially ones that can be done straightforwardly with existing features. If I am not wrong, all that is needed here is a boolean compile-time flag, like __is_main_module. I think this is a small enough feature that gives enough back, and saves enough time, to deserve to be a built-in feature. I have needed this for four or five years and the need/desire isn't going away. Bye, bearophile
Re: GC vs Resource management.
On Sunday, 4 May 2014 at 16:13:23 UTC, Andrei Alexandrescu wrote: On 5/4/14, 4:42 AM, Marc Schütz schue...@gmx.net wrote: But I'm afraid your suggestion is unsafe: There also needs to be a way to guarantee that no references to the scoped object exist when it is destroyed. Actually, it should be fine to call the destructor, then blast T.init over the object, while keeping the actual memory in the GC. This possible approach has come up a number of times, and I think it has promise. -- Andrei Then accesses at runtime would still appear to work, but you're actually accessing something other than what you believe you are. IMO, this is almost as bad as silent heap corruption. Such code should just be rejected at compile-time, if at all possible.
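The behaviour Marc is objecting to is essentially what object.destroy already does today: it runs the destructor and resets the memory to T.init while the GC keeps it alive. A minimal illustration (the Resource class is made up for the sketch):

```d
class Resource
{
    int handle;                  // .init value is 0
    this(int h) { handle = h; }
}

unittest
{
    auto r = new Resource(42);
    auto dangling = r;           // a second reference escapes

    destroy(r);                  // dtor runs; memory is reset to Resource.init

    // The dangling reference still "works" at runtime, with no crash,
    // but it silently reads the .init state, not the object you built.
    assert(dangling.handle == 0);
}
```

That assert passing is exactly the "appears to work but accesses something else" problem: no memory corruption, yet the program quietly computes with stale state.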
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 09:32:40 UTC, JR wrote: On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries. I/O is not forbidden, it's just that writeln and friends currently can't be made safe, but that is being worked on AFAIK. While I/O usually goes through the OS, the system calls can be manually verified and made @trusted.
Re: Parallel execution of unittests
On Mon, 05 May 2014 10:00:54 + bearophile via Digitalmars-d digitalmars-d@puremagic.com wrote: Walter Bright: D has so many language features, we need a higher bar for adding new ones, especially ones that can be done straightforwardly with existing features. If I am not wrong, all that is needed here is a boolean compile-time flag, like __is_main_module. I think this is a small enough feature and gives enough back that saves time, to deserve to be a built-in feature. I have needed this for four or five years and the need/desire isn't going away. As far as I can tell, adding a feature wouldn't add much over simply using a version block for defining your demos. Just because something is done in python does not mean that it is appropriate for D or that it requires adding features to D in order to support it. Though I confess that I'm biased against it, because not only have I never needed the feature that you're looking for, but I'd actually consider it bad practice to organize code that way. It makes no sense to me to make it so that any arbitrary module can be the main module for the program. Such code should be kept separate IMHO. And I suspect that most folks who either haven't done much with python and/or who don't particularly like python would agree with me. Maybe even many of those who use python would; I don't know. Regardless, I'd strongly argue that this is a case where using user-defined versions is the obvious answer. It may not give you what you want, but it gives you what you need in order to make it so that a module has a main that's compiled in only when you want it to be. And D is already quite complicated. New features need to pass a high bar, and adding a feature just so that something is built-in rather than using an existing feature which solves the problem fairly simply definitely does not pass that bar IMHO. I'm completely with Walter on this one. - Jonathan M Davis
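The version-block approach Jonathan describes, sketched for a single-file library module (the module name, the convert function, and the Demo version identifier are all invented for the example):

```d
module soundconv;   // hypothetical single-module library

// The library's actual functionality, importable from anywhere.
string convert(string path)
{
    return path ~ ".wav";
}

// A standalone demo main, compiled in only on request:
//   dmd -version=Demo soundconv.d   -> builds the demo executable
//   import soundconv;               -> gets only the library, no main
version (Demo)
void main()
{
    import std.stdio : writeln;
    writeln(convert("input.flac"));
}
```

This is the D equivalent of Python's `if __name__ == "__main__":` idiom, minus the automatic detection: the flag is chosen by the user on the command line rather than derived by the compiler.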
Re: Scenario: OpenSSL in D language, pros/cons
On Mon, 05 May 2014 10:24:27 + via Digitalmars-d digitalmars-d@puremagic.com wrote: On Monday, 5 May 2014 at 09:32:40 UTC, JR wrote: On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries. I/O is not forbidden, it's just that writeln and friends currently can't be made safe, but that is being worked on AFAIK. While I/O usually goes through the OS, the system calls can be manually verified and made @trusted. As the underlying OS calls are all C functions, there will always be @system code involved in I/O, but in most cases, we should be able to wrap those functions in D functions which are @trusted. Regardless, I would think that SSL could be implemented without sockets - that is, all of its operations should be able to operate on arbitrary data regardless of whether that data is sent over a socket or not. And if that's the case, then even if the socket operations themselves had to be @system, then everything else should still be able to be @safe. Most of the problems with @safe stem either from library functions that don't use it like they should, or because the compiler does not yet do a good enough job with attribute inference on templated functions. Both problems are being addressed, so the situation will improve over time. 
Regardless, there's nothing fundamentally limited about @safe except for operations which are actually unsafe with regards to memory, and any case where something isn't @safe when it's actually memory safe should be and will be fixed (as well as any situation which isn't memory safe but is considered @safe anyway - we do unfortunately still have a few of those). - Jonathan M Davis
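The wrapping pattern described above can be shown concretely: the raw C call is @system, the hand-audited D wrapper is marked @trusted, and everything above it stays @safe. Posix is assumed here, and the helper name safeWrite is made up for the sketch:

```d
import core.sys.posix.sys.types : ssize_t;
import core.sys.posix.unistd : write;   // raw @system C prototype

// @trusted: we manually verify that the pointer/length pair handed to
// the C call always stays within the bounds of the slice we were given.
@trusted ssize_t safeWrite(int fd, const(void)[] buf)
{
    return write(fd, buf.ptr, buf.length);
}

// Callers never touch the raw pointer, so they can be @safe.
@safe void greet()
{
    safeWrite(1, "hello from @safe code\n");
}
```

The compiler checks the @safe parts mechanically; only the small @trusted surface needs human review, which is exactly the separation being argued for in an SSL implementation.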
Re: Parallel execution of unittests
Jonathan M Davis: Just because something is done in python does not mean that it is appropriate for D or that it requires adding features to D in order to support it. I agree. On the other hand now I have years of experience in both languages and I still have this need in D. It makes no sense to me to make it so that any arbitrary module can be the main module for the program. This feature is mostly for single modules, that you can download from archives, the web, etc. So it's for library code contained in single modules. In Python, code is usually short, so in a single module you can implement many data structures, data visualization, data converters, etc. So it's quite handy for such modules to have a demo, or even an interactive demo. Or they can be used with command line arguments (with getopt), like a sound file converter. And then you can also import this module from other modules to perform the same operation (like sound file conversion) from your code. So you can use it both as a program that does something, and as a module for a larger system. Such code should be kept separate IMHO. This means that you now have two modules, so to download them atomically you need some kind of packaging, like a zip. If your project is composed of many modules this is not a problem. But if you have a single-module project (and this happens often in Python), going from 1 to 2 files is not nice. I have written tens of reusable D modules, and some of them have a demo or are usable stand-alone when you have simpler needs. Maybe even many of those who use python would; I don't know. In Python it is a very commonly used idiom. And there is not much in D that makes the same idiom less useful :-) Bye, bearophile
Re: Thread name conflict
05-May-2014 12:03, John Chapman wrote: Importing both core.thread and std.regex results in a conflict as both define a Thread type. Perhaps the regex module's author assumed there'd be no clash since it's a template - Thread(DataIndex). Should I file a bug suggesting a name change? Or maybe D ought to allow both parameterised and normal types to have the same name - C# for example allows it. Neat. I couldn't make a better case for D to finally fix visibility of private symbols. Why the heck should internal symbols conflict with public ones from other modules? No idea. Seems like turning std.regex into a package will fix this one though, as I could put private things into std.regex.dont_touch_it_pretty_please. -- Dmitry Olshansky
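For readers hitting the same clash, a minimal sketch of the conflict and the usual workarounds on a 2014-era compiler (fully qualified names or a selective import; this assumes std.regex still leaks its private Thread template, as described above):

```d
import core.thread;
import std.regex;

void main()
{
    // Thread t;  // error: ambiguous, both modules declare a Thread symbol
    auto t = new core.thread.Thread(() {}); // a fully qualified name disambiguates
    assert(t !is null);
    // Alternatively, a selective import takes precedence in lookup:
    // import core.thread : Thread;
}
```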
Re: Thread name conflict
On Mon, 05 May 2014 15:55:13 +0400 Dmitry Olshansky via Digitalmars-d digitalmars-d@puremagic.com wrote: Why the heck should internal symbols conflict with public from other modules? No idea. Because no one has been able to convince Walter that it's a bad idea for private symbols to be visible. Instead, we've kept the C++ rules for that, and they interact very badly with module-level symbols - something that C++ doesn't have to worry about. Unfortunately, as I understand it, fixing it isn't quite as straightforward as making private symbols invisible. IIRC, Martin Nowak had a good example as to why as well as a way to fix the problem, but unfortunately, I can't remember the details now. Regardless, I think that most of us agree that the fact that private symbols conflict with those from other modules is highly broken. And it makes it _very_ easy to break code by making any changes to a module's implementation. The question is how to convince Walter. It'll probably require that someone just go ahead and implement it and then argue about a concrete implementation rather than arguing about the idea. - Jonathan M Davis
Re: Thread name conflict
On Monday, 5 May 2014 at 12:48:11 UTC, Jonathan M Davis via Digitalmars-d wrote: On Mon, 05 May 2014 15:55:13 +0400 Dmitry Olshansky via Digitalmars-d digitalmars-d@puremagic.com wrote: Why the heck should internal symbols conflict with public from other modules? No idea. Because no one has been able to convince Walter that it's a bad idea for private symbols to be visible. Instead, we've kept the C++ rules for that, and they interact very badly with module-level symbols - something that C++ doesn't have to worry about. As far as I know Walter does not object to changes here anymore. It is only a matter of agreeing on the final design and implementing it. Unfortunately, as I understand it, fixing it isn't quite as straightforward as making private symbols invisible. IIRC, Martin Nowak had a good example as to why as well as a way to fix the problem, but unfortunately, I can't remember the details now. I remember disagreeing with Martin about handling protection checks from template instances. Those are semantically verified at the declaration point, but an actual instance may legitimately need access to private symbols of the instantiating module (think template mixins). Probably there were other corner cases, but I can't remember the ones I have not been arguing about :) Anyway, DIP22 is on the agenda for DMD 2.067, so this topic is going to be back in a hot state pretty soon.
Re: Parallel execution of unittests
On Mon, 05 May 2014 11:26:29 + bearophile via Digitalmars-d digitalmars-d@puremagic.com wrote: Jonathan M Davis: Such code should be kept separate IMHO. This means that you now have two modules, so to download them atomically you need some kind of packaging, like a zip. If your project is composed of many modules this is not a problem. But if you have a single-module project (and this happens often in Python), going from 1 to 2 files is not nice. I have written tens of reusable D modules, and some of them have a demo or are usable stand-alone when you have simpler needs. Honestly, I wouldn't even consider distributing something that was only a single module in size unless it were on the scale of std.datetime, which we've generally agreed is too large for a single module. So, a single module wouldn't have enough functionality to be worth distributing. And even if I were to distribute such a module, I'd let its documentation speak for itself and otherwise just expect the programmer to read the code. Regardless, the version specifier makes it easy to have a version where main is defined for demos or whatever else you might want to do with it. So, I'd suggest just using that. I highly doubt that you'd be able to talk either Walter or Andrei into supporting a separate feature for this. At this point, we're trying to use what we already have to implement new things rather than adding new features to the language, no matter how minor it might seem. New language features are likely to be restricted to things where we really need them to be language features. And this doesn't fit that bill. - Jonathan M Davis
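The version approach Jonathan mentions can look like this; a hedged sketch, with the module name and functionality invented for illustration:

```d
// soundconv.d - a single-module library that doubles as a program when
// compiled with: dmd -version=demo soundconv.d
module soundconv;

/// The reusable functionality that other modules import.
int halveSampleRate(int rate) { return rate / 2; }

version (demo)
{
    // Only compiled in when -version=demo is passed, so importing this
    // module from a larger program never pulls in a second main().
    void main()
    {
        import std.stdio : writeln;
        writeln(halveSampleRate(44_100));
    }
}
```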
Re: python vs d
On Wednesday, 30 April 2014 at 19:28:24 UTC, Ola Fosheim Grøstad wrote: Restricting dicts and arrays to a single element type requires more complicated logic in some cases. How would you handle elements of unexpected type in those arrays? What if the mishandling is silent and causes a heisenbug? We had it, killed untyped arrays with fire, and then breathed a sigh of relief.
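The same trade-off in D terms (a brief sketch): element types are checked statically, and heterogeneity is opt-in via std.variant rather than silent:

```d
import std.variant : Variant;

void main()
{
    int[] a = [1, 2, 3];
    // a ~= "four";  // compile error: an unexpected element type can't sneak in

    // Heterogeneous storage must be requested explicitly:
    Variant[] b = [Variant(1), Variant("two")];
    assert(b[1].get!string == "two"); // and mishandling throws, not corrupts
}
```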
Re: Thread name conflict
On Mon, 05 May 2014 13:11:29 + Dicebot via Digitalmars-d digitalmars-d@puremagic.com wrote: On Monday, 5 May 2014 at 12:48:11 UTC, Jonathan M Davis via Digitalmars-d wrote: On Mon, 05 May 2014 15:55:13 +0400 Dmitry Olshansky via Digitalmars-d digitalmars-d@puremagic.com wrote: Why the heck should internal symbols conflict with public from other modules? No idea. Because no one has been able to convince Walter that it's a bad idea for private symbols to be visible. Instead, we've kept the C++ rules for that, and they interact very badly with module-level symbols - something that C++ doesn't have to worry about. As far as I know Walter does not object changes here anymore. It is only matter of agreeing on final design and implementing. Well, that's good to hear. Unfortunately, as I understand it, fixing it isn't quite as straightforward as making private symbols invisible. IIRC, Martin Nowak had a good example as to why as well as a way to fix the problem, but unfortunately, I can't remember the details now. I remember disagreeing with Martin about handling protection checks from template instances. Those are semantically verified at declaration point but actual instance may legitimately need access to private symbols of instantiating module (think template mixins). Probably there were other corner cases but I can't remember those I have not been arguing about :) IIRC, it had something to do with member functions, but I'd have to go digging through the newsgroup archives for the details. In general though, I think that private symbols should be ignored by everything outside of the module unless we have a very good reason to do otherwise. Maybe they should still be visible for the purposes of reflection or some other case where seeing the symbols would be useful, but they should never conflict with anything outside of the module without a really good reason. Anyway, DIP22 is on agenda for DMD 2.067 so this topic is going to be back to hot state pretty soon. 
It's long past time that we got this sorted out. - Jonathan M Davis
Re: Thread name conflict
On Monday, 5 May 2014 at 13:33:13 UTC, Jonathan M Davis via Digitalmars-d wrote: IIRC, it had something to do with member functions, but I'd have to go digging through the newsgroup archives for the details. In general though, I think that private symbols should be ignored by everything outside of the module unless we have a very good reason to do otherwise. Maybe they should still be visible for the purposes of reflection or some other case where seeing the symbols would be useful, but they should never conflict with anything outside of the module without a really good reason. This works now and must continue to work, I believe:

// a.d
mixin template TMPL() { void foo() { z = 42; } }

// b.d
import a;
private int z;
mixin TMPL!();
Re: Parallel execution of unittests
Jonathan M Davis: Honestly, I wouldn't even consider distributing something that was only a single module in size unless it were on the scale of std.datetime, which we've generally agreed is too large for a single module. So, a single module wouldn't have enough functionality to be worth distributing. This reasoning style is similar to the Groucho Marx quote: "I don't care to belong to any club that will have me as a member." In the Python world online you can find thousands of single-module projects (a few of them are mine). I have plenty of single D modules that encapsulate a single functionality. In Haskell cabal you can find many single modules that add functionality (plus larger projects like Diagrams). And I think D has to strongly encourage the creation of such an ecosystem of modules that you download and use in your programs. You can't have everything in the standard library, it's not wise to keep re-writing things (like 2D vectors - I have already seen them implemented ten different times in the D world), and there are plenty of useful things that can be contained in single modules, especially if such modules can import the functionality of one or more other modules. And even if I were to distribute such a module, I'd let its documentation speak for itself and otherwise just expect the programmer to read the code. A demo and the documentation are both useful. And the documentation can't replace stand-alone functionality. Regardless, the version specifier makes it easy to have a version where main is defined for demos or whatever else you might want to do with it. Bye, bearophile
Re: Enforced @nogc for dtors?
On Sunday, 4 May 2014 at 20:49:57 UTC, bearophile wrote: If we keep class destructors in D, is it a good idea to require them to be @nogc? This post comes after this thread in D.learn: http://forum.dlang.org/thread/vlnjgtdmyolgoiofn...@forum.dlang.org Bye, bearophile Not sure that would be a good idea: @nogc means no interacting with the GC. In no way does it prevent accessing a resource that is itself managed by the GC, which is what the original bug was about. Furthermore, classes *may* be deterministically destroyed, and preventing them from interacting with the GC, if only to remove scanned pointers (think RefCounted/Array), would be needlessly restrictive.
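A sketch of the point, with the type invented for illustration: @nogc bans GC operations inside the destructor, but it cannot stop the destructor from touching GC-owned memory, which is what the original bug was about:

```d
struct Wrapper
{
    int[] data; // a slice of GC-managed memory

    ~this() @nogc
    {
        // data ~= 1;      // rejected: appending may allocate from the GC
        if (data.length)   // accepted: reading/writing GC memory is not a
            data[0] = 0;   // GC call, even if `data` was already collected
    }
}
```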
Re: Enforced @nogc for dtors?
Also, requiring @nogc for destructors is specific to the current GC, and is a limitation that wouldn't really be needed were destructors implemented properly in the current GC. On 5/5/14, monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote: On Sunday, 4 May 2014 at 20:49:57 UTC, bearophile wrote: If we keep class destructors in D, is it a good idea to require them to be @nogc? This post comes after this thread in D.learn: http://forum.dlang.org/thread/vlnjgtdmyolgoiofn...@forum.dlang.org Bye, bearophile Not sure that would be a good idea: @nogc means no interacting with the GC. In no way does it prevent accessing a resource that is itself managed by the GC, which is what the original bug was about. Furthermore, classes *may* be deterministically destroyed, and preventing them from interacting with the GC, if only to remove scanned pointers (think RefCounted/Array), would be needlessly restrictive.
Re: More radical ideas about gc and reference counting
On Mon, May 05, 2014 at 06:16:34AM +, Arlen via Digitalmars-d wrote: On Sunday, 4 May 2014 at 22:56:41 UTC, H. S. Teoh via Digitalmars-d wrote: On Sat, May 03, 2014 at 10:48:47PM -0500, Caligo via Digitalmars-d wrote: [...] Last but not least, currently there are two main ways for new features to make it into D/Phobos: you either have to belong to the inner circle, or have to represent some corporation that's doing something with D. I'm sorry, but this is patently false. I am neither in the inner circle, nor do I represent any corporation, yet I've had many changes pulled into Phobos (including brand new code). I can't say I'm perfectly happy with the D development process either, but this kind of accusation is bordering on slander, and isn't helping anything. [...] There is a lot of truth in what Caligo has said, but I would word that part of it differently. A couple years ago I submitted std.rational, but it didn't go anywhere. About a year later I discovered that someone else had done a similar thing, but it never made it into Phobos either. Of course, it's not because we didn't belong to some inner circle, but I think it has to do with the fact that D has a very poor development process. The point being, something as simple as a Rational library shouldn't take years for it to become part of Phobos, especially when people are taking the time to do the work. [...] This wording is much more acceptable. ;-) While I think accusations of an elite inner circle are unfounded (and unfair), I do agree with the sentiment. I think some time ago there was some talk about very old pull requests that have been stuck at the bottom of the queue for months or even years, and nobody was looking at them. I don't know what came out of that talk, though -- apparently not very much. :-( OTOH, I did find that stubbornness and persistence help.
If you keep pestering everybody about your new contribution, and keep pushing it even if people seem to ignore/dislike it, keep updating your pull even if it seems nobody cares, eventually somebody will take notice and do something about it. Of course, this is not ideal -- open source projects really should be actively welcoming new contributions, not merely passively accepting them -- but that's the way it is right now, and I'm not sure how to change that. Perhaps stubborn and persistent pestering of the PTBs until they change? Might help, you never know. ;-) T -- Unix was not designed to stop people from doing stupid things, because that would also stop them from doing clever things. -- Doug Gwyn
Re: More radical ideas about gc and reference counting
The initial post in this thread focuses on a change that does not fix anything and implies silent semantic breakage. I am glad Andrei has reconsidered it, but it feels like the key problem is not that the proposal itself was bad, but that it does not even try to solve the real issue: deterministic destruction and/or deallocation of polymorphic objects. Class destructors are bad for that indeed. But I want some better replacement before prohibiting anything. There is a very practical use case that highlights the existing problem; it was discussed a few months ago in the exception performance thread. To speed up throwing of non-const exceptions it is desirable to use a pool of exception objects. However, currently there is no place to put the code for releasing an object back into the pool. One cannot use a destructor, because it will never be called if the pool keeps the reference. One cannot use a reference-counting struct wrapper, because that breaks polymorphic catching of exceptions. Any good solution to be proposed should be capable of solving this problem.
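A hedged sketch of the dilemma (every name here is hypothetical, not an existing API): the pool itself is easy, but nothing in the language calls release() automatically, and a destructor cannot do it because the pool's own reference keeps the object alive:

```d
class AppException : Exception
{
    this(string msg) { super(msg); }
    // A destructor can't return `this` to the pool: while the pool holds
    // a reference, the GC will never finalize the object in the first place.
}

AppException[] pool;

AppException acquire(string msg)
{
    if (pool.length)
    {
        auto e = pool[$ - 1];
        pool = pool[0 .. $ - 1];
        e.msg = msg; // Throwable.msg is a plain field, so objects can be reused
        return e;
    }
    return new AppException(msg);
}

// The missing hook: nothing runs automatically when a catch block
// finishes, so every catch site must remember to call release() by hand.
void release(AppException e) { pool ~= e; }
```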
Re: std.typed_allocator: very very very primitive tracing example
The only thing I could see being an issue is the size of the resulting scanning code, which could probably be mitigated by using a `bitmap`-ish representation as a key, so that types that have the same bitmap use the same scanning function. Also, I see a *very* large number of indirect branches occurring with this design, so you might as well account for a branch misprediction for each and every allocated object of a non-final class, due to fields possibly storing classes derived from the type of the field. This also raises the question: how do you intend to account for references to objects allocated outside of the scope of that particular typed allocator, which may, or may not, have the function pointer for scanning them? On 5/4/14, Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com wrote: On 5/4/14, 9:26 PM, Andrei Alexandrescu wrote: For those keeping score at home, I've just updated https://github.com/andralex/phobos/blob/allocator/std/allocator.d https://github.com/andralex/phobos/blob/allocator/std/typed_allocator.d Forgot to mention: this latest development seems to suggest that the UNTYPED part of the allocator is becoming somewhat feature-stable, so it's approaching alpha state. What std.allocator (i.e. the untyped part) needs right now is a few "best of" prepackaged allocators that are known to work well and allow coders to simply use them as short types or function calls, without needing to go through the intricate process of designing application-specific allocators. Andrei
Re: A few considerations on garbage collection
On 5/4/14, Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote: Would that affect all arrays, only arrays containing structs, or only arrays containing structs with dtors? printf("hello\n".ptr); should still work after all. That should work independently of what the GC decides to do, as it should be emitted as a literal pointer to "hello\n" in the read-only data segment.
Re: More radical ideas about gc and reference counting
05-May-2014 10:16, Arlen wrote: On Sunday, 4 May 2014 at 22:56:41 UTC, H. S. Teoh via Digitalmars-d wrote: On Sat, May 03, 2014 at 10:48:47PM -0500, Caligo via Digitalmars-d wrote: [...] Last but not least, currently there are two main ways for new features to make it into D/Phobos: you either have to belong to the inner circle, or have to represent some corporation that's doing something with D. I'm sorry, but this is patently false. I am neither in the inner circle, nor do I represent any corporation, yet I've had many changes pulled into Phobos (including brand new code). I can't say I'm perfectly happy with the D development process either, but this kind of accusation is bordering on slander, and isn't helping anything. T There is a lot of truth in what Caligo has said, but I would word that part of it differently. A couple years ago I submitted std.rational, but it didn't go anywhere. About a year later I discovered that someone else had done a similar thing, but it never made it into Phobos either. The key to getting things done is persistence. Everybody is working in their spare time; nobody aside from the author would be able to push it through. The process is not "I submit code and it finds its way into the standard library." It's rather getting people to try your stuff first and listening to them. Then, with enough momentum and feedback, one would go to the review queue. Then start a review if nobody objects, then get into the pass-or-postpone cycle, then survive the mess as the pull request goes into Phobos proper. Last but not least, the burden of getting something into it is minor compared to tending the bugs and maintaining the stuff afterwards. Of course, it's not because we didn't belong to some inner circle, but I think it has to do with the fact that D has a very poor development process.
What does that make of some other open-source projects that still traffic in patches over email? :) The point being, something as simple as a Rational library shouldn't take years for it to become part of Phobos, especially when people are taking the time to do the work. Look at it this way - when something is simpler, it makes it that much harder to make the one true version of it. Everybody knows what it is, and tries to put in some of their favorite sauce. The hardest things to push into Phobos are one-liners, even if they make a ton of things look better, more correct and whatnot. Anyhow, I agree that the Phobos development process (the one I know about most) is slow and imperfect, largely due to the informal nature of participation. Some reviews were lively and great; some went by in gloomy silence, with uncertain results and no good indication of the reason. -- Dmitry Olshansky
Re: GC vs Resource management.
Very short feedback about the original proposal: 1) managing local objects is not really a problem, we already have `scoped` in Phobos for that (and the unimplemented scope qualifier as a possibly more reliable approach) 2) the real problem is managing global objects without a clear destruction point while still keeping those compatible with the Object hierarchy (for inheritance / polymorphism). There is nothing proposed to address it.
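On point 1, a small sketch of std.typecons.scoped, which places the class instance in scope-local storage and runs its destructor deterministically at scope exit (the Resource class is invented for illustration):

```d
import std.typecons : scoped;

class Resource
{
    bool* destroyed;
    this(bool* d) { destroyed = d; }
    ~this() { *destroyed = true; }
}

void main()
{
    bool destroyed = false;
    {
        auto r = scoped!Resource(&destroyed); // no GC heap allocation
        assert(!destroyed);
    } // destructor runs right here, not at some future GC cycle
    assert(destroyed);
}
```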
Re: Parallel execution of unittests
However, the community is starting to standardize around Dub as the standard package manager. Dub makes downloading a package as easy as editing a JSON file (and it scales such that you can download a project of any size this way). Did Python have a proper package manager before this idiom arose?
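For reference, a minimal Dub recipe is a dub.json at the project root; the package and dependency names below are invented:

```json
{
    "name": "myapp",
    "description": "A hypothetical application",
    "dependencies": {
        "somelib": "~>1.2.0"
    }
}
```

Running `dub` then fetches the dependency and builds the project.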
Re: FYI - mo' work on std.allocator
On 5/5/14, 12:13 AM, safety0ff wrote: On Monday, 28 April 2014 at 16:03:33 UTC, Andrei Alexandrescu wrote: Fair enough, I'll remove that part of the spec. Thanks! -- Andrei According to the docs, the multiple of sizeof(void*) restriction only applies to posix_memalign (and not to _aligned_malloc and aligned_alloc.) Well I've changed the docs recently, so if they're good now, excellent. I've decided that runtime-chosen page sizes are too much of a complication for the benefits. Do the complications arise in the MmapAllocator or the higher level allocators? The latter, which make compile-time decisions based on alignment. I'd like to see a Windows allocator based on VirtualAlloc, and afterwards a SystemAllocator defined to be MmapAllocator on unix-based and VirtualAlloc based on Windows. This SystemAllocator would ideally have an allocationGranularity property. Properties that don't need to propagate are easier to accommodate. Andrei
Re: Scenario: OpenSSL in D language, pros/cons
On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d The ASN.1 format allows me to compile the data structures to D from the tls.asn1 file in the repo I linked to. It uses the equivalent of D template structures extensively, with what's called an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer, which I intend to build by editing MsgPackD, and then I can do a handshake (read an ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help, so I'm not sure what the licensing will be. I'm currently working on the generation step for the ASN.1-to-D compiler; it's very fun to make a compiler in D.
Re: Scenario: OpenSSL in D language, pros/cons
On 5/5/14, 2:32 AM, JR wrote: On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden so simple Hello Worlds are right out, let alone advanced socket libraries. Sounds like a library bug. Has it been submitted? -- Andrei
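For context, the usual 2014-era workaround was to confine the offending call inside an explicitly @trusted lambda - a sketch, not an endorsement, since @trusted shifts the burden of memory-safety proof onto the programmer:

```d
void main() @safe
{
    import std.stdio : writeln;
    // writeln("Hello, world");  // rejected under @safe at the time
    () @trusted { writeln("Hello, world"); }(); // manually vouched-for escape
}
```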
Re: GC vs Resource management.
On 5/5/14, 3:18 AM, Marc Schütz schue...@gmx.net wrote: On Sunday, 4 May 2014 at 16:13:23 UTC, Andrei Alexandrescu wrote: On 5/4/14, 4:42 AM, Marc Schütz schue...@gmx.net wrote: But I'm afraid your suggestion is unsafe: There also needs to be a way to guarantee that no references to the scoped object exist when it is destroyed. Actually, it should be fine to call the destructor, then blast T.init over the object, while keeping the actual memory in the GC. This possible approach has come up a number of times, and I think it has promise. -- Andrei Then accesses at runtime would still appear to work, but you're actually accessing something other than what you believe you are. IMO, this is almost as bad as silent heap corruption. Not as bad, because memory safety is preserved and the errors are reproducible. Such code should just be rejected at compile time, if at all possible. Yah, that would be best. Andrei
Re: Parallel execution of unittests
On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have no control over that machine; you don't even know for sure that the test has failed because of a full /tmp/ - all you get is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O.
Re: Parallel execution of unittests
On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu wrote: On 5/1/14, 11:49 AM, Jacob Carlborg wrote: On 2014-05-01 17:15, Andrei Alexandrescu wrote: That's all nice, but I feel we're going gung ho with overengineering already. If we give unittests names and then offer people a parallelize unittests button to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step to a better world. Sure. But on the other hand, why should D not have a great unit testing framework built in? It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei It is the wrong approach. The proper one is to be able to define any sort of test-running system in library code while still being 100% compatible with a naive `dmd -unittest`. We are almost there; the only missing step is transferring attributes to runtime unittest block reflection.
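For reference, the compile-time half already exists: with -unittest, a library-defined runner can enumerate and invoke unittest blocks via __traits (a sketch; what Dicebot describes as missing is having the attached attributes carried through to the runtime reflection as well):

```d
module demo;

@("addition works") // UDA naming the test, visible to compile-time reflection
unittest
{
    assert(1 + 1 == 2);
}

// A library-defined runner, still compatible with plain `dmd -unittest`:
void runAll()
{
    foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
        test(); // each unittest block is callable like a function
}
```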
Re: More radical ideas about gc and reference counting
On 5/4/14, 11:16 PM, Arlen wrote: A couple years ago I submitted std.rational, but it didn't go anywhere. About a year later I discovered that someone else had done a similar thing, but it never made it into Phobos either. Of course, it's not because we didn't belong to some inner circle, but I think it has to do with the fact that D has a very poor development process. The point being, something as simple as a Rational library shouldn't take years for it to become part of Phobos, especially when people are taking the time to do the work. I looked into this (not sure to what extent it's representative of a pattern), and probably we could and should fix it. Looks like back in 2012 you've done the right things (http://goo.gl/kbYQJM) but for whatever reason there was not enough response from the community. Later on, Joseph Rushton Wakeling tried (http://goo.gl/XyQu3D) to put std.rational through the review process but things got stuck at https://github.com/D-Programming-Language/phobos/pull/1616 with support of traits by BigInt. I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. If you, Joseph, or both would want to put std.rational through the review process again, I think it should get a fair shake. I do agree that a lot of persistence is needed. Andrei
Re: Parallel execution of unittests
On 5/5/14, 8:11 AM, Dicebot wrote: On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have no control over that machine; you don't even know for sure that the test has failed because of a full /tmp/ - all you get is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O I know; incidentally, the hhvm team had the same problem two weeks ago. They fixed it (without removing file I/O from unittests). It's fixable. That's it. This segment started with your claim that unittests should do no file I/O because they may fail with a full /tmp/. I disagree with that, and with framing the full /tmp/ problem as a problem with the unittests doing file I/O. Andrei
Re: Parallel execution of unittests
On 5/5/14, 8:16 AM, Dicebot wrote: On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu wrote: On 5/1/14, 11:49 AM, Jacob Carlborg wrote: On 2014-05-01 17:15, Andrei Alexandrescu wrote: That's all nice, but I feel we're going gung ho with overengineering already. If we give unittests names and then offer people a parallelize unittests button to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step to a better world. Sure. But on the other hand, why should D not have a great unit testing framework built in? It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei It is the wrong approach. The proper one is to be able to define any sort of test-running system in library code while still being 100% compatible with a naive `dmd -unittest`. We are almost there; the only missing step is transferring attributes to runtime unittest block reflection. Penalizing unittests that were bad in the first place is pretty attractive, but propagating attributes properly is even better. -- Andrei
Re: Parallel execution of unittests
Meta: However, the community is starting to standardize around Dub as the standard package manager. Dub makes downloading a package as easy as editing a JSON file (and it scales such that you can download a project of any size this way). Having package managers in Python doesn't make single-module Python projects less popular or less appreciated. Most Python projects are very small, thanks to both the standard library and code succinctness (which allows a small program to do a lot), and the presence of a healthy ecosystem of third-party modules that you can import to avoid re-writing things already done by other people. All this should become more common in the D world :-) Did Python have a proper package manager before this idiom arose? Both are very old, and I am not sure, but I think the main-module idiom came first. Bye, bearophile
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 15:36:19 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:11 AM, Dicebot wrote: On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote: It got full because of tests (surprise!). Your actions? Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei You can't. You have no control over that machine; you don't even know for sure that the test has failed because of a full /tmp/ - all you get is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O I know, incidentally the hhvm team had the same problem two weeks ago. They fixed it (without removing file I/O from unittests). It's fixable. That's it. It is possible to write a unit test which provides graceful failure reporting for such issues, but once you get there it becomes hard to see the actual tests behind the boilerplate of environment verification, and the actual application code behind the tests. Any tests that rely on I/O need some sort of commonly repeated initialize-verify-test-finalize pattern, one that is simply impractical to do with unit tests. This segment started with your claim that unittests should do no file I/O because they may fail with a full /tmp/. I disagree with that, and with framing the full /tmp/ problem as a problem with the unittests doing file I/O. It was just the simplest example. "Unittests should do no I/O because any sort of I/O can fail for reasons you don't control from the test suite" is an appropriate generalization of my statement. A full /tmp is not a problem; there is nothing broken about a system with a full /tmp. The problem is test reporting that is unable to connect the failure with /tmp being full unless you do environment verification.
Re: More radical ideas about gc and reference counting
Andrei Alexandrescu: I think the needs-to-support-BigInt argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. BigInt support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the later support of BigInt. If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed. Rationals are rather basic (important) things, so a little persistence is well spent here :-) Bye, bearophile
Re: Scenario: OpenSSL in D language, pros/cons
05-May-2014 18:59, Etienne wrote: On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since the D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d Cool, keep us posted. The ASN.1 format allows me to compile the data structures to D from the tls.asn1 in the repo I linked to. It uses the equivalent of D template structures extensively with what's called an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer, which I intend to create by editing MsgPackD, and then I can do a handshake (read an ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help so I'm not sure what the licensing will be. I'm currently working on the generation step for the ASN.1-to-D compiler; it's very fun to make a compiler in D. Aye, D seems to be a nice choice for writing compilers. -- Dmitry Olshansky
Get object address when creating it in for loop
How to get the address of a newly created object and put it in a pointer array? int maxNeurons = 100; Neuron*[] neurons = new Neuron*[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = &n; // here &n always returns same address } writefln("Thread func complete. Len: %s", neurons); This script above will print an array with all the same address values, why is that? Thanks
Re: Get object address when creating it in for loop
On Monday, 5 May 2014 at 16:15:43 UTC, hardcoremore wrote: neurons[] = &n; // here &n always returns same address You're taking the address of the pointer, which isn't changing. Just use plain n - when you new it, it is already a pointer so just add that value to your array.
Re: Enforced @nogc for dtors?
On Monday, 5 May 2014 at 14:17:04 UTC, Orvid King via Digitalmars-d wrote: Also, the @nogc for destructors is specific to the current GC, and is a limitation that isn't really needed were destructors implemented properly in the current GC. How does one implement destructors (described below) properly in a garbage collector? I'm a bit puzzled by the recent storm over destructors. I think of garbage collected entities (classes in Java) as possibly having finalizers, and scoped things as possibly having destructors. The two concepts are related but distinct. Destructors are supposed to be deterministic; finalizers, by being tied to a tracing GC, are not. Java doesn't have stack allocated objects, but since 1.7 it has try-with-resources and AutoCloseable to cover some cases in RAII-like fashion. My terminology is from this http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html IMO, since D has a GC, and stack allocated structs, it would make sense to use different terms for destruction and finalization, so what you really want is to properly implement finalizers in your GC. I'm a lot more reluctant to use classes in D now, and I'd like to see a lot more code with @nogc or compiled with the previously discussed and rejected no-runtime switch. Interestingly, Ada finalization via 'controlled' types is actually what we call destructors here. The Ada approach is interesting, but I don't know if a similar approach would fit well with D, which is a much more pointer intensive language.
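The destructor/finalizer distinction being argued here can be shown in D itself (a minimal sketch; the type names are made up, and when exactly the GC runs the class destructor is up to the collector):

```d
import std.stdio;

// Deterministic: a struct destructor runs at end of scope.
struct ScopedFile
{
    ~this() { writeln("struct dtor: runs exactly at scope exit"); }
}

// Non-deterministic: a class destructor runs whenever the GC
// collects the object (possibly only at program shutdown).
class Resource
{
    ~this() { writeln("class dtor: runs at some later GC collection"); }
}

void main()
{
    {
        ScopedFile f;            // stack allocated
        auto r = new Resource(); // GC-heap allocated
    } // f's destructor fires here; r's fires whenever the GC decides
}
```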
Re: Get object address when creating it in for loop
On 5/5/2014 12:15 PM, hardcoremore wrote: How to get the address of a newly created object and put it in a pointer array? int maxNeurons = 100; Neuron*[] neurons = new Neuron*[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = &n; // here &n always returns same address } writefln("Thread func complete. Len: %s", neurons); This script above will print an array with all the same address values, why is that? Thanks These sorts of questions should go in digitalmars.D.learn, but your problem is a simple typo here: neurons[] = &n; That sets the *entire* array to &n. You forgot the index: neurons[i] = &n;
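The slice-assignment pitfall Walter points out can be demonstrated in a few lines (a sketch; the empty Neuron class is a stand-in for the poster's type):

```d
import std.stdio;

class Neuron {}

void main()
{
    auto neurons = new Neuron[](3);

    // Index assignment: each element gets its own object.
    foreach (i; 0 .. neurons.length)
        neurons[i] = new Neuron();
    writeln(neurons[0] is neurons[1]); // false: distinct objects

    // Slice assignment: overwrites EVERY element with one value,
    // which is the bug in the original loop.
    neurons[] = new Neuron();
    writeln(neurons[0] is neurons[2]); // true: all the same object
}
```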
Re: Parallel execution of unittests
On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei
Re: More radical ideas about gc and reference counting
On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: Andrei Alexandrescu: I think the needs to support BigInt argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints. Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code. If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed. Rationals are rather basic (important) things, so a little of persistence is well spent here :-) [...] I agree, and support pushing std.rational through the queue. So, please don't give up, we need to get it in somehow. :) T -- I see that you JS got Bach.
Re: FYI - mo' work on std.allocator
On Sun, 04 May 2014 21:05:01 -0700, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: I've decided that runtime-chosen page sizes are too much of a complication for the benefits. Alright. Note however, that on Windows the allocation granularity is larger than the page size (64 KiB). So it is a cleaner design in my eyes to use portable wrappers around page size and allocation granularity. 2) For embedded Linux systems there is the flag MAP_UNINITIALIZED to break the guarantee of getting zeroed-out memory. So if it is desired, »zeroesAllocations« could be a writable property there. This can be easily done, but from what I gather MAP_UNINITIALIZED is strongly discouraged and only implemented on small embedded systems. Agreed. In the cases where I used virtual memory, I often wanted to exercise more of its features. As it stands now »MmapAllocator« works as a basic allocator for 4k blocks of memory. Is that the intended scope or are you open to supporting all of it? For now I just wanted to get a basic mmap-based allocator off the ground. I am aware there's a bunch of things to do. The most prominent is that (according to Jason Evans) Linux is pretty bad at munmap(), so it's actually better to advise() pages away upon deallocation but never unmap them. Andrei That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. That module would also offer functionality to get the page size and allocation granularity, and wrappers for common needs like getting n KiB of writable memory. Management however (i.e. RAII structs) would not be part of it. It sounds like not too much work with great benefit for a systems programming language. -- Marco
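As a sketch of the POSIX primitive such a wrapper module would start from (the Windows side would instead query GetSystemInfo's dwPageSize/dwAllocationGranularity; the helper name here is made up):

```d
import core.sys.posix.unistd : sysconf, _SC_PAGESIZE;
import std.stdio : writeln;

// Hypothetical wrapper: query the OS page size at runtime (POSIX).
size_t pageSize()
{
    return cast(size_t) sysconf(_SC_PAGESIZE);
}

void main()
{
    writeln("page size: ", pageSize()); // commonly 4096 on x86 Linux
}
```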
Re: Get object address when creating it in for loop
Hi Guys, Thanks so much for your reply. This fixes my problem like Adam D. Ruppe suggested: int maxNeurons = 100; Neuron[] neurons = new Neuron[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[i] = n; } But can you give me some more details so I can understand what is going on. What is the difference between Neuron[] neurons = new Neuron[](maxNeurons); and Neuron*[] neurons = new Neuron*[](maxNeurons); As I understand it, Neuron*[] should create an array whose elements are pointers? Is it possible to instantiate 100 objects in a for loop and get the address of each object instance and store it in an array of pointers? Thanks
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just the most simple example. "Unittests should do no I/O because any sort of I/O can fail for reasons you don't control from the test suite" is an appropriate generalization of my statement. A full /tmp is not a problem; there is nothing broken about a system with a full /tmp. The problem is test reporting that is unable to connect the failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. It is a well-defined practical goal - getting either reproducible or informative reports for test failures from machines you don't have routine access to, while still keeping test sources maintainable (OK, this part is subjective). It is a relatively simple engineering problem, but you discard the widely adopted solution for it (strict control of test requirements) without proposing any real alternative. "I will yell at someone when it breaks" is not really a solution.
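The environment verification Dicebot describes might look something like this in a D unittest (a hypothetical helper, not an actual framework API):

```d
import std.file : tempDir, write, remove;
import std.path : buildPath;

// Hypothetical pre-check: verify the environment up front so a full or
// read-only temp directory produces a diagnosable message instead of
// an apparently unrelated, unreproducible test failure.
void requireWritableTmp()
{
    auto probe = buildPath(tempDir(), "unittest-probe");
    try
    {
        write(probe, "x");
        remove(probe);
    }
    catch (Exception e)
    {
        assert(false, "broken test environment: " ~ tempDir()
            ~ " is not writable (" ~ e.msg ~ ")");
    }
}

unittest
{
    requireWritableTmp();
    // ... the actual I/O-dependent test body goes here ...
}
```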
Re: FYI - mo' work on std.allocator
05-May-2014 20:57, Marco Leise wrote: That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. I had an idea of core.vmm. It didn't survive the last review though, plus I never got around to testing OSes aside from Windows & Linux. Comments on the initial design are welcome. https://github.com/D-Programming-Language/druntime/pull/653 -- Dmitry Olshansky
Re: Running Phobos unit tests in threads: I have data
On Sunday, 4 May 2014 at 17:01:23 UTC, safety0ff wrote: On Saturday, 3 May 2014 at 22:46:03 UTC, Andrei Alexandrescu wrote: On 5/3/14, 2:42 PM, Atila Neves wrote: gdc gave _very_ different results. I had to use different modules because at some point tests started failing, but with gdc the threaded version runs ~3x faster. On my own unit-threaded benchmarks, running the UTs for Cerealed over and over again was only slightly slower with threads than without. With dmd the threaded version was nearly 3x slower. Sounds like a severe bug in dmd or dependents. -- Andrei This reminds me of when I was parallelizing a Project Euler solution: atomic access was so much slower on DMD that it made performance worse than the single-threaded version for one stage of the program. I know that std.parallelism does make use of core.atomic under the hood, so this may be a factor when using DMD. Funny you should say that, a friend of mine tried porting a lock-free algorithm of his from Java to D a few weeks ago. The D version ran 3 orders of magnitude slower. Then I tried gdc and ldc on his code. ldc produced code running at around 80% of the speed of the Java version, gdc was around 30%. But dmd...
Re: More radical ideas about gc and reference counting
On Mon, 5 May 2014 09:39:30 -0700, H. S. Teoh via Digitalmars-d digitalmars-d@puremagic.com wrote: On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: Andrei Alexandrescu: I think the needs to support BigInt argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints. Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code. If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed. Rationals are rather basic (important) things, so a little of persistence is well spent here :-) [...] I agree, and support pushing std.rational through the queue. So, please don't give up, we need it get it in somehow. :) T That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When it seems to be usable by everyone interested, it can move into Phobos proper after the formal review (which includes code style checks, unit tests, etc. that mere users don't take as seriously). As long as there is nothing even semi-official, it is tempting to write such a module from scratch in a quick & dirty fashion and ignore existing work. The experimental package makes it clear that this code is eventually going to be the official way, and home-brewed stuff won't have a future. Something in the standard library is much less likely to be reinvented.
On the other hand, once a module is in Phobos proper, it is close to impossible to change the API to accommodate a new use case. That's why I think the most focused library testing and development can happen in the experimental phase of a module. The longer it is, the more people will have tried it in their projects before formal review, which would greatly improve informed decisions. The original std.rational proposal could have been in active use now for months! -- Marco
Re: More radical ideas about gc and reference counting
On Monday, 5 May 2014 at 17:22:58 UTC, Marco Leise wrote: Am Mon, 5 May 2014 09:39:30 -0700 schrieb H. S. Teoh via Digitalmars-d digitalmars-d@puremagic.com: On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote: Andrei Alexandrescu: I think the needs to support BigInt argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type. Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints. Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code. If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed. Rationals are rather basic (important) things, so a little of persistence is well spent here :-) [...] I agree, and support pushing std.rational through the queue. So, please don't give up, we need it get it in somehow. :) T That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When it seems to be usable by everyone interested it can move into Phobos proper after the formal review (that includes code style checks, unit tests etc. that mere users don't take as seriously). And same objections still remain.
Re: Get object address when creating it in for loop
On Mon, 05 May 2014 16:15:42 + hardcoremore via Digitalmars-d digitalmars-d@puremagic.com wrote: How to get the address of a newly created object and put it in a pointer array? int maxNeurons = 100; Neuron*[] neurons = new Neuron*[](maxNeurons); Neuron n; for(int i = 0; i < maxNeurons; i++) { n = new Neuron(); neurons[] = &n; // here &n always returns same address } writefln("Thread func complete. Len: %s", neurons); This script above will print an array with all the same address values, why is that? &n gives you the address of the local variable n, not of the object on the heap that it points to. You don't normally get at the address of class objects in D. There's rarely any reason to. Classes always live on the heap, so they're already references. Neuron* is by definition a pointer to a class _reference_, not to an instance of Neuron. So, you'd normally do Neuron[] neurons; for your array. I very much doubt that you really want an array of Neuron*. IIRC, you _can_ get at the address of a class instance by casting its reference to void*, but I'm not sure, because I've never done it. And even then, you're then using void*, not Neuron*. Also FYI, questions like this belong in D.learn. The D newsgroup is for general discussions about D, not for questions related to learning D. - Jonathan M Davis
Re: Enforced @nogc for dtors?
The current GC cannot allocate within a destructor because of the fact that it has to acquire a global lock on the GC before calling the actual destructor, meaning that attempting to allocate or do anything that requires a global lock on the GC is impossible, because the lock has already been acquired by the thread. Admittedly this isn't the way it actually fails, but it is the flaw in the design that causes it to fail. Destructors and finalizers are the same thing. They are declared the same, function the same, and do the same things. In D, the deterministic invocation of a destructor is a side-effect of the optimization of allocations to occur on the stack rather than the heap, whether this is done by the user by declaring a value a struct, or by the compiler when it determines the value never escapes the scope. Currently the GC doesn't invoke the destructor of a struct that has been heap allocated, but I view this as a bug, because it is the same thing as if it had been declared as a class instead, and a destructor must take this into account, and not be dependent on the deterministic destruction qualities of stack-allocated values. On 5/5/14, Brian Rogoff via Digitalmars-d digitalmars-d@puremagic.com wrote: On Monday, 5 May 2014 at 14:17:04 UTC, Orvid King via Digitalmars-d wrote: Also, the @nogc for destructors is specific to the current GC, and is a limitation that isn't really needed were destructors implemented properly in the current GC. How does one implement destructors (described below) properly in a garbage collector? I'm a bit puzzled by the recent storm over destructors. I think of garbage collected entities (classes in Java) as possibly having finalizers, and scoped things as possibly having destructors. The two concepts are related but distinct. Destructors are supposed to be deterministic, finalizers by being tied to a tracing GC are not. 
Java doesn't have stack allocated objects, but since 1.7 it has try-with-resources and AutoCloseable to cover some cases in RAII-like fashion. My terminology is from this http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html IMO, since D has a GC, and stack allocated structs, it would make sense to use different terms for destruction and finalization, so what you really want is to properly implement finalizers in your GC. I'm a lot more reluctant to use classes in D now, and I'd like to see a lot more code with @nogc or compiled with the previously discussed and rejected no-runtime switch. Interestingly, Ada finalization via 'controlled' types is actually what we call destructors here. The Ada approach is interesting, but I don't know if a similar approach would fit well with D, which is a much more pointer intensive language.
Re: Running Phobos unit tests in threads: I have data
On Saturday, 3 May 2014 at 12:26:13 UTC, Rikki Cattermole wrote: On Saturday, 3 May 2014 at 12:24:59 UTC, Atila Neves wrote: Out of curiosity are you on Windows? No, Arch Linux 64-bit. I also just noticed a glaring threading bug in my code as well that somehow's never turned up. This is not a good day. Atila I'm surprised. Threads should be cheap on Linux. Something funky is definitely going on I bet. Threads are never cheap.
Re: Running Phobos unit tests in threads: I have data
Going to take a wild guess, but as core.atomic.casImpl will never be inlined anywhere with DMD, due to its inline assembly, you have the cost of building and destroying a stack frame, the cost of passing the args in, moving them into registers, saving potentially trashed registers, etc. every time it even attempts to acquire a lock, and the GC uses a single global lock for just about everything. As you can imagine, I suspect this is far from optimal, and, if I remember right, GDC uses intrinsics for the atomic operations. On 5/5/14, Atila Neves via Digitalmars-d digitalmars-d@puremagic.com wrote: On Sunday, 4 May 2014 at 17:01:23 UTC, safety0ff wrote: On Saturday, 3 May 2014 at 22:46:03 UTC, Andrei Alexandrescu wrote: On 5/3/14, 2:42 PM, Atila Neves wrote: gdc gave _very_ different results. I had to use different modules because at some point tests started failing, but with gdc the threaded version runs ~3x faster. On my own unit-threaded benchmarks, running the UTs for Cerealed over and over again was only slightly slower with threads than without. With dmd the threaded version was nearly 3x slower. Sounds like a severe bug in dmd or dependents. -- Andrei This reminds me of when I was parallelizing a Project Euler solution: atomic access was so much slower on DMD that it made performance worse than the single-threaded version for one stage of the program. I know that std.parallelism does make use of core.atomic under the hood, so this may be a factor when using DMD. Funny you should say that, a friend of mine tried porting a lock-free algorithm of his from Java to D a few weeks ago. The D version ran 3 orders of magnitude slower. Then I tried gdc and ldc on his code. ldc produced code running at around 80% of the speed of the Java version, gdc was around 30%. But dmd...
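For context, this is the primitive under discussion; its cost varies dramatically with how well each compiler can inline it:

```d
import core.atomic : cas, atomicLoad;

void main()
{
    shared int counter = 0;

    // Compare-and-swap: advance counter from 0 to 1 atomically.
    bool swapped = cas(&counter, 0, 1);
    assert(swapped);
    assert(atomicLoad(counter) == 1);

    // A CAS with a stale expected value fails and changes nothing.
    assert(!cas(&counter, 0, 2));
    assert(atomicLoad(counter) == 1);
}
```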
Why not memory specific destructors?
I never got the real issue with destructors (I haven't seen the issue explained, just a lot of talk about it being a problem and how to fix it) but I think doing away with them would be a very bad idea. Assuming the only/main issue is with the GC not guaranteeing to call them, then that is really throwing out the baby with the bathwater. Some of us do not want to be locked down by the GC. If you shape the D language around using the GC then you just dig our hole deeper and deeper. (We are trying to get out of this hole, remember?) So, instead of removing destructors why not have multiple types? If the object is manually allocated then we can guarantee the destructor will be called when the object is freed. But basically, since they would be different types of destructors there would be no confusion about when they would or wouldn't be called. 1. GC destructors - Never called when the object is managed by the GC. (or maybe one can flag certain ones to always be called and the GC will respect that) 2. Manual memory management destructors - Always called when the object is allocated manually. 3. Others (ARC, etc.) - Same principle. So, while this could provide different behavior depending on how you use memory (not a great thing, but possibly necessary), it at least provides the separation for a choice. (and it's all about choice, not about forcing people to use something that doesn't work for them) It seems to me we have 4 basic lifetimes of an object: 1. Fixed/Physical Scope - The object lives and dies very quickly and is well defined. 2. Unfixed/Logical Scope - The scope is not well defined but something somewhere frees the object in a predictable way when it (the programmer) decides it should be freed. 3. Auto Scope - A combination of the above where an object can live in both at the same time and automatically determines when it goes out of the last scope. This is like ARC-type stuff. 4. Unknown/Non-Deterministic/Unpredictable - There are no scopes.
Objects' lifetimes are completely handled by God (the GC). We don't have to worry about any of it. Unfortunately D's GC hasn't had its `god mode` flag set. 1 and 2 essentially are old-school manual memory management. If objects' lifetimes exist in different ways, then having different destructors for these possibilities seems logical. The problem may simply be that we are trying to fit one destructor to all the cases and it simply doesn't work that way. Anyways... just food for thought.
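Worth noting that D already exposes the "manual" flavor today: destroy runs an object's destructor deterministically while the memory itself is still reclaimed later by the GC (a small sketch; the Connection class is made up):

```d
import std.stdio;

class Connection
{
    ~this() { writeln("connection closed"); }
}

void main()
{
    auto c = new Connection();
    destroy(c);  // destructor runs right now, deterministically
    // c's memory is still owned by the GC and reclaimed later;
    // the object itself has been reset to its .init state
}
```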
Re: FYI - mo' work on std.allocator
On 5/5/14, 9:57 AM, Marco Leise wrote: That sounds like a more complicated topic than anything I had in mind. I think a »std.virtualmemory« module should already implement all the primitives in a portable form, so we don't have to do that again for the next use case. Since cross-platform code is always hard to get right, it could also avoid latent bugs. That module would also offer functionality to get the page size and allocation granularity and wrappers for common needs like getting n KiB of writable memory. Management however (i.e. RAII structs) would not be part of it. It sounds like not too much work with great benefit for a systems programming language. I think adding portable primitives to http://dlang.org/phobos/std_mmfile.html (plus better yet refactoring its existing code to use them) would be awesome and wouldn't need a DIP. -- Andrei
Re: Parallel execution of unittests
On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote: On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei You are not helping your point to look reasonable.
Re: Parallel execution of unittests
On 5/5/14, 11:25 AM, Dicebot wrote: On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote: On 5/5/14, 10:08 AM, Dicebot wrote: On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote: On 5/5/14, 8:55 AM, Dicebot wrote: It was just a most simple example. Unittests should do no I/O because any sort of I/O can fail because of reasons you don't control from the test suite is an appropriate generalization of my statement. Full /tmp is not a problem, there is nothing broken about system with full /tmp. Problem is test reporting that is unable to connect failure with /tmp being full unless you do environment verification. Different strokes for different folks. -- Andrei There is nothing subjective about it. Of course there is. -- Andrei You are not helping your point to look reasonable. My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei
Re: Enforced @nogc for dtors?
On Monday, 5 May 2014 at 17:46:35 UTC, Orvid King via Digitalmars-d wrote: Destructors and finalizers are the same thing. That is exactly the point that I am arguing against. That they are confused in D (or 'unified', if you think that is a good thing) I accept, but I think it's a language design error, or at least an unfortunate omission. Did you read the citation I provided? I think Boehm's argument is convincing; you've provided no rebuttal. The entire brouhaha going on now is because they're different: we assume that destructors will be called at a precise time so we can use them to manage constrained resources, and we don't know that about finalizers.
Re: Parallel execution of unittests
On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote: My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem. A dogmatic approach that solves the issue is still better than ignoring it completely. Right now I am afraid you will push for quick changes that will reduce the elegant simplicity of D's unittest system without providing a sound replacement that actually fits the more ambitious use cases (as the whole parallel thing implies).
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 14:59:13 UTC, Etienne wrote: On 2014-05-04 4:34 AM, Daniele M. wrote: I have read this excellent article by David A. Wheeler: http://www.dwheeler.com/essays/heartbleed.html And since D language was not there, I mentioned it to him as a possible good candidate due to its static typing and related features. However, now I am asking the community here: would a D implementation (with GC disabled) of OpenSSL have been free from Heartbleed-type vulnerabilities? Specifically http://cwe.mitre.org/data/definitions/126.html and http://cwe.mitre.org/data/definitions/20.html as David mentions. I find this perspective very interesting, please advise :) I'm currently working on a TLS library using only D. I've shared the ASN.1 parser here: https://github.com/globecsys/asn1.d The ASN.1 format allows me to compile the data structures to D from the tls.asn1 in the repo I linked to. It uses the equivalent of D template structures extensively with what's called an Information Object Class. Obviously, when it's done I need a DER serializer/deserializer which I intend on editing MsgPackD, and then I can do a handshake (read a ASN.1 certificate) and encrypt/decrypt AES/RSA using the certificate information and this cryptography library: https://github.com/apartridge/crypto . I've never expected any help so I'm not sure what the licensing will be. I'm currently working on the generation step for the ASN.1 to D compiler, it's very fun to make a compiler in D. This is a quite radical approach, I am very interested to see its development! Have you thought about creating an SSL/TLS implementations tester instead? With the compiled information I see this goal quite well in range.
Re: Scenario: OpenSSL in D language, pros/cons
On Monday, 5 May 2014 at 10:41:41 UTC, Jonathan M Davis via Digitalmars-d wrote: On Mon, 05 May 2014 10:24:27 + via Digitalmars-d digitalmars-d@puremagic.com wrote: On Monday, 5 May 2014 at 09:32:40 UTC, JR wrote: On Sunday, 4 May 2014 at 21:18:24 UTC, Daniele M. wrote: And then comes my next question: except for that malloc-hack, would it have been possible to write it in @safe D? I guess that if not, module(s) could have been made un-@safe. Not saying that a similar separation of concerns was not possible in OpenSSL itself, but that D could have made it less development-expensive in my opinion. TDPL SafeD visions notwithstanding, @safe is very very limiting. I/O is forbidden, so simple Hello Worlds are right out, let alone advanced socket libraries. I/O is not forbidden; it's just that writeln and friends currently can't be made safe, but that is being worked on AFAIK. While I/O usually goes through the OS, the system calls can be manually verified and made @trusted. As the underlying OS calls are all C functions, there will always be @system code involved in I/O, but in most cases, we should be able to wrap those functions in D functions which are @trusted. Regardless, I would think that SSL could be implemented without sockets - that is, all of its operations should be able to operate on arbitrary data regardless of whether that data is sent over a socket or not. And if that's the case, then even if the socket operations themselves had to be @system, everything else should still be able to be @safe. Most of the problems with @safe stem either from library functions that don't use it like they should, or from the compiler not yet doing a good enough job with attribute inference on templated functions. Both problems are being addressed, so the situation will improve over time.
Regardless, there's nothing fundamentally limited about @safe except for operations which are actually unsafe with regards to memory, and any case where something isn't @safe when it's actually memory safe should be and will be fixed (as well as any situation which isn't memory safe but is considered @safe anyway - we do unfortunately still have a few of those). - Jonathan M Davis You nailed it. If we wanted to translate this theoretical exercise into something real, it would be nice to have an implementation of PolarSSL that works on ring buffers only, leaving network-layer integration to clients. A much cleaner separation of concerns.
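The wrapping pattern described above - a manually verified @system C call exposed through a small @trusted D function so that the rest of the code stays @safe - can be sketched like this (a minimal example, not Phobos's actual I/O wrappers):

```d
// Sketch: a @system C function wrapped in a @trusted D function.
import core.stdc.string : strlen;

// @trusted: we verify the invariant by hand - the slice we build is
// bounded by the NUL terminator that strlen found, so callers can't
// read past the end of a valid C string.
const(char)[] fromCString(const(char)* s) @trusted
{
    return s is null ? null : s[0 .. strlen(s)];
}

void main() @safe
{
    // String literals are NUL-terminated and convert implicitly to
    // const(char)*, so this @safe caller never touches raw pointers.
    assert(fromCString("hello") == "hello");
    assert(fromCString(null) is null);
}
```

The point made in the post holds: only this thin boundary function needs manual review; everything built on top of it gets compiler-checked memory safety.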
Re: Parallel execution of unittests
On 5/5/14, 11:47 AM, Dicebot wrote: On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote: My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem. Clean /tmp/ judiciously. A dogmatic approach that solves the issue is still better than ignoring it completely. The problem with your stance, i.e. "Unittests should do no I/O because any sort of I/O can fail for reasons you don't control from the test suite is an appropriate generalization of my statement", is that it immediately generalizes into the unreasonable "Unittests should do no $X because any sort of $X can fail for reasons you don't control from the test suite". That gets into machines not having any memory available, full disks, etc. Just make sure test machines are prepared for running unittests to the extent the unittests expect them to be. We're wasting time trying to frame this as a problem related to unittests alone. Right now I am afraid you will push for quick changes that reduce the elegant simplicity of D's unittest system without providing a sound replacement that actually fits more ambitious use cases (as the whole parallel thing implies). If I had my way I'd make parallel the default and single-threaded opt-in, thus penalizing unittests that had issues to start with. But I understand the merits of not breaking backwards compatibility, so probably we should start with opt-in parallel unittesting. Andrei
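To make the opt-in idea concrete, here is a minimal sketch of running independent test functions in parallel with std.parallelism. This is not druntime's actual unittest runner - a real one would collect unittest blocks via ModuleInfo - just an illustration of the execution model under discussion:

```d
// Minimal sketch of a parallel test runner (illustrative only).
import core.atomic : atomicOp;
import std.parallelism : parallel;

void main()
{
    shared int passed;

    // Hypothetical stand-ins for collected unittest blocks.
    void function()[] tests = [
        function() { assert(1 + 1 == 2); },
        function() { assert("abc".length == 3); },
    ];

    // Each test runs on the default task pool; independent tests
    // must not share mutable state (the thread's whole point).
    foreach (t; parallel(tests))
    {
        t();
        atomicOp!"+="(passed, 1);  // atomic bookkeeping across workers
    }

    assert(passed == 2);
}
```

Tests that touch shared resources (files in /tmp/, global state) are exactly the ones that would need the single-threaded opt-out.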
Re: Thread name conflict
On 2014-05-05 15:32, Jonathan M Davis via Digitalmars-d wrote: Maybe they should still be visible for the purposes of reflection or some other case where seeing the symbols would be useful Yes, it's useful for .tupleof to access private members. -- /Jacob Carlborg
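A quick illustration of Jacob's point: .tupleof exposes a type's fields positionally, which lets reflection code reach members that are private (trivially so within the same module, but it also works across module boundaries):

```d
// .tupleof gives positional access to all fields, including private ones.
struct Secret
{
    private int hidden = 42;
}

void main()
{
    Secret s;
    // Reflection-style access to the private field via .tupleof.
    assert(s.tupleof[0] == 42);

    s.tupleof[0] = 7;  // it is an lvalue, so writes work too
    assert(s.tupleof[0] == 7);
}
```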
Re: Enforced @nogc for dtors?
On 05.05.2014 19:46, Orvid King via Digitalmars-d wrote: The current GC cannot allocate within a destructor because it has to acquire a global lock on the GC before calling the actual destructor, meaning that attempting to allocate, or to do anything else that requires the global GC lock, is impossible: the lock has already been acquired by the thread. Admittedly this isn't the way it actually fails, but it is the flaw in the design that causes it to fail. This is precisely the point. I see this whole discussion as going around in circles instead of fixing the GC. Which is fine, assuming that at the end of the day D gets a sound automatic memory management model, be it RC/GC/compiler dataflow based, one that doesn't keep being questioned all the time. Otherwise, I see this as the second coming of Tango vs Phobos. -- Paulo
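The enforcement the thread title suggests can be sketched with the @nogc attribute: annotating a destructor @nogc makes the compiler reject GC allocation inside it at compile time, rather than failing at runtime while the GC holds its lock. A minimal example of the pattern:

```d
// Sketch of compiler-enforced allocation-free destructors via @nogc.
__gshared bool finalized;

class Resource
{
    ~this() @nogc nothrow
    {
        // auto p = new int; // would not compile: 'new' is not @nogc
        finalized = true;    // non-allocating cleanup is still fine
    }
}

void main()
{
    auto r = new Resource;
    destroy(r);        // runs the destructor deterministically
    assert(finalized);
}
```

Note this only rules out GC allocation in the destructor body itself; calls it makes must be @nogc too, which is exactly the transitive guarantee being proposed.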
Re: Formal review of std.lexer
On 17-Mar-2014 02:13, Martin Nowak wrote: On 02/22/2014 09:31 PM, Marc Schütz schue...@gmx.net wrote: But that still doesn't explain why a custom hash table implementation is necessary. Maybe a lightweight wrapper around built-in AAs is sufficient? I'm also wondering what benefit this hash table provides. Getting back to this. The custom hash map was originally a product of optimization; its benefits over the built-in AAs were: a) Allocation was amortized by allocating nodes in batches. b) It allowed a custom hash function to be used with a built-in type (string). Not sure how much of that stands today. -- Dmitry Olshansky
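Point (a) - amortizing allocation by handing out nodes from batches - can be sketched as follows. This is an illustrative toy, not std.lexer's actual hash map:

```d
// Toy sketch of batch node allocation: one GC allocation per 64 nodes
// instead of one per node.
struct Node
{
    string key;
    int value;
    Node* next;  // chaining link a hash map would use
}

struct NodeAllocator
{
    enum batchSize = 64;
    Node[] batch;
    size_t used;

    Node* make(string key, int value)
    {
        if (used == batch.length)
        {
            batch = new Node[batchSize];  // amortized allocation
            used = 0;
        }
        auto n = &batch[used++];
        n.key = key;
        n.value = value;
        return n;
    }
}

void main()
{
    NodeAllocator alloc;
    auto a = alloc.make("x", 1);
    auto b = alloc.make("y", 2);
    assert(a.value == 1 && b.value == 2);
    assert(a !is b);
}
```

The trade-off is that nodes from a batch can only be freed all at once, which suits a lexer's table that lives for the whole run.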
Re: More radical ideas about gc and reference counting
On Monday, 5 May 2014 at 00:44:43 UTC, Caligo via Digitalmars-d wrote: On Sun, May 4, 2014 at 12:22 AM, Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com wrote: The on/off switch may be a nice idea in the abstract but is hardly the perfect recipe to good language feature development; otherwise everybody would be using it, and there's not overwhelming evidence to that. (I do know it's been done a few times, such as the (in)famous new scoping rule of the for statement for C++ which has been introduced as an option by VC++.) No, it's nothing abstract; it's very practical and useful. Rust has such a thing, #![feature(X,Y,Z)]. So does Haskell, with {-# feature #-}. Even Python has __future__, and many others. Well, Python's __future__ is not exactly that: it's for introducing changes that impact the existing codebase... It's a sort of extreme care about not breaking anything out there. /Paolo
Re: Scenario: OpenSSL in D language, pros/cons
On 2014-05-05 2:54 PM, Daniele M. wrote: Have you thought about creating an SSL/TLS implementation tester instead? You mean testing existing TLS libraries using this information? The advantage of using all-D is having zero-copy buffers that inline with the other stream layers when built inside another D project. I can also add the processor-specific assembly implementations of AES and RSA from OpenSSL (optimizing the critical parts can put it on par with OpenSSL in speed, or better). To answer the question about safety: the code is very modular, so when you decide to zero out the memory of keys before/after serialization/deserialization, or even for the buffers, it happens everywhere regardless of the complexity of the application. It's definitely easier to make it safer!
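The key-zeroing idea above can be expressed very compactly in D with scope(exit), which wipes the buffer on every exit path, including thrown exceptions. A minimal sketch (the helper name is hypothetical):

```d
// Sketch: scrub key material on all exit paths with scope(exit).
ubyte[16] deriveSessionKey()  // hypothetical helper for illustration
{
    ubyte[16] k = 0xAA;  // stand-in for real key derivation
    return k;
}

void useKey(ubyte[] key)
{
    scope(exit) key[] = 0;  // runs on return *and* on exceptions
    // ... encrypt/decrypt with the key here ...
}

void main()
{
    auto key = deriveSessionKey();
    useKey(key[]);
    foreach (b; key)
        assert(b == 0);  // key material was wiped after use
}
```

One caveat: a production implementation needs a zeroing routine the optimizer cannot elide (the equivalent of C's explicit_bzero); a plain assignment is only a sketch of the intent.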
Adding a chocolatey package
Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary.
Re: Running Phobos unit tests in threads: I have data
On 5 May 2014 19:07, Orvid King via Digitalmars-d digitalmars-d@puremagic.com wrote: Going to take a wild guess, but as core.atomic.casImpl will never be inlined anywhere with DMD, due to its inline assembly, you have the cost of building and destroying a stack frame, the cost of passing the args in, moving them into registers, saving potentially trashed registers, etc., every time it even attempts to acquire a lock - and the GC uses a single global lock for just about everything. As you can imagine, I suspect this is far from optimal, and, if I remember right, GDC uses intrinsics for the atomic operations. Aye, and atomic intrinsics though they may be, it could even be improved by switching over to the C++ atomic intrinsics, which map directly to core.atomic. :)
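For context, the operation whose per-call overhead is being discussed is compare-and-swap. A small illustration of core.atomic.cas used as a spinlock primitive (the lock here is illustrative, not the GC's real lock):

```d
// Illustration: core.atomic.cas as a try-lock primitive - the kind of
// call whose overhead matters when it cannot be inlined.
import core.atomic : atomicStore, cas;

shared int gcLock;  // 0 = free, 1 = held (illustrative only)

bool tryLock()
{
    // Atomically: if gcLock == 0, set it to 1 and return true.
    return cas(&gcLock, 0, 1);
}

void unlock()
{
    atomicStore(gcLock, 0);
}

void main()
{
    assert(tryLock());   // acquired
    assert(!tryLock());  // second attempt fails while held
    unlock();
    assert(tryLock());   // free again after release
}
```

When such a tiny function cannot be inlined, the frame setup and register shuffling around it can dwarf the atomic instruction itself, which is the complaint above.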
Re: Adding a chocolatey package
On 5/5/2014 4:05 PM, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. Go here (with JS on): http://www.walterbright.com/ Use the "Send email to Walter Bright" link and request permission. He's known to be cool about this sort of thing. IIUC, the whole permission thing is just a formality necessitated by the backend's former life as part of various companies' commercial compilers.
Re: Adding a chocolatey package
On 5/5/2014 4:26 PM, Nick Sabalausky wrote: Use the "Send email to Walter Bright" link and request permission. He's known to be cool about this sort of thing. IIUC, the whole permission thing is just a formality necessitated by the backend's former life as part of various companies' commercial compilers. Ahem, ...necessitated by **DMD's backend's** former life...
Re: Get object address when creating it in for loop
Hi Jonathan, Thanks for your reply. So actually I was getting the pointer of n itself. I now understand what my problem was. The problem was that I did not know that arrays support references to objects, so I thought that I had to fill the array with pointers to objects. But it's great that I do not have to use pointers :) Thanks a lot.
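The point resolved in this thread in code form: in D, class objects have reference semantics, so an array of a class type stores references directly and no explicit pointers are needed:

```d
// Arrays of class objects hold references, not copies.
class Node
{
    int value;
}

void main()
{
    Node[] nodes;
    foreach (i; 0 .. 3)
        nodes ~= new Node;   // each element refers to a distinct object

    auto first = nodes[0];   // copies the reference, not the object
    first.value = 7;
    assert(nodes[0].value == 7);  // same object through both names
    assert(nodes[1].value == 0);  // the others are untouched
}
```

Structs, by contrast, have value semantics; an array of structs would copy elements, which is when pointers (or slices into the array) become relevant.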
Re: Running Phobos unit tests in threads: I have data
On Monday, 5 May 2014 at 17:56:11 UTC, Dicebot wrote: On Saturday, 3 May 2014 at 12:26:13 UTC, Rikki Cattermole wrote: On Saturday, 3 May 2014 at 12:24:59 UTC, Atila Neves wrote: Out of curiosity, are you on Windows? No, Arch Linux 64-bit. I also just noticed a glaring threading bug in my code that has somehow never turned up. This is not a good day. Atila I'm surprised. Threads should be cheap on Linux. Something funky is definitely going on, I bet. Threads are never cheap. On that note, I found this talk interesting: https://www.youtube.com/watch?v=KXuZi9aeGTw
Re: Adding a chocolatey package
On Monday, 5 May 2014 at 20:05:09 UTC, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. The Windows installer does not include DMD in itself, it just downloads the zip file and sets it up. You could go the same way. This way, you don't need to get permission from WB.
Re: Adding a chocolatey package
On 2014-05-05 18:21, Vladimir Panteleev wrote: On Monday, 5 May 2014 at 20:05:09 UTC, Etienne wrote: Are we allowed to put a DMD installer on http://chocolatey.org as a package? I'd be interested in building a bulk installer for DMD + dub + Mono-D but I'm not sure about the licensing terms b/c the dmd back-end is supposedly proprietary. The Windows installer does not include DMD itself; it just downloads the zip file and sets it up. You could go the same way. This way, you don't need to get permission from WB. OK, so if the zip from the dlang.org downloads page is downloaded and unpacked on the computer automatically, it's all good?
Re: The Current Status of DQt
http://forum.dlang.org/thread/wdddgiowaidcojbrk...@forum.dlang.org Worth a reddit announcement tomorrow morning? -- Andrei Tkd is nice, but the exe's memory usage is 6.8~7 MB while DFL's is only 2.8~3 MB, and it's only a single file on Windows 7. https://github.com/Rayerd/dfl, https://github.com/FrankLIKE/dfl
Re: The Current Status of DQt
On 2014-05-04 09:26, w0rp wrote: Qt 4 support basically arises from what is easy to do right now. Supporting Qt 5 doesn't seem that far off. I went with Qt 4 for now because it's easier, and at this stage it's more important to work with something that can actually work and learn from that, than to try and work with something which might not actually work at all. Nice work. I think Qt 4 is a very nice start and, if it's successfully implemented, it can help bring a lot more interest in D from the C++ crowd. I think those people worry mostly about being able to use the same data types and interfaces in a new programming language.